As the founder and head of Brookes’ Visual Artificial Intelligence research group, I have been conducting work at the current boundaries of human action recognition. In just a few years my group has built a leadership position in deep learning for real-time action detection, localisation and recognition, which has produced the best detection accuracies to date and the only system able to localise multiple actions on the image plane in (better than) real time. We are now shifting towards work at the current boundaries of visual AI, such as: (i) the design of new deep learning architectures able to regress whole action tubes in real time; (ii) structured-output DNNs whose outputs are part-based discriminative models; (iii) deep neural video captioning incorporating attention models and prior logical knowledge; and (iv) the creation of a theory of mind for visual AIs. In the past I have proposed spectral embedding techniques for unsupervised 3D segmentation and matching which are highly cited in the field (see Google Scholar metrics). The group has invested heavily in action and activity recognition via discriminative part-based models, in partnership with Oxford’s Torr Vision group, generating an IJCV paper and prizes at VRML and BMVC, and has published results on metric learning for generative dynamical models (cf. the PAMI 2014 article).
I am a recognised leader in the field of uncertainty theory and belief functions. My reputation rests on the formulation of a geometric approach to uncertainty in which probabilities, possibilities, belief measures and random sets are represented and analysed by geometric means. This work is to be published as the Springer monograph “The geometry of uncertainty” in 2017, and appeared as a separate monograph with Lambert in 2014. My work concerns the mathematical properties of non-additive probabilities and their application to decision making under partial or missing data, including the generalisation of the law of total probability, the notions of upper and lower likelihoods, and that of generalised random variables.
Within machine learning my work is directed at understanding the mathematics of deep learning, providing new, robust foundations for statistical learning theory, and developing novel tools based on the theory of random sets, in particular generalisations of the logistic regression framework and of max-entropy classifiers. I have also worked on manifold learning for generative dynamical models, and on the generalisation of bilinear classifiers to the tensorial case in order to model multiple nuisance factors (see EPSRC First Grant). I am part of research consortia that aim to apply machine learning to human-robot interaction (the creation of emotional avatars), factory robotics (the coordination of fleets of automated forklifts), surgical robotics (robotic assistant surgeon arms) and healthcare (home monitoring and the early diagnosis of dementia from accelerometer and video data captured via wearable devices; cf. a recent EPSRC bid and a joint Gait & Posture paper with Prof Dawes).
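For reference, the classical max-entropy (multinomial logistic) classifier that these random-set generalisations take as their starting point can be written as below; the notation (feature functions \(\phi_k\), weights \(w_k\)) is mine and serves only as a reminder of the standard model, not as a description of the generalised framework itself:
\[
  p(y \mid x; \mathbf{w}) \;=\;
  \frac{\exp\big(\textstyle\sum_k w_k\,\phi_k(x,y)\big)}
       {\sum_{y'} \exp\big(\textstyle\sum_k w_k\,\phi_k(x,y')\big)},
\]
the unique distribution of maximum conditional entropy satisfying the feature-expectation constraints
\[
  \mathbb{E}_{p}\big[\phi_k(x,y)\big] \;=\; \frac{1}{N}\sum_{i=1}^N \phi_k(x_i,y_i)
  \qquad \text{for all } k.
\]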
Research group membership
Visual Artificial Intelligence Laboratory (Director): http://cms.brookes.ac.uk/staff/FabioCuzzolin
Torr Vision Group, Oxford University (Associate member): www.robots.ox.ac.uk/~tvg/
Research grants and awards
To date I have attracted external funding for a total of circa £1,300,000 (not fully incorporating the €4.3M Horizon 2020 project SARAS, which I co-wrote with the Coordinator Riccardo Muradore), and internally funded studentships to an equivalent value of £216,000. As head of a research unit I receive £28,000 annually in QR money from the Department, as a result of the group's REF 2014 performance. In 2014-2015 four further bids, worth a total of £1,615,000, reached the final stage or scored very highly but were eventually not funded.
Several grant applications are (as of November 2017) pending, for around £1-2M in total, including a Leverhulme Trust Research Grant at the final, full-proposal stage, a Horizon 2020 COST action, a Horizon 2020 FET bid, and the Expanding Excellence in England AGILE bid. A number of other applications are in preparation, including two Engineering and Physical Sciences Research Council (EPSRC) bids, an H2020 Factories of the Future (FoF) project, and two Innovate UK bids led by Cortexica Vision Systems and AnyVision. More details below.
Past and current funding
- Oxford Brookes University - Central Research Funding (CRF). Uncertainty in Computer Vision (Dec 2008). Travel money for a total of £3,000.
- Oxford Brookes University - “Intelligent Transport Systems" doctoral training programme. Multi Sensor Fusion for Simultaneous Localization and Mapping on Autonomous Vehicles (Jan 2011). Own role: Director of Studies. Period: September 2011 - October 2014. One PhD Studentship.
- EPSRC - First Grant Scheme. Tensorial modeling of dynamical systems for gait and activity recognition (Feb 2011). Own role: Principal Investigator. Budget: £122,000. Period: July 2011 - January 2014. Reviewer ratings: 6/6, 6/6, 5/6, 6/6. The project generated three articles in PAMI, IJCV and IEEE TFS, and a number of follow-up grant bids.
- Oxford Brookes University - Faculty of Technology: Next 10 Programme Award (Oct 2012). Own role: Director of Studies. One PhD Studentship. Period: September 2014 - October 2017.
- Onera - Elsevier - International Society of Information Fusion (ISIF): Sponsorship of BELIEF 2014 (2014). £7,000.
- Oxford Brookes 150th Anniversary Scholarship: Online action recognition for human-robot interaction (July 2014). Own role: Director of Studies. Personnel: one PhD studentship. Period: September 2015 - October 2018.
- Innovate UK: Meta Vision Knowledge Transfer Partnership (Apr 2015). Own role: Academic supervisor. Period: September 2015 - August 2017. Budget: £160,000. Personnel: one KTP associate. In partnership with Meta Vision Ltd.
- NVIDIA: Hardware Grant Request (Oct 2015). Donation: one Titan X GPU card to support the group's work on online action recognition (£650).
- Oxford Brookes - IIT Bombay internship scheme: Action detection and recognition from videos (Jan 2017). Budget: £3,500. Own role: Supervisor.
- Horizon 2020, Call ICT-27-2017: SARAS - Smart Autonomous Robotic Assistant Surgeon (Aug 2017). Duration: 3 years. Coordinator: University of Verona, Italy. Own role: Scientific Officer (SO) and WP Leader. Together with the Coordinator, Riccardo Muradore, I was the main contributor to the success of this bid; in particular, I single-handedly rewrote both Part 2 (impact) and Part 3 (management), without which the project would not have been funded. Budget: €4,315,640 (own share: €596,073).
- Innovate UK - Knowledge Transfer Partnership with Createc (August 2018). Own role: Academic supervisor and Lead Academic. Budget: £190,000. Duration: 2 years. Personnel: one KTP associate.
- Oxford Brookes University, School of Engineering, Computing and Mathematics - Fellowship in AI for autonomous driving (August 2018). Own role: PI. Budget: £100,000. Duration: 2 years.
- Huawei Technologies - Video analysis and activity recognition (April 2019). Own role: PI. Budget: £280,000. Duration: 3 years. Personnel: one postdoctoral researcher, one PhD student.
- UKIERI - Some novel paradigms for analyzing human actions in complex videos (March 2019). Own role: PI for the UK side. Budget: £42,000. Co-PI: Prof Subhasis Chaudhuri, Director of IIT Bombay.
Awards
- 2017 CVPR Charades challenge, 2nd place (with student G. Singh), June 2017 (1st place went to Google DeepMind's Team Kinetics).
- 2016 CVPR ActivityNet action detection challenge, 2nd place (with student G. Singh), June 2016.
- Next 10 Award - Oxford Brookes University - Faculty of Technology (Oct 2012). Research accelerator programme, awarded to the top emerging researchers in the Faculty.
- Outstanding Reviewer Award - British Machine Vision Conference - BMVC 2012 (Sep 2012).
- Short-listed for the Best Paper Award - British Machine Vision Conference (BMVC 2012). For the paper: “Learning discriminative space-time actions from weakly labelled videos".
- Best Poster Prize - INRIA Visual Recognition and Machine Learning Summer School (VRML 2012), July 2012. For the poster: “Learning discriminative space-time actions from weakly labelled videos".
- Best Poster Award - International Symposium on Imprecise Probability: Theories and Applications (ISIPTA’11), July 2011. For the poster: “Geometric conditional belief functions in the belief space".
- Short-listed for the Best Paper Award - ECSQARU 2011, Jun 2011. For the paper “On consistent approximations of belief functions in the mass space".
- Best Paper Award - Pacific Rim International Conference on Artificial Intelligence (PRICAI’08), Dec 2008. For the paper: “Alternative formulations of the theory of evidence based on basic plausibility and commonality assignments".
- Marie Curie fellowship (Sep 2006), in partnership with INRIA Rhône-Alpes, Perception group.
In addition, my PhD student Suman Saha won the reading group prize at the International Computer Vision Summer School (ICVSS 2015). My MSc student Misbah Munir won the OBSEA (Oxford Brookes Social Entrepreneur Awards) Try It Award in February 2017, funding a proof of concept of her work on deep learning for video captioning.
Research projects

Computer Vision
The AI and Vision group, in close collaboration with Oxford University's Torr Vision Group, has multi-year experience in central computer vision topics such as action, gesture and activity recognition, pose estimation, segmentation and matching of articulated bodies, voxelset analysis, and video retrieval. The machine learning techniques employed range from deep learning and convolutional neural networks to hidden Markov models, metric learning, dimensionality reduction, and discriminative part-based models.
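As an illustration of the kind of online action-detection pipeline referred to above, the sketch below greedily links per-frame action detections into space-time "tubes". It is a minimal, self-contained Python example: the detection format, IoU threshold and linking strategy are illustrative assumptions, not the group's actual implementation.

# Minimal sketch: greedy, online linking of per-frame action detections into
# action tubes. Input format and thresholds are illustrative assumptions.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def link_detections(frames, iou_thr=0.3):
    """frames: list over time of detections [(box, score, label), ...].
    Returns a list of tubes, each a dict of per-frame boxes and scores."""
    tubes = []
    for t, dets in enumerate(frames):
        unmatched = list(dets)
        for tube in tubes:
            if tube['end'] != t - 1:          # only extend "live" tubes
                continue
            # candidate detections: same action label, sufficient overlap
            cands = [(iou(tube['boxes'][-1], d[0]), d)
                     for d in unmatched if d[2] == tube['label']]
            cands = [c for c in cands if c[0] >= iou_thr]
            if cands:
                _, det = max(cands, key=lambda c: c[0])
                tube['boxes'].append(det[0])
                tube['scores'].append(det[1])
                tube['end'] = t
                unmatched.remove(det)
        # detections not matched to any existing tube start new tubes
        for box, score, label in unmatched:
            tubes.append({'boxes': [box], 'scores': [score],
                          'label': label, 'start': t, 'end': t})
    return tubes

# Example usage with two frames of dummy detections:
# tubes = link_detections([
#     [((10, 10, 50, 80), 0.9, 'run')],
#     [((12, 11, 52, 82), 0.8, 'run')],
# ])

In a real-time system the per-frame detections would be supplied online by a deep detection network; here they are simply passed in as data.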

Machine Learning
The group is active in machine learning (in particular metric learning for dynamical models, weakly supervised classification, and imprecise dynamical models) and in its application to problems such as big data, gait and daily activity analysis for dementia diagnosis and monitoring, vehicle classification via inductive loops, and activity localisation and recognition.

Uncertainty Theory
Prof Cuzzolin is a world expert in the theory of belief functions, having published two monographs on the topic. His main contribution is a geometric approach to uncertainty theory in which every measure can be represented as a point of an appropriate convex space. The group is active on a number of topics in this field, including: probability and possibility transformation for efficient computation, decision making, the algebra of decision spaces (frames of discernment), and the total belief theorem.
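To give a concrete, textbook-level instance of this geometric picture (the notation below is illustrative rather than a summary of the group's results): on a binary frame \(\Theta = \{x, y\}\), a belief function \(b\) is determined by the pair \((b(x), b(y))\), so the space of all belief functions on \(\Theta\) can be drawn as the triangle
\[
  \mathcal{B}_2 \;=\; \big\{ (b(x),\, b(y)) : b(x) \ge 0,\ b(y) \ge 0,\ b(x) + b(y) \le 1 \big\},
\]
whose vertices are the "certain" belief functions \(b_x = (1,0)\), \(b_y = (0,1)\) and the vacuous belief function \(b_\Theta = (0,0)\). Bayesian (probability) measures form the segment \(b(x) + b(y) = 1\), and transformations and approximations of belief functions become geometric operators on this simplex.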

Artificial Intelligence
Artificial intelligence is already part of our lives. Self-driving cars will be on our roads in less than ten years' time; shops with no checkout, which automatically recognise customers and what they purchase, are already open for business. To enable machines to deal with uncertainty, however, we must fundamentally change the way they learn from the data they observe, so that they can cope with situations they have never encountered in the safest possible way. Interacting naturally with human beings and their complex environments will only be possible if machines are able to put themselves in people's shoes: in other words, to read our minds.

Robotics
The AI and Vision group collaborates very closely with the Cognitive Robotics group, led by Professor Nigel Crook and Dr Matthias Rolf. In particular, we are interested in human-robot interaction, EEG classification coupled with humanoid robots for the creation of emotional robot avatars, and robot assistants for laparoscopic surgery.
The two groups are involved in the Intelligent Transport Systems doctoral training programme, and collaborate with the Autonomous Driving research group led by Dr Andrew Bradley.

E-health
We also collaborate closely with Prof Helen Dawes and her MOReS centre on the application of AI, machine learning and vision to healthcare, including: the early diagnosis of dementia, the diagnosis of diabetes from gait signatures, and the monitoring of people with prolonged disorders of consciousness using multimodal deep learning.
For more information please consult the Visual AI Laboratory web site (see below).
Further information
http://cms.brookes.ac.uk/staff/FabioCuzzolin/