Dr Andrew Bradley
Senior Lecturer
School of Engineering, Computing and Mathematics
Role
Dr Andrew Bradley is a Senior Lecturer in the School of Engineering, Computing and Mathematics.
Research
Centres and institutes
Groups
Projects
- Artificial intelligence for autonomous driving
Projects as Co-investigator
- Innovative sustainable transport solutions (01/01/2023 - 31/12/2026), funded by: Oxfordshire County Council, funding amount received by Brookes: £54,000
- Epistemic AI (01/03/2021 - 28/02/2025), funded by: European Commission, funding amount received by Brookes: £966,236
Publications
Journal articles
- Singh G, Akrigg S, Di Maio M, Fontana V, Javanmard Alitappeh R, Saha S, Jeddisaravi K, Yousefi F, Culley J, Nicholson T, Omokeowa J, Khan S, Grazioso S, Bradley A, Di Gironimo G, Cuzzolin F, 'ROAD: The ROad event Awareness Dataset for Autonomous Driving'
IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (1) (2022) pp.1036-1054
ISSN: 0162-8828 eISSN: 1939-3539
Abstract: Humans approach driving in a holistic fashion which entails, in particular, understanding road events and their evolution. Injecting these capabilities in an autonomous vehicle has thus the potential to take situational awareness and decision making closer to human-level performance. To this purpose, we introduce the ROad event Awareness Dataset (ROAD) for Autonomous Driving, to our knowledge the first of its kind. ROAD is designed to test an autonomous vehicle’s ability to detect road events, defined as triplets composed by a moving agent, the action(s) it performs and the corresponding scene locations. ROAD comprises 22 videos, originally from the Oxford RobotCar Dataset, annotated with bounding boxes showing the location in the image plane of each road event. We propose a number of relevant detection tasks and provide as a baseline a new incremental algorithm for online road event awareness, based on inflating RetinaNet along time. We also report the performance on the ROAD tasks of Slowfast and YOLOv5 detectors, as well as that of the top participants in the ROAD challenge co-located with ICCV 2021. Our baseline results highlight the challenges faced by situation awareness in autonomous driving. Finally, ROAD allows scholars to investigate exciting tasks such as complex (road) activity detection, future road event anticipation and the modelling of sentient road agents in terms of mental states. The dataset is available at https://github.com/gurkirt/road-dataset; the baseline code can be found at https://github.com/gurkirt/3D-RetinaNet.
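The paper above defines a road event as a triplet of a moving agent, the action(s) it performs and the corresponding scene location, annotated frame by frame with bounding boxes. Purely as an illustration, and not the dataset's actual schema (which is documented at https://github.com/gurkirt/road-dataset), such a triplet could be represented along these lines:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch of a ROAD-style event triplet; the real annotation format
# is defined by the dataset repository, not by this example.
@dataclass
class RoadEvent:
    agent: str                 # e.g. "Pedestrian" or "Car"
    actions: List[str]         # e.g. ["Moving", "Crossing"]
    location: str              # scene-location label, e.g. "RightPavement"
    boxes: List[Tuple[int, Tuple[float, float, float, float]]] = field(default_factory=list)
    # each entry: (frame_index, (x1, y1, x2, y2)) in image-plane coordinates

event = RoadEvent(
    agent="Pedestrian",
    actions=["Moving", "Crossing"],
    location="RightPavement",
    boxes=[(120, (0.41, 0.52, 0.47, 0.78))],
)
print(event.agent, event.actions, event.location)
```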
- Fursa I, Fandi E, Musat V, Culley J, Gil E, Teeti I, Bilous L, Sluis IV, Rast A, Bradley A, 'Worsening Perception: Real-time Degradation of Autonomous Vehicle Perception Performance for Simulation of Adverse Weather Conditions'
SAE International Journal of Connected and Automated Vehicles 5 (1) (2022) pp.87-100
ISSN: 2574-0741 eISSN: 2574-075X
Abstract: Autonomous vehicles rely heavily upon their perception subsystems to see the environment in which they operate. Unfortunately, the effect of variable weather conditions presents a significant challenge to object detection algorithms, and thus it is imperative to test the vehicle extensively in all conditions which it may experience. However, development of robust autonomous vehicle subsystems requires repeatable, controlled testing - while real weather is unpredictable and cannot be scheduled. Real-world testing in adverse conditions is an expensive and time-consuming task, often requiring access to specialist facilities. Simulation is commonly relied upon as a substitute, with increasingly visually realistic representations of the real-world being developed. In the context of the complete autonomous vehicle control pipeline, subsystems downstream of perception need to be tested with accurate recreations of the perception system output, rather than focusing on subjective visual realism of the input - whether in simulation or the real world. This study develops the untapped potential of a lightweight weather augmentation method in an autonomous racing vehicle - focusing not on visual accuracy, but rather the effect upon perception subsystem performance in real time. With minimal adjustment, the prototype developed in this study can replicate the effects of water droplets on the camera lens, and fading light conditions. This approach introduces a latency of less than 8 ms using compute hardware well suited to being carried in the vehicle - rendering it ideal for real-time implementation that can be run during experiments in simulation, and augmented reality testing in the real world.
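The following is only a rough sketch of the kind of lightweight, real-time degradation the abstract describes, not the authors' implementation: a few blurred circular patches stand in for droplets on the lens, and a simple intensity scaling stands in for fading light. OpenCV and NumPy are assumed; all parameter values are arbitrary.

```python
import numpy as np
import cv2  # illustrative choice; not the tooling used in the paper

def add_droplets(frame: np.ndarray, n_drops: int = 30, radius: int = 12,
                 seed: int = 0) -> np.ndarray:
    """Blur a handful of circular patches to roughly mimic droplets on the lens."""
    rng = np.random.default_rng(seed)
    out = frame.copy()
    blurred = cv2.GaussianBlur(frame, (31, 31), 0)
    h, w = frame.shape[:2]
    for _ in range(n_drops):
        cx, cy = int(rng.integers(0, w)), int(rng.integers(0, h))
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.circle(mask, (cx, cy), radius, 255, -1)
        out[mask > 0] = blurred[mask > 0]
    return out

def fade_light(frame: np.ndarray, factor: float = 0.5) -> np.ndarray:
    """Scale pixel intensities to roughly mimic fading light."""
    return np.clip(frame.astype(np.float32) * factor, 0, 255).astype(np.uint8)

frame = np.full((480, 640, 3), 180, dtype=np.uint8)   # placeholder camera frame
degraded = fade_light(add_droplets(frame), factor=0.6)
```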
- Garlick S, Bradley A, 'Real-Time Optimal Trajectory Planning for Autonomous Vehicles and Lap Time Simulation Using Machine Learning'
Vehicle System Dynamics 60 (12) (2021) pp.4269-4289
ISSN: 0042-3114 eISSN: 1744-5159
Abstract: Widespread development of driverless vehicles has led to the formation of autonomous racing, where technological development is accelerated by the high speeds and competitive environment of motorsport. A particular challenge for an autonomous vehicle is that of identifying a target trajectory – or, in the case of a competition vehicle, the racing line. Many existing approaches to finding the racing line are either not time-optimal solutions, or are computationally expensive - rendering them unsuitable for real-time application using on-board processing hardware. This study describes a machine learning approach to generating an accurate prediction of the racing line in real-time on desktop processing hardware. The proposed algorithm is a feed-forward neural network, trained using a dataset comprising racing lines for a large number of circuits calculated via traditional optimal control lap time simulation. The network predicts the racing line with a mean absolute error of ±0.27m, and just ±0.11m at corner apex - comparable to human drivers, and autonomous vehicle control subsystems. The approach generates predictions within 33ms, making it over 9,000 times faster than traditional methods of finding the optimal trajectory. Results suggest that for certain applications data-driven approaches to find near-optimal racing lines may be favourable to traditional computational methods.
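As a generic illustration of the idea described above (the actual architecture, feature encoding and training data are those of the paper and are not reproduced here), a feed-forward network mapping a window of track-geometry features to a lateral racing-line offset might be sketched as follows; every dimension below is an arbitrary placeholder.

```python
import torch
import torch.nn as nn

# Illustrative only: sizes and layer choices are arbitrary, not those of the paper.
class RacingLineNet(nn.Module):
    def __init__(self, window: int = 50, features: int = 2, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window * features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, window),   # lateral offset from the centreline per sample point
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, features), e.g. centreline curvature and track width
        return self.net(x.flatten(1))

model = RacingLineNet()
track_window = torch.randn(8, 50, 2)   # dummy batch of track-geometry windows
offsets = model(track_window)          # predicted racing-line offsets, shape (8, 50)
```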
- Fernandez Ferreira M, Bradley A, Toso A, Neighbour G, 'Formula Club-E: The New Class'
Benchmark Mineral Intelligence 1 (2016) pp.60-67
Abstract: Formula E is making a stand at the highest level of motorsport – but what is happening in the lower formulae?
- Esser P, Dent S, Jones C, Sheridan BJ, Bradley A, Wade DT, Dawes HT, 'Utility of the MOCA as a cognitive predictor for fitness to drive'
Journal of Neurology, Neurosurgery and Psychiatry 87 (5) (2015) pp.567-568
ISSN: 0022-3050 eISSN: 1468-330X
Abstract: Determining fitness to drive is a major concern affecting aging and disabled populations, particularly concerning reduced cognitive functioning, functional limitations and reduced vision [1, 2]. The Royal Society for the Prevention of Accidents encourages aging drivers to maintain their licence (for independence, mobility and quality of life), emphasising that prematurely removing someone’s driving licence negatively affects their quality of life - the consequences of which outweigh the chance of being involved in a collision, for both the driver and the remainder of society [3].
The gold standard test in the United Kingdom (UK) to determine the ability to drive is an on-road driving assessment, and clinicians have the opportunity to refer patients to an independent Mobility Centre (accredited by Driving Mobility) where an assessment will be performed based upon on-road driving experience as judged by a professional driving instructor and occupational therapist [4]. The assessment is resource expensive and only a limited number of individuals are referred. To date no screening test is clinically implemented in the UK which accurately determines fitness to drive [4].
This study sets out to evaluate the potential of the Montreal Cognitive Assessment (MOCA) as a screening tool for people with concerns regarding cognitive capacity, to determine pass/fail cut-offs for on-road driving assessment.
- Colunga IF, Bradley A, 'Modelling of transient cornering and suspension dynamics, and investigation into the control strategies for an ideal driver in a lap time simulator'
Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 228 (10) (2014) pp.1185-1199
ISSN: 0954-4070 eISSN: 2041-2991
Abstract: Lap time simulation is one of the most powerful tools for evaluating design proposals in motorsport engineering. In particular, transient simulations play an important role as the ultimate and more accurate approach than other static or quasi-steady-state methodologies. In this paper, first, the method to transform the differential equations of a system into a formally linear continuous and then discrete state-space representation, particularised for a seven-degree-of-freedom suspension and a transient cornering model, is proposed. The use of time-variant coefficients in the matrices of the model will allow the non-linear and time-variant characteristics of these systems to be described. Second, in the case of the transient cornering model, this representation is translated into a transfer function in order to apply a discrete control strategy such as a finite-time strategy or a predictive strategy for an adaptive ideal driver. It was found that the calculation methodology described above can be successfully applied with a more than acceptable degree of accuracy according to comparison with the results using other mechanical or numerical software (ADAMS or Simulink). It was also observed that the use of the curvature of the track as a reference in the control closed loop is sufficiently accurate to force the car to follow the target path closely. Furthermore, both predictive control and finite-time control (including integration) provide excellent results with a smooth response of the steering input. Many lap time simulators are language specific; however, the methodology proposed in this paper will run in most programming languages.
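For readers unfamiliar with the technique, the continuous-to-discrete state-space step referred to in the abstract follows the standard zero-order-hold form (shown here in generic notation rather than the paper's specific seven-degree-of-freedom matrices):

```latex
\dot{\mathbf{x}}(t) = A\,\mathbf{x}(t) + B\,\mathbf{u}(t)
\;\;\Longrightarrow\;\;
\mathbf{x}_{k+1} = A_d\,\mathbf{x}_k + B_d\,\mathbf{u}_k,
\qquad
A_d = e^{A T_s},\quad
B_d = \Big(\int_0^{T_s} e^{A\tau}\,\mathrm{d}\tau\Big) B,
```

where $T_s$ is the simulation time step; with time-variant coefficients, $A$ and $B$ are re-evaluated at each step, as the abstract describes.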
- Fraser GW, Carpenter JD, Rothery DA, Pearson JF, Martindale A, Huovelin J, Treis J, Anand M, Anttila M, Ashcroft M, Benkoff J, Bland P, Bowyer A, Bradley A, Bridges J, Brown C, Bulloch C, Bunce EJ, Christensen U, Evans M, Fairbend R, Feasey M, Giannini F, Hermann S, Hesse M, Hilchenbach M, Jorden T, Joy KH, Kaipiainen M, Kitchingman I, Lechner P, Lutz G, Malkki A, Muinonen K, Naranen J, Portin P, Prydderch M, San Juan J, Sclater E, Schyns E, Stevenson TJ, Struder L, Syrjasuo M, Talboys D, Thomas P, Whitford C, Whitehead S, 'The mercury imaging X-ray spectrometer (MIXS) on bepicolombo'
Planetary and Space Science 58 (1-2) (2010) pp.79-95
ISSN: 0032-0633 eISSN: 1873-5088
Abstract: MIXS measurements of surface elemental composition will help determine rock types, the evolution of the surface and ultimately a probable formation process for the planet. In this paper we present MIXS and its predicted performance at Mercury as well as discussing the role that MIXS measurements will play in answering the major questions about Mercury. (C) 2009 Elsevier Ltd. All rights reserved.
Conference papers
- Teeti I, Bhargav R, Singh V, Bradley A, Banerjee B, Cuzzolin F, 'Temporal DINO: A Self-supervised Video Strategy to Enhance Action Prediction'
(2023) pp.3273-3283
ISSN: 2473-9944 ISBN: 9798350307443
Abstract: The emerging field of action prediction - the task of forecasting action in a video sequence - plays a vital role in various computer vision applications such as autonomous driving, activity analysis and human-computer interaction. Despite significant advancements, accurately predicting future actions remains a challenging problem due to high dimensionality, complex dynamics and uncertainties inherent in video data. Traditional supervised approaches require large amounts of labelled data, which is expensive and time-consuming to obtain. This paper introduces a novel self-supervised video strategy for enhancing action prediction inspired by DINO (self-distillation with no labels). The approach, named Temporal-DINO, employs two models: a ‘student’ processing past frames, and a ‘teacher’ processing both past and future frames, enabling a broader temporal context. During training, the teacher guides the student to learn future context by only observing past frames. The strategy is evaluated on the ROAD dataset for the action prediction downstream task using 3D-ResNet, Transformer, and LSTM architectures. The experimental results showcase significant improvements in prediction performance across these architectures, with our method achieving an average enhancement of 9.9% Precision Points (PP), which highlights its effectiveness in enhancing the backbones’ capabilities of capturing long-term dependencies. Furthermore, our approach demonstrates efficiency in terms of the pretraining dataset size and the number of epochs required. This method overcomes limitations present in other approaches, including the consideration of various backbone architectures, addressing multiple prediction horizons, reducing reliance on hand-crafted augmentations, and streamlining the pretraining process into a single stage. These findings highlight the potential of our approach in diverse video-based tasks such as activity recognition, motion planning, and scene understanding. Code can be found at https://github.com/IzzeddinTeeti/ssl_pred.
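As a rough sketch of the teacher-student idea summarised above (not the released implementation linked in the abstract): a 'student' encoder sees only past frames, a 'teacher' encoder sees past plus future frames, the student is trained to match the teacher's output, and the teacher is updated as an exponential moving average of the student. Every module, dimension and hyperparameter below is a placeholder.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVideoEncoder(nn.Module):
    """Placeholder encoder: average frames over time, then project.
    A real backbone (3D-ResNet, Transformer, LSTM, ...) would take its place."""
    def __init__(self, in_dim: int = 3 * 32 * 32, out_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:  # clip: (B, T, C, H, W)
        return self.proj(clip.mean(dim=1).flatten(1))

student = TinyVideoEncoder()
teacher = copy.deepcopy(student)              # teacher starts as a copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)                   # teacher is never updated by gradients

optimiser = torch.optim.AdamW(student.parameters(), lr=1e-4)
past = torch.randn(4, 8, 3, 32, 32)           # dummy batch of past frames
future = torch.randn(4, 4, 3, 32, 32)         # dummy batch of future frames

student_out = student(past)                                    # student sees the past only
with torch.no_grad():
    teacher_out = teacher(torch.cat([past, future], dim=1))    # teacher sees past + future
loss = 1 - F.cosine_similarity(student_out, teacher_out, dim=-1).mean()

optimiser.zero_grad()
loss.backward()
optimiser.step()

with torch.no_grad():                          # exponential-moving-average teacher update
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(0.996).add_(s_p, alpha=0.004)
```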
- Zhu H, Tran TMT, Benjumea S, Bradley A, 'A Scenario-Based Functional Testing Approach to Improving DNN Performance'
SOSE 2023 (2023) pp.199-207
ISSN: 2835-3307 eISSN: 2835-3161 ISBN: 9798350322392
Abstract: This paper proposes a scenario-based functional testing approach for enhancing the performance of machine learning (ML) applications. The proposed method is an iterative process that starts with testing the ML model on various scenarios to identify areas of weakness. This is followed by further testing on the suspected weak scenarios and a statistical evaluation of the model’s performance on those scenarios to confirm the diagnosis. Once the diagnosis of weak scenarios is confirmed by test results, the treatment of the model is performed by retraining the model using a transfer learning technique with the original model as the base, applying a set of training data specifically targeting the treated scenarios plus a subset of training data selected at random from the original training dataset to prevent the so-called catastrophic forgetting effect. Finally, after the treatment, the model is assessed and evaluated again by testing on the treated scenarios as well as other scenarios to check whether the treatment is effective and no side-effects have been caused. The paper reports a case study with a real ML deep neural network (DNN) model, which is the perception system of an autonomous racing car. It is demonstrated that the method is effective in the sense that the DNN model’s performance can be improved. It provides an efficient method of enhancing an ML model’s performance with far less human and compute resource than retraining from scratch.
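Schematically, the iterative test, diagnose, treat and re-test loop described in the abstract can be pictured as below; every function, scenario name and threshold here is a hypothetical stand-in, not part of the paper's tooling.

```python
import random

# Hypothetical schematic of a test -> diagnose -> treat -> re-test loop;
# none of these functions come from the paper or its released code.

def evaluate(model, scenarios):
    """Return a per-scenario score (dummy random numbers stand in for real test results)."""
    return {s: random.uniform(0.5, 1.0) for s in scenarios}

def retrain(model, weak_scenarios, replay_fraction=0.2):
    """Fine-tune on data targeting the weak scenarios plus a random replay subset of the
    original training data, to limit catastrophic forgetting."""
    print(f"fine-tuning on {weak_scenarios} with {replay_fraction:.0%} replay data")
    return model

model = object()                            # stand-in for the DNN under test
scenarios = ["sunny", "rain", "night", "glare"]
threshold = 0.8

for _ in range(3):                          # a few diagnose-and-treat iterations
    scores = evaluate(model, scenarios)     # test on every scenario
    weak = [s for s, score in scores.items() if score < threshold]
    if not weak:
        break                               # no weak scenarios confirmed: stop treating
    model = retrain(model, weak)            # "treatment" via transfer learning
    scores = evaluate(model, scenarios)     # re-test treated and untreated scenarios for side-effects
```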
- Iqbal S, Ball P, Kamarudin MH, Bradley A, 'Simulating Malicious Attacks on VANETs for Connected and Autonomous Vehicle Cybersecurity: A Machine Learning Dataset'
(2022)
Abstract: Connected and Autonomous Vehicles (CAVs) rely on Vehicular Ad-hoc Networks (VANETs) with wireless communication between vehicles and roadside infrastructure to support safe operation. However, cybersecurity attacks pose a threat to VANETs and the safe operation of CAVs. This study proposes the use of simulation for modelling typical communication scenarios which may be subject to malicious attacks. The Eclipse MOSAIC simulation framework is used to model two typical road scenarios, including messaging between the vehicles and infrastructure - and both replay and bogus information cybersecurity attacks are introduced. The model demonstrates the impact of these attacks, and provides an open dataset to inform the development of machine learning algorithms to provide anomaly detection and mitigation solutions for enhancing secure communications and safe deployment of CAVs on the road.
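As a purely hypothetical illustration of the kind of anomaly-detection baseline such a dataset is intended to support (the features, values and model choice below are invented, not taken from the paper or its released dataset):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical baseline, not the paper's method: fit an unsupervised detector on
# "normal" message features, then flag messages that deviate (e.g. bogus-information attacks).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[0.1, -70.0, 13.0], scale=[0.02, 3.0, 2.0], size=(1000, 3))
bogus = rng.normal(loc=[0.1, -70.0, 60.0], scale=[0.02, 3.0, 2.0], size=(20, 3))
messages = np.vstack([normal, bogus])   # columns: inter-arrival time, RSSI, reported speed

detector = IsolationForest(contamination=0.02, random_state=0).fit(normal)
flags = detector.predict(messages)      # -1 = flagged as anomalous
print(f"flagged {np.sum(flags == -1)} of {len(messages)} messages")
```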
- Kanacı A, Teeti I, Bradley A, Cuzzolin F, 'Self-Supervised Pretraining for Object Detection in Autonomous Driving'
(2022)
Abstract: The detection of road agents, such as vehicles and pedestrians, is central in autonomous driving. Self-Supervised Learning (SSL) has been proven to be an effective technique for learning discriminative feature representations for image classification, alleviating the need for labels - a remarkable advancement considering how time-consuming and expensive labelling can be in autonomous driving. In this paper, we investigate the effectiveness of contrastive SSL techniques such as BYOL and MoCo on the object (agent) detection task using the ROad event Awareness Dataset (ROAD). Our experiments show that using self-supervised learning we can achieve a 3.96% improvement on the AP@50 metric for agent detection compared to supervised pretraining. Extensive comparisons and evaluations of current state-of-the-art SSL methods (namely MoCo, BYOL, SCRL) are conducted and reported for the object detection task.
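For context, the family of self-supervised objectives compared in the paper is built around matching embeddings of two augmented views of the same image; a generic InfoNCE-style contrastive loss (illustrative only, not the specific BYOL/MoCo/SCRL formulations evaluated) looks like this:

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Generic InfoNCE loss: z1[i] and z2[i] are embeddings of two augmentations
    of the same image; every other pairing in the batch acts as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature        # (B, B) cosine-similarity logits
    targets = torch.arange(z1.size(0))        # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(32, 128), torch.randn(32, 128)   # dummy embeddings of two views
print(info_nce(z1, z2).item())
```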
- Teeti I, Shahbaz A, Khan S, Bradley A, Cuzzolin F, 'Vision-based Intention and Trajectory Prediction in Autonomous Vehicles: A Survey'
(2022) pp.5630-5637
Abstract: This survey targets intention and trajectory prediction in Autonomous Vehicles (AV), as AV companies compete to create dedicated prediction pipelines to avoid collisions. The survey starts with a formal definition of the prediction problem and highlights its challenges, to then critically compare the models proposed in the last 2-3 years in terms of how they overcome these challenges. Further, it lists the latest methodological and technical trends in the field and comments on the efficacy of different machine learning blocks in modelling various aspects of the prediction problem. It also summarises the popular datasets and metrics used to evaluate prediction models, before concluding with the possible research gaps and future directions.
- Teeti I, Musat V, Khan S, Rast A, Cuzzolin F, Bradley A, 'Vision in Adverse Weather: Augmentation Using CycleGANs with Various Object Detectors for Robust Perception in Autonomous Racing'
(2022)
Abstract: In an autonomous driving system, perception - identification of features and objects from the environment - is crucial. Autonomous racing, in particular, features high speeds and small margins that demand rapid and accurate perception systems. During the race, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres. In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions - the collection of which is a protracted and costly process. However, recent developments in Cycle-GAN architectures allow the synthesis of highly realistic scenes in multiple weather conditions. To this end, we introduce an approach of using synthesised adverse condition datasets in autonomous racing (generated using CycleGAN) to improve the performance of four out of five state-of-the-art detectors by an average of 42.7 and 4.4 mean average precision (mAP) percentage points in the presence of night-time conditions and droplets, respectively. Furthermore, we present a comparative analysis of five object detectors - identifying the optimal pairing of detector and training data for use during autonomous racing in challenging conditions.
- Musat V, Fursa I, Newman P, Cuzzolin F, Bradley A, 'Multi-weather city: Adverse weather stacking for autonomous driving'
(2021)
Abstract: Autonomous vehicles make use of sensors to perceive the world around them, with heavy reliance on vision-based sensors such as RGB cameras. Unfortunately, since these sensors are affected by adverse weather, perception pipelines require extensive training on visual data under harsh conditions in order to improve the robustness of downstream tasks - data that is difficult and expensive to acquire. Based on GAN and CycleGAN architectures, we propose an overall (modular) architecture for constructing datasets, which allows one to add, swap out and combine components in order to generate images with diverse weather conditions. Starting from a single dataset with ground-truth, we generate 7 versions of the same data in diverse weather, and propose an extension to augment the generated conditions, thus resulting in a total of 14 adverse weather conditions, requiring a single ground truth. We test the quality of the generated conditions both in terms of perceptual quality and suitability for training downstream tasks, using real world, out-of-distribution adverse weather extracted from various datasets. We show improvements in both object detection and instance segmentation across all conditions, in many cases exceeding 10 percentage points increase in AP, and provide the materials and instructions needed to re-construct the multi-weather dataset, based upon the original Cityscapes dataset.
- Culley J, Garlick S, Gil Esteller E, Georgiev P, Fursa I, Vander Sluis I, Ball P, Bradley A, 'System Design for a Driverless Autonomous Racing Vehicle'
(2020)
Abstract: The rising popularity of autonomous vehicles has led to the development of driverless racing cars, where the competitive nature of motorsport has the potential to drive innovations in autonomous vehicle technology. The challenge of racing requires the sensors, object detection and vehicle control systems to work together at the highest possible speed and computational efficiency. This paper describes an autonomous driving system for a self-driving racing vehicle application using a modest sensor suite coupled with accessible processing hardware, with an object detection system capable of a frame rate of 25fps, and a mean average precision of 92%. A modelling tool is developed in open-source software for real-time dynamic simulation of the autonomous vehicle and associated sensors, which is fully interchangeable with the real vehicle. The simulator provides performance metrics, which enables accelerated and enhanced quantitative analysis, tuning and optimisation of the autonomous control system algorithms. A design study demonstrates the ability of the simulation to assist in control system parameter tuning - resulting in a 12% reduction in lap time, and an average velocity of 25 km/h - indicating the value of using simulation for the optimisation of multiple parameters in the autonomous control system.
- Pollock I, Alshaigy B, Bradley A, Krogstie BR, Kumar V, Ott L, Peters AK, Riedesel C, Wallace C, '1.5 Degrees of Separation: Computer Science Education in the Age of the Anthropocene'
(2019)
ISBN: 9781450375672
Abstract: Climate change is the defining challenge now facing our planet. Limiting global warming to 1.5 degrees, as advocated by the Intergovernmental Panel on Climate Change, requires rapid, far-reaching, and unprecedented changes in how governments, industries, and societies function by 2030. Computer Science plays an important role in these efforts, both in providing tools for greater understanding of climate science and in reducing the environmental costs of computing. It is vital for Computer Science students to understand how their chosen field can both exacerbate and mitigate the problem of climate change.
We have reviewed the existing literature, interviewed leading experts, and held conversations at the ITiCSE 2019 conference, to identify how universities, departments, and CS educators can most effectively address climate change within Computer Science education. We find that the level of engagement with the issue is still low, and we discuss obstacles at the level of institutional, program and departmental support as well as faculty and student attitudes. We also report on successful efforts to date, and we identify responses, strategies, seed ideas, and resources to assist educators as they prepare their students for a world shaped by climate change.
- Mikler J, Camacho S, Bradley A, Gonzalez-Mancera A, 'Multibody Simulation of an Electric Go-Kart: Influence of Powertrain Weight Distribution on Dynamic Performance'
(2019)
Abstract: A multibody model of an electric go-kart was developed in MSC Adams Car software to simulate the vehicle’s dynamic performance. In contrast to an ICE kart, its electric counterpart bears an extra weight load to account for the batteries and other powertrain components. The model is inspired by a prototype vehicle developed at Universidad de los Andes. The prototype was built on top of an ICE frame where a PMAC motor, controller, battery pack and the subsequent powertrain components were installed. A petrol-based go-kart weight distribution was defined as the baseline and several variants of the electric adaptation with different weight distributions were constructed. The main objective of the model is to evaluate different configurations and identify the ones that can give performance advantages. Step steer simulations run at 40 km/h were analyzed to assess the dynamic performance of the vehicle for different configurations of the battery bank placement.
For most iterations of powertrain location, considerable differences in dynamic response were obtained and the handling balance was identified as understeer, contrary to a priori thoughts. Understeer gradient, weight distribution for both axles, and trajectory, among other results of interest, were observed in the simulations. The model allowed us to showcase the effect of redistribution of weight on the dynamic behavior in this specific application. Among the main consequences lies the fact that battery distribution can affect the lifting of the inner rear tire and the detriment in turning effectiveness.
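For context on the handling-balance terminology above, the understeer gradient in its standard linear, steady-state form (a textbook definition, not a value taken from the paper) is:

```latex
K_{us} = \frac{W_f}{C_{\alpha f}} - \frac{W_r}{C_{\alpha r}},
```

where $W_f$ and $W_r$ are the static front and rear axle loads and $C_{\alpha f}$, $C_{\alpha r}$ the corresponding axle cornering stiffnesses; $K_{us} > 0$ indicates understeer and $K_{us} < 0$ oversteer.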
Other publications
- Benjumea A, Teeti I, Cuzzolin F, Bradley A, 'YOLO-Z: Improving small object detection in YOLOv5 for autonomous vehicles', (2021)
Abstract: As autonomous vehicles and autonomous racing rise in popularity, so does the need for faster and more accurate detectors. While our naked eyes are able to extract contextual information almost instantly, even from far away, image resolution and computational resources limitations make detecting smaller objects (that is, objects that occupy a small pixel area in the input image) a truly challenging task for machines and a wide open research field. This study explores ways in which the popular YOLOv5 object detector can be modified to improve its performance in detecting smaller objects, with a particular focus on its application to autonomous racing. To achieve this, we investigate how replacing certain structural elements of the model (as well as their connections and other parameters) can affect performance and inference time. In doing so, we propose a series of models at different scales, which we name ‘YOLO-Z’, and which display an improvement of up to 6.9% in mAP when detecting smaller objects at 50% IOU, at a cost of just a 3ms increase in inference time compared to the original YOLOv5. Our objective is not only to inform future research on the potential of adjusting a popular detector such as YOLOv5 to address specific tasks, but also to provide insights on how specific changes can impact small object detection. Such findings, applied to the wider context of autonomous vehicles, could increase the amount of contextual information available to such systems.
- Yassine N, Hayatleh K, Choubey B, Barker S, Bradley A, 'Driver Fatigue Monitoring Using Blink Rate Detection', (2017)
- Bradley A, 'Vehicle dynamic analysis using a computer vision system', (2015)