School of Engineering, Computing and Mathematics
Faculty of Technology, Design and Environment
Humans approach driving in a holistic fashion which entails, in particular, understanding road events and their evolution. Injecting these capabilities into an autonomous vehicle thus has the potential to take situational awareness and decision making closer to human-level performance. To this purpose, we introduce the ROad event Awareness Dataset (ROAD) for Autonomous Driving, to our knowledge the first of its kind. ROAD is designed to test an autonomous vehicle’s ability to detect road events, defined as triplets composed of a moving agent, the action(s) it performs and the corresponding scene locations. ROAD comprises 22 videos, originally from the Oxford RobotCar Dataset, annotated with bounding boxes showing the location in the image plane of each road event. We propose a number of relevant detection tasks and provide as a baseline a new incremental algorithm for online road event awareness, based on inflating RetinaNet along time. We also report the performance on the ROAD tasks of SlowFast and YOLOv5 detectors, as well as that of the top participants in the ROAD challenge co-located with ICCV 2021. Our baseline results highlight the challenges faced by situational awareness in autonomous driving. Finally, ROAD allows scholars to investigate exciting tasks such as complex (road) activity detection, future road event anticipation and the modelling of sentient road agents in terms of mental states. The dataset is available at https://github.com/gurkirt/road-dataset; the baseline code can be found at https://github.com/gurkirt/3D-RetinaNet.
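To make the notion of a road event concrete, the sketch below shows one hypothetical way of representing an agent/action/location triplet as a tube of frame-level bounding boxes in Python. The field names and structure are illustrative assumptions only; the actual ROAD annotation schema is defined in the dataset repository linked above.

```python
# Illustrative sketch only: the real ROAD annotation schema lives in the dataset
# repository (https://github.com/gurkirt/road-dataset). The field names below are
# hypothetical and chosen simply to show the agent/action/location triplet idea.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RoadEvent:
    agent: str                       # e.g. "Pedestrian", "Car", "Cyclist"
    actions: List[str]               # e.g. ["MovingAway", "TurningLeft"]
    locations: List[str]             # scene locations, e.g. ["VehicleLane"]
    # one (frame_index, (x1, y1, x2, y2)) box per annotated frame, forming a tube
    tube: List[Tuple[int, Tuple[float, float, float, float]]] = field(default_factory=list)

event = RoadEvent(
    agent="Pedestrian",
    actions=["MovingTowards"],
    locations=["Pavement"],
    tube=[(0, (120.0, 200.0, 160.0, 320.0)), (1, (122.0, 201.0, 162.0, 322.0))],
)
print(event.agent, event.actions, event.locations, len(event.tube))
```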
Autonomous vehicles rely heavily upon their perception subsystems to see the environment in which they operate. Unfortunately, the effect of variable weather conditions presents a significant challenge to object detection algorithms, and thus it is imperative to test the vehicle extensively in all conditions which it may experience. However, development of robust autonomous vehicle subsystems requires repeatable, controlled testing - while real weather is unpredictable and cannot be scheduled. Real-world testing in adverse conditions is an expensive and time-consuming task, often requiring access to specialist facilities. Simulation is commonly relied upon as a substitute, with increasingly visually realistic representations of the real-world being developed. In the context of the complete autonomous vehicle control pipeline, subsystems downstream of perception need to be tested with accurate recreations of the perception system output, rather than focusing on subjective visual realism of the input - whether in simulation or the real world. This study exploits the untapped potential of a lightweight weather augmentation method on an autonomous racing vehicle - focusing not on visual accuracy, but rather on the effect upon perception subsystem performance in real time. With minimal adjustment, the prototype developed in this study can replicate the effects of water droplets on the camera lens, and fading light conditions. This approach introduces a latency of less than 8 ms using compute hardware well suited to being carried in the vehicle - rendering it ideal for real-time implementation that can be run during experiments in simulation, and augmented reality testing in the real world.
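As a rough illustration of what such a lightweight, per-frame augmentation might look like, the following Python/OpenCV sketch approximates lens droplets and fading light with cheap array operations. It is an assumption-laden stand-in for illustration, not the study's prototype.

```python
# Minimal sketch, not the paper's implementation: approximates two of the effects
# described above (water droplets on the lens and fading light) with cheap per-frame
# NumPy/OpenCV operations, so it could run inside a perception loop.
import cv2
import numpy as np

def add_lens_droplets(frame: np.ndarray, n_drops: int = 20, rng=np.random) -> np.ndarray:
    """Blur small circular regions to mimic out-of-focus droplets on the lens."""
    out = frame.copy()
    h, w = frame.shape[:2]
    blurred = cv2.GaussianBlur(frame, (21, 21), 0)
    for _ in range(n_drops):
        x, y = int(rng.randint(0, w)), int(rng.randint(0, h))
        r = int(rng.randint(5, 20))
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.circle(mask, (x, y), r, 255, -1)
        out[mask > 0] = blurred[mask > 0]
    return out

def fade_light(frame: np.ndarray, factor: float = 0.4) -> np.ndarray:
    """Scale brightness down to mimic fading light conditions."""
    return np.clip(frame.astype(np.float32) * factor, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    img = np.full((480, 640, 3), 180, dtype=np.uint8)  # placeholder camera frame
    augmented = fade_light(add_lens_droplets(img), factor=0.5)
    print(augmented.shape, augmented.dtype)
```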
Widespread development of driverless vehicles has led to the formation of autonomous racing, where technological development is accelerated by the high speeds and competitive environment of motorsport. A particular challenge for an autonomous vehicle is that of identifying a target trajectory – or, in the case of a competition vehicle, the racing line. Many existing approaches to finding the racing line are either not time-optimal solutions, or are computationally expensive - rendering them unsuitable for real-time application using on-board processing hardware. This study describes a machine learning approach to generating an accurate prediction of the racing line in real-time on desktop processing hardware. The proposed algorithm is a feed-forward neural network, trained using a dataset comprising racing lines for a large number of circuits calculated via traditional optimal control lap time simulation. The network predicts the racing line with a mean absolute error of ±0.27 m, and just ±0.11 m at corner apex - comparable to human drivers, and autonomous vehicle control subsystems. The approach generates predictions within 33 ms, making it over 9,000 times faster than traditional methods of finding the optimal trajectory. Results suggest that for certain applications data-driven approaches to finding near-optimal racing lines may be preferable to traditional computational methods.
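As an illustration of the kind of model the abstract describes, the PyTorch sketch below shows a small feed-forward network mapping a window of track-geometry features to a lateral racing-line offset. The input encoding, layer sizes and output convention are placeholder assumptions, not the published architecture.

```python
# Hedged sketch: the paper's exact architecture and input encoding are not given here.
# This shows the general idea of a feed-forward network that maps a window of track
# geometry (e.g. local curvature and width samples) to a lateral offset of the racing
# line from the centreline; all dimensions below are placeholder assumptions.
import torch
import torch.nn as nn

class RacingLineMLP(nn.Module):
    def __init__(self, n_samples: int = 50, hidden: int = 256):
        super().__init__()
        # 2 features per centreline sample: curvature and track width (assumed encoding)
        self.net = nn.Sequential(
            nn.Linear(2 * n_samples, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted lateral offset at the query point [m]
        )

    def forward(self, geometry: torch.Tensor) -> torch.Tensor:
        return self.net(geometry)

model = RacingLineMLP()
batch = torch.randn(8, 100)   # 8 track windows of 50 samples x 2 features
offsets = model(batch)        # shape (8, 1)
print(offsets.shape)
```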
Determining fitness to drive is a major concern affecting aging and disabled populations, particularly with regard to reduced cognitive functioning, functional limitations and reduced vision [1, 2]. The Royal Society for the Prevention of Accidents encourages aging drivers to maintain their licence (for independence, mobility and quality of life), emphasising that prematurely removing someone’s driving licence negatively affects their quality of life - the consequences of which outweigh the chance of being involved in a collision, for both the driver and the rest of society.
The gold standard test in the United Kingdom (UK) to determine the ability to drive is an on-road driving assessment, and clinicians have the opportunity to refer patients to an independent Mobility Centre (accredited by Driving Mobility), where an assessment is performed based upon on-road driving as judged by a professional driving instructor and an occupational therapist. The assessment is resource-intensive and only a limited number of individuals are referred. To date, no screening test that accurately determines fitness to drive has been implemented clinically in the UK.
This study sets out to evaluate the potential of the Montreal Cognitive Assessment (MoCA) as a screening tool for people with concerns regarding cognitive capacity, and to determine pass/fail cut-offs for the on-road driving assessment.
Autonomous vehicles make use of sensors to perceive the world around them, with heavy reliance on vision-based sensors such as RGB cameras. Unfortunately, since these sensors are affected by adverse weather, perception pipelines require extensive training on visual data under harsh conditions in order to improve the robustness of downstream tasks - data that is difficult and expensive to acquire. Based on GAN and CycleGAN architectures, we propose an overall (modular) architecture for constructing datasets, which allows one to add, swap out and combine components in order to generate images with diverse weather conditions. Starting from a single dataset with ground truth, we generate 7 versions of the same data in diverse weather, and propose an extension to augment the generated conditions, thus resulting in a total of 14 adverse weather conditions requiring a single ground truth. We test the quality of the generated conditions both in terms of perceptual quality and suitability for training downstream tasks, using real-world, out-of-distribution adverse weather extracted from various datasets. We show improvements in both object detection and instance segmentation across all conditions, in many cases exceeding a 10 percentage point increase in AP, and provide the materials and instructions needed to re-construct the multi-weather dataset, based upon the original Cityscapes dataset.
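The modular idea of adding, swapping and combining weather components can be illustrated with a simple composition pipeline, sketched below in Python. The translators here are trivial placeholders standing in for trained CycleGAN generators, and the interface is an assumption rather than the paper's code.

```python
# Illustrative sketch of the modular idea described above: each weather condition is a
# swappable image-to-image component, and components can be chained to combine
# conditions. The translators are placeholders standing in for trained CycleGAN
# generators; names and composition API are assumptions, not the paper's code.
from typing import Callable, Dict, List
import numpy as np

Translator = Callable[[np.ndarray], np.ndarray]

def make_pipeline(components: Dict[str, Translator], order: List[str]) -> Translator:
    """Compose selected weather components into a single image transform."""
    def pipeline(image: np.ndarray) -> np.ndarray:
        for name in order:
            image = components[name](image)
        return image
    return pipeline

# Placeholder "generators": real ones would be trained CycleGAN models.
components: Dict[str, Translator] = {
    "rain":  lambda img: np.clip(img * 0.9, 0, 255).astype(np.uint8),
    "fog":   lambda img: np.clip(0.7 * img + 0.3 * 255, 0, 255).astype(np.uint8),
    "night": lambda img: np.clip(img * 0.3, 0, 255).astype(np.uint8),
}

clear_image = np.full((256, 512, 3), 128, dtype=np.uint8)
rain_at_night = make_pipeline(components, ["rain", "night"])(clear_image)
print(rain_at_night.mean())
```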
The rising popularity of autonomous vehicles has led to the development of driverless racing cars, where the competitive nature of motorsport has the potential to drive innovations in autonomous vehicle technology. The challenge of racing requires the sensors, object detection and vehicle control systems to work together at the highest possible speed and computational efficiency. This paper describes an autonomous driving system for a self-driving racing vehicle application using a modest sensor suite coupled with accessible processing hardware, with an object detection system capable of a frame rate of 25 fps and a mean average precision of 92%. A modelling tool is developed in open-source software for real-time dynamic simulation of the autonomous vehicle and associated sensors, which is fully interchangeable with the real vehicle. The simulator provides performance metrics, which enables accelerated and enhanced quantitative analysis, tuning and optimisation of the autonomous control system algorithms. A design study demonstrates the ability of the simulation to assist in control system parameter tuning - resulting in a 12% reduction in lap time, and an average velocity of 25 km/h - indicating the value of using simulation for the optimisation of multiple parameters in the autonomous control system.
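The value of such a simulator for parameter tuning can be illustrated with a small sweep harness, sketched below in Python. The simulate_lap function is a hypothetical stand-in for the real-time simulation described above, with a toy surrogate lap-time model used purely so the example runs.

```python
# Sketch of the kind of parameter sweep the simulator enables: run the simulated vehicle
# for each candidate controller setting and keep the one with the lowest lap time. The
# `simulate_lap` interface is a hypothetical stand-in for the open-source modelling tool.
import itertools

def simulate_lap(lookahead_m: float, speed_gain: float) -> float:
    """Placeholder: would run the vehicle/sensor simulation and return lap time [s]."""
    # Toy surrogate with a minimum near (lookahead=5.0, gain=1.2), purely for illustration.
    return 60.0 + (lookahead_m - 5.0) ** 2 + 10.0 * (speed_gain - 1.2) ** 2

lookaheads = [3.0, 4.0, 5.0, 6.0]
gains = [0.8, 1.0, 1.2, 1.4]

best = min(
    ((la, g, simulate_lap(la, g)) for la, g in itertools.product(lookaheads, gains)),
    key=lambda row: row[2],
)
print(f"best lookahead={best[0]} m, gain={best[1]}, lap time={best[2]:.2f} s")
```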
Climate change is the defining challenge now facing our planet. Limiting global warming to 1.5 degrees, as advocated by the Intergovernmental Panel on Climate Change, requires rapid, far-reaching, and unprecedented changes in how governments, industries, and societies function by 2030. Computer Science plays an important role in these efforts, both in providing tools for greater understanding of climate science and in reducing the environmental costs of computing. It is vital for Computer Science students to understand how their chosen field can both exacerbate and mitigate the problem of climate change.
We have reviewed the existing literature, interviewed leading experts, and held conversations at the ITiCSE 2019 conference, to identify how universities, departments, and CS educators can most effectively address climate change within Computer Science education. We find that the level of engagement with the issue is still low, and we discuss obstacles at the level of institutional, program and departmental support as well as faculty and student attitudes. We also report on successful efforts to date, and we identify responses, strategies, seed ideas, and resources to assist educators as they prepare their students for a world shaped by climate change.
A multibody model of an electric go-kart was developed in MSC Adams Car software to simulate the vehicle’s dynamic performance. In contrast to an ICE kart, its electric counterpart bears additional weight due to the batteries and other powertrain components. The model is inspired by a prototype vehicle developed at Universidad de los Andes. The prototype was built on top of an ICE frame, onto which a PMAC motor, controller, battery pack and the associated powertrain components were installed. A petrol-based go-kart weight distribution was defined as the baseline, and several variants of the electric adaptation with different weight distributions were constructed. The main objective of the model is to evaluate different configurations and identify the ones that can give performance advantages. Step-steer simulations run at 40 mph (64 km/h) were analyzed to assess the dynamic performance of the vehicle for different configurations of the battery bank placement.
For most iterations of the powertrain location, considerable differences in dynamic response were obtained, and the handling balance was identified as understeer, contrary to a priori expectations. Understeer gradient, weight distribution across both axles, and trajectory, among other results of interest, were examined in the simulations. The model made it possible to showcase the effect of weight redistribution on the dynamic behavior in this specific application. Among the main findings is that battery distribution can affect lifting of the inner rear tire and thereby degrade turning effectiveness.
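For readers unfamiliar with the metric, the understeer gradient can be obtained from settled step-steer (or steady-state cornering) data via the classic relation delta = L/R + K_us * a_y. The Python sketch below illustrates this calculation with made-up values; it is not the Adams Car post-processing used in the study.

```python
# Hedged sketch (not the Adams Car post-processing used in the study): computes an
# understeer gradient from settled step-steer data using the classic relation
# delta = L/R + K_us * a_y, i.e. K_us = (delta - L/R) / a_y. Units: rad, m, m/s^2.
import numpy as np

def understeer_gradient(delta_rad, radius_m, wheelbase_m, a_y_ms2):
    """Return K_us in rad per (m/s^2); positive values indicate understeer."""
    delta = np.asarray(delta_rad, dtype=float)
    a_y = np.asarray(a_y_ms2, dtype=float)
    ackermann = wheelbase_m / np.asarray(radius_m, dtype=float)
    return (delta - ackermann) / a_y

# Example with made-up settled values from two step-steer runs of a 1.05 m wheelbase kart
k_us = understeer_gradient(
    delta_rad=[0.060, 0.055],
    radius_m=[25.0, 28.0],
    wheelbase_m=1.05,
    a_y_ms2=[12.6, 11.3],
)
print(k_us)  # positive for both runs -> understeering balance
```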
As autonomous vehicles and autonomous racing rise in popularity, so does the need for faster and more accurate detectors. While our naked eyes are able to extract contextual information almost instantly, even from far away, limitations in image resolution and computational resources make detecting smaller objects (that is, objects that occupy a small pixel area in the input image) a truly challenging task for machines and a wide-open research field. This study explores ways in which the popular YOLOv5 object detector can be modified to improve its performance in detecting smaller objects, with a particular focus on its application to autonomous racing. To achieve this, we investigate how replacing certain structural elements of the model (as well as their connections and other parameters) can affect performance and inference time. In doing so, we propose a series of models at different scales, which we name ‘YOLO-Z’, and which display an improvement of up to 6.9% in mAP when detecting smaller objects at 50% IoU, at a cost of just a 3 ms increase in inference time compared to the original YOLOv5. Our objective is not only to inform future research on the potential of adjusting a popular detector such as YOLOv5 to address specific tasks, but also to provide insights on how specific changes can impact small object detection. Such findings, applied to the wider context of autonomous vehicles, could increase the amount of contextual information available to such systems.
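Small-object gains such as those reported above are typically exposed by breaking evaluation down by ground-truth box area. The simplified Python sketch below computes recall at 0.5 IoU for boxes under a pixel-area threshold; it is an illustrative stand-in, not the YOLO-Z evaluation code.

```python
# Hedged, simplified sketch (not the YOLO-Z evaluation code): splitting detection metrics
# by object size is how small-object improvements are usually measured. Here we compute
# recall at IoU 0.5 for ground-truth boxes below a pixel-area threshold.
import numpy as np

def iou(box_a, box_b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def small_object_recall(gt_boxes, det_boxes, max_area=32 * 32, iou_thr=0.5):
    """Fraction of small ground-truth boxes matched by any detection at the IoU threshold."""
    small = [g for g in gt_boxes if (g[2] - g[0]) * (g[3] - g[1]) < max_area]
    if not small:
        return float("nan")
    hits = sum(any(iou(g, d) >= iou_thr for d in det_boxes) for g in small)
    return hits / len(small)

gt = [[10, 10, 30, 30], [100, 100, 400, 400]]   # one small, one large box
det = [[12, 11, 31, 29], [105, 98, 390, 410]]
print(small_object_recall(gt, det))             # -> 1.0 (the small box was found)
```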