BSc (Hons), PhD, PFHEA
Faculty of Technology, Design and Environment
Phone number: +44 (0) 1865 484530
Location: Wheatley Campus
Undergraduate computing courses inevitably include a high degree of regeneration in order to keep abreast of this rapidly changing field. Introductory programming modules in particular need to adapt to changing trends and languages. Until recently, the focus of debate within the Oxford Brookes University curriculum has therefore been on course content, but since 2012 there has been a major change in the method of delivery through the introduction of a new apprenticeship model. This paper reflects on this and other recent changes, which have led to improved student engagement and results. The data are limited, however, so the results presented here are not conclusive.
Humans describe images in terms of nouns and adjectives, while algorithms operate on images represented as sets of pixels. Bridging this gap between how humans would like to access images and their typical representation is the goal of image parsing, which involves assigning object and attribute labels to pixels. In this paper we propose treating nouns as object labels and adjectives as visual attribute labels. This allows us to formulate the image parsing problem as one of jointly estimating per-pixel object and attribute labels from a set of training images. We propose an efficient (interactive-time) solution. Using the extracted labels as handles, our system empowers a user to verbally refine the results. This enables hands-free parsing of an image into pixel-wise object/attribute labels that correspond to human semantics. Verbally selecting objects of interest enables a novel and natural interaction modality that could be used to interact with new-generation devices (e.g. smartphones, Google Glass, living-room devices). We demonstrate our system on a large number of real-world images of varying complexity. To help understand the trade-offs compared to traditional mouse-based interactions, results are reported for both a large-scale quantitative evaluation and a user study.
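To make the joint formulation concrete, the following Python sketch picks, per pixel, the (object, attribute) pair that minimises the sum of two unary costs and an object-attribute compatibility term. It is a toy illustration only: the array shapes, the compat matrix, and the joint_label function are assumptions made for exposition, not the paper's actual interactive-time solver.

import numpy as np

def joint_label(obj_scores, attr_scores, compat):
    # obj_scores:  (H, W, O) per-pixel costs for O object labels
    # attr_scores: (H, W, A) per-pixel costs for A attribute labels
    # compat:      (O, A) cost of pairing object o with attribute a
    cost = (obj_scores[..., :, None]      # (H, W, O, 1)
            + attr_scores[..., None, :]   # (H, W, 1, A)
            + compat[None, None, :, :])   # (1, 1, O, A)
    H, W, O, A = cost.shape
    # Jointly choose the cheapest (object, attribute) pair at each pixel.
    flat = cost.reshape(H, W, O * A).argmin(axis=-1)
    return flat // A, flat % A            # object map, attribute map

# Hypothetical usage: a 4x4 image with 3 object and 2 attribute labels.
rng = np.random.default_rng(0)
obj_map, attr_map = joint_label(rng.random((4, 4, 3)),
                                rng.random((4, 4, 2)),
                                rng.random((3, 2)))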
We present an Embodied Conversational Agent (ECA) that incorporates a context-sensitive mechanism for handling user barge-in. The affective ECA engages the user in social conversation, and is fully implemented. We use actual examples of system behaviour to illustrate. The ECA is designed to recognise and be empathetic to the emotional state of the user. It is able to detect, react quickly to, and then follow up with considered responses to different kinds of user interruptions. The design of the rules which enable the ECA to respond intelligently to different types of interruptions was informed by manually analysed real data from human-human dialogue. The rules represent recoveries from interruptions as two-part structures: an address followed by a resumption. The system is robust enough to manage long, multi-utterance turns by both user and system, which creates good opportunities for the user to interrupt while the ECA is speaking.
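The two-part recovery structure (an address followed by a resumption) can be sketched in a few lines of Python. The interruption types and canned responses below are hypothetical placeholders, not the system's actual rules, which were derived from manually analysed human-human dialogue.

# Illustrative address rules keyed by an assumed interruption type.
ADDRESS_RULES = {
    "clarification": "Sure, let me explain that first.",
    "disagreement":  "I hear you, that's a fair point.",
    "backchannel":   None,  # no explicit address needed; just carry on
}

def recover(interruption_type, suspended_utterance):
    # Return the agent's recovery turn: the address (if any), then the
    # resumption of whatever utterance the barge-in suspended.
    address = ADDRESS_RULES.get(interruption_type)
    resumption = "As I was saying, " + suspended_utterance
    return [part for part in (address, resumption) if part]

print(recover("disagreement", "the next step is to book the venue."))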
The behavioural diversity of chaotic oscillators can be controlled into periodic dynamics and used to model locomotion using central pattern generators. This paper shows how controlled chaotic oscillators may improve the adaptation of robot locomotion behaviour to terrain uncertainties when compared to nonlinear harmonic oscillators. This is quantitatively assessed by the stability, changes of direction and steadiness of the robotic movements. Our results show that the controlled Wu oscillator promotes the emergence of adaptive locomotion when deterministic sensory feedback is used. They also suggest that the chaotic nature of chaos-controlled oscillators increases the expressiveness of pattern generators to explore new locomotion gaits.
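Since the Wu oscillator's equations are not reproduced in the abstract, the following Python sketch illustrates the underlying chaos-control idea on the well-known logistic map instead: a small OGY-style feedback, applied only near the target orbit, stabilises the chaotic dynamics onto a period-2 rhythm of the kind a pattern generator could exploit. All constants here are illustrative, not taken from the paper.

import numpy as np

R = 3.9  # logistic-map parameter, chosen inside the chaotic regime

def f(x):
    return R * x * (1.0 - x)

# Period-2 orbit of the logistic map: x = ((R+1) +/- sqrt((R+1)(R-3))) / (2R)
s = np.sqrt((R + 1) * (R - 3))
P = np.array([((R + 1) + s) / (2 * R), ((R + 1) - s) / (2 * R)])

def step(x, eps=0.05):
    # One controlled iteration: near either period-2 point, cancel the
    # local instability with a small linear (deadbeat) correction;
    # otherwise let the chaotic dynamics run freely.
    d = np.abs(x - P)
    i = d.argmin()
    if d[i] < eps:
        slope = R * (1.0 - 2.0 * P[i])    # f'(p): local expansion rate
        return f(x) - slope * (x - P[i])  # linearised correction to the orbit
    return f(x)

x, traj = 0.4, []
for _ in range(200):
    x = step(x)
    traj.append(x)
print(traj[-6:])  # settles onto the alternating period-2 rhythm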