Rebecca Raper

    Rebecca Raper is from Sheffield. She joined Oxford Brookes as a research student in April 2018 and her thesis title is ‘Autonomous Moral Artificial Intelligence’.

    How did you hear about Oxford Brookes University?

    I discovered Oxford Brookes University through findaphd.com. The PhD programme appealed to me, so I began to research the University. My first impression was that Oxford Brookes had a good computing department and good facilities, the University as a whole looked welcoming to new students, and Oxford itself looked like a beautiful city.

    What attracted you to Oxford Brookes University to conduct your research?

    What attracted me to Oxford Brookes University was the nature of the research project on offer, which allowed me to progress my previous academic interests in artificial intelligence and computing. I found the proposed topic extremely interesting, and funding was available that would allow me to study full time. I also received a welcoming response when I emailed to find out more about the research.

    What were you doing before?

    I completed a master’s degree in philosophy in 2010, after which I worked for several years in analyst roles. In the year prior to beginning my PhD, I was undertaking a further master’s in psychology, whilst working as a support worker for Mencap.

    How easy did you find it to settle into the research environment?

    I struggled at first to find direction and formal structure for my research, but my supervisors were very supportive and gradually things became clearer. Being entirely new to Oxford, I didn’t know any other researchers, but I found other PhD students and staff members in the department to be very helpful and forthcoming. The initial one-day induction was very well structured and gave me an overall idea of what a PhD student should do. It was also nice to meet students in a similar position, and I have found the networking events throughout the year to be a good opportunity to meet people.

    Tell us about your research.

    The aim of my project is to design, create and evaluate autonomous moral artificial intelligence. The topic sits at the intersection of philosophy, psychology and computer science, and involves examining moral theory and finding a way to embed it successfully in a machine.

    Artificial Intelligence (AI) is a subject that has attracted growing public interest in recent years. Simply put, AI involves re-creating human intelligence in a computer. Well-known recent advances in this area include IBM’s Watson, which beat human contestants at Jeopardy, and DeepMind’s AlphaGo, which defeated a human world champion at the board game Go. In the media, we have seen controversial advances in autonomous car technology (cars that drive themselves), autonomous weapons, and robotics, all using AI. This has provoked a huge public response. Questions arise about whether AI will take all our jobs or, worse still, whether machines will ever become so superintelligent that they pose a threat to human existence (see Bostrom’s Superintelligence).

    As AI becomes ever more autonomous and involved in our day-to-day lives, there is a need for it to be governed in a similar way to humans: by morals. We would hope that a robot interacting with us on a daily basis would make appropriate moral decisions, and that an AI used to make important decisions would make them ethically. However, it is not so obvious how to make an AI moral. The traditional approach to programming an AI does not seem applicable when it comes to morals. A set of morals cannot simply be programmed into an AI, (1) because it would be impossible to enumerate every moral rule exhaustively in advance, and (2) because it is not obvious how such a list would help an AI decide how to act in a new moral situation.
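    To illustrate that limitation, here is a minimal, purely hypothetical sketch (not taken from the project itself): an agent whose morality is a hard-coded lookup table can only respond to situations its programmer anticipated, and has nothing to say about anything else. The situations and responses below are invented for the example.

    ```python
    # Purely hypothetical illustration: a fixed rule table can only answer
    # the moral situations its author thought of in advance.
    HARD_CODED_RULES = {
        "found a lost wallet": "return it to its owner",
        "asked to share data without consent": "refuse",
    }

    def decide(situation: str) -> str:
        """Look up a pre-programmed response for a moral situation."""
        # Any situation outside the enumerated list leaves the agent with
        # no guidance at all, which is the gap described above.
        return HARD_CODED_RULES.get(situation, "no rule available")

    print(decide("found a lost wallet"))            # return it to its owner
    print(decide("witnessed a stranger collapse"))  # no rule available
    ```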

    The purpose of my PhD is to find a more suitable approach to placing morals within AI. The hypothesis is that the best approach is to mimic the way morals develop in children, and so to create an AI that learns morals for itself in a similar way. I am in the process of designing an AI that will be able to learn appropriate moral behaviours by picking up on social cues from its environment. The AI will then be evaluated by placing it in a game scenario, where we determine whether it behaves morally. The work involves understanding the psychology of how children develop morals, sociology to identify the appropriate social cues, and some economics and game theory.
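    By way of illustration only, the sketch below shows one very simplified reading of that idea: a toy agent repeatedly plays a sharing game and nudges its preferences toward whichever action the environment’s “social feedback” approves of. The game, the feedback signal, the update rule and all names are assumptions made for this example; they are not the design being developed in the PhD.

    ```python
    import random

    ACTIONS = ["share", "keep"]

    def social_feedback(action: str) -> float:
        """Stand-in for cues from the environment: sharing is approved of."""
        return 1.0 if action == "share" else -1.0

    def train(episodes: int = 500, learning_rate: float = 0.1) -> dict:
        """Learn action preferences from repeated feedback, not from rules."""
        values = {action: 0.0 for action in ACTIONS}
        for _ in range(episodes):
            # Explore occasionally; otherwise act on current preferences.
            if random.random() < 0.1:
                action = random.choice(ACTIONS)
            else:
                action = max(values, key=values.get)
            # Nudge the preference toward behaviour the environment approves of.
            values[action] += learning_rate * (social_feedback(action) - values[action])
        return values

    if __name__ == "__main__":
        print(train())  # the "share" preference ends up higher than "keep"
    ```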

    What do you enjoy about being a research student?

    I find the topic I am researching extremely interesting. It is at the cutting edge of artificial intelligence research, and I get to work with enthusiastic and stimulating people here at Brookes. As a PhD student, I have a wealth of opportunities, including all the resources available at the University and from the Graduate College. I found my initial orientation at the University difficult, but the supervisors and staff here were very willing to help me settle in, and simply asking people lots of questions helped me get the support I needed.

    What do you think about the research training offered at Oxford Brookes?

    The research training programme is rich and well planned. The induction helped prepare me for the first phase of my PhD, and I have found the training courses, from both the Graduate College and my department, to be well taught and informative. The Graduate College courses have given me general skills to help me within academia, whilst the departmental programmes have provided more specific tools, such as how to use EndNote.

    What are your future plans?

    At the moment, I am unsure of my future plans, but I feel I have lots of options open to me. If I settle well into academia through my research, I will pursue a position within a university after my PhD, but I will also look at the kinds of opportunities available in industry.

    I also have a start-up business that I am in the process of establishing, so this could be something to work on.