“The idea that everyone should be equally safe is contestable”

The Moral Machine he helped design highlighted the ethical dilemmas of automated driving. But how should they be addressed? Computer scientist Edmond Awad discusses the risks of autonomous vehicles and where the ethical discourse is heading.

What led you to undertake the Moral Machine experiment?

EDMOND AWAD: Our initial motivation for doing the experiment was twofold. First, we wanted to provide a simple way for the public to engage in an important societal discussion. Second, we wanted to collect data to identify which factors people think are important for autonomous vehicles (AVs) to consider in resolving ethical trade-offs. The public interest in the platform surpassed our wildest expectations, with the website going viral on multiple occasions in various countries. At some point, we realised that the sheer scale of the data enabled us to conduct a study that was much more ambitious than we had originally anticipated. In particular, the global coverage enabled us to make cross-cultural comparisons that have seldom, if ever, been possible with social psychology experiments.

So essentially, the Moral Machine was an effort to engage the public in the ethical discussion surrounding AVs. What were some of the challenges you faced?

The first challenge was to design scenarios that were close to real life while keeping the experiment manageable at the same time. We considered nine different factors that were combined to form realistic scenarios: type of intervention (stay-in-lane/swerve), relationship to AV (pedestrians/passengers), legality (lawful/unlawful), gender (male/female), age (younger/older), social status (higher/lower), fitness (fit/large), number of characters (more/fewer), and species (humans/pets). We also considered 20 different characters, including male/female adults, elderly people, athletes, doctors, etc. Then there were many other aspects that, had we considered them, would have made the scenarios more realistic. For example, we did not introduce uncertainty about the fates of the characters (life-or-death outcomes were certain), or about the classification of these characters (characters were recognised as adults or children and so on, with 100 percent certainty).

The second challenge was reach. The high number of factors and characters considered resulted in millions of distinct possible scenarios, and that needed to be matched with hundreds or thousands (if not millions) of participants.
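
To give a rough sense of that scale, here is a toy Python sketch, not the actual Moral Machine scenario generator: it only enumerates the nine binary factor contrasts listed above, and the comments note where the character combinations come in.

    # Toy illustration (not the actual Moral Machine scenario generator): even the
    # nine binary factor contrasts alone yield 2**9 = 512 combinations; varying
    # which of the 20 character types appear on each side of the dilemma, and in
    # what numbers, is what pushes the count of distinct scenarios into the millions.

    from itertools import product

    factors = {
        "intervention": ["stay-in-lane", "swerve"],
        "relationship_to_AV": ["pedestrians", "passengers"],
        "legality": ["lawful", "unlawful"],
        "gender": ["male", "female"],
        "age": ["younger", "older"],
        "social_status": ["higher", "lower"],
        "fitness": ["fit", "large"],
        "number_of_characters": ["more", "fewer"],
        "species": ["humans", "pets"],
    }

    factor_combinations = list(product(*factors.values()))
    print(len(factor_combinations))  # 512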

Ethical challenges

How do ethics and morality play into the realm of pure technology?

We usually tend to evaluate technology first in terms of whether it does what it is designed to do. But we don’t usually stop there: we also consider how well it performs on factors like efficiency, safety and security. These factors are ethically relevant. The problem becomes more complex when we realise that these are not the only ethically relevant factors we should care about. Recently, we started to realise that other important, ethically relevant factors like privacy, fairness, transparency and agency were insufficiently considered in the design of technologies that we use every day. The work on the ethics of AI in the last few years has mainly focused on including such factors in technology at an earlier stage.

alt=""When it comes to the specific technology of AVs, where do ethical issues arise?

When driving on the road, AVs will make some decisions that we, as drivers, make without thinking much, such as how we laterally position the vehicle in the driving lane. When making such a positioning decision, AVs are probably optimising for something, such as efficiency, safety, liability or a combination of all those things. But even this mundane decision may have a societal impact in aggregate. Consider, for example, two different programmes, used by two different manufacturers, that differ in where they position AVs within their lanes. Suppose that after driving for thousands of miles, we notice that both have similar safety levels (resulting, say, in the death of 20 people in one month), but they differ in the proportions of subgroups being killed: Programme A results in killing 15 passengers and five cyclists, and Programme B results in killing one passenger and 19 cyclists. What would be the morally preferable programme? It’s easy to see how Programme B would be more popular among customers, which could motivate manufacturers to become more protective of their passengers, thus putting cyclists on the road at a disadvantage. This goes for other small decisions on the road as well, like the decision to brake or speed up when approaching a yellow light.
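
The trade-off in this example can be made concrete with a short sketch; the programme names and fatality counts below are the purely illustrative numbers from the example above, not real data.

    # Minimal sketch: comparing two hypothetical lane-positioning programmes by
    # their aggregate fatality counts and by how those fatalities are distributed
    # across road-user subgroups. The numbers are illustrative, not real data.

    fatalities = {
        "Programme A": {"passengers": 15, "cyclists": 5},
        "Programme B": {"passengers": 1, "cyclists": 19},
    }

    for name, counts in fatalities.items():
        total = sum(counts.values())
        print(f"{name}: total deaths = {total}")
        for group, n in counts.items():
            print(f"  {group}: {n} ({n / total:.0%} of deaths)")

    # Both programmes are equally "safe" in aggregate (20 deaths each), yet they
    # shift risk between passengers and cyclists in opposite directions, which is
    # exactly the ethically relevant difference discussed above.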

Cultural preferences in moral decision-making

It seems likely that people from different cultures would make different choices in specific situations. Did that come through in the study?

We found that, while most countries agreed on the general direction of the preferences (such as sparing younger lives over older lives), the magnitudes of these preferences differed considerably across borders. We also found that countries are broadly grouped into three main clusters: Western (including a majority of English-speaking, Catholic, Orthodox and Protestant countries), Eastern (including a majority of Muslim, Confucian and South Asian countries), and Southern (comprising Latin America and former French colonies). The clusters differ largely in the weight they give to these preferences. For example, the preference to spare younger lives over older lives is much less pronounced in the Eastern cluster and much more pronounced in the Southern cluster. The study also identified factors that predict country-level differences. One example is that the strength of the rule of law in a country correlated with the preference to spare the lawful.
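
As a rough illustration of how such groupings can emerge from the data, here is a minimal Python sketch, not the study’s actual analysis pipeline: each country is represented by a hypothetical vector of preference weights and grouped by standard hierarchical clustering.

    # Minimal sketch (not the study's actual analysis): group countries by the
    # relative weight they give to each moral preference. The preference weights
    # below are hypothetical placeholders, purely for illustration.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Hypothetical weights per country:
    # [spare the young, spare the lawful, spare more lives]
    weights = {
        "Country 1": [0.8, 0.4, 0.7],
        "Country 2": [0.3, 0.6, 0.6],
        "Country 3": [0.9, 0.3, 0.8],
        "Country 4": [0.2, 0.7, 0.5],
    }

    countries = list(weights)
    X = np.array([weights[c] for c in countries])

    # Ward linkage on the preference vectors, cut into two clusters here
    Z = linkage(X, method="ward")
    labels = fcluster(Z, t=2, criterion="maxclust")

    for country, label in zip(countries, labels):
        print(f"{country}: cluster {label}")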

The human perspective on moral decisions made by machine intelligence: The self-driving car must choose between killing a toddler and killing an elderly person. (Photo: Simon Landrein)

What lessons should we draw from these cross-country results?

The main lesson is that programming ethical decisions in AVs using a certain set of rules is likely to get different levels of pushback in different countries. For example, if AVs are programmed in a way that disadvantages jaywalkers, then such AVs may be more acceptable in some countries than in others. Or to put it another way, if one manufacturer figures out a software update that improves the safety of road users, but only at the expense of jaywalkers, should the software be implemented? Our findings predict that such an update may be more acceptable in countries where the rule of law is stronger.

The Social Dilemma

An interesting aspect of the Moral Machine experiment was that participants on the website were confronted with scenarios in which an autonomous vehicle’s AI might choose to sacrifice the safety of an AV passenger to, for example, save the lives of a mother and child. Did the results indicate that humans are selfless in that regard and would accept their fate in such a situation, or did they show that self-preservation is paramount in our ethical considerations?

When designing the Moral Machine experiment, we tried to focus on cases where website users were not direct participants in the scenario. They were just judging it from a bird’s-eye view. We made this choice because we anticipated that imagining direct involvement would strongly influence their answers and would thus limit the scope of the experiment.

Edmond Awad is a Lecturer in the Department of Economics at the University of Exeter Business School. Prior to that, he was a Postdoctoral Associate at the Media Lab at the Massachusetts Institute of Technology. In 2016, Edmond led the design and development of Moral Machine. (Photo: private)

A preceding study (published in 2016 by my co-authors, Bonnefon et al.) considered cases in which participants were asked to imagine being passengers inside the AV (alone or with a family member or co-worker). In dilemmas similar to the ethical thought experiment known as the “trolley problem”, where the AV would have to sacrifice one or two passengers to save 10 to 20 pedestrians, they found that participants preferred to buy an AV that would protect the passengers at the expense of the pedestrians. Interestingly, though, participants acknowledged that sacrificing the passengers in order to save up to 10 times more pedestrians is more morally acceptable. This indicates a tension between the self-interest of individuals and the collective interest of society, a situation known as the Social Dilemma.

Because of these and other dilemmas, some argue that AVs should make random decisions, choosing from various options that have emerged from surveys such as the Moral Machine. What is your view?

If everyone is equally safe or equally at risk in an AV-dominated environment, regardless of their physical features, such an approach does seem justifiable. But that doesn’t necessarily make it the solution. First, the idea that everyone should be equally safe is contestable. Some may argue that extra care should be given to keep vulnerable individuals safer, even at the expense of others. Others may argue that people who are more reckless, or those who participate in the generation of risk, should not enjoy the same safety as everyone else. Second, the idea that AVs would make random choices might have a direct effect on the use and adoption of AVs. An important step in all of this is the ability to quantify risk to individuals and to identify whether some groups enjoy (or will enjoy) more safety at the expense of others in a future, AV-dominated environment.
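
One simple way to picture that kind of quantification, with entirely hypothetical numbers, is to compare per-group fatality rates normalised by exposure; the groups, counts and exposures below are assumptions for illustration only.

    # Minimal sketch of group-level risk quantification: hypothetical fatality
    # counts and exposure (millions of person-miles) per road-user group under
    # some imagined AV deployment. All figures are made up for illustration.

    hypothetical = {
        # group: (fatalities, exposure in millions of person-miles)
        "passengers":  (10, 500.0),
        "pedestrians": (6, 80.0),
        "cyclists":    (4, 40.0),
    }

    rates = {
        group: deaths / exposure
        for group, (deaths, exposure) in hypothetical.items()
    }

    baseline = min(rates.values())
    for group, rate in sorted(rates.items(), key=lambda kv: kv[1]):
        print(f"{group}: {rate:.3f} deaths per million miles "
              f"({rate / baseline:.1f}x the safest group)")

    # A disparity in these per-exposure rates is one concrete way to ask whether
    # some groups enjoy more safety at the expense of others.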

This interview is part of a collection of essays and interviews by Alexander Görlach:

Shaping A New Era – how artificial intelligence will impact the next decade (PDF)