#006 The moral dilemma of self-driving cars

How should autonomous vehicles decide who to sacrifice?

Almost 1.3 million people die in road crashes each year. One of the most compelling reasons in favor of introducing autonomous vehicles is to enhance driving safety and reduce road casualties. However, this prompts the question of whether these cars will be able to make decisions in a split second and respond just as well as experienced human drivers. And most importantly, how will AI-powered software make life-and-death choices when an emergency occurs?

Will self-driving cars be able to make decisions in a split second? [1]


While almost all major global car manufacturers are working on autonomous driving technology, the possibility that AI software will choose between life and death frightens many people. Moreover, the moral and ethical dilemmas surrounding the behavior of self-driving vehicles have not yet been thoroughly discussed, and it remains unanswered how these cars should decide in situations involving different kinds of risk. On the other hand, AI practitioners are confident that self-driving cars will be able to make optimal decisions when an unavoidable collision occurs.

How will AI software behave when facing life-and-death choices?

With the power of artificial intelligence, driverless vehicles can understand the traffic situation and respond safely in real time. They accomplish this using deep neural networks trained to minimize casualties. However, everyday situations can arise on the road in which a fatal accident cannot be avoided. So the question is: how should self-driving cars respond in such situations?
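To make this concrete, here is a minimal, purely illustrative sketch (in Python) of how a planning module might rank candidate maneuvers by expected harm. The Maneuver and Outcome classes, the probabilities, and the casualty counts are all hypothetical assumptions for this example; real systems estimate such quantities from sensor data and prediction models, not from hand-written numbers.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Outcome:
    """One possible result of a maneuver (hypothetical model)."""
    probability: float   # estimated chance this outcome occurs
    casualties: int      # people harmed if it does


@dataclass
class Maneuver:
    name: str
    outcomes: List[Outcome]


def expected_harm(m: Maneuver) -> float:
    """Expected number of casualties for a maneuver."""
    return sum(o.probability * o.casualties for o in m.outcomes)


def choose_maneuver(candidates: List[Maneuver]) -> Maneuver:
    """Pick the maneuver that minimizes expected harm."""
    return min(candidates, key=expected_harm)


# Illustrative numbers only -- a real perception and prediction stack
# would estimate these probabilities from sensor data in milliseconds.
brake = Maneuver("emergency brake", [Outcome(0.7, 0), Outcome(0.3, 1)])
swerve = Maneuver("swerve left", [Outcome(0.9, 0), Outcome(0.1, 2)])

print(choose_maneuver([brake, swerve]).name)  # -> "swerve left"
```

In this toy comparison the swerve has the lower expected harm (0.2 vs. 0.3), so it would be chosen; the hard ethical question is precisely how those harm numbers should be assigned in the first place.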

They can try to predict how their future choices will affect the traffic situation. For example, suppose two pedestrians suddenly appear in front of a vehicle. What should the car do: kill a baby or kill an old lady? A conscientious driver will most likely try to avoid killing the baby, reasoning that this choice minimizes the harm. After all, isn't this the choice people make when a ship is sinking? We save women and children first, while the men are sacrificed.

Joker [2]

We witness a similar dilemma in the movie “The Dark Knight”. The Joker seizes two ferries, each filled with explosives, and gives the passengers on each boat the detonator for the other. One boat carries ordinary civilians, the other carries prisoners. Therein lies the moral dilemma: he informs them that one boat must blow up the other before midnight, or he will detonate both, killing them all.

How will autonomous vehicles be programmed to behave in situations that threaten human life?

In the event of an imminent collision, a self-driving car can only respond in the way its programmers have decided it should. For the AI to make such a decision in a split second, its software must be developed to handle all possible scenarios of a specific accident before it even happens.

To better understand how self-driving cars could make such decisions, scientists have examined how humans deal with similar dilemmas. People have shown that they are willing to sacrifice themselves for others and to make decisions based on the age of the victim; a human driver might, for instance, swerve onto the sidewalk to reduce the number of lives lost. Human behavior also shows age, gender, and species preferences: people tend to save girls over boys, humans over animals, and the young over the old.

Air Force photo by Margo Wright [3]

Which behavior of an autonomous car is considered correct in such situations depends on which factors are both morally relevant and socially acceptable. The car could be programmed to choose to kill a man before a woman, an old man before a child, or an animal before a human.
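As a purely hypothetical illustration, such a preference ordering could be encoded as a table of harm weights that a planner consults when comparing outcomes. The categories and numbers below are invented for the example and do not reflect any real manufacturer's policy or any accepted standard:

```python
# Hypothetical harm weights encoding one possible (and controversial)
# social preference ordering -- invented for illustration only.
HARM_WEIGHT = {
    "child": 1.0,
    "adult_woman": 0.9,
    "adult_man": 0.8,
    "elderly_person": 0.7,
    "animal": 0.1,
}


def weighted_harm(victims):
    """Total weighted harm of an outcome, given who would be struck."""
    return sum(HARM_WEIGHT[v] for v in victims)


# A planner using this table would prefer the outcome with the lower
# score, e.g. striking an animal (0.1) rather than a child (1.0).
print(weighted_harm(["animal"]) < weighted_harm(["child"]))  # True
```

Whether any such table should exist at all, and who gets to choose its numbers, is exactly the open moral question this article describes.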

Would you buy a car that would sacrifice its passengers in case of an accident?

So, let’s take a look at an example:

A driverless car has to decide whether to hit a group of children on the road or avoid them by swerving into a solid obstacle, killing or severely injuring the passengers in the vehicle. It is possible that the artificial intelligence will choose to sacrifice its passengers to spare a child or a pregnant woman on the road. So the questions are: Who wants to buy that car? Isn't that bad for business? How will manufacturers sell such a product? These questions remain unanswered, but for experts they are no barrier to continuing the development of self-driving car technology.

What should the self-driving car do? [4]

Will the world with self-driving vehicles be safer?

Autonomous vehicles will become better than human drivers over time, but there will still be situations where vehicles will have to make ethical decisions to save or possibly risk human life.

The ability to avoid accidents is one of the key things that should distinguish autonomous vehicles from traditional ones. When they finally conquer the roads, self-driving cars will have to be safer and better drivers than us, since 94% of road accidents happen because of human error. If people are eliminated from this equation, the number of casualties is expected to decline dramatically. And who would ever dare to oppose such a benefit to humanity?


References: