When Your Car Must Choose Who to Kill

You are in a self-driving car, careening down a narrow alleyway sandwiched between a brick wall and a convenience store. A man exits the shop and pauses in front of the entrance to light a cigarette. At the same time, a crowd of college students trips over each other and stumbles in front of the car. To your dismay, the car’s brakes are malfunctioning. Someone’s going to get hurt.

After some complex calculations, your self-driving car concludes that there are three possible outcomes:

1) Continue straight ahead and kill the pedestrians.

2) Swerve right into the brick wall and kill the driver.

3) Swerve left and kill the bystander smoking a cigarette in front of the store.

Every possible decision in this hypothetical scenario has a trade-off: some lives are lost, and some lives are saved. Machine ethics studies how your self-driving car should make the most ethical decisions in a complex situation like this.

The biggest challenge of machine ethics is making ambiguous moral decisions computable. For a machine to ‘decide’ what to do, there must be a metric that dictates what the ‘right’ decision is. Determining this metric – or series of metrics – raises many difficult questions. Should your self-driving car prioritize the greatest number of survivors? The survivors who are more ‘valuable’ to society? Is it more important for the car to save smarter people? Richer people? Kinder people? Is it discriminatory to quantify the value of a person’s life using individual traits?

Wendell Wallach, author of Moral Machines: Teaching Robots Right from Wrong, defines two general ways to implement machine morality. In the top-down approach, the machine is given a codified moral framework on which to base all its decisions. Suppose that your self-driving car were programmed to adhere to utilitarianism, an ethical theory that aims to maximize the happiness of the greatest number of people.

The actual implementation of utilitarianism, however, is not as straightforward as it seems. Like any theory, utilitarianism is general and abstract. Figuring out how to apply it to specific scenarios can be a highly interpretive and subjective process. In our case, what if the pedestrians were a crowd of homeless and chronically unemployed people, and the man in front of the store was a skilled neurosurgeon? Is it more utilitarian for your car to save four homeless people, or one skilled neurosurgeon who has the potential to save many more lives in the future?
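
To see how much subjectivity hides inside a top-down rule, here is a minimal sketch in Python of what a utilitarian decision procedure might look like. Everything in it – the outcomes, the weights, the names – is invented for illustration, not taken from any real system. The crux is the value table: someone has to choose those numbers, and even weighting everyone equally is itself a moral judgment.

```python
# A toy top-down utilitarian rule. Every weight here is a made-up
# illustration, not a value any real system uses.

OUTCOMES = {
    "straight": {"killed": ["student"] * 4},
    "swerve_right": {"killed": ["driver"]},
    "swerve_left": {"killed": ["bystander"]},
}

# The crux of the problem: someone has to choose these numbers.
SOCIAL_VALUE = {
    "student": 1.0,
    "driver": 1.0,
    "bystander": 1.0,  # ...but what if he is a neurosurgeon?
}

def utilitarian_cost(outcome):
    """Total 'value' lost if this outcome occurs."""
    return sum(SOCIAL_VALUE[person] for person in outcome["killed"])

def decide(outcomes):
    """Choose the action that minimizes value lost."""
    return min(outcomes, key=lambda action: utilitarian_cost(outcomes[action]))

print(decide(OUTCOMES))  # with equal weights: one of the one-death swerves
```

Raise the bystander’s weight to reflect the neurosurgeon, and the ‘right’ answer silently changes. The arithmetic is trivial; every entry in that table is a contested moral claim.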

The other approach to machine morality, according to Wallach, is the bottom-up approach. This method teaches the machine how to make moral decisions using machine learning algorithms. In other words, the machine combs through large amounts of data and draws its own conclusions, instead of following a set of rules explicitly specified by engineers. To learn in this way, the machine can be given training data that is either labeled or unlabeled.

Labeled training data is a set of problems paired with their correct solutions. For our careening car, labeled training data would be a dataset of collision scenarios, each paired with the “right” action for the car to take. The greatest difficulty here is that creating a substantial amount of labeled training data is a time-consuming and subjective endeavor, especially for moral issues that have no objectively correct answer. Who, after all, gets to decide what the “correct” answer is? And how did they choose?
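
As a rough sketch of what the labeled approach could look like, assuming scikit-learn and an entirely hypothetical hand-labeled dataset (the features, scenarios, and labels below are invented for illustration):

```python
# Sketch of the labeled (supervised) approach, assuming scikit-learn.
# Each row is a hypothetical collision scenario; each label is the
# action a human annotator deemed "right" -- the subjective part.
from sklearn.tree import DecisionTreeClassifier

# Features: [pedestrians_ahead, bystanders_left, wall_on_right (0/1)]
X = [
    [4, 1, 1],
    [0, 2, 1],
    [1, 0, 1],
    [3, 1, 0],
]
y = ["swerve_left", "straight", "swerve_left", "swerve_right"]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# The model generalizes the annotators' judgments to unseen scenes.
print(model.predict([[5, 1, 1]]))
```

Note that the model can only ever be as ethical as its annotators: it learns to reproduce their judgments, disagreements and all.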

Unlabeled training data has no answers attached. The machine infers patterns from the scenario data and creates its own guidelines. For instance, our car could learn from a dataset of the split-second decisions actual drivers made in collision scenarios. There is no indication of whether each driver’s reaction was correct or not. Instead, the machine spots patterns in how human drivers act, treats these patterns as guidelines, and applies them to new situations.
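
A minimal sketch of this alternative, again assuming scikit-learn and invented data. ‘Unlabeled’ here means morally unlabeled: the data records what drivers actually did, never whether it was right.

```python
# Sketch of learning from morally unlabeled data, assuming scikit-learn.
# Each row records what a real driver did in a scene -- not whether
# that reaction was ethical. The model simply imitates the pattern.
from sklearn.neighbors import KNeighborsRegressor

# Scene features: [pedestrians_ahead, bystanders_left]
scenes = [[4, 1], [4, 0], [1, 2], [0, 1], [0, 0]]
# Observed reactions: steering angle in degrees (negative = left)
reactions = [-30, -35, 25, 0, 2]

imitator = KNeighborsRegressor(n_neighbors=3).fit(scenes, reactions)

# In a new scene, the car does roughly what nearby drivers did,
# with no notion of whether that choice was ethical.
print(imitator.predict([[3, 1]]))
```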

Unsupervised learning like this assumes that the decision most people make is the best solution. But a popular choice is not necessarily the most ethical one. For example, unlabeled training data from real drivers will carry a strong self-preservation bias. Like any organism with a survival instinct, human drivers tend to make decisions that maximize their own chances of survival. This means that cases of a driver sacrificing themselves for the greater good will be sparse. A self-driving car trained on such a dataset would therefore also be biased towards sacrificing anybody other than the driver. While this may be great news for the driver, it is debatable whether a significant self-preservation bias leads to ethical decisions.
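
The skew is easy to see in the data itself. With hypothetical numbers:

```python
# Hypothetical tally illustrating self-preservation bias in driver data.
from collections import Counter

recorded_choices = (
    ["protect_self"] * 95 +    # swerve away from danger to the driver
    ["sacrifice_self"] * 5     # swerve into the wall to spare others
)

print(Counter(recorded_choices))
# Counter({'protect_self': 95, 'sacrifice_self': 5})
# A model that imitates this data inherits the same 19-to-1 skew.
```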

Both top-down and bottom-up approaches have serious flaws. The guiding principles of the top-down approach can be too general and abstract to apply to specific scenarios. Bottom-up approaches can process a large variety of inputs and data, but struggle to establish an explicit ethical goal or framework. What does that mean for our self-driving car? It means the car needs a hybrid of the two approaches: an overall ethical guide supplemented by machine learning for case-by-case specificity.
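
One way such a hybrid might be wired together, as a rough sketch with invented names and rules: a hard-coded ethical floor filters the options, and a learned model chooses among whatever survives.

```python
# Sketch of a hybrid architecture: a top-down rule constrains a
# bottom-up model. All names and numbers are invented illustrations.

OUTCOMES = {
    "straight": {"deaths": 4},
    "swerve_right": {"deaths": 1},
    "swerve_left": {"deaths": 1},
}

def passes_ethical_floor(outcome):
    """Top-down layer: a hard constraint no learned model may override.
    Here, a (debatable) rule: never kill more people than necessary."""
    fewest = min(o["deaths"] for o in OUTCOMES.values())
    return outcome["deaths"] == fewest

def learned_preference(action):
    """Bottom-up layer: stand-in for a model trained on driving data."""
    return {"straight": 0.7, "swerve_right": 0.1, "swerve_left": 0.5}[action]

# The learned model only ranks the options the ethical layer allows.
allowed = [a for a, o in OUTCOMES.items() if passes_ethical_floor(o)]
print(max(allowed, key=learned_preference))  # -> swerve_left
```

The division of labor mirrors the argument above: the top-down layer supplies the explicit ethical framework that bottom-up learning lacks, while the learned layer handles the messy specifics that general rules cannot anticipate.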

As this technology develops, self-driving cars and their designers will struggle to handle ambiguous scenarios, especially ones with ethical ambivalence. Software engineers cannot write a program with an infinite number of specific, hard-coded responses to every possible scenario a car could encounter. Additionally, as the aviation industry has learned the hard way, the deployment of a new automated system tends to bring a temporary uptick in accidents, which sheds light on software bugs and hardware flaws for engineers to fix.

Despite the hurdles in developing safe, ethical self-driving cars, however, the technology promises a clear net gain for human lives. According to the National Highway Traffic Safety Administration, over 90% of car accidents in the U.S. are caused by driver error. Human error – from dozing off to texting – can be removed from the equation, saving lives that would otherwise be lost to drunk drivers and distracted teenagers. And moving toward safer roads, with fewer fatalities, is a win for us all.
