An MIT professor explains why we are still a long way off from solving one of the biggest problems with self-driving cars

Mar 5, 2017, 18:53 IST


MIT associate professor Iyad Rahwan has asked 3 million people to consider the "Trolley problem" when it comes to self-driving cars.


The Trolley problem goes like this: a runaway trolley is barreling toward five people on a track who cannot move. But you have the option to pull a lever and send it to a side track where you see one person standing. What would you do?

But as Rahwan puts it, the Trolley problem gets thornier when considering self-driving cars. The first scenario puts the ethical burden on a person. But if a self-driving car is in a lose-lose situation where it must make a choice, we're asking a robot in our everyday environment to make the call.

"The idea of a robot having an algorithm programmed by some faceless human in a manufacturing plant somewhere making decisions that has life-and-death consequence is very new to us as humans," Rahwan told Business Insider.

Rahwan's work highlights the difficulty of assessing what should happen if a self-driving car gets into an accident. Should cars be programmed to act a certain way in dicey scenarios?


The Trolley debate has lingered in the background for quite some time as automakers advance their self-driving car efforts. Rahwan helped bring it to the surface in October 2015 when he co-wrote a paper, "Autonomous vehicles need experimental ethics."

But the debate arguably moved to the forefront of discussion when Rahwan launched "MIT's Moral Machine," a website that poses a series of ethical conundrums to crowdsource how people feel self-driving cars should react in tough situations. The Moral Machine is an extension of Rahwan's 2015 study.

Rahwan said that since launching the website in August 2016, MIT has collected 26 million decisions from 3 million people worldwide. He is currently analyzing whether cultural differences play a role in the responses given.


Rahwan admits the debate itself isn't without its flaws. The Trolley problem is purposefully simple so that it's easier to understand, allowing researchers to assess people's psychological processing.


"The downside of that is it looks very unrealistic and looks like a situation that would never happen or be very rare," he said.

Still, that doesn't mean these aren't questions worth asking, Rahwan said.

"They need an answer to this question because, ultimately, it's not about a specific scenario or accident, it's about the overall principle that an algorithm has to use to decide relative risk," he said.

Some automakers have publicly addressed this question.


In October, Christoph von Hugo, the manager of driver-assistance systems at Mercedes-Benz, said that future autonomous vehicles would put the driver first in a lose-lose situation.


"If you know you can save at least one person, at least save that one. Save the one in the car," he said in an interview with Car and Driver. "If all you know for sure is that one death can be prevented, then that's your first priority."

Following the story's publication, Mercedes told several publications that the quote had been taken out of context. A Daimler spokesperson reiterated that stance in an email to Business Insider:

"For Daimler it is clear that neither programmers nor automated systems are entitled to weigh the value of human lives," the spokesperson wrote. "There is no instance in which we've made a decision in favor of vehicle occupants. We continue to adhere to the principle of providing the highest possible level of safety for all road users."

Rahwan also said it's unlikely engineers will program a specific decision into their algorithms.

"No one is going to build a car that says the life of one child is worth 1-and-a-half adult or something like that. This is unlikely," Rahwan said.



But automakers should be transparent with their data so independent researchers can assess whether certain self-driving cars are behaving in a biased fashion, Rahwan said. For example, if data shows a self-driving car is disproportionately harming specific people, such as hitting cyclists more often than pedestrians, programmers should revisit their algorithms to see what's going wrong.

The National Highway Traffic Safety Administration acknowledged in a September report that self-driving cars could favor certain decisions over others even if they aren't programmed explicitly to do so.

Self-driving cars will rely on machine learning, a branch of artificial intelligence that allows computers, or in this case cars, to learn over time. Since cars will learn how to adapt to the driving environment on their own, they could learn to favor certain outcomes.
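NHTSA's point, that a rule learned from data can carry an implicit preference nobody programmed, can be shown with a toy example. In the sketch below, the policy, the logged experience, and all the cost numbers are invented for illustration; the "car" simply picks the action with the lowest average observed cost, and a bias toward one group of road users falls out of the data rather than any explicit rule.

```python
# Hypothetical illustration: a decision rule learned from logged costs can
# implicitly favor an outcome even though no ethical preference was ever
# programmed. All data and names here are made up for illustration.

from collections import defaultdict

# Logged (situation, action, cost) experience, e.g. damage incurred.
experience = [
    ("obstacle_ahead", "swerve_right", 1.0),  # bike lane on the right
    ("obstacle_ahead", "swerve_right", 1.0),
    ("obstacle_ahead", "swerve_left", 3.0),   # oncoming traffic on the left
    ("obstacle_ahead", "brake", 2.0),
]

def learn_policy(log):
    """Average cost per (situation, action), then pick the cheapest action."""
    totals, counts = defaultdict(float), defaultdict(int)
    for situation, action, cost in log:
        totals[(situation, action)] += cost
        counts[(situation, action)] += 1
    best = {}  # situation -> (action, average cost)
    for (situation, action), total in totals.items():
        avg = total / counts[(situation, action)]
        if situation not in best or avg < best[situation][1]:
            best[situation] = (action, avg)
    return {s: a for s, (a, _) in best.items()}

policy = learn_policy(experience)
print(policy["obstacle_ahead"])  # swerve_right: the learned rule implicitly
                                 # shifts risk toward the bike lane.
```

Nothing in `learn_policy` mentions cyclists, yet the resulting rule systematically swerves toward the bike lane because that was cheapest in the training data. This is the kind of implicit decision rule the NHTSA report warns about.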

"Even in instances in which no explicit ethical rule or preference is intended, the programming of an HAV may establish an implicit or inherent decision rule with significant ethical consequences," NHTSA wrote in the report, adding that manufacturers must work with regulators to address these situations.


Rahwan said programming for specific outcomes isn't the right approach, but thinks companies should be doing more to let the public know that they are considering the ethics of driverless vehicles.

"In the long run, I think something has to be done. There has to be some sort of guideline that's a bit more specific, that's the only way to obtain the trust of the public," he said.
