We always knew we’d see self-driving cars on the road one day, perhaps by 2030 or so, but over the past year things have ramped up significantly. A sort of ‘arms race’ is emerging between vehicle manufacturers such as Tesla, BMW and Mercedes-Benz, each with their own suite of self-driving tech that is gradually taking the task of driving away from the person behind the wheel. Right now we have cars that can drive themselves in stop-start traffic, judge their speed from radar readings reflected off the cars around them and stick to their own lane using feedback from cameras around the vehicle. Better still, when you reach your destination they can park themselves, too. Soon all you’ll have to do is turn the thing off, step out and walk away.
Technologies like these will help make our driving safer, cleaner and more fuel efficient. But before the technology can progress further and hopefully result in cars that handle all aspects of driving with zero help or input required from the driver, vehicle manufacturers must first confront a seemingly impossible ethical dilemma of algorithmic morality. If you’re not sure what that means, think of it as what the car should be programmed to do in the event of an unavoidable accident.
Picture this hypothetical situation. Your new self-driving car is carrying you along a regular suburban road with an 80 km/h (50 mph) speed limit when an oncoming truck suddenly swerves into your lane at the last second. The self-driving system in your vehicle is faced with a dilemma and, in just a few milliseconds, must decide what it is going to do. The vehicle is about to be involved in an accident that is most likely going to be fatal for you, and the only alternative is to swerve off the road to avoid the collision. But it has already detected two pedestrians walking there, so what should it do?
Should it minimize the loss of life and simply apply the brakes, sending you straight under the front of the truck and sacrificing your life to save others, or should it protect your life at all costs and swerve anyway, potentially killing the two pedestrians? Either way, you’d be sitting there as a passenger, probably browsing the internet on your smartphone, with nothing you could do about it. As humans we’re wired to look after ourselves, but self-driving cars probably won’t think the same way. Who would buy a car that has been programmed to potentially sacrifice you, the owner?
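To make the idea of ‘algorithmic morality’ concrete, here is a deliberately simplified sketch of the two competing policies described above. Everything in it is hypothetical, invented purely for illustration; real autonomous-vehicle software is vastly more complex and does not expose decisions this crudely.

```python
# Toy illustration of two crash-response policies. All function names
# and choices here are hypothetical stand-ins, not real vehicle code.

def choose_action(occupants: int, pedestrians: int, policy: str) -> str:
    """Pick between staying in lane (risking the occupants) and
    swerving (risking the pedestrians), under a given policy."""
    if policy == "utilitarian":
        # Minimize the expected death toll, whoever that favours.
        return "stay" if occupants <= pedestrians else "swerve"
    elif policy == "self-protective":
        # Always protect the occupants, regardless of the toll outside.
        return "swerve"
    raise ValueError(f"unknown policy: {policy}")

# The scenario from the text: one occupant, two pedestrians.
print(choose_action(1, 2, "utilitarian"))      # stay: sacrifice the occupant
print(choose_action(1, 2, "self-protective"))  # swerve: endanger the pedestrians
```

The point of the sketch is that someone has to choose the policy in advance; the dilemma is not which branch the code takes, but who decides what the branches should be.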
To get answers, a researcher named Jean-Francois Bonnefon at the Toulouse School of Economics in France got together with a couple of colleagues and made an effort to find out. Bonnefon says that while there is no right or wrong answer to the question, public opinion will play a strong role in how future self-driving vehicles react. So the team posed a series of ethical dilemmas to a large number of people and collated the responses, on the reasoning that the public is much more likely to accept a vehicle whose behaviour aligns with their own views.
The participants were given scenarios similar to the one above, in which one or more pedestrians could be saved if the car were to swerve into a barrier instead, killing its occupant. At the same time, the researchers varied some of the details, such as the actual number of pedestrians that could be saved, whether the driver or an on-board computer made the decision to swerve, and whether the participants were asked to imagine themselves as the occupant or as an anonymous person.
In general, the participants were comfortable with the idea that self-driving vehicles should be programmed to minimize the death toll. That, after all, is surely the whole point of designing self-driving vehicles in the first place – to engineer human error out of the equation. But the participants were willing to go only so far. “Participants were not as confident that autonomous vehicles would be programmed that way in reality—and for a good reason: they actually wished others to cruise in utilitarian autonomous vehicles, more than they wanted to buy utilitarian autonomous vehicles themselves,” Bonnefon and his team concluded.
So essentially, people are generally in favour of cars that sacrifice the occupant to save other lives, as long as they don’t have to drive one themselves. Bonnefon and co. say these issues raise many important questions: “Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car, than for the rider of the motorcycle? Should different decisions be made when children are on board, since they both have a longer time ahead of them than adults, and had less of a choice to be in the car in the first place? If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?”
These problems cannot be ignored, say the team: “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”