Kathryn Stagg

The Self-Driving Car & The Trolley Problem


Red and yellow trolley driving on tracks. Photo by Sebastian on Unsplash.

Driving often involves difficult decisions made under duress and with little time. Those decisions aren’t always as simple as choosing between right and wrong, or safe and dangerous. When we are forced to reckon with the behavior of the road users around us, driving can get messy: imagine having to choose between steering sharply into another lane of traffic or risking being hit by a driver drifting into ours.


These questions are fodder for ethicists, who have long weighed such situations and asked which of two unenviable decisions is the right one. A classic example is the Trolley Problem – a hypothetical in which you are driving a trolley that won’t stop, and you must choose between staying on your current track and running over many people or switching tracks and running over only a few.


Questions like these are fundamentally about human choice and the ethics of the choices we make. But with the advent of autonomous cars, ethical dilemmas such as the Trolley Problem could become something that AI technology has to work out. Can a car make a choice in a scenario where either outcome is unfortunate? And do we feel comfortable handing that decision-making capability over to AI?


In an article for The Washington Post, Dalvin Brown delves into how the autonomous vehicle industry is tackling the problem of AI and ethics. The short answer: it hopes to avoid the question altogether.


“How do you teach a car to make complex, life or death decisions in seemingly lose-lose scenarios on the road?... Artificial intelligence is good at a lot, such as knowing that an object of a specific size is on the road ahead… AI might not be so helpful at solving ethical dilemmas that humans have yet to reach a consensus about…”

Autonomous vehicle development is still a long way off from even beginning to tackle questions of how AI technology would or should react in a situation where some form of collision or danger is inevitable and merely a matter of degree. And even if that point is reached, it’s a stretch to imagine consumers being comfortable with vehicles making those sorts of decisions.


The hope is that the need will never arise. From the beginning, autonomous vehicles have carried with them a sort of utopian ideal. Experts hope that autonomous vehicles will provide the answer to collisions by getting rid of them altogether. We are frequently promised that the mass adoption of autonomous vehicles will make the roads safer for everyone.


“…many…are approaching the issue from a different perspective: Why not stop cars from getting in life-or-death situations in the first place? It’s an idealistic view of autonomous driving. Still, it’s a starting place. After all, the whole point of automated cars is to create road conditions where vehicles are more aware than humans are, and thus better at predicting and preventing accidents. That might avoid some of the rare occurrences where human life hangs in the balance of a split-second decision.”

For that to become reality, though, the playing field would have to be entirely autonomous. Human error, unfortunately, is a factor in over 90% of collisions. It is autonomous vehicles – with their multiple cameras and sensors, and their inability to get distracted or be swayed by emotion – that have the ability to make all the right decisions. If every vehicle on a given roadway were autonomous, it’s perhaps not unrealistic to hope that such life-and-death decisions would never need to be made. But if we’re still a ways out from autonomous vehicles becoming the norm, we’re further still from seeing them completely replace today’s driving landscape: a mixture of vehicle types with varying levels of technology.


To read more about this topic, check out the original article in The Washington Post.


