Robot cars take moral philosophy on a new ride.

The Google driverless car might not be real enough for you to buy one, but it is certainly real enough to feature in some noteworthy thought experiments. The really interesting part of automated driving is how you design for automated decision making in the event of an impending crash. Take the following example:
A robot car is driving at the speed limit on a winding road. There is a cliff on the right side, and a car heading in the opposite direction in the other lane. A group of pedestrians tries to cross the road in front of the robo-car. A crash is imminent and the robot car does not have enough time to stop; it must instead choose from three options:

  • Stay the course with maximum braking applied and hit the pedestrians
  • Steer into oncoming traffic
  • Drive off a cliff and sacrifice the passengers of the autonomous car

This might sound like a ridiculous situation, but in fact, moral philosophers love it. Finally there is a real practical application for the age-old “trolley problem”. In the traditional version there are only two options: stay the course and kill five, or divert and kill one. The answers generally fall into two camps: consequentialism and principlism. For consequentialists, the end result matters most, so you should always choose to kill the fewest people. According to principlists, you should never act to deliberately kill, such as by choosing to divert the trolley. Inaction is permissible, so you should stay the course and kill five. The question becomes muddled, however, when you take the additional step of preprogramming the decision.
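The split between the two camps can be sketched in a few lines of Python (my own illustration, not from the post; the option dictionaries and field names are invented for the example):

```python
# Toy model of the classic trolley problem: stay the course (kill 5)
# or deliberately divert (kill 1).

def consequentialist_choice(options):
    """Pick the action with the fewest deaths, regardless of agency."""
    return min(options, key=lambda o: o["deaths"])

def principlist_choice(options):
    """Never act to deliberately kill; inaction is permissible."""
    permissible = [o for o in options if not o["deliberate_kill"]]
    return permissible[0]

trolley = [
    {"action": "stay the course", "deaths": 5, "deliberate_kill": False},
    {"action": "divert", "deaths": 1, "deliberate_kill": True},
]

print(consequentialist_choice(trolley)["action"])  # divert
print(principlist_choice(trolley)["action"])       # stay the course
```

The two functions disagree on the same input, which is exactly why preprogramming the decision is so fraught: someone has to commit to one function before the crash.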
Jumping to the back of the book, here is my proposed set of rules for the autobot to follow. This will certainly not be the final word, but programmers, lawyers, government officials and passengers will have to figure out something. (I like to take inspiration from Asimov’s Three Laws, realizing that the entire scenario above plays out when the first law must be violated.)

  1. A robo-car cannot deliberately sacrifice its passengers in the interest of another vehicle (would you want to enter the martyr mobile?).
  2. Robo-cars are only responsible for the best possible human response in the same situation (mash the brake pedal and scream).
  3. Robo-cars are responsible for default roadway heading, and may deviate only when a safe alternative lane option exists.

Certainly these rules aren’t as elegantly self-referential as Asimov’s, but we’re just getting the conversation started. We haven’t even touched the question of when a human may override the machine controls.
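The three rules above can be read as a decision procedure, sketched here as a minimal Python filter over candidate maneuvers (my own reading of the rules, with invented option fields, not anything from an actual autonomous-driving stack):

```python
# Apply the three proposed rules, in order, to a list of candidate maneuvers.

def choose_maneuver(options, default="stay in lane"):
    # Rule 1: never deliberately sacrifice the passengers.
    options = [o for o in options if not o.get("sacrifices_passengers")]
    # Rule 3: hold the default roadway heading unless a safe
    # alternative lane option exists.
    safe_swerves = [o for o in options
                    if o["action"] != default and o.get("safe_lane")]
    if safe_swerves:
        return safe_swerves[0]["action"]
    # Rule 2: otherwise, do the best a human could -- brake hard in lane.
    return default + " with maximum braking"

# The cliff scenario from the top of the post:
scenario = [
    {"action": "stay in lane"},
    {"action": "swerve into oncoming lane", "safe_lane": False},
    {"action": "drive off cliff", "sacrifices_passengers": True},
]
print(choose_maneuver(scenario))  # stay in lane with maximum braking
```

Under these rules the car in the opening scenario brakes and stays the course, hitting the pedestrians: Rule 1 removes the cliff, Rule 3 rejects the unsafe oncoming lane, and Rule 2 is all that remains.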


About livingthememe

engineer and armchair philosopher