June 26, 2016

A Machine's Response to Risk

“Whom should I kill?” That is the fundamentally thorny question researchers are confronting as they program autonomous vehicles, as discussed in this article in the New York Times.

More specifically, the question of whom to sacrifice in a high-speed avoidance maneuver – the person in the car or the pedestrian in the crosswalk – is one researchers are asking in the hope of framing how the car understands and responds to situations of mortal risk.

For the majority of respondents to a recent poll of autonomous vehicle passengers, the answer was clear: ‘hit the pedestrians’. This is not surprising, but it opens a whole raft of moral questions that are not purely theoretical. In a world of autonomous vehicles, this situation will arise, just as it does now with humans behind the wheel.

Making a split-second decision about how to avoid injury to oneself and others is a terrible choice for a person to have to make. The first instance of an autonomous vehicle choosing to hit ‘person X’ in order to avoid killing ‘person(s) Y’ will be even more contentious, with lingering societal anxiety over agency and moral priority. It will be doubly fraught because the car will have made a pre-programmed decision, with the priority of lives in a given situation already part of its parameters.

Stranger still, an autonomous car’s ‘prioritization parameters’ could become one of its advertised features. Perhaps cars will come with a standard set of avoidance programming, but for a bit more, you can get one that will put its passengers’ lives first.

It’s all very weird and unsettling, but I’m a ‘glass half full’ kind of guy. Just think how busy it will keep the lawyers.


Comments


  1. My eighty-year-old mother was the driver in an accident which resulted in the death of her passenger, and my son works with the “machine learning” group at Oxford which developed some of the key mathematics used in autonomous vehicles. So this is not just a theoretical moral question for me. Here’s how the stories of my mother and my son intertwine:
    My son tells me that one of the misunderstandings about autonomous vehicles is that they are pre-programmed to do specific things in specific situations (eg, when the car is turning left in the snow at a busy intersection, the decision is “x”). This is the old way autonomous vehicles were conceptualized, and it led to enormously complex rules that didn’t cover all the possible cases and that scared everyone, particularly the autonomous vehicle people, because they, like you and me, don’t want to be responsible for cars killing people.
    Then a fundamental shift occurred: instead of specific (ie pre-programmed) rules to cover every situation, researchers found that with more data and more processing power, computers could use simpler rules to access hundreds of thousands of similar situations stored in memory and choose the option with the best outcome, something closer to how people think.
    But it doesn’t stop there. In the old paradigm, the car’s computer would run the algorithm according to the rules and out would pop the decision, which the computer would follow. In the new paradigm, with so much more processing power and data, the computer is able to look at the outcome of its decision and make a new decision, look at the outcome of that decision and make a new decision, and so on, all in microseconds. Try and then adjust: something closer to the “fuzzy logic” that people use, rather than a pre-programmed decision tree. (A rough sketch of this kind of loop appears after the comments below.)
    That brings us to the case in this article: how would a car make a moral choice between killing a pedestrian and killing a passenger? Well, people don’t think that way, so why should computers? When people are faced with a situation where they are driving around a corner and suddenly have to choose between going head on into a truck or swerving left and killing a pedestrian, they don’t think “should the pedestrian die or should I and the truck driver die?”. People usually try to save everyone, first by saving themselves by swerving out of the way of the truck and then by trying to avoid the pedestrian, or if they can’t avoid them, at least hoping that the pedestrian will not be too badly hurt, or if they’re hurt, at least they won’t die, and so on. In other words, it’s a sequence of decisions or reactions that involve trying to ensure everyone survives.
    Onboard computers are heading in the same direction: look at a situation, make the best decision for everyone to survive, look at the situation again, make the best decision for everyone to survive, and so on. But two things will be different: first, the computer will have much more data to draw on, and second, the computer will adjust much more quickly to the situation as it evolves, making it much more likely that everyone survives.
    This is not just techno-optimism on my side; take the case of my mother. My elderly mother was involved in a rear-end crash on a highway. She was a careful driver, taking another elderly friend to a doctor’s appointment. But the car in front of her suddenly stopped just at the moment she was looking down at her speedometer. When she looked up she didn’t have time to fully brake or swerve and she ended up hitting the car in front. It wasn’t a high-speed crash but her passenger was injured and died later in hospital.
    An autonomous car, on the other hand, would not have looked away from the road to check the speedometer because it already knows the speed, the distance to the car in front, the location of the cars around it, the distance to the edge of the pavement and so on (and Google and others have demonstrated this over millions of driving miles). So when traffic ahead suddenly stopped, the car would have recalculated the distance, noticed that it was closing in on a vehicle, calculated braking and avoidance distances, chosen the option with the highest chance of success, implemented the choice, and a few milliseconds later done it all again and readjusted.
    Given the choice, I’m sure my mother would have wanted to be in the autonomous vehicle because the chances of everyone surviving were much better. And that’s my point: if you look carefully at the kinds of accidents that kill people, they aren’t usually out at the moral extreme: “should I hit that pedestrian in the street or swerve over that cliff at my own peril?”. They are usually the mundane ones where people aren’t paying attention, or where the driver is 80 and doesn’t react quickly, or where the driver reacts suddenly and oversteers to avoid a collision, causing the car to lose control, or where the driver is drunk. I don’t know about you, but in all of those situations I’d choose the autonomous vehicle.

    1. Oh, you’re correct about the numbers. There will be far fewer deaths and serious crippling injuries with autonomous vehicles. This number will not be zero, however. There will be instances of life and death decisions being made, even if the majority of these are in response to unpredictable human behaviour.
      If you are right about these cars using the same fuzzy logic mapping as human drivers do in responsive situations, then that’s not comforting. People’s reactions are often terrible, and certainly not rational (ie, save as many people as possible) – consciously or otherwise. I have even less faith in technology than in humanity. The last thing machines need to be doing is taking after us.

      1. Fuzzy logic is something computers can do in a much more organized way than people, but the principle is similar, and that’s the point. It’s more effective to make decisions and revise them than to make a single decision and stick with it even if the environment changes. The latter, in fact, is what people tend to do under extreme stress: commit to one decision and double down, like finding themselves in an oversteer situation and steering further in the original direction. When they’re not under stress, their fuzzy logic works better. Computers tend not to freak out the same way. And yes, there will be life and death decisions to be made, as there always have been, but the Google cars have caused only one accident after a massive number of miles, and that accident was at about 3 km/h. Considering that it’s an extremely new technology, that’s startling.

  2. In all of these autonomous vehicle (AV) scenarios the AV is viewed as a kind of guided missile with a sophisticated crash avoidance system. But what about defensive driving actions? How does that work? What if you are in the AV and about to be T-boned by a clunker? Or rear-ended by a clunker? What does the programming have to say about this type of scenario?

      1. No answers there to the T-bone and rear-end scenarios. The programmer has no code for evasive actions from incoming missiles at these locations.

  3. And then, suddenly, there’s this: the first fatality in a self-driving car, when neither the car nor the driver saw the white side of a tractor-trailer rig turning left across their path:
    http://www.nytimes.com/2016/07/01/business/self-driving-tesla-fatal-crash-investigation.html?_r=0
    And this: fighter pilots lose to Artificial Intelligence computers in repeated tests of evasive actions in simulated aerial dogfights:
    https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=aerial%20dogfight%20ai
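
The loop described in the first comment – read the situation, pick the maneuver most likely to keep everyone safe, act, then re-evaluate a few milliseconds later – can be sketched roughly as below. This is only an illustrative sketch, not any manufacturer’s actual planner: the sensor fields, the candidate maneuvers, the scoring numbers, and the chance_everyone_survives function are all invented for the example.

```python
import time

# Hypothetical sensor snapshot: what the car "sees" at one instant.
# In a real vehicle this would come from lidar, radar, and cameras.
def read_sensors():
    return {
        "own_speed_mps": 27.0,        # roughly 100 km/h
        "gap_to_lead_car_m": 18.0,    # distance to the vehicle ahead
        "lead_car_speed_mps": 0.0,    # traffic ahead has suddenly stopped
        "clear_lane_left": False,
        "clear_shoulder_right": True,
    }

# Candidate maneuvers the planner can choose from on each cycle.
CANDIDATES = ["brake_hard", "swerve_left", "swerve_right", "maintain"]

def chance_everyone_survives(state, maneuver):
    """Toy scoring function: a rough estimate of how likely a maneuver is
    to avoid harming anyone in the current state. Real systems use learned
    models and physics; these numbers are placeholders."""
    closing_speed = state["own_speed_mps"] - state["lead_car_speed_mps"]
    stopping_margin = state["gap_to_lead_car_m"] - closing_speed * 0.5
    if maneuver == "brake_hard":
        return 0.9 if stopping_margin > 0 else 0.4
    if maneuver == "swerve_left":
        return 0.8 if state["clear_lane_left"] else 0.1
    if maneuver == "swerve_right":
        return 0.8 if state["clear_shoulder_right"] else 0.1
    return 0.05  # "maintain" with stopped traffic ahead is the worst option

def control_loop(cycles=5, period_s=0.005):
    """Decide, act, then re-decide a few milliseconds later, rather than
    committing once to a single pre-programmed rule."""
    for _ in range(cycles):
        state = read_sensors()
        best = max(CANDIDATES, key=lambda m: chance_everyone_survives(state, m))
        print(f"chosen maneuver: {best}")
        # actuate(best)  # steering and braking commands would be issued here
        time.sleep(period_s)  # then the situation is re-evaluated

if __name__ == "__main__":
    control_loop()
```

In a real system the scoring step would be a learned or physics-based model evaluated over many candidate trajectories, but the overall shape – decide, act, re-check, adjust – is the same one the commenter describes.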
