May 30, 2018

How the “Perception Module” of an Autonomous Vehicle Killed a Pedestrian

Last week, the National Transportation Safety Board released its preliminary crash report on the pedestrian fatality caused by an autonomous vehicle (AV) in Tempe, Arizona, this past March.

Trust The Economist to wade right into the muddy waters. Since the report has not received much coverage in the rest of the media, we’ll join the fray.

The NTSB confirmed what had previously been reported — the AV’s emergency braking system had been disabled. But why?

There are three computer systems that run the autonomous vehicle.

The first is the “perception” system that identifies objects that are nearby. The second is the “prediction” module which games through how those identified objects might behave relative to the autonomous vehicle.

The last module acts on the predictions of object movement supplied by the second module. Also called the “driving policy”, this third computer system controls the speed of the car and turns the vehicle as required.
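For readers who want to see how the pieces fit together, here is a minimal, purely illustrative Python sketch of that three-module pipeline. The class names, the toy “within two metres of our lane” rule and the numbers are our own inventions for demonstration; they are not anyone’s actual AV code.

```python
# Illustrative only: the three functions mirror the perception,
# prediction and driving-policy modules described above, but the
# logic and numbers are hypothetical.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass(frozen=True)
class DetectedObject:
    label: str                       # e.g. "pedestrian", "bicycle", "unknown"
    position_m: Tuple[float, float]  # (distance ahead, lateral offset) in metres
    confidence: float                # how sure the perception module is (0 to 1)


def perception(sensor_frame: dict) -> List[DetectedObject]:
    """Module 1: identify nearby objects (here we simply wrap toy data)."""
    return [DetectedObject(**obj) for obj in sensor_frame["objects"]]


def prediction(objects: List[DetectedObject]) -> Dict[DetectedObject, bool]:
    """Module 2: game out whether each object's likely path crosses ours.
    Toy rule: anything within 2 m of our lane centre is a conflict."""
    return {obj: abs(obj.position_m[1]) < 2.0 for obj in objects}


def driving_policy(conflicts: Dict[DetectedObject, bool]) -> dict:
    """Module 3, the 'driving policy': turn predictions into speed and steering."""
    if any(conflicts.values()):
        return {"throttle": 0.0, "brake": 0.8, "steer": 0.0}
    return {"throttle": 0.3, "brake": 0.0, "steer": 0.0}


if __name__ == "__main__":
    frame = {"objects": [
        {"label": "bicycle", "position_m": (25.0, 1.5), "confidence": 0.6},
    ]}
    print(driving_policy(prediction(perception(frame))))
    # -> brakes, because the predicted path crosses the vehicle's own
```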

It’s no surprise that the perception module is the most challenging to program, but it is also the one required to ensure that all users can share the road safely. Sebastian Thrun of Stanford University recalls that in the Google AV project’s infancy, “our perception module could not distinguish a plastic bag from a flying child.”

And that may be what happened to the pedestrian killed while walking her bicycle across the street in Arizona. Although her movement was detected by the perception module a full six seconds before the fatal crash, it “classified her as an unknown object, then as a vehicle and finally as a bicycle, whose path it could not predict.”

And here is the sad — and scary — part: “Just 1.3 seconds before impact, the self-driving system realised that emergency braking was needed. But the car’s built-in emergency braking system had been disabled, to prevent conflict with the self-driving system; instead a human safety operator in the vehicle is expected to brake when needed.”

“But the safety operator, who had been looking down at the self-driving system’s display screen, failed to brake in time. Ms Herzberg was hit by the vehicle and subsequently died of her injuries.”

Because random braking can cause problems such as being rear-ended by other drivers, the system on AVs does not slow down every time its perception module gets confused — that’s why there are human safety drivers, to “troubleshoot” when the car can’t make the right choice.
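To make that trade-off concrete, here is a hypothetical sketch of the decision logic. The confidence threshold and the hand-off to the safety operator are assumptions made for the sake of illustration, not the actual system’s code.

```python
# Hypothetical illustration of the trade-off described above: suppress
# automatic braking on low-confidence ("confused") detections to avoid
# false alarms, and rely on the human safety operator instead.
CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off: below this, a detection counts as "confused"


def decide_braking(detection_confidence: float,
                   collision_predicted: bool,
                   emergency_braking_enabled: bool) -> str:
    if collision_predicted and detection_confidence >= CONFIDENCE_THRESHOLD:
        if emergency_braking_enabled:
            return "apply emergency brake"
        # Per the NTSB report, the Tempe car was left with only this path:
        return "rely on the human safety operator to brake"
    # Confused, or no collision predicted: keep driving rather than risk
    # random hard braking and the rear-end crashes it can cause.
    return "continue at current speed"


# A confident but late detection, with emergency braking disabled,
# leaves everything to the human in the driver's seat.
print(decide_braking(0.95, True, emergency_braking_enabled=False))
```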

The problem is that humans are fallible and do not pay attention all the time. AVs should be safer than today’s vehicles, in which human error causes 94 per cent of “accidents” (really crashes), but it will be the fine-tuning of the prediction module that builds consumer confidence in their ability to keep other road users safe too.

As Amnon Shashua, senior vice-president of Intel Corporation, states: “Society expects autonomous vehicles to be held to a higher standard than human drivers.”

That means zero road deaths and zero deaths of vulnerable road users. This crash needs to be carefully examined to ensure it never happens again.

Photo: TheInternetofBusiness


Comments


  1. This was the fault of two people: the driver of the auto, and the person or persons who designed and implemented the software and hardware.

    The black box full of chips (aka “Perception Module”) is blameless.

    We really need to stop assuming that computers are magical things with minds of their own. HAL was fiction, and still is. Computers do exactly what programmers and their bosses tell them to do – no more, no less.

    When you blame the hardware you’re allowing the creators to escape responsibility.

    1. “Computers do exactly what programmers and their bosses tell them to do – no more, no less.”

      This understates the case. The reality is more frightening.

      Computers do what they are told: but that is often very different from what we intend to tell them or what we think we have told them. A bug is an error that occurs because we told the computer to do one thing when we wanted it to do something else. The complexity of most non-trivial software is simply beyond human comprehension.

      Machine Learning (AI) muddies the waters. With ML, we no longer tell the computer what to do. Instead, we teach it to learn by throwing a lot of pre-categorized data at it and letting it figure out some way to distinguish one category from another. The parameters that represent “learning” are opaque to us: the only way we have of knowing whether it has learned is empirically, by testing it out. This is a picture of a cat, we say, this is a dog, this is another cat – and repeat 100,000 times. Then we show it 100,000 pictures and ask it to categorize them itself. If it gets them right, we say hey, it’s artificially intelligent! Now we can trust it to know a cat from a dog! [A toy sketch of this train-and-test loop appears in the editor’s note after the comments.]

      But can we? It’s an exercise in statistics. If your empirical tests aren’t diverse or representative enough, things like this happen:

      https://www.theverge.com/2017/11/2/16597276/google-ai-image-attacks-adversarial-turtle-rifle-3d-printed

      Worse, every corporation (maybe every product and model) has different algorithms. Every algorithm learns differently. Even two identical implementations will diverge if they continue to learn: at which point we can’t rely on them to behave the same way, or to respond the same way to tests.

      Trusting a computer to drive is like trusting a dog not to bite you. We know empirically that most dogs don’t bite… but we can’t see inside the dog’s mind. On the plus side, dogs weren’t made yesterday. We have tens of thousands of years of experience with them; our brains are programmed to look for indications of state of mind, from the movement of the body to the look in the eyes. The computer was born yesterday, is unlike its siblings, and is almost completely inscrutable.

      https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

      As a long-time software developer, I am scared by the incredible faith (there is no other word for it) that so many people place in computers.

  2. The prospect of robots or AI machines taking over complex human tasks, like driving, is vastly exaggerated. A human driver is not a grocery checkout or fast-food order-taking clerk with a far smaller range of tasks.

    1. Human drivers are very poor at operating a motor vehicle. Note that the human emergency driver in this case failed to act. We put a lot of effort into making air traffic safe but death by car is considered normal. Collision detection and avoidance is the first step and this should be mandated into all new cars ASAP. I am sure that autonomous vehicles will become much safer than human drivers if they are not already.

  3. Look at that criss-cross design in the median. To my eyes that looks like walkways that one is meant to walk on and then cross.

    Editor’s Note: The “criss-cross” design is meant to be “ornamental” and not for walking, according to Arizona highway authorities.

  4. So if a ‘human safety operator’ is needed to oversee the perception module, the prediction module and the driving policy, then why do we call these vehicles autonomous? Wouldn’t it be better to call this kind of driving ‘distracted driving’?
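Editor’s Note: for readers curious about the “show it labelled data, then test it” loop described in the long comment above, here is a toy Python sketch using scikit-learn and made-up two-number “features” instead of real images. It is purely illustrative, but it shows that all we ever get back from the process is an empirical accuracy score and a set of opaque learned parameters.

```python
# Toy sketch of supervised learning: train on labelled examples, then
# judge "learning" only by held-out accuracy. Features and labels are
# invented; real image classifiers work on pixels with deep networks.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend features, e.g. (ear pointiness, snout length). Cats = 0, dogs = 1.
cats = rng.normal(loc=[2.0, 1.0], scale=0.5, size=(1000, 2))
dogs = rng.normal(loc=[1.0, 2.0], scale=0.5, size=(1000, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 1000 + [1] * 1000)

# "This is a cat, this is a dog" ... repeated a couple of thousand times.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The only evidence of "learning" is empirical: accuracy on unseen examples.
print("held-out accuracy:", model.score(X_test, y_test))

# The "learning" itself is just numbers, with no explanation attached.
print("learned parameters:", model.coef_, model.intercept_)
```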
