Avoiding a false positive just became a huge negative

I was encouraged by the Tempe, AZ police department’s quick release of the dashcam video from the recent Uber fatality. But there are two more layers to this accident investigation, and – as bad as the dashcam video looks at first glance – my prediction is that the NTSB’s conclusions will be far more damning. And the ultimate cause of the accident should be especially concerning to bicyclists and motorcyclists.

The video already contradicts earlier statements by the police (and others) that this was the kind of accident that would’ve happened with any normal human driver at the wheel. Even the low-res visible-spectrum video that’s been released picks up the pedestrian a couple of seconds out. No attentive human driver would have hit her.

The video also provides a stomach-churning illustration of the poor record humans have when it comes to monitoring systems that usually work – the ‘handover’ problem. It’s a powerful argument for Augmented, not Automated, Driving.

With apologies to the New York Times, this (ahem) 'borrowed' graphic shows that the pedestrian/bicyclist crossed at least three lanes of clear, open roadway prior to impact. It's inconceivable to me that the Uber vehicle's lidar system didn't detect her. The implication is that the car's control algorithm decided she was a 'false positive' signal, and chose to ignore her.

And, if my engineer-heavy Twitter feed is any indication, the Uber vehicle's lidar and radar sensors certainly should have detected the crossing pedestrian (with bicycle) even in total darkness. This was, in fact, precisely the kind of scenario in which AV companies claim superiority over human drivers.

If you think this could not be worse for Uber, and the AV business in general, just wait. It’s going to turn out the car did detect the crossing pedestrian, and it ‘chose’ not to slow down.

What I mean is this: AVs gather information from a suite of sensors, but that’s just data. The real trick is parsing and processing it to make the myriad driving decisions that human drivers make all the time.

I’ve had many conversations with engineers and programmers working in this field, and because of my special interest in motorcycles (and bicycles) they’ve often told me, “Most of the work’s done to ensure we can identify and avoid pedestrians, at one end of the size scale, and cars and trucks at the other. A lot of the time, we sort of assume that because bicycles and motorcycles are in between those extremes, we’ll pick them up, too.”

The challenge comes when a car’s algorithms start sorting through signals to avoid false positives – situations where its sensors detect something like leaves swirling in the wind, or heavy spray off a truck’s tires in the rain, and the car applies the brakes when it shouldn’t. AV makers know from their own research that such false positives are not only dangerous for following vehicles, they’re frustrating for the people in the AV.

Avoiding false positives is one of the biggest AI/algorithm challenges when making an AV.

You heard it here first: The Uber vehicle involved in this accident did detect something in the road – something that turned out to be a person, pushing a bicycle, with a profile further camouflaged by several large plastic bags on the handlebar. But the relevant sensor images didn’t match anything in the car’s ‘experience’ (teleologically speaking), so the car, programmed to avoid false positives, ‘chose’ not to slow down.
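To make the mechanism concrete, here’s a minimal sketch of confidence-gated braking. This is emphatically not Uber’s actual code, which has never been made public; every label, name, and threshold below is an assumption for illustration only.

```python
# Hypothetical sketch of confidence-gated braking. This is NOT Uber's
# code; labels, threshold, and structure are assumed for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # classifier's best guess: "pedestrian", "vehicle", "unknown"
    confidence: float  # classifier confidence, 0.0 to 1.0
    range_m: float     # distance reported by lidar/radar, in meters

CONFIDENCE_THRESHOLD = 0.7  # assumed value, tuned to prevent phantom braking

def should_brake(detections):
    """Brake only for detections the classifier is confident about."""
    return any(d.confidence >= CONFIDENCE_THRESHOLD for d in detections)

# A person pushing a bag-draped bicycle at night may not resemble
# anything in the model's training 'experience', so it scores low:
crossing = Detection(label="unknown", confidence=0.35, range_m=40.0)
print(should_brake([crossing]))  # False -- treated as noise; the car doesn't slow
```

The trouble is that the same gate that filters out swirling leaves and tire spray silently filters out anything else the classifier can’t confidently name.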

If I’m right, this is the most damning possible cause for an accident. We’d excuse, or at least sympathize with, a human driver who said, “It was dark; it was not a place I expected someone to cross; I just didn’t see her until it was too late.”

But we’d throw the book at a human driver who said, “Sure I saw her, but I just don’t stop for people pushing bicycles with those fucking garbage bags on the handlebar.”

And, if I’m right, this accident really emphasizes the subtleties of (even crappy) human driving, and the remaining challenges of building an AV that will be meaningfully better at driving. Human drivers slow down, avoid, and stop for all kinds of things before they figure out exactly what they are.
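Sketched in the same hypothetical terms, that human policy looks something like this – react to an object’s physical presence first, and let classification catch up. Again, all names and thresholds are invented for illustration.

```python
# Hypothetical "slow down first, classify later" policy, mirroring
# human behavior. Names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float
    range_m: float
    in_path: bool      # predicted track intersects our lane

def defensive_response(d):
    """Respond to physical presence first; identity can wait."""
    if not d.in_path or d.range_m > 60.0:
        return "continue"
    if d.confidence < 0.7:
        # Can't name it, but it's solid and in our path: hedge by
        # shedding speed while the classifier keeps working.
        return "slow_and_track"
    return "brake"

print(defensive_response(Detection("unknown", 0.35, 40.0, True)))  # slow_and_track
```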

It's not a laughing matter, but this Twitter user pretty much nails the problem.

This is a powerful argument for Augmented, not Automated, Driving... If the Uber ‘safety driver’ had been paying attention and seen the car’s lidar imagery on a head-up display in the windshield, the incident would not even have resulted in a close call. It’s easy to imagine a system that would have alerted the driver to something in the road ahead, at least two or three seconds away, at which point any average human driver would have had an excellent chance of avoiding a collision on a wide and unobstructed roadway under good traction conditions – especially in a vehicle (like the one modified by Uber) that includes excellent ABS.
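The arithmetic behind such an alert is trivial. The figures below are illustrative assumptions in the neighborhood of the Tempe crash (roughly 40 mph), not measured values.

```python
# Sketch of a time-to-collision (TTC) check that could drive a
# head-up display alert. All figures are illustrative assumptions.

def time_to_collision(range_m, closing_speed_mps):
    """Seconds until impact at the current closing speed."""
    if closing_speed_mps <= 0:
        return float("inf")  # not closing on the object: no alert
    return range_m / closing_speed_mps

ttc = time_to_collision(range_m=50.0, closing_speed_mps=17.5)  # ~40 mph
if ttc < 3.0:
    print(f"ALERT: object in path, {ttc:.1f} s to impact")  # cue the HUD
```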

Thanks to the R&D done in pursuit of AVs, we already have the technology to dramatically augment human drivers’ capabilities and safety; we’re just not applying it. That massive, global R&D effort will – eventually – safely and conveniently replace many human drivers and lead to improved mobility for non-drivers. But until then, this accident emphasizes the risks of rushing to take human drivers out of the equation before AVs are smart enough to make all our decisions for us.