Uber’s Self-Driving Car Saw the Woman It Killed, Report Says

The federal investigators examining Uber’s fatal self-driving crash in March released a preliminary report this morning. It lays out the facts of the collision that killed a pedestrian in Tempe, Arizona, and explains what the car actually saw that night.

The National Transportation Safety Board won’t determine the cause of the crash or issue safety recommendations to stop others from happening until it releases its final report, but this first look makes two things clear: Engineering a car that drives itself is very hard. And any self-driving car developer that is currently relying on a human operator to monitor its testing systems—to keep everyone on the road safe—should be extraordinarily careful about the design of that system.

The report says that the Uber car, a modified Volvo XC90 SUV, had been in autonomous mode for 19 minutes and was driving at about 40 mph when it hit 49-year-old Elaine Herzberg as she was walking her bike across the street. The car’s radar and lidar sensors detected Herzberg about six seconds before the crash, first identifying her as an unknown object, then as a vehicle, and then as a bicycle, each time adjusting its expectations for her path of travel.

About a second before impact, the report says, “the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision.” Uber, however, does not allow its system to make emergency braking maneuvers on its own. Rather than risk “erratic vehicle behavior”—like slamming on the brakes or swerving to avoid a plastic bag—Uber relies on its human operator to watch the road and take control when trouble arises.

Furthermore, Uber had turned off the Volvo’s built-in automatic emergency braking system to avoid clashes with its own tech. This is standard practice, experts say. “The vehicle needs one master,” says Raj Rajkumar, an electrical engineer who studies autonomous systems at Carnegie Mellon University. “Having two masters could end up triggering conflicting commands.” But that works a lot better when the master of the moment works the way it’s meant to.

1.3 seconds before hitting Elaine Herzberg, Uber’s car decided emergency braking was necessary—but didn’t have the ability to do that on its own. The yellow bands show distance in meters, and the purple indicates the car’s path. (Image: NTSB)

The Robot and the Human

These details of the fatal crash point to at least two serious flaws in Uber’s self-driving system: software that’s not yet ready to replace humans, and humans who were ill-equipped to keep their would-be replacements from doing harm.

Today’s autonomous systems rely on machine learning: They “learn” to classify and respond to situations based on datasets of images and behaviors. The software is shown thousands of images of a cyclist, or a skateboarder, or an ambulance, until it learns to identify those things on its own. The problem is that it’s hard to find images of every sort of situation that could happen in the wild. Can the system distinguish a tumbleweed from a toddler? A unicyclist from a cardboard box? In some of these situations, it should be able to predict the object’s movements, and respond accordingly. In others, the vehicle should ignore the tumbleweed, refrain from a sudden, dangerous braking action, and keep on rolling.
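To make that concrete, here is a minimal, purely illustrative sketch of the training loop behind such a classifier, written in Python with PyTorch. The label set, the tiny network, and the synthetic tensors standing in for labeled camera crops are all assumptions made for the example; none of this is Uber’s actual perception code.

```python
# Minimal sketch of how a perception classifier is trained, using PyTorch.
# The labels, the tiny network, and the synthetic images are illustrative
# stand-ins, not Uber's actual perception stack.
import torch
import torch.nn as nn

LABELS = ["pedestrian", "bicycle", "vehicle", "unknown"]  # assumed label set

# A deliberately tiny convolutional network; real systems are far larger.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, len(LABELS)),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-ins for labeled training data: a batch of 8 RGB 64x64 crops.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, len(LABELS), (8,))

for _ in range(10):  # a few training steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# At runtime the classifier assigns each detected object a label and a
# confidence score; downstream code predicts a path based on that label.
with torch.no_grad():
    probs = torch.softmax(model(images[:1]), dim=1)[0]
    print({label: round(p.item(), 2) for label, p in zip(LABELS, probs)})
```

The sketch makes the dependency visible: the classifier can only recognize what its training data resembles, which is exactly the gap Rajkumar describes below.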

Herzberg, walking a bike loaded with plastic bags and moving perpendicular to the car, outside the crosswalk and in a poorly lit spot, challenged Uber’s system. “This points out that, a) classification is not always accurate, which all of us need to be aware of,” says Rajkumar. “And b) Uber’s testing likely did not have any, or at least not many, images of pedestrians with this profile.”

Solving this problem is a matter of capturing all the strange, unpredictable edge cases on public roads, and figuring out how to train systems to deal with them. It’s the engineering problem at the heart of this industry. It’s supposed to be hard. The car won’t get it right every time, especially not in these early days.

That’s why Uber relied on human safety drivers. And it’s what makes the way they structured their program troubling. At the time, the company’s human operators were paid about $24 an hour (and given plenty of energy drinks and snacks) to work eight-hour shifts behind the wheel. They were told what routes to drive, and what to expect from the software. Above all, they were instructed to keep their eyes on the road at all times, to remain ready to grab the wheel or stomp the brakes. Uber has caught (and fired) drivers who were looking at their phones while on the job—and that shouldn’t surprise anybody.

“We know that drivers, that humans in general, are terrible overseers of highly automated systems,” says Bryan Reimer, an engineer who studies human-machine interaction at MIT. “We’re terrible supervisors. The aviation industry, the nuclear power industry, the rail industry have shown this for decades.”

Yet Uber placed the burden for preventing crashes on the preoccupied shoulders of humans. That’s the tremendous irony here: In its quest to eliminate the humans who cause more than 90 percent of American crashes, which kill about 40,000 people every year, Uber hung its safety system on the ability of a particular human to be perfect.

There are other ways to test potentially life-saving tech. Some autonomous developers require two people in every testing vehicle, one to sit behind the wheel and another to take notes on specific events and system failures during the drive. (Uber originally had two operators in each car, but switched to solo drivers late last year.) The safety driver behind the wheel of the crashed Uber told NTSB investigators she wasn’t watching the road in the moments leading up to the collision because she was looking at the car’s interface—which is built into the center console, outside a driver’s natural line of sight. If another human were handling that job, which includes noting observations about the car’s behavior, the person behind the wheel might have spotted Herzberg—and saved her life.

Or, Uber could have given its system the ability to monitor a driver’s attentiveness to the road, and emit a beep or a buzz if it discovers the person behind the wheel isn’t staying on task. Cadillac’s semi-autonomous SuperCruise system uses an infrared camera on the steering column to watch a driver’s head position, and issue warnings when they look away from the road for too long.
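The logic of that kind of driver-monitoring feature is simple to sketch. The Python snippet below assumes a gaze estimate arriving from a driver-facing camera and fires an alert once the driver’s eyes have been off the road longer than some threshold; the two-second limit, class name, and alert mechanism are illustrative assumptions, not SuperCruise’s or anyone else’s actual implementation.

```python
# Hypothetical sketch of a driver-attention watchdog: if gaze has been off the
# road longer than a threshold, raise an alert. All values here are assumed.
import time
from typing import Optional

EYES_OFF_ROAD_LIMIT_S = 2.0  # assumed tolerance before warning the driver


class AttentionMonitor:
    def __init__(self, limit_s: float = EYES_OFF_ROAD_LIMIT_S):
        self.limit_s = limit_s
        self.off_road_since: Optional[float] = None  # when gaze left the road

    def update(self, eyes_on_road: bool, now: Optional[float] = None) -> bool:
        """Return True if an alert should fire on this frame."""
        now = time.monotonic() if now is None else now
        if eyes_on_road:
            self.off_road_since = None
            return False
        if self.off_road_since is None:
            self.off_road_since = now
        return (now - self.off_road_since) >= self.limit_s


# Example: gaze estimates arriving once per second from a driver-facing camera.
monitor = AttentionMonitor()
for t, on_road in enumerate([True, False, False, False, True]):
    if monitor.update(on_road, now=float(t)):
        print(f"t={t}s: ALERT - driver looking away too long")
```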

Uber’s system didn’t even have a way to alert the driver when it determined emergency braking was necessary, the report says. Many cars on the market today can detect imminent collisions, and alert the driver with flashing red lights or loud beeping. That sort of feature could have helped here. “That is kind of mind-boggling, that the vehicle system did nothing and they had to depend entirely on the driver,” says Steven Shladover, a UC Berkeley research engineer who has spent decades studying automated systems.
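A basic forward-collision warning of that sort can be sketched in a few lines: estimate the time to collision from range and closing speed, and alert the driver when it drops below a threshold. The 2.5-second threshold and the function names below are assumptions for illustration only, not any vendor’s actual system.

```python
# Hypothetical sketch of a forward-collision warning, the kind of feature the
# article notes many production cars already have. Thresholds are assumed.
TTC_WARNING_THRESHOLD_S = 2.5  # assumed: warn if collision is this imminent


def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither party changes speed; inf if opening."""
    if closing_speed_mps <= 0:
        return float("inf")
    return distance_m / closing_speed_mps


def should_warn_driver(distance_m: float, closing_speed_mps: float) -> bool:
    return time_to_collision(distance_m, closing_speed_mps) < TTC_WARNING_THRESHOLD_S


# Roughly the numbers in the report: about 40 mph (roughly 18 m/s) with an
# obstacle around 23 m ahead gives a time to collision near 1.3 seconds.
print(should_warn_driver(distance_m=23.0, closing_speed_mps=18.0))  # True
```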

Uber says it’s working on its “safety culture,” and has not yet resumed testing, which it paused after the crash. “Over the course of the last two months, we’ve worked closely with the NTSB,” a spokesperson said in a statement. “As their investigation continues, we’ve initiated our own safety review of our self-driving vehicles program.” The company hired former NTSB chair and aviation expert Christopher Hart earlier this month to advise it on safety systems.

Whatever changes Uber makes, they won’t appear in Tempe anytime soon. The company plans to resume testing in Pittsburgh this summer, home to its R&D center. But it’s shutting down its Arizona operation altogether. The move had very human consequences. Uber laid off about 300 workers in the state—many of them safety drivers.
