NEWS

How autonomous cars try to adapt to the world as it is

Self-driving cars will probably end up reshaping urban infrastructure. But as long as they have to share the road with human-driven vehicles, they will need to adapt and learn to communicate.

Today, we are promised a future where cities will be filled with autonomous cars seamlessly communicating with each other. No more accidents, no more traffic jams, no more parking nightmares. But before that bright future arrives, autonomous cars will have to cohabit with conventional ones. “Better vehicle-to-person communication will be crucial, as most experts predict a lengthy period — perhaps decades — of mixed traffic, with robot cars navigating roads alongside human-driven ones”, writes the San Francisco Chronicle in a comprehensive article on the subject. And even once they’re ubiquitous, autonomous cars will still have to communicate with other road users (bicyclists, scooter and moped riders, pedestrians). That is why the study of the relationship between machines and humans is now a booming field. In the United States in particular, social scientists (mainly anthropologists, sociologists and psychologists) are teaming up with roboticists, engineers, designers and programmers to understand how people interact with autonomous vehicles.

Building trust through communication

First, there’s the issue of acceptance. “It’s crucial to make self-driving cars accepted in society so people feel they are trustworthy and part of daily life,” Sameep Tandon, CEO and co-founder of self-driving car company Drive.ai, tells the San Francisco Chronicle. “Otherwise there’s a risk people will think of this as the robot apocalypse.” That may be where we are now. In March 2018, for the first time, a woman was killed by a driverless Uber in Arizona (it was later established that the vehicle had detected her and decided not to brake). In California, one third of the collisions involving autonomous cars in 2018 were caused by humans attacking the vehicle. The New York Times recently explored the reasons why humans tend to attack robots:

Cognitive neuroscientist Agnieszka Wykowska evokes a “Frankenstein syndrome,” in which “we are afraid of this thing that we don’t really fully understand, because it’s a little bit similar to us, but not quite enough.” In short, trust is not there yet.

Today, all self-driving cars must carry a human operator sitting in the driver’s seat, which may increase the level of acceptance. The Center for Design Research at Stanford University is thinking about the next step, and testing how people react to an empty car. They’ve devised a “car seat” suit that renders the driver “invisible” at first glance: “Our techniques are theater-like ways of simulating the future, like live-action, improvisational role-play for science,” Wendy Ju, who leads that experiment, tells the San Francisco Chronicle. “There’s a comedy to it, but we are dead serious about collecting real behavioral responses.”

AI-powered cars

Then there’s the matter of communication. Cars will have to be able to understand what is happening around them and to react while clearly signaling their intentions. Waymo’s mini-vans are already capable of reading the hand signals of bicyclists. Google has realised that its cars are too “polite” and can stay stuck for a long time at intersections, letting all other vehicles go first. A car programmed to stop whenever there’s an obstacle would rapidly create traffic jams for no valid reason.

So the idea is not to hard-code the understanding of external situations, but to teach it. Waymo exposes its vehicles to virtual re-creations of real-life situations; Nissan feeds its vehicles analyses of a day in the life of an urban intersection. “So many complicated things can happen in the real world,” says Drive.ai’s Tandon. “If you program a rule for every single case, you’d have a decision tree so complicated no one could deal with it. Instead, we use deep learning to make the process go seamlessly. We want our vehicles to learn from as much data as possible.”

Once the situation has been read and a decision has been made, the car will need to communicate its intentions. How do you signal to a pedestrian that they can cross in front of you, if not with a hand gesture or a nod? Drive.ai is now experimenting with light displays on the roof of the car. Nissan has appointed anthropologist Melissa Cefkin to work on the legibility of white LED arcs, also on the roof, that signal intentions. Other methods could include “audible cues (perhaps a polite voice saying ‘Cross now,’ or a musical tone as at some stoplights); rooftop displays showing symbols or words; laser devices to project a message such as a crosswalk on the road ahead to indicate that it’s safe to cross; and cars that wirelessly transmit their intentions to other vehicles”, imagines the San Francisco Chronicle. For now, car manufacturers are each working on their own solutions. The next step will be to establish standardised communication methods that can be included in the official road safety rules. That means one thing: instead of trying to change their environment, self-driving cars must find the best way to adapt to it.

Read more:
“Making Autonomous Vehicles a Reality: Lessons from Boston and Beyond”, BCG, October 2017.