Recently Paul got to ride in a self-driving car of the present. There was no chatty humanoid robot at the wheel. In today's self-driving car, the car itself is the robot. The car's sensors feed data into fast computers programmed to interpret that data and use it to predict what will happen next and drive appropriately.
In this column, we'll report on Paul's ride, examine the current state of self-driving cars, and consider some of the ways that these vehicles might influence the future—for good and for ill.
Paul lives two blocks from Google X, where self-driving cars are being developed. Every day he sees several self-driving Google cars cruising around his Mountain View neighborhood.
The older models are Lexus SUVs festooned with lasers, radars, and cameras. A newer, and cuter, beetle-shaped car has sensors integrated into its body. This electric beetle hums as it drives around, letting pedestrians know it is coming.
The research center is in the former Mayfield Mall, the first enclosed air-conditioned mall in California. The roof of the old mall was a huge parking lot. Now it is fenced off from all traffic except self-driving cars. High above street level, it's shielded from prying eyes.
When Google opened their research center, they invited Paul and all his neighbors to come for a tour and a ride in a self-driving car. When Paul's turn came to take a ride, a friendly tour guide opened the car door and Paul slid into a front seat. Then he looked around. There was no steering wheel, no accelerator pedal, and (here's the part where panic started to set in) no brake pedal. Time to fasten your seatbelts!
The tour guide pressed a button to start the car. There was a countdown—three, two, one—and they were off and humming around a course on the roof of the old mall. A large video screen displayed what the car was "seeing" with its lidar (laser radar). The vehicle deftly avoided light poles, clearly visible on the screen.
Then a Google employee masquerading as a crazy pedestrian stepped in front of the car. Instinctively, Paul braced for a collision.
Instead, the outline of the pedestrian appeared on the screen and the car stopped. Its sensors had detected the person, relayed the sensor output to the computer, and, in keeping with its programming, the computer and the car had reacted quickly. The car continued when the pedestrian finished crossing.
Suddenly another vehicle pulled out in front of the car, and it stopped again, politely waiting until the way was clear. The car then approached a cyclist riding one of the ubiquitous green and yellow Google bicycles. The cyclist made a hand signal for a left turn, and the car slowed and gave him room. (Paul wondered when he had last seen a bicyclist give a hand signal.) The car's screen registered the hand signal and the car behaved appropriately—perhaps more appropriately than many human drivers.
After much too short a drive Paul had to get out. He was impressed. So impressed that he'd love to wangle another ride out on the streets of town.
Unlike the car that Paul rode in, street-legal self-driving cars have steering wheels and accelerator and brake pedals so that a human driver can take control. To date, Google's autonomous cars have driven over two million miles with only a dozen or so accidents. In most of these accidents, the self-driving car was hit by a human driver who didn't expect any car to obey the letter of the law and actually stop at a red light before turning right.
One self-driving car hit a bus that ought to have yielded right-of-way, but didn't. In an article on the accident, a bus driver confirmed something that we human drivers have long suspected. "Buses have the right of way by tonnage," he said. Google reportedly reprogrammed tonnage into the computers on its cars.
Another Google car got ticketed for driving twenty-five mph in a thirty-five mph zone, causing a six-car back-up behind it. Under California law, a slow driver delaying a line of cars must pull over and let the other cars pass.
Originally, Google's goal was to make a "level five" self-driving car that needed no human intervention whatsoever. You could get into the car, press a button and say "take me to your leader" (or whatever destination you prefer). The car would take you there.
Recently, Google's parent company, Alphabet, changed that strategy, spinning off a new company, named Waymo, to deal with self-driving cars. Waymo does not plan to build cars but rather to develop operating systems that will let other manufacturers build self-driving vehicles. The software developed by Waymo will not be for level five autonomous vehicles, but for cars with a steering wheel, accelerator and brake pedals so that a human can take over.
Seems like we humans just don't want to relinquish control. Much as we are inclined to resist our robot overlords, there are some very good reasons to consider turning the driving over to the car.
Paul just got back from India where he observed some amazing driving. Without the aid of stop signs or traffic lights, drivers move rapidly through intersections crowded with people, bicycles, trucks, cows, and other cars. These human-driven vehicles behave in a coordinated fashion to avoid collisions. (To see some examples of this, search YouTube for "amazing intersections.")
The traffic flow in these intersections reminded Paul of starlings. These birds gather in flocks known as murmurations. Thousands of birds fly as a group at speeds topping forty miles per hour, separated by just over a body length. The flock can change direction in an instant, the large group acting like a single organism.1
Scientists are still trying to figure out exactly how the birds manage this neat trick. It's clear that the birds need fast reaction times to avoid collisions. We don't know the average starling reaction time, but the average human braking reaction time is 2.3 seconds. (Optional: If you want to measure your own reaction time, see the sidebar below.) In that time, a car going 100 km/h (about 60 mph) will travel 200 feet, more than twelve car lengths, before you even press the pedal. Computers are much faster.
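That 200-foot figure is simple arithmetic: distance equals speed times reaction time. Here's a small Python sketch; the 2.3-second human figure comes from above, while the 50-millisecond computer figure is our own assumption, thrown in for comparison.

```python
# Distance a car travels during the driver's reaction time,
# before braking even begins.

def reaction_distance_m(speed_kmh: float, reaction_s: float) -> float:
    """Meters covered between seeing the hazard and pressing the pedal."""
    speed_ms = speed_kmh * 1000 / 3600  # convert km/h to m/s
    return speed_ms * reaction_s

human = reaction_distance_m(100, 2.3)   # average human driver (figure from above)
robot = reaction_distance_m(100, 0.05)  # a computer reacting in ~50 ms (our assumption)

print(f"human: {human:.0f} m ({human * 3.28084:.0f} feet)")
print(f"robot: {robot:.1f} m")
```

Run it and the human comes out around 64 meters, some 210 feet, consistent with the rounder 200-foot figure above; the hypothetical computer covers barely a car length.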
But fast reaction time isn't the only advantage the starlings have. Some scientists who study flocking behavior think that each bird bases its movements on observations of six or seven nearby birds. As each bird responds to the movement of its immediate neighbors, changes in speed and direction propagate through the flock.
Like the starlings in a murmuration, robot cars can be in constant communication with other cars sharing the road. Vehicle-to-vehicle communication technology—V2V for short—makes it possible for cars to pack closer together on freeways, allowing three or more times as many cars to fill the same space. If something happens and one car has to slow down, all the cars behind it can begin to brake with a speed-of-light delay plus a little processing-time delay.
Think of those high-speed, traffic-light-free intersections in India that made Paul decide that he'd never drive there. Then imagine all the cars moving at twice the speed. (You might want to keep your eyes closed.)
But robot cars don't just talk to each other using V2V communication. They also communicate with the road itself. This is called vehicle-to-infrastructure communication (V2I for short).
There are traffic apps that let you look at a map of the route you plan to travel, looking for traffic jams and planning routes to avoid them. There are also apps that plan a route for you, relying on information from the road and from other drivers. Unfortunately, there's a downside to everyone having these apps. If everyone checks the map and changes their route, they may all arrive at the same alternate route, creating the congestion they were trying to avoid.
V2I communication allows a set of roadways to inform vehicles of where other vehicles are headed. The car can determine what areas are going to become crowded, thus avoiding traffic jams and slowdowns before they form.
When cars can all communicate and make decisions (without the interference of any primate road rage), stop lights and stop signs at intersections won't be needed for vehicles. But cars aren't the only travelers on the road. What about pedestrians?
That takes us to vehicle-to-pedestrian communication or (you guessed it) V2P, for short. V2P comes into play when a self-driving car detects a cell phone in use nearby, even if the person carrying it is out of sight. The car is ready to react if the pedestrian bumbles into the roadway.
And we do mean "bumbles." It's interesting to think of a cell phone as safety equipment. Here in San Francisco, cell phones are more commonly a source of danger. Pedestrians with their eyes on the phone screen stumble, fall, walk into things, and even step right into traffic. Thanks to this ubiquitous technology, urban pedestrians had to go to the hospital for emergency treatment ten times as often in 2014 as they did in 2006.2
Cell phones could increase pedestrian safety by telling their owners when to wait at an intersection and when to cross. Unfortunately, we suspect most people would just ignore the tiny robot overlord in their pocket and continue their bumbling ways. But we can also imagine a dystopian future in which the phone uses hard-to-ignore methods for getting its owner's attention. Say you're approaching an intersection and you just happen to be checking a cute cat video that your BFF texted you. Suddenly your screen goes blank. "I'm cutting off your cat video," says the tiny robot overlord. "If you ever want to see whether Fluffy jumps into the box, you just have to wait right here. It's for your own good. Trust me."
In the driver's version of this dystopia, the app could flash warnings to both the car and the pedestrian, thus pissing them both off. The robot cars won't be prone to road rage, but we suspect they will be capable of inducing it in humans.
In a brave new world with V2V, V2I, and V2P, accidents will be less common. But still there will be accidents. And the self-driving cars must be ready for that possibility. That means the programmer setting up the car's system must figure out the car's "crash optimization algorithm." In a situation where the car has to crash into something, what will it choose to crash into?
Suppose a car has a choice of running into a crowd of pedestrians or missing them and hitting a concrete pillar. Running into the pillar is more likely to injure the people in the car, while running into the pedestrians is more likely to injure them. What should the car be programmed to choose?
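One way programmers might frame that choice is as a minimization problem: score each unavoidable option by expected harm and pick the least bad one. The sketch below is entirely hypothetical, with invented numbers, and deliberately sidesteps the ethical question of whose injuries count for how much.

```python
# A toy "crash optimization" sketch. The scoring function, the options,
# and the injury estimates are all made up for illustration; real
# systems and injury models are far more complex.

def choose_crash_option(options):
    """Pick the option with the lowest expected number of injuries."""
    return min(options, key=lambda o: o["expected_injuries"])

options = [
    {"target": "crowd of pedestrians", "expected_injuries": 5.0},
    {"target": "concrete pillar", "expected_injuries": 1.5},
]

print(choose_crash_option(options)["target"])  # prints "concrete pillar"
```

The hard part, of course, isn't the `min` function. It's deciding what goes into those numbers, which is exactly the question the survey below puts to people.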
When surveyed, people say a car should take action to save the most lives. But when these same people are asked to make a choice between two cars, their answer changes. Car #1 will save the most lives, even if that decision kills the car's passenger. Car #2 will make the decision that saves the passenger, even if that means wiping out an entire troop of Girl Scouts. It doesn't surprise us that people almost always picked Car #2.
For another perspective on this dilemma, Paul discussed the ethics of self-driving cars with a group of Buddhist monks when he was in India. His workshop group of monks instantly suggested a consideration that hadn't occurred to Paul: the people in the car had chosen to be in the car. The pedestrians, on the other hand, had made no such choice. So perhaps, the monks said, the car should favor saving the pedestrians, who were essentially innocent bystanders.
What about economic consequences? How will the creation of self-driving cars and trucks affect the millions of taxi drivers and professional truck drivers in the United States? And what about the body shops that repair cars after collisions, the lawyers who represent people injured in accidents? How will self-driving cars affect highway safety and insurance rates?
So many questions to consider as robots grab the steering wheel and take us on a ride into the future. Is there a brake pedal? We don't see one.
To measure your reaction time, all you need is a ruler, a friend, and a table.
Rest your arm on the table with your hand sticking out over the edge. Hold your thumb and index finger about an inch apart, ready to pinch together. Have your friend hold the ruler with its zero point between your thumb and finger.
With no warning, your friend drops the ruler. The moment you see the ruler start to fall (that's the flash of brake lights!), bring your thumb and finger together to catch it.
You'll know how far the ruler fell by checking where you grabbed it. Consult the table to find out your reaction time.
Where you caught the ruler – Reaction time
2 inches – 0.10 seconds
4 inches – 0.14 seconds
6 inches – 0.17 seconds
8 inches – 0.20 seconds
10 inches – 0.23 seconds
12 inches – 0.25 seconds
Of course when you brake a car you do it with your feet. So to get a better reading of braking reaction time, you should really catch the ruler between your feet. Good luck with that.
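The table above is straight free-fall physics: the ruler drops a distance d = ½gt², so the reaction time is t = √(2d/g). A few lines of Python reproduce it (the six-inch entry comes out 0.18 rather than 0.17, a rounding difference).

```python
import math

# Reaction time implied by how far a dropped ruler falls before
# you catch it: d = (1/2) * g * t^2, so t = sqrt(2 * d / g).

G = 9.81            # gravitational acceleration, m/s^2
INCH_TO_M = 0.0254  # inches to meters

def reaction_time_s(inches: float) -> float:
    """Reaction time for a catch after the ruler falls `inches`."""
    return math.sqrt(2 * inches * INCH_TO_M / G)

for inches in (2, 4, 6, 8, 10, 12):
    print(f"{inches:2d} inches - {reaction_time_s(inches):.2f} seconds")
```

Notice that the time grows as the square root of the distance, which is why the twelve-inch mark buys you only two and a half times the reaction time of the two-inch mark.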
Pat Murphy and Paul Doherty recently published their first fiction collaboration, a story inspired in part by research for a past column. "Cold Comfort," a story about the melting of the permafrost, is available in Bridging Infinity, edited by Jonathan Strahan. It will be reprinted in Gardner Dozois' Year's Best Science Fiction: Volume 34. For more on Paul's work and his latest adventures, visit www.exo.net/~pauld. You can learn more about what Pat's up to at www.brazenhussies.net/Murphy.
To contact us, send an email to Fantasy & Science Fiction.
Copyright © 1998–2020 Fantasy & Science Fiction All Rights Reserved Worldwide