We are reading more and more about self-driving cars. California seems to be leading the way, but we are hearing of pilot projects in many locations. Vivek Wadhwa, author of a new book, “The Driver in the Driverless Car: How Our Technology Choices Will Create the Future,” writes an article on the Washington Post website about his views on the technology.
Self-driving cars should leave us all unsettled. Here’s why
It is a warm autumn morning, and I am walking through downtown Mountain View, Calif., when I see it. A small vehicle that looks like a cross between a golf cart and a Jetson-esque, bubble-topped spaceship glides to a stop at an intersection. Someone is sitting in the passenger seat, but no one seems to be sitting in the driver’s seat. How odd, I think. And then I realize I am looking at a Google car. The technology giant is headquartered in Mountain View, and the company is road-testing its diminutive autonomous cars there.
This is my first encounter with a fully autonomous vehicle on a public road in an unstructured setting.
The Google car waits patiently as a pedestrian passes in front of it. Another car across the intersection signals a left-hand turn, but the Google car has the right of way. The automated vehicle takes the initiative and smoothly accelerates through the intersection. The passenger, I notice, appears preternaturally calm.
I am both amazed and unsettled. I have heard from friends and colleagues that my reaction is not uncommon. A driverless car can challenge many assumptions about human superiority to machines.
Though I live in Silicon Valley, the reality of a driverless car is one of the most startling manifestations of the future unknowns we all face in this age of rapid technology development. Learning to drive is a rite of passage for people in materially rich nations (and becoming so in the rest of the world): a symbol of freedom, of power, and of the agency of adulthood, a parable of how brains can overcome physical limitations to expand the boundaries of what is physically possible. The act of driving a car is one that, until very recently, seemed a problem only the human brain could solve.
Driving is a combination of continuous mental risk assessment, sensory awareness, and judgment, all adapting to extremely variable surrounding conditions. Not long ago, the task seemed too complicated for robots to handle. Now, robots can drive with greater skill than humans — at least on the highways. Soon the public conversation will be about whether humans should be allowed to take control of the wheel at all.
This paradigm shift will not be without costs or controversies. To be sure, widespread adoption of autonomous vehicles will eliminate the jobs of the millions of Americans whose living comes from driving cars, trucks, and buses (and eventually all those who pilot planes and ships). We will begin sharing our cars, in a logical extension of Uber and Lyft. But how will we handle the inevitable software faults that result in human casualties? And how will we program the machines to make the right decisions when faced with impossible choices, such as whether an autonomous car should drive off a cliff to spare a busload of children at the cost of killing the car’s human passenger?
I was surprised, upon my first sight of a Google car on the street, at how mixed my emotions were. I’ve come to realize that this emotional admixture reflects the countercurrents with which the bow waves of these technologies are rocking all of us: trends toward efficiency, instantaneity, networking, accessibility, and multiple simultaneous media streams, with consequences that include unemployment, cognitive and social inadequacy, isolation, distraction, and cognitive and emotional overload.
Once, technology was a discrete business dominated by business systems and some cool gadgets. Slowly but surely, though, it crept into more corners of our lives. Today, that creep has become a headlong rush. Technology is taking over everything: every part of our lives, every part of society, every waking moment of every day. Increasingly pervasive data networks and connected devices are enabling rapid communication and processing of information, ushering in unprecedented shifts — in everything from biology, energy and media to politics, food and transportation — that are redefining our future. Naturally we’re uneasy; we should be. The majority of us, and our environment, may receive only the backlash of technologies chiefly designed to benefit a few. We need to feel a sense of control over our own lives; and that necessitates actually having some.
The perfect metaphor for this uneasy feeling is the Google car. We welcome a better future, but we worry about the loss of control, of pieces of our identity, and most importantly of freedom. What are we yielding to technology? How can we decide whether technological innovation that alters our lives is worth the sacrifice?
The noted science-fiction writer William Gibson, a favorite of hackers and techies, said in a 1999 radio interview (though apparently not for the first time): “The future is already here; it’s just not very evenly distributed.” Nearly two decades later — though the potential now exists for most of us, including the very poor, to participate in informed decision-making as to its distribution and even as to bans on use of certain technologies — Gibson’s observation remains valid.
I make my living thinking about the future and discussing it with others, and am privileged to live in what to most is the future. I drive an amazing Tesla Model S electric vehicle. My house, in Menlo Park, close to Stanford University, is a “passive” home that extracts virtually no electricity from the grid and expends minimal energy on heating or cooling. My iPhone pairs with electronic sensors that I can place against my chest to generate a detailed electrocardiogram and send it to my doctors from anywhere on Earth.
Many of the entrepreneurs and researchers I talk with about breakthrough technologies such as artificial intelligence and synthetic biology are building a better future at a breakneck pace. One team built a fully functional surgical-glove prototype to deliver tactile guidance for doctors during examinations — in three weeks. Another team’s visualization software, which can tell farmers the health of their crops using images from video cameras flown on off-the-shelf drones, took four weeks to build.
The distant future, then, is no longer distant. Rather, the institutions we expect to gauge and perhaps forestall new technologies’ hazards, to distribute their benefits, and to help us understand and incorporate them are drowning in a sea of change as the pace of technological change outstrips them.
The shifts and the resulting massive ripple effects will, if we choose to let them, change how we live, how long we live, and the very nature of being human. Even if my futuristic life sounds unreal, its current state is something we may laugh at within a decade as a primitive existence, because our technologists now have the tools to enable the greatest alteration of our experience of life since the dawn of humankind. As in all other manifest shifts — from the use of fire to the rise of agriculture and the development of sailing vessels, internal-combustion engines, and computing — this one will arise from breathtaking advances in technology. This shift, though, is far larger, is happening far faster, and may be far more stressful to those living through it. Inability to understand it will make our lives and the world seem even more out of control.
A broad range of technologies is now advancing at an exponential pace: everything from artificial intelligence to genomics to robotics and synthetic biology. They are making amazing and scary things possible — at the same time.
Broadly speaking, we will, jointly, choose one of two possible futures. The first is a utopian “Star Trek” future in which our wants and needs are met, in which we focus our lives on the attainment of knowledge and betterment of mankind. The other is a “Mad Max” dystopia: a frightening and alienating future, in which civilization destroys itself.
These are both worlds of science fiction created by Hollywood, but either could come true. We are already capable of creating a world of tricorders, replicators, remarkable transportation technologies, general wellness and an abundance of food, water and energy. On the other hand, we are now also capable of ushering in a jobless economy; the end of all privacy; invasive medical-record keeping; eugenics; and an ever-worsening spiral of economic inequality: conditions that could create an unstable, Orwellian or violent future that might undermine the very technology-driven progress that we so eagerly anticipate. And we know that it is possible to inadvertently unwind civilization’s progress. It is precisely what Europe did when, after the fall of the Roman Empire, humanity slid into the Dark Ages, a period during which significant chunks of knowledge and technology that the Romans had hard-won through trial and error disappeared from the face of the Earth. Unwinding our own civilization’s amazing progress would require merely cataclysmic instability.
It is the choices we all make that will determine the outcome. Technology will surely create upheaval and destroy industries and jobs. It will change our lives for better and for worse simultaneously. But we can reach “Star Trek” if we share the prosperity we are creating and soften its negative impacts; ensure that the benefits outweigh the risks; and gain greater autonomy rather than becoming dependent on technology.
The oldest technology of all is probably fire, even older than the stone tools that our ancestors invented. It could cook meat and provide warmth; and it could burn down forests. Every technology since then has had the same bright and dark sides. Technology is a tool; it is how we use it that makes it good or bad. There is a continuum limited only by the choices we make jointly. And all of us have a role in deciding where the lines should be drawn.