The Tesla Experiment: Drive, It Said

http://www.nytimes.com/2016/07/17/opinion/the-tesla-experiment-drive-it-said.html


Defenders and skeptics discuss an Autopilot system that recently led to a fatality.

To the Editor: Lee Gomes is probably correct that a totally autonomous self-driving car is years, and maybe decades, away from becoming a reality (“Self-Driving Cars, Fueled by Hype,” Sunday Review, July 10). As a former computer system designer, I have always felt that many of the decisions involved in driving a car are too “fuzzy” to be effectively enshrined in computer code. Incorporating both visual and audible cues into our decision-making is just one part of the driving process in which we engage mostly subconsciously. For example, how would a car’s visual processor distinguish between a pedestrian trying to hitch a ride and a police officer flagging you down because of a downed power line ahead? Will an audio processor recognize that the sound of an ambulance siren coming from a side street at the intersection we are about to cross means that we need to stop even though we have the green light? Finally, deciding how hard to hit the gas pedal when accelerating into a traffic circle is often a challenge, even for warm-blooded drivers.

That said, the effort to develop an autonomous self-driving car is a worthy exercise. Adaptive cruise control, lane change warnings and automatic braking triggered by obstruction sensors are just a few of the features that will make driving safer even if we never reach the driverless end state. As Tesla, Google, Volvo and the others work to take the human element (and all of its inherent weaknesses — distractions, slow reflexes, declining vision and hearing) out of the driving equation, accidents will no doubt be reduced, and many, such as rear-end collisions, may be eliminated.

ROBERT CHECCHIO

Dunellen, N.J.

To the Editor: Re “Self-Driving Cars Need Driver’s Ed,” by Michael Sivak and Brandon Schoettle (Op-Ed, July 7): Where has this mania for developing self-driving cars come from? Is there a groundswell of consumer interest in this technology? Your sobering Op-Ed article raises so many reliability and safety concerns that it should give any reasonable person pause.

I live in the Boston area, where driving is chaotic, unruly and often bewilderingly rude, but at least there is a person behind the wheel who can correct such excesses quickly. If self-driving vehicles can’t read the nuances of snow, flooded roads or other weather conditions (just one of the problems your article mentions), that alone makes them dangerous. Several thousand pounds of metal piloted by technology incapable of adjusting for nuance or unusual situations seems like a recipe for avoidable disasters.

SALLY PEABODY

Medford, Mass.

To the Editor: In the wake of the Tesla Autopilot fatality and continuing National Highway Traffic Safety Administration investigation, Michael Sivak and Brandon Schoettle make the important, if perhaps self-evident, point that self-driving cars must be certified safe before public use. The real question for policy makers, however, is what constitutes an appropriately “safe” autonomous vehicle. Our current transportation system exacts a terrible toll: More than 35,000 people died on American roads in 2015, an almost 8 percent increase over 2014, and the system is almost completely dependent on petroleum, constraining American foreign policy and exposing our servicemen and women to conflict.

Autonomous vehicles have the potential to reduce traffic fatalities, expand mobility access to millions, and enhance national and economic security by building a fuel-diverse transportation system. These benefits compel the deployment of autonomous vehicles once their safety matches today’s cars with all their flaws. Imposing excessive regulation and barriers to deployment runs contrary to the national interest.

ROBBIE DIAMOND and AMITAI Y. BIN-NUN

Washington

Mr. Diamond is the chief executive and founder of Securing America’s Future Energy (SAFE), and Mr. Bin-Nun is director of its Autonomous Vehicle Initiative.

To the Editor: While the cause of the accident that killed a driver in a self-driving car remains to be determined, Tesla’s assertion that the data supporting the enhanced safety of self-driving is “unequivocal” should be strongly challenged (“A Fatality Forces Tesla to Confront Its Limits,” front page, July 2). Driving conditions in the real world are incredibly complex. When is the benefit of “decreased workload” for the driver offset by inattention? When does the relative passivity encouraged by Autopilot decrease reaction time? In what ways might the driver unconsciously become a “back seat driver”?

As a psychiatrist who appreciates how sophisticated the brain is, I would argue that safe driving is no doubt enhanced by some technology. However, we risk losing ourselves — literally and figuratively — in becoming overly reliant on technology. Energy, time and money would be better spent teaching youths the dangers of using technology while driving rather than chasing a grandiose dream.

LARRY S. SANDBERG

New York

To the Editor: The core of Tesla’s Autopilot problem is no different from what we have been facing in cockpit automation. When something nonroutine and unthinkable happens, it is up to the human operator to intervene, take over and save the day. Of course this can be a challenge, as he or she has been out of the loop, removed from active, hands-on control of the system, and may also suffer from a lack of situational awareness and skill atrophy. The Air France and Asiana crashes of 2009 and 2013, respectively, are sobering reminders of the perils of automation. It was Chesley B. Sullenberger’s intervention and improvisation that safely landed a crippled US Airways plane in the Hudson River in 2009, saving the lives of 155 people.

That Tesla did not make “details of the accident public for nearly two months — and then not until regulators announced their inquiries,” possibly for public relations reasons, is unconscionable. Tesla, its enthusiastic “brand evangelist” drivers and its boss, Elon Musk, should note what the Nobel physicist Richard P. Feynman said in the context of another technological system failure, the explosion of the space shuttle Challenger in 1986: “For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled.”

NAJMEDIN MESHKATI

Los Angeles

The writer, a professor of engineering and aviation safety at the University of Southern California, was involved in studies of a G.M. concept car in the mid-1990s.

To the Editor: Re “A Tesla Driver Using Autopilot Dies in a Crash” (front page, July 1): My friend Joshua Brown is dead — at 40 years old. The Tesla “self-driving” car killed him. Josh was a brilliant, enthusiastic technologist, an entrepreneur and a great human being. He loved his Tesla.

I am not a technophobe. I have had a long career in computers and software. I have learned that just because something can be done, sometimes it shouldn’t be. The self-driving car that killed him wasn’t malicious; it was just poorly designed and insufficiently tested. It didn’t react well to a simple real-life situation — a tractor-trailer truck turning left in front of him.

If you are contemplating a self-driving car, be very, very careful. It will take many more years before it is really reliable in all real-life situations. It’s cool technology, for sure, but it might kill you, as it did my friend. My heart aches for him.

IVAR WOLD

Moultonborough, N.H.

To the Editor: You got a lot right in your July 11 editorial on the Tesla crash that led to a fatality (“Lessons From the Tesla Crash”). You correctly urge federal regulators to “hasten development of a communications system it has been working on for several years that would allow cars to transmit their location, speed and other data to one another.” If that system had been in place, you note, Joshua Brown might have survived.

The federal safety standard mandating this vehicle-to-vehicle communications system has been developed. It has simply been stuck for months in the bureaucratic review process. Meanwhile, there are efforts underway at the Federal Communications Commission that would at best delay (and at worst completely derail) the deployment of these lifesaving technologies. Powerful interests want to seize the safety spectrum currently allocated for intelligent transportation systems. The government should put safety first: get the safety standard out, and then look at ways to share spectrum that don’t interfere with lifesaving technology.

JOHN BOZZELLA

President and Chief Executive, Association of Global Automakers

Washington

To the Editor: Tesla gets put under the magnifying glass for a single event, while the same technology has prevented numerous accidents. Some of them have been documented by Tesla owners’ dashboard cameras.

While Tesla’s Autopilot system can be operated hands-free, it does require the driver to grasp the steering wheel every few minutes to confirm that he or she is still engaged. Drivers must be ready to take over at a moment’s notice. It is no longer possible to leave the driver’s seat while Autopilot is in use.

No safety technology is 100 percent effective. But given that Tesla’s vehicle fleet has logged over 130 million miles with Autopilot turned on, the statistics look promising, even with one tragic fatality. The biggest risk lies in overconfidence on the part of some Tesla drivers — the assumption that Autopilot will save them from any situation. While it’s cool to think of Autopilot as “autonomous,” it is really just a driver’s aid. A pilot is still required.

TODD R. LOCKWOOD

South Burlington, Vt.

The writer drives a Tesla.