Are autonomous vehicles really safer than humans?

The self-driving vehicle presents the world with an exceptional opportunity to profoundly change today’s transportation industry. In its early stages, the autonomous vehicle was a marvel beyond comprehension, much as a jet engine would have been to someone at the turn of the twentieth century. Yet within the span of just a few years, the driverless car has moved beyond the realm of science fiction and into reality.

“The reality about transportation is that it’s future-oriented. If we’re planning for what we have, we’re behind the curve.” – Anthony Foxx[1]

The progression of contemporary technology, embodied in the autonomous vehicle, is creating a continuum between the increasingly outmoded human-operated automobile and the automated, computer-based vehicles now entering the consumer transportation market. Indeed, “robot cars”[2] are the genesis of what is now regarded as the technological age of driving.

Improving the safety and security of automobile travel has always been a paramount concern in this country, in large part because human error sits at the epicenter of fatal car accidents.[3] In fact, “[s]ome ninety percent of motor vehicle crashes are caused at least in part by human error.”[4]

By their very nature, autonomous vehicles were designed to combat reckless and negligent driving; the underlying philosophy rests on the premise that a computer-based system is less likely to err than an absent-minded human behind the wheel. Building on that premise, manufacturers of autonomous vehicles often tout the technology’s far-reaching ability to markedly improve road safety by eliminating the human element. But what, exactly, makes the autonomous vehicle less prone to such accidents?

On the night of March 18, 2018, an autonomously operated, Uber-owned vehicle struck a woman walking her bicycle across a suburban street in Arizona, a state that has fashioned itself into a tech powerhouse.[5] She was rushed to the hospital, where she succumbed to her injuries and was pronounced dead.[6] In the immediate aftermath, Uber suspended all autonomous vehicle testing in a number of cities.[7] But the central concern is that while a safety driver was inside the car, the vehicle was traveling in autonomous mode: the computer system, not the human, was operating the vehicle at the time of the accident.

Why is this important? While the human cost of this unfortunate incident should not be discounted, the event serves as a test case for assessing legal responsibility when a human driver is no longer the one operating the vehicle.

The vehicle in question, a Volvo XC90 sport utility vehicle outfitted with Uber’s cutting-edge sensor system, was traveling at 40 mph on a street with a 45 mph speed limit immediately before it struck the woman.[8] Internal conditions (e.g., the functionality of the software system) and external conditions (e.g., the weather) were ruled out as potential causes of the accident.[9] These facts matter because this was the first pedestrian fatality arising out of a fully autonomous vehicle.[10] As a consequence, any legal action that emerges from this case could set up a battle pitting peer-to-peer ridesharing networks (like Uber and Lyft) against the manufacturers of the software systems used in autonomous vehicles.[11]

This new era of automated vehicles upends the traditional legal theories plaintiffs have relied on to recover damages after a car accident. For vehicles employing automation levels 0-2,[12] where the human driver remains responsible for monitoring and controlling the vehicle, a plaintiff commonly sues the negligent driver for failing to exercise reasonable care under applicable state law. By contrast, litigation involving vehicles equipped with levels 3-5 requires a plaintiff to sue either (or both) of the following parties: (1) the manufacturer of the automated software system, for a design defect that caused the accident; or (2) the human driver, for failing to heed the system’s audio and/or visual warnings to take over control in situations that require full human operation. A plaintiff proceeding against the former need not establish fault under traditional negligence principles; rather, the plaintiff must show that the automated system’s design was inherently defective, rendering it unsafe for use.[13]

But there is a fundamental element that lawmakers have yet to conscientiously explore: beyond the technological modeling of these vehicles lie legal, moral, and ethical concerns that software and automobile manufacturers cannot accurately model.[14] Placing artificially intelligent (“AI”) vehicles, together with the dangerous consequences that accompany them, on public, communal streets is uncharted legal territory.

The underlying principle lies within the realm of voluntariness. When NASA places an astronaut on a spacecraft bound for the moon, the astronaut volunteers to accept the potentially hazardous consequences.[15] Deploying AI-based vehicles on public roads populated by unsuspecting bystanders, cyclists, pedestrians, and other drivers, none of whom volunteered to be put in danger, is an entirely different story.

While this technology has outpaced society’s expectations, the recent accident involving Uber’s automated, Volvo-manufactured vehicle raises an important question: is an inattentive human driver a reasonable fail-safe in the event the technology requires the human to take over?

Grappling with this novel issue allows society to temper its relentless enthusiasm for the technology and to thoroughly address the issues affecting our streets today.[16] Is this technology, in its current state, developed enough to be safe and secure for public use? The short answer: probably not. The lawsuits that have already emerged from this technology provide ample evidence that a host of issues must still be addressed before these vehicles can fully enter the consumer transportation market.

The heart-rending death caused by this accident should serve as a stark reminder that autonomous vehicles are still in the experimental stage, and that governments are still working out how to regulate them.

Footnotes
[1] United States Secretary of Transportation.
[2] “Robot cars” is one of many terms coined to refer to autonomous vehicles.
[3] Jeffrey K. Gurney, Sue My Car Not Me: Products Liability and Accidents Involving Autonomous Vehicles, 13 U. Ill. J.L. Tech. & Pol’y 247, 250 (2013) (citing World Health Org., Global Plan for the Decade of Action for Road Safety 2011-2020 4 (Mar. 2010), www.who.int/roadsafety/d… /plan/plan_english.pdf) [hereinafter “Sue My Car Not Me”].
[4] Bryant Walker Smith, Human Error As a Cause of Vehicle Crashes, The Center for Internet and Society, Stanford Law School, December 18, 2013, available at: cyberlaw.stanford.edu/bl… (“NHTSA’s 2008 National Motor Vehicle Crash Causation Survey is probably the primary source for the common assertion by NHTSA officials that “[h]uman error is the critical reason for 93% of crashes” (at least if “human error” and “driver error” are conflated). The 93% figure is absent from the report itself (probably intentionally) but calculable from the totals given on page 24.”). See also, Early Estimate of Motor Vehicle Traffic Fatalities in 2015, National Highway Traffic Safety Administration, July 2016, available at: crashstats.nhtsa.dot.gov…. (“A statistical projection of traffic fatalities for 2015 shows that an estimated 35,200 people died in motor vehicle traffic crashes. This represents an increase of about 7.7 percent as compared to the 32,675 fatalities that were reported to have occurred in 2014.”).
[5] See generally, Matt McFarland, Bill Gates invests $80 million to build Arizona smart city, CNN Money, November 13, 2017, available at: money.cnn.com/2017/11/13… (“A group associated with a Gates investment company has invested $80 million in a high-tech planned development outside Phoenix. The community in Belmont will be designed around high-speed networks, autonomous vehicles, high-speed digital networks, data centers, new manufacturing technologies and autonomous logistics hubs.”).
[6] Andrew J. Hawkins, Uber halts self-driving tests after pedestrian killed in Arizona, The Verge, March 19, 2018, available at: www.theverge.com/2018/3/…. See also, Daisuke Wakabayashi, Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam, N.Y. Times, March 19, 2018, available at: www.nytimes.com/2018/03/… [hereinafter “Self-Driving Uber Car Kills Pedestrian in Arizona”].
[7] Uber halted testing of such vehicles in cities like Pittsburgh, San Francisco, and Toronto, and swiftly pulled all of its autonomous vehicles from public roads in the State of Arizona. See id.
[8] See generally, Troy Griggs and Daisuke Wakabayashi, How a Self-Driving Uber Killed a Pedestrian in Arizona, N.Y. Times, March 20, 2018, available at: www.nytimes.com/interact….
[9] See id.
[10] See Self-Driving Uber Car Kills Pedestrian in Arizona, supra note 6.
[11] Tina Bellon, Fatal self-driving-auto accident raises novel legal questions, Reuters, March 21, 2018, available at: www.reuters.com/article/… [hereinafter “Fatal self-driving-auto accident raises novel legal questions”].
[12] The National Highway Traffic Safety Administration (“NHTSA”) issued an updated policy formally adopting the levels of driving automation (0 through 5) delineated in SAE International’s J3016 standard. See Hope Reese, Updated: Autonomous driving levels 0 to 5: Understanding the differences, Tech Republic, January 20, 2016, available at: www.techrepublic.com/art….
[13] See generally, Fatal self-driving-auto accident raises novel legal questions, supra note 11.
[14] CNBC Interview with Adam Jonas, Morgan Stanley Equity Analyst [hereinafter “Adam Jonas Interview”].
[15] Example derived from the CNBC interview with Adam Jonas. See Adam Jonas Interview, supra note 14.
[16] Elise Sanguinetti, a partner at the law firm Arias Sanguinetti Wang & Torrijos LLP, stated: “What this [situation] says is that the technology (behind self-driving cars) is obviously not ready to be deployed in a safe way on our public roadways … The company operating this vehicle should be named the driver, and held to at least the same standard of care as any other (human) driver.” See Rex Crum, Distracted driving or technical error? Uber fatality raises self-driving car questions, The Mercury News, March 23, 2018, available at: www.mercurynews.com/2018….

Ryan M. Mardini is a third-year law student at the University of Detroit Mercy School of Law, focusing his studies on corporate and business law. Mardini is dedicated to building a career as a corporate attorney with a keen focus on making a change within the city of Detroit.