The mere notion of a self-driving vehicle furnishes the world with an exceptional opportunity to profoundly change today’s transportation industry. In their early stages, autonomous vehicles were nothing but a marvel far beyond our comprehension, much like a jet engine shown to someone at the turn of the century. But within the span of just a few years, the idea of the driverless car moved beyond the realm of science fiction and into reality.
Contemporary technology, through the agency of the autonomous vehicle, is generating a continuum between the somewhat outmoded human-dependent automobile and the automated, computer-based vehicles currently entering the consumer transportation market. Certainly, “robot cars”[2] are the genesis of what is now regarded as the technological age of driving.
“The reality about transportation is that it’s future-oriented. If we’re planning for what we have, we’re behind the curve.” – Anthony Foxx[1]
Improving the safety and security of automobile travel has always been of paramount concern to the citizens of this country, largely because human error sits at the epicenter of the upsurge in fatalities resulting from car accidents.[3] In fact, “[s]ome ninety percent of motor vehicle crashes are caused at least in part by human error.”[4]
By their very nature, autonomous vehicles were designed to combat the phenomenon of reckless or negligent driving; the underlying philosophy rests on the premise that a computer-based system is less likely to err than an absent-minded human behind the wheel. Building on that premise, manufacturers of autonomous vehicles often tout the technology’s far-reaching ability to significantly improve road safety by eliminating the human element. But what exactly makes the autonomous vehicle less prone to such accidents?
On March 19, 2018, an autonomously operated, Uber-owned vehicle struck a female pedestrian walking her bike across the suburban streets of the tech powerhouse state of Arizona.[5] The injured woman was rushed to the hospital, where she succumbed to her injuries and was pronounced dead.[6] As a preliminary matter, Uber suspended all autonomous vehicle testing in several cities.[7] But the supreme concern here is that although a safety driver was inside the car, the vehicle was traveling in autonomous mode: the computer system was the underlying operator of the vehicle at the time of the accident.
Why is this important? While the aftermath of this unfortunate incident should not be discounted, the event nonetheless serves as a medium for assessing legal responsibility when a human is no longer behind the wheel operating the vehicle.
The vehicle in question, a Volvo XC90 Sport, was furnished with Uber’s cutting-edge sensor system and was traveling at 40 mph on a street with a 45 mph speed limit immediately before it struck the woman.[8] Furthermore, internal conditions (e.g., the functionality of the software system) as well as external conditions (e.g., the weather) were excluded as potential causes of the accident.[9] These facts are paramount in this context because this was the first fatality arising out of a fully autonomous vehicle.[10] As a consequence, any legal action that emerges from this case could create a forthcoming battle pitting peer-to-peer ridesharing transportation networks (like Uber and Lyft) against the various manufacturers of the software systems used in autonomous vehicles.[11]
This new renaissance-like era of automated vehicles upends the old-fashioned legal theories plaintiffs relied on to bring a lawsuit for damages resulting from a car accident. With respect to vehicles employing automation levels 0-2,[12] where the human driver maintains full control and operation, a plaintiff commonly brought suit against the negligent driver for failing to exercise reasonable care under the applicable state statute. By contrast, litigation involving vehicles equipped with levels 3-5 requires plaintiffs to bring suit against either (or both) of the following parties: (1) the automated software system’s manufacturer, for a design defect that caused the accident; or (2) the human driver, for failing to heed the system’s audio and/or visual warnings that require the driver to take over full operation of the vehicle in certain situations. As a result, a plaintiff pursuing litigation against the former need not establish fault based on traditional negligence, but must instead show that the automated system’s design was inherently defective, rendering it unsafe for use.[13]
But there is a fundamental element that lawmakers have yet to conscientiously explore: beyond the technological modeling of these vehicles lie the legal, moral, and ethical concerns that software and automobile manufacturers cannot accurately model.[14] In point of fact, placing artificially intelligent (“AI”) vehicles, along with the dangerous consequences that accompany them, on public, communal streets is uncharted legal territory.
The underlying principle lies within the realm of voluntariness. By way of example, when NASA places an astronaut on an orbital shuttle en route to the moon, the astronaut volunteers to withstand the potentially hazardous consequences.[15] By contrast, deploying AI-based vehicles on public roads populated by unsuspecting bystanders, cyclists, pedestrians, and other drivers who did not volunteer to be put in danger is an entirely different story.
While this technology has raced ahead of society’s overall expectations, the recent accident involving Uber’s automated, Volvo-manufactured vehicle raises an important question: is an inattentive human driver a reasonable fail-safe in the event the technology requires the human to take over?
Grappling with this novel issue enables society to tone down its relentless appreciation for such technology and thoroughly address the important issues affecting our streets today.[16] Is the current state of this technology developed in a way that is both safe and secure for the public to use? The short answer: probably not. The abundance of lawsuits that have emerged by dint of this technology provides ample evidence that a host of factors still need to be addressed before these vehicles can fully enter the consumer transportation market.
The heart-rending death caused by this accident should serve as an urgent reminder that autonomous vehicles are still in the experimental stage, and that governments are still trying to figure out how to regulate them.