How “Self-Driving Cars” Blur Accountability

Metaphors are the poetry of our everyday conversations—pithy, vivid, and powerful tools that transform complex ideas into concepts we can easily grasp. But, as with any shortcut, they can lead us astray, especially when they obscure the details we most need to see. This is particularly true in the world of artificial intelligence (AI), where the metaphor of a “self-driving car” promises convenience while quietly sidestepping crucial questions of accountability.

When tragedy strikes, as it sometimes does, who bears responsibility? Is it the car, the driver, the programmer, or the engineer? If we dig deeper, it becomes clear that this metaphor hides more than it reveals. Let’s explore how metaphors, particularly in the case of autonomous vehicles, distort our understanding of accountability and what we can do to ensure that responsibility is properly assigned.


The Seduction of the “Self-Driving Car”

The term “self-driving car” conjures a sleek, futuristic vehicle cruising effortlessly down a highway, making decisions as if it were human. It’s a metaphor designed to inspire trust in the technology and simplify its complexity for public consumption. After all, who wouldn’t want a car that “drives itself,” freeing you to read a book or catch up on emails during your commute?

But this linguistic sleight of hand masks a far more intricate reality. Autonomous vehicles are not truly independent; they are complex systems born from the collaboration of hundreds, if not thousands, of human minds. Programmers write the code. Engineers design the hardware. Managers set budgets that dictate the scope of safety features. Governments create regulations—or fail to. And, ultimately, drivers are often expected to monitor these “self-driving” systems.

The car, then, is not “self-driving” at all. It is, as the late philosopher Bruno Latour might have argued, an “actor-network”—a product of countless human and non-human influences working in concert. To call it “self-driving” is not just misleading; it’s a disservice to the truth.


When Metaphors Collide With Reality: The Case of a Fatal Accident

In March 2018, an Uber test vehicle operating in autonomous mode struck and killed a pedestrian in Tempe, Arizona. The incident sparked international debate about AI ethics and accountability. Early media reports focused on the “failure” of the vehicle, reinforcing the idea that it was the car’s responsibility to avoid the crash. But as investigators dug deeper, a complex web of shared responsibility emerged.

The car’s sensors had detected the pedestrian seconds before impact, but the software never settled on classifying her as a person crossing the road, so it did not brake in time. That behavior was not a machine’s whim; it reflected choices human developers had made about which kinds of obstacles to prioritize and when the system should act. Meanwhile, the safety driver, whose job was to intervene in emergencies, was looking away from the road. And Uber’s management had disabled the vehicle’s built-in automatic emergency braking while it operated in autonomous mode, a factory feature that investigators later concluded could have mitigated or prevented the crash.
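
To see why “the car decided” is the wrong frame, it helps to look at what such a decision is made of. The sketch below is purely hypothetical and is not Uber’s actual software: the labels, the braking threshold, and the `EMERGENCY_BRAKING_ENABLED` flag are all assumptions. It simply illustrates how every “choice” the vehicle appears to make was really made earlier, by people.

```python
# Hypothetical sketch: how human-written rules can decide whether an
# autonomous vehicle brakes. Not any vendor's real code; an illustration
# of how a machine's "decision" is a stack of earlier human decisions.

from dataclasses import dataclass

# A developer decision: which detected object types warrant braking.
BRAKE_FOR = {"pedestrian", "cyclist", "vehicle"}

# A management decision, frozen into a configuration flag (assumed here).
EMERGENCY_BRAKING_ENABLED = False


@dataclass
class Detection:
    label: str                # classifier output, e.g. "unknown_object"
    seconds_to_impact: float  # estimated time until collision


def should_brake(detection: Detection) -> bool:
    """Return True only if the system is configured to brake for this detection."""
    if not EMERGENCY_BRAKING_ENABLED:
        return False  # disabled upstream by a person, not by the car
    if detection.label not in BRAKE_FOR:
        return False  # a developer chose which labels "count"
    return detection.seconds_to_impact < 3.0  # an engineer tuned this threshold


if __name__ == "__main__":
    # A person the classifier never labels as a pedestrian becomes
    # something the system was configured to ignore.
    d = Detection(label="unknown_object", seconds_to_impact=1.3)
    print(should_brake(d))  # False: the outcome traces back to human choices
```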

This tragic incident illustrates the inadequacy of the “self-driving car” metaphor. By implying autonomy, it shields the network of humans behind the machine from scrutiny, making it harder to assign responsibility when things go wrong.


The Layers of Responsibility: Who’s Really to Blame?

Assigning responsibility in the case of autonomous vehicles is like untangling a ball of yarn. Was it the safety driver’s fault for failing to pay attention? Or the programmer’s fault for writing flawed algorithms? Or perhaps the engineer’s for designing inadequate sensors? What about the company executives who prioritized cost savings over safety? Or the policymakers who failed to create robust regulations?

These questions are not theoretical. In our legal systems, determining accountability often requires enormous resources to dissect the chain of causation. This process becomes exponentially more complex when AI is involved because, as Kevin Gibson argues in the Journal of Business Ethics, metaphors like “self-driving car” render the causes of failure even more indecipherable.

A 2020 study by the RAND Corporation highlights this challenge. Researchers found that public trust in autonomous vehicles decreased dramatically when people were unsure who would be held accountable for accidents. This lack of clarity isn’t just an academic problem; it has real-world implications for the adoption of AI technologies.


The Danger of Metaphors in AI

The problem with the “self-driving car” metaphor is not unique. Similar linguistic shortcuts permeate discussions about AI: machines “think,” algorithms “decide,” and systems “learn.” These phrases are convenient but dangerously misleading. They anthropomorphize machines, attributing to them qualities that belong to humans.

Consider this: When an airplane crashes, we don’t say the plane “decided” to crash. We investigate the pilots, the manufacturers, and the air traffic controllers. Yet when a “self-driving” car causes an accident, the metaphor tempts us to hold the car itself accountable, absolving the people behind it of their roles.

Look closely and the metaphor falls apart: the car was never driving alone.


Bridging the Gap: Toward a Culture of Accountability

To ensure that accountability is not lost in the fog of metaphor, we need to adopt a more nuanced approach to AI and its language. Here are three key steps we can take:

  1. Shift the Narrative: We need to abandon simplistic metaphors like “self-driving” and replace them with language that reflects the collaborative nature of these systems. Terms like “human-assisted automation” or “collaborative driving” might be clunkier, but they are also more honest.
  2. Regulate Transparency: Governments and organizations must establish clear guidelines for documenting the design, development, and deployment of AI systems. Who wrote the code? Who approved the budget? Who signed off on disabling a safety feature? These details should be as traceable as an airplane’s black box data; a minimal sketch of what such a record might look like follows this list.
  3. Educate the Public: Public understanding of AI needs to go beyond the buzzwords. As consumers, we should demand transparency and reject narratives that obscure the human roles in AI failures.
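
What would black-box-style traceability look like in practice? The sketch below is a minimal, hypothetical example of a provenance record for a deployed system; the `DeploymentRecord` type and its field names are assumptions for illustration, not an existing standard. It shows the kind of information that would make accountability questions answerable after the fact.

```python
# Hypothetical sketch of a provenance record for a deployed AI system.
# The schema is assumed for illustration, not an existing standard.

import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DeploymentRecord:
    system_name: str                     # e.g. "highway-pilot" (made-up name)
    model_version: str                   # version of the trained model shipped
    code_commit: str                     # commit hash of the surrounding software
    authored_by: list[str]               # teams or individuals who wrote the code
    approved_by: str                     # the person who signed off on deployment
    safety_features_disabled: list[str]  # any factory features turned off, and by whom
    deployed_at: str                     # ISO 8601 timestamp


def sealed(record: DeploymentRecord) -> dict:
    """Serialize the record and attach a hash so later tampering is detectable."""
    payload = asdict(record)
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {"record": payload, "sha256": digest}


if __name__ == "__main__":
    record = DeploymentRecord(
        system_name="highway-pilot",
        model_version="2.4.1",
        code_commit="abc1234",
        authored_by=["perception-team", "controls-team"],
        approved_by="vp.engineering@example.com",
        safety_features_disabled=["factory_emergency_braking"],
        deployed_at=datetime.now(timezone.utc).isoformat(),
    )
    print(json.dumps(sealed(record), indent=2))
```

With records like this, an investigator asking “who wrote the code?” or “who approved switching a safety feature off?” would find names and timestamps rather than a metaphor.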

As Professor Sandra Wachter, an AI ethicist at Oxford University, once said, “Technology is never neutral—it reflects the values, biases, and priorities of the people who create it.” Recognizing this truth is the first step toward creating a culture of accountability.


A Final Reflection: Metaphors and Responsibility

Metaphors shape how we understand the world, but they also shape how we assign blame. The “self-driving car” metaphor, with its seductive simplicity, risks absolving the real actors of their responsibilities. When we fall for such metaphors, we risk turning tragedies into mysteries, leaving victims without justice and society without accountability.

But we can do better. By scrutinizing the metaphors we use and demanding greater transparency, we can ensure that responsibility remains as visible as the technology itself. Because, at the end of the day, the car never drove itself, and our understanding of responsibility should not run on autopilot either.
