Ride Or Die: Should Your Self-Driving Car Be Able To Determine Whether You Live?

Though they may seem worlds apart, technology and ethics are in fact more interrelated today than they’ve ever been. As we hurtle toward a future ever more dependent on technology, considering the philosophical implications at hand becomes prudent. In the case of chimeras – embryos in medical research created from animal and human DNA – the ethical questions are obvious. But what about when it comes to, say, our cars?

The reality is, even the most mundane and common forms of technology have been the subject of ethical debates for generations. Today, some of the most provocative philosophizing centers on self-driving cars, which are operated and navigated through the use of artificial intelligence. Uber has already begun implementing autonomous vehicles as part of a test fleet in the U.S., and while the cars do have a human technician sitting in the passenger’s seat, it’s likely only a matter of time before those technicians become obsolete. As such, the need to “write an ethical code” is becoming more prominent, at least according to the digital-technology-focused blog The Illusion of More.

Earlier this month, IOM explored the ethical dilemmas presented by the advent of driverless technology. To begin, writer David Newhoff explains the Trolley Problem. “This hypothetical challenge asks whether or not you would make the decision to divert a speeding train from one track to another knowing that doing so will kill one person but save the lives of several others,” he explains. Clearly, such nuanced thinking is a human trait, and one that is (so far) impossible to assign to artificial intelligence.
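
To see why this is so hard to hand off to a machine, consider what the most naive encoding of the Trolley Problem would look like: a rule that simply counts lives. The Python sketch below (all names hypothetical, not drawn from any real vehicle’s software) reduces the dilemma to exactly that arithmetic.

```python
# A deliberately naive encoding of the Trolley Problem: each outcome
# is reduced to a body count, and the "decision" is whatever minimizes
# deaths. All names here are hypothetical, for illustration only.

def choose_action(deaths_if_nothing: int, deaths_if_diverted: int) -> str:
    """Return the action a pure 'save more lives' rule would take."""
    if deaths_if_diverted < deaths_if_nothing:
        return "divert"
    return "do nothing"

# Five people on the main track, one on the siding:
print(choose_action(deaths_if_nothing=5, deaths_if_diverted=1))  # -> divert
```

The function answers instantly and identically every time, which is precisely the determinism the rest of the piece contrasts with human instinct.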

This particular ethical dilemma is also imbued with psychological implications. As Newhoff writes, “the ways in which we cope with tragedy in modern, developed society—or the anticipation of potential tragedy—does not generally encompass the kind of determinism implied by [the Trolley Problem].” In the case of human drivers (for instance, one who swerves to avoid hitting an animal in the road, only to crash into an oncoming vehicle and kill all of the passengers inside), the split-second decision-making is based not on critical thought but on instinct. Once that instinct is substituted with programmed knowledge, an ethical dilemma presents itself and becomes hard to ignore.

“What happens when humans pre-determine the outcome of certain ethical dilemmas and encode these into machines that we then grant authority to make these decisions?” Newhoff asks. He points to the MIT-created Moral Machine, which allows users to take part in an interactive experiment in which they are shown moral dilemmas involving a driverless car. Users are asked to “judge” which outcome they think is “more acceptable” and then compare their ethical choices to those of others who have been presented with the same dilemmas.
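
Mechanically, the experiment is easy to picture: present a dilemma, record which outcome the user judges “more acceptable,” and compare that vote to everyone else’s. The sketch below is a hypothetical recreation of that loop, not MIT’s actual implementation.

```python
# Hypothetical recreation of a Moral Machine-style survey loop.
# This is not MIT's code; it only illustrates the mechanics of
# "judge the dilemma, then compare your answer to the crowd."

from collections import Counter

# Made-up votes already collected from other users for one dilemma.
aggregate = Counter({"swerve": 620, "stay": 380})

def record_and_compare(user_choice: str) -> str:
    """Record the user's judgment and report how common it is."""
    aggregate[user_choice] += 1
    share = aggregate[user_choice] / sum(aggregate.values())
    return f"{share:.0%} of respondents judged '{user_choice}' more acceptable"

print(record_and_compare("swerve"))  # e.g. 62% of respondents judged 'swerve' ...
```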

“What’s eerie about some of these Moral Machine tests,” Newhoff writes, “is the implication that the data set used to enable an AI to make ethical decisions could theoretically include more than mere numbers (i.e. that the machine would simply default to save more lives than it takes).” For example, age comes into play. Would a self-driving car swerve to avoid a car carrying a newborn and instead crash into a car carrying two 90-year-olds? And what about a passenger’s “value” in society? Should AI dictate that a car carrying the President be saved over a car carrying a Starbucks barista? What about race? Sexual orientation? Criminal history? Clearly, programming ethics-based algorithms into a car’s brain opens the door to countless dilemmas.
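
What would it even mean for the data set to include “more than mere numbers”? Something like the toy scoring function below, a hypothetical sketch with arbitrary, invented weights rather than any manufacturer’s actual algorithm, where each passenger is assigned a value from attributes like age and occupation, and the car saves the higher-scoring group.

```python
# A hypothetical, deliberately unsettling weighting scheme. Nothing
# here reflects real manufacturer code; it only makes concrete what
# "more than mere numbers" would mean in practice.

from dataclasses import dataclass

@dataclass
class Passenger:
    age: int
    occupation: str = "none"

# Arbitrary, made-up weights. The disturbing part is that
# *someone* would have to choose them.
OCCUPATION_WEIGHT = {"president": 10.0, "barista": 1.0}

def group_value(passengers: list[Passenger]) -> float:
    """Score a group of passengers; the higher-scoring group is 'saved'."""
    total = 0.0
    for p in passengers:
        life_years_left = max(0, 80 - p.age)  # crude life-expectancy proxy
        total += life_years_left * OCCUPATION_WEIGHT.get(p.occupation, 1.0)
    return total

newborn = [Passenger(age=0)]
elders = [Passenger(age=90), Passenger(age=90)]
print("save newborn" if group_value(newborn) > group_value(elders) else "save elders")
```

Every one of the article’s questions (race, orientation, criminal history) would simply become another key in a table like OCCUPATION_WEIGHT, with a number chosen by someone.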

“After all, it’s hard not to notice the dystopian implications of a man-made, ethical determinism when we remove the element of chance and cede authority to carry out life-and-death decisions to machines,” says Newhoff. He calls things like chance, fate, and God’s will “psychological buffers” humans use to rationalize or explain tragedy; by removing those buffers, “tragic events naturally beg explanation and, therefore, an instinct to assign blame.” Our cars, then, could become culpable parties in society, but ones impossible to lay blame on in any functional way (as we do with, say, a drunk driver).

Furthermore, as Newhoff posits, programming vehicles with an algorithm that predetermines which lives are worth more than others opens up room for human intervention. That is, people could ostensibly “override any code that might not favor them as the chosen survivors of an accident.” Supposing AI vehicles become the standard, tech-savvy owners of these autonomous vehicles could “jackknife” the entire system in their favor, effectively removing any likelihood of being killed in a car accident. And if these vehicles do indeed become the standard, chances are private ownership would be replaced with government-sanctioned fleets of vehicles, operated entirely under the oversight of an administrative body. Newhoff suggests that brands like Mercedes-Benz and Tesla, which have made bold claims about becoming the go-to makers of consumer-ready driverless passenger vehicles, “are merely stepping stones toward a public system, or a highly regulated one.”
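
Newhoff’s “jackknife” worry is easy to make concrete: if the harm-scoring lives in software the owner can touch, a single tampered weight is enough to tilt every decision. The sketch below is purely illustrative, with hypothetical names, showing how an injected bias would override a “save more lives” default.

```python
# Hypothetical illustration of an owner-side override: a tampered
# multiplier makes the car's own occupants effectively priceless,
# so "minimize harm" always resolves in the owner's favor.

OWNER_BIAS = 1_000_000.0  # weight injected by a tech-savvy owner

def harm_score(lives_at_risk: int, includes_owner: bool) -> float:
    """Lower score = the outcome the car will choose."""
    weight = OWNER_BIAS if includes_owner else 1.0
    return lives_at_risk * weight

# Swerving endangers the 1 occupant (the owner); staying on course
# endangers 3 pedestrians. An unbiased car would swerve.
swerve = harm_score(lives_at_risk=1, includes_owner=True)  # 1,000,000.0
stay = harm_score(lives_at_risk=3, includes_owner=False)   # 3.0
print("swerve" if swerve < stay else "stay the course")    # -> stay the course
```

That a one-line change could quietly sacrifice three strangers to protect one owner is much of the argument for the regulated or public system described above.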

If the government is indeed to become the sole administrative body in the development, testing, and merchandising of our future vehicles, that would only be an extension of the regulatory powers it already holds over safety standards, fuel requirements, and other elements of car manufacturing. But it is an extension of power that carries its own set of philosophical dilemmas, including how the development of that technology is to be funded. As President Obama stated in a recent interview cited by Newhoff, “it is essential that public funding play a role in the development of AI.”

It appears the question is no longer “if” but “when” when it comes to self-driving cars. The technology is already here; the conversation about what it means for us is not. Perhaps it’s one to have on your next road trip.