What Happens When Robots Become Judges? A New Study Examines The Question (Audio)

It’s everywhere, from the self-checkout aisles at the grocery store to the self-driving vehicles appearing on roadways across the country. Artificial intelligence is more the technology of today than of the future, but there are still aspects of it we have not yet fully grappled with. The ethics of adopting AI into more and more facets of our everyday lives carry by far the biggest implications, for it is within ethics that we distinguish what is right from what is wrong. In some instances – for example, when using artificial intelligence to automatically detect a gas leak and alert the police – its incorporation is undeniably “good.” But things get less cut-and-dried when moving into the realm of robotics, medicine, law and the more complex philosophical corners of modern life.

As such, there are pockets of academia in which the exploration of ethics and artificial intelligence is being pursued. According to a recent report from NPR, a group of scholars at Carnegie Mellon University tasked with such an endeavor was given a $10 million grant by a law firm. The Partnership on Artificial Intelligence to Benefit People and Society, a think tank of sorts established by industry leaders, already grapples with the concept as well. However, this group of scholars is unique in that it will likely publish one of the first and most prominent sets of findings on the subject – findings that will hopefully become a founding chapter in the global study of the field.

Much of the study of AI as it pertains to ethics comes, interestingly enough, from science fiction – namely, Isaac Asimov’s “Three Laws of Robotics,” which state: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Plug in “AI” for “robot,” and you get a basic sense of the contours of what is – for all intents and purposes – still a burgeoning field.
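
To make that hierarchy concrete, here is a minimal, purely illustrative Python sketch – a toy, not any real robotics framework – in which each law acts as a priority-ordered veto. The `Action` fields and the `violated_law` function are hypothetical names invented for this example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    # Hypothetical predicates describing an action's predicted outcome.
    injures_human: bool = False        # First Law: direct harm
    allows_human_harm: bool = False    # First Law: harm through inaction
    disobeys_human_order: bool = False # Second Law: disobedience
    destroys_self: bool = False        # Third Law: self-destruction

def violated_law(action: Action) -> Optional[int]:
    """Return the number of the highest-priority law the action breaks, if any."""
    # First Law outranks everything: no harming humans, directly or by inaction.
    if action.injures_human or action.allows_human_harm:
        return 1
    # Second Law yields to the First: disobedience only counts as a violation
    # here because any harm-causing order was already vetoed above.
    if action.disobeys_human_order:
        return 2
    # Third Law comes last: self-preservation gives way to the other two.
    if action.destroys_self:
        return 3
    return None

# A robot ordered to injure someone must refuse: Law 1 outranks Law 2.
print(violated_law(Action(injures_human=True)))  # -> 1
```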

As Peter Kalis, chairman of the law firm in question, tells NPR, the breakneck speed at which technology is advancing makes it hard for us to answer questions about ethics in real time. The most common question is what happens when you make robots that are smart, independent thinkers and then try to limit their autonomy – a scenario that conjures up images of a robot apocalypse, in which human creations become omnipotent and seize power from us. But there are many more nuanced questions involved with AI. For example, as Ambrosia for Heads has previously asked, “Should Your Self-Driving Car Be Able To Determine Whether You Live?” As Kalis posits, “One expert said we’ll be at a fulcrum point when you give an instruction to your robot to go to work in the morning and it turns around and says, ‘I’d rather go to the beach.’ Or, more perilously, if we were to launch a robot on the battlefield and all of [a] sudden it took a more partial liking to the enemy than it did to its human sponsor.”

Eventually, such questions will have to exist side by side with our system of laws and, in the case of this particular group of scholars, the U.S. Constitution. “It says that every person should benefit from equal protection under the law. Well, I don’t think anyone contemplated that person would include an artificially intelligent robot,” says Kalis, before mentioning that he often hears people “maintaining that artificially intelligent robots ought to replace judges.” When – not if – we get to that point, he says, “it’s a matter of profound constitutional and social consequence for any country, any nation which prizes the rule of law.”

For more information on Carnegie Mellon’s plans to study artificial intelligence and ethics, visit the university’s official website.