What Ex Machina's Alex Garland Gets Wrong About Artificial Intelligence

Ex Machina opens this weekend. Its director, Alex Garland of 28 Days Later acclaim, appeared on Marketplace today to discuss the role of artificial intelligence in the film. Garland made a claim that is common but, in my view, flawed: "If you had a machine that said 'I don't want you to switch me off,'" he told Ben Johnson, "and you have reason to believe it was telling the truth, you then pretty much immediately have to give it all of the ethical rights and considerations that we give each other."

We cannot grant AI the full set of rights that apply to humans, at least not without radical changes to our laws, norms, and institutions. Society is set up for people: people who are varied, yes, but who share a set of physical, mental, and perhaps spiritual characteristics that law and ethics more or less assume and rely upon.

Consider what I call the Copy or Vote Paradox, a thought experiment that places two fundamental rights next to one another and asks us which we would like to grant to AI, since we cannot grant both. I lay out the Paradox in my recent paper Robotics and the Lessons of Cyberlaw:

Imagine, with [Duke law professor] James Boyle, that an artificial intelligence announces it has achieved self-awareness, a claim no one seems able to discredit. Boyle examines the difficulty we might face in shutting this system down and explores some sensible arguments on either side. But why stop there? Say the intelligence has also read Skinner v. Oklahoma, a Supreme Court case that characterizes the right to procreate as one of the basic civil rights of man. The machine claims the right to make copies of itself (the only way it knows to replicate). These copies believe they should count for purposes of representation in Congress and, eventually, they demand a pathway to suffrage. Of course, conferring such rights to beings capable of indefinitely self-copying would overwhelm our system of governance. Which right do we take away from this sentient entity, then, the fundamental right to copy, or the deep, democratic right to participate?

There is reason to believe we will never be able to recreate so-called strong artificial intelligence. But assuming that we do, neither the intelligence itself, nor the vehicle for that intelligence, will resemble people as we understand them. With protagonists like Ex Machina's Ava, Garland is encouraging what law professor Neil Richards and roboticist Bill Smart term the Android Fallacy, which is to say, thinking of a machine as a person because it resembles one superficially. I would not go as far as Richards and Smart in claiming that robots are just tools like any other; robots and AI raise many novel questions of law and policy. But I caution the audience of Ex Machina, Chappie, Her, Robot & Frank, and other films about robots and AI to think critically about what conferring human rights upon machines would actually entail.

This is not to say we should avoid such topics. Much closer to the mark is the work on display over the next two days at the fourth annual robotics law and policy conference, We Robot (#werobot on Twitter). In these draft papers, you will see considered reflections by people with training in law, computer science, electrical engineering, and other disciplines on how contemporary and future institutions should think about robots and AI. Another excellent resource is the Open Letter from the Future of Life Institute, signed by dozens of top researchers in a variety of fields, which lays out a research agenda for short- and long-term research into responsible machine learning.

And it is certainly not to say we should avoid movies like Ex Machina! The reviews are glowing and the issues the film actually raises are fascinating. I, for one, welcome our robot movies.
