I, Alexa: Should we give artificial intelligence human rights? – Digital Trends

A few years ago, the subject of AI personhood and legal rights for artificial intelligence would have been something straight out of science fiction. In fact, it was.

Douglas Adams' second Hitchhiker's Guide to the Galaxy book, The Restaurant at the End of the Universe, tells the story of a futuristic smart elevator called the Sirius Cybernetics Corporation Happy Vertical People Transporter. This artificially intelligent elevator works by predicting the future, so it can appear on the right floor to pick you up even before you know you want to get on, thereby eliminating all the tedious chatting, relaxing, and making friends that people were previously forced to do whilst waiting for elevators.

The ethics question, Adams explains, comes when the intelligent elevator becomes bored of going up and down all day, and instead decides to experiment with moving from side to side as a sort of existential protest.

We don't yet have smart elevators, although judging by the kind of lavish headquarters tech giants like Google and Apple build for themselves, that may just be because they've not bothered sharing them with us yet. In fact, as we've documented time and again at Digital Trends, the field of AI is currently making possible a bunch of things we never thought realistic in the past, such as self-driving cars or Star Trek-style universal translators.

Have we also reached the point where we need to think about rights for AIs?

It's pretty clear to everyone that artificial intelligence is getting closer to replicating the human brain inside a machine. At a low level of resolution, we currently have artificial neural networks with more neurons than creatures like honey bees and cockroaches, and they're getting bigger all the time.


Higher up the food chain are large-scale projects aimed at creating more biofidelic algorithms, designed to replicate the workings of the human brain rather than simply being inspired by the way we lay down memories. Then there are projects designed to upload consciousness into machine form, and efforts like the OpenWorm project, which sets out to recreate the connectome (the wiring diagram of the central nervous system) of the tiny hermaphroditic roundworm Caenorhabditis elegans, the only connectome of a living creature humanity has fully mapped.

In a 2016 survey of 175 industry experts, the median expert expected human-level artificial intelligence by 2040, and 90 percent expected it by 2075.

Before we reach that goal, as AI surpasses animal intelligence, we'll have to begin considering whether AIs deserve the kinds of rights we afford animals through ethical treatment. Thinking that it's cruel to force a smart elevator to move up and down may not turn out to be too far-fetched; a few years back, English technology writer Bill Thompson wrote that any attempt to develop AI coded not to hurt us "reflects our belief that an artificial intelligence is and always must be at the service of humanity rather than being an autonomous mind."

The most immediate question we face, however, concerns the legal rights of an AI agent. Simply put, should we consider granting them some form of personhood?

This is not as ridiculous as it sounds, nor does it suggest that AIs have graduated to a particular status in our society. Instead, it reflects the complex reality of the role that they play and will continue to play in our lives.

At present, our legal system largely assumes that we are dealing with a world full of non-smart tools. We may talk about the importance of gun control, but we still hold a person who shoots someone with a gun responsible for the crime, rather than the gun itself. If the gun explodes on its own as the result of a faulty part, we blame the company which made the gun for the damage caused.

So far, this thinking has largely been extrapolated to cover the world of artificial intelligence and robotics. In 1984, the owners of a U.S. company called Athlone Industries wound up in court after their robotic pitching machines for batting practice turned out to be a little too vicious. The case is memorable chiefly because of the judge's proclamation that the suit be brought against Athlone rather than the batting bot, because "robots cannot be sued."

This argument held up in 2009, when a U.K. driver was directed by his GPS system to drive along a narrow cliffside path, resulting in him being trapped and having to be towed back to the main road by police. While he blamed the technology, a court found him guilty of careless driving.


There are multiple differences between the AI technologies of today (and certainly of the future) and yesterday's tech, however. Smart devices like self-driving cars or robots won't just be used by humans but deployed by them, after which they act independently of our instructions. Smart devices equipped with machine learning algorithms gather and analyze information by themselves and then make their own decisions. It may be difficult to blame the creators of the technology, too.


As David Vladeck, a law professor at Georgetown University in Washington, D.C., has pointed out in one of the few in-depth case studies looking at this subject, the sheer number of individuals and firms that participate in the design, modification, and incorporation of an AI's components can make it tough to identify the responsible party. That counts for double when you're talking about black-box AI systems that are inscrutable to outsiders.

Vladeck has written: "Some components may have been designed years before the AI project had even been conceived, and the components' designers may never have envisioned, much less intended, that their designs would be incorporated into any AI system, much less the specific AI system that caused harm. In such circumstances, it may seem unfair to assign blame to the designer of a component whose work was far removed in both time and geographic location from the completion and operation of the AI system. Courts may hesitate to say that the designer of such a component could have foreseen the harm that occurred."

Awarding an AI the status of a legal entity wouldn't be unprecedented. Corporations have long held this status, which is why a corporation can own property or be sued, rather than this having to be done in the name of its CEO or executive board.

Although it hasn't been tested, Shawn Bayern, a law professor at Florida State University, has pointed out that AI may technically already have this status thanks to a loophole: it can be put in charge of a limited liability company, thereby making it a legal person. This might also occur for tax reasons, should a proposal like Bill Gates' robot tax ever be taken seriously on a legal level.

It's not without controversy, however. Granting AIs this status would stop creators being held responsible for actions an AI carries out that they did not explicitly intend. That could also encourage companies to be less diligent with their AI tools, since they could technically fall back on the excuse that those tools acted outside their wishes.

There is also no way to punish an AI, since punishments like imprisonment or death mean nothing to it.

"I'm not convinced that this is a good thing, certainly not right now," Dr. John Danaher, a law professor at NUI Galway in Ireland, told Digital Trends about legal personhood for AI. "My guess is that for the foreseeable future this will largely be done to provide a liability shield for humans and to mask anti-social activities."

It is a compelling area of examination, however, because it doesn't rely on achieving any benchmark of ever-subjective consciousness.

"Today, corporations have legal rights and are considered legal persons, whereas most animals are not," Yuval Noah Harari, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow, told us. "Even though corporations clearly have no consciousness, no personality, and no capacity to experience happiness and suffering, whereas animals are conscious entities."

"Irrespective of whether AI develops consciousness, there might be economic, political, and legal reasons to grant it personhood and rights in the same way that corporations are granted personhood and rights. Indeed, AI might come to dominate certain corporations, organizations, and even countries. This is a path only seldom discussed in science fiction, but I think it is far more likely to happen than the kind of Westworld and Ex Machina scenarios that dominate the silver screen."

At present, these topics still smack of science fiction, but as Harari points out, they may not stay that way for long. Based on how these systems are used in the real world, and the very real attachments people form with them, questions such as who is responsible if an AI causes a person's death, or whether a human can marry his or her AI assistant, will surely be grappled with during our lifetimes.


"The decision to grant personhood to any entity largely breaks down into two sub-questions," Danaher said. "Should that entity be treated as a moral agent, and therefore be held responsible for what it does? And should that entity be treated as a moral patient, and therefore be protected against certain interferences and violations of its integrity? My view is that AIs shouldn't be treated as moral agents, at least not for the time being. But I think there may be cases where they should be treated as moral patients. I think people can form significant attachments to artificial companions and that consequently, in many instances, it would be wrong to reprogram or destroy those entities. This means we may owe duties to AIs not to damage or violate their integrity."

In other words, we shouldn't necessarily allow companies to sidestep the question of responsibility when it comes to the AI tools they create. As AI systems are rolled out into the real world in everything from self-driving cars to financial traders to autonomous drones and robots in combat situations, it's vital that someone is held accountable for what they do.

At the same time, it's a mistake to think of AI as having the same relationship with us that we enjoyed with previous non-smart technologies. There's a learning curve here, and even if we're not yet technologically at the point where we need to worry about cruelty to AIs, that doesn't mean it's the wrong question to ask.

So stop yelling at Siri when it mishears you and asks whether you want it to search the web, alright?
