What our original drama The Intelligence Explosion tells us about AI

The Intelligence Explosion, an original drama published by the Guardian, is obviously a work of fiction. But the fears behind it are very real, and have led some of the biggest brains in artificial intelligence (AI) to reconsider how they work.

The film dramatises a near-future conversation between the developers of an artificial general intelligence named Günther and an ethical philosopher. Günther himself (itself?) sits in, making fairly cringeworthy jokes and generally missing the point. Until, suddenly, he doesn't.

It shows an event which has come to be known in the technology world as the singularity: the moment when an artificial intelligence that has the ability to improve itself starts doing so at exponential speeds. The crucial moment is the period when AI becomes better at developing AI than people are. Up until that point, AI capability can only improve as quickly as AI research progresses, but once AI is involved in its own creation, a feedback loop begins. AI makes better AI, which is even better at making even better AI.
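To make that feedback loop concrete, here is a toy simulation (an illustration of the idea only, not anything from the film or from actual AI research, and every number in it is arbitrary): capability creeps up at a fixed rate while humans drive the research, then starts compounding once the system passes a hypothetical threshold and begins contributing to its own improvement.

```python
# Toy model of the "intelligence explosion" feedback loop described above.
# All values are arbitrary illustrations, not predictions.

human_research_rate = 1.0      # capability gained per year while humans do the work
self_improvement_gain = 0.5    # extra growth factor once the AI improves itself
takeoff_threshold = 10.0       # hypothetical point where the AI out-researches humans

capability = 1.0
for year in range(1, 16):
    if capability < takeoff_threshold:
        # Before the threshold: progress is limited by human researchers (roughly linear).
        capability += human_research_rate
    else:
        # After the threshold: the AI improves the thing doing the improving,
        # so gains compound (roughly exponential).
        capability *= (1 + self_improvement_gain)
    print(f"year {year:2d}: capability {capability:8.1f}")
```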

It may not end with a robot bursting into a cloud of stars and deciding to ascend to a higher plane of existence, but it's not far off. A super-intelligent AI could be so much more intelligent than a human being that we can't even comprehend its actual abilities; trying would be as futile as explaining to an ant how wireless data transfer works.

So one big question for AI researchers is whether this event will be good or bad for humanity. And that's where the ethical philosophy comes in.

Dr Nick Bostrom, a philosopher at the University of Oxford, presented one of the most popular explanations of the problem in his book Superintelligence. Suppose you create an artificial intelligence designed to do one thing: in his example, running a factory that makes paperclips. In a bid for efficiency, however, you decide to programme the artificial intelligence with another set of instructions as well, commanding it to improve its own processes to become better at making paperclips.

For a while, everything goes well: the AI chugs along making paperclips, occasionally suggesting that a piece of machinery be moved, or designing a new alloy for the smelter to produce. Sometimes it even improves its own programming, with the rationale that the smarter it is, the better it can think of new ways to make paperclips.

But one day, the exponential increase happens: the paperclip factory starts getting very smart, very quickly. One day it's a basic AI, the next it's as intelligent as a person. The day after that, it's as smart as all of humanity combined, and the day after that, it's smarter than anything we can imagine.

Unfortunately, despite all of this, its main directive is unchanged: it just wants to make paperclips. As many as possible, as efficiently as possible. It would start strip-mining the Earth for the raw materials, except it's already realised that doing so would probably spark resistance from the pesky humans who live on the planet. So, pre-emptively, it kills them all, leaving nothing standing between it and a lot of paperclips.

That's the worst possible outcome. But obviously having an extremely smart AI on the side of humanity would be a pretty good thing. So one way to square the circle is by teaching ethics to artificial intelligences, before it's too late.

In that scenario, the paperclip machine would be told to make more paperclips, but only if it's ethical to do so. That way, it probably won't murder humanity, which most people consider a positive outcome.

The downside is that to code that into an AI, you sort of need to solve the entirety of ethics and write it in computer-readable format. Which is, to say the least, tricky.
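To see why, consider a deliberately naive sketch (not from Bostrom's book or any real system, just an illustration): the constrained rule itself is trivial to write down, but all of the difficulty hides inside the ethics check, which nobody knows how to implement.

```python
def is_ethical(action) -> bool:
    # This is the hard part: it would need a complete, machine-readable
    # theory of ethics that philosophers have not agreed on for humans,
    # let alone formalised for machines.
    raise NotImplementedError("solve ethics first")

def choose_action(candidate_actions, paperclips_made):
    # The rule itself is easy to state: of the actions that pass the ethics
    # check, pick whichever one makes the most paperclips.
    permitted = [a for a in candidate_actions if is_ethical(a)]
    return max(permitted, key=paperclips_made, default=None)
```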

Ethical philosophers can't even agree on what the best ethical system is for people. Is it ethical to kill one person to save five? Or to lie when a madman with an axe asks where your neighbour is? Some of the best minds in moral philosophy disagree over those questions, which doesn't bode well for the prospect of coding morality into an AI.

Problems like this are why the biggest AI companies in the world are paying keen attention to questions of ethics. DeepMind, the Google subsidiary which produced the first AI able to beat a human professional at the ancient board game Go, has a shadowy ethics and safety board, for instance. The company hasn't said who's on it, or even whether it has met, but early investors say that its creation was a key part of why Google's bid to acquire DeepMind was successful. Other companies, including IBM, Amazon and Apple, have also joined forces, forming the Partnership on AI, to lead from the top.

For now, though, the singularity still exists only in the world of science fiction. All we can say for certain is that when it does come, it probably won't have Günther's friendly attitude front and centre.
