True AI is both logically possible and utterly implausible

Suppose you enter a dark room in an unknown building. You might panic about monsters that could be lurking in the dark. Or you could just turn on the light, to avoid bumping into furniture. The dark room is the future of artificial intelligence (AI). Unfortunately, many people believe that, as we step into the room, we might run into some evil, ultra-intelligent machines. This is an old fear. It dates to the 1960s, when Irving John Good, a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing, made the following observation:

'Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion", and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.'

Once ultraintelligent machines become a reality, they might not be docile at all but behave like Terminator: enslave humanity as a sub-species, ignore its rights, and pursue their own ends, regardless of the effects on human lives.

If this sounds incredible, you might wish to reconsider. Fast-forward half a century to now, and the amazing developments in our digital technologies have led many people to believe that Good's 'intelligence explosion' is a serious risk, and that the end of our species might be near, if we're not careful. This is Stephen Hawking in 2014:

'The development of full artificial intelligence could spell the end of the human race.'

Last year, Bill Gates was of the same view:

'I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.'

And what had Musk, Tesla's CEO, said?

'I think we should be very careful about artificial intelligence. If I were to guess at what our biggest existential threat is, it's probably that… With artificial intelligence we are summoning the demon.'

The reality is more trivial. This March, Microsoft introduced Tay, an AI-based chat robot, to Twitter. Microsoft had to remove it only 16 hours later. It was supposed to become increasingly smarter as it interacted with humans. Instead, it quickly became an evil, Hitler-loving, Holocaust-denying, incestual-sex-promoting, 'Bush did 9/11'-proclaiming chatterbox. Why? Because it worked no better than kitchen paper, absorbing and being shaped by the nasty messages sent to it. Microsoft apologised.

This is the state of AI today. After so much talking about the risks of ultraintelligent machines, it is time to turn on the light, stop worrying about sci-fi scenarios, and start focusing on AI's actual challenges, in order to avoid making painful and costly mistakes in the design and use of our smart technologies.

Let me be more specific. Philosophy doesn't do nuances well. It might fancy itself a model of precision and finely honed distinctions, but what it really loves are polarisations and dichotomies. Internalism or externalism, foundationalism or coherentism, trolley left or right, zombies or not zombies, observer-relative or observer-independent, possible or impossible worlds, grounded or ungrounded… Philosophy might preach the inclusive vel ('girls or boys may play') but too often indulges in the exclusive aut aut ('either you like it or you don't').

The current debate about AI is a case in point. Here, the dichotomy is between those who believe in true AI and those who do not. Yes, the real thing, not Siri in your iPhone, Roomba in your living room, or Nest in your kitchen (I am the happy owner of all three). Think instead of the false Maria in Metropolis (1927); HAL 9000 in 2001: A Space Odyssey (1968), on which Good was one of the consultants; C-3PO in Star Wars (1977); Rachael in Blade Runner (1982); Data in Star Trek: The Next Generation (1987); Agent Smith in The Matrix (1999); or the disembodied Samantha in Her (2013). You've got the picture. Believers in true AI and in Good's 'intelligence explosion' belong to the Church of Singularitarians. For lack of a better term, I shall refer to the disbelievers as members of the Church of AItheists. Let's have a look at both faiths and see why both are mistaken. And meanwhile, remember: good philosophy is almost always in the boring middle.

Singularitarians believe in three dogmas. First, that the creation of some form of artificial ultraintelligence is likely in the foreseeable future. This turning point is known as a technological singularity, hence the name. Both the nature of such a superintelligence and the exact timeframe of its arrival are left unspecified, although Singularitarians tend to prefer futures that are conveniently close-enough-to-worry-about but far-enough-not-to-be-around-to-be-proved-wrong.

Second, humanity runs a major risk of being dominated by such ultraintelligence. Third, a primary responsibility of the current generation is to ensure that the Singularity either does not happen or, if it does, that it is benign and will benefit humanity. This has all the elements of a Manichean view of the world: Good fighting Evil, apocalyptic overtones, the urgency of 'we must do something now or it will be too late', an eschatological perspective of human salvation, and an appeal to fears and ignorance.

Put all this in a context where people are rightly worried about the impact of idiotic digital technologies on their lives, especially in the job market and in cyberwars, and where mass media daily report new gizmos and unprecedented computer-driven disasters, and you have a recipe for mass distraction: a digital opiate for the masses.

Like all faith-based views, Singularitarianism is irrefutable because, in the end, it is unconstrained by reason and evidence. It is also implausible, since there is no reason to believe that anything resembling intelligent (let alone ultraintelligent) machines will emerge from our current and foreseeable understanding of computer science and digital technologies. Let me explain.

Sometimes, Singularitarianism is presented conditionally. This is shrewd, because the 'then' does follow from the 'if', and not merely in an ex falso quodlibet sense: if some kind of ultraintelligence were to appear, then we would be in deep trouble (not merely 'could', as stated above by Hawking). Correct. Absolutely. But this also holds true for the following conditional: if the Four Horsemen of the Apocalypse were to appear, then we would be in even deeper trouble.

At other times, Singularitarianism relies on a very weak sense of possibility: some form of artificial ultraintelligence could develop, couldn't it? Yes it could. But this 'could' is mere logical possibility: as far as we know, there is no contradiction in assuming the development of artificial ultraintelligence. Yet this is a trick, blurring the immense difference between 'I could be sick tomorrow' when I am already feeling unwell, and 'I could be a butterfly that dreams it's a human being'.

There is no contradiction in assuming that a dead relative you've never heard of has left you $10 million. That could happen. So? Contradictions, like happily married bachelors, aren't possible states of affairs, but non-contradictions, like extra-terrestrial agents living among us so well hidden that we never discovered them, can still be dismissed as utterly crazy. In other words, the 'could' is not the 'could happen' of an earthquake, but the 'it isn't true that it couldn't happen' of thinking that you are the first immortal human. Correct, but not a reason to start acting as if you will live forever. Unless, that is, someone provides evidence to the contrary, and shows that there is something in our current and foreseeable understanding of computer science that should lead us to suspect that the emergence of artificial ultraintelligence is truly plausible.

Here Singularitarians mix faith and facts, often moved, I believe, by a sincere sense of apocalyptic urgency. They start talking about job losses, digital systems at risk, unmanned drones gone awry and other real and worrisome issues about computational technologies that are coming to dominate human life, from education to employment, from entertainment to conflicts. From this, they jump to being seriously worried about their inability to control their next Honda Civic because it will have a mind of its own. How some nasty ultraintelligent AI will ever evolve autonomously from the computational skills required to park in a tight spot remains unclear. The truth is that climbing on top of a tree is not a small step towards the Moon; it is the end of the journey. What we are going to see are increasingly smart machines able to perform more tasks that we currently perform ourselves.

If all other arguments fail, Singularitarians are fond of throwing in some maths. A favourite reference is Moore's Law. This is the empirical claim that, in the development of digital computers, the number of transistors on integrated circuits doubles approximately every two years. The outcome has so far been more computational power for less. But things are changing. Technical difficulties in nanotechnology present serious manufacturing challenges. There is, after all, a limit to how small things can get before they simply melt. Moore's Law no longer holds. Just because something grows exponentially for some time does not mean that it will continue to do so forever, as The Economist put it in 2014:

'Throughout recorded history, humans have reigned unchallenged as Earth's dominant species. Might that soon change? Turkeys, heretofore harmless creatures, have been exploding in size, swelling from an average 13.2lb (6kg) in 1929 to over 30lb today. On the rock-solid scientific assumption that present trends will persist, The Economist calculates that turkeys will be as big as humans in just 150 years. Within 6,000 years, turkeys will dwarf the entire planet.'

From Turkzilla to AIzilla, the step is small, if it weren't for the fact that a growth curve can easily be sigmoid, with an initial stage of growth that is approximately exponential, followed by saturation, slower growth, maturity and, finally, no further growth. But I suspect that the representation of sigmoid curves might be blasphemous for Singularitarians.
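To see how misleading early extrapolation can be, consider a minimal numerical sketch (in Python, with illustrative parameters of my own choosing, not measured data): an exponential curve and a logistic, sigmoid curve that are nearly indistinguishable at the start and then diverge completely.

```python
import math

# Illustrative parameters (my assumptions, not measured data): both
# curves start near 1 and initially double roughly every two steps.
GROWTH_RATE = math.log(2) / 2   # doubling time of two steps
CEILING = 100.0                 # saturation level of the sigmoid

def exponential(t):
    """Unbounded exponential growth."""
    return math.exp(GROWTH_RATE * t)

def logistic(t):
    """Sigmoid growth: exponential at first, then flattening at CEILING."""
    return CEILING / (1 + (CEILING - 1) * math.exp(-GROWTH_RATE * t))

for t in range(0, 41, 5):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  sigmoid={logistic(t):6.1f}")

# Up to about t=10 the two curves look alike; by t=40 the exponential
# has exploded past a million while the sigmoid has flattened near 100.
```

Seen only from its early stretch, nothing distinguishes the curve that dwarfs the planet from the curve that quietly saturates.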

Singularitarianism is irresponsibly distracting. It is a rich-world preoccupation, likely to worry people in leisured societies, who seem to forget about real evils oppressing humanity and our planet. One example will suffice: almost 700 million people have no access to safe water. This is a major threat to humanity. Oh, and just in case you thought predictions by experts were a reliable guide, think twice. There are many staggeringly wrong technological predictions by experts (see some hilarious ones from David Pogue and on Cracked.com). In 2004, Gates stated: 'Two years from now, spam will be solved.' And in 2011, Hawking declared that 'philosophy is dead' (so what's this you are reading?).

The prediction of which I am most fond is by Robert Metcalfe, co-inventor of Ethernet and founder of the digital electronics manufacturer 3Com. In 1995 he promised to eat his words if proved wrong in his prediction that the internet would soon go supernova and in 1996 catastrophically collapse. A man of his word, in 1997 he publicly liquefied his article in a food processor and drank it. I wish Singularitarians were as bold and coherent as he was.

Deeply irritated by those who worship the wrong digital gods, and by their unfulfilled Singularitarian prophecies, disbelievers (the AItheists) make it their mission to prove once and for all that any kind of faith in true AI is totally wrong. AI is just computers, computers are just Turing Machines, Turing Machines are merely syntactic engines, and syntactic engines cannot think, cannot know, cannot be conscious. End of story.

This is why there is so much that computers (still) cannot do, loosely the title of several publications (Ira Wilson, 1970; Hubert Dreyfus, 1972 and 1979; Dreyfus, 1992; David Harel, 2000; John Searle, 2014), though what precisely they can't do is a conveniently movable target. It is also why they are unable to process semantics (of any language, Chinese included, no matter what Google's translations achieve). This proves that there is absolutely nothing to discuss, let alone worry about. There is no genuine AI, so a fortiori there are no problems caused by it. Relax and enjoy all these wonderful electric gadgets.

The AItheists' faith is as misplaced as the Singularitarians'. Both Churches have plenty of followers in California, where Hollywood sci-fi films, wonderful research universities such as Berkeley, and some of the world's most important digital companies flourish side by side. This might not be accidental. When there is big money involved, people easily get confused. For example, Google has been buying AI tech companies as if there were no tomorrow (disclaimer: I am a member of Google's Advisory Council on the right to be forgotten), so surely Google must know something about the real chances of developing a computer that can think, that we, outside The Circle, are missing? Eric Schmidt, Google's executive chairman, fuelled this view when he told the Aspen Institute in 2013: 'Many people in AI believe that we're close to [a computer passing the Turing test] within the next five years.'

The Turing test is a way to check whether AI is getting any closer. You ask questions of two agents in another room; one is human, the other artificial; if you cannot tell the difference between the two from their answers, then the robot passes the test. It is a crude test. Think of the driving test: if Alice does not pass it, she is not a safe driver; but even if she does pass it, she might still be an unsafe driver. The Turing test provides a necessary but insufficient condition for a form of intelligence. This is a really low bar. And yet, no AI has ever got over it. More importantly, all programs keep failing in the same way, using tricks developed in the 1960s. Let me offer a bet. I hate aubergine (eggplant), but I shall eat a plate of it if a software program passes the Turing test and wins the Loebner Prize gold medal before 16 July 2018. It is a safe bet.
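For readers who want the protocol pinned down, here is a minimal sketch of the test's structure in Python (the two respondents are invented stand-ins; the machine deliberately uses a 1960s-style deflection trick of the kind that still dominates chatbot contests):

```python
import random

def human_respondent(question):
    # Stand-in for a human typing an answer (illustrative canned reply).
    return "Honestly, that depends on the day you ask me."

def machine_respondent(question):
    # ELIZA-style trick: deflect by turning the question back.
    return f"Why do you ask: {question.rstrip('?').lower()}?"

def imitation_game(questions):
    """One round: the judge questions two hidden agents, A and B,
    and must later guess which one is the machine. Labels are
    shuffled so position gives nothing away."""
    agents = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(agents)
    return {
        label: (identity, [(q, respond(q)) for q in questions])
        for label, (identity, respond) in zip("AB", agents)
    }

transcripts = imitation_game(["Do you ever get bored?", "What made you laugh today?"])
for label, (identity, exchanges) in transcripts.items():
    for question, answer in exchanges:  # identity stays hidden from the judge
        print(f"{label} <- {question}\n{label} -> {answer}")

# Passing means judges do no better than chance at telling A from B:
# a necessary condition for intelligence, not a sufficient one.
```

Note how little the machine's trick has to do with understanding the question; that is precisely why failing in this way for fifty years is so telling.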

Both Singularitarians and AItheists are mistaken. As Turing clearly stated in the 1950 article that introduced his test, the question 'Can a machine think?' is 'too meaningless to deserve discussion'. (Ironically, or perhaps presciently, that question is engraved on the Loebner Prize medal.) This holds true no matter which of the two Churches you belong to. Yet both Churches continue this pointless debate, suffocating any dissenting voice of reason.

True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-known results that indicate the limits of computation, the so-called undecidable problems, for which it can be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.
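The textbook example is the halting problem. Here is a minimal sketch of Turing's diagonal argument in Python (the halts function is hypothetical by design: the whole point of the argument is that no such function can exist):

```python
# A sketch of the diagonal argument: `halts` is hypothetical by design,
# since the conclusion is that no such total, correct function can exist.

def halts(program, argument):
    """Assume, for contradiction, a perfect oracle returning True
    exactly when program(argument) eventually terminates."""
    raise NotImplementedError("no such algorithm can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on itself.
    if halts(program, program):
        while True:          # loop forever if the oracle says "halts"
            pass
    return "halted"          # halt at once if the oracle says "loops"

# Does paradox(paradox) halt? If halts(paradox, paradox) returned True,
# paradox(paradox) would loop forever; if it returned False, it would
# halt. Either way the oracle is wrong, so halting is undecidable.
```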

We know, for example, that our computational machines satisfy the Curry-Howard correspondence, which indicates that proof systems in logic, on the one hand, and the models of computation, on the other, are in fact structurally the same kind of objects, and so any logical limit applies to computers as well. Plenty of machines can do amazing things, including playing checkers, chess, Go, and the quiz show Jeopardy! better than we do. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.
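A Turing Machine is simple enough to fit in a few lines. Here is a toy simulator in Python (my own illustrative example: a two-state machine that increments a binary counter); everything a chess engine or a question-answering system does is, mathematically speaking, more of exactly this:

```python
def run_turing_machine(rules, tape, state, blank="0", max_steps=1000):
    """Simulate a one-tape Turing Machine.
    rules maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = cells.get(head, blank)
        cells[head], move, state = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# Toy machine: binary increment, with the tape written least significant
# bit first and the head starting on that bit.
rules = {
    ("carry", "1"): ("0", "R", "carry"),  # 1 plus a carry: write 0, carry on
    ("carry", "0"): ("1", "R", "HALT"),   # 0 plus a carry: write 1, done
}
print(run_turing_machine(rules, "111", "carry"))  # "0001", i.e. 7 + 1 = 8
```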

Quantum computers are constrained by the same limits, the limits of what can be computed (the so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine. The point is that our smart technologies, also thanks to the enormous amount of available data and some very sophisticated programming, are increasingly able to deal with more tasks better than we do, including predicting our behaviours. So we are not the only agents able to perform tasks successfully.

This is what I have defined as the Fourth Revolution in our self-understanding. We are not at the centre of the Universe (Copernicus), of the biological kingdom (Charles Darwin), or of rationality (Sigmund Freud). And after Turing, we are no longer at the centre of the infosphere, the world of information processing and smart agency, either. We share the infosphere with digital technologies. These are ordinary artefacts that outperform us in ever more tasks, despite being no cleverer than a toaster. Their abilities are humbling and make us reevaluate human exceptionality and our special role in the Universe, which remains unique. We thought we were smart because we could play chess. Now a phone plays better than a Grandmaster. We thought we were free because we could buy whatever we wished. Now our spending patterns are predicted by devices as thick as a plank.

The success of our technologies depends largely on the fact that, while we were speculating about the possibility of ultraintelligence, we increasingly enveloped the world in so many devices, sensors, applications and data that it became an IT-friendly environment, where technologies can replace us without having any understanding, mental states, intentions, interpretations, emotional states, semantic skills, consciousness, self-awareness or flexible intelligence. Memory (as in algorithms and immense datasets) outperforms intelligence when landing an aircraft, finding the fastest route from home to the office, or discovering the best price for your next fridge.
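'Finding the fastest route' is a telling case: it requires no understanding whatsoever, only stored map data and a classical shortest-path algorithm. Here is a minimal sketch (Dijkstra's algorithm in Python, over an invented toy road network with made-up travel times):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest travel time from start to goal; graph maps each node
    to a list of (neighbour, minutes) edges."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, cost in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (minutes + cost, neighbour, path + [neighbour]))
    return None

# Invented toy road network: edge weights are minutes of driving.
roads = {
    "home":        [("ring_road", 10), ("high_street", 7)],
    "high_street": [("centre", 12)],
    "ring_road":   [("centre", 4), ("office", 15)],
    "centre":      [("office", 5)],
}
print(dijkstra(roads, "home", "office"))
# (19, ['home', 'ring_road', 'centre', 'office'])
```

Nothing in those few lines interprets, intends or understands anything; it is memory plus bookkeeping, and it beats human judgment at the task every time.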

Digital technologies can do more and more things better than us, by processing increasing amounts of data and improving their performance by analysing their own output as input for the next operations. AlphaGo, the computer program developed by Google DeepMind, won at the board game Go against the world's best player because it could use a database of around 30 million moves and play thousands of games against itself, learning how to improve its performance. It is like a two-knife system that can sharpen itself (a toy sketch of such a self-sharpening loop follows the list below). What's the difference? The same as between you and the dishwasher when washing the dishes. What's the consequence? That any apocalyptic vision of AI can be disregarded. We are and shall remain, for any foreseeable future, the problem, not our technology. So we should concentrate on the real challenges. By way of conclusion, let me list five of them, all equally important.

We should make AI environment-friendly. We need the smartest technologies we can build to tackle the concrete evils oppressing humanity and our planet, from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality and appalling living standards.

We should make AI human-friendly. It should be used to treat people always as ends, never as mere means, to paraphrase Immanuel Kant.

We should make AI's stupidity work for human intelligence. Millions of jobs will be disrupted, eliminated and created; the benefits of this should be shared by all, and the costs borne by society.

We should make AI's predictive power work for freedom and autonomy. Marketing products, influencing behaviours, nudging people or fighting crime and terrorism should never undermine human dignity.

And finally, we should make AI make us more human. The serious risk is that we might misuse our smart technologies, to the detriment of most of humanity and the whole planet. Winston Churchill said that 'we shape our buildings and afterwards our buildings shape us'. This applies to the infosphere and its smart technologies as well.
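As promised above, here is a deliberately tiny sketch of a self-sharpening, self-play loop in Python (my own toy hill-climbing example, emphatically not DeepMind's actual method, which combines deep neural networks with tree search): a strategy improves at a trivial game purely by playing against itself and keeping whatever wins more often.

```python
import random

OPTIMUM = 0.73  # hidden "best move" of a deliberately trivial game

def play(a, b):
    """One noisy game: the strategy closer to the optimum usually wins."""
    score_a = -abs(a - OPTIMUM) + random.gauss(0, 0.05)
    score_b = -abs(b - OPTIMUM) + random.gauss(0, 0.05)
    return "a" if score_a > score_b else "b"

champion = random.random()          # start from an arbitrary strategy
for generation in range(200):
    # Mutate the current champion and let the two versions play a match.
    challenger = min(1.0, max(0.0, champion + random.gauss(0, 0.1)))
    wins = sum(play(challenger, champion) == "a" for _ in range(51))
    if wins > 25:                   # challenger beat the champion: keep it
        champion = challenger

print(f"learned strategy: {champion:.2f} (optimum: {OPTIMUM})")
# The loop converges near the optimum with no understanding of the game:
# its own past output is simply recycled as input for the next round.
```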

Singularitarians and AItheists will continue their diatribes about the possibility or impossibility of true AI. We need to be tolerant. But we do not have to engage. As Virgil suggests in Dante's Inferno: 'Speak not of them, but look, and pass them by.' For the world needs some good philosophy, and we need to take care of more pressing problems.
