Predicting the future of artificial intelligence has always been a fool’s game

From the Dartmouth Conference to the Turing test, prophecies about AI have rarely hit the mark. But there are ways to tell the good predictions from the bad when it comes to futurology

In 1956, a group of the top minds in the field thought they could crack the challenge of artificial intelligence over a single hot New England summer. Almost 60 years later, the world is still waiting.

The "spectacularly wrong prediction" of the Dartmouth Summer Research Project on Artificial Intelligence made Stuart Armstrong, research fellow at the Future of Humanity Institute at University of Oxford, start to think about why our predictions about AI are so inaccurate.

The Dartmouth Conference had predicted that, over two summer months, ten of the brightest people of their generation would solve some of the key problems faced by AI developers, such as getting machines to use language, form abstract concepts and even improve themselves.

If they had been right, we would have had AI back in 1957; today, the conference is credited mainly with having coined the term "artificial intelligence".

Their failure is "depressing" and "rather worrying", says Armstrong. "If you saw the prediction, the rational thing would have been to believe it too. They had some of the smartest people of their time, a solid research programme, sketches of how to approach it and even ideas as to where the problems were."

Now, to help answer the question of why "AI predictions are very hard to get right", Armstrong has recently analysed the Future of Humanity Institute's library of 250 AI predictions. The library stretches back to 1950, when Alan Turing, the father of computer science, predicted that a computer would be able to pass the "Turing test" by 2000. (In the Turing test, a human judge converses by text with a machine; the machine passes if the judge cannot tell its behaviour apart from a human being's.)
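
For the curious, the structure of Turing's test can be sketched as a simple evaluation loop. The sketch below is purely illustrative: the respondent and judge functions are hypothetical placeholders (a real test uses a human judge in free conversation), and the sample question is borrowed from Turing's 1950 paper.

import random

def human_respondent(prompt: str) -> str:
    return "I'd have to think about that."  # stand-in for a real person

def machine_respondent(prompt: str) -> str:
    return "I'd have to think about that."  # stand-in for the AI under test

def judge(transcript: list[tuple[str, str]]) -> str:
    # A real judge would read the exchange and decide; this placeholder
    # guesses at random, so the machine trivially "passes".
    return random.choice(["human", "machine"])

def run_trial(questions: list[str]) -> bool:
    """Return True if the judge misidentifies the hidden respondent."""
    identity = random.choice(["human", "machine"])
    respond = human_respondent if identity == "human" else machine_respondent
    transcript = [(q, respond(q)) for q in questions]
    return judge(transcript) != identity

# Question taken from Turing's "Computing Machinery and Intelligence" (1950).
questions = ["Write me a sonnet on the subject of the Forth Bridge."]
trials = 1000
fooled = sum(run_trial(questions) for _ in range(trials))
# The machine passes if the judge cannot reliably tell the two apart,
# i.e. if the judge's accuracy stays close to chance (around 50%).
print(f"Judge fooled in {fooled}/{trials} trials")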

Later experts have suggested 2013, 2020 and 2029 as dates when a machine would pass the Turing test, which gives us a clue as to why Armstrong feels that such timeline predictions -- all 95 of them in the library -- are particularly worthless. "There is nothing to connect a timeline prediction with previous knowledge as AIs have never appeared in the world before -- no one has ever built one -- and our only model is the human brain, which took hundreds of millions of years to evolve."

His research also suggests that predictions by philosophers are more accurate than those of sociologists or even computer scientists. "We know very little about the final form an AI would take, so if they [the experts] are grounded in a specific approach, they are likely to go wrong, while those on a meta level are very likely to be right."

Although, he adds, that is more a reflection of how bad the rest of the predictions are than of the quality of the philosophers' contributions.
