Why the quickest path to human-level AI may be letting it evolve on its own – The Next Web

Posted: November 7, 2019 at 10:44 pm

It's become increasingly clear, as we reach its limits, that deep learning (a specific subset of AI technology) isn't going to magically lead to human-level artificial intelligence.

If we want robots that can think like us, we've got to stop giving them all the answers. Curiosity and exploration are the two key components of the human intellect that deep learning simply doesn't provide.

In a recent article in Quanta Magazine, writer Matthew Hutson describes the work of computer scientist Kenneth Stanley, who is currently working at Uber's AI lab. Stanley's pioneering work in the field of neuroevolution has paved the way for a new artificial intelligence paradigm that eschews traditional objective-based training models in favor of AI models that have no purpose but to explore and be creative.

Hutson writes:

Biological evolution is also the only system to produce human intelligence, which is the ultimate dream of many AI researchers. Because of biology's track record, Stanley and others have come to believe that if we want algorithms that can navigate the physical and social world as easily as we can (or better!) we need to imitate nature's tactics.

Instead of hard-coding the rules of reasoning, or having computers learn to score highly on specific performance metrics, they argue, we must let a population of solutions blossom. Make them prioritize novelty or interestingness instead of the ability to walk or talk. They may discover an indirect path, a set of stepping stones, and wind up walking and talking better than if they'd sought those skills directly.

Standard deep learning models use a black box (a set of weights and parameters that ultimately become too complex for developers to describe individually) to brew up machine learning algorithms and tweak them until they spit out the right data. This isn't intelligence; it's prestidigitation.

If AI could evolve its own solutions and combine those parameters with deep learning, it'd be closer to imitating human-level problem solving. At least, that's what Stanley argues.

His research involves building evolutionary algorithms that can function in tandem with deep learning systems. In essence, rather than teaching an AI to solve a problem, he develops algorithms that meander about, seeing what they're capable of. These systems aren't rewarded for solving a problem, as they would be under normal AI paradigms. They just go until something happens. What's remarkable is that, without a problem to solve, they still manage to solve many kinds of problems far more efficiently than traditional deep learning models.

More from Hutson's article in Quanta:

In one test, they [Stanley and researcher Joel Lehman] placed virtual wheeled robots in a maze and evolved the algorithms controlling them, hoping one would find a path to the exit. They ran the evolution from scratch 40 times. A comparison program, in which robots were selected for how close (as the crow flies) they came to the exit, evolved a winning robot only 3 out of 40 times. Novelty search, which completely ignored how close each bot was to the exit, succeeded 39 times. It worked because the bots managed to avoid dead ends.

Deep learning AI doesn't know what to do when it hits a wall. Once the machine gets stuck, it has to start over again; that's why it takes millions of training cycles to teach an AI how to accomplish a task successfully. With Stanley's evolutionary-algorithm-based hybrid model, the AI isn't trying to find the exit; it's basically just doing stuff and then trying to find more stuff to do. The machine's curiosity forces it through the entire maze almost every time because it's bent on exploring.
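To make that concrete, here's a minimal, self-contained sketch of a novelty-search loop in Python. The toy setup (a robot whose final position stands in for its maze behavior), the genome encoding, and every parameter value are illustrative assumptions, not Stanley and Lehman's actual code. The thing to notice is that selection rewards behaving differently from anything seen before, and nothing in the score measures progress toward an exit.

```python
import math
import random

K_NEAREST = 15       # neighbors used when scoring novelty (assumed value)
POP_SIZE = 50
GENERATIONS = 60
STEPS = 20           # moves per simulated robot

def simulate(genome):
    """Interpret a genome as a sequence of headings; the robot's final
    (x, y) position serves as its behavior descriptor."""
    x = y = 0.0
    for heading in genome:
        x += math.cos(heading)
        y += math.sin(heading)
    return (x, y)

def novelty(behavior, others):
    """Mean distance to the K nearest behaviors seen so far.
    Note: nothing here rewards proximity to an exit."""
    dists = sorted(math.dist(behavior, b) for b in others if b is not behavior)
    nearest = dists[:K_NEAREST]
    return sum(nearest) / len(nearest) if nearest else float("inf")

def mutate(genome):
    return [h + random.gauss(0.0, 0.3) for h in genome]

population = [[random.uniform(0.0, 2.0 * math.pi) for _ in range(STEPS)]
              for _ in range(POP_SIZE)]
archive = []  # behaviors judged novel enough to remember permanently

for gen in range(GENERATIONS):
    behaviors = [simulate(g) for g in population]
    scored = sorted(
        ((novelty(b, archive + behaviors), g, b)
         for g, b in zip(population, behaviors)),
        key=lambda t: t[0],
        reverse=True,
    )
    archive.append(scored[0][2])             # remember the most novel behavior
    parents = [g for _, g, _ in scored[:POP_SIZE // 2]]
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

print("behaviors archived:", len(archive))
```

To turn this into the crow-flies baseline from Hutson's experiment description, you'd simply swap novelty() for a score like negative distance to the exit; everything else in the loop stays the same, which is what makes the 39-out-of-40 versus 3-out-of-40 comparison so striking.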

Evolutionary algorithms aren't new, but the vein of research surrounding them has been largely swept aside in favor of more immediately lucrative development opportunities in standard deep learning technology, the kind that fuels B2B and B2C sales. They're also under-explored because they're expensive: it takes a lot less power to train a narrow-minded AI than it does to run evolutionary algorithms. But the payoff could be huge.

The big idea here is to backdoor human-level intelligence by accident, letting AI evolve its own algorithms through unfettered exploration. Stanley and others believe it's possible that AGI could manifest as a byproduct of machine curiosity, just as human consciousness arose as a result of biological evolution.

Source: "Computers Evolve a New Path Toward Human Intelligence" on Quanta Magazine

