Why Neuroscience Is the Key To Innovation in AI – Singularity Hub

The future of AI lies in neuroscience.

So says Google DeepMind founder Demis Hassabis in a review paper published last week in the prestigious journal Neuron.

Hassabis is no stranger to either field. Armed with a PhD in neuroscience, the computer maverick launched London-based DeepMind to recreate intelligence in silicon. In 2014, Google snapped up the company for over $500 million.

It's money well spent. Last year, DeepMind's AlphaGo wiped the floor with its human competitors in a series of Go challenges around the globe. Working with OpenAI, the non-profit AI research institution backed by Elon Musk, the company is steadily working toward machines with higher reasoning capabilities than ever before.

The company's secret sauce? Neuroscience.

Baked into every DeepMind AI are concepts and ideas first discovered in our own brains. Deep learning and reinforcement learning, two pillars of contemporary AI, both loosely translate biological neuronal communication into formal mathematics.

The results, as exemplified by AlphaGo, are dramatic. But Hassabis argues that it's not enough.

As powerful as today's AIs are, each one is limited in the scope of what it can do. The goal is to build general AI with the ability to think, reason, and learn flexibly and rapidly: AIs that can intuit about the real world and imagine better ones.

To get there, says Hassabis, we need to scrutinize the inner workings of the human mind more closely. The mind is the only proof we have that such an intelligent system is even possible.

Identifying a common language between the two fields will "create a virtuous circle whereby research is accelerated through shared theoretical insights and common empirical advances," Hassabis and colleagues write.

The bar is high for AI researchers striving to bust through the limits of contemporary AI.

Depending on their specific tasks, machine learning algorithms are set up with specific mathematical structures. Through millions of examples, artificial neural networks learn to fine-tune the strength of their connections until they reach the state that lets them complete the task with high accuracy, whether that's identifying faces or translating languages.

Because each algorithm is highly tailored to the task at hand, learning a new task often erases the established connections. This leads to "catastrophic forgetting": as the AI learns the new task, it completely overwrites the previous one.
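Catastrophic forgetting can be seen even in the smallest possible model. The toy sketch below is not from Hassabis's paper; it's a one-parameter linear model trained with plain gradient descent, first on a hypothetical "task A" (fit y = 2x), then on "task B" (fit y = -3x). Because both tasks share the same weight, learning B destroys A.

```python
def train(weight, target_slope, steps=200, lr=0.1):
    """Gradient-descent a one-parameter model y = weight * x toward
    data from y = target_slope * x (a fixed input keeps it deterministic)."""
    for _ in range(steps):
        x = 1.0
        error = weight * x - target_slope * x   # prediction minus target
        weight -= lr * error * x                # gradient step on squared error
    return weight

w = train(0.0, 2.0)    # task A: the weight settles near 2.0
w = train(w, -3.0)     # task B: the SAME weight is reused and overwritten
# The model now handles task B, but its task-A knowledge (w near 2.0) is gone.
```

Real networks have millions of weights rather than one, but the failure mode is the same: nothing protects the connections that encoded the old task while the new one is being learned.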

The dilemma of continuous learning is just one challenge. Others are even less defined but arguably more crucial for building the flexible, inventive minds we cherish.

Embodied cognition is a big one. As Hassabis explains, it's the ability to build knowledge from interacting with the world through sensory and motor experiences, and to create abstract thought from there.

It's the sort of good old-fashioned common sense that we humans have, an intuition about the world that's hard to describe but extremely useful for the daily problems we face.

Even harder to program are traits like imagination. That's where AIs limited to one specific task really fail, says Hassabis. Imagination and innovation rely on models we've already built about our world, and on extrapolating new scenarios from them. They're hugely powerful planning tools, but research into these capabilities for AI is still in its infancy.

It's actually not widely appreciated among AI researchers that many of today's pivotal machine learning algorithms come from research into animal learning, says Hassabis.

An example: recent findings in neuroscience show that the hippocampus, a seahorse-shaped structure that acts as a hub for encoding memory, replays those experiences in fast-forward during rest and sleep.

This offline replay allows the brain to learn anew from successes or failures that occurred in the past, says Hassabis.

AI researchers seized on the idea and implemented a rudimentary version in an algorithm that combined deep learning and reinforcement learning. The result is powerful neural networks that learn from experience: they compare current situations with previous events stored in memory, and take actions that previously led to reward.

These agents show striking gains in performance over traditional deep learning algorithms. They're also great at learning on the fly: rather than needing millions of examples, they need just a handful.
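The engineering counterpart of hippocampal replay is usually called an experience replay buffer: the agent stores past transitions and revisits random batches of them "offline" for extra learning passes. The sketch below is a minimal, generic version (class and parameter names are my own, not DeepMind's code):

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past (state, action, reward, next_state) transitions so an
    agent can relearn from them later, loosely like hippocampal replay."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest memories fall off the end

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Random sampling breaks the correlation between consecutive experiences
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=1000)
for step in range(5):
    buf.add(state=step, action=0, reward=1.0, next_state=step + 1)
batch = buf.sample(3)   # replayed during a "rest" phase between environment steps
```

Sampling at random, rather than replaying experiences in order, is what stabilizes learning: consecutive frames of experience are highly correlated, and training on them directly tends to make gradient updates oscillate.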

Similarly, neuroscience has been a fruitful source of inspiration for other advances in AI, including algorithms equipped with a mental sketchpad that allows them to plan through convoluted problems more efficiently.

But the best is yet to come.

The advent of brain imaging tools and genetic bioengineering is offering an unprecedented look at how biological neural networks organize and combine to tackle problems.

As neuroscientists work to crack the neural code, the basic computations that support brain function, they offer an expanding toolbox for AI researchers to tinker with.

One area where AIs can benefit from the brain is our knowledge of core concepts that relate to the physical world: spaces, numbers, objects, and so on. Like mental Legos, these concepts form the basic building blocks from which we can construct mental models that guide inferences and predictions about the world.

We've already begun exploring ideas to address the challenge, says Hassabis. Studies with humans show that we decompose sensory information into individual objects and relations. Implementing this in code has already led to human-level performance on challenging reasoning tasks.

Then there's transfer learning, the ability that takes AIs from one-trick ponies to flexible thinkers capable of tackling any problem. One method, called progressive networks, captures some of the basic principles of transfer learning and was successfully used to train a real robot arm based on simulations.

Intriguingly, these networks resemble a computational model of how the brain learns sequential tasks, says Hassabis.
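The core trick in progressive networks is structural: the column of weights trained on an old task is frozen, and a new column learns the new task while reusing the old column's outputs through lateral connections. Here is a deliberately tiny sketch of that idea, one weight per "column," with made-up tasks (y = 2x, then y = 5x); it is an illustration of the principle, not DeepMind's architecture:

```python
class Column:
    """A one-weight 'column' of a toy progressive network."""
    def __init__(self, w):
        self.w = w

# Column 1 was trained on task A (y = 2x) and is now frozen.
col1 = Column(w=2.0)

# Column 2 learns task B (y = 5x). It never modifies col1; instead it
# reuses col1's output through a fixed lateral connection.
col2 = Column(w=0.0)
LATERAL = 0.5   # weight on the frozen column's features (held fixed here)

def forward_task_b(x):
    task_a_features = col1.w * x              # frozen task-A knowledge
    return col2.w * x + LATERAL * task_a_features

# Train only col2's weight, so task A can never be forgotten.
for _ in range(300):
    x, y_true = 1.0, 5.0
    error = forward_task_b(x) - y_true
    col2.w -= 0.1 * error * x                 # gradient step on squared error
```

Contrast this with the catastrophic-forgetting setup: because the old column's weight is never updated, task A survives intact while task B is learned, at the cost of the network growing with every new task.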

The problem is that neuroscience hasn't figured out how humans and animals achieve high-level knowledge transfer. It's possible that the brain extracts abstract knowledge structures and how they relate to one another, but so far there's no direct evidence to support this kind of coding.

Without doubt, AIs have a lot to learn from the human brain. But the benefits are reciprocal. Modern neuroscience, for all its powerful imaging tools and optogenetics, has only just begun unraveling how neural networks support higher intelligence.

Neuroscientists often have only quite vague notions of the mechanisms that underlie the concepts they study, says Hassabis. Because AI research relies on stringent mathematics, the field could offer a way to turn those vague concepts into testable hypotheses.

Of course, it's unlikely that AI and the brain will always work the same way. The two fields tackle intelligence from dramatically different angles: neuroscience asks how the brain works and what biological principles underlie it; AI is more utilitarian and free from the constraints of evolution.

But we can think of AI as applied (rather than theoretical) computational neuroscience, says Hassabis, and there's a lot to look forward to.

Distilling intelligence into algorithms and comparing it to the human brain "may yield insights into some of the deepest and most enduring mysteries of the mind," he writes.

Think creativity, dreams, imagination, and, perhaps one day, even consciousness.
