Neuromodulation Is the Secret Sauce for This Adaptive, Fast-Learning AI

As obstinate and frustrating as we are sometimes, humans in general are pretty flexible when it comes to learning, especially compared to AI.

Our ability to adapt is deeply rooted within our brain's chemical base code. Although modern AI and neurocomputation have largely focused on loosely recreating the brain's electrical signals, chemicals are actually the prima donna of brain-wide neural transmission.

Chemical neurotransmitters not only allow most signals to jump from one neuron to the next, they also feed back on and fine-tune a neuron's electrical signals to ensure they're functioning properly in the right contexts. This process, traditionally dubbed neuromodulation, has been front and center in neuroscience research for many decades. More recently, the idea has expanded to also include directly changing electrical activity through electrode stimulation rather than chemicals.

Neural chemicals are the targets of most current medicinal drugs that rejigger brain functions and states, such as antidepressants or anxiolytics. Neuromodulation is also an immensely powerful way for the brain to flexibly adapt, which is why it's perhaps surprising that the mechanism has rarely been explicitly incorporated into AI methods that mimic the brain.

This week, a team from the University of Liège in Belgium went old school. Using neuromodulation as inspiration, they designed a new deep learning model that explicitly adopts the mechanism to better learn adaptive behaviors. When challenged with a difficult navigational task, the team found that neuromodulation allowed the artificial neural net to better adjust to unexpected changes.

"For the first time, cognitive mechanisms identified in neuroscience are finding algorithmic applications in a multi-tasking context. This research opens perspectives in the exploitation in AI of neuromodulation, a key mechanism in the functioning of the human brain," said study author Dr. Damien Ernst.

Neuromodulation often appears in the same breath as another jargon-y word, neuroplasticity. Simply put, both just mean that the brain has mechanisms to adapt; that is, neural networks are flexible, or plastic.

Cellular neuromodulation traces back to perhaps the grandfather of all learning theories in the brain: Hebbian learning. Famed Canadian psychologist and neural network pioneer Dr. Donald Hebb popularized the theory in 1949, and it's now often summarized as "neurons that fire together, wire together." On a high level, Hebbian learning describes how individual neurons flexibly change their activity levels so that they better hook up into neural circuits, which underlie most of the brain's computations.
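In code, Hebb's rule is nearly a one-liner: a connection strengthens in proportion to how often the neurons on both ends are active together. Here's a minimal illustrative sketch (not from the study), with made-up sizes and learning rate:

```python
import numpy as np

rng = np.random.default_rng(0)
pre = rng.random(5)       # activity of 5 presynaptic neurons
post = rng.random(3)      # activity of 3 postsynaptic neurons
W = np.zeros((3, 5))      # connection strengths between them

lr = 0.01                 # learning rate
# Hebb's rule: neurons that fire together, wire together.
W += lr * np.outer(post, pre)
```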

However, neuromodulation goes a step further. Here, neurochemicals such as dopamine don't necessarily wire up neural connections directly. Rather, they fine-tune how likely a neuron is to activate and link up with its neighbors. These so-called neuromodulators work like a temperature dial: depending on context, they either tell a neuron to calm down, so that it only activates when receiving a larger input, or hype it up, so that it jumps into action after a smaller stimulus.
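To make the dial analogy concrete, here's a minimal sketch of what modulating a single activation function could look like. The `gain` and `threshold` knobs are illustrative stand-ins for modulatory signals, not the study's actual formulation:

```python
import numpy as np

def modulated_activation(x, gain=1.0, threshold=0.0):
    """Sigmoid activation whose shape is set by modulatory signals.

    A higher `threshold` means the neuron needs a larger input before
    it fires ("calm down"); a higher `gain` makes it respond sharply
    to small inputs ("hype it up").
    """
    return 1.0 / (1.0 + np.exp(-gain * (x - threshold)))

x = np.linspace(-3, 3, 7)
print(modulated_activation(x))                  # baseline response
print(modulated_activation(x, gain=3.0))        # excitable neuron
print(modulated_activation(x, threshold=1.5))   # dampened neuron
```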

"Cellular neuromodulation provides the ability to continuously tune neuron input/output behaviors to shape their response to external stimuli in different contexts," the authors wrote. This level of adaptability especially comes into play when we try things that need continuous adjustments, such as how our feet strike uneven ground when running, or complex multitasking navigational tasks.

To be very clear, neuromodulation isn't directly changing synaptic weights. (Ugh, what?)

Stay with me. You might know that a neural network, either biological or artificial, is a bunch of neurons connected to each other through different strengths. How readily one neuron changes a neighboring neuron's activity, or how strongly the two are linked, is often called the synaptic weight.

Deep learning algorithms are made up of multiple layers of neurons linked to each other through adjustable weights. Traditionally, tweaking the strengths of these connections, or synaptic weights, is how a deep neural net learns (for those interested, the biological equivalent is dubbed synaptic plasticity).
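For the curious, this is the textbook picture being described: learning as nudging the weights themselves. A generic gradient-descent sketch (illustrative, not the study's code):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))      # synaptic weights: 3 inputs -> 4 neurons
x = rng.normal(size=3)           # input pattern
target = np.zeros(4)             # desired output

lr = 0.1                         # learning rate
for _ in range(100):
    y = np.tanh(W @ x)                       # forward pass
    error = y - target                       # squared-error gradient w.r.t. y
    grad = np.outer(error * (1 - y**2), x)   # chain rule through tanh
    W -= lr * grad                           # learning = adjusting the weights

print(np.abs(np.tanh(W @ x)).max())          # near zero after training
```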

However, neuromodulation doesn't directly act on weights. Rather, it alters how readily a neuron or network can change those connections; that is, its flexibility.

Neuromodulation is a meta-level of control, so it's perhaps not surprising that the new algorithm is actually composed of two separate neural networks.

The first is a traditional deep neural net, dubbed the main network. It processes input patterns and uses a custom method of activation: how likely a neuron in this network is to spark to life depends on the second network, the neuromodulatory network. Here, the neurons don't process input from the environment. Rather, they deal with feedback and context to dynamically control the properties of the main network.
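Under heavy simplification, that two-network arrangement might look something like the sketch below: a small neuromodulatory network reads a feedback/context vector and emits one gain and one threshold per main-network neuron, reshaping the main network's activations. Layer sizes and variable names are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N_HIDDEN = 16

# Main network: maps the agent's observations to actions.
W1 = rng.normal(scale=0.5, size=(N_HIDDEN, 8))
W2 = rng.normal(scale=0.5, size=(2, N_HIDDEN))

# Neuromodulatory network: maps feedback/context to one gain and one
# threshold per main-network neuron.
Wm = rng.normal(scale=0.5, size=(2 * N_HIDDEN, 4))

def forward(obs, context):
    mod = Wm @ context                      # modulatory pass
    gain = 1.0 + mod[:N_HIDDEN]             # excitability dial
    threshold = mod[N_HIDDEN:]              # firing-threshold dial
    pre = W1 @ obs                          # main pass...
    h = np.tanh(gain * (pre - threshold))   # ...with modulated activations
    return W2 @ h

obs = rng.normal(size=8)        # what the agent currently sees
context = rng.normal(size=4)    # feedback signal, e.g. recent reward
print(forward(obs, context))
```

Notice that in this toy version the modulatory output grows with the number of main-network neurons (2 × 16 values here), not with the main network's connection count, which echoes the scaling property the authors highlight next.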

Especially important, said the authors, is that the modulatory network scales in size with the number of neurons in the main one, rather than the number of their connections. It's what makes the NMN (neuromodulated network) different, they said, because this setup "allows us to extend more easily to very large networks."

To gauge the adaptability of their new AI, the team pitted the NMN against traditional deep learning algorithms in a scenario using reinforcement learning, that is, learning through wins and mistakes.

In two navigational tasks, the AI had to learn to move toward several targets through trial and error alone. It's somewhat analogous to playing hide-and-seek while blindfolded in a completely new venue. The first task is relatively simple: you're only moving toward a single goal, and you can take off your blindfold to check where you are after every step. The second is more difficult in that you have to reach one of two marks. The closer you get to the actual goal, the higher the reward: candy in real life, and a digital analog for AI. If you stumble onto the other mark, you get punished, the AI equivalent of a slap on the hand.
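To picture the incentive structure, a reward function for that second task might look roughly like this. The shape and numbers are guesses for illustration; the paper defines its own rewards:

```python
import numpy as np

def reward(agent_pos, goal, decoy, radius=0.5):
    """Illustrative reward shaping for the two-target task."""
    if np.linalg.norm(agent_pos - decoy) < radius:
        return -1.0                  # stumbled onto the wrong mark
    dist = np.linalg.norm(agent_pos - goal)
    return 1.0 / (1.0 + dist)        # sweeter the closer you get

pos = np.array([0.5, 0.5])
print(reward(pos, goal=np.array([1.0, 1.0]), decoy=np.array([-1.0, 0.0])))
```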

Remarkably, NMNs learned both faster and better than traditional reinforcement learning deep neural nets. Regardless of how they started, NMNs were more likely to figure out the optimal route towards their target in much less time.

Over the course of learning, NMNs not only used their neuromodulatory network to change their main one, they also adapted the modulatory network itself. Talk about meta! It means that as the AI learned, it didn't just flexibly adapt its learning; it also changed how it influenced its own behavior.

In this way, the neuromodulatory network is a bit like a library of self-help books: you don't just solve a particular problem, you also learn how to solve problems in general. The more information the AI got, the faster and better it fine-tuned its own strategy to optimize learning, even when feedback wasn't perfect. The NMN also didn't like to give up: even when already performing well, the AI kept adapting to further improve itself.

"Results show that neuromodulation is capable of adapting an agent to different tasks and that neuromodulation-based approaches provide a promising way of improving adaptation of artificial systems," the authors said.

The study is just the latest in a push to incorporate more biological learning mechanisms into deep learning. We're at the beginning: neuroscientists, for example, are increasingly recognizing the role of non-neuron brain cells in modulating learning, memory, and forgetting. Although computational neuroscientists have begun incorporating these findings into models of biological brains, so far AI researchers have largely brushed them aside.

It's difficult to know which brain mechanisms are necessary substrates for intelligence and which are evolutionary leftovers, but one thing is clear: neuroscience is increasingly providing AI with ideas outside its usual box.

Image Credit: Gerd Altmann from Pixabay
