The Computer Scientist Trying to Teach AI to Learn Like We Do – Quanta Magazine

Posted: August 8, 2022 at 12:34 pm

Kanan has been toying with machine intelligence nearly all his life. As a kid in rural Oklahoma who just wanted to have fun with machines, he taught bots to play early multi-player computer games. That got him wondering about the possibility of artificial general intelligence: a machine with the ability to think like a human in every way. This made him interested in how minds work, and he majored in philosophy and computer science at Oklahoma State University before his graduate studies took him to the University of California, San Diego.

Now Kanan finds inspiration not just in video games, but also in watching his nearly 2-year-old daughter learn about the world, with each new learning experience building on the last. Because of his and others' work, catastrophic forgetting is no longer quite as catastrophic.

Quanta spoke with Kanan about machine memories, breaking the rules of training neural networks, and whether AI will ever achieve human-level learning. The interview has been condensed and edited for clarity.

It has served me very well as an academic. Philosophy teaches you: How do you make reasoned arguments, and how do you analyze the arguments of others? That's a lot of what you do in science. I still have essays from way back then on the failings of the Turing test, and things like that. And so those things I still think about a lot.

My lab has been inspired by asking the question: Well, if we can't do X, how are we going to be able to do Y? We learn over time, but neural networks, in general, don't. You train them once. It's a fixed entity after that. And that's a fundamental thing that you'd have to solve if you want to make artificial general intelligence one day. If it can't learn without scrambling its brain and restarting from scratch, you're not really going to get there, right? That's a prerequisite capability to me.

The most successful method, called replay, stores past experiences and then replays them during training with new examples, so they are not lost. Its inspired by memory consolidation in our brain, where during sleep the high-level encodings of the days activities are replayed as the neurons reactivate.

In other words, for the algorithms, new learning can't completely eradicate past learning, since we are mixing in stored past experiences.

There are three styles for doing this. The most common style is veridical replay, where researchers store a subset of the raw inputs (for example, the original images for an object recognition task) and then mix those stored images from the past in with new images to be learned. The second approach replays compressed representations of the images. A third, far less common, method is generative replay. Here, an artificial neural network actually generates a synthetic version of a past experience and then mixes that synthetic example with new examples. My lab has focused on the latter two methods.
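The veridical-replay idea can be sketched in a few lines of Python. The class and function names below are hypothetical, not from Kanan's own code, and a real continual-learning system would feed the mixed batch to a neural network's training step; this sketch only shows the buffering and mixing.

```python
import random

class ReplayBuffer:
    """Fixed-size store of past (input, label) examples for veridical replay."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0  # total examples offered so far, used for reservoir sampling

    def add(self, example):
        # Reservoir sampling keeps a uniform random subset of everything seen,
        # so old tasks stay represented even as new data streams in.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def mixed_batch(new_examples, buffer, replay_k):
    """Mix fresh examples with replayed past ones for a single training step."""
    batch = list(new_examples) + buffer.sample(replay_k)
    for ex in new_examples:
        buffer.add(ex)  # new data becomes eligible for future replay
    random.shuffle(batch)
    return batch
```

Because every training batch contains replayed examples alongside new ones, gradient updates for the new task also keep fitting the old data, which is what blunts catastrophic forgetting.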

Unfortunately, though, replay isn't a very satisfying solution.

