DeepMind Co-founder on AGI and the AI Race – SXSW 2024 – AI Business

Posted: March 14, 2024 at 12:11 am

Artificial general intelligence might be here in a few years, but the full spectrum of practical applications is decades away, according to the co-founder of DeepMind.

Speaking on the sidelines of SXSW 2024, Shane Legg told a group of attendees that while AGI might be achieved in foundation models soon, more factors have to align for it to be practically deployed and used.

He said the cost of AI has to come down and its use in robotics has to mature, among other factors. If it is not economically feasible, companies will not adopt it broadly, no matter how mind-blowing AGI may be. In the meantime, near-term applications of AGI are emerging, including AI-powered scientific research assistants.

Legg, who is the chief AGI scientist at Google DeepMind, suggested the term artificial general intelligence years ago after meeting an author who needed a title for his book on an AI system with broad capabilities, rather than one that excels at just a single task.

Legg suggested inserting the word general between artificial and intelligence. He and a few others started popularizing the term in online forums. Four years later, Legg said, someone else claimed to have coined the term before him.

DeepMind co-founder Shane Legg talking to attendees after his fireside chat

During a fireside chat, Legg defined AGI as a system that can do the sorts of cognitive things people can do and possibly more. He stood by his prior prediction that there is a 50-50 probability AGI will come by 2028.


But such a prognostication seemed wildly optimistic back when the prevailing belief was that AGI remained 50 to 100 years away, if it would come at all.

"For a long time, people wouldn't work on AGI safety because they didn't believe AGI would happen," Legg said. "They would say, 'Oh, it's not going to happen for 100 years, so why would I work on it?'"

But foundation models have become increasingly capable, such that AGI doesn't look like it's that far away, he added. Large models such as Google's Gemini and OpenAI's GPT-4 exhibit hints of AGI capability.

He said current models are at level 3 of AGI, based on the six levels Google DeepMind developed.

Level 3 is the expert level, where the AI model has the same capabilities as at least the 90th percentile of skilled adults. But it remains narrow AI, meaning it is particularly good only at specific tasks. The fifth level is the highest, where the model reaches artificial superintelligence and outperforms all humans.

What AI models still need is akin to the two systems of thinking from psychology, Legg said. System 1 is when one spontaneously blurts out what one is thinking. System 2 is when one thinks through what one plans to say.


He said foundation models today are still at System 1 and need to progress to System 2, where a model can plan, reason through its plan, critique its chosen path, act on it, observe the outcome and make another plan if needed.

"We're not quite there yet," Legg said.

But he believes AI models will get there soon, especially since today's foundation models already show signs of AGI.

"I believe AGI is possible and I think it's coming quite soon," Legg said. When it does come, it will be profoundly transformational to society.

"Consider that today's advances in society came through human intelligence. Imagine adding machine intelligence to the mix, and all sorts of possibilities open up," he said. "It (will be) an incredibly deep transformation."

But big transformations also bring risks.

"It's hard to anticipate how exactly this is going to play out," Legg said. "When you deploy an advanced technology at global scale, you can't always anticipate what will happen when it starts interacting with the world."

There could be bad actors who use the technology for evil schemes, but there are also those who unwittingly mess up the system, leading to harmful results, he pointed out.

Historically, AI safety has fallen into two buckets: immediate risks, such as bias and toxicity in algorithms, and long-term risks from unleashing a superintelligence, including the havoc it could wreak by circumventing guardrails.

Legg said the line between these two buckets has started to blur, given the advancements of the latest foundation models. Powerful foundation models not only exhibit some AGI capabilities but also carry immediate risks of bias, toxicity and more.

"The two worlds are coming together," Legg said.

Moreover, with multimodality, in which foundation models are trained not only on text but also on images, video and audio, they can absorb all the richness and subtlety of human culture, he added. That will make them even more powerful.

Why do scientists need to strive for AGI? Why not stop at narrow AI, since it is proving useful in many industries?

Legg said that several types of problems benefit from very large and diverse datasets. A general AI system will have the underlying know-how and structure to help narrow AI solve a range of related problems.

For example, when human beings learn a new language, it helps if they already know one, so they are familiar with how languages are structured, Legg explained. Similarly, it may help a narrow AI system that excels at a particular task to have access to a general AI system that can bring up related issues.

Also, practically speaking, it may already be too late to stop AGI development, since it has become mission critical to several big companies, Legg said. In addition, scores of smaller companies are pursuing the same goal.

Then there is what he calls the most difficult group of all: intelligence agencies. The National Security Agency (NSA) in the U.S., for example, has more data than anyone else, with access to public information as well as signals intelligence gathered by intercepting data from electronic systems.

"How do you stop all of them?" Legg asked. "Tell me a credible plan to stop them. I'm all ears."
