Artificial general intelligence might be here in a few years, but the full spectrum of practical applications is decades away, according to the co-founder of DeepMind.
Speaking on the sidelines of SXSW 2024, Shane Legg told a group of attendees that while AGI might be achieved in foundation models soon, more factors have to align for it to be practically deployed and used.
He said the cost of AI has to come down and its use in robotics has to mature, among other factors. If it is not economically feasible, companies will not adopt it broadly, no matter how mind-blowing AGI may be. In the meantime, near-term applications of AGI are emerging, including AI-powered scientific research assistants.
Legg, who is the chief AGI scientist at Google DeepMind, suggested the term artificial general intelligence years ago after meeting an author who needed a title for his book about an AI system with broad capabilities, rather than one that excels at just one thing.
Legg suggested inserting the word general between artificial and intelligence. He and a few others started popularizing the term in online forums. Four years later, Legg said someone else claimed to have coined the term before him.
DeepMind co-founder Shane Legg talking to attendees after his fireside chat
During a fireside chat, Legg defined AGI as a system that can do the sorts of cognitive things people can do and possibly more. He stood by his prior prediction that there is a 50-50 probability AGI will come by 2028.
But such a prognostication seemed wildly optimistic at the time, when the prevailing belief was that AGI was 50 to 100 years away, if it was coming at all.
"For a long time, people wouldn't work on AGI safety because they didn't believe AGI would happen," Legg said. "They would say, 'Oh, it's not going to happen for 100 years, so why would I work on it?'"
But foundation models have become increasingly capable, such that AGI doesn't look that far away anymore, he added. Large models such as Google's Gemini and OpenAI's GPT-4 exhibit hints of AGI capability.
He said current models are at level 3 of AGI, based on the six levels Google DeepMind developed.
Level 3 is the expert level, where the AI model performs as well as at least the 90th percentile of skilled adults. But it remains narrow AI, meaning it is good only at specific tasks. The fifth level is the highest, where the model reaches artificial superintelligence and outperforms all humans.
What AI models still need is akin to the two systems of thinking from psychology, Legg said. System 1 is when one spontaneously blurts out what one is thinking. System 2 is when one thinks through what one plans to say.
He said foundation models today are still at System 1 and need to progress to System 2, where a model can plan, reason through its plan, critique its chosen path, act on it, observe the outcome and make another plan if needed.
"We're not quite there yet," Legg said.
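The plan-critique-act-observe cycle Legg describes can be sketched as a simple control loop. This is purely illustrative pseudocode made runnable, not any real DeepMind API; all function names and the termination signal are assumptions for the sake of the sketch.

```python
# A minimal sketch of the System 2 loop Legg describes:
# plan, reason through / critique the plan, act, observe, re-plan.
# Every name here is hypothetical, invented for illustration.

def system2_loop(goal, propose_plan, critique, act, observe, max_steps=10):
    """Iteratively plan toward a goal, revising after each observation."""
    history = []
    for _ in range(max_steps):
        plan = propose_plan(goal, history)   # plan, informed by past outcomes
        if not critique(plan, history):      # reason through / critique it
            continue                         # rejected: draft another plan
        outcome = observe(act(plan))         # act on it, observe the result
        history.append((plan, outcome))
        if outcome == "done":                # goal reached, stop re-planning
            return history
    return history
```

By contrast, today's System 1-style models would correspond to calling `propose_plan` once and acting on its output directly, with no critique or revision step.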
But he believes AI models will get there soon, especially since today's foundation models already show signs of AGI.
"I believe AGI is possible and I think it's coming quite soon," Legg said. "When it does come, it will be profoundly transformational to society."
Consider that today's advances in society came through human intelligence. Imagine adding machine intelligence to the mix, and all sorts of possibilities open up, he said. "It (will be) an incredibly deep transformation."
But big transformations also bring risks.
"It's hard to anticipate how exactly this is going to play out," Legg said. When you deploy an advanced technology at global scale, you can't always anticipate what will happen when it starts interacting with the world.
There could be bad actors who would use the technology for evil schemes, but there are also those who unwittingly mess up the system, leading to harmful results, he pointed out.
Historically, AI safety has fallen into two buckets: immediate risks, such as bias and toxicity in the algorithms, and long-term risks from unleashing a superintelligence, including the havoc it could wreak by circumventing guardrails.
Legg said the line between these two buckets has started to blur as the latest foundation models advance. Powerful foundation models not only exhibit some AGI capabilities but also carry immediate risks of bias, toxicity and more.
"The two worlds are coming together," Legg said.
Moreover, with multimodality, in which foundation models are trained not only on text but also on images, video and audio, they can absorb all the richness and subtlety of human culture, he added. That will make them even more powerful.
Why do scientists need to strive for AGI? Why not stop at narrow AI since it is proving to be useful in many industries?
Legg said that several types of problems benefit from having very large and diverse datasets. A general AI system will have the underlying know-how and structure to help narrow AI solve a range of related problems.
For example, it is easier for human beings to learn a new language if they already know one, because they are familiar with how language is structured, Legg explained. Similarly, it may be helpful for a narrow AI system that excels at a particular task to have access to a general AI system that can bring up related issues.
Also, practically speaking, it may already be too late to stop AGI development, since it has become mission-critical for several big companies, Legg said. In addition, scores of smaller companies are pursuing the same goal.
Then there is what he calls the most difficult group of all: intelligence agencies. For example, the National Security Agency (NSA) in the U.S. has more data than anyone else, with access to public information as well as signals intelligence from the interception of data from electronic systems.
"How do you stop all of them?" Legg asked. "Tell me a credible plan to stop them. I'm all ears."
Original post: DeepMind Co-founder on AGI and the AI Race - SXSW 2024 - AI Business