{"id":1122964,"date":"2024-03-14T00:11:14","date_gmt":"2024-03-14T04:11:14","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/deepmind-co-founder-on-agi-and-the-ai-race-sxsw-2024-ai-business\/"},"modified":"2024-03-14T00:11:14","modified_gmt":"2024-03-14T04:11:14","slug":"deepmind-co-founder-on-agi-and-the-ai-race-sxsw-2024-ai-business","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-general-intelligence\/deepmind-co-founder-on-agi-and-the-ai-race-sxsw-2024-ai-business\/","title":{"rendered":"DeepMind Co-founder on AGI and the AI Race &#8211; SXSW 2024 &#8211; AI Business"},"content":{"rendered":"<p><p>Artificial general intelligence might be here in a few years, but the full spectrum of practical applications is decades away, according to a co-founder of DeepMind.<\/p>\n<p>Speaking on the sidelines of SXSW 2024, Shane Legg told a group of attendees that while AGI might be achieved in foundation models soon, more factors have to align before it can be practically deployed and used.<\/p>\n<p>He said the cost of AI has to come down and its use in robotics has to mature, among other factors. If it is not economically feasible, companies will not adopt it broadly, no matter how mind-blowing AGI may be. In the meantime, near-term applications of AGI are emerging, including AI-powered scientific research assistants.<\/p>\n<p>Legg, who is the chief AGI scientist at Google DeepMind, suggested the term “artificial general intelligence” years ago after meeting an author who needed a title for his book about an AI system with broad capabilities, rather than one that excels at just one thing.<\/p>\n<p>Legg suggested inserting the word “general” between “artificial” and “intelligence.” He and a few others started popularizing the term in online forums. 
Four years later, Legg said, someone else claimed to have coined the term before him.<\/p>\n<p>DeepMind co-founder Shane Legg talking to attendees after his fireside chat<\/p>\n<p>During a fireside chat, Legg defined AGI as a system that can do the sorts of cognitive things people can do, and possibly more. He stood by his prior prediction that there is a 50-50 probability AGI will arrive by 2028.<\/p>\n<p>But such a prognostication was considered wildly optimistic back when the prevailing belief was that AGI remained 50 to 100 years away, if it came at all.<\/p>\n<p>“For a long time, people wouldn't work on AGI safety because they didn't believe AGI will happen,” Legg said. “They would say, ‘Oh, it's not going to happen for 100 years, so why would I work on it?'”<\/p>\n<p>But foundation models have become increasingly capable, such that AGI no longer looks that far away, he added. Large models such as Google's Gemini and OpenAI's GPT-4 exhibit hints of AGI capability.<\/p>\n<p>He said models are currently at level 3 of AGI, based on the six levels Google DeepMind developed.<\/p>\n<p>Level 3 is the expert level, where the AI model has the same capabilities as at least the 90th percentile of skilled adults. But it remains narrow AI, meaning it is particularly good only at specific tasks. The fifth level is the highest, where the model reaches artificial superintelligence and outperforms all humans.<\/p>\n<p>What AI models still need is akin to the two systems of thinking from psychology, Legg said. System 1 is when one spontaneously blurts out what one is thinking; System 2 is when one thinks through what one plans to say.  
<\/p>\n<p>He said foundation models today are still at System 1 and need to progress to System 2, where a model can plan, reason through its plan, critique its chosen path, act on it, observe the outcome, and make another plan if needed.<\/p>\n<p>“We're not quite there yet,” Legg said.<\/p>\n<p>But he believes AI models will get there soon, especially since today's foundation models already show signs of AGI.<\/p>\n<p>“I believe AGI is possible, and I think it's coming quite soon,” Legg said. “When it does come, it will be profoundly transformational to society.”<\/p>\n<p>“Consider that today's advances in society came through human intelligence. Imagine adding machine intelligence to the mix, and all sorts of possibilities open up,” he said. “It (will be) an incredibly deep transformation.”<\/p>\n<p>But big transformations also bring risks.<\/p>\n<p>“It's hard to anticipate how exactly this is going to play out,” Legg said. “When you deploy an advanced technology at global scale, you can't always anticipate what will happen when this starts interacting with the world.”<\/p>\n<p>There could be bad actors who would use the technology for evil schemes, but there are also those who unwittingly mess up the system, leading to harmful results, he pointed out.<\/p>\n<p>Historically, AI safety falls into two buckets: immediate risks, such as bias and toxicity in the algorithms, and long-term risks from unleashing a superintelligence, including the havoc it could create by going around guardrails.<\/p>\n<p>Legg said the line between these two buckets has started to blur with the advancements of the latest foundation models. Powerful foundation models not only exhibit some AGI capabilities but also carry the immediate risks of bias, toxicity and others.  
<\/p>\n<p>“The two worlds are coming together,” Legg said.<\/p>\n<p>Moreover, with multimodality, in which foundation models are trained not only on text but also on images, video and audio, they can absorb all the richness and subtlety of human culture, he added. That will make them even more powerful.<\/p>\n<p>Why do scientists need to strive for AGI? Why not stop at narrow AI, since it is proving useful in many industries?<\/p>\n<p>Legg said that several types of problems benefit from having very large and diverse datasets. A general AI system will have the underlying know-how and structure to help narrow AI solve a range of related problems.<\/p>\n<p>For example, when human beings learn a new language, it helps if they already know one language and are familiar with its structure, Legg explained. Similarly, it may be helpful for a narrow AI system that excels at a particular task to have access to a general AI system that can bring up related issues.<\/p>\n<p>Also, practically speaking, it may already be too late to stop AGI development, since it has become mission critical to several big companies, Legg said. In addition, scores of smaller companies are doing the same thing.<\/p>\n<p>Then there is what he calls the most difficult group of all: intelligence agencies. For example, the National Security Agency (NSA) in the U.S. has more data than anyone else, with access to public information as well as signals intelligence from the interception of data from electronic systems.<\/p>\n<p>“How do you stop all of them?” Legg asked. “Tell me a credible plan to stop them. I'm all ears.”  
<\/p>\n<p>Original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/aibusiness.com\/responsible-ai\/deepmind-co-founder-practical-agi-is-decades-away-sxsw-2024\" title=\"DeepMind Co-founder on AGI and the AI Race - SXSW 2024 - AI Business\">DeepMind Co-founder on AGI and the AI Race - SXSW 2024 - AI Business<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Artificial general intelligence might be here in a few years, but the full spectrum of practical applications is decades away, according to the co-founder of DeepMind. Speaking on the sidelines of SXSW 2024, Shane Legg told a group of attendees that while AGI might be achieved in foundation models soon, more factors have to align for it to be practically deployed and used.  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-general-intelligence\/deepmind-co-founder-on-agi-and-the-ai-race-sxsw-2024-ai-business\/\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1214666],"tags":[],"class_list":["post-1122964","post","type-post","status-publish","format-standard","hentry","category-artificial-general-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1122964"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1122964"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1122964\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1122964"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1122964"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1122964"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}