{"id":169356,"date":"2024-05-15T02:36:44","date_gmt":"2024-05-15T06:36:44","guid":{"rendered":"https:\/\/www.immortalitymedicine.tv\/project-astra-is-the-future-of-ai-at-google-the-verge\/"},"modified":"2024-08-18T12:53:42","modified_gmt":"2024-08-18T16:53:42","slug":"project-astra-is-the-future-of-ai-at-google-the-verge","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/ai\/project-astra-is-the-future-of-ai-at-google-the-verge.php","title":{"rendered":"Project Astra is the future of AI at Google &#8211; The Verge"},"content":{"rendered":"<p><p>      Ive had this vision in my mind for quite a while, says      Demis Hassabis, the head of Google DeepMind and the       leader of Googles AI efforts. Hassabis has been thinking      about and working on AI for decades, but four or five years      ago, something really crystallized. One day soon, he      realized, We would have this universal assistant. Its      multimodal, its with you all the time. Call it the Star      Trek Communicator; call it the voice from Her;      call it whatever you want. Its that helper, Hassabis      continues, thats just useful. You get used to it being      there whenever you need it.    <\/p>\n<p>      At       Google I\/O, the companys annual developer conference,      Hassabis       showed off a very early version of what he hopes will      become that universal assistant. Google calls it Project      Astra, and its a real-time, multimodal AI assistant that can      see the world, knows what things are and where you left them,      and can answer questions or help you do almost anything. In      an incredibly impressive demo video that Hassabis swears is      not faked or doctored in any way, an Astra user in Googles      London office asks the system to identify a part of a      speaker, find their missing glasses, review code, and more.      It all works practically in real time and in a very      conversational way.    
<\/p>\n<p>Astra is just one of many Gemini announcements at this year's I\/O. There's a new model, called Gemini 1.5 Flash, designed to be faster for common tasks like summarization and captioning. Another new model, called Veo, can generate video from a text prompt. Gemini Nano, the model designed to be used locally on devices like your phone, is supposedly faster than ever as well. The context window for Gemini Pro, which refers to how much information the model can consider in a given query, is doubling to 2 million tokens, and Google says the model is better at following instructions than ever. Google's making fast progress both on the models themselves and on getting them in front of users.<\/p>\n<p>Going forward, Hassabis says, the story of AI will be less about the models themselves and all about what they can do for you. And that story is all about agents: bots that don't just talk with you but actually accomplish stuff on your behalf. \"Our history in agents is longer than our generalized model work,\" he says, pointing to the game-playing AlphaGo system from nearly a decade ago. Some of those agents, he imagines, will be ultra-simple tools for getting things done, while others will be more like collaborators and companions. \"I think it may even be down to personal preference at some point,\" he says, \"and understanding your context.\"<\/p>\n<p>Astra, Hassabis says, is much closer than previous products to the way a true real-time AI assistant ought to work. When Gemini 1.5 Pro, the latest version of Google's mainstream large language model, was ready, Hassabis says he knew the underlying tech was good enough for something like Astra to begin to work well. But the model is only part of the product. 
\"We had components of this six months ago,\" he says, \"but one of the issues was just speed and latency. Without that, the usability isn't quite there.\" So, for six months, speeding up the system has been one of the team's most important jobs. That meant improving the model but also optimizing the rest of the infrastructure to work well and at scale. Luckily, Hassabis says with a laugh, \"That's something Google does very well!\"<\/p>\n<p>A lot of Google's AI announcements at I\/O are about giving you more and easier ways to use Gemini. A new product called Gemini Live is a voice-only assistant that lets you have easy back-and-forth conversations with the model, interrupting it when it gets long-winded or calling back to earlier parts of the conversation. A new feature in Google Lens allows you to search the web by shooting and narrating a video. A lot of this is enabled by Gemini's large context window, which means it can access a huge amount of information at a time, and Hassabis says it's crucial to making it feel normal and natural to interact with your assistant.<\/p>\n<p>Know who agrees with that assessment, by the way? OpenAI, which has been talking about AI agents for a while now. In fact, the company demoed a product strikingly similar to Gemini Live barely an hour after Hassabis and I chatted. The two companies are increasingly fighting for the same territory and seem to share a vision for how AI might change your life and how you might use it over time.<\/p>\n<p>How exactly will those assistants work, and how will you use them? Nobody knows for sure, not even Hassabis. One thing Google is focused on right now is trip planning: it built a new tool for using Gemini to build an itinerary for your vacation that you can then edit in tandem with the assistant. 
There will eventually be many more features like that. Hassabis says he's bullish on phones and glasses as key devices for these agents but also says there is probably \"room for some exciting form factors.\" Astra is still in an early prototype phase and only represents one way you might want to interact with a system like Gemini. The DeepMind team is still researching how best to bring multimodal models together and how to balance ultra-huge general models with smaller and more focused ones.<\/p>\n<p>We're still very much in the speeds-and-feeds era of AI, in which every incremental model matters and we obsess over parameter sizes. But pretty quickly, at least according to Hassabis, we're going to start asking different questions about AI. Better questions. Questions about what these assistants can do, how they do it, and how they can make our lives better. Because the tech is a long way from perfect, but it's getting better really fast.<\/p>\n<p>Go here to see the original: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.theverge.com\/2024\/5\/14\/24156296\/google-ai-gemini-astra-assistant-live-io\" title=\"Project Astra is the future of AI at Google - The Verge\">Project Astra is the future of AI at Google - The Verge<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>\"I've had this vision in my mind for quite a while,\" says Demis Hassabis, the head of Google DeepMind and the leader of Google's AI efforts. Hassabis has been thinking about and working on AI for decades, but four or five years ago, something really crystallized. One day soon, he realized, we would have this universal assistant.  
<a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/ai\/project-astra-is-the-future-of-ai-at-google-the-verge.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[1234935],"tags":[],"class_list":["post-169356","post","type-post","status-publish","format-standard","hentry","category-ai"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/169356"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=169356"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/169356\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=169356"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=169356"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=169356"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}