Google’s Gemini Pro 1.5 can now hear as well as see: what it means for you – Tom’s Guide

Posted: April 12, 2024 at 5:52 am

Google has updated its incredibly powerful Gemini Pro 1.5 artificial intelligence model to give it the ability to hear the contents of an audio or video file for the first time.

The update was announced at Google Cloud Next, with the search giant confirming the model can listen to an uploaded clip and provide information without the need for a written transcript.

What this means is you could give it a documentary or video presentation and ask it questions about any moment in the clip, whether the answer lies in the audio or the video.

This is part of a wider push from Google to create more multimodal models that can understand a variety of input types beyond just text. The move is possible due to the Gemini family of models being trained on audio, video, text and code at the same time.

Google launched Gemini Pro 1.5 in February with a 1 million token context window. This, combined with the multimodal training data, means it can process videos.

The tech giant has now added sound to the options for input. This means you can give it a podcast and have it listen through for key moments or specific mentions. It can do the same for audio attached to a video file, while also analysing the video content.

The update also means Gemini can now generate transcripts for video clips regardless of how long they might run and find a specific moment within the audio or video file.

The new update applies to the middle tier of the Gemini family, which comes in three form factors: the tiny Nano for on-device use, Pro, which powers the free version of the Gemini chatbot, and Ultra, which powers Gemini Advanced.


For some reason Google only released the 1.5 update to Gemini Pro rather than Ultra, meaning its middle-tier model now outperforms the more advanced version. It isn't clear whether there will be a Gemini Ultra 1.5, or when it will be accessible if it launches.

The massive context window, starting at 128,000 tokens (in the same range as Claude 3 Opus) and rising to over a million for certain approved users, means you also don't need to fine-tune a model on specific data. You can load that data in at the start of a chat and just ask questions.
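As a rough illustration, here is a minimal sketch of that pattern using the Vertex AI Python SDK, assuming access to a Gemini 1.5 Pro preview model; the project ID, model version string, file name and questions are all placeholders:

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholder project and region.
vertexai.init(project="your-gcp-project", location="us-central1")

# Model version string is illustrative; use whichever Gemini 1.5 Pro
# version is available in your project.
model = GenerativeModel("gemini-1.5-pro-preview-0409")

# Load a large reference document (hypothetical file) into the prompt
# instead of fine-tuning a model on it.
with open("internal_docs.txt") as f:
    reference_material = f.read()

chat = model.start_chat()
chat.send_message(
    "Use the following documentation to answer my questions:\n\n"
    + reference_material
)

# Follow-up questions draw on the material already sitting in the
# context window.
answer = chat.send_message("Summarize the key points of the documentation.")
print(answer.text)
```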


I imagine at some point Google will update its Gemini chatbot to use the 1.5 models, possibly after the Google I/O developer conference next month. For now it is only available through Vertex AI, the Google Cloud developer dashboard.

While Vertex AI is a powerful tool for interacting with a range of models, building out AI applications and testing what is possible, it isn't widely accessible and is mainly targeted at developers, enterprises and researchers rather than consumers.

Using Vertex AI you can insert any form of visual or audio media, such as a short film or someone giving a talk, and add a text prompt. This could be "give me five bullet points summing up the speech" or "how many times did they say Gemini".
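A minimal sketch of that workflow with the Vertex AI Python SDK might look like the following; the project ID, bucket path and model version string are illustrative placeholders:

```python
import vertexai
from vertexai.generative_models import GenerativeModel, Part

vertexai.init(project="your-gcp-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro-preview-0409")

# Reference a clip stored in a Cloud Storage bucket; Gemini 1.5 Pro
# ingests both the visual frames and the audio track.
talk = Part.from_uri("gs://your-bucket/conference-talk.mp4",
                     mime_type="video/mp4")

response = model.generate_content([
    talk,
    "Give me five bullet points summing up the speech, "
    "and tell me how many times the speaker says 'Gemini'.",
])
print(response.text)
```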

Google's main audience for Gemini Pro 1.5 is enterprise customers, with partnerships already in the works with TBS, Replit and others, who are using it for metadata tagging and code generation.

Google has also started using Gemini Pro 1.5 in its own products, including the generative AI coding assistant Gemini Code Assist, which uses it to track changes across large-scale codebases.

The changes to Gemini Pro 1.5 were announced at Google Cloud Next along with a big update to Imagen 2, the DeepMind AI image model that powers Gemini's image-generation capabilities.

This is getting inpainting and outpainting, letting users remove elements from, or add elements to, a generated image. This is similar to updates OpenAI recently made to its DALL-E model.

Google is also going to start grounding its AI responses across Gemini and other platforms with Google Search, so they always contain up-to-date information.
