
Category Archives: Google

Alphabet Stock Rises Amid Introduction of AI Products – PYMNTS.com

Posted: April 12, 2024 at 5:52 am

Alphabet's stock is reportedly headed toward a $2 trillion market value, driven by investors becoming more optimistic about the company's strategy in the artificial intelligence (AI) field.

Reaching the $2 trillion milestone would be a record for the tech company and would place it among the three U.S. firms that have already topped that market value: Microsoft, Apple and Nvidia, Bloomberg reported Thursday (April 11).

Alphabet's stock is up 12% this year, after seeing a drop in March due to investors' concerns that it was falling behind on AI, according to the report.

The company has experienced some setbacks with its consumer-facing AI tools, and its strategy for monetizing AI is unclear, but it displayed its enterprise-focused AI model at an event this week and investors see a growth opportunity around its role in generative AI products, the report said.

During its cloud computing conference this week, Alphabet showed the capabilities its Gemini AI product has when it comes to advertising, cybersecurity, short videos and podcasts, per the report. In addition, Alphabet-owned Google unveiled a chip designed for AI.

Alphabet's stock has also benefited from reports that Apple is considering using Gemini to power AI services for its devices, according to the report.

Google unveiled a slew of new AI-powered capabilities Tuesday (April 9) during a keynote presentation at its annual Cloud Next event.

Among them were Google Vids, an addition to Google Workspace that allows users to create and edit videos collaboratively using AI-powered features, and Gemini Code Assist, an enterprise-level AI code completion tool designed to rival GitHub's Copilot Enterprise.

The announcement of these and other business-focused AI updates to Google Workspace came at a time when the impact of AI software for B2B applications is top of mind for companies looking to modernize their workflows, PYMNTS reported Wednesday (April 10).

Googles ambition with its enterprise AI products is not just to streamline tasks, but to automate processes.

"With these advances, enterprises can do things today that just weren't possible with AI before," Google CEO Sundar Pichai said during the Cloud Next event.


Posted in Google | Comments Off on Alphabet Stock Rises Amid Introduction of AI Products – PYMNTS.com

Google’s Gemini Pro 1.5 can now hear as well as see what it means for you – Tom’s Guide

Posted: at 5:52 am

Google has updated its incredibly powerful Gemini Pro 1.5 artificial intelligence model to give it the ability to hear the contents of an audio or video file for the first time.

The update was announced at Google Next, with the search giant confirming the model can listen to an uploaded clip and provide information without the need for a written transcript.

What this means is you could give it a documentary or video presentation and ask it questions about any moment, both audio and video, within the clip.

This is part of a wider push from Google to create more multimodal models that can understand a variety of input types beyond just text. The move is possible due to the Gemini family of models being trained on audio, video, text and code at the same time.

Google launched Gemini Pro 1.5 in February with a 1 million token context window. This, combined with the multimodal training data, means it can process videos.

The tech giant has now added sound to the options for input. This means you can give it a podcast and have it listen through for key moments or specific mentions. It can do the same for audio attached to a video file, while also analysing the video content.

The update also means Gemini can now generate transcripts for video clips regardless of how long they might run and find a specific moment within the audio or video file.

The new update is part of the middle tier of the Gemini family, which comes in three form factors: the tiny Nano for on-device use, Pro powering the free version of the Gemini chatbot, and Ultra powering Gemini Advanced.


For some reason Google only released the 1.5 update to Gemini Pro rather than Ultra, meaning its middle-tier model now outperforms the more advanced version. It isn't clear if there will be a Gemini Ultra 1.5, or when it will be accessible if it launches.

The massive context window, starting at 250,000 tokens (similar to Claude 3 Opus) and rising to over a million for certain approved users, means you also don't need to fine-tune a model on specific data. You can load that data in at the start of a chat and just ask questions.

I imagine at some point Google will update its Gemini chatbot to use the 1.5 models, possibly after the Google I/O developer conference next month. For now it is only available through the Google Cloud developer dashboard VertexAI.

While VertexAI is a powerful tool for interacting with a range of models, building out AI applications and testing what is possible, it isn't widely accessible and is mainly targeted at developers, enterprises and researchers rather than consumers.

Using VertexAI you can insert any form of visual or audio media such as a short film or someone giving a talk and add a text prompt. This could be "give me five bullet points summing up the speech" or "how many times did they say Gemini".
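As a rough sketch of that workflow, a request to Gemini via Vertex AI pairs a media part with a text part in one user turn. The snippet below only builds the JSON request body in the documented generateContent shape rather than calling the API, and the bucket path, file name and prompt are hypothetical examples, not anything from the article.

```python
# Sketch of the "media file + text prompt" pattern described above,
# in the shape of a Vertex AI generateContent request body.
# The Cloud Storage URI and prompt are placeholder examples.
import json


def build_request(file_uri: str, mime_type: str, prompt: str) -> dict:
    """Pair a media file (e.g. a recorded talk) with a text prompt
    in a single user turn, as one generateContent request body."""
    return {
        "contents": [
            {
                "role": "user",
                "parts": [
                    # The media part references a file by URI rather
                    # than inlining the bytes.
                    {"fileData": {"mimeType": mime_type, "fileUri": file_uri}},
                    # The text part carries the question about the clip.
                    {"text": prompt},
                ],
            }
        ]
    }


body = build_request(
    "gs://my-bucket/talk.mp4",  # hypothetical Cloud Storage path
    "video/mp4",
    "Give me five bullet points summing up the speech.",
)
print(json.dumps(body, indent=2))
```

This body would then be POSTed to the model's generateContent endpoint (or built via the Vertex AI SDK's equivalent helpers); swapping the prompt for "how many times did they say Gemini" needs no structural change.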

Google's main audience for Gemini Pro 1.5 is enterprise, with partnerships already in the works with TBS, Replit and others, who are using it for metadata tagging and code generation.

Google has also started using Gemini Pro 1.5 in its own products, including the generative AI coding assistant Code Assist, to track changes across large-scale codebases.

The changes to Gemini Pro 1.5 were announced at Google Next along with a big update to the DeepMind AI image model Imagen 2 that powers the Gemini image-generation capabilities.

This update adds inpainting and outpainting, which let users remove or add any element in a generated image. This is similar to updates OpenAI recently made to its DALL-E model.

Google is also going to start grounding its AI responses across Gemini and other platforms with Google Search so they always contain up-to-date information.



Next Vision, or Vision Next? What we really thought about Google and Intel’s AI events – The Register

Posted: at 5:52 am



Google Wallet on Wear OS isn’t as convenient as it could be – Android Police

Posted: at 5:52 am

Summary

Of all the tech-enabled conveniences we've come to take for granted, tap-to-pay in wearables is, for my money, one of the most satisfyingly futuristic: the very idea of using your wristwatch to pay for something flatly didn't exist until relatively recently. Even now, nearly a decade into living with watches that are also wallets, I still get a little techy satisfaction any time I pay for something using my Pixel Watch. But as much as I appreciate the ability to use my wrist computer to pay for coffee, I think the Google Wallet experience on Wear OS could be significantly improved with what seems to me like a simple UX tweak.

As it stands today, buying something with Google Wallet on Wear OS requires you to open the Google Wallet app before tapping your watch to a contactless payment terminal. Different watches work differently, but in the case of the Pixel Watch 2 I'm typically wearing, you can most easily fire up Wallet by double-tapping the watch's digital crown. Once it's open, tap-to-pay is ready. All told, it's hardly an inconvenient system as is.

But having to get the app open first introduces just a little extra friction that makes the experience feel less seamless than it could. On mobile, provided you have a default card set in Wallet and aren't using the Pixel 7 Pro's wonky face unlock feature, tapping your unlocked phone to a payment terminal will initiate a payment with your default method, whether or not you have the Wallet app open. Essentially, I want to see that approach on Wear OS, too.

That may sound risky, but hear me out. Just like on a phone, using Google Wallet on Wear OS requires that your watch has a screen lock set. Once you've unlocked your watch, it stays unlocked until you take it off, at which time it immediately locks again. Having to manually open Wallet before making a payment does further mitigate some risks, like making a tap-to-pay payment you didn't intend to or having your payment information read without your permission by nearby bad actors. But given NFC in smartwatches only has a range of a couple of inches, I feel those threats are very remote.

Personally, I'd be willing to chance it for the option to use tap-to-pay any time my watch is unlocked. Physical credit cards have no payment method-side authentication for contactless payments at all, so compared to that, my watch locking when it's off my wrist already feels plenty secure without the additional step of opening Google Wallet first.

In light of NFC's range limitations and Google Wallet's inherent security (the digital cards on your watch have numbers distinct from your physical cards), I feel like the risk in allowing easier contactless payments on Wear OS would be pretty minimal. Still, I'd be perfectly happy to see what I'm proposing here offered as an option while the status quo stays the default. Heck, Google could even throw in a strongly worded warning when you change the setting.

As it stands, though, using Google Wallet on your watch takes two hands, which runs counter to the purpose of a feature that should be all about convenience. You can fish a phone out of your pocket, unlock it, and tap it to a card reader, all without setting down your coffee (or your shopping bags, or your child, et cetera), but one-handed payments aren't possible on Wear OS. It's a niche complaint, but I'd love to see it addressed, even if recent changes abroad make it seem like Google is increasingly prioritizing security over flexibility when it comes to payments.



Google AI’s Updates Show Its Ambitions To Go Beyond Automating Tasks As It Aims To Revolutionize Business … – Yahoo Finance

Posted: at 5:52 am



Google Built Its Own Server CPU in Blow to Intel and AMD – The Motley Fool

Posted: at 5:52 am



Google Workspace gets a game-changing security feature – Android Police

Posted: at 5:52 am

Summary

The ability to share Google Docs, Sheets, and other Workspace products makes it quicker and easier for teams to collaborate, regardless of where members may be working. However, it can become difficult to keep track of all of your files once several people have access. Admin permissions can help you manage the changes made to your files, but the vetting process can be daunting, depending on how many changes you need to review. Now, Google is rolling out a new feature for Workspace subscribers that may help relieve any concerns about security issues that could emerge.

According to a Workspace update from Google, users will now have access to a feature that requires sensitive actions to be approved by multiple admins. The actions that have been deemed sensitive are two-step verification, account recovery, advanced protection, Google session control, login challenges, and passwordless login. If the new security feature is enabled, any changes to these settings will need to be submitted by an admin to a designated super admin for approval.

On the admin dashboard, there is a new Multi-party approval option under the Authentication section of the Security menu. Upon tapping into Multi-party approval, admins will see expanded details on requests that have been made. Some of the information that can be viewed is a list of collaborators who will be affected by the changes, the date that the request was made, and what will happen after the changes are implemented. The feature is still being rolled out, meaning users may not see it immediately. However, Workspace users with Enterprise Standard, Enterprise Plus, Education Standard, Education Plus, and Cloud Identity Premium subscriptions can all expect access.

Although Google has been pouring its time and resources into AI (specifically Gemini) as of late, its other products and services have not been left behind. In fact, Workspace is one example of an initiative that the company has continued to improve, and it's even done so through the integration of AI. Recently, Google announced that it was planning to roll out a slew of AI-based features for Workspace subscribers, such as summary generation and automatic live translation. As the company's ventures into AI continue to evolve, new features may continue to be centered around the technology in the future.



Google Flights says these are the top summer travel destinations of 2024 – Fox Business

Posted: at 5:52 am



Former Google Deepmind Researchers Assemble Luminaries Across Music And Tech To Launch Udio, A New AI … – PR Newswire

Posted: at 5:52 am

Backed by a16z, with participation from angel investors like will.i.am, Common, Kevin Wall, Tay Keith, Steve Stoute's UnitedMasters, Mike Krieger (Cofounder & CTO of Instagram) and Oriol Vinyals (head of Gemini at Google), Udio enables everyone from classically trained musicians, to those with pop star ambitions, to hip hop fans, to people who just want to have fun with their friends to create awe-inspiring songs in mere moments

NEW YORK, April 10, 2024 /PRNewswire/ -- Udio, a company that leverages AI to easily create extraordinary and original music, today announced the public launch of its app at udio.com. Previously only available in closed beta, where the app was regularly played with by some of the biggest names in the music industry, Udio was developed by former Google DeepMind researchers with a mission of making it easy for anyone to create emotionally resonant music in an instant. Whether it is recording a cherished memory in song, generating funny soundtracks for memes, or creating full length tracks for professional release, Udio will expand how everyone creates and shares music.

"There is nothing available that comes close to the ease of use, voice quality and musicality of what we've achieved with Udio - it's a real testament to the folks we have involved," said David Ding, Co-founder and CEO of Udio. "At every stage of development, we talked to people in the industry about how we could bring this technology to market in a way that benefits both artists and musicians. We gathered feedback from some of the most prolific artists and music producers like will.i.am, Common and Tay Keith, to ensure that everything they thought would enhance the experience would be available. We hold ourselves to the highest standards and we believe we have achieved something truly remarkable, so we can't wait to get Udio into the hands of music lovers worldwide."

With a superior sound quality and musicality that meets professional standards, Udio was designed to make song creation as easy as possible. In just a few steps, users simply type a description of the music genre they want to make, provide the subject or personalized lyrics, and indicate artists that inspire. In less than 40 seconds, Udio works its magic and produces fully mastered tracks. Once a track has been created, users can further edit their creations through the app's "remix" feature. This enables iteration on existing tracks through text descriptors, turning everyday creators into full-blown producers. It even enables users to extend their songs, edit them to have different sounds and use them as the basis of inspiration for their next creation.

Once finished, users can then share their new creations with the app's built-in community of music lovers, for feedback and collaboration.

"This is a brand new Renaissance and Udio is the tool for this era's creativity-with Udio you are able to pull songs into existence via AI and your imagination," said will.i.am, multi-platinum artist and producer.

While in beta, Udio has also inspired some of the most prolific musicians, producers and artists with their next creation. Designed to be artist friendly, Udio helps musicians not only create songs faster, but test and play around with lyrics in an all new way. Through its extensive network, Udio also is in discussions with a number of artists who want to leverage AI in their workflows and find new ways to monetize through its tech.

"Good music stirs up deep emotions in all of us, and connects us to each other through shared experiences. Nothing will ever replace human artists and the unique connections they make with their fans," said Matt Bornstein, Partner at Andreessen Horowitz. "But we think Udio - with its incredible musicality, creativity, and vocals - is a brand new way for us to create and enjoy music together. We're thrilled to back this stellar group of researchers in their mission to make AI music a reality."

Udio's team is working alongside artists on all aspects of product and business development. The company has also secured leading investors in the seed round including a16z, as well as prominent tech and music angels like Mike Krieger (Cofounder & CTO of Instagram) and Oriol Vinyals (head of Gemini at Google).

"I've always been drawn to music and creation tools, and after I demoed Udio, I was blown away," said Mike Krieger. "It's early days but just like Instagram brought photography sharing to the masses, I believe Udio has the power to bring music creation to the masses as well. I'm thrilled to be a product advisor on their groundbreaking journey."

"UnitedMasters embraces cutting-edge technology that can unlock unprecedented opportunities for independent artists, and AI is reshaping how we create, consume, and experience music. As we embrace this transformative technology, we must ensure it amplifies creativity, empowers artists, and enriches the music industry without compromising ownership. It's imperative that we champion transparency, accountability, and ownership in how this technology benefits artists, shaping a future where innovation and creativity can thrive," said Steve Stoute, CEO and Founder of UnitedMasters.

Udio was founded by David Ding, Conor Durkan, Charlie Nash, Yaroslav Ganin, and Andrew Sanchez.

For more information on Udio and how to access, please visit udio.com.

About Udio

Udio is a company that leverages proprietary AI to make amazing sound creation fun. Founded in New York in December 2023 by former Google DeepMind researchers, Udio's mission is to bring world-changing products to market. With the launch of its new app of the same name, Udio is lauded by many in the industry as being the first to democratize song creation. To learn more about Udio, its founders and where to access the app, please visit udio.com or its social channels at Twitter: @udiomusic, Instagram: udiomusic, TikTok: @udiomusic, YouTube: @udio_music

Notice: If your editorial policy requires the use of full legal names, will.i.am's is William Adams. All others shown in Wikipedia and previously published stories are incorrect.

Media Contact: Rachel Rogers 310-770-4917

SOURCE Udio


