Which AI phone features are useful and how well they actually work – The Washington Post

Every year like clockwork, some of the biggest companies in the world release new phones they hope you will shell out hundreds of dollars for.

And more and more, they are leaning on a new angle to get you thinking of upgrading: artificial intelligence.

Smartphones from Google and Samsung come with features to help you skim through long swaths of text, tweak the way you sound in messages, and make your photos more eye-catching. Meanwhile, Apple is reportedly racing to build AI tools and features it hopes to include in an upcoming version of its iOS software, which will launch alongside the company's new iPhones later this year.

But here's the real question: Of the AI tools built into phones right now, how many of them are actually useful?

That's tough to say: it all depends on what you use your phone for and what you personally find helpful. To help, here's a brief guide to the AI features you'll most commonly find in phones right now, so you can decide for yourself which might be worth living with.

For years, smartphone makers have worked to make the photos that come out of the tiny camera sensors they use look better than they should. Now, they're also giving us the tools to more easily revise those images.

Here are the most basic: Google and Samsung phones now let you resize, move or erase people and objects inside photos you've taken. Once you do that, the phones lean on generative AI to fill in the visual gaps left behind, and that's it.

Think of it as a little Photoshopping, except the hard work is basically done for you. And for better or worse, there are limits to what it can do.

You can't use those built-in tools to generate people, objects or more fantastical additions that weren't part of the original image the way you can with other AI image creation tools. The results don't usually survive serious scrutiny, either: it's not hard to see places where little details don't line up, or areas that look smudgy because the AI couldn't convincingly fill a gap where an offending object used to be.
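
Neither Google nor Samsung documents how these erase-and-fill tools work internally, but the general technique is diffusion-based inpainting: mask the offending region and have a generative model repaint it from the surrounding pixels and a text hint. Below is a minimal sketch using an open model through the Hugging Face diffusers library; the model id is an assumption chosen for illustration, and this is an analog of the phone features, not their actual pipeline.

```python
# Minimal diffusion-inpainting sketch (pip install diffusers transformers torch pillow).
# Assumes the open model "stabilityai/stable-diffusion-2-inpainting" is available
# on Hugging Face; this is NOT Google's or Samsung's production pipeline.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")  # use torch.float32 and "cpu" if no GPU, much slower

image = Image.open("photo.png").convert("RGB")  # the original photo
mask = Image.open("mask.png").convert("RGB")    # white pixels mark the object to erase

# The model repaints only the masked region, conditioned on the prompt and the
# surrounding pixels; large gaps or busy backgrounds are where results get smudgy.
result = pipe(prompt="empty background", image=image, mask_image=mask).images[0]
result.save("edited.png")
```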

What's potentially more unsettling are tools such as Google's Best Take for its Pixel phones, which give you the chance to select specific expressions for people's faces in an image if you've taken a bunch of photos in a row.

Some people don't mind it, while others find it a little divorced from reality. No matter where you land, though, expect your photos to get a lot of AI attention the next time you buy a phone.

Your messages to your boss probably shouldn't sound like messages to your friends, and vice versa. Samsung's Chat Assist and Google's Magic Compose tools use generative AI to try to adjust the language in your messages to make them more palatable.

The catch? Google's Magic Compose only works in its texting-focused Messages app, which means you can't easily use it for emails or, say, WhatsApp messages. (A similar tool for Gmail and the Chrome web browser, called Help Me Write, is not yet widely available.) People who buy Galaxy S24 phones, meanwhile, can use Samsung's version of this feature wherever they write text to switch between professional, casual, polite and even emoji-filled variations of their original message.

What can I say? It works, though I can't imagine using it with any regularity. And in some ways, Samsung's Chat Assist tool backs down when it's arguably needed most. In a few test emails where I used some very mild swears to allude to (fictional) workplace stress, Samsung's Chat Assist refused to help on the grounds that the messages contained inappropriate language.

The built-in voice recorder apps on Google's Pixels and Samsung's latest phones don't just record audio; they'll turn those recordings into full-blown transcripts.

In theory, this should free you up from having to take so many notes while you're in a meeting or a lecture. And for the most part, these features work well enough: after a few seconds, they'll dutifully produce readable, if sometimes clumsy, readouts of what you've just heard.

If all you need is a sort of rough draft to accompany your recordings, these automated transcription tools can be really helpful. They can differentiate between multiple speakers, which is handy when you need to skim through a conversation later. And Google's version will even give you a live transcription, which can be nice if you're the sort of person who keeps subtitles on all the time.

But whether you're using a Google phone or one of Samsung's, the resulting transcripts often need a bit of cleanup; that means you'll need to do a little extra work before you copy and paste the results into something important.
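
The phone makers don't say which transcription engines they ship, but you can reproduce the workflow with OpenAI's open-source Whisper model as a rough stand-in (pip install openai-whisper). One caveat: Whisper on its own doesn't label different speakers the way the built-in apps do; diarization requires a separate tool.

```python
# Rough analog of automated voice-memo transcription using open-source Whisper.
# Not the engine Google or Samsung actually use; illustration only.
import whisper

model = whisper.load_model("base")    # small checkpoint; runs on CPU, less accurate
result = model.transcribe("meeting.m4a")

print(result["text"])                 # full transcript; punctuation is guessed,
                                      # which is why cleanup is usually needed
for seg in result["segments"]:        # timestamped chunks, handy for skimming
    print(f'[{seg["start"]:7.1f}s] {seg["text"]}')
```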

Who among us hasn't clicked into a Wikipedia page, or an article, or a recipe online that takes way too long to get to the point? As long as you're using the Chrome browser, Google's Pixel phones can scan those long webpages and boil them down into a set of high-level blurbs to give you the gist.

Sadly, Google's summaries are often too cursory to feel satisfying.

Samsung's phones can summarize your notes and transcriptions of your recordings, but they will only summarize things you find on the web if you use Samsung's homemade web browser. Honestly, that might be worth it: the quality of its summaries is much better than Google's. (You even have the option of switching to a more detailed version of the AI summary, which Google doesn't offer at all.)

Both versions of these summary tools come with a notable caveat, too: they won't summarize articles from websites that have paywalls, which includes just about every major U.S. newspaper.
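
Neither company has published how its summarizer works, but the generic recipe is simple: pull the page text, then ask a language model for a high-level or detailed summary. A sketch under those assumptions follows; the model name is illustrative, and the paywall caveat applies here too, since the article text never reaches the summarizer.

```python
# Generic webpage summarization sketch, not the actual Chrome or Samsung Internet
# implementation. The model name "gpt-4o-mini" is an assumption for illustration.
# pip install requests beautifulsoup4 openai
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

def summarize(url: str, detailed: bool = False) -> str:
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)[:12000]
    style = ("a detailed, section-by-section summary" if detailed
             else "three high-level bullet points")
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Summarize this page as {style}:\n\n{text}"}],
    )
    return resp.choices[0].message.content

print(summarize("https://example.com/very-long-article", detailed=True))
```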

Samsung's AI tools are free for now, but a tiny footnote on its website suggests the company may eventually charge customers to use them. It's not a done deal yet, but Samsung isn't ruling it out either.

"We are committed to making Galaxy AI features available to as many of our users as possible," a spokesperson said in a statement. "We will not be considering any changes to that direction before the end of 2025."

Google, meanwhile, already makes some of its AI-powered features exclusive to certain devices. (For example: a Video Boost tool for improving the look of your footage is only available on the company's higher-end Pixel 8 Pro phones.)

In the past, Google has made experimental versions of some AI tools, like the Magic Compose feature, available only to people who pay for the company's Google One subscription service. And more recently, Google has started charging people for access to its latest AI chatbot. For now, though, the company hasn't said anything either way about putting future AI phone features behind a paywall.

Google did not immediately respond to a request for comment.


Google to fix AI picture bot after ‘woke’ criticism – BBC.com

Google and parent company Alphabet Inc's headquarters in Mountain View, California

Google is racing to fix its new AI-powered tool for creating pictures, after claims it was over-correcting against the risk of being racist.

Users said the firm's Gemini bot supplied images depicting a variety of genders and ethnicities even when doing so was historically inaccurate.

For example, a prompt seeking images of America's founding fathers turned up women and people of colour.

The company said its tool was "missing the mark".

"Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here," said Jack Krawczyk, senior director for Gemini Experiences.

"We're working to improve these kinds of depictions immediately," he added.


It is not the first time AI has stumbled over real-world questions about diversity.

For example, Google infamously had to apologise almost a decade ago after its photos app labelled a photo of a black couple as "gorillas".

Rival AI firm OpenAI was also accused of perpetuating harmful stereotypes, after users found its Dall-E image generator responded to queries for chief executive, for example, with results dominated by pictures of white men.

Google, which is under pressure to prove it is not falling behind in AI developments, released its latest version of Gemini last week.

The bot creates pictures in response to written queries.

It quickly drew critics, who accused the company of training the bot to be laughably woke.


"It's embarrassingly hard to get Google Gemini to acknowledge that white people exist," computer scientist Debarghya Das, wrote.

"Come on," Frank J Fleming, an author and humourist who writes for outlets including the right-wing PJ Media, in response to the results he received asking for an image of a Viking.

The claims picked up speed in right-wing circles in the US, where many big tech platforms are already facing backlash for alleged liberal bias.

Mr Krawczyk said the company took representation and bias seriously and wanted its results to reflect its global user base.

"Historical contexts have more nuance to them and we will further tune to accommodate that," he wrote on X, formerly Twitter, where users were sharing the dubious results they had received.

"This is part of the alignment process - iteration on feedback. Thank you and keep it coming!"


China’s Rush to Dominate A.I. Comes With a Twist: It Depends on U.S. Technology – The New York Times

In November, a year after ChatGPT's release, a relatively unknown Chinese start-up leaped to the top of a leaderboard that judged the abilities of open-source artificial intelligence systems.

The Chinese firm, 01.AI, was only eight months old but had deep-pocketed backers and a $1 billion valuation, and was founded by a well-known investor and technologist, Kai-Fu Lee. In interviews, Mr. Lee presented his A.I. system as an alternative to options like Meta's generative A.I. model, called LLaMA.

There was just one twist: Some of the technology in 01.AI's system came from LLaMA. Mr. Lee's start-up then built on Meta's technology, training its system with new data to make it more powerful.

The situation is emblematic of a reality that many in China openly admit. Even as the country races to build generative A.I., Chinese companies are relying almost entirely on underlying systems from the United States. China now lags the United States in generative A.I. by at least a year and may be falling further behind, according to more than a dozen tech industry insiders and leading engineers, setting the stage for a new phase in the cutthroat technological competition between the two nations that some have likened to a cold war.

"Chinese companies are under tremendous pressure to keep abreast of U.S. innovations," said Chris Nicholson, an investor with the venture capital firm Page One Ventures who focuses on A.I. technologies. "The release of ChatGPT was yet another Sputnik moment that China felt it had to respond to."

Jenny Xiao, a partner at Leonis Capital, an investment firm that focuses on A.I.-powered companies, said the A.I. models that Chinese companies build from scratch aren't very good, leading many Chinese firms to use fine-tuned versions of Western models. She estimated China was two to three years behind the United States in generative A.I. developments.


Samsung’s Galaxy AI Is Coming to the Galaxy S23, Foldables and Tablets Next Month – CNET

Samsung is bringing its suite of Galaxy AI features to the Galaxy S23 lineup, as well as the Galaxy S23 FE, Galaxy Z Fold 5, Galaxy Z Flip 5 and Galaxy Tab S9 family, starting in March. The move shows that Samsung is eager to make AI a bigger part of all its high-profile mobile products, not just its newest phones.

Galaxy AI is scheduled to arrive in a software update in late March as part of Samsung's goal to bring the features to more than 100 million Galaxy users this year, T.M. Roh, president and head of Samsung's mobile experience business, said in a press release. Samsung previously said Galaxy AI would come to the Galaxy S23 lineup, but it hadn't disclosed the timing until now.


Galaxy AI is an umbrella term that refers to a collection of new AI-powered features that debuted on the Galaxy S24 series in January. Some examples of Galaxy AI features include Generative Edit, which lets you move or manipulate objects in photos; Chat Assist, for rewriting texts in a different tone or translating them into other languages; Circle to Search, which lets you launch a Google search for any object on screen just by circling it; and Live Translate, a tool that translates phone calls in real time.

Samsung and other tech companies have been vocal about their plans to infuse smartphones with generative AI, or AI that can create content or responses when prompted based on training data. It's the same flavor of AI that powers ChatGPT, and device makers have been adamant about adding it to their own products.

Although AI has played an important role in smartphones for years, companies like Samsung and Google, which collaborated to develop Galaxy AI, only recently became focused on bringing generative AI to phones. For Samsung, Galaxy AI is the culmination of those efforts.

Samsung's AI features are also likely coming to wearables next, as the company hinted Tuesday in a blog post authored by Roh.

"In the near future, select Galaxy wearables will use AI to enhance digital health and unlock a whole new era of expanded, intelligent health experiences," he said in the post.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.


The Samsung Galaxy S23 series will get AI features in late March – The Verge

Right now, you need a Galaxy S24 phone to use the very latest AI features from Samsung, but that's changing next month. In late March, Samsung will extend Galaxy AI features to the S23 series (including the S23 FE) as well as recent foldables and tablets as part of the One UI 6.1 update. It's all free for now, but after 2025 you might have to pay up.

The Galaxy Z Fold 5 and Z Flip 5 are slated to get the update, as well as the Galaxy Tab S9, S9 Plus, and S9 Ultra. If Samsung wants to ship Galaxy AI to 100 million phones this year like it says it will, that's a solid start. The One UI 6.1 update will include the much-touted AI features on the S24 series, including live translation capabilities, generative photo and video editing, and Google's Circle to Search feature. This suite of features includes a mix of on- and off-device processing, just like it does on the S24 series.

An older phone learning new tricks is unequivocally a good thing, even if Galaxy AI is a little bit of a mixed bag right now. But my overall impression is that these features do occasionally come in handy, and when they go sideways they're mostly harmless. One UI 6.1 will also include a handful of useful non-AI updates, such as lockscreen widgets and the new, unified Quick Share.


AI agents like Rabbit aim to book your vacation and order your Uber – NPR

The AI-powered Rabbit R1 device is seen at Rabbit Inc.'s headquarters in Santa Monica, California. The gadget is meant to serve as a personal assistant, fulfilling tasks such as ordering food on DoorDash for you, calling an Uber or booking your family's vacation. (Stella Kalinina for NPR)

ChatGPT can give you travel ideas, but it won't book your flight to Cancún.

Now, artificial intelligence is here to help us scratch items off our to-do lists.

A slate of tech startups are developing products that use AI to complete real-world tasks.

Silicon Valley watchers see this new crop of "AI agents" as being the next phase of the generative AI craze that took hold with the launch of chatbots and image generators.

Last year, Sam Altman, the CEO of OpenAI, the maker of ChatGPT, nodded to the future of AI errand-helpers at the company's developer conference.

"Eventually, you'll just ask a computer for what you need, and it'll do all of these tasks for you," Altman said.

One of the most hyped companies doing this is called Rabbit. It has developed a device called the Rabbit R1. Chinese entrepreneur Jesse Lyu launched it at this year's CES, the annual tech trade show, in Las Vegas.

It's a bright orange gadget about half the size of an iPhone. It has a button on the side that you push and talk into like a walkie-talkie. In response to a request, an AI-powered rabbit head pops up and tries to fulfill whatever task you ask.

Chatbots like ChatGPT rely on technology known as a large language model, and Rabbit says it uses both that system and a new type of AI it calls a "large action model." In basic terms, it learns how people use websites and apps and mimics these actions after a voice prompt.

It won't just play a song on Spotify, or start streaming a video on YouTube, which Siri and other voice assistants can already do, but Rabbit will order DoorDash for you, call an Uber, book your family's vacation. And it makes suggestions after learning a user's tastes and preferences.
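
Rabbit hasn't published how its "large action model" works, so the following is a deliberately toy, hypothetical sketch of the idea as described: map a spoken request to a script of UI actions the model has learned by watching people use an app, then replay it. Every class, script and name here is invented for illustration.

```python
# Hypothetical "large action model" sketch: intent -> learned UI action script.
# All names and structures are invented; Rabbit has not published its design.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str       # e.g. "tap", "type", "confirm"
    target: str     # a UI element the model learned to recognize
    value: str = ""

# Scripts the model has "learned" by observing how people use an app.
LEARNED_SCRIPTS = {
    "order_food": [
        Action("tap", "search_box"), Action("type", "search_box", "{dish}"),
        Action("tap", "first_result"), Action("tap", "checkout"),
        Action("confirm", "place_order"),
    ],
}

def handle_request(utterance: str) -> list[Action]:
    # A real system would parse intent with an LLM; here we fake it with a keyword.
    if "order" in utterance:
        dish = utterance.rsplit("order", 1)[1].strip()
        return [Action(a.kind, a.target, a.value.replace("{dish}", dish))
                for a in LEARNED_SCRIPTS["order_food"]]
    raise ValueError("no learned script for this request")

for step in handle_request("order pad thai"):
    print(step)  # the steps a real agent would replay inside the app
```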

Storing potentially dozens or hundreds of a person's passwords raises instant questions about privacy. But Rabbit claims it saves user credentials in a way that makes it impossible for the company, or anyone else, to access someone's personal information. The company says it will not sell or share user data with third parties "without your formal, explicit permission."

A Rabbit employee demonstrates the company's Rabbit R1 device. The company says more than 80,000 people have preordered the device for $199. (Stella Kalinina for NPR)

The company, which says more than 80,000 people have preordered the Rabbit R1, will start shipping the devices in the coming months.

"This is the first time that AI exists in a hardware format," said Ashley Bao, a spokeswoman for Rabbit at the company's Santa Monica, Calif., headquarters. "I think we've all been waiting for this moment. We've had our Alexa. We've had our smart speakers. But like none of them [can] perform tasks from end to end and bring words to action for you."

Excitement in Silicon Valley over AI agents is fueling an increasingly crowded field of gizmos and services. Google and Microsoft are racing to develop products that harness AI to automate busywork. The web browser Arc is building a tool that uses an AI agent to surf the web for you. Another startup, called Humane, has developed a wearable AI pin that projects a display image on a user's palm. It's supposed to assist with daily tasks and also make people pick up their phones less frequently.

Similarly, Rabbit claims its device will allow people to get things done without opening apps (you log in to all your various apps on a Rabbit web portal, so it uses your credentials to do things on your behalf).

To work, the Rabbit R1 has to be connected to Wi-Fi, but there is also a SIM card slot, in case people want to buy a separate data plan just for the gadget.

When asked why anyone would want to carry around a separate device just to do something your smartphone could do in 30 seconds, Rabbit spokesman Ryan Fenwick argued that using apps to place orders and make requests all day takes longer than we might imagine.

"We are looking at the entire process, end to end, to automate as much as possible and make these complex actions much quicker and much more intuitive than what's currently possible with multiple apps on a smartphone," Fenwick said.

ChatGPT's introduction in late 2022 set off a frenzy at companies in many industries trying to ride the latest tech industry wave. That chatbot exuberance is about to be transferred to the world of gadgets, said Duane Forrester, an analyst at the firm Yext.

Google and Microsoft are racing to develop products that harness AI to automate busywork, which might make other AI-powered assistants obsolete. (Stella Kalinina for NPR)

"Early on, with the unleashing of AI, every single product or service attached the letters "A" and "I" to whatever their product or service was," Forrester said. "I think we're going to end up seeing a version of that with hardware as well."

Forrester said an AI walkie-talkie might quickly become obsolete when companies like Apple and Google make their voice assistants smarter with the latest AI innovations.

"You don't need a different piece of hardware to accomplish this," he said. "What you need is this level of intelligence and utility in our current smartphones, and we'll get there eventually."

Researchers are worried that AI-powered personal assistant technology could eventually go wrong. (Stella Kalinina for NPR)

Researchers are worried about where such technology could eventually go awry.

The AI assistant purchasing the wrong nonrefundable flight, for instance, or sending a food order to someone else's house are among potential snafus that analysts have mentioned.

A 2023 paper by the Center for AI Safety warned against AI agents going rogue. It said if an AI agent is given an "open-ended goal" (say, maximize a person's stock market profits) without being told how to achieve that goal, it could go very wrong.

"We risk losing control over AIs as they become more capable. AIs could optimize flawed objectives, drift from their original goals, become power-seeking, resist shutdown, and engage in deception. We suggest that AIs should not be deployed in high-risk settings, such as by autonomously pursuing open-ended goals or overseeing critical infrastructure, unless proven safe," according to a summary of the paper.

At Rabbit's Santa Monica office, Rabbit R1 Creative Director Anthony Gargasz pitches the device as a social media reprieve. Use it to make a doctor's appointment or book a hotel without being sucked into an app's feed for hours.

"Absolutely no doomscrolling on the Rabbit R1," said Gargasz. "The scroll wheel is for intentional interaction."

His colleague Ashley Bao added that the whole point of the gadget is to "get things done efficiently." But she acknowledged there's a cutesy factor too, comparing it to the keychain-size electronic pets that were popular in the 1990s.

"It's like a Tamagotchi but with AI," she said.


Google Just Released Two Open AI Models That Can Run on Laptops – Singularity Hub

Last year, Google united its AI units in Google DeepMind and said it planned to speed up product development in an effort to catch up to the likes of Microsoft and OpenAI. The stream of releases in the last few weeks follows through on that promise.

Two weeks ago, Google announced the release of its most powerful AI to date, Gemini Ultra, and reorganized its AI offerings, including its Bard chatbot, under the Gemini brand. A week later, it introduced Gemini 1.5 Pro, an updated Pro model that largely matches Gemini Ultra's performance and also includes an enormous context window (the amount of data you can prompt it with) for text, images, and audio.

Today, the company announced two new models. Going by the name Gemma, the models are much smaller than Gemini Ultra, weighing in at 2 and 7 billion parameters respectively. Google said the models are strictly text-based (as opposed to multimodal models that are trained on a variety of data, including text, images, and audio), outperform similarly sized models, and can be run on a laptop, desktop, or in the cloud. Before training, Google stripped datasets of sensitive data like personal information. They also fine-tuned and stress-tested the trained models pre-release to minimize unwanted behavior.

The models were built and trained with the same technology used in Gemini, Google said, but in contrast, they're being released under an open license.

That doesn't mean they're open-source. Rather, the company is making the model weights available so developers can customize and fine-tune them. They're also releasing developer tools to help keep applications safe and make them compatible with major AI frameworks and platforms. Google says the models can be employed for responsible commercial usage and distribution, as defined in the terms of use, for organizations of any size.
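
In practice, "open weights" means a developer can download the checkpoint and run or fine-tune it locally. A minimal sketch with the Hugging Face transformers library is below; the repo id google/gemma-2b and its license-acceptance gate are assumptions based on common practice, not details from Google's announcement.

```python
# Minimal "open weights" sketch: download the 2B model and run it locally.
# Assumes the weights are hosted on Hugging Face as "google/gemma-2b" behind a
# license gate (huggingface-cli login first); the repo id is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b", torch_dtype=torch.bfloat16
)

inputs = tok("Open-weight language models let developers", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```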

If Gemini is aimed at OpenAI and Microsoft, Gemma likely has Meta in mind. Meta is championing a more open model for AI releases, most notably for its Llama 2 large language model. Though sometimes confused for an open-source model, Meta has not released the dataset or code used to train Llama 2. Other more open models, like the Allen Institute for AI's (AI2) recent OLMo models, do include training data and code. Google's Gemma release is more akin to Llama 2 than OLMo.

"[Open models have] become pretty pervasive now in the industry," Google's Jeanine Banks said in a press briefing. "And it often refers to open weights models, where there is wide access for developers and researchers to customize and fine-tune models but, at the same time, the terms of use (things like redistribution, as well as ownership of those variants that are developed) vary based on the model's own specific terms of use. And so we see some difference between what we would traditionally refer to as open source, and we decided that it made the most sense to refer to our Gemma models as open models."

Still, Llama 2 has been influential in the developer community, and open models from the likes of French startup Mistral and others are pushing performance toward state-of-the-art closed models, like OpenAI's GPT-4. Open models may make more sense in enterprise contexts, where developers can better customize them. They're also invaluable for AI researchers working on a budget. Google wants to support such research with Google Cloud credits: researchers can apply for up to $500,000 in credits toward larger projects.

Just how open AI should be is still a matter of debate in the industry.

Proponents of a more open ecosystem believe the benefits outweigh the risks. An open community, they say, can not only innovate at scale, but also better understand, reveal, and solve problems as they emerge. OpenAI and others have argued for a more closed approach, contending the more powerful the model, the more dangerous it could be out in the wild. A middle road might allow an open AI ecosystem but more tightly regulate it.

What's clear is both closed and open AI are moving at a quick pace. We can expect more innovation from big companies and open communities as the year progresses.

Image Credit: Google


Intel Launches World’s First Systems Foundry Designed for the AI Era – Investor Relations :: Intel Corporation (INTC)

Announced at Intel Foundry Direct Connect, Intel's extended process technology roadmap adds Intel 14A to the company's leading-edge node plan, in addition to several specialized node evolutions and new Intel Foundry Advanced System Assembly and Test capabilities. Intel also affirmed that its ambitious five-nodes-in-four-years process roadmap remains on track and will deliver the industry's first backside power solution. (Credit: Intel Corporation)

Intel announces expanded process roadmap, customers and ecosystem partners to deliver on ambition to be the No. 2 foundry by 2030.

Company hosts Intel Foundry event featuring U.S. Commerce Secretary Gina Raimondo, Arm CEO Rene Haas, OpenAI CEO Sam Altman and others.

NEWS HIGHLIGHTS

SAN JOSE, Calif.--(BUSINESS WIRE)-- Intel Corp. (INTC) today launched Intel Foundry as a more sustainable systems foundry business designed for the AI era and announced an expanded process roadmap designed to establish leadership into the latter part of this decade. The company also highlighted customer momentum and support from ecosystem partners, including Synopsys, Cadence, Siemens and Ansys, who outlined their readiness to accelerate Intel Foundry customers' chip designs with tools, design flows and IP portfolios validated for Intel's advanced packaging and Intel 18A process technologies.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20240221189319/en/


The announcements were made at Intel's first foundry event, Intel Foundry Direct Connect, where the company gathered customers, ecosystem companies and leaders from across the industry. Among the participants and speakers were U.S. Secretary of Commerce Gina Raimondo, Arm CEO Rene Haas, Microsoft CEO Satya Nadella, OpenAI CEO Sam Altman and others.

More: Intel Foundry Direct Connect (Press Kit)

"AI is profoundly transforming the world and how we think about technology and the silicon that powers it," said Intel CEO Pat Gelsinger. "This is creating an unprecedented opportunity for the world's most innovative chip designers and for Intel Foundry, the world's first systems foundry for the AI era. Together, we can create new markets and revolutionize how the world uses technology to improve people's lives."

Process Roadmap Expands Beyond 5N4Y

Intel's extended process technology roadmap adds Intel 14A to the company's leading-edge node plan, in addition to several specialized node evolutions. Intel also affirmed that its ambitious five-nodes-in-four-years (5N4Y) process roadmap remains on track and will deliver the industry's first backside power solution. Company leaders expect Intel will regain process leadership with Intel 18A in 2025.

The new roadmap includes evolutions for Intel 3, Intel 18A and Intel 14A process technologies. It includes Intel 3-T, which is optimized with through-silicon vias for 3D advanced packaging designs and will soon reach manufacturing readiness. Also highlighted are mature process nodes, including new 12 nanometer nodes expected through the joint development with UMC announced last month. These evolutions are designed to enable customers to develop and deliver products tailored to their specific needs. Intel Foundry plans a new node every two years and node evolutions along the way, giving customers a path to continuously evolve their offerings on Intel's leading process technology.

Intel also announced the addition of Intel Foundry FCBGA 2D+ to its comprehensive suite of ASAT offerings, which already include FCBGA 2D, EMIB, Foveros and Foveros Direct.

Microsoft Design on Intel 18A Headlines Customer Momentum

Customers are supporting Intel's long-term systems foundry approach. During Pat Gelsinger's keynote, Microsoft Chairman and CEO Satya Nadella stated that Microsoft has chosen a chip design it plans to produce on the Intel 18A process.

"We are in the midst of a very exciting platform shift that will fundamentally transform productivity for every individual organization and the entire industry," Nadella said. "To achieve this vision, we need a reliable supply of the most advanced, high-performance and high-quality semiconductors. That's why we are so excited to work with Intel Foundry, and why we have chosen a chip design that we plan to produce on the Intel 18A process."

Intel Foundry has design wins across foundry process generations, including Intel 18A, Intel 16 and Intel 3, along with significant customer volume on Intel Foundry ASAT capabilities, including advanced packaging.

In total, across wafer and advanced packaging, Intel Foundry's expected lifetime deal value is greater than $15 billion.

IP and EDA Vendors Declare Readiness for Intel Process and Packaging Designs

Intellectual property and electronic design automation (EDA) partners Synopsys, Cadence, Siemens, Ansys, Lorentz and Keysight disclosed tool qualification and IP readiness to enable foundry customers to accelerate advanced chip designs on Intel 18A, which offers the foundry industry's first backside power solution. These companies also affirmed EDA and IP enablement across Intel node families.

At the same time, several vendors announced plans to collaborate on assembly technology and design flows for Intel's embedded multi-die interconnect bridge (EMIB) 2.5D packaging technology. These EDA solutions will ensure faster development and delivery of advanced packaging solutions for foundry customers.

Intel also unveiled an "Emerging Business Initiative" that showcases a collaboration with Arm to provide cutting-edge foundry services for Arm-based system-on-chips (SoCs). This initiative presents an important opportunity for Arm and Intel to support startups in developing Arm-based technology and offering essential IP, manufacturing support and financial assistance to foster innovation and growth.

Systems Approach Differentiates Intel Foundry in the AI Era

Intel's systems foundry approach offers full-stack optimization, from the factory network to software. Intel and its ecosystem empower customers to innovate across the entire system through continuous technology improvements, reference designs and new standards.

Stuart Pann, senior vice president of Intel Foundry at Intel, said: "We are offering a world-class foundry, delivered from a resilient, more sustainable and secure source of supply, and complemented by unparalleled systems of chips capabilities. Bringing these strengths together gives customers everything they need to engineer and deliver solutions for the most demanding applications."

Global, Resilient, More Sustainable and Trusted Systems Foundry

Resilient supply chains must also be increasingly sustainable, and today Intel shared its goal of becoming the industry's most sustainable foundry. In 2023, preliminary estimates show that Intel used 99% renewable electricity in its factories worldwide. Today, the company redoubled its commitment to achieving 100% renewable electricity worldwide, net-positive water and zero waste to landfills by 2030. Intel also reinforced its commitment to net-zero Scope 1 and Scope 2 GHG emissions by 2040 and net-zero upstream Scope 3 emissions by 2050.

Forward-Looking Statements

This release contains forward-looking statements, including with respect to Intel's:

Such statements involve many risks and uncertainties that could cause our actual results to differ materially from those expressed or implied, including those associated with:

All information in this press release reflects Intel management's views as of the date hereof unless an earlier date is specified. Intel does not undertake, and expressly disclaims any duty, to update such statements, whether as a result of new information, new developments, or otherwise, except to the extent that disclosure may be required by law.

About Intel

Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore's Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers' greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel's innovations, go to newsroom.intel.com and intel.com.

© Intel Corporation. Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

View source version on businesswire.com: https://www.businesswire.com/news/home/20240221189319/en/

John Hipsher 1-669-223-2416 john.hipsher@intel.com

Robin Holt 1-503-616-1532 robin.holt@intel.com

Source: Intel Corp.

Released Feb 21, 2024 11:30 AM EST


Generative AI’s environmental costs are soaring and mostly secret – Nature.com

Last month, OpenAI chief executive Sam Altman finally admitted what researchers have been saying for years: that the artificial intelligence (AI) industry is heading for an energy crisis. It's an unusual admission. At the World Economic Forum's annual meeting in Davos, Switzerland, Altman warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope. "There's no way to get there without a breakthrough," he said.

I'm glad he said it. I've seen consistent downplaying and denial about the AI industry's environmental costs since I started publishing about them in 2018. Altman's admission has got researchers, regulators and industry titans talking about the environmental impact of generative AI.

So what energy breakthrough is Altman banking on? Not the design and deployment of more sustainable AI systems, but nuclear fusion. He has skin in that game, too: in 2021, Altman started investing in fusion company Helion Energy in Everett, Washington.


Most experts agree that nuclear fusion won't contribute significantly to the crucial goal of decarbonizing by mid-century to combat the climate crisis. Helion's most optimistic estimate is that by 2029 it will produce enough energy to power 40,000 average US households; one assessment suggests that ChatGPT, the chatbot created by OpenAI in San Francisco, California, is already consuming the energy of 33,000 homes. It's estimated that a search driven by generative AI uses four to five times the energy of a conventional web search. Within years, large AI systems are likely to need as much energy as entire nations.

And it's not just energy. Generative AI systems need enormous amounts of fresh water to cool their processors and generate electricity. In West Des Moines, Iowa, a giant data-centre cluster serves OpenAI's most advanced model, GPT-4. A lawsuit by local residents revealed that in July 2022, the month before OpenAI finished training the model, the cluster used about 6% of the district's water. As Google and Microsoft prepared their Bard and Bing large language models, both had major spikes in water use: increases of 20% and 34%, respectively, in one year, according to the companies' environmental reports. One preprint [1] suggests that, globally, the demand for water for AI could be half that of the United Kingdom by 2027. In another [2], Facebook AI researchers called the environmental effects of the industry's pursuit of scale "the elephant in the room".

Rather than pipe-dream technologies, we need pragmatic actions to limit AI's ecological impacts now.

There's no reason this can't be done. The industry could prioritize using less energy, build more efficient models and rethink how it designs and uses data centres. As the BigScience project in France demonstrated with its BLOOM model [3], it is possible to build a model of a similar size to OpenAI's GPT-3 with a much lower carbon footprint. But that's not what's happening in the industry at large.

It remains very hard to get accurate and complete data on environmental impacts. The full planetary costs of generative AI are closely guarded corporate secrets. Figures rely on lab-based studies by researchers such as Emma Strubell [4] and Sasha Luccioni [3]; limited company reports; and data released by local governments. At present, there's little incentive for companies to change.


But at last, legislators are taking notice. On 1 February, US Democrats led by Senator Ed Markey of Massachusetts introduced the Artificial Intelligence Environmental Impacts Act of 2024. The bill directs the National Institute of Standards and Technology to collaborate with academia, industry and civil society to establish standards for assessing AI's environmental impact, and to create a voluntary reporting framework for AI developers and operators. Whether the legislation will pass remains uncertain.

Voluntary measures rarely produce a lasting culture of accountability and consistent adoption, because they rely on goodwill. Given the urgency, more needs to be done.

To truly address the environmental impacts of AI requires a multifaceted approach including the AI industry, researchers and legislators. In industry, sustainable practices should be imperative, and should include measuring and publicly reporting energy and water use; prioritizing the development of energy-efficient hardware, algorithms, and data centres; and using only renewable energy. Regular environmental audits by independent bodies would support transparency and adherence to standards.

Researchers could optimize neural network architectures for sustainability and collaborate with social and environmental scientists to guide technical designs towards greater ecological sustainability.

Finally, legislators should offer both carrots and sticks. At the outset, they could set benchmarks for energy and water use, incentivize the adoption of renewable energy and mandate comprehensive environmental reporting and impact assessments. The Artificial Intelligence Environmental Impacts Act is a start, but much more will be needed and the clock is ticking.

K.C. is employed by both USC Annenberg and Microsoft Research, which makes generative AI systems.


Energy companies tap AI to detect defects in an aging grid – E&E News by POLITICO

A helicopter loaded with cameras and sensors sweeps over a utility's high-voltage transmission line in the southeastern United States.

High-resolution cameras record images of cables, connections and towers. Artificial intelligence tools search for cracks and flaws that could be overlooked by the naked eye: the worn-out component that could spark the next wildfire.

"We have trained a lot of AI models to recognize defects," said Marion Baroux, a Germany-based business developer for Siemens Energy, which built the helicopter scanning and analysis technology.

Drones have been inspecting power lines for a decade. Today, the rapid advancement of AI and machine-learning technology has opened the door to faster detection of potential failures in aging power lines, guiding transmission owners on how to upgrade the grid to meet clean energy and extreme weather challenges.

Automating inspections is a first step in a still uncharted future for AI adoption in the electric power sector, echoing the high-stakes international debate over the risks and potential of AI technology.

President Joe Biden's executive order on AI last October emphasized caution. Safety requires "robust, reliable, repeatable, and standardized evaluations of AI systems," the order said, as well as "policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use."

There is also a case for accelerating AIs adoption, according to Department of Energy experts speaking at a recent conference.

Balancing supply and demand on the grid is becoming more complex as renewable generation replaces fossil power plants.

"AI has the potential to help us operate the grid with much higher percentages of renewables," said Andrew Bochman, senior grid strategist at the Idaho National Laboratory.

But first, AI must earn the confidence of engineers who are responsible for ensuring utilities face as few risks as possible.

"Obviously, there are a lot of technical concerns about how these systems work and what we can trust them to do," said Christopher Lamb, a senior cybersecurity researcher at Sandia National Laboratories in New Mexico.

"There are definitely risks associated with AI," said Colin Ponce, a computational mathematician at Lawrence Livermore National Laboratory in California. "A lot of utilities have a certain amount of hesitation about it because they don't really understand what it will do."

The need for transmission owners and operators to find and prevent breaks in aging power line components was driven home tragically in California's fatal Camp Fire in 2018.

A 99-year-old metal hook supporting a high-voltage cable on a Pacific Gas & Electric power line wore through, allowing the line to hit the tower and causing a short circuit whose sparks ignited the fire. The fire claimed 85 lives.

Baroux said Siemens Energy's system may or may not have prevented the Camp Fire. But the purpose is to find the transmission line components, like the failed PG&E hook, that are most in need of replacement.

Another California catastrophe demonstrates a case for that capability.

On July 13, 2021, a California grid troubleman driving through California's rugged, remote Sierra Nevada region spotted a 65-foot-tall Douglas fir that had fallen onto a PG&E power line. According to his court testimony, there was nothing he could do to prevent the spread of what would be called the Dixie Fire, which burned for three months, consuming nearly 1 million acres.

Faced with the threat of more impacts between dead or dying trees and its lines, PG&E has received state regulators' permission to bury 1,230 miles of its power lines at a cost of roughly $3 million per mile.

The flying inspections produce thousands of gigabytes of data per mile, which would overwhelm human investigators. "We will run AI models on data, then the customer-operators will review these results to look for the most urgent actions to take. The human remains the decision-maker, always," she said. "But this saves them time."

Siemens Energy declined to discuss the system's price tag and would not identify the utility in the Southeast using it. The service is in use at E.ON Group energy operations in Germany, at French grid operator RTE, and at TenneT, which runs the Netherlands' network, a Siemens Energy spokesperson said.

In addition to the helicopter's camera array, its instrument pod also carries sensors that detect wasteful or damaging electrical current leaks in lines. Lidar laser-scanning distance sensors are also aboard to create 3D views of towers and nearby vegetation, alerting operators to potential threats from tree impacts with lines.
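
The article doesn't describe Siemens Energy's models, but the standard pattern for this kind of job is an image classifier (or object detector) fine-tuned on labeled inspection photos and used to flag frames for human review, which matches Baroux's point that people stay in charge of decisions. A sketch under those assumptions, with a hypothetical weights file and class list:

```python
# Hypothetical defect-flagging sketch: a fine-tuned ResNet classifier scores each
# inspection photo and queues uncertain or defective frames for a human inspector.
# The weights file and class list are invented; this is not Siemens Energy's system.
import torch
from PIL import Image
from torchvision import models, transforms

CLASSES = ["ok", "cracked_insulator", "corroded_hook", "frayed_conductor"]

model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, len(CLASSES))
model.load_state_dict(torch.load("defect_classifier.pt"))  # hypothetical weights
model.eval()

prep = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = prep(Image.open("tower_00123.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=1)[0]

# Flag anything that isn't confidently "ok"; the model prioritizes, people decide.
if probs[0] < 0.9:
    label = CLASSES[int(probs.argmax())]
    print(f"review needed: {label} ({probs.max().item():.0%} confidence)")
```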

The possibility of applying AI and other advanced computing solutions to grid operations is the goal of another DOE project called HIPPO, for high-performance power grid optimization. HIPPO's lead partners are the Midcontinent Independent System Operator (MISO); DOE's Pacific Northwest National Laboratory; General Electric; and Gurobi Optimization, a Beaverton, Oregon, technology firm.

HIPPO has designed high-speed computing algorithms employing machine learning tools to improve the speed and accuracy of power plant scheduling decisions by MISO, the grid operator in 15 central U.S. states and Canada's Manitoba province.

Every day, MISO operators must make decisions about which electricity generating resources will run each hour of the following day, based on the generators' competing power prices and transmission costs. The growth of wind and solar power, microgrids, customers' rooftop solar power and electric vehicle charging is making those decisions harder, as forecasting weather impacts on the grid also becomes more challenging.

HIPPO's heavier computing power and complex calculations produce answers 35 times faster than current systems, allowing greener and more sustainable grid operations, MISO reported last year.
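
HIPPO tackles full unit commitment at MISO scale; as a toy stand-in for the underlying decision, here is its simplest relative, economic dispatch: meet forecast demand at least cost given each plant's price and capacity. All numbers below are invented, and real unit commitment adds the on/off variables, ramp limits and network constraints that make the problem genuinely hard.

```python
# Toy economic dispatch as a linear program (pip install scipy).
# A drastically simplified relative of the unit-commitment problem HIPPO solves.
from scipy.optimize import linprog

cost = [20.0, 35.0, 90.0]     # $/MWh for three plants (hypothetical)
capacity = [400, 600, 300]    # MW upper bound per plant
demand = 900                  # MW to serve this hour

res = linprog(
    c=cost,                                  # minimize total generation cost
    A_eq=[[1, 1, 1]], b_eq=[demand],         # generation must equal demand
    bounds=[(0, cap) for cap in capacity],   # each plant within its capacity
)

for mw, price in zip(res.x, cost):
    print(f"dispatch {mw:6.1f} MW at ${price}/MWh")
print(f"total cost: ${res.x @ cost:,.0f}")   # cheapest plants fill demand first
```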

"One of the advantages of HIPPO is its flexibility," said Feng Pan, PNNL research scientist and the project's principal investigator. In addition to scheduling generation and confirming grid stability, HIPPO will enable operators to run what-if scenarios involving battery storage and customer-based resources, he said in an email.

HIPPO is easing its way into the MISO operation. The project, launched with a 2015 grant from DOE's Advanced Research Projects Agency-Energy, is not yet scheduled for full deployment. It will assist operators, not take over, Pan said.

For AI systems to solve problems, they will need trusted data about grid operations, said Lamb, the senior researcher at Sandia.

"Are there biases that could get cooked into algorithms that could create serious risks to operation reliability, and if so, what might they be?" Lamb asked.

Data issues aren't waiting for AI. Even without the complications AI may bring, operators of the principal Texas grid were dangerously in the dark during Winter Storm Uri in 2021.

"If an adversary can insert data into your [computer] training pipeline, there are ways they can poison your data set and cause a variety of problems," Lawrence Livermore's Ponce said, adding that designing defenses against rogue data threats is a major priority.

Ponce and Lamb came down on AI's side at the conference.

"There is a bunch of hype around AI that is really undeserved," Lamb said. "Operators understand their businesses. They are going to be making responsible decisions, and frankly I trust them to do so."

Grid operators should be able to maximize benefits and minimize risks provided they invest wisely in safety technology, he said. "It doesn't mean the risks will be zero."

"If we get too scared of AI and completely put the brakes on, I fear that will hinder our ability to respond to real threats and significant risk we already have evidence for, like climate change," Ponce said.

"There's a lot of doom and a lot of gloom about the application of AI," Lamb said. "Don't be scared."


Tor Books Criticized for Use of AI-Generated Art in ‘Gothikana’ Cover Design – Publishers Weekly

A number of readers are calling out Tor Books over the cover art of Gothikana by RuNyx, published by Tor's romance imprint Bramble on January 23, which incorporates AI-generated assets in its design.

On February 9, BookTok influencer @emmaskies identified two Adobe Stock images that had been used for the book's cover, both of which include the phrase "Generative AI" in their titles and are flagged on the Adobe Stock website as "generated with AI."

"We cannot allow AI-generated anything to infiltrate creative spaces because they are not just going to stop at covers," says @emmaskies in the video. She goes on to suggest that the use of such images is a slippery slope, imagining a publishing industry in the near future in which AI-generated images supplant cover artists, AI language models replace editorial staff, and AI models make acquisition judgements.

The video has since garnered more than 64,000 views. Her initial analysis of the cover, in which she alleged but had not yet confirmed the use of AI-generated images, received more than 300,000 views and 35,000 likes.

This is not the first time that Tor has attracted criticism online for using AI-generated assets in book cover designs. When Tor unveiled the cover of Christopher Paolini's sci-fi thriller Fractal Noise in November 2022, the publisher was quickly met with criticism over the use of an AI-generated asset, which had been posted to Shutterstock and created with Midjourney. The book was subsequently review-bombed on Goodreads.

"During the process of creating this cover, we licensed an image from a reputable stock house. We were not aware that the image may have been created by AI," Tor Books said in a statement posted to X on December 15. "Our in-house designer used the licensed image to create the cover, which was presented to Christopher for approval." Tor decided to move ahead with the cover "due to production constraints."

In response to the statement, Eisner Award-winning illustrator Trung Le Nguyen commented, "I might not be able to judge a book by its cover, but I sure as hell will judge its publisher."

Tor is not the only publisher to catch heat for using AI-generated art on book covers. Last spring, The Verge reported on the controversy over the U.K. paperback edition of Sarah J. Maas's House of Earth and Blood, published by Bloomsbury, which credited Adobe Stock for the illustration of a wolf on the book's cover; the illustration had been marked as AI-generated on Adobe's website. Bloomsbury later claimed that its in-house design team was "unaware" that the licensed image had been created by AI.

Gothikana was originally self-published by author RuNyx in June 2021, and was reissued by Bramble in a hardcover edition featuring sprayed edges, a foil case stamp, and detailed endpapers. Bramble did not respond to PW's request for comment by press time.


Google launches Gemini Business AI, adds $20 to the $6 Workspace bill – Ars Technica


Google went ahead with plans to launch Gemini for Workspace today. The big news is the pricing information, and you can see the Workspace pricing page is new, with every plan offering a "Gemini add-on." Google's old AI-for-Business plan, "Duet AI for Google Workspace," is dead, though it never really launched anyway.

Google has a blog post explaining the changes. Google Workspace starts at $6 per user per month for the "Starter" package, and the AI "Add-on," as Google is calling it, is an extra $20 monthly cost per user (all of these prices require an annual commitment). That is a massive price increase over the normal Workspace bill, but AI processing is expensive. Google says this business package will get you "Help me write in Docs and Gmail, Enhanced Smart Fill in Sheets and image generation in Slides." It also includes the "1.0 Ultra" model for the Gemini chatbot; there's a full feature list here. This $20 plan is subject to a usage limit for Gemini AI features of "1,000 times per month."


Google's second plan is "Gemini Enterprise," which doesn't come with any usage limits, but it's also only available through a "contact us" link and not a normal checkout procedure. Enterprise is $30 per user per month, and it "includes additional capabilities for AI-powered meetings, where Gemini can translate closed captions in more than 100 language pairs, and soon even take meeting notes."
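
Putting the article's numbers side by side gives a rough per-seat comparison. One assumption to flag: the sketch treats Gemini Enterprise, like the Business add-on, as a per-user charge on top of the $6 Starter base, which the article's framing suggests but doesn't spell out.

```python
# Back-of-the-envelope cost math from the figures in this article.
USERS = 25  # example team size

starter_plus_gemini = (6 + 20) * USERS * 12  # $26/user/month, annual commitment
enterprise = (6 + 30) * USERS * 12           # assumes $30 add-on atop Starter

print(f"Starter + Gemini add-on: ${starter_plus_gemini:,}/year")  # $7,800
print(f"Starter + Gemini Enterprise: ${enterprise:,}/year")       # $10,800
```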


AI and You: OpenAI’s Sora Previews Text-to-Video Future, First Ivy League AI Degree – CNET

AI developments are happening pretty fast. If you don't stop and look around once in a while, you could miss them.

Fortunately, I'm looking around for you and what I saw this week is that competition between OpenAI, maker of ChatGPT and Dall-E, and Google continues to heat up in a way that's worth paying attention to.

A week after updating its Bard chatbot and changing the name to Gemini, Google's DeepMind AI subsidiary previewed the next version of its generative AI chatbot. DeepMind told CNET's Lisa Lacy that Gemini 1.5 will be rolled out "slowly" to regular people who sign up for a wait list and will be available now only to developers and enterprise customers.

Gemini 1.5 Pro, Lacy reports, is "as capable as" the Gemini 1.0 Ultra model, which Google announced on Feb. 8. The 1.5 Pro model has a win rate -- the share of benchmarks on which it outperforms another model -- of 87% against the 1.0 Pro and 55% against the 1.0 Ultra. So the 1.5 Pro is essentially an upgraded version of the best model available now.

Gemini 1.5 Pro can ingest video, images, audio and text to answer questions, added Lacy. Oriol Vinyals, vice president of research at Google DeepMind and co-lead of Gemini, described 1.5 as a "research release" and said the model is "very efficient" thanks to a unique architecture that can answer questions by zeroing in on expert sources in that particular subject rather than seeking the answer from all possible sources.

Meanwhile, OpenAI announced a new text-to-video model called Sora that's capturing a lot of attention because of the photorealistic videos it's able to generate. Sora can "create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions." Following up on a promise it made last week, along with Google and Meta, to watermark AI-generated images and video, OpenAI says it's also creating tools to detect videos created with Sora so they can be identified as AI-generated.

Google and Meta have also announced their own gen AI text-to-video creators.

Sora, which means "sky" in Japanese, is also being called experimental, with OpenAI limiting access for now to so-called "red teamers," security experts and researchers who will assess the tool's potential harms or risks. That follows through on promises made as part of President Joe Biden's AI executive order last year, asking developers to submit the results of safety checks on their gen AI chatbots before releasing them publicly. OpenAI said it's also looking to get feedback on Sora from some visual artists, designers and filmmakers.

How do the photorealistic videos look? Pretty realistic. I agree with The New York Times, which described the short demo videos -- "of wooly mammoths trotting through a snowy meadow, a monster gazing at a melting candle and a Tokyo street scene seemingly shot by a camera swooping across the city" -- as "eye popping."

MIT Technology Review, which also got a preview of Sora, said the "tech has pushed the envelope of what's possible with text-to-video generation." Meanwhile, The Washington Post noted Sora could exacerbate an already growing problem with video deepfakes, which have been used to "deceive voters" and scam consumers.

One X commentator summarized it this way: "Oh boy here we go what is real anymore." And OpenAI CEO Sam Altman called the news about its video generation model a "remarkable moment."

You can see the four examples of what Sora can produce on OpenAI's intro site, which notes that the tool is "able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world. The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions."

But Sora has its weaknesses, which is why OpenAI hasn't yet said whether it will actually be incorporated into its chatbots. Sora "may struggle with accurately simulating the physics of a complex scene and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark. The model may also confuse spatial details of a prompt, for example, mixing up left and right."

All of this is to remind us that tech is a tool -- and that it's up to us humans to decide how, when, where and why to use that technology. In case you didn't see it, the trailer for the new Minions movie (Despicable Me 4: Minion Intelligence) makes this point cleverly, with its sendup of gen AI deepfakes and Jon Hamm's voiceover of how "artificial intelligence is changing how we see the world, transforming the way we do business."

"With artificial intelligence," Hamm adds over the minions' laughter, "the future is in good hands."

Here are the other doings in AI worth your attention.

Twenty tech companies, including Adobe, Amazon, Anthropic, ElevenLabs, Google, IBM, Meta, Microsoft, OpenAI, Snap, TikTok and X, agreed at a security conference in Munich that they will voluntarily adopt "reasonable precautions" to guard against AI tools being used to mislead or deceive voters ahead of elections.

"The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes," the text of the accord says, according to NPR. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

But the agreement is "largely symbolic," the Associated Press reported, noting that "reasonable precautions" is a little vague.

"The companies aren't committing to ban or remove deepfakes," the AP said. "Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide 'swift and proportionate responses' when that content starts to spread."

AI has already been used to try to trick voters in the US and abroad. Days before the New Hampshire presidential primary, fraudsters sent out an AI robocall that mimicked President Biden's voice, asking voters not to vote in the primary. That prompted the Federal Communications Commission this month to make AI-generated robocalls illegal. The AP said that "Just days before Slovakia's elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media."

"Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own," Nick Clegg, president of global affairs for Meta, told the Associated Press in an interview before the summit.

Over 4 billion people are set to vote in key elections this year in more than 40 countries, including the US, The Hill reported.

If you're concerned about how deepfakes may be used to scam you or your family members -- say, someone calls your grandfather, pretends to be you and asks him for money -- Bloomberg reporter Rachel Metz has a good idea. She suggests it may be time for us all to create a "family password," a safe word or phrase to share with our family or personal network, that we can ask for to make sure we're talking to the person we think we're talking to.

"Extortion has never been easier," Metz reports. "The kind of fakery that used to take time, money and technical know-how can now be accomplished quickly and cheaply by nearly anyone."

That's where family passwords come in, since they're "simple and free," Metz said. "Pick a word that you and your family (or another trusted group) can easily remember. Then, if one of those people reaches out in a way that seems a bit odd -- say, they're suddenly asking you to deliver 5,000 gold bars to a P.O. Box in Alaska -- first ask them what the password is."

How do you pick a good password? She offers a few suggestions, including using a word you don't say frequently and that's not likely to come up in casual conversations. Also, "avoid making the password the name of a pet, as those are easily guessable."
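The verbal protocol maps neatly onto a classic software pattern: challenge the caller, then check the answer against a stored secret. Here's a loose Python analogy -- the phrase, salt and iteration count are all placeholders, and the real advice is for phone calls, not code:

```python
import hashlib
import hmac

# Store only a salted hash of the shared phrase, never the phrase itself.
SALT = b"pick-your-own-salt"
STORED = hashlib.pbkdf2_hmac("sha256", b"rutabaga waltz", SALT, 100_000)

def caller_knows_password(answer: str) -> bool:
    attempt = hashlib.pbkdf2_hmac("sha256", answer.encode(), SALT, 100_000)
    return hmac.compare_digest(attempt, STORED)  # constant-time comparison

print(caller_knows_password("rutabaga waltz"))  # True
print(caller_knows_password("fluffy"))          # False: pet names are guessable
```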

Hiring experts have told me it's going to take years to build an AI-educated workforce, considering that gen AI tools like ChatGPT weren't released until late 2022. So it makes sense that learning platforms like Coursera, Udemy, Udacity, Khan Academy and many universities are offering online courses and certificates to upskill today's workers. Now the University of Pennsylvania's School of Engineering and Applied Science said it's the first Ivy League school to offer an undergraduate major in AI.

"The rapid rise of generative AI is transforming virtually every aspect of life: health, energy, transportation, robotics, computer vision, commerce, learning and even national security," Penn said in a Feb. 13 press release. "This produces an urgent need for innovative, leading-edge AI engineers who understand the principles of AI and how to apply them in a responsible and ethical way."

The bachelor of science in AI offers coursework in machine learning, computing algorithms, data analytics and advanced robotics and will have students address questions about "how to align AI with our social values and how to build trustworthy AI systems," Penn professor Zachary Ives said.

"We are training students for jobs that don't yet exist in fields that may be completely new or revolutionized by the time they graduate," added Robert Ghrist, associate dean of undergraduate education in Penn Engineering.

FYI, the cost of an undergraduate education at Penn, which typically spans four years, is over $88,000 per year (including housing and food).

For those not heading to college or who haven't signed up for any of those online AI certificates, their AI upskilling may come courtesy of their current employer. The Boston Consulting Group, for its Feb. 9 report, What GenAI's Top Performers Do Differently, surveyed over 150 senior executives across 10 sectors.

Bottom line: as companies consider how gen AI will affect their businesses, they're starting to look at existing job descriptions and career trajectories -- and at the gaps they're seeing in the workforce. They've also started offering gen AI training programs. But these efforts don't lessen the need for today's workers to get up to speed on gen AI and how it may change the way they work -- and the work they do.

In related news, software maker SAP looked at Google search data to see which states in the US were most interested in "AI jobs and AI business adoption."

Unsurprisingly, California ranked first in searches for "open AI jobs" and "machine learning jobs." Washington state came in second place, Vermont in third, Massachusetts in fourth and Maryland in fifth.

California, "home to Silicon Valley and renowned as a global tech hub, shows a significant interest in AI and related fields, with 6.3% of California's businesses saying that they currently utilize AI technologies to produce goods and services and a further 8.4% planning to implement AI in the next six months, a figure that is 85% higher than the national average," the study found.

Virginia, New York, Delaware, Colorado and New Jersey, in that order, rounded out the top 10.

Over the past few months, I've highlighted terms you should know if you want to be knowledgeable about what's happening as it relates to gen AI. So I'm going to take a step back this week and provide this vocabulary review for you, with a link to the source of the definition.

It's worth a few minutes of your time to know these seven terms.

Anthropomorphism: The tendency for people to attribute humanlike qualities or characteristics to an AI chatbot. For example, you may assume it's kind or cruel based on its answers, even though it isn't capable of having emotions, or you may believe the AI is sentient because it's very good at mimicking human language.

Artificial general intelligence (AGI): A description of programs that are as capable as -- or even more capable than -- a human. While full general intelligence is still off in the future, models are growing in sophistication. Some have demonstrated skills across multiple domains ranging from chemistry to psychology, with task performance paralleling human benchmarks.

Generative artificial intelligence (gen AI): Technology that creates content -- including text, images, video and computer code -- by identifying patterns in large quantities of training data and then creating original material that has similar characteristics.

Hallucination: Hallucinations are unexpected and incorrect responses from AI programs. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, lie about data you ask it to analyze or make up facts about events that aren't in its training data. It's not fully understood why this happens, but it can arise from sparse data, information gaps and misclassification.

Large language model (LLM): A type of AI model that can generate human-like text and is trained on a broad dataset.

Prompt engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAI's ChatGPT, describing the tasks users feed into the algorithm. (e.g. "Give me five popular baby names.")

Temperature: In simple terms, model temperature is a parameter that controls how random a language model's output is. A higher temperature means the model takes more risks, giving you a diverse mix of words. On the other hand, a lower temperature makes the model play it safe, sticking to more focused and predictable responses.

Model temperature has a big impact on the quality of the text generated in a bunch of [natural language processing] tasks, like text generation, summarization and translation.

The tricky part is finding the perfect model temperature for a specific task. It's kind of like Goldilocks trying to find the perfect bowl of porridge -- not too hot, not too cold, but just right. The optimal temperature depends on things like how complex the task is and how much creativity you're looking for in the output.
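In implementation terms, temperature is just a divisor applied to the model's raw scores (logits) before they're turned into probabilities. Here's a minimal, self-contained sketch in plain NumPy -- the logits are made up for illustration:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample one token index from raw model scores (logits).

    A low temperature sharpens the distribution (safer, more
    predictable picks); a high temperature flattens it (riskier,
    more diverse picks).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5, 0.1]  # made-up scores for a 4-token vocabulary
print(sample_with_temperature(logits, temperature=0.2))  # almost always 0
print(sample_with_temperature(logits, temperature=2.0))  # much more varied
```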

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

Read the original post:

AI and You: OpenAI's Sora Previews Text-to-Video Future, First Ivy League AI Degree - CNET

Posted in Ai

Can AI help us forecast extreme weather? – Vox.com

We've learned how to predict weather over the past century by understanding the science that governs Earth's atmosphere and harnessing enough computing power to generate global forecasts. But in just the past three years, AI models from companies like Google, Huawei, and Nvidia that use historical weather data have been releasing forecasts rivaling those created through traditional forecasting methods.

This video explains the promise and challenges of these new models built on artificial intelligence rather than numerical forecasting, particularly the ability to foresee extreme weather.


You can find this video and all of Vox's videos on YouTube.

This video is sponsored by Microsoft Copilot for Microsoft 365. Microsoft has no editorial influence on our videos, but their support makes videos like these possible.


Read more:

Can AI help us forecast extreme weather? - Vox.com

Posted in Ai

Scale AI to set the Pentagon’s path for testing and evaluating large language models – DefenseScoop

The Pentagon's Chief Digital and Artificial Intelligence Office (CDAO) tapped Scale AI to produce a trustworthy means for testing and evaluating large language models that can support and potentially disrupt military planning and decision-making.

According to a statement the San Francisco-based company shared exclusively with DefenseScoop, the outcomes of this new one-year contract will supply the CDAO with a framework to deploy AI safely by measuring model performance, offering real-time feedback for warfighters, and creating specialized public sector evaluation sets to test AI models for military support applications, such as organizing the findings from after action reports.

Large language models and the overarching field of generative AI include emerging technologies that can generate (convincing but not always accurate) text, software code, images and other media, based on prompts from humans.

This rapidly evolving realm holds a lot of promise for the Department of Defense, but also poses unknown and serious potential challenges. Last year, Pentagon leadership launched Task Force Lima within the CDAO's Algorithmic Warfare Directorate to accelerate its components' grasp, assessment and deployment of generative artificial intelligence.

The department has long leaned on test-and-evaluation (T&E) processes to assess and ensure its systems, platforms and technologies perform in a safe and reliable manner before they are fully fielded. But AI safety standards and policies have not yet been universally set, and the complexities and uncertainties associated with large language models make T&E even more complicated when it comes to generative AI.

Broadly, T&E enables experts to determine the baseline performance of a specific model.

For instance, to test and evaluate a computer vision algorithm that differentiates between images of dogs and cats and things that are not dogs or cats, an official might first train it with millions of different pictures of those types of animals, as well as objects that aren't dogs or cats. In doing so, the expert will also hold back a diverse subset of data that can then be presented to the algorithm down the line.

They can then run the model on that held-back evaluation set, compare its output against the ground truth, and ultimately determine failure rates: the cases where the model is unable to determine whether something is or is not one of the classes it's trying to identify.
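As a concrete illustration of that workflow, here's a minimal sketch; the predict function and the holdout pairs are stand-ins, not any specific DOD or Scale AI tooling:

```python
def failure_rate(predict, holdout):
    """predict: callable mapping an image to a label.
    holdout: (image, true_label) pairs the model never saw in training."""
    failures = sum(1 for image, truth in holdout if predict(image) != truth)
    return failures / len(holdout)

# Hypothetical usage with a trained dog/cat/other classifier:
# rate = failure_rate(classifier.predict, holdout_pairs)
# print(f"Failure rate on the holdout set: {rate:.1%}")
```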

Experts at Scale AI will adopt a similar approach for T&E with large language models, but because the models are generative in nature and the English language can be hard to evaluate, there isn't that same level of ground truth for these complex systems. For example, if prompted to supply five different responses, an LLM might be generally factually accurate in all five, yet contrasting sentence structures could change the meanings of each output.

So, part of the company's effort to develop the framework, methods and technology the CDAO can use to test and evaluate large language models will involve creating holdout datasets, in which DOD insiders write prompt-and-response pairs, adjudicate them through layers of review, and ensure that each response is as good as what would be expected from a human in the military.
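One way to make that fuzzier notion of ground truth operational is to score a model's answer against its closest adjudicated reference rather than a single right answer. A rough sketch, using plain string similarity from the standard library as a stand-in for real human or semantic scoring; the reference responses are hypothetical:

```python
from difflib import SequenceMatcher

def best_reference_score(candidate: str, references: list[str]) -> float:
    """Score a generated answer by its similarity to the closest
    adjudicated reference response."""
    return max(SequenceMatcher(None, candidate, ref).ratio()
               for ref in references)

references = [  # hypothetical adjudicated responses for one prompt
    "The after-action report identifies three supply shortfalls.",
    "Three supply shortfalls are noted in the after-action report.",
]
print(best_reference_score(
    "The report notes three supply shortfalls.", references))
```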

The entire process will be iterative in nature.

Once datasets that are germane to the DOD for world knowledge, truthfulness, and other topics are made and refined, the experts can then evaluate existing large language models against them.

Eventually, once they have these holdout datasets, experts will be able to run evaluations and establish model cards: short documents that supply details on the best contexts for using various machine learning models and information for measuring their performance.
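A model card, in other words, is structured metadata about a model. A minimal sketch of what such a record might hold -- the field names are illustrative, not the CDAO's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    best_use_contexts: list[str]    # where the model performs well
    known_failure_modes: list[str]  # where it starts to fail
    benchmark_scores: dict[str, float] = field(default_factory=dict)

card = ModelCard(
    name="example-llm-v1",
    best_use_contexts=["summarizing after-action reports"],
    known_failure_modes=["questions outside tested domains"],
    benchmark_scores={"truthfulness_holdout": 0.82},
)
print(card.name, card.benchmark_scores)
```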

Officials plan to automate this development as much as possible, so that as new models come in, there can be some baseline understanding of how they will perform, where they will perform best, and where they will probably start to fail.

Further in the process, the ultimate intent is for models to essentially send signals to the CDAO officials who engage with them if they start to waver from the domains they have been tested against.

"This work will enable the DOD to mature its T&E policies to address generative AI by measuring and assessing quantitative data via benchmarking and assessing qualitative feedback from users. The evaluation metrics will help identify generative AI models that are ready to support military applications with accurate and relevant results using DoD terminology and knowledge bases. The rigorous T&E process aims to enhance the robustness and resilience of AI systems in classified environments, enabling the adoption of LLM technology in secure environments," Scale AI's statement reads.

Beyond the CDAO, the company has also partnered with Meta, Microsoft, the U.S. Army, the Defense Innovation Unit, OpenAI, General Motors, Toyota Research Institute, Nvidia, and others.

"Testing and evaluating generative AI will help the DoD understand the strengths and limitations of the technology, so it can be deployed responsibly. Scale is honored to partner with the DoD on this framework," Alexandr Wang, Scale AI's founder and CEO, said in the statement.

Continue reading here:

Scale AI to set the Pentagon's path for testing and evaluating large language models - DefenseScoop

Posted in Ai

What is AI governance? – Cointelegraph

The landscape and importance of AI governance

AI governance encompasses the rules, principles and standards that ensure AI technologies are developed and used responsibly.

AI governance is a comprehensive term encompassing the definition, principles, guidelines and policies designed to steer the ethical creation and utilization of artificial intelligence (AI) technologies. This governance framework is crucial for addressing a wide array of concerns and challenges associated with AI, such as ethical decision-making, data privacy, bias in algorithms, and the broader impact of AI on society.

The concept of AI governance extends beyond mere technical aspects to include legal, social and ethical dimensions. It serves as a foundational structure for organizations and governments to ensure that AI systems are developed and deployed in beneficial ways that do not cause unintentional harm.

In essence, AI governance forms the backbone of responsible AI development and usage, providing a set of standards and norms that guide various stakeholders, including AI developers, policymakers and end-users. By establishing clear guidelines and ethical principles, AI governance aims to harmonize the rapid advancements in AI technology with the societal and ethical values of human communities.

AI governance adapts to organizational needs without fixed levels, employing frameworks like NIST and OECD for guidance.

AI governance doesn't follow universally standardized levels, as seen in fields like cybersecurity. Instead, it utilizes structured approaches and frameworks from various entities, allowing organizations to tailor these to their specific requirements.

Frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, the Organization for Economic Co-operation and Development (OECD) principles on artificial intelligence, and the European Commission's Ethics Guidelines for Trustworthy AI, are among the most utilized. They cover many topics, including transparency, accountability, fairness, privacy, security and safety, providing a solid foundation for governance practices.

The extent of governance adoption varies with the organization's size, the complexity of the AI systems it employs, and the regulatory landscape it operates within. Three main approaches to AI governance are:

Informal governance: The most basic form relies on an organization's core values and principles, with some informal processes in place, such as ethical review boards, but lacking a formal governance structure.

Ad hoc governance: A more structured approach than informal governance, involving the creation of specific policies and procedures in response to particular challenges. However, it may not be comprehensive or systematic.

Formal governance: The most comprehensive approach, entailing the development of an extensive AI governance framework that reflects the organization's values, aligns with legal requirements and includes detailed risk assessment and ethical oversight processes.

Illustrating AI governance through diverse examples like GDPR, the OECD AI principles and corporate ethics boards showcases the multifaceted approach to responsible AI use.

AI governance manifests through various policies, frameworks and practices aimed at the ethical deployment of AI technologies by organizations and governments. These instances highlight the application of AI governance across different scenarios:

The General Data Protection Regulation (GDPR) is a pivotal example of AI governance in safeguarding personal data and privacy. Although the GDPR isn't solely AI-focused, its regulations significantly impact AI applications, particularly those processing personal data within the European Union, emphasizing the need for transparency and data protection.

The OECD AI principles, endorsed by over 40 countries, underscore the commitment to trustworthy AI. These principles advocate for AI systems to be transparent, fair and accountable, guiding international efforts toward responsible AI development and usage.

Corporate AI Ethics Boards represent an organizational approach to AI governance. Numerous corporations have instituted ethics boards to supervise AI projects, ensuring they conform to ethical norms and societal expectations. For instance, IBM's AI Ethics Council reviews AI offerings to ensure they comply with the company's AI ethics, involving a diverse team from various disciplines to provide comprehensive oversight.

Stakeholder engagement is essential for developing inclusive, effective AI governance frameworks that reflect a broad spectrum of perspectives.

A wide range of stakeholders, including governmental entities, international organizations, business associations and civil society organizations, share responsibility for AI governance. Because different regions and nations have different legal, cultural and political contexts, their oversight structures can also differ significantly.

The complexity of AI governance requires active participation from all sectors of society, including government, industry, academia and civil society. Engaging a diverse range of stakeholders ensures that multiple perspectives are considered when developing AI governance frameworks, leading to more robust and inclusive policies.

This engagement also fosters a sense of shared responsibility for the ethical development and use of AI technologies. By involving stakeholders in the governance process, policymakers can leverage a wide range of expertise and insights, ensuring that AI governance frameworks are well-informed, adaptable and capable of addressing the multifaceted challenges and opportunities presented by AI.

For instance, the exponential growth of data collection and processing raises significant privacy concerns, necessitating stringent governance frameworks to protect individuals' personal information. This involves compliance with global data protection regulations like GDPR and active participation by stakeholders in implementing advanced data security technologies to prevent unauthorized access and data breaches.

The future of AI governance will be shaped by advancements in technology, evolving societal values and the need for international collaboration.

As AI technologies evolve, so will the frameworks governing them. The future of AI governance is likely to see a greater emphasis on sustainable and human-centered AI practices.

Sustainable AI focuses on developing environmentally friendly and economically viable technologies over the long term. Human-centered AI prioritizes systems that enhance human capabilities and well-being, ensuring that AI serves as a tool for augmenting human potential rather than replacing it.

Moreover, the global nature of AI technologies necessitates international collaboration in AI governance. This involves harmonizing regulatory frameworks across borders, fostering global standards for AI ethics, and ensuring that AI technologies can be safely deployed across different cultural and regulatory environments. Global cooperation is key to addressing challenges, such as cross-border data flow and ensuring that AI benefits are shared equitably worldwide.

Read more here:

What is AI governance? - Cointelegraph

Posted in Ai

HOUSE LAUNCHES BIPARTISAN TASK FORCE ON ARTIFICIAL INTELLIGENCE – Congressman Ted Lieu

WASHINGTON -- Speaker Mike Johnson and Democratic Leader Hakeem Jeffries announced the establishment of a bipartisan Task Force on Artificial Intelligence (AI) to explore how Congress can ensure America continues to lead the world in AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats.

Speaker Johnson and Leader Jeffries have each appointed twelve members to the Task Force who represent key committees of jurisdiction; the Task Force will be jointly led by Chair Jay Obernolte (CA-23) and Co-Chair Ted Lieu (CA-36). The Task Force will seek to produce a comprehensive report that will include guiding principles, forward-looking recommendations and bipartisan policy proposals developed in consultation with committees of jurisdiction.

"Because advancements in artificial intelligence have the potential to rapidly transform our economy and our society, it is important for Congress to work in a bipartisan manner to understand and plan for both the promises and the complexities of this transformative technology," said Speaker Mike Johnson. "I am happy to announce with Leader Jeffries this new Bipartisan Task Force on Artificial Intelligence to ensure America continues leading in this strategic arena.

Led by Rep. Jay Obernolte (R-Ca.) and Rep. Ted Lieu (D-Ca.), the task force will bring together a bipartisan group of Members who have AI expertise and represent the relevant committees of jurisdiction. As we look to the future, Congress must continue to encourage innovation and maintain our country's competitive edge, protect our national security, and carefully consider what guardrails may be needed to ensure the development of safe and trustworthy technology."

"Congress has a responsibility to facilitate the promising breakthroughs that artificial intelligence can bring to fruition and ensure that everyday Americans benefit from these advancements in an equitable manner," said Democratic Leader Hakeem Jeffries. "That is why I am pleased to join Speaker Johnson in announcing the new Bipartisan Task Force on Artificial Intelligence, led by Rep. Ted Lieu and Rep. Jay Obernolte.

The rise of artificial intelligence also presents a unique set of challenges and certain guardrails must be put in place to protect the American people. Congress needs to work in a bipartisan way to ensure that America continues to lead in this emerging space, while also preventing bad actors from exploiting this evolving technology. The Members appointed to this Task Force bring a wide range of experience and expertise across the committees of jurisdiction and I look forward to working with them to tackle these issues in a bipartisan way.

"It is an honor to be entrusted by Speaker Johnson to serve as Chairman of the House Task Force on Artificial Intelligence," said Chair Jay Obernolte (CA-23). "As new innovations in AI continue to emerge, Congress and our partners in federal government must keep up. House Republicans and Democrats will work together to create a comprehensive report detailing the regulatory standards and congressional actions needed to both protect consumers and foster continued investment and innovation in AI.

The United States has led the world in the development of advanced AI, and we must work to ensure that AI realizes its tremendous potential to improve the lives of people across our country. I look forward to working with Co-Chair Ted Lieu and the rest of the Task Force on this critical bipartisan effort.

"Thank you to Leader Jeffries and Speaker Johnson for establishing this bipartisan House Task Force on Artificial Intelligence. AI has the capability of changing our lives as we know it. The question is how to ensure AI benefits society instead of harming us. As a recovering Computer Science major, I know this will not be an easy or quick or one-time task, but I believe Congress has an essential role to play in the future of AI. I have been heartened to see so many Members of Congress of all political persuasions agree," said Co-Chair Ted Lieu (CA-36).

I am honored to join Congressman Jay Obernolte in leading this Task Force on AI, and honored to work with the bipartisan Members on the Task Force. I look forward to engaging with Members of both the Democratic Caucus and Republican Conference, as well as the Senate, to find meaningful, bipartisan solutions with regards to AI.

Membership

Rep. Ted Lieu (CA-36), Co-Chair
Rep. Anna Eshoo (CA-16)
Rep. Yvette Clarke (NY-09)
Rep. Bill Foster (IL-11)
Rep. Suzanne Bonamici (OR-01)
Rep. Ami Bera (CA-06)
Rep. Don Beyer (VA-08)
Rep. Alexandria Ocasio-Cortez (NY-14)
Rep. Haley Stevens (MI-11)
Rep. Sara Jacobs (CA-51)
Rep. Valerie Foushee (NC-04)
Rep. Brittany Pettersen (CO-07)

Rep. Jay Obernolte (CA-23), Chair
Rep. Darrell Issa (CA-48)
Rep. French Hill (AR-02)
Rep. Michael Cloud (TX-27)
Rep. Neal Dunn (FL-02)
Rep. Ben Cline (VA-06)
Rep. Kat Cammack (FL-03)
Rep. Scott Franklin (FL-18)
Rep. Michelle Steel (CA-45)
Rep. Eric Burlison (MO-07)
Rep. Laurel Lee (FL-15)
Rep. Rich McCormick (GA-06)

###

Go here to see the original:

HOUSE LAUNCHES BIPARTISAN TASK FORCE ON ARTIFICIAL INTELLIGENCE - Congressman Ted Lieu

Posted in Ai

Nvidia’s Q4 Earnings Blow Past Expectations as Company Benefits From AI Boom – Investopedia

Nvidia Corp. (NVDA) posted revenue and earnings for its fiscal fourth quarter that blew past market expectations, as the company continues to benefit from booming demand for equipment and services to support artificial intelligence (AI).

Shares of the company, which had fallen for four consecutive sessions ahead of Wednesday's eagerly anticipated earnings release, gained 9.1% to $735.94 in after-hours trading.

Nvidia said that revenue jumped to $22.10 billion in the quarter ending Jan. 28, compared with $6.05 billion a year earlier. Net income increased to $12.29 billion from $1.41 billion, while diluted earnings per share came in at $4.93, up from 57 cents a year earlier. Each of those numbers handily topped analysts' expectations.
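For a sense of scale, here are the year-over-year multiples implied by those reported figures -- a quick arithmetic check, not additional reporting:

```python
# Year-over-year multiples from the reported figures
# (dollars in billions, except per-share amounts).
revenue = 22.10 / 6.05     # ~3.7x
net_income = 12.29 / 1.41  # ~8.7x
eps = 4.93 / 0.57          # ~8.6x
print(f"Revenue {revenue:.1f}x, net income {net_income:.1f}x, EPS {eps:.1f}x")
```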

Revenue for Nvidia's closely watched data-center business, which offers cloud and AI services, jumped to $18.40 billion, a five-fold increase from the year-ago period and also well above expectations.

"Accelerated computing and generative AI have hit the tipping point. Demand is surging worldwide across companies, industries and nations," Nvidia CEO Jensen Huang said in a press release, noting that the data center business has "increasingly diverse drivers."

"Vertical industriesled by auto, financial services and healthcareare now at a multibillion-dollar level," Huang added.

Nvidia's gross margin for the fourth quarter was 76%, up from 63.3% in the year-ago period. Nvidia's chief financial officer, Colette Kress, said the improvement was a function of the growth in the data center business, which was primarily driven by Nvidia's Hopper GPU computing platform.

Looking ahead, Nvidia says that fiscal first-quarter revenue is expected to come in at $24 billion, plus or minus 2%, which is above the consensus view from analysts. The company expects gross margin in the current quarter to rise slightly from the fourth-quarter figure.

Optimism around artificial intelligence helped push Nvidia's stock, which has more than tripled in the past year, to an all-time high last week. In the days leading up to the earnings release, analysts had raised their expectations even as investors expressed some concerns that the quarterly report might fall short of expectations.

The strong earnings report not only lifted Nvidia in extended trading but gave a boost to other chipmakers that have been riding the AI wave. Shares of Advanced Micro Devices (AMD), ARM Holdings (ARM), Broadcom (AVGO), Taiwan Semiconductor (TSM) and Super Micro Computer (SMCI) were all moving higher late Wednesday.

UPDATE: This article has been updated after initial publication to add comments from company executives, additional details from the earnings report and updated share prices.

Go here to read the rest:

Nvidia's Q4 Earnings Blow Past Expectations as Company Benefits From AI Boom - Investopedia

Posted in Ai

One month with Microsoft’s AI vision of the future: Copilot Pro – The Verge

Microsoft's Copilot Pro launched last month as a $20 monthly subscription that provides access to AI-powered features inside some Office apps, alongside priority access to the latest OpenAI models and improved image generation.

I've been testing Copilot Pro over the past month to see if it's worth the $20 subscription for my daily needs and just how good or bad the AI image and text generation is across Office apps like Word, Excel, and PowerPoint. Some of the Copilot Pro features are a little disappointing right now, whereas others are truly useful improvements that I'm not sure I want to live without.

Let's dig into everything you get with Copilot Pro right now.

One of the main draws of subscribing to Copilot Pro is an improved version of Designer, Microsoft's image creation tool. Designer uses OpenAI's DALL-E 3 model to generate content, and the paid Copilot Pro version creates widescreen images with far more detail than the free version.

I've been using Designer to experiment with images, and I've found it particularly impressive when you feed it as much detail as possible. Asking Designer for an image of a dachshund sitting by a window staring at a slice of bacon generates some good examples, but you can get Designer to do much more with some additional prompting. Adding more descriptive language to generate a hyper-real painting with natural lighting, a medium shot, and shallow depth of field will greatly improve image results.

As you can see in the two examples below, Designer gets the natural lighting correct, with some depth of field around the bacon. Unfortunately, there are multiple slices of bacon here instead of just one, and they're giant pieces of bacon.

Like most things involving AI, the Designer feature isn't perfect. I generated another separate image of a dog staring at bacon, and a giant piece of bacon was randomly inserted. In fact, I'd say most times only one or two of the four images that are produced are usable. DALL-E 3 still struggles with text, too, particularly if you ask Designer to add labels or signs that have text written on them.

It did a good job with an illustrated image of a UPS delivery man from 1910 in the style of early Japanese cartoons, though, even adding in the UPS logo, if a slightly wonky one.
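The prompt-enrichment approach described above amounts to layering composition and style detail onto a plain subject. A tiny sketch of the idea -- the modifiers are illustrative, and generate() stands in for whatever image tool you're using:

```python
# Layer style and composition detail onto a plain subject.
base = "a dachshund sitting by a window staring at a slice of bacon"
modifiers = [
    "hyper-real painting",
    "natural lighting",
    "medium shot",
    "shallow depth of field",
]
enriched = f"{base}, {', '.join(modifiers)}"
print(enriched)
# generate(enriched)  # hypothetical call to the image generator
```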

Copilot Pro isn't all about image generation, though. The subscription unlocks the AI capabilities inside Office apps. Inside Word, you can use Copilot to generate text, which can be helpful for getting an outline of a document started or refining paragraphs.

If you have numerical data, you can also get Copilot to visualize this data as a graph or table, which is particularly useful for making text-heavy documents a little easier to read. If you highlight text, a little Copilot logo appears to nudge you into selecting it to rewrite that text or visualize it. If you select an entire paragraph, Copilot will try to rewrite it with different options you can cycle through and pick.

Like the image generation, the paragraph rewriting can be a little hit-and-miss, sometimes changing the meaning of sentences by swapping out words. Overall, I didn't find that it improved my writing. For someone who doesn't write regularly, it might be a lot more useful.

Copilot in Outlook has been super useful to me personally. I use it every day to check summaries of emails, which helpfully appear at the top of each message. This might even tempt me to buy Copilot Pro just for this feature, because it saves me so much time when I'm planning a project with multiple people.

It's also really helpful when you have a long-running email thread and just want a quick summary of all the key information. You can also use Copilot in Outlook to generate emails or craft replies. Much like Word, there's a rewrite tool here that lets you write a draft email that's then analyzed to produce suggestions for improving the tone or clarity of an email.

Copilot in PowerPoint is equally useful if you're not used to creating presentations. You can ask it to generate slides in a particular style, and you'll get an entire deck back within seconds. Designer is part of this feature, so you can dig into each individual slide and modify the images or text.

As someone who hates creating presentations, this is something I will absolutely use in the future. It certainly beats the PowerPoint templates you can find online. I did run into some PowerPoint slide generation issues, though, particularly where Copilot would sit there saying, "Still working on it," and never finish generating the slides.

Copilot in Excel seems to be the most limited part of the Copilot Pro experience right now. You need your data neatly arranged in a table; otherwise, Copilot will want to convert it. Once you have data that works with Copilot, you can create visualizations, use data insights to create pivot tables, or even get formula suggestions. Copilot for Excel is still in preview, so I'd expect we'll see even more functionality here over time.

The final example of Copilot inside Office apps is OneNote. Much like Word, you can draft notes or plans here and easily rewrite text. Copilot also offers summaries of your notes, which can be particularly amusing if you attempt to summarize shorthand notes or incomplete notes that only make sense to your brain.

Microsoft is also rolling out a number of GPTs for fitness, travel, and cooking. These are essentially individual assistants inside Copilot that can help you find recipes, plan out a vacation itinerary, or create a personalized workout plan. Copilot Pro subscribers will soon be able to build their own custom GPTs around specific topics, too.

Overall, I think Copilot Pro is a good start for Microsoft's consumer AI efforts, but I'm not sure I'd pay $20 a month just yet. The image generation improvements are solid and might be worth $20 a month for some.

Email summaries in Outlook tempt me into the subscription, but the text generation features aren't really all that unique in the Office apps. I feel like you can get just as good results using the free version of Copilot or even ChatGPT, but you'll have to take the manual (and less expensive) route of copying and pasting the results into a document.

The consumer Copilot Pro isn't as fully featured as the commercial version just yet, so I'd expect we'll see a lot of improvements over the coming months. Microsoft is showing no sign of slowing down with its AI efforts, and the company is set to detail more of its AI plans at Build in May.

See more here:

One month with Microsoft's AI vision of the future: Copilot Pro - The Verge

Posted in Ai

Some of the world’s biggest cloud computing firms want to make millions of servers last longer doing so will save … – Yahoo! Voices

Some of the world's largest cloud computing firms, including Alphabet, Amazon, and Cloudflare, have found a way to save billions by extending the lifespan of their servers - a move expected to significantly reduce depreciation costs, increase net income, and contribute to their bottom lines.

Alphabet, Google's parent company, started this trend in 2021 by extending the lifespan of its servers and networking equipment. By 2023, the company decided that both types of hardware could last six years before needing to be replaced. This decision led to the company saving $3.9 billion in depreciation and increasing net income by $3.0 billion last year.

These savings will go towards Alphabet's investment in technical infrastructure, particularly servers and data centers, to support the exponential growth of AI-powered services.

Like Alphabet, Amazon also recently completed a "useful life study" for its servers, deciding to extend their working life from five to six years. This change is predicted to contribute $900 million to net income in Q1 of 2024 alone.

Cloudflare followed a similar path, extending the useful life of its service and network equipment from four to five years starting in 2024. This decision is expected to result in a modest impact of $20 million.
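The accounting behind these numbers is straight-line depreciation: a server's cost is expensed evenly over its useful life, so each added year shrinks the annual charge. A sketch with hypothetical figures (none of the companies' actual fleet costs are public at this granularity):

```python
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: cost spread evenly over useful life."""
    return cost / useful_life_years

fleet_cost = 10_000_000_000  # hypothetical: a $10B server fleet
saving = annual_depreciation(fleet_cost, 5) - annual_depreciation(fleet_cost, 6)
print(f"Extending life from 5 to 6 years cuts the annual expense by ${saving:,.0f}")
# -> roughly $333 million less depreciation expense per year
```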

Tech behemoths are facing increasing costs from investing in AI and technical infrastructure, so any savings that can be made elsewhere are vital. The move to extend the life of servers isn't just a cost-cutting exercise, however; it also reflects continuous advancements in hardware technology and improvements in data center designs.

Continue reading here:

Some of the world's biggest cloud computing firms want to make millions of servers last longer -- doing so will save ... - Yahoo! Voices