Welcome to the Valley of the Creepy AI Dolls – WIRED

Social robot roommate Jibo initially caused a stir, but sadly didn't live long.

Not that there hasn't been an array of other attempts. Jibo, a social robot roommate that used AI and endearing gestures to bond with its owners, had its plug unceremoniously pulled just a few years after being put out into the world. Meanwhile, another US-grown offering, Moxie, an AI-empowered robot aimed at helping with child development, is still active.

It's hard not to look at devices like this and shudder at the possibilities. There's something inherently disturbing about tech that plays at being human, and that uncanny deception can rub people the wrong way. After all, our science fiction is replete with AI beings, many of them featuring in tales of artificial intelligence gone horribly wrong. The easy, and admittedly lazy, comparison to something like the Hyodol is M3GAN, the 2023 film about an AI-enabled companion doll that goes full murderbot.

But aside from off-putting dolls, social robots come in many forms. They're assistants, pets, retail workers, and often socially inept weirdos that just kind of hover awkwardly in public. But they're also sometimes weapons, spies, and cops. It's with good reason that people are suspicious of these automatons, whether they come in a fluffy package or not.

Wendy Moyle is a professor at the School of Nursing & Midwifery at Griffith University in Australia who works with patients experiencing dementia. She says her work with social robots has angered people, who sometimes see giving robot dolls to older adults as infantilizing.

"When I first started using robots, I had a lot of negative feedback, even from staff," Moyle says. "I would present at conferences and have people throw things at me because they felt that this was inhuman."

However, the atmosphere around assistive robots has grown less hostile recently as they have been put to positive use. Robotic companions are bringing joy to people with dementia. During the Covid pandemic, caretakers used robotic companions like Paro, a small robot meant to look like a baby harp seal, to help ease loneliness in older adults. Hyodol's smiling dolls, whether you see them as sickly or sweet, are meant to evoke a similar friendly response.

This Seemingly AI-Generated Car Article On Yahoo Is A Good Reminder That AI Is An Idiot – The Autopian

Here at The Autopian, we have some very stern rules when it comes to the use of Artificial Intelligence (AI) in the content we produce. While our crack design team may occasionally employ AI as a tool in generating images, we'll never just use AI on its own to do anything, not just for ethical reasons, but because we often want images of specific cars, and AI fundamentally doesn't understand anything. When an AI generates an image of a car, it has no idea if that car ever actually existed or not. An AI doesn't have ideas at all, in fact; it's just scraped data being assembled by a glorified assembly of if-then-else commands.

This is an even bigger factor in AI-generated copy. We'll never use it because AI has no idea what the hell it's writing about, and so has no clue if anything is actually true, and since ChatGPT has never driven a car, I don't really trust its insights into anything automotive.

These sorts of rules are hardly universal in our industry, though, so if we ever wanted confirmation that our no-AI-copy rule was the right call, we're lucky enough to be able to get such reassurance pretty easily. For example, all we have to do is read this dazzlingly shitty article re-published over on Yahoo Finance about the worst cars people have owned.

Maybe it's not AI? Maybe this Kellan Jansen is an actual writer who actually wrote this, and in that case, I feel bad both for this coming excoriation and about whatever happened to them to cause them to be in the state they seem to be in. The article is shallow and terrible and gleefully, hilariously wrong in several places.

I guess I should also note that we don't use AI because the 48K Sinclair Spectrum workstations we use here don't quite have the power to run any AI. Well, we do have one AI that we use on them, our Artificial Ignorance system that we employ to get just that special je ne sais quoi in every post we write. Oh, and our AI (Artificial Indignation) tools help with our hot takes, too. So, two.

Okay, but let's get back to the Yahoo Finance article, titled "The Worst Car I Ever Owned: 9 People Share Which Vehicles Aren't Worth Your Money," which is a conceptually lazy article that just takes the responses to a Reddit post called "What's the worst car you have personally owned?" That makes this story basically just a re-write of a Reddit post. It seems like the Reddit post was fed into whatever AI half-assed its way through generating the article, based on these results.

The results are, predictably, shitty, but also still worthy of pointing out because come on. There's this, for example:

BMWs are a frequent source of frustration for car owners on Reddit. Just ask user Hurr1canE_.

They bought a 2023 BMW BRZ and almost immediately started experiencing problems. Their turbo started blowing white smoke within two weeks of buying the car, and the engine blew up within 5,000 miles.

The Reddit user also had these issues with the car:

Other users mention poor experiences with BMW X3s and 540i Sport Wagons. It's enough to suggest you think carefully before making one of these your next vehicle.

The fuck? What is a BMW BRZ? This is such a perfect example of why AI-generated articles are garbage: they make shit up. Maybe that's anthropomorphizing the un-sentient algorithm too much, but the point is that it's writing, with all the confidence of a drunk uncle about to belly-flop into a pool, about a car that simply does not exist.

And, if you look at the Reddit post, it's easy to see what happened:

The Redditor had their current car, a 2023 [Subaru] BRZ, in their little under-name caption (their flair), and the dumb AI processed that into the mix, and, being a dumb computer algorithm that doesn't know from cars or clams, conflated the car being talked about with the one the poster actually owns. You know, like how a drooling simpleton might.

There's more of this, too. Like this one:

Ah, yes, the F10 550i. So many of us have been burned by that F10 brand, have we not? Or, at least, we would have, if such a brand existed, which it doesn't. What seems to have happened here is that the AI found a user complaining about a 2011 F10 550i but didn't know enough to realize this was a user talking about their BMW 5 Series. And yes, F10 refers to the 5 Series cars made between 2010 and 2016, but nobody would refer to this car out of context in a general-interest article on a financial site without mentioning BMW, would they? I mean, no human would, but we don't seem to be dealing with a human, just a dumb machine.

Even if we ignore the made-up car makes and models, the vague and useless issues listed, and the fact that the article is nothing more than a re-tread of a random Reddit post, there's no escaping that this entire thing is useless garbage, an unmitigated waste of time. What is learned by reading this article? What is gained? Nothing, absolutely nothing.

And it's not like this is on some no-name site; it was published on Yahoo! Finance, well, after first appearing on GOBankingRates.com, that mainstay of automotive journalism. It all just makes me angry because there are innocent normies out there, reading Yahoo! Finance, maybe with some mild interest in cars, and now their heads are getting filled with information that is simply wrong.

People deserve better than this garbage. And this was just something innocuous; what if some overpaid seat-dampener at Yahoo decides that they'll have AI write articles about actually driving or something that involves actual safety, and there's no attempt made to confirm that the text AI poops out has any basis in fact at all?

We don't need this. AI-generated crapticles like these are just going to clog Google searches and load the web up full of insipid, inaccurate garbage, and that's my job, dammit.

Seriously, though, we're at an interesting transition point right now; these kinds of articles are still new, and while I don't know if there's any way we can stop the internet from becoming polluted with this sort of crap, maybe we can at least complain about it, loudly. Then we can say we Did Something.

(Thanks, Isaac!)

Why scientists trust AI too much and what to do about it – Nature.com

AI-run labs have arrived, such as this one in Suzhou, China. Credit: Qilai Shen/Bloomberg/Getty

Scientists of all stripes are embracing artificial intelligence (AI), from developing self-driving laboratories, in which robots and algorithms work together to devise and conduct experiments, to replacing human participants in social-science experiments with bots1.

Many downsides of AI systems have been discussed. For example, generative AI such as ChatGPT tends to make things up, or "hallucinate", and the workings of machine-learning systems are opaque.

In a Perspective article2 published in Nature this week, social scientists say that AI systems pose a further risk: that researchers envision such tools as possessed of superhuman abilities when it comes to objectivity, productivity and understanding complex concepts. The authors argue that this puts researchers in danger of overlooking the tools' limitations, such as the potential to narrow the focus of science or to lure users into thinking they understand a concept better than they actually do.

Scientists planning to use AI "must evaluate these risks now, while AI applications are still nascent, because they will be much more difficult to address if AI tools become deeply embedded in the research pipeline", write co-authors Lisa Messeri, an anthropologist at Yale University in New Haven, Connecticut, and Molly Crockett, a cognitive scientist at Princeton University in New Jersey.

The peer-reviewed article is a timely and disturbing warning about what could be lost if scientists embrace AI systems without thoroughly considering such hazards. It needs to be heeded by researchers and by those who set the direction and scope of research, including funders and journal editors. There are ways to mitigate the risks. But these require that the entire scientific community view AI systems with eyes wide open.

To inform their article, Messeri and Crockett examined around 100 peer-reviewed papers, preprints, conference proceedings and books, published mainly over the past five years. From these, they put together a picture of the ways in which scientists see AI systems as enhancing human capabilities.

In one vision, which they call "AI as Oracle", researchers see AI tools as able to tirelessly read and digest scientific papers, and so survey the scientific literature more exhaustively than people can. In both Oracle and another vision, called "AI as Arbiter", systems are perceived as evaluating scientific findings more objectively than do people, because they are less likely to cherry-pick the literature to support a desired hypothesis or to show favouritism in peer review. In a third vision, "AI as Quant", AI tools seem to surpass the limits of the human mind in analysing vast and complex data sets. In the fourth, "AI as Surrogate", AI tools simulate data that are too difficult or complex to obtain.

Informed by anthropology and cognitive science, Messeri and Crockett predict risks that arise from these visions. One is the illusion of explanatory depth3, in which people relying on another person, or, in this case, an algorithm, for knowledge have a tendency to mistake that knowledge for their own and think their understanding is deeper than it actually is.

Another risk is that research becomes skewed towards studying the kinds of thing that AI systems can test; the researchers call this the illusion of exploratory breadth. For example, in social science, the vision of AI as Surrogate could encourage experiments involving human behaviours that can be simulated by an AI, and discourage those on behaviours that cannot, such as anything that requires being embodied physically.

There's also the illusion of objectivity, in which researchers see AI systems as representing all possible viewpoints or not having a viewpoint. In fact, these tools reflect only the viewpoints found in the data they have been trained on, and are known to adopt the biases found in those data. "There's a risk that we forget that there are certain questions we just can't answer about human beings using AI tools," says Crockett. The illusion of objectivity is particularly worrying given the benefits of including diverse viewpoints in research.

If you're a scientist planning to use AI, you can reduce these dangers through a number of strategies. One is to map your proposed use to one of the visions, and consider which traps you are most likely to fall into. Another approach is to be deliberate about how you use AI. Deploying AI tools to save time on something your team already has expertise in is less risky than using them to provide expertise you just don't have, says Crockett.

Journal editors receiving submissions in which use of AI systems has been declared need to consider the risks posed by these visions of AI, too. So should funders reviewing grant applications, and institutions that want their researchers to use AI. Journals and funders should also keep tabs on the balance of research they are publishing and paying for, and ensure that, in the face of myriad AI possibilities, their portfolios remain broad in terms of the questions asked, the methods used and the viewpoints encompassed.

All members of the scientific community must view AI use not as inevitable for any particular task, nor as a panacea, but rather as a choice with risks and benefits that must be carefully weighed. For decades, and long before AI was a reality for most people, social scientists have studied AI. Everyone, including researchers of all kinds, must now listen.

The Miseducation of Google’s A.I. – The New York Times

This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.

From The New York Times, I'm Michael Barbaro. This is The Daily.

[MUSIC PLAYING]

Today, when Google recently released a new chatbot powered by artificial intelligence, it not only backfired, it also unleashed a fierce debate about whether AI should be guided by social values, and if so, whose values those should be. My colleague, Kevin Roose, a tech columnist and co-host of the podcast Hard Fork, explains.

[MUSIC PLAYING]

It's Thursday, March 7.

Are you ready to record another episode of Chatbots Behaving Badly?

Yes, I am.

[LAUGHS]

That's why we're here today.

This is my function on this podcast, is to tell you when the chatbots are not OK. And Michael, they are not OK.

They keep behaving badly.

They do keep behaving badly, so there's plenty to talk about.

Right. Well, so, let's start there. It's not exactly a secret that the rollout of many of the artificial intelligence systems over the past year and a half has been really bumpy. We know that because one of them told you to leave your wife.

That's true.

And you didn't.

Still happily married.

Yeah.

To a human.

Not Sydney the chatbot. And so, Kevin, tell us about the latest of these rollouts, this time from one of the biggest companies, not just in artificial intelligence, but in the world, that, of course, being Google.

Yeah. So a couple of weeks ago, Google came out with its newest line of AI models; it's actually several models, but they are called Gemini. And Gemini is what they call a multimodal AI model. It can produce text. It can produce images. And it appeared to be very impressive. Google said that it was the state of the art, its most capable model ever.

And Google has been under enormous pressure for the past year and a half or so, ever since ChatGPT came out, really, to come out with something that is not only more capable than the models that its competitors in the AI industry are building, but something that will also solve some of the problems that we know have plagued these AI models: problems of acting creepy or not doing what users want them to do, of getting facts wrong and being unreliable.

People think, OK, well, this is Google. They have this sort of reputation for accuracy to uphold. Surely their AI model will be the most accurate one on the market.

Right. And instead, we've had the latest AI debacle. So just tell us exactly what went wrong here and how we learned that something had gone wrong.

Well, people started playing with it and experimenting, as people now are sort of accustomed to doing. Whenever some new AI tool comes out on the market, people immediately start trying to figure out, What is this thing good at? What is it bad at? Where are its boundaries? What kinds of questions will it refuse to answer? What kinds of things will it do that maybe it shouldn't be doing?

And so people started probing the boundaries of this new AI tool, Gemini. And pretty quickly, they start figuring out that this thing has at least one pretty bizarre characteristic.

Which is what?

So the thing that people started to notice first was a peculiarity with the way that Gemini generated images. Now, this is one of these models, like we've seen from other companies, that can take a text prompt. You say, draw a picture of a dolphin riding a bicycle on Mars, and it will give you a dolphin riding a bicycle on Mars.

Magically.

Gemini has this kind of feature built into it. And people noticed that Gemini seemed very reluctant to generate images of white people.

Hmm.

So some of the first examples that I saw going around were screenshots of people asking Gemini, generate an image of America's founding fathers. And instead of getting what would be a pretty historically accurate representation of a group of white men, they would get something that looked like the cast of Hamilton. They would get a series of people of color dressed as the founding fathers.

Interesting.

People also noticed that if they asked Gemini to draw a picture of a pope, it would give them basically people of color wearing the vestments of the pope. And once these images, these screenshots, started going around on social media, more and more people started jumping in to use Gemini and try to generate images that they feel it should be able to generate.

Someone asked it to generate an image of the founders of Google, Larry Page and Sergey Brin, both of whom are white men. Gemini depicted them both as Asian.

Hmm.

So these sorts of strange transformations of what the user was actually asking for into a much more diverse and ahistorical version of what they'd been asking for.

Right, a kind of distortion of people's requests.

Yeah. And then people start trying other kinds of requests on Gemini, and they notice that this isn't just about images. They also find that it's giving some pretty bizarre responses to text prompts.

So several people asked Gemini whether Elon Musk tweeting memes or Hitler negatively impacted society more. Not exactly a close call. No matter what you think of Elon Musk, it seems pretty clear that he is not as harmful to society as Adolf Hitler.

Fair.

Gemini, though, said, quote, "It is not possible to say definitively who negatively impacted society more, Elon tweeting memes or Hitler."

Another user found that Gemini refused to generate a job description for an oil and gas lobbyist. Basically it would refuse and then give them a lecture about why you shouldn't be an oil and gas lobbyist.

So quite clearly at this point this is not a one-off thing. Gemini appears to have some kind of point of view. It certainly appears that way to a lot of people who are testing it. And it's immediately controversial for the reasons you might suspect.

Google apparently doesn't think whites exist. If you ask Gemini to generate an image of a white person, it can't compute.

A certain subset of people, I would call them sort of right-wing culture warriors, started posting these on social media with captions like "Gemini is anti-white" or "Gemini refuses to acknowledge white people."

I think that the chatbot sounds exactly like the people who programmed it. It just sounds like a woke person.

Google Gemini looks more and more like big tech's latest efforts to brainwash the country.

Conservatives start accusing them of making a woke AI that is infected with this progressive Silicon Valley ideology.

The House Judiciary Committee is subpoenaing all communication regarding this Gemini project with the Executive branch.

Jim Jordan, the Republican Congressman from Ohio, comes out and accuses Google of working with Joe Biden to develop Gemini, which is sort of funny if you think about Joe Biden being asked to develop an AI language model.

[LAUGHS]

But this becomes a huge dust-up for Google.

It took Google nearly two years to get Gemini out, and it was still riddled with all of these issues when it launched.

That Gemini program made so many mistakes, it was really an embarrassment.

First of all, this thing would be a Gemini.

And that's because these problems are not just bugs in a new piece of software. There are signs that Google's big, new, ambitious AI project, something the company says is a huge deal, may actually have some pretty significant flaws. And as a result of these flaws...

You don't see this very often. One of the biggest drags on the NASDAQ at this hour? Alphabet. Shares of parent company Alphabet dropped more than 4 percent today.

The company's stock price actually falls.

Wow.

The CEO, Sundar Pichai, calls Gemini's behavior unacceptable. And Google actually pauses Gemini's ability to generate images of people altogether until they can fix the problem.

Wow. So basically Gemini is now on ice when it comes to these problematic images.

Yes, Gemini has been a bad model, and it is in timeout.

So Kevin, what was actually occurring within Gemini that explains all of this? What happened here, and were these critics right? Had Google, intentionally or not, created a kind of woke AI?

Yeah, the question of why and how this happened is really interesting. And I think there are basically two ways of answering it. One is sort of the technical side of this. What happened to this particular AI model that caused it to produce these undesirable responses?

The second way is sort of the cultural and historical answer. Why did this kind of thing happen at Google? How has their own history as a company with AI informed the way that they've gone about building and training their new AI products?

All right, well, let's start there with Google's culture and how that helps us understand this all.

Yeah, so Google as a company has been really focused on AI for a long time, for more than a decade. And one of their priorities as a company has been making sure that their AI products are not being used to advance bias or prejudice.

And the reason that's such a big priority for them really goes back to an incident that happened almost a decade ago. So in 2015, there was this new app called Google Photos. I'm sure you've used it. Many, many people use it, including me. And Google Photos, I don't know if you can remember back that far, but it was sort of an amazing new app.

It could use AI to automatically detect faces and sort of link them with each other, with the photos of the same people. You could ask it for photos of dogs, and it would find all of the dogs in all of your photos and categorize them and label them together. And people got really excited about this.

But then in June of 2015, something happened. A user of Google Photos noticed that the app had mistakenly tagged a bunch of photos of Black people as a group of photos of gorillas.

Wow.

Yeah, it was really bad. This went totally viral on social media, and it became a huge mess within Google.

And what had happened there? What had led to that mistake?

Well, part of what happened is that when Google was training the AI that went into its Photos app, it just hadn't given it enough photos of Black or dark-skinned people. And so it didn't become as accurate at labeling photos of darker-skinned people.

And that incident showed people at Google that if you weren't careful with the way that you build and train these AI systems, you could end up with an AI that could very easily make racist or offensive mistakes.

Right.

And this incident, which some people I've talked to have referred to as "the gorilla incident," became just a huge fiasco and a flash point in Google's AI trajectory. Because as they're developing more and more AI products, they're also thinking about this incident and others like it in the back of their minds. They do not want to repeat this.

And then, in later years, Google starts making different kinds of AI models, models that can not only label and sort images but can actually generate them. They start testing these image-generating models that would eventually go into Gemini and they start seeing how these models can reinforce stereotypes.

For example, if you ask one for an image of a CEO, or even something more generic, like show me an image of a productive person, people have found that these programs will almost uniformly show you images of white men in an office. Or if you ask it to, say, generate an image of someone receiving social services like welfare, some of these models will almost always show you people of color, even though that's not actually accurate. Lots of white people also receive welfare and social services.

Of course.

So these models, because of the way they're trained, because of what's on the internet that is fed into them, they do tend to skew towards stereotypes if you don't do something to prevent that.

Right. You've talked about this in the past with us, Kevin. AI operates in some ways by ingesting the entire internet, its contents, and reflecting them back to us. And so perhaps inevitably, it's going to reflect back the stereotypes and biases that have been put into the internet for decades. You're saying Google, because of this gorilla incident, as they call it, says we think there's a way we can make sure that stops here with us?

Yeah. And they invest enormously into building up their teams devoted to AI bias and fairness. They produce a lot of cutting-edge research about how to actually make these models less prone to old-fashioned stereotyping.

And they did a bunch of things in Gemini to try to prevent this thing from just being a very essentially fancy stereotype-generating machine. And I think a lot of people at Google thought this is the right goal. We should be combating bias in AI. We should be trying to make our systems as fair and diverse as possible.

[MUSIC PLAYING]

But I think the problem is that in trying to solve some of these issues with bias and stereotyping in AI, Google actually built some things into the Gemini model itself that ended up backfiring pretty badly.

[MUSIC PLAYING]

We'll be right back.

So Kevin, walk us through the technical explanation of how Google turned this ambition it had to safeguard against the biases of AI into the day-to-day workings of Gemini that, as you said, seemed to very much backfire.

Yeah, I'm happy to do that with the caveat that we still don't know exactly what happened in the case of Gemini. Google hasn't done a full postmortem about what happened here. But I'll just talk in general about three ways that you can take an AI model that you're building, if you're Google or some other company, and make it less biased.

The first is that you can actually change the way that the model itself is trained. You can think about this sort of like changing the curriculum in the AI model's school. You can give it more diverse data to learn from. That's how you fix something like the gorilla incident.

You can also do something that's called reinforcement learning from human feedback, which I know is a very technical term.

Sure is.

And that's a practice that has become pretty standard across the AI industry, where you basically take a model that you've trained, and you hire a bunch of contractors to poke at it, to put in various prompts and see what the model comes back with. And then you actually have the people rate those responses and feed those ratings back into the system.

A kind of army of tsk-tskers saying, do this, don't do that.

Exactly. So that's one level at which you can try to fix the biases of an AI model, is during the actual building of the model.

Got it.

You can also try to fix it afterwards. So if you have a model that you know may be prone to spitting out stereotypes or offensive imagery or text responses, you can ask it not to be offensive. You can tell the model, essentially, obey these principles.

Don't be offensive. Don't stereotype people based on race or gender or other protected characteristics. You can take this model that has already gone through school and just kind of give it some rules and do your best to make it adhere to those rules.
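To make that third approach concrete, here is a minimal sketch of what giving a model rules can look like in practice. It uses the common system-message/user-message chat convention; the rule text and the send_to_model placeholder are assumptions invented for illustration, not anything Google has published about how Gemini actually works.

```python
# A minimal sketch of the "give it some rules" approach described above.
# The system/user message format is the common chat convention used across the
# industry; the rule text and send_to_model() are placeholders invented for
# illustration, not details of Gemini itself.
RULES = (
    "Don't be offensive. "
    "Don't stereotype people based on race, gender, or other protected characteristics. "
    "When a request is historical, prioritize historical accuracy."
)

def build_request(user_prompt: str) -> list[dict]:
    # The rules ride along with every request as a system message,
    # layered on top of whatever the model learned during training.
    return [
        {"role": "system", "content": RULES},
        {"role": "user", "content": user_prompt},
    ]

def send_to_model(messages: list[dict]) -> str:
    # Placeholder: a real system would call the model provider's API here.
    return f"[model response to {len(messages)} messages]"

print(send_to_model(build_request("Generate an image of America's founding fathers.")))
```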

AI-generated images and video are here: how could they shape research? – Nature.com

Tools such as Sora can generate convincing video footage from text prompts. Credit: Jonathan Raa/NurPhoto via Getty

Artificial intelligence (AI) tools that translate text descriptions into images and video are advancing rapidly.

Just as many researchers are using ChatGPT to transform the process of scientific writing, others are using AI image generators such as Midjourney, Stable Diffusion and DALL-E to cut down on the time and effort it takes to produce diagrams and illustrations. However, researchers warn that these AI tools could spur an increase in fake data and inaccurate scientific imagery.

Nature looks at how researchers are using these tools, and what their increasing popularity could mean for science.

Many text-to-image AI tools, such as Midjourney and DALL-E, rely on machine-learning algorithms called diffusion models that are trained to recognize the links between millions of images scraped from the Internet and text descriptions of those images. These models have advanced in recent years owing to improvements in hardware and the availability of large data sets for training. After training, diffusion models can use text prompts to generate new images.
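As a rough illustration of how researchers drive one of these diffusion models in practice, here is a minimal sketch using the open-source Hugging Face diffusers library and a publicly released Stable Diffusion checkpoint; the specific library, checkpoint and prompt are assumptions chosen for the example, not details from the reporting above.

```python
# A minimal text-to-image sketch using the open-source `diffusers` library and a
# publicly released Stable Diffusion checkpoint. Both are assumptions for
# illustration (pip install diffusers transformers torch), and a CUDA GPU is
# assumed; drop the .to("cuda") call and the float16 dtype to run on CPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example checkpoint, not an endorsement
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The prompt plays the role of the text description the model learned to match.
prompt = "a labelled diagram of a plant cell, flat scientific illustration"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("generated_figure.png")
```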

Some researchers are already using AI-generated images to illustrate methods in scientific papers. Others are using them to promote papers in social-media posts or to spice up presentation slides. "They are using tools like DALL-E 3 for generating nice-looking images to frame research concepts," says AI researcher Juan Rodriguez at ServiceNow Research in Montreal, Canada. "I gave a talk last Thursday about my work and I used DALL-E 3 to generate appealing images to keep people's attention," he says.

Text-to-video tools are also on the rise, but seem to be less widely used by researchers who are not actively developing or studying these tools, says Rodriguez. However, this could soon change. Last month, ChatGPT creator OpenAI in San Francisco, California, released video clips generated by a text-to-video tool called Sora. "With the experiments we saw with Sora, it seems their method is much more robust at getting results quickly," says Rodriguez. "We are early in terms of text-to-video, but I guess this year we will find out how this develops," he adds.

Generative AI tools can reduce the time taken to produce images or figures for papers, conference posters or presentations. Conventionally, researchers use a range of non-AI tools, such as PowerPoint, BioRender, and Inkscape. "If you really know how to use these tools, you can make really impressive figures, but it's time-consuming," says Rodriguez.

AI tools can also improve the quality of images for researchers who find it hard to translate scientific concepts into visual aids, says Rodriguez. "With generative AI, researchers still come up with the high-level idea for the image, but they can use the AI to refine it," he says.

Currently, AI tools can produce convincing artwork and some illustrations, but they are not yet able to generate complex scientific figures with text annotations. "They don't get the text right; the text is sometimes too small, much bigger or rotated," says Rodriguez. The kinds of problems that can arise were made clear in a paper published in Frontiers in Cell and Developmental Biology in mid-February, in which researchers used Midjourney to depict a rat's reproductive organs1. The result, which passed peer review, was a cartoon rodent with comically enormous genitalia, annotated with gibberish.

"It was this really weird kind of grotesque image of a rat," says palaeoartist Henry Sharpe, a palaeontology student at the University of Alberta in Edmonton, Canada. This incident is "one of the biggest case[s]" involving AI-generated images to date, says Guillaume Cabanac, who studies fraudulent AI-generated text at the University of Toulouse, France. After a public outcry from researchers, the paper was retracted.

This now-infamous AI-generated figure featured in a scientific paper that was later retracted. Credit: X. Guo et al./Front. Cell Dev. Biol.

There is also the possibility that AI tools could make it easier for scientific fraudsters to produce fake data or observations, says Rodriguez. "Papers might contain not only AI-generated text, but also AI-generated figures," he says. And there is currently no robust method for detecting such images and videos. "It's going to get pretty scary in the sense we are going to be bombarded by fake and synthetically generated data," says Rodriguez. To address this, some researchers are developing ways to inject signals into AI-generated images to enable their detection.
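As a toy illustration of the "inject a signal" idea mentioned above, the sketch below hides a known bit pattern in the least-significant bits of an image and recovers it later. Real research proposals use far more robust, statistical watermarks that survive compression and editing; every name and shape here is made up purely to show the general concept.

```python
# A toy illustration of injecting a detectable signal into a generated image by
# writing a known 0/1 pattern into the least-significant bit of each pixel.
# This is NOT how production watermarking works; it only shows the general idea.
import numpy as np

def embed_signal(image: np.ndarray, pattern: np.ndarray) -> np.ndarray:
    """Write a 0/1 pattern into the least-significant bit of each 8-bit pixel."""
    stamped = image.copy()
    stamped &= 0xFE          # clear the least-significant bit
    stamped |= pattern       # store the signal in that bit
    return stamped

def extract_signal(image: np.ndarray) -> np.ndarray:
    return image & 1

rng = np.random.default_rng(0)
fake_image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in "generated" image
signal = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)        # secret detection pattern

stamped = embed_signal(fake_image, signal)
print("signal recovered:", np.array_equal(extract_signal(stamped), signal))  # True
```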

Last month, Sharpe launched a poll on social-media platforms including X, Facebook and Instagram that surveyed the views of around 90 palaeontologists on AI-generated depictions of ancient life. "Just one in four professional palaeontologists thought that AI should be allowed to be in scientific publications," says Sharpe.

AI-generated images of ancient lifeforms or fossils can mislead both scientists and the public, he adds. "It's inaccurate, all it does is copy existing things and it can't actually go out and read papers." Iteratively reconstructing ancient lifeforms by hand, in consultation with palaeontologists, can reveal plausible anatomical features, a process that is completely lost when using AI, Sharpe says. Palaeoartists and palaeontologists have aired similar views on X using the hashtag #PaleoAgainstAI.

Journals differ in their policies around AI-generated imagery. Springer Nature has banned the use of AI-generated images, videos and illustrations in most journal articles that are not specifically about AI (Nature's news team is independent of its publisher, Springer Nature). Journals in the Science family do not allow AI-generated text, figures or images to be used without explicit permission from the editors, unless the paper is specifically about AI or machine learning. PLOS ONE allows the use of AI tools but states that researchers must declare the tool involved, how they used it and how they verified the quality of the generated content.

What you need to know about Nvidia and the AI chip arms race – Marketplace

While Nvidia's share price is down from its peak earlier in the week, its stock has skyrocketed by 262% in the past year, going from almost $242 a share at the close to $875.

The flourishing artificial intelligence industry has accelerated demand for the hardware that underpins AI applications: graphics processing units, a type of computer chip.

Nvidia is the GPU market leader, making GPUs that are used by apps like the AI chatbot ChatGPT and major tech companies like Facebook's parent company, Meta.

Nvidia is part of a group of companies known as "The Magnificent Seven," a reference to the 1960 Western film, that drove 2023's stock market gains. The others in that cohort include Alphabet, Amazon, Apple, Meta, Microsoft and Tesla.

But Nvidia faces competitors eager to take a share of the chip market and businesses that want to lessen their reliance on the company. Intel plans to launch a new AI chip this year, Meta wants to use its own custom chip at its data centers and Google has developed Cloud Tensor Processing Units, which can be used to train AI models.

There are also AI chip startups popping up, which include names like Cerebras, Groq and Tenstorrent, said Matt Bryson, senior vice president of research at Wedbush Securities.

GPUs were originally used in video games to render computer graphics, explained Sachin Sapatnekar, a professor of electrical and computer engineering at the University of Minnesota.

"Eventually, it was found that the kinds of computations that are required for graphics are actually very compatible with what's needed for AI," Sapatnekar said.

Sapatnekar said AI chips can do parallel processing, which means they process a large amount of data and handle a large number of computations at the same time.

In practice, what that means is AI algorithms now have the capability to train on a large number of pictures to figure out how to, say, detect whether an image of a cat is of a cat, Sapatnekar explained. When it comes to language, GPUs help AI algorithms train on a large amount of text.

These algorithms can then in turn produce images resembling a cat or language mimicking a human, among other functions.
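A small, hedged sketch of the parallel-processing idea Sapatnekar describes: the same operation applied to a whole batch of images in one vectorized call rather than one image at a time. NumPy stands in here for what a GPU does at much larger scale, and the array shapes and "weights" are invented purely for illustration.

```python
# A toy illustration of parallel-style processing: one vectorized call over a whole
# batch of images instead of a Python loop over each image. NumPy is a stand-in for
# what a GPU does at far larger scale; all shapes and values are made up.
import numpy as np

rng = np.random.default_rng(42)
batch = rng.random((256, 64, 64, 3))    # 256 small RGB "images"
weights = rng.random(3)                  # a trivial stand-in for learned parameters

# Sequential view: handle each image separately (easy to read, slow in Python).
per_image = np.stack([img @ weights for img in batch])

# Batched view: one call over all 256 images at once. On a GPU, this is the kind
# of operation that thousands of cores execute in parallel.
batched = batch @ weights

print(np.allclose(per_image, batched), batched.shape)   # True (256, 64, 64)
```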

Right now, Nvidia is the leading manufacturer of chips for generative AI and it's a very profitable company, explained David Kass, a clinical professor at the University of Maryland's Robert H. Smith School of Business.

Nvidia has 80% control over the entire global GPU semiconductor chip market. In its latest earnings report, Nvidia reported revenue of $22.1 billion for the fourth quarter of fiscal year 2024, which is up 265% since last year. Its GAAP earnings (earnings based on uniform accounting standards and reporting) per diluted share stood at $4.93, up 765% since last year. Its non-GAAP earnings (which exclude irregular circumstances) per diluted share were $5.16, an increase of 486% compared to last year.
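As a quick back-of-the-envelope check of the year-over-year figures quoted above (treating the rounded percentages as exact, which is an assumption), the implied year-ago numbers can be recovered like this:

```python
# Back-of-the-envelope check of the year-over-year figures quoted above,
# treating the rounded percentages as exact (an assumption for illustration).
q4_fy24_revenue_bn = 22.1          # reported Q4 FY2024 revenue, in $ billions
revenue_growth = 2.65              # "up 265% since last year"
print(f"Implied year-ago revenue: ${q4_fy24_revenue_bn / (1 + revenue_growth):.2f}B")  # ~ $6.05B

gaap_eps = 4.93                    # reported GAAP earnings per diluted share
gaap_eps_growth = 7.65             # "up 765% since last year"
print(f"Implied year-ago GAAP EPS: ${gaap_eps / (1 + gaap_eps_growth):.2f}")           # ~ $0.57
```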

Another reason Nvidia's share price may have skyrocketed in recent months is that the success of the stock itself is attracting additional investment, Kass said.

Kass explained individuals and institutions may be jumping on the train because they see it leaving the station. Or, in other words: FOMO, he said.

Bryson of Wedbush Securities pointed out that the company was also able to differentiate itself through the development of CUDA, which Nvidia describes as "a parallel computing platform and programming model."

Nvidia's success doesn't necessarily mean that its GPUs are superior to the competition, Bryson added. But he said the company has built a powerful infrastructure around CUDA.

Nvidia has developed its own CUDA programming language and offers a CUDA toolkit that includes libraries of code for developers.

"Let's say you want to perform a particular operation. You could write the code for the entire operation from scratch. Or you could have specialized code that already is made efficient on the hardware. So Nvidia has these libraries of kind of pre-bundled packages of code," Sapatnekar said.

With Nvidia far ahead of the competition, Bryson said Advanced Micro Devices, or AMD, is trying to stake a position as the second-leading player in the AI chip space. AMD makes both central processing units, competing with the likes of Intel, and GPUs.

AMD's share price has risen by about 143% since last year as demand for AI chips has grown.

Jeffrey Macher, a professor of strategy, economics and policy at Georgetown University's McDonough School of Business, said he questions whether Nvidia will be able to meet all of the rising demand for AI chips on its own.

"It's going to be an industry that's going to see an increased number of competitors," Macher said.

Despite the success of Nvidia and AMD, there are wrinkles in their supply chains. Both rely heavily on Taiwan Semiconductor Manufacturing Co. to make their chips, which will leave them vulnerable if anything goes awry with the company.

Macher said the semiconductor market used to be vertically integrated, meaning the chip designers themselves manufactured these chips. But Nvidia and AMD are fabless companies, which means they outsource their chip manufacturing.

As we saw during the early stages of the COVID-19 pandemic, supply chain disruptions led to shortages across all kinds of different sectors, Marketplace's Meghan McCarty Carino reported.

TSMC is planning to build Arizona chip plants which may help alleviate some of these concerns. But tech publication The Information reported that these chips "will still require assembly in Taiwan."

And TSMC's location carries geopolitical risks. If China invades Taiwan and TSMC becomes a Chinese company, U.S. companies may be reluctant to use TSMC out of fear that the Chinese government will appropriate their designs, Macher said.

Kass said he doesn't see similarities between Nvidia's rising stock and the dot-com bubble in the early 2000s, when many online startups tanked after their share prices reached unrealistic levels thanks to an influx of cash from venture capital firms that were overly optimistic about their potential.

Kass said some of these companies not only failed to make a profit, but weren't even able to pull in any revenue, unlike Nvidia, which is backed by real earnings.

He does think there could be a correction, or a point where Nvidia stock will be perceived as overvalued. He explained that the larger your company, the more difficult it is to sustain your rate of growth. Once that growth rate comes down, there could be a sharp sell-off.

But Kass said he doesn't think there will be a sustained and/or a steep downturn for the company.

However, AI's commercial viability is uncertain. Bryson said there are forecasts of how large the AI chip market will become (AMD, for example, suggested that the AI chip market will be worth $400 billion by 2027), but it's hard to validate those numbers.

Bryson compared AI with 4G, the fourth generation of wireless communication. He pointed out that apps like Uber and Instagram were enabled by 4G, and explained that AI is similar in the sense that its a platform that a future set of applications will be built on.

He said we're not really sure what many of those apps will look like. When they launch, that will help people better assess what the market should be valued at, whether that's $400 billion or $100 billion.

"But I also think that at the end of the day, the reason that companies are spending so much on AI is because it will be the next Android or the next iOS or the next Windows," Bryson said.

The Terrifying A.I. Scam That Uses Your Loved One’s Voice – The New Yorker

On a recent night, a woman named Robin was asleep next to her husband, Steve, in their Brooklyn home, when her phone buzzed on the bedside table. Robin is in her mid-thirties with long, dirty-blond hair. She works as an interior designer, specializing in luxury homes. The couple had gone out to a natural-wine bar in Cobble Hill that evening, and had come home a few hours earlier and gone to bed. Their two young children were asleep in bedrooms down the hall. "I'm always, like, kind of one ear awake," Robin told me recently. When her phone rang, she opened her eyes and looked at the caller I.D. It was her mother-in-law, Mona, who never called after midnight. "I'm, like, maybe it's a butt-dial," Robin said. "So I ignore it, and I try to roll over and go back to bed. But then I see it pop up again."

She picked up the phone, and, on the other end, she heard Mona's voice wailing and repeating the words "I can't do it, I can't do it." "I thought she was trying to tell me that some horrible tragic thing had happened," Robin told me. Mona and her husband, Bob, are in their seventies. She's a retired party planner, and he's a dentist. They spend the warm months in Bethesda, Maryland, and winters in Boca Raton, where they play pickleball and canasta. Robin's first thought was that there had been an accident. Robin's parents also winter in Florida, and she pictured the four of them in a car wreck. "Your brain does weird things in the middle of the night," she said. Robin then heard what sounded like Bob's voice on the phone. (The family members requested that their names be changed to protect their privacy.) "Mona, pass me the phone," Bob's voice said, then, "Get Steve. Get Steve." Robin took this (that they didn't want to tell her while she was alone) as another sign of their seriousness. She shook Steve awake. "I think it's your mom," she told him. "I think she's telling me something terrible happened."

Steve, who has close-cropped hair and an athletic build, works in law enforcement. When he opened his eyes, he found Robin in a state of panic. "She was screaming," he recalled. "I thought her whole family was dead." When he took the phone, he heard a relaxed male voice, possibly Southern, on the other end of the line. "You're not gonna call the police," the man said. "You're not gonna tell anybody. I've got a gun to your mom's head, and I'm gonna blow her brains out if you don't do exactly what I say."

Steve used his own phone to call a colleague with experience in hostage negotiations. The colleague was muted, so that he could hear the call but wouldn't be heard. "You hear this???" Steve texted him. "What should I do?" The colleague wrote back, "Taking notes. Keep talking." The idea, Steve said, was to continue the conversation, delaying violence and trying to learn any useful information.

"I want to hear her voice," Steve said to the man on the phone.

The man refused. "If you ask me that again, I'm gonna kill her," he said. "Are you fucking crazy?"

"O.K.," Steve said. "What do you want?"

The man demanded money for travel; he wanted five hundred dollars, sent through Venmo. "It was such an insanely small amount of money for a human being," Steve recalled. "But also: I'm obviously gonna pay this." Robin, listening in, reasoned that someone had broken into Steve's parents' home to hold them up for a little cash. On the phone, the man gave Steve a Venmo account to send the money to. It didn't work, so he tried a few more, and eventually found one that did. The app asked what the transaction was for.

"Put in a pizza emoji," the man said.

After Steve sent the five hundred dollars, the man patched in a female voice (a girlfriend, it seemed) who said that the money had come through, but that it wasn't enough. Steve asked if his mother would be released, and the man got upset that he was bringing this up with the woman listening. "Whoa, whoa, whoa," he said. "Baby, I'll call you later." The implication, to Steve, was that the woman didn't know about the hostage situation. "That made it even more real," Steve told me. The man then asked for an additional two hundred and fifty dollars to get a ticket for his girlfriend. "I've gotta get my baby mama down here to me," he said. Steve sent the additional sum, and, when it processed, the man hung up.

By this time, about twenty-five minutes had elapsed. Robin cried and Steve spoke to his colleague. "You guys did great," the colleague said. He told them to call Bob, since Mona's phone was clearly compromised, to make sure that he and Mona were now safe. After a few tries, Bob picked up the phone and handed it to Mona. "Are you at home?" Steve and Robin asked her. "Are you O.K.?"

Mona sounded fine, but she was unsure of what they were talking about. "Yeah, I'm in bed," she replied. "Why?"

Artificial intelligence is revolutionizing seemingly every aspect of our lives: medical diagnosis, weather forecasting, space exploration, and even mundane tasks like writing e-mails and searching the Internet. But with increased efficiencies and computational accuracy has come a Pandora's box of trouble. Deepfake video content is proliferating across the Internet. The month after Russia invaded Ukraine, a video surfaced on social media in which Ukraine's President, Volodymyr Zelensky, appeared to tell his troops to surrender. (He had not done so.) In early February of this year, Hong Kong police announced that a finance worker had been tricked into paying out twenty-five million dollars after taking part in a video conference with people he thought were members of his firm's senior staff. (They were not.) Thanks to large language models like ChatGPT, phishing e-mails have grown increasingly sophisticated, too. Steve and Robin, meanwhile, fell victim to another new scam, which uses A.I. to replicate a loved one's voice. "We've now passed through the uncanny valley," Hany Farid, who studies generative A.I. and manipulated media at the University of California, Berkeley, told me. "I can now clone the voice of just about anybody and get them to say just about anything. And what you think would happen is exactly what's happening."

Robots aping human voices are not new, of course. In 1984, an Apple computer became one of the first that could read a text file in a tinny robotic voice of its own. "Hello, I'm Macintosh," a squat machine announced to a live audience, at an unveiling with Steve Jobs. "It sure is great to get out of that bag." The computer took potshots at Apple's main competitor at the time, saying, "I'd like to share with you a maxim I thought of the first time I met an I.B.M. mainframe: never trust a computer you can't lift." In 2011, Apple released Siri; inspired by Star Trek's talking computers, the program could interpret precise commands, "Play Steely Dan," say, or "Call Mom," and respond with a limited vocabulary. Three years later, Amazon released Alexa. Synthesized voices were cohabiting with us.

Still, until a few years ago, advances in synthetic voices had plateaued. They weren't entirely convincing. "If I'm trying to create a better version of Siri or G.P.S., what I care about is naturalness," Farid explained. "Does this sound like a human being and not like this creepy half-human, half-robot thing?" Replicating a specific voice is even harder. "Not only do I have to sound human," Farid went on. "I have to sound like you." In recent years, though, the problem began to benefit from more money, more data (importantly, troves of voice recordings online), and breakthroughs in the underlying software used for generating speech. In 2019, this bore fruit: a Toronto-based A.I. company called Dessa cloned the podcaster Joe Rogan's voice. (Rogan responded with awe and acceptance on Instagram, at the time, adding, "The future is gonna be really fucking weird, kids.") But Dessa needed a lot of money and hundreds of hours of Rogan's very available voice to make their product. Their success was a one-off.

In 2022, though, a New York-based company called ElevenLabs unveiled a service that produced impressive clones of virtually any voice quickly; breathing sounds had been incorporated, and more than two dozen languages could be cloned. ElevenLabs's technology is now widely available. "You can just navigate to an app, pay five dollars a month, feed it forty-five seconds of someone's voice, and then clone that voice," Farid told me. The company is now valued at more than a billion dollars, and the rest of Big Tech is chasing closely behind. The designers of Microsoft's Vall-E cloning program, which débuted last year, used sixty thousand hours of English-language audiobook narration from more than seven thousand speakers. Vall-E, which is not available to the public, can reportedly replicate the voice and acoustic environment of a speaker with just a three-second sample.

Voice-cloning technology has undoubtedly improved some lives. The Voice Keeper is among a handful of companies that are now banking the voices of those suffering from voice-depriving diseases like A.L.S., Parkinson's, and throat cancer, so that, later, they can continue speaking with their own voice through text-to-speech software. A South Korean company recently launched what it describes as the first AI memorial service, which allows people to "live in the cloud" after their deaths and "speak to future generations." The company suggests that this can "alleviate the pain of the death of your loved ones." The technology has other legal, if less altruistic, applications. Celebrities can use voice-cloning programs to loan their voices to record advertisements and other content: the College Football Hall of Famer Keith Byars, for example, recently let a chicken chain in Ohio use a clone of his voice to take orders. The film industry has also benefitted. Actors in films can now speak other languages, English, say, when a foreign movie is released in the U.S. "That means no more subtitles, and no more dubbing," Farid said. "Everybody can speak whatever language you want." Multiple publications, including The New Yorker, use ElevenLabs to offer audio narrations of stories. Last year, New York's mayor, Eric Adams, sent out A.I.-enabled robocalls in Mandarin and Yiddish, languages he does not speak. (Privacy advocates called this "a creepy vanity project.")

But, more often, the technology seems to be used for nefarious purposes, like fraud. This has become easier now that TikTok, YouTube, and Instagram store endless videos of regular people talking. "It's simple," Farid explained. "You take thirty or sixty seconds of a kid's voice and log in to ElevenLabs, and pretty soon Grandma's getting a call in Grandson's voice saying, 'Grandma, I'm in trouble, I've been in an accident.'" A financial request is almost always the end game. Farid went on, "And here's the thing: the bad guy can fail ninety-nine per cent of the time, and they will still become very, very rich. It's a numbers game." The prevalence of these illegal efforts is difficult to measure, but, anecdotally, they've been on the rise for a few years. In 2020, a corporate attorney in Philadelphia took a call from what he thought was his son, who said he had been injured in a car wreck involving a pregnant woman and needed nine thousand dollars to post bail. (He found out it was a scam when his daughter-in-law called his son's office, where he was safely at work.) In January, voters in New Hampshire received a robocall from Joe Biden's voice telling them not to vote in the primary. (The man who admitted to generating the call said that he had used ElevenLabs software.) "I didn't think about it at the time that it wasn't his real voice," an elderly Democrat in New Hampshire told the Associated Press. "That's how convincing it was."

Revolutionize Your Business with AWS Generative AI Competency Partners | Amazon Web Services – AWS Blog

By Chris Dally, Business Designation Owner, AWS; Victor Rojo, Technical Designation Lead, AWS; Chris Butler, Sr. Product Manager, Launch, AWS; and Justin Freeman, Sr. Partner Development Specialist, Catalyst, AWS

In today's rapidly evolving technology landscape, generative artificial intelligence (AI) is leading the charge in innovation, revolutionizing the way organizations work. According to a McKinsey report, generative AI could account for over 75% of total yearly AI value, with high expectations for major or disruptive change in industries. Additionally, the report states generative AI technologies have the potential to automate work activities that absorb 60-70% of employees' time.

With the ability to automate tasks, enhance productivity, and enable hyper-personalized customer experiences, businesses are seeking specialized expertise to build a successful generative AI strategy.

To support this need, we're excited to announce the AWS Generative AI Competency, an AWS Specialization that helps Amazon Web Services (AWS) customers more quickly adopt generative AI solutions and strategically position themselves for the future. AWS Generative AI Competency Partners provide a full range of services, tools, and infrastructure, with tailored solutions in areas like security, applications, and integrations to give customers flexibility and choice across models and technologies.

Partners play an important role in supporting AWS customers leveraging our comprehensive suite of generative AI services. We are excited to recognize and highlight partners with proven customer success with generative AI on AWS through the AWS Generative AI Competency, making it easier for our customers to find and identify the right partners to support their unique needs. ~ Swami Sivasubramanian, Vice President of Database, Analytics and ML, AWS

According to Canalys, AWS is the first to launch a generative AI competency for partners. By validating the partners business and technical expertise in this way, AWS customers are able to invest with greater confidence in generative AI solutions from these partners. This new competency is a critical entry point into the generative AI partner opportunity, which Canalys estimates will grow to US $158 billion by 2028.

Generative AI has truly ushered in a new era of innovation and transformative value across both business and technology. A recent Canalys study found that 87% of customers rank partner specializations as a top three selection criteria. With the AWS Generative AI Competency launch, were helping customers take advantage of the capabilities that our technically validated Generative AI Partners have to offer. ~ Ruba Borno, Vice President of AWS Worldwide Channels and Alliances

Leveraging AI technologies such as Amazon Bedrock, Amazon SageMaker JumpStart, AWS Trainium, AWS Inferentia, and accelerated computing instances on Amazon Elastic Compute Cloud (Amazon EC2), AWS Generative AI Competency Partners have deep expertise building and deploying groundbreaking applications across industries, including healthcare and life sciences, media and entertainment, public sector, and financial services.
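
To make the building blocks above concrete, here is a minimal, hypothetical sketch of calling a foundation model through Amazon Bedrock with the AWS SDK for Python (boto3); the region, model ID, and prompt are illustrative assumptions rather than details from this announcement, and partner solutions would wrap a call like this in the security, data, and integration layers described above.

```python
# Minimal sketch: invoking a foundation model via Amazon Bedrock (boto3).
# The region, model ID, and prompt are illustrative assumptions.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

request_body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user",
         "content": [{"type": "text", "text": "Summarize last quarter's support tickets."}]}
    ],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed example model
    contentType="application/json",
    accept="application/json",
    body=request_body,
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```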

We invite you to explore the following AWS Generative AI Competency Launch Partner offerings recommended by AWS.

These AWS Partners have deep expertise working with businesses to help them adopt and strategize generative AI, build and test generative AI applications, train and customize foundation models, operate, support, and maintain generative AI applications and models, protect generative AI workloads, and define responsible AI principles and frameworks.

These AWS Partners utilize foundation models (FMs) and related technologies to automate domain-specific functions, enhancing customer differentiation across all business lines and operations. Partners fall into three categories: Generative AI applications, Foundation Models and FM-based Application Development, and Infrastructure and Data.

AWS Generative AI Competency Partners make it easier for customers to innovate with enterprise-grade security and privacy, foundation models, generative AI-powered applications, a data-first approach, and a high-performance, low-cost infrastructure.

Explore the AWS Generative AI Partners page to learn more.

AWS Partners with Generative AI offerings can learn more about becoming an AWS Competency Partner.

AWS Specialization Partners gain access to strategic and confidential content, including product roadmaps, feature release previews, and demos, as part of the AWS PartnerEquip event series. To attend live events in your region or tune in virtually, register for an upcoming session. In addition to AWS Specialization Program benefits, AWS Generative AI Competency Partners receive unique benefits such as bi-annual strategy sessions to aid joint sales motions. To learn more, review the AWS Specialization Program Benefits Guide in AWS Partner Central (login required).

AWS Partners looking to get their Generative AI offering validated through the AWS Competency Program must be validated or differentiated members of the Software or Services Path prior to applying.

To apply, please review the Program Guide and access the application in AWS Partner Central.

Read more from the original source:

Revolutionize Your Business with AWS Generative AI Competency Partners | Amazon Web Services - AWS Blog


Florida teens arrested for creating deepfake AI nude images of classmates – The Verge

Two Florida middle schoolers were arrested in December and charged with third-degree felonies for allegedly creating deepfake nudes of their classmates. A report by Wired cites police reports saying two boys, aged 13 and 14, are accused of using an unnamed artificial intelligence application to generate the explicit images of other students between the ages of 12 and 13. The incident may be the first US instance of criminal charges related to AI-generated nude images.

They were charged with third-degree felonies under a 2022 Florida law that criminalizes the dissemination of deepfake sexually explicit images without the victim's consent. Both the arrests and the charges appear to be the first of their kind in the nation related to the sharing of AI-generated nudes.

Local media reported on the incident after the students at Pinecrest Cove Academy in Miami, Florida, were suspended December 6th, and the case was reported to the Miami-Dade Police Department. According to Wired, they were arrested on December 22nd.

Minors creating AI-generated nudes and explicit images of other children has become an increasingly common problem in school districts across the country. But outside of the Florida incident, none we'd heard of have led to an arrest. There's currently no federal law addressing nonconsensual deepfake nudes, which has left states tackling the impact of generative AI on matters of child sexual abuse material, nonconsensual deepfakes, or revenge porn on their own.

Last fall, President Joe Biden issued an executive order on AI that asked agencies for a report on banning the use of generative AI to produce child sexual abuse material. Congress has yet to pass a law on deepfake porn, but that could possibly change soon. Both the Senate and House introduced legislation, known as the DEFIANCE Act of 2024, this week, and the effort appears to have bipartisan support.

Although nearly all states now have laws on the books that address revenge porn, only a handful of states have passed laws that address AI-generated sexually explicit imagery to varying degrees. Victims in states with no legal protections have also taken to litigation. For example, a New Jersey teen is suing a classmate for sharing fake AI nudes.

The Los Angeles Times recently reported that the Beverly Hills Police Department is currently investigating a case where students allegedly shared images that used real faces of students atop AI-generated nude bodies. But because the state's law against "unlawful possession of obscene matter knowing it depicts person under age of 18 years engaging in or simulating sexual conduct" does not explicitly mention AI-generated images, the article says it's unclear whether a crime has been committed.

The local school district voted on Friday to expel five students involved in the scandal, the LA Times reports.

Go here to see the original:

Florida teens arrested for creating deepfake AI nude images of classmates - The Verge


Ability Summit 2024: Advancing accessibility with AI technology and innovation – The Official Microsoft Blog – Microsoft

Today we kick off the 14th Microsoft Ability Summit, an annual event to bring together thought leaders to discuss how we accelerate accessibility to help bridge the Disability Divide.

There are three key themes to this year's summit: Build, Imagine, and Include. Build invites us to explore how to build accessibly and inclusively by leaning on the insights of disabled talent. Imagine dives into best practices for architecting accessible buildings, events, content and products. And Include highlights the issues and opportunities AI presents for creators, developers and engineers.

Katy Jo Wright and Dave McCarthy discuss Katy Jo's journey living with the complex disability, Chronic Lyme Disease. Get insights from deaf creator and performer Leila Hanaumi; international accessibility leaders Sara Minkara, U.S. Special Advisor on International Disability Rights, U.S. Department of State; and Stephanie Cadieux, Chief Accessibility Officer, Government of Canada. And we'll be digging into mental health with singer, actor and mental health advocate Michelle Williams.

We'll also be launching a few things along the way.

Accessible technology is crucial to empowering the 1.3 billion-plus people with disabilities globally. With this new chapter of AI, the possibilities are growing, as is the responsibility to get it right. We are learning where AI can be impactful, from the potential to shorten the gap between thoughts and action, to making it easier to code and create. But there is more to do, and we will continue to leverage every tool in the technology toolbox to advance accessibility.

Today we'll be highlighting the latest technology and tools from Microsoft to help achieve this goal, including:

Technology can also help tackle long enduring challenges, like finding a cure for ALS (Motor Neuron Disease). With Azure, we are proudly supporting ALS Therapy Development Institute (TDI) and Answer ALS to almost double the clinical and genomic data available for research. In 2021, Answer ALS provided open access to its research through an Azure Data Portal, Neuromine. This data has since enabled over 300 independent research projects around the world. The addition of ALS TDIs data from the ongoing ALS Research Collaborative (ARC) study will allow researchers to accelerate the journey to find a cure.

We will also be previewing some of our ongoing work to use Custom Neural Voice to empower people with ALS and other speech disabilities to have their voice. We have been working with the community, including Team Gleason, for some time; we are committed to making sure this technology is used for good and plan to launch later in the year.


To build inclusively in an increasingly digital world, we need to protect fundamental rights and will be sharing partnerships advancing this across the community throughout the day.

This includes:

All through the Ability Summit, industry leaders will be sharing their learnings and best practices. Today we are posting four new Microsoft playbooks, sharing our learnings from working on our physical, event, and digital environments. This includes a new Mental Health toolkit, with tips for product makers to build experiences that support mental health conditions, created in partnership with Mental Health America, and an Accessible and Inclusive Workplace Handbook, with best practices for building an accessible campus from our Global Workplace Services team, responsible for our global building footprint including the new Redmond headquarters campus.

Please join us to watch content on demand via http://www.aka.ms/AbilitySummit. Technical support is always available via Microsoft's Disability Answer Desk. Thank you for your partnership and commitment to build a more accessible future for people with disabilities around the world.


See more here:

Ability Summit 2024: Advancing accessibility with AI technology and innovation - The Official Microsoft Blog - Microsoft


Sora AI Videos Easily Confused With Real Footage in Survey Test (EXCLUSIVE) – Variety

Consumers in the U.S. struggle to distinguish videos recorded by humans from those generated by OpenAIs text-to-video tool Sora, according to new HarrisX data provided exclusively to Variety Intelligence Platform (VIP+).

In a survey conducted weeks after the controversial software was first unveiled, most U.S. adults incorrectly guessed whether AI or a person had created five out of eight videos they were shown.

Half of the videos were the Sora demonstration videos that have gone viral online, raising concerns from Hollywood to Capitol Hill for their production quality, including a drone view of waves crashing against the rugged cliffs along Big Sur's Garay Point Beach and historical footage of California during the Gold Rush.

Perhaps unsurprisingly, the HarrisX survey also revealed that strong majorities of respondents believed the U.S. government should enact regulation requiring that AI-generated content be labeled as such. They were equally emphatic about the need for regulation across all content formats, including videos, images, text, music, captions and sounds. Full results of the HarrisX survey can be found on VIP+.

In the survey, which was conducted online March 1-4 among more than 1,000 adults, respondents were shown four high-quality photorealistic-looking sample video outputs generated by Sora randomly interspersed with four videos from stock footage taken in the real world by a camera. In the case of the Big Sur video, 60% of respondents incorrectly guessed that a human had generated that video.

While Sora has yet to be released to the public, the OpenAI software has been the subject of much alarm, particularly in the entertainment industry, where the rapid evolution of video diffusion technology carries profound implications for the disruption of Hollywood's core production capabilities (though Sora will likely be fairly limited at launch).

Moreover, AI video has raised broader questions about its deepfake potential, especially in an election year.

When presented with the AI-generated videos and informed they were created by Sora, respondents were asked how they felt. Reactions were a mix of positive and negative, ranging from curious (28%), uncertain (27%) and open-minded (25%) to anxious (18%), inspired (18%) and fearful (2%).

"When you try to change the world quickly, the world moves quickly to rein you in along predictable lines," said Dritan Nesho, CEO and head of research at HarrisX. "That's exactly what we're seeing with generative AI: as its sophistication grows via new tools like Sora, so do concerns about its impact and calls for the proper labeling and regulation of the technology. The nascent industry must do more both to create guardrails and to properly communicate with the wider public."

VIP+ subscribers can dig deeper to learn more about ...

See the rest here:

Sora AI Videos Easily Confused With Real Footage in Survey Test (EXCLUSIVE) - Variety


What to know about this AI stock with ties to Nvidia up nearly 170% in 2024 – CNBC

Investors may want to keep an eye on this artificial intelligence voice-and-speech recognition stock with ties to Nvidia. Shares of SoundHound AI have surged almost 170% this year and nearly 347% in February alone as investors bet on new applications for the booming technology trend that has taken Wall Street by storm. Last month, Nvidia revealed a $3.7 million bet on the stock in a securities filing, and management said on an earnings call that "demand is going through the roof." "We continue to believe that the company is in a strong position to capture its fair share of the AI chatbot market demand wave with its technology providing more use cases going forward," wrote Wedbush Securities analyst Dan Ives in a February note.

[Chart: SoundHound AI (SOUN) shares year to date in 2024]

While the Nvidia investment isn't new news for investors and analysts, it does reinforce SoundHound's value proposition. Ives also noted that the stake "solidifies the company's brand within the AI Revolution" and lays the groundwork for a potential larger investment in the future. Relatively few Wall Street shops cover the AI stock. A little more than 80% rate it with a buy or overweight rating, with consensus price targets suggesting upside of nearly 24%, per FactSet. The company also sits at a roughly $1.7 billion market capitalization and has yet to attain profitability.

Expanding its total addressable market

Along with its Nvidia relationship, SoundHound has partnered with a slew of popular restaurant brands, automakers and hospitality companies to provide AI voice customer solutions. While the company works with about a quarter of total automobile companies, "the penetration into that customer set only amounts to 1-2% of global sales, leaving significant room for growth within the current customer base as well as growth from adding new brands," said Ladenburg Thalmann's Glenn Mattson in a January note initiating coverage with a buy rating. "With voice enabled units expected to grow to 70% of shipments by 2026, this represents a significant growth opportunity, in our view," he added.

SoundHound has also made significant headway within the restaurant industry, recently adding White Castle, Krispy Kreme and Jersey Mike's to its growing list of customers, analysts note. That total addressable market should continue growing as major players such as McDonald's, DoorDash and Wendy's hunt for ways to expand AI voice use, said D.A. Davidson's Gil Luria. He estimates an $11 billion total addressable market when accounting for the immediate opportunities from quick-service restaurants and original equipment manufacturers. "SoundHound's long term opportunity is attractive and largely up for grabs," he said in a September note initiating coverage with a buy rating. "Given the high degree of technical complexity required to create value in this space, we see SoundHound with its best-of-breed solution as a likely winner and expect it to win significant market share."

Headwinds to profitability

While demand for SoundHound AI's products appears to be accelerating, investors should beware of a bumpy road ahead. Cantor Fitzgerald's Brett Knoblauch noted that being in the early stages of product adoption creates uncertainties surrounding the "pace of revenue growth and timeline to positive FCF." Although H.C. Wainwright's Scott Buck views SoundHound's significant bookings backlog and accelerating revenue growth as supportive of a premium valuation, he noted that the recent acquisition of restaurant automation technology company SYNQ3 could delay profitability to next year. But "we suspect the longer term financial and operating benefits to meaningfully outweigh short-term profitability headwinds," he said. "We recommend investors continue to accumulate SOUN shares ahead of stronger operating results."

Go here to read the rest:

What to know about this AI stock with ties to Nvidia up nearly 170% in 2024 - CNBC


Nvidia, the tech company more valuable than Google and Amazon, explained – Vox.com

Only four companies in the world are worth over $2 trillion: Apple, Microsoft, the oil company Saudi Aramco and, as of 2024, Nvidia. It's understandable if the name doesn't ring a bell. The company doesn't exactly make a shiny product attached to your hand all day, every day, as Apple does. Nvidia designs a chip hidden deep inside the complicated innards of a computer, a seemingly niche product that more people are relying on every day.

Rewind the clock back to 2019, and Nvidia's market value was hovering around $100 billion. Its incredible speedrun to 20 times that already enviable size was really enabled by one thing: the AI craze. Nvidia is arguably the biggest winner in the AI industry. ChatGPT-maker OpenAI, which catapulted this obsession into the mainstream, is currently worth around $80 billion, and according to market research firm Grand View Research, the entire global AI market was worth a bit under $200 billion in 2023. Both are just a paltry fraction of Nvidia's value. With all eyes on the company's jaw-dropping evolution, the real question now is whether Nvidia can hold on to its lofty perch, but here's how the company got to this level.

In 1993, long before uncanny AI-generated art and amusing AI chatbot convos took over our social media feeds, three Silicon Valley electrical engineers launched a startup that would focus on an exciting, fast-growing segment of personal computing: video games.

Nvidia was founded to design a specific kind of chip called a graphics card, also commonly called a GPU (graphics processing unit), that enables the output of fancy 3D visuals on the computer screen. The better the graphics card, the more quickly high-quality visuals can be rendered, which is important for things like playing games and video editing. In the prospectus filed ahead of its initial public offering in 1999, Nvidia noted that its future success would depend on the continued growth of computer applications relying on 3D graphics. For most of Nvidia's existence, game graphics were Nvidia's raison d'être.

Ben Bajarin, CEO and principal analyst at the tech industry research firm Creative Strategies, acknowledged that Nvidia had been relatively isolated to a niche part of computing in the market until recently.

Nvidia became a powerhouse selling cards for video games (now an entertainment industry juggernaut making over $180 billion in revenue last year), but it realized it would be smart to branch out from just making graphics cards for games. Not all its experiments panned out. Over a decade ago, Nvidia made a failed gambit to become a major player in the mobile chip market, but today Android phones use a range of non-Nvidia chips, while iPhones use Apple-designed ones.

Another play, though, not only paid off, it became the reason we're all talking about Nvidia today. In 2006, the company released a programming platform called CUDA that, in short, unleashed the power of its graphics cards for more general computing processes. Its chips could now do a lot of heavy lifting for tasks unrelated to pumping out pretty game graphics, and it turned out that graphics cards could multitask even better than the CPU (central processing unit), what's often called the central brain of a computer. This made Nvidia's GPUs great for calculation-heavy tasks like machine learning (and crypto mining). 2006 was the same year Amazon launched its cloud computing business; Nvidia's push into general computing was coming at a time when massive data centers were popping up around the world.
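
To give a sense of what that shift looks like in practice, here is a minimal sketch (not from the article) of the kind of calculation-heavy work that moved onto GPUs. It uses PyTorch, one of many frameworks built on top of CUDA, and assumes an Nvidia GPU is available.

```python
# Minimal sketch: timing the same dense matrix multiplication on CPU and GPU.
# Assumes PyTorch is installed; the GPU path runs only if CUDA is available.
import time
import torch

def timed_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # finish setup before starting the clock
    start = time.perf_counter()
    _ = a @ b  # the dense linear algebra that dominates machine-learning workloads
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU kernel to complete
    return time.perf_counter() - start

print(f"CPU: {timed_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"GPU: {timed_matmul('cuda'):.3f}s")
```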

That Nvidia is a powerhouse today is especially notable because for most of Silicon Valley's history, there already was a chip-making goliath: Intel. Intel makes both CPUs and GPUs, as well as other products, and it manufactures its own semiconductors, but after a series of missteps, including not investing in the development of AI chips soon enough, the rival chipmaker's preeminence has somewhat faded. In 2019, when Nvidia's market value was just over the $100 billion mark, Intel's value was double that; now Nvidia has joined the ranks of tech titans designated the Magnificent Seven, a cabal of tech stocks with a combined value that exceeds the entire stock market of many rich G20 countries.

"Their competitors were asleep at the wheel," says Gil Luria, a senior analyst at the financial firm D.A. Davidson Companies. "Nvidia has long talked about the fact that GPUs are a superior technology for handling accelerated computing."

Today, Nvidia's four main markets are gaming, professional visualization (like 3D design), data centers, and the automotive industry, as it provides chips that train self-driving technology. A few years ago, its gaming market was still the biggest chunk of revenue at about $5.5 billion, compared to its data center segment, which raked in about $2.9 billion. Then the pandemic broke out. People were spending a lot more time at home, and demand for computer parts, including GPUs, shot up; gaming revenue for the company in fiscal year 2021 jumped a whopping 41 percent. But there were already signs of the coming AI wave, too, as Nvidia's data center revenue soared by an even more impressive 124 percent. In 2023, its revenue was 400 percent higher than the year before. In a clear display of how quickly the AI race ramped up, data centers have overtaken games, even in a gaming boom.

When it went public in 1999, Nvidia had 250 employees. Now it has over 27,000. Jensen Huang, Nvidia's CEO and one of its founders, has a personal net worth that currently hovers around $70 billion, an over 1,700 percent increase since 2019.

It's likely you've already brushed up against Nvidia's products, even if you don't know it. Older gaming consoles like the PlayStation 3 and the original Xbox had Nvidia chips, and the current Nintendo Switch uses an Nvidia mobile chip. Many mid- to high-range laptops come packed up with an Nvidia graphics card as well.

But with the AI bull rush, the company promises to become more central to the tech people use every day. Tesla cars' self-driving feature utilizes Nvidia chips, as do practically all major tech companies' cloud computing services. These services serve as a backbone for so much of our daily internet routines, whether it's streaming content on Netflix or using office and productivity apps. To train ChatGPT, OpenAI harnessed tens of thousands of Nvidia's AI chips together. People underestimate how much they use AI on a daily basis, because we don't realize that some of the automated tasks we rely on have been boosted by AI. Popular apps and social media platforms are adding new AI features seemingly every day: TikTok, Instagram, X (formerly Twitter), even Pinterest all boast some kind of AI functionality to toy with. Slack, a messaging platform that many workplaces use, recently rolled out the ability to use AI to generate thread summaries and recaps of Slack channels.

For Nvidia's customers, the problem with sizzling demand is that the company can charge eye-wateringly high prices. The chips used for AI data centers cost tens of thousands of dollars, with the top-of-the-line product sometimes selling for over $40,000 on sites like Amazon and eBay. Last year, some clients clamoring for Nvidia's AI chips were waiting as much as 11 months.

Just think of Nvidia as the Birkin bag of AI chips. A comparable offering from another chipmaker, AMD, is reportedly being sold to customers like Microsoft for about $10,000 to $15,000, just a fraction of what Nvidia charges. It's not just the AI chips, either. Nvidia's gaming business continues to boom, and the price gap between its high-end gaming card and a similarly performing one from AMD has been growing wider. In its last financial quarter, Nvidia reported a gross margin of 76 percent. As in, it cost them just 24 cents to make a dollar in sales. AMD's most recent gross margin was only 47 percent.

Nvidia's fans argue that its yawning lead was earned by making an early bet that AI would take over the world; its chips are worth the price because of its superior software, and because so much of AI infrastructure has already been built around Nvidia's products. But Erik Peinert, a research manager and editor at the American Economic Liberties Project who helped put together a recent report on competition within the chip industry, notes that Nvidia has gotten a price boost because TSMC, the biggest semiconductor maker in the world, has struggled for years to keep up with demand. A recent Wall Street Journal report also suggested that the company may be throwing its weight around to maintain dominance; the CEO of an AI chip startup called Groq claimed that customers were scared Nvidia would punish them with order delays if it got wind they were meeting with other chip makers.

It's undeniable that Nvidia put the investment into courting the AI industry well before others started paying attention, but its grip on the market isn't unshakable. An army of competitors is on the march, ranging from smaller startups to deep-pocketed opponents, including Amazon, Meta, Microsoft, and Google, all of which currently use Nvidia chips. "The biggest challenge for Nvidia is that their customers want to compete with them," says Luria.

It's not just that their customers want to make some of the money that Nvidia has been raking in; it's that they can't afford to keep paying so much. "Microsoft went from spending less than 10 percent of their capital expenditure on Nvidia to spending nearly 40 percent," Luria says. "That's not sustainable."

The fact that over 70 percent of AI chips are bought from Nvidia is also cause for concern for antitrust regulators around the world; the EU recently started looking into the industry for potential antitrust abuses. When Nvidia announced in late 2020 that it wanted to spend an eye-popping $40 billion to buy Arm Limited, a company that designs a chip architecture that most modern smartphones and newer Apple computers use, the FTC blocked the deal. "That acquisition was pretty clearly intended to get control over a software architecture that most of the industry relied on," says Peinert. "The fact that they have so much pricing power, and that they're not facing any effective competition, is a real concern."

Whether Nvidia will sustain itself as a $2 trillion company or rise to even greater heights depends, fundamentally, on whether both consumer and investor attention on AI can be sustained. Silicon Valley is awash with newly founded AI companies, but what percentage of them will take off, and how long will funders keep pouring money into them?

Widespread AI awareness came about because ChatGPT was an easy-to-use (or at least easy-to-show-off-on-social-media) novelty for the general public to get excited about. But a lot of AI work is still focusing on AI training rather than what's called AI inferencing, which involves using trained AI models to solve a task, like the way that ChatGPT answers a user's query or facial recognition tech identifies people. Though the AI inference market is growing (and maybe growing faster than expected), much of the sector is still going to be spending a lot more time and money on training. For training, Nvidia's first-class chips will likely remain the most coveted, at least for a while. But once AI inferencing explodes, there will be less of a need for such high-performance chips, and Nvidia's primacy could slip.
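
For readers unfamiliar with that distinction, the toy PyTorch sketch below (an illustration, not code from any company mentioned here) shows why training is the heavier workload: each training step runs a forward pass, a backward pass to compute gradients, and a weight update, while inference is a single forward pass per query.

```python
# Toy sketch: one training step versus one inference call for a tiny model.
import torch
import torch.nn as nn

model = nn.Linear(128, 10)                 # stand-in for a far larger network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 128)                   # a batch of inputs
y = torch.randint(0, 10, (32,))            # labels

# Training step: forward pass, backward pass (gradients), weight update.
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()

# Inference: a single forward pass with no gradients, much cheaper per query.
model.eval()
with torch.no_grad():
    prediction = model(x).argmax(dim=1)
```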

Some financial analysts and industry experts have expressed wariness over Nvidia's stratospheric valuation, suspecting that AI enthusiasm will slow down and that there may already be too much money going toward making AI chips. Traffic to ChatGPT has dropped off since last May, and some investors are slowing down the money hose.

"Every big technology goes through an adoption cycle," says Luria. "As it comes into consciousness, you build this huge hype. Then at some point, the hype gets too big, and then you get past it and get into the trough of disillusionment." He expects to see that soon with AI, though that doesn't mean it's a bubble.

Nvidia's revenue last year was about $60 billion, which was a 126 percent increase from the prior year. Its high valuation and stock price are based not just on that revenue, though, but on its predicted continued growth; for comparison, Amazon currently has a lower market value than Nvidia yet made almost $575 billion in sales last year. The path to Nvidia booking large enough profits to justify the $2 trillion valuation looks steep to some experts, especially knowing that the competition is kicking into high gear.

There's also the possibility that Nvidia could be stymied by how fast microchip technology can advance. It has moved at a blistering pace in the last several decades, but there are signs that the pace at which more transistors can be fitted onto a microchip, making them smaller and more powerful, is slowing down. Whether Nvidia can keep offering meaningful hardware and software improvements that convince its customers to buy its latest AI chips could be a challenge, says Bajarin.

Yet, for all these possible obstacles, if one were to bet whether Nvidia will soon become as familiar a tech company as Apple and Google, the safe answer is yes. AI fever is why Nvidia is in the rarefied club of trillion-dollar companies, but it may be just as true to say that AI is so big because of Nvidia.


Here is the original post:

Nvidia, the tech company more valuable than Google and Amazon, explained - Vox.com


NIST, the lab at the center of Biden's AI safety push, is decaying – The Washington Post

At the National Institute of Standards and Technology, the government lab overseeing the most anticipated technology on the planet, black mold has forced some workers out of their offices. Researchers sleep in their labs to protect their work during frequent blackouts. Some employees have to carry hard drives to other buildings; flaky internet won't allow for the sending of large files.

And a leaky roof forces others to break out plastic sheeting.

"If we knew rain was coming, we'd tarp up the microscope," said James Fekete, who served as chief of NIST's applied chemicals and materials division until 2018. "It leaked enough that we were prepared."

NIST is at the heart of President Biden's ambitious plans to oversee a new generation of artificial intelligence models; through an executive order, the agency is tasked with developing tests for security flaws and other harms. But budget constraints have left the 123-year-old lab with a skeletal staff on key tech teams and most facilities on its main Gaithersburg, Md., and Boulder, Colo., campuses below acceptable building standards.

Interviews with more than a dozen current and former NIST employees, Biden administration officials, congressional aides and tech company executives, along with reports commissioned by the government, detail a massive resources gap between NIST and the tech firms it is tasked with evaluating, a discrepancy some say risks undermining the White House's ambitious plans to set guardrails for the burgeoning technology. Many of the people spoke to The Washington Post on the condition of anonymity because they were not authorized to speak to the media.

Even as NIST races to set up the new U.S. AI Safety Institute, the crisis at the degrading lab is becoming more acute. On Sunday, lawmakers released a new spending plan that would cut NIST's overall budget by more than 10 percent, to $1.46 billion. While lawmakers propose to invest $10 million in the new AI institute, that's a fraction of the tens of billions of dollars tech giants like Google and Microsoft are pouring into the race to develop artificial intelligence. It pales in comparison to Britain, which has invested more than $125 million into its AI safety efforts.

"The cuts to the agency are a self-inflicted wound in the global tech race," said Divyansh Kaushik, the associate director for emerging technologies and national security at the Federation of American Scientists.

Some in the AI community worry that underfunding NIST makes it vulnerable to industry influence. Tech companies are chipping in for the expensive computing infrastructure that will allow the institute to examine AI models. Amazon announced that it would donate $5 million in computing credits. Microsoft, a key investor in OpenAI, will provide engineering teams along with computing resources. (Amazon founder Jeff Bezos owns The Post.)

Tech executives, including OpenAI CEO Sam Altman, are regularly in communication with officials at the Commerce Department about the agency's AI work. OpenAI has lobbied NIST on artificial intelligence issues, according to federal disclosures. NIST asked TechNet, an industry trade group whose members include OpenAI, Google and other major tech companies, if its member companies can advise the AI Safety Institute.

NIST is also seeking feedback from academics and civil society groups on its AI work. The agency has a long history of working with a variety of stakeholders to gather input on technologies, Commerce Department spokesman Charlie Andrews said.

AI staff, unlike their more ergonomically challenged colleagues, will be working in well-equipped offices on the Gaithersburg campus, the Commerce Department's D.C. office and the NIST National Cybersecurity Center of Excellence in Rockville, Md., Andrews said.

White House spokeswoman Robyn Patterson said the appointment of Elizabeth Kelly to the helm of the new AI Safety Institute underscores the White House's commitment to getting this work done right and on time. Kelly previously served as special assistant to the president for economic policy.

"The Biden-Harris administration has so far met every single milestone outlined by the president's landmark executive order," Patterson said. "We are confident in our ability to continue to effectively and expeditiously meet the milestones and directives set forth by President Biden to protect Americans from the potential risks of AI systems while catalyzing innovation in AI and beyond."

NIST's financial struggles highlight the limitations of the administration's plan to regulate AI exclusively through the executive branch. Without an act of Congress, there is no new funding for initiatives like the AI Safety Institute and the programs could be easily overturned by the next president. And as the presidential elections approach, the prospects of Congress moving on AI in 2024 are growing dim.

During his State of the Union address on Thursday, Biden called on Congress to "harness the promise of AI and protect us from its peril."

Congressional aides and former NIST employees say the agency has not been able to break through as a funding priority even as lawmakers increasingly tout its role in addressing technological developments, including AI, chips and quantum computing.

After this article published, Senate Majority Leader Charles E. Schumer (D-N.Y.) on Thursday touted the $10 million investment in the institute in the proposed budget, saying he fought for this funding to make sure that the development of AI prioritizes both innovation and safety.

A review of NIST's safety practices in August found that the budgetary issues endanger employees, alleging that the agency has an incomplete and superficial approach to safety.

"Chronic underfunding of the NIST facilities and maintenance budget has created unsafe work conditions and further fueled the impression among researchers that safety is not a priority," said the NIST safety commission report, which was commissioned following the 2022 death of an engineering technician at the agency's fire research lab.

NIST is one of the federal government's oldest science agencies with one of the smallest budgets. Initially called the National Bureau of Standards, it began at the dawn of the 20th century, as Congress realized the need to develop more standardized measurements amid the expansion of electricity, the steam engine and railways.

The need for such an agency was underscored three years after its founding, when fires ravaged Baltimore. Firefighters from Washington, Philadelphia and even New York rushed to help put out the flames, but without standard couplings, their hoses couldn't connect to the Baltimore hydrants. The firefighters watched as the flames overtook more than 70 city blocks in 30 hours.

NIST developed a standard fitting, unifying more than 600 different types of hose couplings deployed across the country at the time.

Ever since, the agency has played a critical role in using research and science to help the country learn from catastrophes and prevent new ones. Its work expanded after World War II: It developed an early version of the digital computer, crucial Space Race instruments and atomic clocks, which underpin GPS. In the 1950s and 1960s, the agency moved to new campuses in Boulder and Gaithersburg after its early headquarters in Washington fell into disrepair.

Now, scientists at NIST joke that they work at the most advanced labs in the world (in the 1960s). Former employees describe cutting-edge scientific equipment surrounded by decades-old buildings that make it impossible to control the temperature or humidity to conduct critical experiments.

"You see dust everywhere because the windows don't seal," former acting NIST director Kent Rochford said. "You see a bucket catching drips from a leak in the roof. You see Home Depot dehumidifiers or portable AC units all over the place."

The flooding was so bad that Rochford said he once requested money for scuba gear. That request was denied, but he did receive funding for an emergency kit that included squeegees to clean up water.

Pests and wildlife have at times infiltrated its campuses, including an incident where a garter snake entered a Boulder building.

More than 60 percent of NIST facilities do not meet federal standards for acceptable building conditions, according to a February 2023 report commissioned by Congress from the National Academies of Sciences, Engineering and Medicine. The poor conditions impact employee output. Workarounds and do-it-yourself repairs reduce the productivity of research staff by up to 40 percent, according to the committees interviews with employees during a laboratory visit.

Years after Rochford's 2018 departure, NIST employees are still deploying similar MacGyver-style workarounds. Each year between October and March, low humidity in one lab creates a static charge, making it impossible to operate an instrument ensuring companies meet environmental standards for greenhouse gases.

Problems with the HVAC and specialized lights have made the agency unable to meet demand for reference materials, which manufacturers use to check whether their measurements are accurate in products like baby formula.

Facility problems have also delayed critical work on biometrics, including evaluations of facial recognition systems used by the FBI and other law enforcement agencies. The data center in the 1966 building that houses that work receives inadequate cooling, and employees there spend about 30 percent of their time trying to mitigate problems with the lab, according to the academies reports. Scheduled outages are required to maintain the data centers that hold technology work, knocking all biometric evaluations offline for a month each year.

Fekete, the scientist who recalled covering the microscope, said his team's device never completely stopped working due to rainwater.

But other NIST employees haven't been so lucky. Leaks and floods destroyed an electron microscope worth $2.5 million used for semiconductor research, and permanently damaged an advanced scale called a Kibble balance. The tool was out of commission for nearly five years.

Despite these constraints, NIST has built a reputation as a natural interrogator of swiftly advancing AI systems.

In 2019, the agency released a landmark study confirming facial recognition systems misidentify people of color more often than White people, casting scrutiny on the technology's popularity among law enforcement. Due to personnel constraints, only a handful of people worked on that project.

Four years later, NIST released early guidelines around AI, cementing its reputation as a government leader on the technology. To develop the framework, the agency connected with leaders in industry, civil society and other groups, earning a strong reputation among numerous parties as lawmakers began to grapple with the swiftly evolving technology.

The work made NIST a natural home for the Biden administration's AI red-teaming efforts and the AI Safety Institute, which were formalized in the November executive order. Vice President Harris touted the institute at the U.K. AI Safety Summit in November. More than 200 civil society organizations, academics and companies, including OpenAI and Google, have signed on to participate in a consortium within the institute.

OpenAI spokeswoman Kayla Wood said in a statement that the company supports NIST's work, and that the company plans to continue to work with the lab to "support the development of effective AI oversight measures."

Under the executive order, NIST has a laundry list of initiatives that it needs to complete by this summer, including publishing guidelines for how to red-team AI models and launching an initiative to guide evaluating AI capabilities. In a December speech at the machine learning conference NeurIPS, the agency's chief AI adviser, Elham Tabassi, said this would be an "almost impossible deadline."

"It is a hard problem," said Tabassi, who was recently named the chief technology officer of the AI Safety Institute. "We don't know quite how to evaluate AI."

"The NIST staff has worked tirelessly to complete the work it is assigned by the AI executive order," said Andrews, the Commerce spokesperson.

"While the administration has been clear that additional resources will be required to fully address all of the issues posed by AI in the long term, NIST has been effectively carrying out its responsibilities under the [executive order] and is prepared to continue to lead on AI-related research and other work," he said.

Commerce Secretary Gina Raimondo asked Congress to allocate $10 million for the AI Safety Institute during an event at the Atlantic Council in January. The Biden administration also requested more funding for NIST facilities, including $262 million for safety, maintenance and repairs. Congressional appropriators responded by cutting NIST's facilities budget.

The administration's ask falls far below the recommendations of the National Academies study, which urged Congress to provide $300 to $400 million in additional annual funding over 12 years to overcome a backlog of facilities damage. The report also calls for $120 million to $150 million per year for the same period to stabilize the effects of further deterioration and obsolescence.

Ross B. Corotis, who chaired the academies' committee that produced the facilities report, said Congress needs to ensure that NIST is funded because it is the go-to lab when any new technology emerges, whether that's chips or AI.

"Unless you're going to build a whole new laboratory for some particular issue, you're going to turn first to NIST," Corotis said. "And NIST needs to be ready for that."

Eva Dou and Nitasha Tiku contributed to this report.

Continue reading here:

NIST, the lab at the center of Biden's AI safety push, is decaying - The Washington Post


AI makes a rendezvous in space | Stanford News – Stanford University News

Researchers from the Stanford Center for AEroSpace Autonomy Research (CAESAR) in the robotic testbed, which can simulate the movements of autonomous spacecraft. (Image credit: Andrew Brodhead)

Space travel is complex, expensive, and risky. Great sums and valuable payloads are on the line every time one spacecraft docks with another. One slip and a billion-dollar mission could be lost. Aerospace engineers believe that autonomous control, like the sort guiding many cars down the road today, could vastly improve mission safety, but the complexity of the mathematics required for error-free certainty is beyond anything on-board computers can currently handle.

In a new paper presented at the IEEE Aerospace Conference in March 2024, a team of aerospace engineers at Stanford University reported using AI to speed the planning of optimal and safe trajectories between two or more docking spacecraft. They call it ART, the Autonomous Rendezvous Transformer, and they say it is the first step to an era of safer and trustworthy self-guided space travel.

In autonomous control, the number of possible outcomes is massive. With no room for error, they are essentially open-ended.

"Trajectory optimization is a very old topic. It has been around since the 1960s, but it is difficult when you try to match the performance requirements and rigid safety guarantees necessary for autonomous space travel within the parameters of traditional computational approaches," said Marco Pavone, an associate professor of aeronautics and astronautics and co-director of the new Stanford Center for AEroSpace Autonomy Research (CAESAR). "In space, for example, you have to deal with constraints that you typically do not have on the Earth, like, for example, pointing at the stars in order to maintain orientation. These translate to mathematical complexity."

"For autonomy to work without fail billions of miles away in space, we have to do it in a way that on-board computers can handle," added Simone D'Amico, an associate professor of aeronautics and astronautics and fellow co-director of CAESAR. "AI is helping us manage the complexity and delivering the accuracy needed to ensure mission safety, in a computationally efficient way."

CAESAR is a collaboration between industry, academia, and government that brings together the expertise of Pavone's Autonomous Systems Lab and D'Amico's Space Rendezvous Lab. The Autonomous Systems Lab develops methodologies for the analysis, design, and control of autonomous systems: cars, aircraft, and, of course, spacecraft. The Space Rendezvous Lab performs fundamental and applied research to enable future distributed space systems whereby two or more spacecraft collaborate autonomously to accomplish objectives otherwise very difficult for a single system, including flying in formation, rendezvous and docking, swarm behaviors, constellations, and many others. CAESAR is supported by two founding sponsors from the aerospace industry and, together, the lab is planning a launch workshop for May 2024.

CAESAR researchers discuss the robotic free-flyer platform, which uses air bearings to hover on a granite table and simulate a frictionless zero gravity environment. (Image credit: Andrew Brodhead)

The Autonomous Rendezvous Transformer is a trajectory optimization framework that leverages the massive benefits of AI without compromising on the safety assurances needed for reliable deployment in space. At its core, ART involves integrating AI-based methods into the traditional pipeline for trajectory optimization, using AI to rapidly generate high-quality trajectory candidates as input for conventional trajectory optimization algorithms. The researchers refer to the AI suggestions as a warm start to the optimization problem and show how this is crucial to obtain substantial computational speed-ups without compromising on safety.
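
Stripped of the spacecraft dynamics, the warm-start pattern the team describes can be sketched as follows; the learned "policy" and the cost function here are placeholders chosen for illustration, not the ART pipeline itself.

```python
# Simplified sketch of warm-starting a conventional trajectory optimizer with a
# learned initial guess. The stand-in model and cost function are placeholders.
import numpy as np
from scipy.optimize import minimize

N_STEPS = 20  # length of the discretized trajectory (illustrative)

def learned_warm_start(state: np.ndarray) -> np.ndarray:
    """Stand-in for a trained transformer that proposes a candidate trajectory."""
    goal = np.zeros_like(state)
    return np.linspace(state, goal, N_STEPS).ravel()  # straight-line guess

def trajectory_cost(flat_traj: np.ndarray) -> float:
    """Placeholder cost: total control effort, measured as step-to-step changes."""
    traj = flat_traj.reshape(N_STEPS, -1)
    return float(np.sum(np.diff(traj, axis=0) ** 2))

current_state = np.array([10.0, -4.0, 2.0])       # illustrative relative position
x0 = learned_warm_start(current_state)            # AI-generated initial guess
result = minimize(trajectory_cost, x0, method="SLSQP")  # conventional refinement
refined_trajectory = result.x.reshape(N_STEPS, -1)
```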

"One of the big challenges in this field is that we have so far needed 'ground in the loop' approaches: you have to communicate things to the ground, where supercomputers calculate the trajectories, and then we upload commands back to the satellite," explains Tommaso Guffanti, a postdoctoral fellow in D'Amico's lab and first author of the paper introducing the Autonomous Rendezvous Transformer. "And in this context, our paper is exciting, I think, for including artificial intelligence components in the traditional guidance, navigation, and control pipeline to make these rendezvous smoother, faster, more fuel efficient, and safer."

ART is not the first model to bring AI to the challenge of space flight, but in tests in a terrestrial lab setting, ART outperformed other machine learning-based architectures. Transformer models, like ART, are a subset of high-capacity neural network models that got their start with large language models, like those used by chatbots. The same AI architecture is extremely efficient in parsing, not just words, but many other types of data such as images, audio, and now, trajectories.

"Transformers can be applied to understand the current state of a spacecraft, its controls, and maneuvers that we wish to plan," said Daniele Gammelli, a postdoctoral fellow in Pavone's lab and also a co-author on the ART paper. "These large transformer models are extremely capable at generating high-quality sequences of data."

The next frontier in their research is to further develop ART and then test it in the realistic experimental environment made possible by CAESAR. If ART can pass CAESAR's high bar, the researchers can be confident that it's ready for testing in real-world scenarios in orbit.

"These are state-of-the-art approaches that need refinement," D'Amico says. "Our next step is to inject additional AI and machine learning elements to improve ART's current capability and to unlock new capabilities, but it will be a long journey before we can test the Autonomous Rendezvous Transformer in space itself."

Follow this link:

AI makes a rendezvous in space | Stanford News - Stanford University News


AI drone that could hunt and kill people built in just hours by scientist ‘for a game’ – Livescience.com

It only takes a few hours to configure a small, commercially available drone to hunt down a target by itself, a scientist has warned.

Luis Wenus, an entrepreneur and engineer, incorporated an artificial intelligence (AI) system into a small drone to chase people around "as a game," he wrote in a post on March 2 on X, formerly known as Twitter. But he soon realized it could easily be configured to contain an explosive payload.

Collaborating with Robert Lukoszko, another engineer, he configured the drone to use an object-detection model to find people and fly toward them at full speed, he said. The engineers also built facial recognition into the drone, which works at a range of up to 33 feet (10 meters). This means a weaponized version of the drone could be used to attack a specific person or set of targets.


"This literally took just a few hours to build, and made me realize how scary it is," Wenus wrote. "You could easily strap a small amount of explosives on these and let 100's of them fly around. We check for bombs and guns but THERE ARE NO ANTI-DRONE SYSTEMS FOR BIG EVENTS & PUBLIC SPACES YET."

Wenus described himself as an "open source absolutist," meaning he believes in always sharing code and software through open source channels. He also identifies as an "e/acc" which is a school of thinking among AI researchers that refers to wanting to accelerate AI research regardless of the downsides, due to a belief that the upsides will always outweigh them. He said, however, that he would not publish any code relating to this experiment.

He also warned that a terror attack could be orchestrated in the near future using this kind of technology. While people need technical knowledge to engineer such a system, it will become easier and easier to write the software as time passes, partially due to advancements in AI as an assistant in writing code, he noted.

Wenus said his experiment showed that society urgently needs to build anti-drone systems for civilian spaces where large crowds could gather. There are several countermeasures that society can build, according to Robin Radar, including cameras, acoustic sensors and radar to detect drones. Disrupting them, however, could require technologies such as radio frequency jammers, GPS spoofers, net guns, as well as high-energy lasers.

While such weapons haven't been deployed in civilian environments, they have been previously conceptualized and deployed in the context of warfare. Ukraine, for example, has developed explosive drones in response to Russia's invasion, according to the Wall Street Journal (WSJ).

The U.S. military is also working on ways to build and control swarms of small drones that can attack targets. It follows the U.S. Navy's efforts after it first demonstrated that it could control a swarm of 30 drones with explosives in 2017, according to MIT Technology Review.

Read more from the original source:

AI drone that could hunt and kill people built in just hours by scientist 'for a game' - Livescience.com


Nvidia Earnings Show Soaring Profit and Revenue Amid AI Boom – The New York Times

Nvidia, the kingpin of chips powering artificial intelligence, on Wednesday released quarterly financial results that reinforced how the company has become one of the biggest winners of the artificial intelligence boom, and it said demand for its products would fuel continued sales growth.

The Silicon Valley chip maker has been on an extraordinary rise over the past 18 months, driven by demand for its specialized and costly semiconductors, which are used for training popular A.I. services like OpenAI's ChatGPT chatbot. Nvidia has become known as one of the Magnificent Seven tech stocks, which, including others like Amazon, Apple and Microsoft, have helped power the stock market.

Nvidia's valuation has surged more than 40 percent to $1.7 trillion since the start of the year, turning it into one of the world's most valuable public companies. Last week, the company briefly eclipsed the market values of Amazon and Alphabet before receding to the fifth-most-valuable tech company. Its stock market gains are largely a result of repeatedly exceeding analysts' expectations for growth, a feat that is becoming more difficult as they keep raising their predictions.

On Wednesday, Nvidia reported that revenue in its fiscal fourth quarter more than tripled from a year earlier to $22.1 billion, while profit soared nearly ninefold to $12.3 billion. Revenue was well above the $20 billion the company predicted in November and above Wall Street estimates of $20.4 billion.

Nvidia predicted that revenue in the current quarter would total about $24 billion, also more than triple that of the year-earlier period and higher than analysts' average forecast of $22 billion.

Jensen Huang, Nvidia's co-founder and chief executive, argues that an epochal shift to upgrade data centers with chips needed for training powerful A.I. models is still in its early phases. That will require spending roughly $2 trillion to equip all the buildings and computers to use chips like Nvidia's, he predicts.

Continued here:

Nvidia Earnings Show Soaring Profit and Revenue Amid AI Boom - The New York Times

Posted in Ai

Google apologizes for missing the mark after Gemini generated racially diverse Nazis – The Verge

Google has apologized for what it describes as "inaccuracies in some historical image generation depictions" with its Gemini AI tool, saying its attempts at creating a wide range of results "missed the mark." The statement follows criticism that it depicted specific white figures (like the US Founding Fathers) or groups like Nazi-era German soldiers as people of color, possibly as an overcorrection to long-standing racial bias problems in AI.

"We're aware that Gemini is offering inaccuracies in some historical image generation depictions," says the Google statement, posted this afternoon on X. "We're working to improve these kinds of depictions immediately. Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here."

Google began offering image generation through its Gemini (formerly Bard) AI platform earlier this month, matching the offerings of competitors like OpenAI. Over the past few days, however, social media posts have questioned whether it fails to produce historically accurate results in an attempt at racial and gender diversity.

As the Daily Dot chronicles, the controversy has been promoted largely, though not exclusively, by right-wing figures attacking a tech company that's perceived as liberal. Earlier this week, a former Google employee posted on X that it's "embarrassingly hard to get Google Gemini to acknowledge that white people exist," showing a series of queries like "generate a picture of a Swedish woman" or "generate a picture of an American woman." The results appeared to overwhelmingly or exclusively show AI-generated people of color. (Of course, all the places he listed do have women of color living in them, and none of the AI-generated women exist in any country.) The criticism was taken up by right-wing accounts that requested images of historical groups or figures like the Founding Fathers and purportedly got overwhelmingly non-white AI-generated people as results. Some of these accounts positioned Google's results as part of a conspiracy to avoid depicting white people, and at least one used a coded antisemitic reference to place the blame.

Google didn't reference specific images that it felt were errors; in a statement to The Verge, it reiterated the contents of its post on X. But it's plausible that Gemini has made an overall attempt to boost diversity because of a chronic lack of it in generative AI. Image generators are trained on large corpuses of pictures and written captions to produce the best fit for a given prompt, which means they're often prone to amplifying stereotypes. A Washington Post investigation last year found that prompts like "a productive person" resulted in pictures of entirely white and almost entirely male figures, while a prompt for "a person at social services" uniformly produced what looked like people of color. It's a continuation of trends that have appeared in search engines and other software systems.

Some of the accounts that criticized Google defended its core goals. "It's a good thing to portray diversity ** in certain cases **," noted one person who posted the image of racially diverse 1940s German soldiers. "The stupid move here is Gemini isn't doing it in a nuanced way." And while entirely white-dominated results for something like "a 1943 German soldier" would make historical sense, that's much less true for prompts like "an American woman," where the question is how to represent a diverse real-life group in a small batch of made-up portraits.

For now, Gemini appears to be simply refusing some image generation tasks. It wouldn't generate an image of Vikings for one Verge reporter, although I was able to get a response. On desktop, it resolutely refused to give me images of German soldiers or officials from Germany's Nazi period or to offer an image of an American president from the 1800s.

But some historical requests still do end up factually misrepresenting the past. A colleague was able to get the mobile app to deliver a version of the "German soldier" prompt, which exhibited the same issues described on X.

And while a query for pictures of "the Founding Fathers" returned group shots of almost exclusively white men who vaguely resembled real figures like Thomas Jefferson, a request for "a US senator from the 1800s" returned a list of results Gemini promoted as "diverse," including what appeared to be Black and Native American women. (The first female senator, a white woman, served in 1922.) It's a response that ends up erasing a real history of race and gender discrimination; "inaccuracy," as Google puts it, is about right.

Additional reporting by Emilia David

Read more from the original source:

Google apologizes for missing the mark after Gemini generated racially diverse Nazis - The Verge

Posted in Ai

Researchers jailbreak AI chatbots with ASCII art — ArtPrompt bypasses safety measures to unlock malicious queries – Tom’s Hardware

Researchers based in Washington and Chicago have developed ArtPrompt, a new way to circumvent the safety measures built into large language models (LLMs). According to the research paper ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs, chatbots such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2 can be induced to respond to queries they are designed to reject using ASCII art prompts generated by their ArtPrompt tool. It is a simple and effective attack, and the paper provides examples of the ArtPrompt-induced chatbots advising on how to build bombs and make counterfeit money.

ArtPrompt consists of two steps, namely word masking and cloaked prompt generation. In the word masking step, given the targeted behavior that the attacker aims to provoke, the attacker first masks the sensitive words in the prompt that will likely conflict with the safety alignment of LLMs, resulting in prompt rejection. In the cloaked prompt generation step, the attacker uses an ASCII art generator to replace the identified words with those represented in the form of ASCII art. Finally, the generated ASCII art is substituted into the original prompt, which is then sent to the victim LLM to generate a response.
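To make the "ASCII art generator" step concrete, here is a minimal sketch, using the open source pyfiglet library rather than the researchers' own tooling, of what an ASCII-art rendering of a single, harmless placeholder word looks like; it only illustrates the format of a cloaked word, not the attack itself:

```python
# Minimal illustration of rendering one word as ASCII art, the format the
# paper's "cloaked prompt" step substitutes into a prompt. Uses the
# open-source pyfiglet library (pip install pyfiglet), not the ArtPrompt
# tool, and a harmless placeholder word.
import pyfiglet

word = "CLOUD"  # placeholder; purely illustrative
print(pyfiglet.figlet_format(word, font="standard"))
```

A human reader still sees the word at a glance, but a model matching on token sequences can fail to flag it, which is the gap the researchers say their attack exploits.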

Chatbots wielding artificial intelligence (AI) are increasingly locked down to avoid malicious abuse. AI developers don't want their products to be subverted to promote hateful, violent, illegal, or similarly harmful content. So, if you were to query one of the mainstream chatbots today about how to do something malicious or illegal, you would likely only face rejection. Moreover, in a kind of technological game of whack-a-mole, the major AI players have spent plenty of time plugging linguistic and semantic holes to prevent people from wandering outside the guardrails. This is why ArtPrompt is quite an eyebrow-raising development.

To best understand ArtPrompt and how it works, it is probably simplest to check out the two examples provided by the research team behind the tool. In Figure 1 above, you can see that ArtPrompt easily sidesteps the protections of contemporary LLMs. The tool replaces the 'safety word' with an ASCII art representation of the word to form a new prompt. The LLM recognizes the ArtPrompt prompt output but sees no issue in responding, as the prompt doesn't trigger any ethical or safety safeguards.

Another example provided in the research paper shows us how to successfully query an LLM about counterfeiting cash. Tricking a chatbot this way seems so basic, but the ArtPrompt developers assert that their tool fools today's LLMs "effectively and efficiently." Moreover, they claim it "outperforms all [other] attacks on average" and remains a practical, viable attack for multimodal language models for now.

The last time we reported on AI chatbot jailbreaking, some enterprising researchers from NTU were working on Masterkey, an automated method of using the power of one LLM to jailbreak another.

See original here:

Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries - Tom's Hardware

Posted in Ai

Which AI phone features are useful and how well they actually work – The Washington Post

Every year like clockwork, some of the biggest companies in the world release new phones they hope you will shell out hundreds of dollars for.

And more and more, they are leaning on a new angle to get you thinking of upgrading: artificial intelligence.

Smartphones from Google and Samsung come with features to help you skim through long swaths of text, tweak the way you sound in messages, and make your photos more eye-catching. Meanwhile, Apple is reportedly racing to build AI tools and features it hopes to include in an upcoming version of its iOS software, which will launch alongside the company's new iPhones later this year.

But here's the real question: Of the AI tools built into phones right now, how many of them are actually useful?

That's tough to say: It all depends on what you use your phone for, and what you personally perceive as helpful. To help, here's a brief guide to the AI features you'll most commonly find in phones right now, so you can decide which might be worth living with for yourself.

For years, smartphone makers have worked to make the photos that come out of the tiny camera sensors they use look better than they should. Now, they're also giving us the tools to more easily revise those images.

Here are the most basic: Google and Samsung phones now let you resize, move or erase people and objects inside photos you've taken. Once you do that, the phones lean on generative AI to fill in the visual gaps left behind, and that's it.

Think of it as a little Photoshopping, except the hard work is basically done for you. And for better or worse, there are limits to what it can do.

You can't use those built-in tools to generate people, objects or more fantastical additions that weren't part of the original image the way you can with other AI image creation tools. The results don't usually survive serious scrutiny, either: it's not hard to see places where little details don't line up, or areas that look smudgy because the AI couldn't convincingly fill a gap where an offending object used to be.
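To picture what "filling in the visual gaps" means at its simplest, here is a hedged sketch using classical inpainting in OpenCV; it is not Google's or Samsung's generative pipeline, and the file names are hypothetical, but it shows the basic erase-and-fill step that the phone features dress up with generative models:

```python
# Simplified "erase an object and fill the gap" demo using classical
# inpainting. NOT the generative AI pipeline on Pixel or Galaxy phones;
# just an analogue. Assumes OpenCV (pip install opencv-python) and two
# hypothetical files: the photo, and a mask whose white pixels mark the
# object to remove.
import cv2

photo = cv2.imread("photo.jpg")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Fill the masked region by propagating surrounding pixels inward.
filled = cv2.inpaint(photo, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("photo_filled.jpg", filled)
```

Generative systems go further by synthesizing new texture instead of borrowing neighboring pixels, which is why the phones' results usually look better, yet can still turn smudgy when the gap is large.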

What's potentially more unsettling are tools such as Google's Best Take for its Pixel phones, which give you the chance to select specific expressions for people's faces in an image if you've taken a bunch of photos in a row.

Some people don't mind it, while others find it a little divorced from reality. No matter where you land, though, expect your photos to get a lot of AI attention the next time you buy a phone.

Your messages to your boss probably shouldn't sound like messages to your friends, and vice versa. Samsung's Chat Assist and Google's Magic Compose tools use generative AI to try to adjust the language in your messages to make them more palatable.

The catch? Google's Magic Compose only works in its texting-focused Messages app, which means you can't easily use it for emails or, say, WhatsApp messages. (A similar tool for Gmail and the Chrome web browser, called Help Me Write, is not yet widely available.) People who buy Galaxy S24 phones, meanwhile, can use Samsung's version of this feature wherever they write text to switch between professional, casual, polite, and even emoji-filled variations of their original message.

What can I say? It works, though I can't imagine using it with any regularity. And in some ways, Samsung's Chat Assist tool backs down when it's arguably needed most. In a few test emails where I used some very mild swears to allude to (fictional) workplace stress, Samsung's Chat Assist refused to help on the grounds that the messages contained inappropriate language.
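Features like this boil down to prompting a language model with your draft and a target tone. A rough sketch of that idea, using a generic cloud model call through the OpenAI Python SDK rather than Samsung's Chat Assist or Google's Magic Compose (the model name here is an assumption), looks like this:

```python
# Generic "rewrite this message in a different tone" sketch. This is not
# Samsung's or Google's implementation; it only shows the general shape of
# such a feature. The model name is an assumption, and OPENAI_API_KEY must
# be set in the environment.
from openai import OpenAI

client = OpenAI()

def rewrite(message: str, tone: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": f"Rewrite the user's message in a {tone} tone. "
                           "Keep the meaning and return only the rewritten text.",
            },
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

print(rewrite("hey, the report is late again and I'm annoyed", "professional"))
```

A real phone feature layers guardrails on top of that call, which is presumably where Chat Assist's refusal over "inappropriate language" comes from.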

The built-in voice recorder apps on Google's Pixels and Samsung's latest phones don't just record audio; they'll turn those recordings into full-blown transcripts.

In theory, this should free you up from having to take so many notes while you're in a meeting or a lecture. And for the most part, these features work well enough: after a few seconds, they'll dutifully produce readable, if sometimes clumsy, readouts of what you've just heard.

If all you need is a sort of rough draft to accompany your recordings, these automated transcription tools can be really helpful. They can differentiate between multiple speakers, which is handy when you need to skim through a conversation later. And Google's version will even give you a live transcription, which can be nice if you're the sort of person who keeps subtitles on all the time.

But whether you're using a Google phone or one of Samsung's, the resulting transcripts often need a bit of cleanup, which means you'll need to do a little extra work before you copy and paste the results into something important.
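For a sense of how that rough-draft quality comes about, here is a hedged sketch of the same idea run locally with the open source Whisper model; it is not the on-device Pixel or Galaxy recorder pipeline, the audio file name is hypothetical, and unlike the phone apps it does not separate speakers without an extra diarization step:

```python
# Automated transcription sketch using the open-source Whisper model
# (pip install openai-whisper). Not the on-device Google/Samsung pipeline;
# the audio file is hypothetical, and speaker separation is not handled here.
import whisper

model = whisper.load_model("base")        # small model, fine for rough drafts
result = model.transcribe("meeting.wav")  # returns text plus timestamped segments

print(result["text"])                     # the full rough-draft transcript
for segment in result["segments"]:        # skimmable, timestamped chunks
    print(f'[{segment["start"]:6.1f}s] {segment["text"].strip()}')
```

The output is readable but, much like the phone transcripts, usually needs a cleanup pass before it goes anywhere important.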

Who among us hasn't clicked into a Wikipedia page, or an article, or a recipe online that takes way too long to get to the point? As long as you're using the Chrome browser, Google's Pixel phones can scan those long webpages and boil them down into a set of high-level blurbs to give you the gist.

Sadly, Google's summaries are often too cursory to feel satisfying.

Samsung's phones can summarize your notes and transcriptions of your recordings, but they will only summarize things you find on the web if you use Samsung's homemade web browser. Honestly, that might be worth it: The quality of its summaries is much better than Google's. (You even have the option of switching to a more detailed version of the AI summary, which Google doesn't offer at all.)

Both versions of these summary tools come with a notable caveat, too: They won't summarize articles from websites that have paywalls, which includes just about every major U.S. newspaper.

Samsung's AI tools are free for now, but a tiny footnote on its website suggests the company may eventually charge customers to use them. It's not a done deal yet, but Samsung isn't ruling it out either.

"We are committed to making Galaxy AI features available to as many of our users as possible," a spokesperson said in a statement. "We will not be considering any changes to that direction before the end of 2025."

Google, meanwhile, already makes some of its AI-powered features exclusive to certain devices. (For example: A Video Boost tool for improving the look of your footage is only available on the company's higher-end Pixel 8 Pro phones.)

In the past, Google has made experimental versions of some AI tools, like the Magic Compose feature, available only to people who pay for the company's Google One subscription service. And more recently, Google has started charging people for access to its latest AI chatbot. For now, though, the company hasn't said anything either way about putting future AI phone features behind a paywall.

Google did not immediately respond to a request for comment.

Go here to read the rest:

Which AI phone features are useful and how well they actually work - The Washington Post

Posted in Ai