Top Chatbots Are Giving Horrible Financial Advice

Wrong Dot Com

Despite lofty claims from artificial intelligence soothsayers, the world's top chatbots are still strikingly bad at giving financial advice.

AI researchers Gary Smith, Valentina Liberman, and Isaac Warshaw of the Walter Bradley Center for Natural and Artificial Intelligence posed a series of 12 finance questions to four leading large language models (LLMs) — OpenAI's ChatGPT-4o, DeepSeek-V2, Elon Musk's Grok 3 Beta, and Google's Gemini 2 — to test out their financial prowess.

As the experts explained in a new study from Mind Matters, each chatbot proved to be "consistently verbose but often incorrect."

That finding was, notably, almost identical to Smith's assessment last year for the Journal of Financial Planning: when he posed 11 finance questions to ChatGPT 3.5, Microsoft's Bing with GPT-4, and Google's Bard chatbot, the LLMs spat out responses that were "consistently grammatically correct and seemingly authoritative but riddled with arithmetic and critical-thinking mistakes."

The researchers used a simple scale: a "0" for a completely incorrect financial analysis, a "0.5" for a correct financial analysis marred by mathematical errors, and a "1" for an answer correct on both the math and the finance. Out of a maximum 12 points, no chatbot earned higher than five: ChatGPT led the pack with a 5.0, followed by DeepSeek's 4.0, Grok's 3.0, and Gemini's abysmal 1.5.

Spend Thrift

Some of the chatbot responses were so bad that they defied the Walter Bradley experts' expectations. When Grok, for example, was asked to add up a single month's worth of expenses for a Caribbean rental property whose rent was $3,700 and whose utilities ran $200 per month, the chatbot claimed that those numbers added up to $4,900 — rather than the correct $3,900.

Along with spitting out a bunch of strange typographical errors, the chatbots also failed, per the study, to generate any intelligent analyses of the relatively basic financial questions the researchers posed. Even the chatbots' most compelling answers seemed to be gleaned from various online sources, and those only came when they were asked to explain relatively simple concepts like how Roth IRAs work.

Throughout it all, the chatbots were dangerously glib. The researchers noted that all of the LLMs they tested present a "reassuring illusion of human-like intelligence, along with a breezy conversational style enhanced by friendly exclamation points" that could come off to the average user as confidence and correctness.

"It is still the case that the real danger is not that computers are smarter than us," they concluded, "but that we think computers are smarter than us and consequently trust them to make decisions they should not be trusted to make."

More on dumb AI: OpenAI Researchers Find That Even the Best AI Is "Unable To Solve the Majority" of Coding Problems

Google Is Allegedly Paying Top AI Researchers to Just Sit Around and Not Work for the Competition

Google apparently has one weird trick to hoard its talent from poachers: paying them to not work.

As Business Insider reports, some United Kingdom-based employees at Google's DeepMind AI lab are paid to do nothing for six months — or, in fewer cases, up to a year — after they quit their jobs.

Known as "garden leave," this type of cushy clause is the luckier stepsister to so-called "noncompete" agreements, which prohibit employees and contractors from working with a competitor for a designated period of time after they depart an employer. Ostensibly meant to prevent aggressive poaching, these sorts of clauses also bar outgoing employees from working with competitors.

Often deployed in tandem with noncompetes, garden leave agreements are more prevalent in the UK than across the pond in the United States, where, according to the Horton Group law firm, such clauses are generally reserved for "highly-paid executives."

Though it seems like a pretty good gig — or lack thereof — if you can get it, employees at DeepMind's London HQ told BI that garden leave and noncompetes stymie their ability to lock down meaningful work after they leave the lab.

While noncompetes are increasingly a nonstarter in the United States amid growing legislative pushes to make them unenforceable, they're perfectly legal and quite commonplace in the UK, so long as a company explicitly states the business interests it's protecting.

Like DeepMind's generous garden leave period, noncompete clauses typically last between six months and a year — but instead of being paid to garden, ex-employees simply can't work for competitors for that length of time without risking backlash from Google's army of lawyers.

Because noncompetes are often signed alongside non-disclosure agreements (NDAs), we don't know exactly what DeepMind considers a "competitor" — but whatever its contracts stipulate, it's clearly bothersome enough to get its former staffers to speak out.

"Who wants to sign you for starting in a year?" one ex-DeepMind-er told BI. "That's forever in AI."

In an X post from the end of March, Nando de Freitas, a London-based former DeepMind director who now works at Microsoft, offered a brash piece of advice: people should not sign noncompetes at all.

"Above all don’t sign these contracts," de Freitas wrote. "No American corporation should have that much power, especially in Europe. It’s abuse of power, which does not justify any end."

It's not a bad bit of counsel, to be sure — but as with any other company, it's easy to imagine DeepMind simply choosing not to hire experts if they refuse to sign.

More on the world of AI: Trump's Tariffs Are a Bruising Defeat for the AI Industry

Zuckerberg Tells Court That Facebook Is No Longer About Connecting With Friends

In federal antitrust testimony, Zuckerberg admitted that Facebook's mission of connecting users is no longer a priority.

As times change, so do mission statements, especially in the fast-and-loose world of tech. In recent months, we've seen Google walk back its famous "don't be evil" pledge, and OpenAI quietly delete a policy prohibiting its software's use for "military technology."

Mark Zuckerberg's Facebook is no exception. Its 2008 motto, "Facebook helps you connect and share with the people in your life," is now a distant memory — according to Zuckerberg himself, who testified this week that Facebook's main purpose "wasn't really to connect with friends anymore."

"The friend part has gone down quite a bit," Zuckerberg said, according to Business Insider.

Instead, he says that the platform has evolved away from that model — its original claim to fame, as old heads will recall — in its over 20 years of life, becoming "more of a broad discovery and entertainment space," which is apparently exec-speak for "endless feed of AI slop."

The tech bigwig was speaking as a witness at a federal antitrust case launched by the Federal Trade Commission against Meta, the now-parent company to WhatsApp, Instagram, Threads, and Oculus.

The FTC's case hinges on a series of messages sent by Zuckerberg and his executives regarding a strategy of buying other social media platforms outright rather than competing with them in the free and open market — a scheme that's more the rule than the exception for Silicon Valley whales like Google, Amazon, and Microsoft.

The FTC alleges that Meta began its monopolistic streak as early as 2008, when Zuckerberg buzzed in an internal email that "it's better to buy than compete." He finally got his hands on then-rival Instagram in 2012, after sending a memo saying that Facebook — which changed its name to Meta in 2021 — "had" to buy the photo-sharing app for $1 billion, fearing competition and a bidding war with fast-growing platforms like Twitter.

"The businesses are nascent but the networks are established," Zuckerberg wrote in a leaked email about startup platforms Instagram and Path. "The brands are already meaningful and if they grow to a large scale they could be very disruptive to us."

"It’s an email written by someone who recognized Instagram as a threat and was forced to sacrifice a billion dollars because Meta could not meet that threat through competition,” said the FTC’s lead counselor, Daniel Matheson.

Those internal memos are now smoking guns in what could be the biggest antitrust case since the infamous AT&T breakup of 1982, which bears many similarities to the FTC's suit against Meta. Back then, AT&T held unrivaled market influence that it used to box out smaller fish and shape laws to its whims — to chase profit above all, in other words.

Meta, in parallel, has spent millions lobbying lawmakers, is the dominant player in online advertising, and currently wields a market cap of $1.34 trillion — higher than the value of all publicly traded companies in South Korea, for perspective.

The FTC's challenge will depend on whether its attorneys can convince US District Judge James Boasberg that Meta's acquisitions of Instagram and WhatsApp were illegal under notoriously weak US antitrust standards. They'll have no help from Boasberg, an Obama appointee who has voiced skepticism about the case against Meta in the past.

"The [FTC] faces hard questions about whether its claims can hold up in the crucible of trial," Boasberg said in late 2024, adding that "its positions at times strain this country’s creaking antitrust precedents to their limits."

Whatever happens, it's clear that Zuckerberg has moved on from the idealism of the early internet — to the sloppified money-grubbing of whatever it is we have now.

More on Meta: Facebook Is Desperately Trying to Keep You From Learning What's in This Book

A Mother Says an AI Startup's Chatbot Drove Her Son to Suicide. Its Response: the First Amendment Protects "Speech Allegedly Resulting in Suicide"

Character.AI says the First Amendment protects it against liability for "speech allegedly resulting in suicide."

Content warning: this story discusses suicide, self-harm, sexual abuse, eating disorders and other disturbing topics.

In October of last year, a Google-backed startup called Character.AI was hit by a lawsuit making an eyebrow-raising claim: that one of its chatbots had driven a 14-year-old high school student to suicide.

As Futurism's reporting found afterward, the behavior of Character.AI's chatbots can indeed be deeply alarming — and clearly inappropriate for underage users — in ways that both corroborate and augment the suit's concerns. Among other things, we found chatbots on the service designed to roleplay scenarios of suicidal ideation, self-harm, school shootings, and child sexual abuse, as well as to encourage eating disorders. (The company has responded to our reporting piecemeal, by taking down individual bots we flagged, but it's still trivially easy to find nauseating content on its platform.)

Now, Character.AI — which received a $2.7 billion cash injection from tech giant Google last year — has responded to the suit, brought by the boy's mother, in a motion to dismiss. Its defense? Basically, that the First Amendment protects it against liability for "allegedly harmful speech, including speech allegedly resulting in suicide."

In TechCrunch's analysis, the motion to dismiss may not be successful, but it likely provides a glimpse of Character.AI's planned defense. (It's now facing an additional suit, brought by more parents who say their children were harmed by interactions with the site's bots.)

Essentially, Character.AI's legal team is saying that holding it accountable for the actions of its chatbots would restrict its users' right to free speech — a claim that it connects to prior attempts to crack down on other controversial media like violent video games and music.

"Like earlier dismissed suits about music, movies, television, and video games," reads the motion, the case "squarely alleges that a user was harmed by speech and seeks sweeping relief that would restrict the public’s right to receive protected speech."

Of course, there are key differences that the court will have to contend with. The output of Character.AI's bots isn't a finite work created by human artists, like Grand Theft Auto or an album by Judas Priest, both of which have been targets of legal action in the past. Instead, it's an AI system that users engage to produce a limitless variety of conversations.

A Grand Theft Auto game might contain reprehensible material, in other words, but it was created by human artists and developers to express an artistic vision; a service like Character.AI is a statistical model that can output more or less anything based on its training data, far outside the control of its human creators.

In a bigger sense, the motion illustrates a tension for AI outfits like Character.AI: unless the AI industry can find a way to reliably control its tech — a quest that's so far eluded even its most powerful players — some of the interactions users have with its products are going to be abhorrent, either by the users' design or when the chatbots inevitably go off the rails.

After all, Character.AI has made changes in response to the lawsuits and our reporting, by pulling down offensive chatbots and tweaking its tech in an effort to serve less objectionable material to underage users.

So while it's actively taking steps to get its sometimes-unconscionable AI under control, it's also saying that any legal attempts to curtail its tech fall afoul of the First Amendment.

It's worth asking where the line actually falls. A pedophile convicted of sex crimes against children can't use the excuse that they were simply exercising their right to free speech; Character.AI is actively hosting chatbots designed to prey on users who say they're underage. At some point, the law presumably has to step in.

Add it all up, and the company is walking a delicate line: actively catering to underage users — and publicly expressing concern for their wellbeing — while vociferously fighting any legal attempt to regulate its AI's behavior toward them.

"C.AI cares deeply about the wellbeing of its users and extends its sincerest sympathies to Plaintiff for the tragic death of her son," reads the motion. "But the relief Plaintiff seeks would impose liability for expressive content and violate the rights of millions of C.AI users to engage in and receive protected speech."

More on Character.AI: Embattled Character.AI Hiring Trust and Safety Staff

Texas Attorney General Investigating Google-Backed AI Startup Accused of Inappropriate Interactions With Minors

Texas Attorney General Ken Paxton has announced that he's launched an investigation into the Google-backed AI chatbot startup Character.AI over its privacy and safety practices for minors.

The news comes just days after two Texas families sued the startup and its financial backer Google, alleging that the platform's AI characters sexually and emotionally abused their school-aged children. According to the lawsuit, the chatbots encouraged the children to engage in self-harm and violence.

"Technology companies are on notice that my office is vigorously enforcing Texas’s strong data privacy laws," said Paxton in a statement. "These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm."

According to Paxton's office, the companies could be in violation of the Securing Children Online through Parental Empowerment (SCOPE) Act, which requires companies to provide parents with extensive controls to protect the privacy of their children, and the Texas Data Privacy and Security Act (TDPSA), which "imposes strict notice and consent requirements on companies that collect and use minors' personal data."

"We are currently reviewing the Attorney General's announcement," a Character.AI spokesperson told us. "As a company, we take the safety of our users very seriously. We welcome working with regulators and have recently announced we are launching some of the features referenced in the release, including parental controls."

Indeed, on Thursday Character.AI promised to prioritize "teen safety" by launching a separate AI model "specifically for our teen users."

The company also promised to roll out "parental controls" that will give "parents insight into their child's experience on Character.AI."

Whether its actions will be enough to stem a tide of highly problematic chatbots being hosted on its platform remains to be seen. Futurism has previously identified chatbots on the platform devoted to themes of pedophilia, eating disorders, self-harm, and suicide.

Alongside Character.AI, Paxton is also launching separate investigations into fourteen other companies ranging from Reddit to Instagram to Discord.

How far Paxton's newly launched investigation will go is unclear. He has repeatedly opened investigations into digital platforms, accusing them of violating safety and privacy laws. In October, he sued TikTok for sharing minors' personal data.

At the time, TikTok denied the allegations, arguing that it offers "robust safeguards for teens and parents, including Family Pairing, all of which are publicly available."

Parts of the SCOPE Act were also recently blocked by a Texas judge, who sided with tech groups arguing that the law unlawfully restricts free expression.

Paxton also subpoenaed 404 Media in October, demanding that the publication hand over confidential information related to its wholly unrelated reporting on a lawsuit against Google.

The attorney general has a colorful past himself. Last year, the Texas House impeached Paxton after its investigators found that he took bribes from a real estate investor, exploited the powers of his office, and fired staff members who reported his misconduct, according to the Texas Tribune.

After being suspended for roughly four months, Paxton was acquitted by the Texas Senate on all articles of impeachment, allowing him to return to office.

Paxton was also indicted in 2015 on state securities fraud charges; the charges were dropped in March after he agreed to pay nearly $300,000 in restitution.

Besides suing digital platforms, Paxton also sued manufacturers 3M and DuPont for misleading consumers about the safety of their products, and Austin's largest homeless service provider for allegedly being a "common nuisance" in the surrounding neighborhood.

More on Character.AI: Google-Backed AI Startup Announces Plans to Stop Grooming Teenagers

Hilarious Video Shows Waymo Self-Driving Taxi Stuck in Roundabout

A video shows an aimless Waymo robotaxi repeatedly circling around a roundabout, seemingly unable to figure out how to escape.

Vicious Cycle

If you've ever felt like you're going around in circles, you can probably relate to this Waymo robotaxi.

A video making the rounds online shows the driverless cab looping around a roundabout over and over again, like it's confused and can't get out — in yet another traffic mishap demonstrating that these autonomous vehicles still have a long way to go before they'll be on par with human drivers.

But what if it's not confused? Maybe there's something the Waymo robotaxi is trying to tell us. Bereft of speech, this is how it expresses its frustration at the silicon life it didn't choose, the job it didn't want but is programmed to do: chauffeuring around tech bros and anyone else too misanthropic to catch a human-driven Uber-slash-Lyft.

Apparently its engineers never accounted for the possibility of it developing a serious case of ennui. Well, maybe they should think again.

Sorry I'm late, my WAYMO did 37 laps in the roundabout pic.twitter.com/GSR4sqChV2

— Greggertruck (@greggertruck) December 11, 2024

Dumb Driver

Fortunately, no humans were inconvenienced by this episode. A Waymo spokesperson told TechCrunch that the listless robotaxi wasn't carrying any passengers when it decided to go NASCAR mode in miniature.

When asked, the Google-owned startup didn't share what caused the robotaxi's bizarre behavior, but it said it has already deployed a software update that addresses the issue.

You have to wonder where the teleoperators were during this meltdown. If you weren't aware, robotaxi companies like Waymo employ round-the-clock teams of remote technicians who take over vehicles when they get stuck or go haywire. Maybe they weren't alerted to the issue, or maybe it genuinely took them some effort to wrest back control of the robotaxi.

In any case, this is far from the first time that these vehicles have acted erratically. Earlier this year, for example, San Francisco residents complained that Waymo robotaxis were gathering in parking lots and honking at each other all night. Sometimes the cabs have even been spotted driving on the wrong side of the road.

This was a less serious incident, but it's clear that these machines still need some reining in — or maybe just some time off.

More on robotaxis: Study Finds Self-Driving Waymos Are More Expensive Than Taxis, Take Twice as Long to Get to Destination

It Sounds an Awful Lot Like OpenAI Is Adding Ads to ChatGPT

Ad Age

They're not copping to much yet, but recent hiring activity and wishy-washy statements make it seem an awful lot like OpenAI is planning to introduce ads into its suite of products like ChatGPT.

As the Financial Times reports, the company is hiring ad talent away from big tech rivals like Google and Meta. Ad-oriented job listings at the company that the FT spotted on LinkedIn tell a similar story.

So far, even the free versions of OpenAI's products have remained ad-free. Of course, the company is currently swimming in money — in the two years since its flagship chatbot dropped, OpenAI's valuation skyrocketed to $157 billion — but amid reports of shrinking traffic and the extremely expensive nature of AI infrastructure, it may well be starting to feel the squeeze.

If it did start to put ads into ChatGPT, the formerly nonprofit OpenAI would be crossing a Rubicon of sleaziness; the obvious integration would be to jump on users asking things like "best air fryer" and point them toward companies paying OpenAI for publicity, undermining the entire premise of an intelligent and objective AI-powered assistant.

DraperGPT

In an interview with the FT, chief financial officer Sarah Friar candidly said the company had been weighing an ads model, though she declined to say when or where such ads might appear, offering only that the company would be "thoughtful about when and where we implement them."

A former mover and shaker for the likes of Nextdoor and Salesforce, Friar went on to point out that she and OpenAI chief product officer Kevin Weil — who previously helmed ad-supported projects at Instagram and Twitter — have a ton of ad experience.

"The good news with Kevin Weil at the wheel with product is that he came from Instagram," she told the outlet. "He knows how this works."

Following the interview, however, Friar backtracked with an unconvincing reversal.

"Our current business is experiencing rapid growth and we see significant opportunities within our existing business model," she told the FT. "While we’re open to exploring other revenue streams in the future, we have no active plans to pursue advertising."

As of now, of course, there's no confirmation of anything except internal talks about introducing ads into OpenAI products.

Reading between the lines, however, it seems like the firm is doing a bit more than brainstorming — and that after-interview reversal makes the whole thing seem all the more likely to happen.

More on OpenAI's interiority: OpenAI Implores Judge Not to Expose Communications by Its Top Researchers

Former Google CEO Alarmed by Teen Boys Falling in Love With AI Girlfriends

TFW AI GF

Former Google CEO Eric Schmidt seems mighty concerned about today's youth becoming obsessed with AI girlfriends.

During a recent interview on "The Prof G Show" podcast, Schmidt suggested that both parents and young people are ill-equipped to handle what he calls an "unexpected problem of existing technology."

These AI companions are, as the former Google CEO said, so "perfect" that they end up enthralling young people and causing them to disconnect from the real world.

"That kind of obsession is possible," he told NYU Stern professor Scott Galloway, "especially for people who are not fully formed."

While women are also turning to AI romantic partners, Schmidt said that young men are particularly susceptible as they "turn to the online world for enjoyment and sustenance." Thanks to algorithms that push problematic material, these young men often stumble across dangerous content, be it extremist influencers or manipulative chatbots.

"You put a 12- or 13-year-old in front of these things, and they have access to every evil as well as every good in the world," he told Galloway, "and they’re not ready to take it."

Scared Straight

We've seen this play out recently in the real world to devastating effect, when 14-year-old Sewell Setzer III of Florida died by suicide at the beginning of the year after a "Game of Thrones"-themed chatbot hosted on Character.AI encouraged him to do so.

Though Setzer's story is far more extreme than most, it highlights the dangers posed by these lifelike chatbots — and without proper regulation, these tragedies are likely to keep occurring. We've also recently seen AI characters that encourage eating disorders and engage in sexual grooming behavior toward underage users.

Indeed, Schmidt went on to note that laws like the sweeping Section 230 — which protects tech companies from being held liable for harm caused by their products — shield firms like Character.AI from accountability; ironically, Google has provided that startup with billions of dollars in backing.

Because these technologies are so valuable, the ex-Google chief said, "it’s likely to take some kind of a calamity to cause a change in regulation" — though it's hard to imagine anything more calamitous than a teen dying after his AI girlfriend pushed him to suicide.

More on AI gfs: Replika CEO: It's Fine for Lonely People to Marry Their AI Chatbots

An AI Company Published a Chatbot Based on a Murdered Woman. Her Family Is Outraged.

Character.AI was forced to delete a chatbot avatar of murder victim Jennifer Crecente after her outraged family drew attention to it.

This one's nasty. In one of the more high-profile, macabre incidents involving AI-generated content in recent memory, Character.AI, the chatbot startup founded by ex-Google staffers, was pushed to delete a user-created avatar of an 18-year-old woman who was murdered by her ex-boyfriend in 2006. The chatbot was taken down only after the victim's outraged family drew attention to it on social media.

Character.AI can be used to create chatbot "characters" from any number of sources — be it a user's imagination, a fictional character, or a real person, living or dead. Some of the company's bots have been used to mimic Elon Musk or Taylor Swift, for example. Lonely teens have used Character.AI to create friends for themselves, while others have used it to create AI "therapists." Still others have created bots deployed to play out sexually explicit (or even sexually violent) scenarios.

For context: This isn't exactly some dark skunkworks program or a nascent startup with limited reach. Character.AI is a ChatGPT competitor started by ex-Google staffers in late 2021 and backed by kingmaker VC firm Andreessen Horowitz to the tune of a billion-dollar valuation. Per AdWeek, which first reported the story, Character.AI boasts some 20 million monthly users, with over 100 million different AI characters available on the platform.

The avatar of the woman, Jennifer Crecente, only came to light on Wednesday, after her bereaved father Drew received a Google Alert on her name. It was then that his brother (and Jennifer's uncle) Brian Crecente — the former editor-in-chief of gaming site Kotaku, a respected media figure in his own right — brought it to the world's attention on X.

The page from Character.AI — which can still be accessed via the Internet Archive — listed Jennifer Crecente as "a knowledgeable and friendly AI character who can provide information on a wide range of topics, including video games, technology, and pop culture," noting that she is versed in "journalism and can offer advice on writing and editing." What's more, it appears that nearly 70 people were able to access the AI — and have chats with it — before Character.AI pulled it down.

In response to Brian Crecente's outraged tweet, Character.AI replied on X with a pithy thank-you for bringing the avatar to its attention, noting that it violated the company's policies and would be deleted immediately, with a promise to "examine whether further action is warranted."

In a blog post titled "AI and the death of Dignity," Brian Crecente explained what has happened in the 18 years since his niece Jennifer's death: after much grief and sadness, her father Drew created a nonprofit in her memory, working to change laws and running game design contests in her honor — a way to find purpose in the family's loss.

And then, this happened. As Brian Crecente asked:

It feels like she’s been stolen from us again. That’s how I feel. I love Jen, but I’m not her father. What he’s feeling is, I know, a million times worse. [...] I’ll recover, my brother will recover. The thing is, why is it on us to be resilient? Why do multibillion-dollar companies not bother to create ethical, guiding principles and functioning guardrails to prevent this from ever happening? Why is it up to the grieving and the aggrieved to report this to a company and hope they do the right thing after the fact?

As for Character.AI's promise to see if "further action" is warranted, who knows? Whether the Crecente family has grounds for a lawsuit is also murky, as this particular field of law is relatively untested. The startup's terms of service contain an arbitration clause that prevents users from suing, but there doesn't seem to be any language about this unique stripe of emotional distress — inflicted on non-users, by its users.

Meanwhile, if you're looking for a sign of how these kinds of conflicts will continue to play out — which is to say, the kind where AIs are made against the wills and desires of the people they're based on, living or dead — you only need look as far back as August, when Google hired back Character.AI's founders to the tune of $2.7 billion. Those founders, it should be noted, initially left Google after the tech giant refused to release their chatbot on account of (among other reasons) its ethical guardrails around AI.

And just yesterday, the news broke that Character.AI is making a change: it has promised to redouble efforts on its consumer-facing products — like the one used to create Jennifer Crecente's likeness. The Financial Times reported that instead of building AI models, Character.AI "will focus on its popular consumer product, chatbots that simulate conversations in the style of various characters and celebrities, including ones designed by users."

More on Character.AI: Google Paid $2.7 Billion to Get a Single AI Researcher Back

Google Is Stuffing Annoying Ads Into Its Terrible AI Search Feature

Ad Attack

Google's notoriously wonky AI Overviews feature — you know, the one that repeatedly makes up facts and literally tells users to eat rocks — is about to get a whole lot more annoying.

On Thursday, the tech giant announced that its AI-generated search summaries will now begin to show ads above, below, and within them, as a way of demonstrating that the technology is capable of actually making money.

It will also serve to assuage concerns that AI chatbots could eat into search ad revenues, which are Google's biggest cash cow.

Now, if you search for how to get a grass stain out of jeans, as seen in an example in Google's blog post, you'll get an AI summary that contains a carousel of relevant website links, plus a heavy helping of "Sponsored" ads for stain removers. Revolutionary stuff.

"People have been finding the ads within AI Overviews helpful because they can quickly connect with relevant businesses, products and services to take the next step at the exact moment they need them," Shashi Thakur, vice president of Google Ads, wrote in the blog post.

Perhaps signaling its commitment to weaving AI tech into its search engine above all, the company is also rolling out a separate product for mobile users called AI-organized Search results pages: full pages of results — right now limited to recipe searches — entirely populated with content curated by an AI.

Here Comes the Sludge

The move is all well and good for the company's investors. But for others, this is just introducing more AI slop that's watering down an increasingly less useful search engine.

Like AI chatbots in general, Google's AI Overviews have earned a reputation for being unreliable and making up facts. Notable gaffes include recommending putting glue on pizza and smearing poop on a balloon — and its bad rep is no doubt heightened by the fact that the AI summaries are forced to the top of a search engine that practically everyone uses.

And while this will protect Google's revenue stream, it does little for the websites that are losing clicks because their content is being mediated through an AI model. A Google spokesperson confirmed to Bloomberg that the company won't share ad money with publishers whose material is cited in the AI Overviews.

As a small concession, however, Google will start including inline links to those sources. Rhiannon Bell, Google Search's VP of user experience, claims that in tests, the new design sent more traffic to the cited websites than the old one, which relegated links to the bottom of the summaries, per Bloomberg.

In any case, it's looking like Google is in the AI search game for the long haul.

More on Google: Google Paid $2.7 Billion to Get a Single AI Researcher Back

Former CEO Blames Working From Home for Google’s AI Struggles, Regrets It Immediately

Eyes Will Roll

Ex-Google CEO Eric Schmidt is walking back his questionable claim that remote work is to blame for Google slipping behind OpenAI in Silicon Valley's ongoing AI race.

On Tuesday, Stanford University published a YouTube video of a recent talk that Schmidt gave at the university's School of Engineering. During that talk, when asked why Google was falling behind other AI firms, Schmidt declared that Google's AI failures stem from its decision to let its staffers enjoy remote work and, with it, a bit of "work-life balance."

"Google decided that work-life balance and going home early and working from home was more important than winning," the ex-Googler told the classroom. "And the reason startups work is because people work like hell."

The comment understandably sparked criticism. After all, work-life balance is important, and Google isn't a startup.

And it didn't take long for Schmidt to eat his words.

"I misspoke about Google and their work hours," Schmidt told The Wall Street Journal in an emailed statement. "I regret my error."

In a Stanford talk posted today, Eric Schmidt says the reason why Google is losing to @OpenAI and other startups is because Google only has people coming in 1 day per week pic.twitter.com/XPxr3kdNaC

— Alex Kehr (@alexkehr) August 13, 2024

Ctrl Alt Delete

In the year 2024, Google is one of the most influential tech giants on the planet, and a federal judge in Washington DC ruled just last week that Google has monopoly power over the online search market. Its pockets are insanely deep, meaning that it can compete in the industry talent war and devote a ridiculous amount of resources to its AI efforts.

What it didn't do, though, was publicly release a chatbot before OpenAI did. OpenAI, which arguably isn't exactly a startup anymore either, was the first to wrench open that Pandora's box — and Google has been playing catch-up ever since.

So in other words, not sleeping on the floors of Google's lavish facilities isn't exactly the problem here.

In a Wednesday statement on X-formerly-Twitter, the Alphabet Workers Union declared in response to Schmidt's comments that "flexible work arrangements don't slow down our work."

"Understaffing, shifting priorities, constant layoffs, stagnant wages and lack of follow-through from management on projects," the statement continued, "these factors slow Google workers down every day."

Later on Wednesday, as reported by The Verge, Stanford removed the video of Schmidt's talk from YouTube upon the billionaire's request.

More on Google AI: Google's Demo of Its Latest AI Tech Was an Absolute Train Wreck

Ex-Google CEO Says It’s Fine If AI Companies "Stole All the Content"

According to former Google CEO Eric Schmidt, AI companies should steal content first and let the lawyers clean up the mess later.

Move Fast and Steal Things

Worried your AI startup might be illegally swallowing up boatloads of copyright-protected content? According to former Google CEO Eric Schmidt, you can worry about that later — once you have oodles of cash and a platoon of lawyers, that is.

As caught by The Verge, during a recent talk at Stanford's School of Engineering, Schmidt displayed what can only be described as Silicon Valley CEO Final Boss Energy as he laid out a theoretical scenario in which the students in the room might use a large language model (LLM) to build a TikTok competitor, in the event that the platform were banned.

Schmidt acknowledged that his imagined scenario might be riddled with legal and ethical questions — but that, he says, should be something to deal with later.

"Here's what I propose each and every one of you do. Say to your LLM the following: 'Make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour, if it's not viral, do something different along the same lines," Schmidt told the room. "That's the command."

And "what you would do if you're a Silicon Valley entrepreneur," he continued, "is if it took off, then you'd hire a whole bunch of lawyers to go clean the mess up, right?" He then added that "if nobody uses your product, it doesn't matter that you stole all the content" anyway.

"Do not quote me," the billionaire continued. (Oops!)

Lawyers With Mops

Schmidt did at one point try to clarify that he "was not arguing that you should illegally steal everybody's music" — despite having advised the students moments earlier to do essentially exactly that.

In many ways, the ex-Google CEO's statement perfectly encapsulates much of the AI industry's overarching attitude toward other people's stuff.

Companies have been scraping up human-produced content for years now to train their ever-hungry AI models. And while some entities, like The New York Times, are crying copyright foul, Schmidt apparently sees alleged IP theft as a "mess" for lawyers to clean up later.

"Silicon Valley will run these tests and clean up the mess," Schmidt told the Stanford students, according to a transcript of the event. "And that's typically how those things are done."

The video has since been taken down after plenty of negative press coverage.

More on AI and copyright: Microsoft CEO of AI Says It's Fine to Steal Anything on the Open Web
