Microsoft’s AI Secretly Copying All Your Private Messages

Microsoft is relaunching its AI-powered Recall feature, which records everything you do on your PC by constantly taking screenshots.

Microsoft is finally relaunching "Recall," its AI-powered feature that records almost everything you do on your computer by constantly taking screenshots in the background.

The tool is rolling out exclusively to Copilot+ PCs, a line of Windows 11 computers built with specific hardware optimized for AI tasks. And if it sounds like a privacy nightmare, your suspicions are not unfounded. 

Recall originally launched last May, but Microsoft quickly withdrew it after widespread backlash, in part because security researchers found that its screenshots were stored in an unencrypted database, leaving a sitting duck for hackers who could see potentially anything you'd done on your computer if they broke in. Since that disastrous debut, the feature has been tested out of the spotlight through Microsoft's Insider program.

Researchers kept flagging huge risks even as the feature was being revamped. In December, an investigation by Tom's Hardware found that Recall frequently captured sensitive information in its screenshots, including credit card numbers and Social Security numbers — even though its "filter sensitive information" setting was supposed to prevent that from happening.

For this latest release, Microsoft has tinkered with a few things to make Recall safer. For one, the screenshot database, though still easily accessible, is now encrypted. You now have to opt in to having your screenshots saved, whereas before you had to opt out. You also have the ability to pause Recall on demand.

These are good updates, but they won't change the fact that Recall is an inherently invasive tool. And as Ars Technica notes, it also poses a huge risk not just to the users with Recall on their machines, but to anyone they interact with, whose messages will be screenshotted and processed by the AI — without the person on the other end ever knowing it.

"That would indiscriminately hoover up all kinds of [a user's] sensitive material, including photos, passwords, medical conditions, and encrypted videos and messages," Ars wrote.

This is perhaps its most worrying consequence — how it can turn any PC into a device that surveils others, forcing you to be even more wary about what you send online, even to friends.

"From a technical perspective, all these kind of things are very impressive," warns security researcher Kevin Beaumont in a blog post. "From a privacy perspective, there are landmines everywhere."

In his testing, Beaumont found that Recall's filter for sensitive information was still unreliable. And that encrypted screenshot database? It's protected only by a simple four-digit PIN. But the most disturbing finding was how good Recall was at indexing everything it stored.

"I sent a private, self deleting message to somebody with a photo of a famous friend which had never been made public," Beaumont wrote. "Recall captured it, and indexed the photo of the person by name in the database. Had the other person receiving had Recall enabled, the image would have been indexed under that person's name, and been exportable later via the screenshot despite it being a self deleting message."

Beaumont's advice is simple, but a sobering indictment of the state of affairs.

"I would recommend that if you're talking to somebody about something sensitive who is using a Windows PC, that in the future you check if they have Recall enabled first."

More on Microsoft: Microsoft's Huge Plans for Mass AI Data Centers Now Rapidly Falling Apart

Google Is Allegedly Paying Top AI Researchers to Just Sit Around and Not Work for the Competition

Google has one weird trick to hoard its artificial intelligence talent from poachers — paying them to not work at all.

Google apparently has one weird trick to hoard its talent from poachers: paying them to not work.

As Business Insider reports, some United Kingdom-based employees at Google's DeepMind AI lab are paid to do nothing for six months — or, in fewer cases, up to a year — after they quit their jobs.

Known as "garden leave," this type of cushy clause is the luckier stepsister to so-called "noncompete" agreements, which prohibit employees and contractors from working with a competitor for a designated period of time after they depart an employer. Ostensibly meant to prevent aggressive poaching, these sorts of clauses also bar outgoing employees from working with competitors.

Often deployed in tandem with noncompetes, garden leave agreements are more prevalent in the UK than across the pond in the United States, where according to the Horton Group law firm, such clauses are generally reserved for "highly-paid executives."

Though it seems like a pretty good gig — or lack thereof — if you can get it, employees at DeepMind's London HQ told BI that garden leave and noncompetes stymie their ability to lock down meaningful work after they leave the lab.

While noncompetes are increasingly a nonstarter in the United States amid growing legislative pushes to make them unenforceable, they're perfectly legal and quite commonplace in the UK, so long as a company explicitly states the business interests it's protecting.

Like DeepMind's generous garden leave periods, noncompete clauses typically last between six months and a year. But instead of being paid to garden, ex-employees simply can't work for competitors for that length of time without risking backlash from Google's army of lawyers.

Because noncompetes are often signed alongside non-disclosure agreements (NDAs), we don't know exactly what DeepMind considers a "competitor" — but whatever its contracts stipulate, it's clearly bothersome enough to get its former staffers to speak out.

"Who wants to sign you for starting in a year?" one ex-DeepMind-er told BI. "That's forever in AI."

In an X post from the end of March, Nando de Freitas, a London-based former DeepMind director who now works at Microsoft, offered a brash piece of advice: people should not sign noncompetes at all.

"Above all don’t sign these contracts," de Freitas wrote. "No American corporation should have that much power, especially in Europe. It’s abuse of power, which does not justify any end."

It's not a bad bit of counsel, to be sure — but as with any other company, it's easy to imagine DeepMind simply choosing not to hire experts if they refuse to sign.

More on the world of AI: Trump's Tariffs Are a Bruising Defeat for the AI Industry

ChatGPT Is Absolutely Butchering Reporting From Its “News Partners”

A review found that OpenAI's ChatGPT search is routinely mangling reporting from news outlets, including its own "news partners."

A review by Columbia's Tow Center for Digital Journalism found that OpenAI's ChatGPT search — a newer version of OpenAI's flagship chatbot designed to answer search queries with paraphrased web content and links to the relevant sources — is routinely mangling reporting from news outlets, including OpenAI "news partners" that have signed content licensing deals with the AI industry leader.

According to the Columbia Journalism Review, the Tow Center's researchers took "two hundred quotes from twenty publications and asked ChatGPT to identify the sources of each quote." The chatbot's accuracy was mixed, with some responses providing entirely accurate attributions, others providing entirely incorrect attribution details, and others offering a blend of fact and fiction.

ChatGPT's search function operates via web crawlers, which gather information from around the web that the chatbot then bottles into AI-paraphrased outputs. Some publications, for example The New York Times — which last year sued OpenAI and Microsoft for copyright violations — have blocked OpenAI's web crawlers from rooting around their websites entirely by way of their robots.txt files. Others, including OpenAI news partners that have signed licensing deals to give the AI company access to their valuable troves of journalistic material in exchange for cash, allow OpenAI's web crawlers to dig through their sites.
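For context on how that blocking works: robots.txt is just a plain-text file at a site's root that tells crawlers which paths they may index. The sketch below shows the kind of entries a publisher might use to shut out OpenAI's crawlers, using the GPTBot and OAI-SearchBot user agents that OpenAI documents publicly; any real publisher's file will differ.

# Illustrative only: disallow OpenAI's crawlers from the entire site
User-agent: GPTBot
User-agent: OAI-SearchBot
Disallow: /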

Per the CJR, the Tow Center found that in cases where ChatGPT couldn't locate the correct source for a quote due to robots.txt restrictions, it would frequently resort to fabricating source material — as opposed to informing the chatbot user that it couldn't find the quote or that it was blocked from retrieving it. More than a third of all ChatGPT replies returned during the review reportedly contained this type of error.

But no one was spared — not even publications that allow ChatGPT's web crawlers to sift through their sites. According to the review, ChatGPT frequently returned either fully incorrect or partially incorrect attributions for stories penned by journalists at OpenAI-partnered institutions. The same was true for publications not subject to OpenAI licensing deals, but that don't block the AI's crawlers.

It's a terrible look for the AI-powered search feature, which OpenAI billed in a blog post last month as a tool that provides "fast, timely answers with links to relevant web sources," and has received praise from prominent media leaders for its purported potential to benefit journalists and news consumers.

"As AI reshapes the media landscape, Axel Springer's partnership with OpenAI opens up tremendous opportunities for innovative advancements," Mathias Sanchez, an executive at the OpenAI-partnered publisher Axel Springer, said in an October statement. "Together, we're driving new business models that ensure journalism remains both trustworthy and profitable." (According to the Tow Center's review, ChatGPT search frequently returned entirely inaccurate answers when asked to find direct quotes from the Axel Springer-owned publication Politico.)

According to the CJR, the investigators also found that ChatGPT sometimes returned plagiarized news content in cases where the bot's crawlers were blocked by a publisher. We reported on the same phenomenon back in August, when we found that ChatGPT was frequently citing plagiarized versions of original NYT reporting published by DNyuz, a notorious Armenian content mill.

The review further showed that ChatGPT search's ability to provide correct attributions for the same query is wildly unpredictable, with the bot often returning alternately inaccurate and accurate sourcing when given the same prompt multiple times.

A spokesperson for OpenAI criticized the Tow Center's "atypical" testing method, adding that "we support publishers and creators by helping 250M weekly ChatGPT users discover quality content through summaries, quotes, clear links, and attribution."

"We've collaborated with partners to improve in-line citation accuracy and respect publisher preferences, including enabling how they appear in search by managing OAI-SearchBot in their robots.txt," the spokesperson added. "We'll keep enhancing search results."

The media industry is still largely powered by click-based ad revenue, meaning that the Tow Center's findings could be concerning on a business level, too. If ChatGPT continues to get things wrong, are licensing deals and subscriptions lucrative enough to make up for the loss in traffic? And zooming out, there's the question of what machine-mangled inaccuracy does to an already complicated, widely distrusted news ecosystem: if generative AI becomes internet users' primary way of finding and metabolizing news, can the public rely on web-surfing tools like ChatGPT search not to muddy the information landscape at large?

That remains to be seen. But in the meantime, a word to the wise: if you're using ChatGPT search, you might want to triple-check that you know where its information is coming from.

More on ChatGPT attributions: Amid New York Times Lawsuit, ChatGPT Is Citing Plagiarized Versions of NYT Articles on an Armenian Content Mill

After Years of Chasing Money, OpenAI Reportedly Giving Up on Being a “Nonprofit”

The Financial Times reports that OpenAI is looking to shed its nonprofit status once and for all after years of operating under an unusual "capped-profit" structure.

ClosedAI

ChatGPT maker OpenAI was founded in 2015 as a nonprofit, only to change its mind four years later, announcing that it had become a "capped-profit" company.

Billions of dollars worth of investment rounds later, the Financial Times is now reporting that the company is finally looking to shed its nonprofit status once and for all.

The company is also reportedly in talks to raise new funds at a valuation north of $100 billion, which would make it one of the most valuable Silicon Valley firms ever.

OpenAI has since denied the reporting, arguing in a statement to the FT that "the nonprofit is core to our mission and will continue to exist."

"We remain focused on building AI that benefits everyone and as we’ve previously shared we’re working with our board to ensure that we’re best positioned to succeed in our mission," the statement reads.

No Cap

OpenAI cofounder and multi-hyphenate billionaire Elon Musk, who rage-quit the firm in 2018, has long accused it of betraying its nonprofit origins.

Last month, Musk even sued OpenAI, arguing that it had abandoned its mission to "benefit humanity" by signing a $10 billion deal with tech giant Microsoft (a previous and largely identical lawsuit filed by Musk was mysteriously abandoned in June).

"Either turning a nonprofit into a for-profit is legal and everyone should be doing it or it’s illegal and OpenAI is a house of cards," Musk tweeted last week.

Ironically, emails published by OpenAI at the time of Musk's first lawsuit showed that he had been the one pushing OpenAI to become a for-profit entity, suggesting he's simply sour about having walked away from a massively valuable AI venture years too early.

According to the FT's latest report, OpenAI has yet to make a final decision. One option is to remove existing caps on profits for investors, which would be a nail in the coffin for its nonprofit past.

None of this should be particularly surprising at this point, considering the Sam Altman-led entity has quickly turned into one of the most hyper-capitalist ventures in recent history.

Besides, its existing "capped profit" structure clearly hasn't stopped it from raising ungodly amounts of cash — and any public benefit to the project remains elusive.

More on OpenAI: Chef Admits His Smash Hit Pizza Was Invented by ChatGPT
