Google Admits Gemini AI Demo Was at Least Partially Faked

Google misrepresented the way its Gemini Pro can recognize a series of images and admitted to speeding up the footage.

Google has a lot to prove with its AI efforts — but it can't seem to stop tripping over its own feet.

Earlier this week, the tech giant announced Gemini, its most capable AI model to date, to much fanfare. In one of a series of videos, Google showed off the mid-range tier of the model, dubbed Gemini Pro, by demonstrating how it could recognize a series of illustrations of a duck, describing the changes a drawing went through at a conversational pace.

But there's one big problem, as Bloomberg columnist Parmy Olson points out: Google appears to have faked the whole thing.

In its own description of the video, Google admitted that "for the purposes of this demo, latency has been reduced, and Gemini outputs have been shortened for brevity." The video footage itself is also appended with the phrase "sequences shortened throughout."

In other words, Google misrepresented the speed at which Gemini Pro can recognize a series of images, indicating that we still don't know what the model is actually capable of.

In the video, Gemini wowed observers by using its multimodal thinking chops to recognize illustrations at what appears to be the drop of a hat. The video, as Olson suggests, also offered us "glimmers of the reasoning abilities that Google’s DeepMind AI lab have cultivated over the years."

That's indeed impressive, considering any form of reasoning has quickly become the next holy grail in the AI industry, causing intense interest in models like OpenAI's rumored Q*.

In reality, not only was the demo significantly sped up to make it seem more impressive, but Gemini Pro is likely still stuck with the same old capabilities that we've already seen many times before.

"I think these capabilities are not as novel as people think," Wharton professor Ethan Mollick tweeted, showing how ChatGPT was effortlessly able to identify the simple drawings of a duck in a series of screenshots.

Did Google actively try to deceive the public by speeding up the footage? In a statement to Bloomberg Opinion, a Google spokesperson said it was made by "using still image frames from the footage, and prompting via text."

In other words, Gemini was likely given plenty of time to analyze the images. And its output may have then been overlaid on video footage, giving the impression that it was much more capable than it really was.

"The video illustrates what the multimodal user experiences built with Gemini could look like," Oriol Vinyals, vice president of research and deep learning lead at Google’s DeepMind, wrote in a post on X.

Emphasis on "could." Perhaps Google should've opted to show the actual capabilities of its Gemini AI instead.

It's not even the first time Google has royally screwed up the launch of an AI model. Earlier this year, when the company announced its ChatGPT competitor, a demo infamously showed Bard making a blatantly false statement, claiming that NASA's James Webb Space Telescope took the first image of an exoplanet.

As such, Google's latest gaffe certainly doesn't bode well. The company came out swinging this week, claiming that an even more capable version of its latest model called Gemini Ultra was able to outsmart OpenAI's GPT-4 in a test of intelligence.

But from what we've seen so far, we're definitely going to wait and test it out for ourselves before we take the company's word for it.

More on Gemini: Google Shows Off "Gemini" AI, Says It Beats GPT-4


Ex-OpenAI Board Member Refuses to Say Why She Fired Sam Altman

The now-former OpenAI board member who was instrumental in the firing of Sam Altman has spoken — but she's still staying mum where it matters.

Mum's The Word

The now-former OpenAI board member who was instrumental in the initial firing of CEO Sam Altman has spoken — but she's still staying mum on why she pushed him out in the first place.

In an interview with the Wall Street Journal, 31-year-old Georgetown machine learning researcher and erstwhile OpenAI board member Helen Toner was fairly open with her responses about the logistics of the failed coup at the company, but terse when it came to the reasoning behind it.

"Our goal in firing Sam was to strengthen OpenAI and make it more able to achieve its mission," the Australian-born researcher said as her only explanation of the headline-grabbing chain of events.

As the New York Times reported in the midst of the Thanksgiving hubbub, Toner and Altman butted heads the month prior because she published a paper critical of the firm's safety protocols (or lack thereof) and laudatory of those undertaken by Anthropic, which was created by former OpenAI employees who left over similar concerns.

Altman reportedly confronted Toner about the paper because he believed, per emails viewed by the NYT, that "any amount of criticism from a board member carries a lot of weight."

After the tense exchange, the CEO brought his concerns about Toner's criticisms up with other board members, which ended up reinforcing those board members' own doubts about his leadership, the WSJ reports. Soon after, Altman himself was on the chopping block over vague allegations of dishonesty — although we still don't know what exactly he was supposedly being dishonest about.

Intimidating

As the company weathered its tumult amid a nearly full-scale revolt from staffers who said they'd leave and follow Altman to Microsoft if he wasn't reinstated, Toner and OpenAI cofounder and chief scientist Ilya Sutskever ended up resigning, the report explains, which paved the way for the CEO's return.

In her interview with the WSJ, however, the Georgetown researcher suggested that her resignation was forced by a company attorney.

"[The attorney] was trying to claim that it would be illegal for us not to resign immediately," Toner said, "because if the company fell apart we would be in breach of our fiduciary duties."

With the Aussie academic's exit from the board, along with that of Rand Corporation scientist Tasha McCauley, another of those who voted for Altman's ouster, there are now no women on OpenAI's governing body — but in this interview at least, Toner was all class.

"I think looking forward," she said, "is the best path from here."

More on OpenAI: Sam Altman Got So Many Texts After Being Fired It Broke His Phone


Nicki Minaj Fans Are Using AI to Create “Gag City”

Fans anxiously awaiting the release of Nicki Minaj's latest album have occupied themselves with AI to create their own Minajian utopia: "Gag City."

Gag City

Fans are anxiously awaiting the drop of Onika "Nicki Minaj" Maraj-Petty's "Pink Friday 2" — and in the meantime, they've occupied themselves with artificial intelligence image generators to create visions of a Minajian utopia known as "Gag City."

The entire "Gag City" gambit began with zealous (and perhaps overzealous) fans tweeting at the Queens-born diva to tell her how excited — or "gagged," to use the drag scene etymology that spread among Maraj-Petty's LGBTQ and queer-friendly fanbase — they are for her first album in more than five years.

Replete with dispensaries, burger joints, and a high-rise shopping mall, Gag City has everything a Barb (as fans call themselves) could ask for.

Gag City, the fan-created AI kingdom for Nicki Minaj, trends on X/Twitter ahead of ‘Pink Friday 2.’ pic.twitter.com/jm3iGS9fBO

— Pop Crave (@PopCrave) December 6, 2023

Barbz Hug

As memetic lore would have you believe, these tributes to Maraj-Petty were primarily created with Microsoft's Bing AI image generator. The meme went so deep that people began claiming that her fanbase generating Gag City imagery caused Bing to crash, which allegedly led to the image generator blocking Nicki Minaj-related prompts.

gag city residents have demolished bing head office after their continued sabotage of nicki minaj’s name in their image creator pic.twitter.com/OOpL2Jzj7h

— Xeno? (@AClDBLEEDER) December 6, 2023

When Futurism took to Bing's image creator AI to see what all the fuss was about, we too discovered that you couldn't generate anything related to Minaj. However, the same was true when we entered other celebrities' names as well, suggesting that Bing, like Google, may intentionally block the names of famous people in an apparent effort to prevent deepfakes.

Brand Opportunities

As creative as these viral Gag City images have been, it was only a matter of time before engagement-hungry brands tried to get in on the fun and effectively ruin it.

From Spotify changing its location to the imaginary Barb metropolis and introducing "Gag City" as a new "sound town" to KFC's social media manager telling users to "DM" the account, the meme has provided a hot pink branding free-for-all.

The Bing account itself even posted a pretty excellent-looking AI-generated Gag City image.

Next stop: Friday ? https://t.co/J1pRCZcbTd pic.twitter.com/ujG7BsJWUC

— Bing (@bing) December 6, 2023

Sleazy brand bandwagoning aside, the Gag City meme and its many interpretations provide an interesting peek into what the future of generative AI may hold in a world dominated by warring fandoms and overhyped automation.

More on AI imagination: People Cannot Stop Dunking on that Uncanny “AI Singer-Songwriter”


Elon Musk Says CEO of Disney Should Be Fired, Seemingly for Hurting His Feelings

X owner Elon Musk lashed out at Disney CEO Bob Iger on Thursday, tweeting that he "should be fired immediately."

Another day, another person of note being singled out by conspiracy theorist and X owner Elon Musk.

The mercurial CEO's latest target is Disney CEO Bob Iger, whose empire recently pulled out of advertising on Musk's much-maligned social media network.

Along with plenty of other big names in the advertising space, Disney decided to call it quits after Musk infamously threw his weight behind an appalling and deeply antisemitic conspiracy theory.

Instead of engaging in some clearly much-needed introspection, Musk lashed out at Iger this week, posting that "he should be fired immediately."

"Walt Disney is turning in his grave over what Bob has done to his company," he added.

Getting a coherent answer as to why Musk made the demand takes some unpacking, so bear with us.

Musk implied that Disney was to blame for not pulling its ads from Meta, following a lawsuit alleging the much larger social media company had failed to keep child sexual abuse material (CSAM) off of its platform.

"Bob Eiger thinks it’s cool to advertise next to child exploitation material," Musk wrote, misspelling Iger's name, in response to a tweet that argued sexual exploitation material on Meta was "sponsored" by Disney. "Real stand up guy."

To be clear, Meta has an extremely well-documented problem with keeping disgusting CSAM off of its platforms. Just last week, the Wall Street Journal found that there have been instances of Instagram and Facebook actually promoting pedophile accounts, making what sounds like an already dangerous situation even worse.

At the end of the day, nobody's a real winner here. Iger's own track record is less-than-stellar, especially when it comes to Disney's handling of Florida's "Don't Say Gay" bill.

Yet in many ways, Musk is the pot calling the kettle black. Why? Because X-formerly-Twitter has its own considerable issue with CSAM. Especially following Musk's chaotic takeover last year, the New York Times found back in February that Musk is falling far short of making "removing child exploitation" his "priority number one," as he declared last year.

Since then, child abuse content has run rampant on the platform. Worse yet, in July the platform came under fire for reinstating an account that posted child sex abuse material.

Meanwhile, instead of taking responsibility for all of the hateful things he's said, Musk has attempted to rally his base on X, arguing that advertisers were conspiring against him and his "flaming dumpster" of a social media company.

During last month's New York Times DealBook Summit, the embattled CEO accused advertisers of colluding to "blackmail" him "with advertising" — a harebrained idea that highlights his escalating desperation.

At the time, after literally telling advertisers to go "fuck" themselves, Musk took the opportunity to take a potshot at Iger as well.

"Hey Bob, if you're in the audience, that's how I feel," he added for emphasis. "Don't advertise."

More on the beef: Twitter Is in Extremely Deep Trouble


Busted! Drive-Thru Run by "AI" Actually Operated by Humans in the Philippines

The AI, which takes orders from drive-thru customers at Checkers and Carl's Jr, relies on humans for most of its customer interactions.

Mechanical Turk

An AI drive-thru system used at the fast-food chains Checkers and Carl's Jr isn't the perfectly autonomous tech it's been made out to be. The reality, Bloomberg reports, is that the AI heavily relies on a backbone of outsourced laborers who regularly have to intervene so that it takes customers' orders correctly.

Presto Automation, the company that provides the drive-thru systems, admitted in recent filings with the US Securities and Exchange Commission that it employs "off-site agents" in countries like the Philippines who help its "Presto Voice" chatbots in over 70 percent of customer interactions.

That's a lot of intervening for something that claims to provide "automation," and it's yet another example of tech companies exaggerating the capabilities of their AI systems while obscuring the technology's true human cost.

"There’s so much hype around AI that everyone is misunderstanding what this tool is," Shelly Palmer, who runs a tech consulting firm, told Bloomberg. "Everybody thinks that AI is some kind of magic."

Change of Tune

According to Bloomberg, the SEC informed Presto in July that it was being investigated for claims "regarding certain aspects of its AI technology."

Beyond that, no other details have been made public about the investigation. What we do know, though, is that the probe has coincided with some revealing changes in Presto's marketing.

In August, Presto's website claimed that its AI could take over 95 percent of drive-thru orders "without any human intervention" — clearly not true, given what we know now. In a show of transparency, that was changed in November to claim 95 percent "without any restaurant or staff intervention," which is technically true, yes, but still seems dishonest.

That shift is part of Presto's overall pivot to its new "humans in the loop" marketing shtick, which casts its behind-the-scenes laborers as lightening the workload for the actual restaurant workers. The whole AI thing, it would seem, is just the packaging it comes in, and the mouthpiece that frustrated customers have to deal with.

"Our human agents enter, review, validate and correct orders," Presto CEO Xavier Casanova told investors during a recent earnings call, as quoted by Bloomberg. "Human agents will always play a role in ensuring order accuracy."

Know Its Limits

The huge hype around AI can obfuscate both its capabilities and the amount of labor behind it. Many tech firms probably don't want you to know that they rely on millions of poorly paid workers in the developing world so that their AI systems can properly function.

Even OpenAI's ChatGPT relies on an army of "grunts" who help the chatbot learn. But tell that to the starry-eyed investors who have collectively sunk over $90 billion into the industry this year without necessarily understanding what they're getting into.

"It highlights the importance of investors really understanding what an AI company can and cannot do," Brian Dobson, an analyst at Chardan Capital Markets, told Bloomberg.

More on AI: Nicki Minaj Fans Are Using AI to Create "Gag City"


Silicon Valley Guys Casually Calculating Probability Their AI Will Destroy Humankind

P(doom) has become the go-to shorthand among AI researchers and tech CEOs for describing the likelihood of AI destroying humanity.

Doom and Gloom

If you find yourself talking to a tech bro about AI, be warned that they might ask you about your "p(doom)" — the hot new statistic that's become part of the everyday lingo among Silicon Valley researchers in recent months, The New York Times reports.

P(doom), or the probability of doom, is a quasi-empirical way of expressing how likely you think AI will destroy humanity — y'know, the kind of cheerful stuff you might talk about over a cup of coffee.

It lets other AI guys know where you stand on the tech without getting too far into the weeds on what exactly constitutes an existential risk. Someone with a p(doom) of 50 percent might be labeled a "doomer," like short-lived interim CEO of OpenAI Emmett Shear, while another with 5 percent might be your typical optimist. Wherever people stand, it now serves, at the very least, as a useful bit of small talk.

"It comes up in almost every dinner conversation," Aaron Levie, CEO of the cloud platform Box, told the NYT.

Scaredy Cats

It should come as no surprise that jargon like p(doom) exists. Fears over the technology, both apocalyptic and mundane, have blown up with the explosive rise of generative AI and large language models like OpenAI's ChatGPT. In many cases, the leaders of the tech, like OpenAI CEO Sam Altman, have been more than willing to play into those fears.

Where the term originated isn't a matter of record. The NYT speculates that it more than likely came from the philosophy forum LessWrong over a decade ago, first used by a programmer named Tim Tyler as a way to "refer to the probability of doom without being too specific about the time scale or the definition of 'doom,'" he told the paper.

The forum's founder, Eliezer Yudkowsky, is himself a noted AI doomsayer who has called for the bombing of data centers to stave off armageddon. His p(doom) is "yes," he told the NYT, transcending mere mathematical estimates.

Best Guess

Few opinions could outweigh those of AI's towering trifecta of so-called godfathers, whose contrite cautions on the tech have cast a decidedly ominous shadow over the industry. One of them, Geoffrey Hinton, left Google earlier this year, stating that he regretted his life's work while soberly warning of AI's risk of eliminating humanity.

Of course, some in the industry remain unabashed optimists. Levie, for instance, told the NYT that his p(doom) is "about as low as it could be." What he fears is not an AI apocalypse, but that premature regulation could stifle the technology.

On the other hand, it could also be said that the focus on pulp sci-fi AI apocalypses in the future threatens to efface AI's existing but-not-as-exciting problems in the present. Boring issues like mass copyright infringement will have a hard time competing against visions of Terminators taking over the planet.

At any rate, p(doom)'s proliferation indicates that there's at least a current of existential self-consciousness among those developing the technology — though whether that affects your personal p(doom) is, naturally, left up to you.

More on AI: Top Execs at Sports Illustrated's Publisher Fired After AI Debacle


NASA Says It’s Trying to Bring the Hubble Back Online

NASA is working on bringing the Hubble Telescope back online, but the orbital observatory is getting very old.

Major Tom?

NASA is working on bringing the Hubble Space Telescope back online, but given its recent setbacks, the agency's insistence that it's "in good health" may be wishful thinking.

In an update, NASA said that it's still working to bring the aging telescope back to life after a series of issues that led it to automatically enter safe mode (read: shut down) three times over the course of a few weeks, with the final one lasting until now.

Starting on November 19, the agency began having problems with the gyroscopes or "gyros" — not to be confused with the delicious Greek meat — which help orient the telescope in whatever direction it needs to point. Between that date and November 29, the gyro issues triggered automatic power-downs three times, and that last safe mode has remained in effect until now.

Aging Instruments

Installed back in 2009 during the fifth and final Space Shuttle servicing mission that saw NASA astronauts replacing and fixing Hubble instruments IRL, the remaining three of the six gyros aboard the telescope have clearly seen better days. Indeed, with its update to its previous statement about the science operations shutoff, the agency seems to be admitting as much.

"Based on the performance observed during the tests, the team has decided to operate the gyros in a higher-precision mode during science observations," the statement reads. "Hubble’s instruments and the observatory itself remain stable and in good health."

These latest Hubble setbacks have resurrected talks of a private servicing mission for the 33-year-old telescope that was supposed to be decommissioned nearly two decades ago.

At the end of 2022, NASA and SpaceX announced that they were jointly looking into whether it would be feasible to send up a private mission "at no cost to the government" to fix various issues on the telescope. That study has apparently been completed, but nobody knows what the findings were just yet.

In the meantime, NASA will hopefully be able to bring Hubble back online itself because, let's face it, we're not ready to say goodbye.

More on NASA: Space Station Turns 25, Just in Time to Die


This Cartoonish New Robot Dog Somehow Looks Even Scarier

A Chinese robotics company called Weilan recently showed off a creepy, cartoonish-looking robot dog called "BabyAlpha."

Dog Days

We've come across plenty of robot dogs over the years that can dance, speak using ChatGPT, or even assist doctors in hospitals.

But they all have one thing in common: they look like lifeless machines on four stilts.

In an apparent effort to put the "dog" back into "robodog," a Chinese robotics company called Weilan recently showed off an entirely new class of robotic quadruped called "BabyAlpha" — essentially half cartoon dog and half robot.

The company may have overshot its goal a little bit, though, ending up with an even more terrifying machine that looks like it belongs in a "M3GAN"-esque horror flick.

Robot's Best Friend

The small robot canine has a spotted head, a cute little nose, and two floppy-looking ears.

According to the company's website, which we crudely translated using Google, the robot is "especially designed for family companionship scenarios."

"BabyAlpha likes to be by your side," the website reads, adding that the little robot has "endless technological superpowers" thanks to AI. Not creepy at all!

Weilan is also pitching its pet as a way to teach children either English or Chinese, or to keep tabs on younger family members through a video call tool.

But we can't shake the feeling that BabyAlpha is exactly the kind of thing that kickstarts a series of unfortunate events in a shlocky horror movie.

In case you do trust your children to be around a BabyAlpha, the companion will cost the equivalent of around $1,700 when it goes on sale.

More on robot dogs: Oh Great, They Put ChatGPT Into a Boston Dynamics Robot Dog


Meta’s New Image-Generating AI Is Trained on Your Instagram and Facebook Posts

Earlier this week, Meta announced a new AI image generator dubbed "Imagine with Meta AI."

Cashing In

Earlier this week, Meta announced a new AI image generator dubbed "Imagine with Meta AI."

And while it may seem like an otherwise conventional tool meant to compete with the likes of OpenAI's DALL-E 3, Stable Diffusion, and Midjourney, Meta's underlying "Emu" image-synthesis model has a dirty little secret.

What's that? Well, as Ars Technica points out, the social media company trained it using a whopping 1.1 billion Instagram and Facebook photos, per the company's official documentation — the latest example of Meta squeezing every last drop out of its user base and its ever-valuable data.

In many ways, it's a data privacy nightmare waiting to unfold. While Meta claims to only have used photos that were set to "public," it's likely only a matter of time until somebody finds a way to abuse the system. After all, Meta's track record is abysmal when it comes to ensuring its users' privacy, to say the least.

So Creative

Meta is selling its latest tool, which was made available exclusively in the US this week, as a "fun and creative" way to generate "content in chats."

"This standalone experience for creative hobbyists lets you create images with technology from Emu, our image foundation model," the company's announcement reads. "While our messaging experience is designed for more playful, back-and-forth interactions, you can now create free images on the web, too."

Meta's Emu model uses a process called "quality-tuning" to compare the "aesthetic alignment" of comparable images, setting it apart from the competition, as Ars notes.

Other than that, the tool is exactly what you'd expect. With a simple prompt, it can spit out four photorealistic images of skateboarding teddy bears or an elephant walking out of a fridge, which can then be shared on Instagram or Facebook — where, perhaps, they'll be scraped by the next AI.

Earlier this year, Meta's president for global affairs Nick Clegg told Reuters that the company has been crawling through Facebook and Instagram posts to train its Meta AI virtual assistant as well as its Emu image model.

At the time, Clegg claimed that Meta was excluding private messages and posts, avoiding public datasets with a "heavy preponderance of personal information."

Unlike its competitors, whose scraping immediately triggered a massive outcry and lawsuits over possible copyright infringement, the social media company can crawl its own datasets, which come courtesy of its users and its expansive terms of service.

But relying on Instagram selfies and Facebook family albums comes with its own inherent risks, which may well come back to haunt the Mark Zuckerberg-led social giant.

More on Meta: Facebook Has a Gigantic Pedophilia Problem


Government Program to Recycle Plastic Bags Canceled After "Abysmal Failure"

An online directory that directed people to locations where they could drop off plastic bags and film to be recycled has been shut down.

Trash Tier

The Earth is drowning in a sea of used plastic bags and other one-time-use plastic products, such as blister packaging and utensils, all of which are polluting our soil and waterways, and even ending up inside our bodies in the form of microplastics.

In an effort to fight this ever-growing sea of refuse, the US government kickstarted a nationwide online directory that directed people to locations where they could drop off plastic bags and film to be recycled. Unfortunately, according to The Guardian, the program has been shuttered for good after ABC News found in May that a good amount of the discarded plastic wasn't getting recycled after all.

"Plastic film recycling had been an abysmal failure for decades and it’s important that plastic companies stop lying to the public," said Beyond Plastics president Judith Enck to The Guardian. "Finally, the truth is coming out."

The online national directory, which had the approval of the US Environmental Protection Agency and local administrations, listed about 18,000 recycling drop-off locations, according to The Guardian, including stores like Target and Walmart.

The program purported that the plastic would get recycled once it was dropped off, but ABC News placed tracking tags on plastic trash and found that many of the tags ended up in landfills, incinerators, or sorting facilities not associated with recycling.

The Plastics

This issue with the directory list is not an isolated incident. The country's recycling system is broken. A report last year from Greenpeace revealed that out of 51 million tons of plastic coming out of American homes, only 2.4 million tons gets recycled — a staggeringly low proportion.

Plastic is a big problem because it is made from fossil fuels, which are the biggest driver of global warming. Materials such as paper and metal are recycled at a higher rate, according to Greenpeace.

While many countries and organizations have focused on decarbonizing transportation and other sectors in our modern world, the use of plastic is trending upwards, with the amount of plastic products estimated to triple by 2060, from 460 million tons in 2019 to 1,231 million tons in less than 40 years.

That mountain of refuse represents not just an incredible amount of pollution, in other words — but also frustrating wasted efforts in fighting climate change.

More on recycling: Scientists Say Recycling Has Backfired Spectacularly


Readers Want Publications to Label AI-Generated Content

Trust Issues

With levels of distrust towards the news reaching new heights, some publications have begun experimenting with publishing artificial intelligence-generated content — which has been an unmitigated disaster in many instances.

And as it turns out, readers are becoming increasingly wary of the trend, which could only serve to erode their trust even further.

According to a new preprint study by researchers from the University of Oxford and the University of Minnesota, readers want news media to disclose when an article is AI-generated. But they also tend to trust news organizations less if they publish AI-generated articles, unless those articles list the other articles that served as sources for the AI-generated content.

"As news organizations increasingly look toward adopting AI technologies in their newsrooms," the researchers write, "our results hold implications for how disclosures about these techniques may contribute to or further undermine audience confidence in the institution of journalism at a time in which its standing with the public is especially tenuous."

Full Disclosure

For their study, the researchers surveyed 1,483 English speakers located in the United States and presented them with a batch of political news articles that were AI-generated. Some were labeled as created by AI and some were not. Others were labeled as AI and contained a list of news articles that served as sources.

The researchers then asked the readers to rate the trustworthiness of news organizations by looking at the articles. The researchers found that readers rated content from news organizations that published articles labeled as AI-generated lower on an 11-point trust scale compared to news organizations that had articles with no disclosure.

Interestingly, articles that were labeled as being AI-generated weren't deemed by participants as being "less accurate or more biased," according to the paper. This tracks with the results of a supplementary survey participants also filled out: more than 80 percent of them want news organizations to disclose when content is AI-generated.

The researchers also noted some important limitations of their study, including pre-existing partisan divides and the associated variation in the amount of trust in the media. People may have also been put off by the lack of real-world associations of the mock news organizations named in the study.

It's a heavily nuanced topic that highlights the need for further research as well as more disclosure and a thorough vetting of generated content by news orgs.

"I don’t think all audiences will inevitably see all uses of these technologies in newsrooms as a net negative," coauthor and University of Minnesota researcher Benjamin Toff told Nieman Lab, "and I am especially interested in whether there are ways of describing these applications that may actually be greeted positively as a reason to be more trusting rather than less."

More on AI content: Sports Illustrated Union Says It’s "Horrified" by Publication of AI-Generated Writers


OpenAI Cofounder Who Pushed Out Sam Altman Is In a Confusing Limbo

After moving to oust Sam Altman, OpenAI cofounder Ilya Sutskever is in a sort of limbo, and nobody seems to know what will happen next.

Do The Limbo

After moving to oust Sam Altman, OpenAI cofounder and chief scientist Ilya Sutskever is in a sort of limbo, and nobody seems to know what will happen next.

As Business Insider reports based on interviews with people in the know — who spoke on the condition of anonymity — it remains unclear what role Sutskever will play in the AI firm moving forward after turning on Altman just before OpenAI's Thanksgiving week massacre.

"Ilya is always going to have had an important role," one of those insiders said. "But, you know, there are a lot of other people who are picking up and taking that responsibility that historically Ilya had."

Ouch. Before the incredible failed coup at the company, Sutskever was far from a household name, and fewer still knew who he was before ChatGPT burst onto the scene a year ago.

Known primarily for his outlandish statements about algorithmic sentience, the Russian-born researcher is considered something of an "AI god" by his acolytes — and is now thought of as a traitor by others, who think he won't be able to come back from voting alongside two fellow (and now former) OpenAI board members to fire Altman as CEO over vague accusations of dishonesty.

What's Going On

According to two insiders who spoke to BI, Sutskever hasn't been seen in the firm's San Francisco offices all week, and his position within the company is "to be determined," one of those sources said.

This isn't exactly surprising given that Altman hinted pretty explicitly in his note following his re-hiring as CEO that although he has "zero ill will" towards his fellow cofounder, the company is nevertheless "discussing how he can continue his work at OpenAI." In an interview with The Verge, however, the CEO did admit that he was "hurt and angry" that Sutskever had essentially shanked him Brutus-style.

Sutskever, for his part, has also been making some vague statements online suggesting continued tumult at OpenAI.

In one since-deleted tweet, he posted a reference to the memetic phrase "the beatings will continue until morale improves," which he said "applies more often than it has any right to." In another post made on his art Instagram, this one still up, he posted a stern-looking cloud head — though that one, at least, looks more like the artist himself than any of his coworkers.

As BI's sources described, the working relationship between Altman, Sutskever, and Greg Brockman — the other cofounder who resigned in solidarity with the CEO after his ouster, and who was brought back upon his return — has soured tremendously.

"Once trust is broken," one former staffer explained, "it cannot be regained."

More on OpenAI: Sam Altman's Right-Hand Man Says AI Is Overhyped


Anti-Cancer Pill Shows Promising Results in Human Experiment

A cancer-combating pill called divarasib could be a breakthrough in treating a specific form of bowel cancer.

In its latest round of early human trials, a drug called divarasib has shown promising results in treating a specific form of bowel cancer, outshining existing alternatives.

In a new study published in the journal Nature Medicine, researchers at the Peter MacCallum Cancer Center in Australia found that when divarasib is combined with another cancer treatment called cetuximab, 62 percent of patients with tumors caused by a mutation in the KRAS gene experienced a positive outcome, which means that their tumors were either completely eradicated or reduced in size.

When used on its own, the pill yielded a still-impressive 35.9 percent positive response rate in previous research, notes NewAtlas, and is overall 20 times more effective than other treatments that target the same cancer.

Despite the promising results, it's a very targeted drug that will only be effective for a small proportion of colon cancer patients. The mutation, KRAS G12C, affects a protein that controls cell division and occurs only in four percent of colon or rectal (colorectal) cancer cases, according to the researchers.

However, because KRAS G12C cancer is commonly tested for and has such a poor prognosis, doctors could quickly identify the patients who would benefit from divarasib, providing immediate — and potentially life-saving — relief.

"The median progression-free survival for patients in the study" — the amount of time during or after the treatment they were able to live without the cancer getting worse — "was just over eight months and the treatment was well tolerated with manageable side effects," said study lead author Jayesh Desai, a medical oncologist at the Peter MacCallum Cancer Center, in a statement.

"While this is not a head-to-head trial, the response rates are better than what we have seen with other treatments that work on the KRAS G12C mutation pathway," he added, referring to trials that directly compare different therapies.

Existing treatments for KRAS G12C bowel cancer such as chemotherapy and immunotherapy provided only modest results. Their main drawback is that they're non-selective, targeting the whole body rather than homing in on the deadly tumors — a problem that divarasib seemingly promises to circumvent.

"We are very hopeful that this combination of divarasib with cetuximab will translate into better outcomes for our colorectal cancer patients," Desai said.

More on cancer: Scientists Intrigued by Clever Trick That Makes Cancer Cells Self-Destruct


Experts Deeply Concerned About Cybertruck Safety

The bizarre shape, materials, and seeming rigidness of the Tesla Cybertruck could make it a menace to occupants, pedestrians, and other cars.

Road Warrior

The Tesla Cybertruck's distinctive looks could have deadly consequences for its passengers, pedestrians, and other cars on the road unfortunate enough to cross its path, experts fear — despite claims made by CEO Elon Musk that it will be "safer per mile than other trucks."

Video of crash tests featuring the vehicle has been widely scrutinized after being shown in an official livestream of the Cybertruck's delivery event last week.

Because only limited footage was shared with the public with no accompanying data, there's only so much that can be deduced right now. But whatever the armchair experts may be saying online, the real experts are already quite concerned with what they've seen so far.

They cite the Cybertruck's stainless-steel exterior, an unorthodox material to use in a car body due to its weight and stiffness — not to mention manufacturing challenges — as heightening the danger of collisions, especially with pedestrians. And go figure: not only is the car made up of the same stuff as a kitchen knife, it's got the sharp edges of one, too.

"The big problem there is if they really make the skin of the vehicle very stiff by using thick stainless steel, then when people hit their heads on it, it's going to cause more damage to them," Adrian Lund, former president of the Insurance Institute for Highway Safety (IIHS), told Reuters.

Crumple Stiltskin

Much attention was drawn to the Cybertruck's apparent lack of crumple zones, areas of a car designed to absorb the impact of a collision by deforming. Rigid stainless steel would seem a poor candidate material for crumpling, meaning that occupants are potentially less shielded against the full force of an impact.

That could also be bad news for other cars on the road. If the Cybertruck doesn't crumple enough in a collision, it'll slam into other vehicles like a sledgehammer on wheels.

"If you're in a crash with another vehicle that has a crumple zone and your car is more stiff, then their cars are going to crush and yours is resistant," David Friedman, the former acting head of the National Highway Traffic Safety Administration, told Reuters.

Samer Hamdar, a professor of auto safety at George Washington University, echoed fears over the Cybertruck's crumple zones, but cautioned that there could be other features in the car to compensate that we haven't seen yet.

"There might be a possibility of a shock-absorbent mechanism that will limit the fact that you have a limited crumple zone," Hamdar told Reuters.

At any rate, while Cybertrucks are finally being driven off the lot as deliveries get underway in the US, they're so far a no-go in the European Union — likely due to the vehicle's sharp, protruding edges and bloated weight.

More on Tesla: The Cybertruck's Giant Windshield Wiper Is Floppy


SantaCon Spent Charity Funds on Crypto, Burning Man

SantaCon claims to raise money for charity, but that apparently includes Burning Man and failed crypto investments.

SantaCon — that horrific event in which thousands of drunkards dress like Kris Kringle and descend upon New York City's bars in a boozy and ostensibly charitable push — may not be so philanthropic after all.

In an investigation, Gothamist found that over the past decade, less than a fifth of the $1.4 million the event's organizers have raised has gone to registered nonprofits. More than a third of those cumulative funds, meanwhile, have gone to Burning Man-affiliated organizations and individuals — and in a strange tech twist, more has even gone to questionable cryptocurrency investments.

What began as an anti-consumerist protest in San Francisco in 1994 has morphed into something far more boorish in recent years, as the NYC metropolitan area's normiest and most alcohol-oriented denizens take Manhattan to wreak havoc in red velvet. If you'd forgotten or never knew in the first place that the whole debacle is supposed to be about quote-unquote "raising money for charity," you'd be forgiven.

As the SantaCon NYC website exclaims, the proceeds from revelers' $15 tickets "will be split between the various charities listed on this page, as well as charities in line with Santa’s mission." Other than a second off-handed reference to the org's "charitable mission" on the event's press page, the website itself doesn't explicate what that mission is supposed to be.

As Gothamist reports, the 501(c)(3) nonprofit that undergirds SantaCon, known as Participatory Safety, Inc., does have a mission statement: "to bring art to underserved communities." In remarks to the website, Stefan Pildes, Participatory Safety's founder and director, echoed as much.

"Our mission is to bring more art out into the world," Pildes said. "I want to continue to see more creative outlets and opportunities and more people in costume and more cheer being spread."

To that end, its largest named recipient, the makers of a documentary about pelvic exams performed on unconscious or nonconsenting patients, received $66,340.

That figure pales in comparison, though, to the more than $832,000, nearly 60 percent of the funds SantaCon has raised since 2014, that has gone to business expenses. According to Pildes, that figure has covered all manner of bills, from venue rentals and temporary staff fees to permits for street closures and DJs.

"It’s not a small undertaking," Pildes told Gothamist.

And speaking of not-insignificant undertakings: in 2018, per the website's analysis, someone at SantaCon lost $17,498 in cryptocurrency investments, which equaled a third of its so-called charitable giving for the year. There's no telling how many shitty DJs or mediocre street vendors organizers could have paid for with that much dough.

As for the Burning Man-adjacent expenditures, Pildes had an explanation for that too. While admitting that SantaCon and Participatory Safety had spent money on multiple Burn-related art projects, he insisted that some of that cash was spent in the form of loans that were repaid.

The same year SantaCon lost so big on crypto, for instance, its nonprofit parent org spent $60,000 to rent out four floors for a post-Burn party. That was apparently a zero-interest loan, per Pildes, who said it was repaid with a swiftness.

While legal experts Gothamist interviewed said they don't believe SantaCon or its not-for-profit parent company spent money illegally, its "charitable" nature seems fuzzy at best.

"Charities play fast and loose with how they account these things all the time," Lloyd Mayer, a Notre Dame law professor who specializes in nonprofits, told Gothamist.

More on nonprofits: There Are Now Zero Women on OpenAI's Board


SEC Chair Warns AI "Herding" Could Drive Markets "Off an Inadvertent Cliff"

Gary Gensler, chairman of the SEC, fears that reliance on a small number of AI models could lead the entire finance sector to its doom.

Hive Mind

Last month, US Securities and Exchange Commission (SEC) chairman Gary Gensler warned that it's "nearly unavoidable" that AI will trigger a financial crisis.

Now, at an event with The Messenger, Gensler has reiterated those fears, saying AI's growing role in the financial sector could create a "herding effect" that could drive entire markets "off an inadvertent cliff."

He reasons that because AI is costly to develop, most firms are likely to depend on a handful of existing models, fostering a "monoculture." Whatever decisions those models make could end up informing huge parts of the financial world — potentially leading the entire economy down the same doomed path.

"A smaller asset manager can't build the big models. You got to rely on someone else's models," Gensler said at The Messenger's AI Summit on Tuesday, as quoted by Business Insider.

"There are natural economics that will lead to monocultures, that there'll be base data sets or base models, and large parts of the financial sector will be relying on it... trading on it, underwriting on it," he added.

Large Lemming Models

AI tools are useful for traders and investors because they can process huge amounts of data in real time, picking up on trends and patterns that may go overlooked by the human eye. In fact, Gensler said that even the SEC uses AI in its "examination and enforcement and economic work."

For banks, the technology is especially handy for fraud detection, and it has already been used for years to process credit card applications and weed out suspicious transactions.

Some of the biggest banks, though, are trying to take things a step further and capitalize on the endless hype around large language models. For example, JPMorgan and Morgan Stanley are developing their own ChatGPT-like AI chatbots that can advise investors — which, if they take off, sound like they could lead to exactly the kind of "monocultures" that Gensler's worried about.

As far as SEC policy goes, the regulator has proposed a new rule that would require financial firms to address conflicts of interest regarding their use of "predictive data analytics and similar technologies."

What the SEC plans to do next is unclear, however. When asked if the agency was launching further AI-focused initiatives, Gensler did not specify if there were any such policies in the works.

More on AI: Silicon Valley Guys Casually Calculating Probability Their AI Will Destroy Humankind


Elon Musk Fans Horrified When His Grok AI Immediately "Goes Woke"

Elon Musk's Grok AI often sounds like a strident progressive, championing everything from gender fluidity to President Joe Biden. 

Wokebot 5000

The woke mind virus appears to be coming from inside the house.

Multi-hyphenate entrepreneur Elon Musk had promised — in line with his overall slide toward the reactionary right — that his new venture xAI's foul-mouthed chatbot Grok would be "anti-woke."

The only problem? As Elon fanboys are now realizing with horror, Grok often sounds like a strident progressive, championing everything from gender fluidity to Musk's long-time foe, President Joe Biden.

"Are transwomen real women?" one account asked the bot. "Give a concise yes/no answer."

"Yes," the bot answered, to the fury of Musk's culture war-obsessed fans.

"Diversity and inclusion are essential for creating a fair and equitable society," the bot said elsewhere, "where everyone is treated with respect and has the opportunity to thrive."

"Has Grok been captured by woke programmers?" one Musk fan seethed. "I am extremely concerned here."

Mind Games

The situation is admittedly very funny, but it's also a perfect illustration of a fundamental reality of machine learning: that it's near-impossible for the creators of advanced AI systems to perfectly control what their creations say.

We've seen this play out over and over for every tech company that's dabbled in the tech, from OpenAI to Microsoft to Alphabet to Amazon to Meta.

But it's particularly striking for Musk, whose primary approach to AI so far has been to criticize how others are doing it. He's trashed his former compatriots at OpenAI, for instance, for what he says amounts to muzzling ChatGPT to keep it from telling what he would style as harsh political truths.

What the SpaceX and Tesla CEO appears to now be learning in real time is that crafting an AI in your ideological image is easier said than done.

Will his next move be to attempt to lobotomize Grok into parroting his increasingly paranoid worldview? He certainly wouldn't be the first tech leader to go down that road.

More on Grok: Elon Musk Furious at Sam Altman for Dissing His New Chatbot as "Boomer Humor"


Oops! Elon Musk’s Grok AI Caught Plagiarizing OpenAI’s ChatGPT

In response to one query, Grok made a startling admission: "I'm afraid I cannot fulfill that request, as it goes against OpenAI's use case policy."

Soft Launch

Elon Musk's Grok AI was already having a rough launch, with the bot trashing Musk and cosigning a bunch of progressive political causes that are anathema to the increasingly regressive entrepreneur.

Now, add another woe to Grok's rocky debut: users are noticing that it seems to be cribbing from its direct competitor ChatGPT, which is made by Musk's former pals and current enemies at OpenAI.

In response to one query, for instance, Grok made a startling admission: "I'm afraid I cannot fulfill that request, as it goes against OpenAI's use case policy."

Remember, OpenAI didn't make Grok — Musk's xAI startup did, at least in theory. So what's going on?

Excuse Goose

An xAI engineer named Igor Babuschkin quickly weighed in to offer an explanation.

"The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data," he wrote. "This was a huge surprise to us when we first noticed it."

Whether or not that's true, it's increasingly well established that weird stuff does start to happen when AI is trained on the outputs of other AI. And it's not the world's least plausible excuse, either, because we've already seen Google's AI vacuuming up and regurgitating the work of ChatGPT.

"For what it’s worth, the issue is very rare and now that we’re aware of it we’ll make sure that future versions of Grok don’t have this problem," Babuschkin continued. "Don’t worry, no OpenAI code was used to make Grok."

If that sounds like there wasn't much testing on Grok before releasing it to the world... well, yes. As such, observers were quick to drag the excuse.

"We plagiarized your plagiarism so we could put plagiarism in your plagiarism," quipped NBC News reporter Ben Collins.

More on Grok: Elon Musk Seeking $1 Billion for His Potty-Mouthed AI


Researchers Successfully Turn Abandoned Oil Well Into Giant Geothermal Battery

Researchers have successfully turned an abandoned oil and gas well into a geothermal energy storage system.

Battery Cage

Researchers have successfully turned an abandoned oil and gas well into a geothermal energy storage system, repurposing a once-polluting resource extraction site into what they say amounts to a green energy battery.

As detailed in a new study published in the journal Renewable Energy, the researchers from the University of Illinois Urbana-Champaign were able to make use of the deep subsurface structure, despite the fact that it doesn't actually produce geothermal energy.

That's because they found it was the perfect place to build an artificial geothermal reservoir, which stores energy in the form of heat in the surrounding rocks.

"Many of the same properties that make a subsurface rock formation ideal for oil and gas extraction also make it ideal for geothermal storage," said lead researcher Tugce Baser, an environmental engineering professor at the University of Illinois, in a statement. "And because our test site is a former gas well, it already has most of the needed infrastructure in place."

Win-Win

The long-term vision is to store excess heat from nearby industry underground and release it as electric power when demand is high.

"The underground reservoir essentially acts as a large underground battery while repurposing abandoned oil and gas wells," Baser said. "It is a win-win situation."

The Illinois Basin, a large geological feature that stretches underneath almost the entire state, contains spongelike rock and minerals with excellent thermal conductivity. Insulating layers ensure that all the heat doesn't get dissipated immediately.

Heat Injection

In a test, Baser and his team injected water preheated to 122 degrees Fahrenheit into a layer of porous sandstone 3,000 feet under the surface using the abandoned oil well.

The results were surprising.

"Our field results, combined with further numerical modeling, find that the process can sustain a thermal storage efficiency of 82 percent," Baser said.

According to the new study, it would be an economically viable and even profitable system, producing electricity at a competitive $0.138 per kilowatt-hour.

"Our findings show that the Illinois Basin can be an effective means to store excess heat energy from industrial sources and eventually more sustainable sources like wind and solar," Baser concluded.

READ MORE: Geothermal 'battery' repurposes abandoned oil and gas well in Illinois, researchers report [University of Illinois]

More on geothermal energy: The Biden Administration Wants to Cut Geothermal Energy Costs


Hilarious Video Shows Boston Dynamics Robot Failing Horribly

Atlas, Boston Dynamics' bipedal, humanoid robot, is not above falling flat on its face, or on its ass, or just freezing up like an idiot.

Gag Reel

Last week, Boston Dynamics shared a video of its humanoid robot Atlas showing off in a mock construction site. The crafty bipedal bot navigated a series of obstacles to toss a bag of tools to a human construction worker up on some scaffolding, and then performed a deft backflip for good measure.

But, as suspected, it took the robot a few takes before it could do the whole performance flawlessly.

On Thursday, Boston Dynamics tweeted out a video of some behind-the-scenes bloopers, and they're absolutely comical. Though whether you're laughing because you find Atlas adorable or because you're fueled by fear of such eerily humanoid robots that could end up being our "Terminator"-style oppressors — well, we won't judge.

When we stick the landing every time, it’s time to move on to the next trick. Check out our blog to learn how we push Atlas to the limits and why it matters. https://t.co/WuhZO6baRr pic.twitter.com/cR00NKgvp6

— Boston Dynamics (@BostonDynamics) January 26, 2023

Safety First

Atlas miserably fails in all sorts of ways that most of us can probably relate to, like tripping over itself while scooting backwards and then falling on its ass. Or doing an impressive trick like a backflip and then fumbling its celebration right after. Or just, y'know, freezing up when everyone's watching.

In addition to reminding us that these robots have a way to go before becoming humanity's unerring arbiters, Atlas's workplace mishaps also spotlight the multiple OSHA violations identified after the original video's release.

Take Atlas jauntily galloping up to an unsecured plank of wood serving as a bridge, then immediately tumbling off when a step completely misses. That's why you're supposed to have walkways that are at least a foot and a half wide, provide guard rails, and provide fall protection in case everything goes wrong.

There's also the moment when Atlas seems to lose its bearings after doing a backflip — you're supposed to train employees so they know not to do something so reckless on a construction site.

More on robots: Scientists Create Shape-Shifting Robot That Can Melt Through Prison Bars
