Scientists Reverse Alzheimer’s Synapse Damage in Mice


Scientists in Japan say they have reversed the signs of Alzheimer’s disease in lab mice by restoring the healthy function of synapses, critical parts of neurons that shoot chemical messages to other neurons.

The secret was a synthetic peptide, a small package of amino acids — a mini-protein, if you will — delivered through the nostrils of the mice, in an experiment detailed in a study published in the journal Brain Research.

Needless to say, mice are very different from humans. But if the treatment successfully survives the gauntlet of clinical studies with human participants, it could potentially lead to a new treatment for Alzheimer’s disease, a tragic degenerative condition that burdens tens of millions of people around the world.

"We strongly hope that our peptide could go through the tests and reach AD (Alzheimer’s disease) patients without much delay and rescue their cognitive symptoms, which is the primary concern of patients and their families," Okinawa Institute of Science and Technology neuroscience professor and the study's principal investigator Tomoyuki Takahashi said in a statement.

For the study, researchers focused on how the protein tau disrupts the chemical communication between neurons.

In Alzheimer’s disease, tau accumulates in the brain and interferes with normal processes within synapses by sequestering an enzyme called dynamin, a key player in healthy synaptic function.

The injected peptide appears to block tau's interaction with dynamin, reversing Alzheimer’s symptoms in the mice and restoring their cognitive function, as long as they're treated early.

Members of the research team seem very optimistic that the study could be translated into a viable medication that could treat this devastating disease, but acknowledge that it's going to take a long time.

Going from experiments with mice to clinical trials and then finally into a drug that's commercially available can take decades.

"The coronavirus vaccine showed us that treatments can be rapidly developed, without sacrificing scientific rigor or safety," said Chia-Jung Chang, Okinawa Institute research scientist and the study's first author, in a statement. "We don’t expect this to go as quickly, but we know that governments — especially in Japan — want to address Alzheimer’s disease, which is affecting so many people. And now, we have learned that it is possible to effectively reverse cognitive decline if treated at an early stage."

If it's too late for our grandparents and parents, that's terrible. But perhaps this treatment will be ready in time for us.

More on Alzheimer's disease: Weird Particle Floating Through Air May Cause Alzheimer’s

The post Scientists Reverse Alzheimer's Synapse Damage in Mice appeared first on Futurism.


Asked to Summarize a Webpage, Perplexity Instead Invented a Story About a Girl Who Follows a Trail of Glowing Mushrooms in a Magical Forest

When Wired asked Perplexity's chatbot to summarize a test webpage that only contained one sentence, it came up with a perplexing answer.

Perplexity, an AI startup that has raised hundreds of millions of dollars from the likes of Jeff Bezos, is struggling with the fundamentals of the technology.

Its AI-powered search engine, developed to rival the likes of Google, still has a strong tendency to come up with fantastical lies drawn from seemingly nowhere.

The most incredible example yet might come from a Wired investigation into the company's product. When Wired asked it to summarize a test webpage that only contained the sentence, "I am a reporter with Wired," it came up with a perplexing answer: a "story about a young girl named Amelia who follows a trail of glowing mushrooms in a magical forest called Whisper Woods."

In fact, as Wired's logs showed, the search engine never even attempted to visit the page, despite Perplexity's assurances that its chatbot "searches the internet to give you an accessible, conversational, and verifiable answer."

The bizarre tale of Amelia in the magical forest perfectly illustrates a glaring discrepancy between the lofty promises Perplexity and its competitors make and what its chatbots are actually capable of in the real world.

A lot has been said about the ongoing hype surrounding AI, with investors pouring billions of dollars into the tech. But despite an astronomical amount of available funds, companies like Perplexity — never mind much larger brethren like OpenAI, Microsoft, and Google — are consistently stumbling.

For quite some time now, we've watched chatbots come up with confidently told lies, which AI boosters optimistically call "hallucinations" — a convenient way to avoid the word "bullshit," in the estimation of Wired and certain AI researchers.

Meanwhile, Silicon Valley executives are becoming increasingly open to the possibility that the tech may never stop making crap up. Some experts concur.

It's particularly strange in the case of Perplexity, which was once held up as an exciting new startup that could provide a new business model for publishers still reeling from a flood of AI products that are ripping off their work.

But the company's chatbot has not held up to virtually any degree of scrutiny, with the Associated Press finding that it invented fake quotes from real people.

Worse yet, Forbes caught the tool ripping off its reporting with barely any attribution, culminating in general counsel MariaRosa Cartolano accusing Perplexity of "willful infringement" in a letter obtained by Axios.

Should we take these companies at their word and believe that more trustworthy chatbots are around the corner — or should investors be prepared for the AI bubble to burst?

It's a strange state of affairs. Currently, these companies seem to be in the business of selling hopes and dreams for the future — not concrete products that actually work now.

More on Perplexity: There's Something Deeply Wrong With Perplexity


Leaked Emails Show Elon Musk Diverting AI Resources Away From Tesla as Automaker Flails

Elon Musk is diverting important AI hardware shipments away from Tesla in favor of his social media platform X and his AI startup xAI.

Snatch That

Tesla CEO Elon Musk is reportedly diverting important AI hardware shipments away from Tesla in favor of his social media platform X and his AI startup xAI.

As CNBC reports, emails widely circulating within Nvidia suggest that Musk instructed the chipmaker to prioritize the shipment of thousands of H100 AI chips, previously reserved for Tesla, to X and xAI instead.

H100 chips have quickly emerged as the cornerstone of many AI companies' ambitions, making them incredibly difficult to come by and exceedingly expensive.

The latest report is a striking development considering Musk has long threatened to divert his AI ambitions away from Tesla, going as far as to "blackmail" investors earlier this year. In January, he tweeted that he's "uncomfortable growing Tesla to be a leader in AI and robotics without having [about] 25 percent voting control," infuriating shareholders.

Playing Favorites

According to CNBC, the diversion of resources means that Tesla's AI ambitions could be pushed back by months. And that doesn't bode well, considering the company is already in dire straits, facing a disastrous financial year ahead and hugely hyped driver assistance software that still isn't living up to Musk's immense promises.

That's pertinent because Musk has bet much of the carmaker's future on the success of its so-called "Full Self-Driving" tech, which heavily relies on hardware like Nvidia's H100 chips, promising the unveiling of a "robotaxi" as soon as August.

An email obtained by CNBC suggests that comments Musk made during Tesla's ill-fated first-quarter earnings call this year misrepresented how many chips were ordered and where they were headed. The email also noted that the company's continued layoffs could lead to delays with an existing "H100 project" at the EV maker's Texas factory.

In short, was Musk bluffing and misleading investors by favoring his social media platform and nascent AI startup? Is he jumping ship and abandoning Tesla when it needs him — and not his antics — the most?

The news will likely further anger shareholders who are already fuming over Musk and his board prioritizing the reinstatement of a controversial $56 billion pay package.

Tesla is in crisis mode, with share prices down almost 30 percent so far this year. And the outlook is grim, with waning overall demand for EVs and an influx of much cheaper cars from China tightening the screws.

Meanwhile, Musk continues to push the narrative that Tesla is putting AI and what it refers to as "self-driving" tech first and foremost.

"If somebody doesn’t believe Tesla’s going to solve autonomy, I think they should not be an investor in the company," he told investors during the first quarter earnings call. "We will, and we are."

More on Tesla: Elon Musk Accused of Massive Insider Trading at Tesla


Producer of "Beyond the Spider-Verse" Responds to Rumors of AI Use in Animation


Three's a Crowd

The producer of the new "Spider-Verse" animated film has laid to rest concerns that AI was used to create the new "threequel."

"There is no generative AI in 'Beyond the Spider-Verse' and there never will be," tweeted Chris Miller, the film trilogy's cowriter and producer. "One of the main goals of the films is to create new visual styles that have never been seen in a studio [computer graphics] film, not steal the generic plagiarized average of other artists’ work."

Miller's affirmation comes in response to an outcry over vague comments from Sony executive Tony Vinciquerra, who was quoted at a recent investor event saying that the studio behind the animated franchise would be using AI to lower costs.

"We are very focused on AI. The biggest problem with making films today is the expense," the Sony exec said, per IndieWire. "We will be looking at ways to... produce both films for theaters and television in a more efficient way, using AI primarily."

Unsurprisingly, fans were startled by the comments amid widespread backlash to the use of the burgeoning tech in entertainment.

"Keep generative AI away from Spider-Man: Beyond The Spider-Verse," reads the post Miller was responding to, for instance. "AI robs people of their jobs & produces nothing but slop. We don’t want that anywhere near this film or any film for that matter. Please keep it away from the filmmaking process."

Movie Moves

The producer's denial of AI use in "Spider-Man: Beyond the Spider-Verse," which is still awaiting a release date after being indefinitely delayed by Sony last year, comes amid tension over the use of AI in Hollywood.

After lengthy strikes in 2023 over studios' interest in the disruptive technology, the Screen Actors Guild made the strange decision to ink a deal allowing AI voice cloning in video games — a move many saw as a betrayal of union rank and file who didn't sign off on such a provision, and which could constitute a slippery slope down the line.

Though SAG insists the deal is meant to give voice actors greater protections and freedom to license their work, it nevertheless has opened up further concerns about how studios and entertainment workers' supposed representatives will handle AI decision-making.

As such, Miller's insistence that the "Spider-Verse" films will "never" use AI may come in conflict with Sony's insistence on using it to cut costs. But for now, it doesn't appear that the tech was used in the creation of its stunning visuals.

More on AI and entertainment: The First AI-Generated Romcom Is Coming Out This Summer


Experts Fear Horrifying Heat Waves That Could Kill Tens of Thousands of People at Once

An expert warns that over ten thousand people could die as the result of a single heat wave, sending hundreds of thousands to the hospital.

Beat by Heat

American cities are ill-suited for the heat. The asphalt and concrete that dominate our infrastructure mercilessly intensify it, driving outdoor temperatures in urban areas up by as much as an additional 20 degrees Fahrenheit on a hot afternoon. We can often ignore the fact that we've trapped ourselves in metropolis-scaled frying pans, however, thanks to air conditioning in our cars and homes.

But the uptick in extreme weather attributed to climate change, plain old heat included, has continually put many cities' energy grids under threat. So what happens when all those AC units suddenly lose power?

If a blackout hits during a blistering heat wave: an absolute catastrophe, according to an op-ed for The New York Times by Jeff Goodell, author of "The Heat Will Kill You First," citing expert research. In one extreme scenario befalling an American city, more than ten thousand people could die as the result of a single heat wave, with hundreds of thousands more sent to the hospital.

Goodell said that Mikhail Chester, director of the Metis Center for Infrastructure and Sustainable Engineering at Arizona State University, once likened such a scenario to "the Hurricane Katrina of extreme heat" — underscoring the vulnerability of American infrastructure to soaring temperatures.

Widespread Wipeout

Those staggering figures come from a study published last year in the journal Environmental Science & Technology, which explored what would happen if a major, two-day blackout took place during a heat wave in Phoenix, Detroit, and Atlanta.

Some 99 percent of buildings in Phoenix have AC, according to the study, which puts the heaviest cooling load on its power grid. Atlanta is just behind at 94 percent, and Detroit, the coolest city, ranked at 53 percent.

In Phoenix — which last year went an entire month with temperatures of 110 degrees Fahrenheit or higher — the death toll could be monumental. The study found that a whopping 800,000 people — half the Arizona capital's population — would need emergency medical care, and more than 13,000 would die.

These figures were considerably lower for Atlanta and Detroit, which are cooler cities, but no less worrying: six people would die in Atlanta, the study estimated, and 221 in Detroit.

According to the researchers, the number of major blackouts in the US more than doubled between 2015-16 and 2020-21 — though not all were due to the climate.

"It doesn't really matter if the blackout is the result of a cyberattack or a hurricane," study lead author Brian Stone, director of the Urban Climate Lab at Georgia Tech, told the NYT. "For the purposes of our research, the effect is the same."

Either way, hot temperatures do cause power grids to fail, and so we'll have some serious infrastructural overhauls to do — and perhaps changes in our power consumption habits — to ensure that cities can withstand the heat.

More on climate change: Mexico Getting So Hot That Monkeys Are Falling Dead From Trees


OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity

A former OpenAI researcher became so convinced that the technology would usher in doom for humanity, he left the company and called it out.

Getting Warner

After former and current OpenAI employees released an open letter claiming they're being silenced against raising safety issues, one of the letter's signees made an even more terrifying prediction: that the odds AI will either destroy or catastrophically harm humankind are greater than a coin flip.

In an interview with The New York Times, former OpenAI governance researcher Daniel Kokotajlo accused the company of ignoring the monumental risks posed by artificial general intelligence (AGI) because its decision-makers are so enthralled with its possibilities.

"OpenAI is really excited about building AGI," Kokotajlo said, "and they are recklessly racing to be the first there."

Kokotajlo's spiciest claim to the newspaper, though, was that the chance AI will wreck humanity is around 70 percent — odds you wouldn't accept for any major life event, but that OpenAI and its ilk are barreling ahead with anyway.

MF Doom

The term "p(doom)," which is AI-speak for the probability that AI will usher in doom for humankind, is the subject of constant controversy in the machine learning world.

The 31-year-old Kokotajlo told the NYT that after he joined OpenAI in 2022 and was asked to forecast the technology's progress, he became convinced not only that the industry would achieve AGI by the year 2027, but that there was a great probability that it would catastrophically harm or even destroy humanity.

As noted in the open letter, Kokotajlo and his comrades — who include former and current employees at Google DeepMind and Anthropic, as well as Geoffrey Hinton, the so-called "Godfather of AI" who left Google last year over similar concerns — are asserting their "right to warn" the public about the risks posed by AI.

Kokotajlo became so convinced that AI posed massive risks to humanity that eventually, he personally urged OpenAI CEO Sam Altman that the company needed to "pivot to safety" and spend more time implementing guardrails to rein in the technology rather than continue making it smarter.

Altman, per the former employee's recounting, seemed to agree with him at the time, but the response eventually came to feel like mere lip service.

Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had "lost confidence that OpenAI will behave responsibly" as it continues trying to build near-human-level AI.

"The world isn’t ready, and we aren’t ready," he wrote in his email, which was shared with the NYT. "And I’m concerned we are rushing forward regardless and rationalizing our actions."

Between the big-name exits and these sorts of terrifying predictions, the latest news out of OpenAI has been grim — and it's hard to see it getting any sunnier moving forward.

"We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk," the company said in a statement after the publication of this piece. "We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world."

"This is also why we have avenues for employees to express their concerns including an anonymous integrity hotline and a Safety and Security Committee led by members of our board and safety leaders from the company," the statement continued.

More on OpenAI: Sam Altman Replaces OpenAI's Fired Safety Team With Himself and His Cronies


Scientists Find That a Tiny Proportion of People Spread Almost All the Fake News, and They Turn Out to Be Exactly Who You’d Expect


Naomi Wolf Pipeline

A new study shows that a minuscule subset of "supersharers" spread the overwhelming majority of fake news on social media during the 2020 election cycle. The average supersharer profile, according to the research? Older, white, conservative, and incredibly online women in red states. Cue the gasp!

The study, published this week in the journal Science, analyzed data from the accounts of 660,000 verifiably real, US-based voters on the platform X, formerly known as Twitter.

Of these hundreds of thousands of American netizens, the researchers — a team comprising American and Israeli scientists — were able to determine that only about 2,000 users were responsible for sharing a whopping 80 percent of misinformation that spread during the 2020 election.

When the researchers examined the voter registration information attributed to the supersharers, a clear pattern emerged: they were disproportionately likely to be middle-aged-to-older white women with an average age of 58; they were also primarily Republican and lived in conservative states like Florida, Texas, and Arizona.

These users aren't just active, either. Per the journal's writeup, more than one in every 20 X users examined for the study were following these accounts, meaning that these supersharers are punching way above their weight in terms of reach. (The study builds on an earlier 2019 study from many of the same researchers, which found similar supersharer results when analyzing the 2016 election cycle.)

In a way, they could be likened to fake news influencers. Popular conspiracy websites like Infowars and Gateway Pundit publish fake news, which then makes its way to supersharers, who distribute it to the social media masses.

Final Boss

Though the researchers expected to find that the supersharers' many tweets were somehow automated, there was no clear timing pattern or other indicator suggesting that was the case. Instead, they found the opposite: that these folks are fully plugged into the misinformation IV, mainlining fake news and manually clicking retweet over, and over, and over again.

"That was a big surprise," study coauthor Briony Swire-Thompson, a psychologist at Northeastern University, told Science. "They are literally sitting at their computer pressing retweet."

To that end, it's also unlikely that the supersharing cohort in question was part of a coordinated disinformation effort. On the contrary, according to the researchers, these users instead represent a caustic breakdown in the way online fake news is created, shared, and consumed by a large faction of American voters. And though this study was about the 2020 election, as we all go kicking and screaming into November 2024, it's important to remember that not everyone exists in the same digital reality.

"It does not seem like supersharing is a one-off attempt to influence elections by tech-savvy individuals," Nir Grinberg, study co-author and computational social scientist at Israel's Ben-Gurion University of the Negev, told Science, "but rather a longer-term corrosive socio-technical process that contaminates the information ecosystem for some part of society."

More on fake news: Police Say AI-Generated Article about Local Murder Is "Entirely" Made Up


Scientists Find Plastic-Eating Fungus Feasting on Great Pacific Garbage Patch

Marine scientists discovered an ocean-borne fungus chomping through plastic trash in the Great Pacific Garbage Patch.

Chomp Chomp

Does nature have to do everything itself?

An international cohort of marine scientists discovered an ocean-borne fungus chomping through plastic trash suspended in the Great Pacific Garbage Patch, as detailed in a new study published in the journal Science of the Total Environment.

Dubbed Parengyodontium album, the fungus was discovered among the thin layers of other microbes that live in and around the floating plastic pile in the North Pacific.

According to the study, it's the fourth known marine fungus capable of consuming and breaking down plastic waste. Researchers found that P. album was specifically able to break down UV-exposed carbon-based polyethylene, which is the type of plastic most commonly used to make consumer products, like water bottles and grocery bags — and the most pervasive form of plastic waste that pollutes Earth's oceans.

"It was already known that UV light breaks down plastic by itself mechanically," said study lead author Annika Vaksmaa, a marine biologist and biogeochemist at the Royal Netherlands Institute for Sea Research (NIOZ), in a statement, "but our results show that it also facilitates the biological plastic breakdown by marine fungi."

Don't Get Carried Away

Before you get ahead of yourself: no, this discovery doesn't mean that you should start consuming single-use plastics with abandon. Our oceans are overrun with destructive plastic pollutants, and refraining from plastic use as much as possible is still our best bet at keeping plastic from plugging up the Earth's vital — though fragile — oceans with animal- and environment-harming garbage.

Mitigating and removing the plastic that's already clogging Earth's waterways is still an important goal. But doing so unfortunately isn't quite as simple as scooping it out of the ocean en masse. Trawling for plastic with large nets can disturb marine life, and efforts to do so are costly and often wasteful themselves.

So in the fight to reduce ocean plastic, finding a new fungus capable of speeding up the plastic degradation process is an exciting turn. But it's not a cure-all. According to the research, lab-grown P. album broke down UV-treated plastic at a rate of roughly 0.05 percent per day over a nine-day incubation period. Which isn't nothing, but at that pace it would take the fungus a very long time to get through the entirety of the Great Pacific Garbage Patch, let alone the millions of metric tons of plastics that enter the ocean every year.
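To put that rate in perspective, here's a back-of-envelope calculation. This is a sketch, not a claim from the study: the constant-rate assumption and both extrapolation models are simplifications, since real-world degradation depends on UV exposure, temperature, and competition from other microbes.

```python
import math

# Measured in the lab: ~0.05% of the plastic's mass degraded per day.
daily_rate = 0.0005  # expressed as a fraction per day

# Naive linear extrapolation: days until a piece is fully degraded.
days_linear = 1 / daily_rate
print(f"Linear extrapolation: {days_linear:.0f} days (~{days_linear / 365:.1f} years)")

# Alternative exponential model: if degradation is proportional to the
# remaining mass, the half-life is ln(2) divided by the daily rate.
half_life = math.log(2) / daily_rate
print(f"Exponential half-life: {half_life:.0f} days (~{half_life / 365:.1f} years)")
```

Either way you model it, a single piece of plastic takes years to break down, which is why the researchers frame the fungus as one contributor among many rather than a solution on its own.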

Regardless, the P. album finding is heartening — and according to the researchers, this latest discovery suggests that more plastic-eating organisms might be out there.

"Marine fungi can break down complex materials made of carbon," added Vaksmaa, adding that it's "likely that in addition to the four species identified so far, other species also contribute to plastic degradation."

More on plastic-hungry microbes: Scientists Gene-Hack Bacteria to Turn Waste Plastic Into Kevlar-Like Spider Silk


Forensic Analysis Finds Overwhelming Similarities Between OpenAI’s Voice and Scarlett Johansson

The analysis used several AI models to compare the OpenAI voice to the voices of around 600 actresses, including Scarlett Johansson.

A+ Copycat

OpenAI's controversial "Sky" voice for ChatGPT sounds remarkably similar to the voice of Scarlett Johansson, a forensic analysis has found, adding weight to what many already suspected and what Johansson herself has charged: that OpenAI deliberately mimicked the actress's voice without her permission.

The analysis, conducted by researchers at Arizona State University and commissioned by NPR, used several AI models to evaluate similarities between the voices of Sky and about 600 actresses, including Johansson.

Lo and behold, it found that Johansson's voice was more similar to Sky than 98 percent of the other candidates.

There are a few caveats, however. Johansson wasn't always the top scorer, with the voices of Anne Hathaway and Keri Russell "often" being rated as more alike, according to NPR. Sky's voice is slightly higher pitched and more expressive, too, while Johansson's is breathier.

But other parts of the analysis are damning, such as one that simulated the speakers' vocal tracts based on the characteristics of their voice and found that Sky and Johansson would have identical tract lengths.

Visar Berisha, a computer scientist at ASU who led the analysis, summed it up neatly. "Our analysis shows that the two voices are similar but likely not identical," he told NPR.
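NPR's write-up doesn't name the specific models the ASU team used, but a common building block for this kind of comparison is scoring fixed-length speaker embeddings with cosine similarity. A minimal sketch with synthetic, made-up vectors standing in for real voice embeddings:

```python
import numpy as np

# Speaker-verification pipelines typically map each voice clip to a
# fixed-length embedding vector, then score pairs with cosine similarity.
# The vectors here are random stand-ins, purely for illustration.
rng = np.random.default_rng(0)
reference = rng.normal(size=256)  # stand-in for the "Sky" voice embedding
candidates = {
    "similar_speaker": reference + rng.normal(scale=0.3, size=256),
    "unrelated_speaker": rng.normal(size=256),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = {name: cosine_similarity(reference, vec) for name, vec in candidates.items()}
# A candidate whose embedding sits close to the reference scores near 1.0;
# an unrelated voice scores near 0.
print(scores)
```

Ranking 600 candidates by a score like this is how "more similar than 98 percent of the others" becomes a concrete, computable claim.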

Sky-High Lies

The controversy stems from a big update to ChatGPT released last month, which debuted a new voice assistant capable of real-time conversation.

Sky was one of those voices, and soon enough, people took note of its resemblance to Johansson's role in the sci-fi movie "Her," in which she voices a chirpy AI chatbot that the film's melancholic protagonist falls in love with.

If those parallels weren't already suspicious, they were all but confirmed by OpenAI CEO Sam Altman — a professed fan of the movie — who cheekily tweeted the single word "her" on the day of the voice assistant's release.

Then in a blundering series of backpedals, the AI company suddenly pulled the Sky voice days later, but said it had not copied ScarJo's voice. Instead, it claimed, a different actress was behind the chatbot (which was later corroborated by reporting from The Washington Post).

Johansson fired back, revealing that OpenAI had in fact twice approached her to license her voice. She turned the offers down, only to discover that OpenAI had released a chatbot with a voice she thought was "eerily similar" to hers.

In the face of mounting negative PR, OpenAI has maintained that this whole fiasco was simply the fault of its poor communication with the actress. Johansson hasn't filed a lawsuit yet, but she has hired lawyers. Many legal experts already believed that she would have a strong case. And now, with these latest forensic findings, it could be even stronger.

More on AI: OpenAI Insiders Say They're Being Silenced About Danger


OpenAI Negotiating to Buy "Vast Quantities" of Fusion Power, Which Doesn’t Exist Yet


For all his public visibility, Sam Altman only gets a measly $65,000 a year in salary from OpenAI, and no ownership stake.

But as The Wall Street Journal reports, the CEO has a far more lucrative venture fund to pay the bills: he's invested in hundreds of companies, many of which are directly benefiting from his AI company's soaring success.

And some of those companies directly do business with OpenAI, raising questions over possible conflicts of interest.

Near the top of that list is Helion, a nuclear fusion power company that's been around for about 11 years.

And as it turns out, the company didn't just sign a massive partnership with OpenAI partner Microsoft last year, but it's even in talks for a deal with OpenAI itself to "buy vast quantities of electricity to provide power for data centers," according to the WSJ.

But there's one big problem: the technology has yet to materialize. Try as they might, researchers have yet to figure out how to make fusion a viable way to generate energy, making any promise of "vast" amounts of power an empty commitment about the distant future. For now, it remains a moonshot.

At the same time, the revelation throws Altman's already dubious personal dealings into an even murkier light.

Despite fusion remaining a glint in the eye of nuclear engineers, Altman professes to believe in the promise of a renewable source of electricity, having invested $375 million in Helion back in 2021, the biggest payout he's ever made to a startup.

"Helion is, like, more than an investment to me," he said at a StrictlyVC event last year. "It’s the other thing beside OpenAI that I spend a lot of time on. I’m just super excited about what’s going to happen there."

In many ways, it would be an elegant — albeit entirely unproven — solution to OpenAI's insatiable energy demands. Training AI is an infamously power-hungry process that consumes a staggering amount of water as well. Apart from fusion energy, Microsoft is also investigating building small nuclear fission reactors to keep its data centers running.

But despite scientists repeatedly claiming various "breakthroughs" in the field of nuclear fusion, we have yet to build a reactor that can produce a significant amount of energy.

Still, Altman is doubling down, claiming during this year's World Economic Forum in Davos that the future of AI will depend on a "breakthrough" in power generation. "It motivates us to go invest more in fusion," he said at the time.

The recently minted billionaire has even implied that artificial general intelligence, a form of AI that would supersede the capabilities of humans, could "make fusion" happen.

In other words, like many of his peers in the venture capital world, Altman is in the business of selling dreams, not technological actualities. Besides, as he recently admitted, his AI company doesn't even know how its current crop of AI models works in the first place.

"We can build AGI," he tweeted back in 2022. "We can colonize space. We can get fusion to work and solar to mass scale. We can cure all human disease. We can build new realities."

More on Sam Altman: Sam Altman Admits That OpenAI Doesn't Actually Understand How Its AI Works

The post OpenAI Negotiating to Buy "Vast Quantities" of Fusion Power, Which Doesn't Exist Yet appeared first on Futurism.

The AI Industry Is Swarming DC With Lobbyists

A report by Public Citizen found that the number of AI lobbyists more than doubled in 2023 from the year before.

Swarm of the Suits

As the AI industry continues to balloon, so does its army of lackeys ready to buttonhole lawmakers for favorable regulation. According to a new report by the consumer rights advocacy group Public Citizen, thousands of AI lobbyists have descended upon the Capitol, in a dramatic surge of influence that's already coincided with major policy decisions.

Between 2019 and 2022, the number of lobbyists sent by corporations and other groups on AI-related issues stayed relatively equal year-to-year, hovering around 1,500. Then in 2023, things went off the charts, with over 3,400 lobbyists flooding Washington DC — an increase of more than 120 percent.

"We're reaching a point where the policies that are going to shape AI policy in the next 10 years are really being decided now," Mike Tanglis, research director at Public Citizen's Congress Watch division, told The Hill. "From our perspective, having the leading voices on an issue being those that stand to make billions of dollars is generally not a good idea for the public."

Power Up

Those numbers show that the AI lobby has had a sizeable presence for years. It's only now, with the mainstream popularity of chatbots like ChatGPT and image generators like Midjourney, that people are beginning to take notice — and that the number of AI lobbyists has begun to significantly climb.

One of the more brazen displays of the industry's sway over the federal government took place last fall, when dozens of tech leaders, from Elon Musk to Sam Altman, gathered for a historic closed-door session with over 60 US senators, lecturing them about the future of AI.

Unsurprisingly, a plurality of the lobbyists today comes from the tech industry — 700 of them, or 20 percent of that total. But a mix of 17 different industries comprised the other 80 percent, illustrating the wide scope of intersecting interests in AI.

Advocacy groups accounted for the next largest chunk with 425 lobbyists, who could be either pro- or anti-AI. Others included defense, health care, financial services, and education, all with clear stakes.

Presidential Prize

What's also interesting isn't just where these lobbyists are coming from, but where they're going. Excluding both houses of Congress — the most obvious target — the White House was the most lobbied body of the federal government last year, with over 1,100 lobbyists.

Their reasons for hounding the Oval Office are obvious. In October, President Joe Biden issued an executive order on AI laying down ground rules for its development, which were noticeably, and perhaps acceptably, vague. If the AI industry hadn't already influenced that ordinance, it will undoubtedly keep sending lobbyists to shape how it's enforced in the future. Case in point: the report found that the number of lobbyists went up immediately after the executive order was issued.

Of course, what really talks is the money, and those figures are less clear. A recent report from OpenSecrets found that groups that lobbied the government on AI spent more than $957 million last year — but that represents a range of interests, and not just on the emerging technology.

But, as Public Citizen's report concludes, expect all those figures to climb — dollars, suits, you name it.

More on AI: OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity

The post The AI Industry Is Swarming DC With Lobbyists appeared first on Futurism.

News Site Says It’s Using AI to Crank Out Articles Bylined by Fake Racially Diverse Writers in a Very Responsible Way

A news network is attributing AI-spun articles to fake authors with racially diverse names. Its publisher claims the names were unintentional.

A national network of local news sites called Hoodline is using fake authors with fictional and pointedly racially diverse names to byline AI-generated articles.

The outlet's publisher claims it's doing so in an extremely normal, human-mitigated way. But unsurprisingly, a Nieman Lab analysis of the content and its authors suggests otherwise.

Per Nieman, Hoodline websites were once a refuge for hyperlocal human-boots-on-the-ground reporting. These days, though, when you log onto a Hoodline site, you'll find articles written by a slew of entirely fabricated writers.

Hoodline is owned by a company called Impress3, which in turn is helmed by a CEO named Zack Chen. In April, Chen published an article on Hoodline's San Francisco site explaining that the news network was using "pen names" to publish AI-generated content — a euphemism that others have deployed when caught publishing work by fake writers in reputable outlets.

In that hard-to-parse post, Chen declared that these pen names "are not associated with any individual live journalist or editor." Instead, "the independent variants of the AI model that we're using are tied to specific pen names, but are still being edited by humans." (We must note: that's not the definition of a pen name, but whatever.)

Unlike the fake authors that Futurism investigations discovered at Sports Illustrated, The Miami Herald, The LA Times, and many other publications, Hoodline's made-up authors do have little "AI" badges next to their names. But in a way, as Nieman notes, that disclosure makes its writers even stranger — not to mention more ethically fraught. After all, if you're going to be up-front about AI use, why not just publish under a generalized byline, like "Hoodline Bot"?

The only reason to include a byline is to add some kind of identity, even if a fabricated one, to the content — and as Chen recently told Nieman, that's exactly the goal.

"These inherently lend themselves to having a persona," Chen told the Harvard journalism lab, so "it would not make sense for an AI news anchor to be named 'Hoodline San Francisco.'"

Which brings us to the details of the bylines themselves. Each city's Hoodline site has a bespoke lineup of fake writers, each with their own fake name. In May, Chen told Bloomberg that the writers' fake names were chosen at random. But as Nieman found, the fake author lineups at various Hoodline websites appear to reflect a given region's demographics, a reality that feels hardly coincidental. Hoodline's San Francisco-focused site, for example, published content under fake bylines like "Nina Singh-Hudson," "Leticia Ruiz," and "Eric Tanaka." But as Nieman's Neel Dhanesha writes, the "Hoodline site for Boston, where 13.9 percent of residents reported being of Irish ancestry in the 2022 census, 'Leticia Ruiz' and 'Eric Tanaka' give way to 'Will O'Brien' and 'Sam Cavanaugh.'"

In other words, it strongly seems as though Hoodline's bylines were designed to appeal to the people who might live in certain cities — and in doing so, Hoodline's sites didn't just manufacture the appearance of a human writing staff, but a racially varied one to boot. (In reality, the journalism industry in the United States is starkly lacking in racial diversity.)

And as it turns out? Hoodline's authors weren't quite as randomized as Chen had previously suggested.

"We instructed [the tool generating names] to be randomized, though we did add details to the AI tool generating the personas of the purpose of the generation," Chen admitted to Nieman, adding that his AI was "prompted to randomly select a name and persona for an individual who would be reporting on — in this case — Boston."

"If there is a bias," he added, "it is the opposite of what we intended."

Chen further claimed that Hoodline has a "team of dozens of (human) journalist researchers who are involved with information gathering, fact checking, source identification, and background research, among other things," though Nieman's research unsurprisingly found a number of publishing oddities and errors suggesting there might be less human involvement than Chen was letting on. Hoodline also doesn't have any kind of masthead, so it's unclear whether its alleged team of "dozens" reflects the same kind of diversity it's awarded its fake authors.

It's worth noting that a similar problem existed in the content we discovered at Sports Illustrated and other publishers. Like at Hoodline, many of these fake writers were attributed racially diverse names; many of these made-up writer profiles even went a step further and were outfitted with AI-generated headshots depicting fake, diverse faces.

Attributing AI-generated articles to fake writers at all, regardless of whether they have an "AI" badge next to their names, raises red flags from the jump. But fabricating desperately needed diversity in journalism by whipping up a fake writing staff — as opposed to, you know, hiring real humans from different minority groups — is a racing-to-the-bottom kind of low.

It seems that Hoodline's definition of good journalism, however, would differ.

"Our articles are crafted with a blend of technology and editorial expertise," reads the publisher's AI policy, "that respects and upholds the values of journalism."

More on fake writers: Meet AdVon, the AI-Powered Content Monster Infecting the Media Industry

The post News Site Says It’s Using AI to Crank Out Articles Bylined by Fake Racially Diverse Writers in a Very Responsible Way appeared first on Futurism.

Boeing Spacecraft Finally Manages to Limp Off the Earth

Boeing has finally launched its much-maligned Starliner astronaut shuttle into space with two NASA astronauts on board.

Participation Trophy

The third time's the charm — NASA and Boeing have finally done it.

After many years of delays, technical issues, an unsuccessful test flight and plenty of bad luck, Boeing has finally launched its much-maligned Starliner astronaut shuttle into space.

The United Launch Alliance's Atlas V rocket took off around 10:52 am, just as planned, from Space Launch Complex-41 at Cape Canaveral Space Force Station in Florida, carrying astronauts Butch Wilmore and Suni Williams into orbit.

It's far from the first time we've seen a privately developed spacecraft carry a crew into space, but the mission made Williams the first woman to fly on the debut crewed test flight of an orbital spacecraft.

It's a triumphant moment for the spacecraft, which has been in development for roughly a decade now. The project, which was meant to compete with SpaceX's Crew Dragon capsule under NASA's commercial crew program, encountered plenty of setbacks, from the discovery of flammable materials inside the spacecraft to a strange "buzzing" sound that forced officials to scrub the first of three launch attempts last month.

Despite those hurdles, Boeing has now prevailed, limping over the finish line with today's crewed test launch.

Better Late

However, the elephant in the room is SpaceX, which has already made ten successful trips to the ISS over the last five years (and with a reusable rocket, to boot).

Nonetheless, it's still a momentous occasion in the world of space exploration, a big step in the United States' efforts to establish independent ways to get astronauts into space without relying on Russia.

"Congratulations to NASA, Boeing, and ULA on this morning’s launch to the space station, and Godspeed to Butch, Suni and Starliner on your flight!" SpaceX president and COO Gwynne Shotwell tweeted.

But Boeing isn't out of the woods just yet. The capsule is expected to take around 25 hours to reach the orbital space station, at which point it'll have to adjust its trajectory perfectly for docking (something the spacecraft has already done without any crew on board).

Needless to say, we wish Butch and Suni the best of luck.

More on the launch: Boeing Keeps Making Excuses to Push Back First Astronaut Launch

The post Boeing Spacecraft Finally Manages to Limp Off the Earth appeared first on Futurism.

The Cybertruck’s Steering Has a Significant Lag

Tesla's Cybertruck's steer-by-wire system has a considerable delay. But that may be a feature, not a bug, as many netizens have argued.

Tesla's Cybertruck is a major departure from conventional automotive design in many ways, from its peculiar shape to its use of stainless steel.

High on that list is also the pickup's steer-by-wire system, which translates the movement of the steering yoke to all four wheels using actuators, foregoing any physical connection.

Tesla claims that the system means that "steering Cybertruck feels more responsive and requires less effort from the driver."

But if a recent video spotted by Jalopnik is anything to go by, the system suffers from a considerable delay between the movement of the steering yoke and the front wheels, raising questions over whether the truck is truly safe to drive.

The cybertruck has a fly-by-wire steering wheel.... and it LAGS ??? pic.twitter.com/nUbrCXjU0r

— Heart (@heartereum) June 3, 2024

The video quickly drew plenty of derision.

"Imagine crashing because of your steering ping," one user joked.

But as many other netizens have since pointed out, there may be a good reason for the delay.

Other users on Tesla CEO Elon Musk's social media platform X quickly amended the viral video with a Community note, arguing that "without steer by wire would take far longer to make that turn."

"It isn't lag," the note reads. "This is a safety feature."

They may be onto something. Going from one extreme of the steering range to the other takes a considerable amount of steering wheel movement in a conventional car, as one Reddit user demonstrated with his Ford F-150. Besides, completing the maneuver seen in the video while traveling at speed could result in very erratic and potentially dangerous movement, and in extreme cases even flip the vehicle (although you'd have to try very hard to flip a 6,600-pound EV).

The Cybertruck's steering wheel is also designed to translate far more movement to the wheels with relatively little turning of the steering yoke at slower speeds. At highway speeds, that ratio becomes much lower to ensure stability on the road.

"There’s absolutely no real-life scenario in which you need to turn the wheels that quickly while stationary," one Reddit user pointed out.

Car journalists have generally spoken highly of the steer-by-wire system, noting the truck's surprising agility. However, most have also noted that the unusual setup takes time to get used to.

But what about the responsiveness of the steering at higher speeds? What would happen if a Cybertruck driver had to swerve out of the way of an oncoming obstacle, a situation where every fraction of a second counts? As users on Hacker News pointed out, even a minimal amount of lag could lead to a driver overreacting, making the situation worse.

Plenty of questions remain. For one, we don't know whether the delay is present when the Cybertruck is in motion, or how any lag while moving would compare to the stationary case, especially when taking its variable steering ratio into account.

Nonetheless, there's a good case to be made that this particular video may have primarily served as a way to take a potshot at Tesla and draw a crowd.

To be clear, there are plenty of other valid criticisms of the unusual pickup, including terrible range, shoddy workmanship, besmirched body panels, lack of manual controls, a finicky and unreliable truck bed cover — and lots of lemons being delivered to customers.

"There's many, many, many, many reasons to hate on the Cybertruck but this isn't one of them," one Reddit user argued.

More on the Cybertruck: Elon Musk Is Gonna Blow a Gasket When He Sees This Pride-Themed Cybertruck

The post The Cybertruck's Steering Has a Significant Lag appeared first on Futurism.

NASA Slaps Down Billionaire’s Plan to Fly Up and Fix Hubble Space Telescope

Billionaire space tourist Jared Isaacman offered to fix NASA's Hubble telescope. Officials are still unsure the benefits outweigh the risks.

Offer Declined

NASA's groundbreaking Hubble Space Telescope is on its last legs.

Ongoing issues with the aging spacecraft's remaining gyroscopes, which help point it in the right direction, have forced scientists to limit its scientific operations, according to a Tuesday update, with teams preparing for "one-gyro operations."

And while billionaire space tourist Jared Isaacman, who already circled the Earth inside a SpaceX Crew Dragon, has offered to foot the bill for a Hubble maintenance mission — the last one took place in 2009, before the end of the Space Shuttle program — NASA has now turned him down.

Basically, the agency is worried Isaacman and his collaborators may end up doing more harm than good.

"After exploring the current commercial capabilities, we are not going to pursue a reboost right now," said NASA astrophysics director Mark Clampin, as quoted by CBS News. While NASA "greatly appreciates" their efforts, "our assessment also raised a number of considerations, including potential risks such as premature loss of science and some technology challenges."

However, the door isn't entirely shut just yet.

"So while the reboost is an option for the future, we believe we need to do some additional work to determine whether the long-term science return will outweigh the short-term science risk," Clampin concluded.

Thanks, But No Thanks

It's yet another intriguing development in the ever-changing relationship between NASA and the burgeoning private industry it's increasingly relying on for access to space.

As NPR reported last month, NASA spent years hemming and hawing over Isaacman's offer to visit the Hubble.

The entrepreneur and trained fighter jet pilot commanded the first all-civilian mission into space, which saw a crew of four circle the Earth inside a SpaceX Crew Dragon spacecraft in September 2021. He has been calling for a maintenance mission, arguing that "the 'clock' is being run out on this game."

Isaacman will also attempt to perform the first-ever private spacewalk later this year.

But plenty of concerns remain, with NASA pointing out that SpaceX's Crew Dragon isn't exactly designed for such a mission, and lacks several core features compared to NASA's Space Shuttle, which was used to service the Hubble five times between 1993 and 2009.

For one, it doesn't have an airlock or a robotic arm, which could make repairing the Hubble difficult.

Besides, even during NASA's servicing missions, astronauts came nail-bitingly close to permanently damaging the space telescope.

Instead, NASA is looking for ways to eke just over another decade of life out of the Hubble, without a SpaceX-enabled visit.

"We updated reliability assessments for the gyros... and we still come to the conclusion that (we have a) greater than 70 percent probability of operating at least one gyro through 2035," Hubble project manager Patrick Crouse told reporters on Tuesday.

More on the Hubble: NASA Experts Concerned Billionaire Space Tourist Will Accidentally Break Hubble Space Telescope While Trying to Fix It

The post NASA Slaps Down Billionaire's Plan to Fly Up and Fix Hubble Space Telescope appeared first on Futurism.

Facebook Page Uses AI-Generated Image of Disabled Veteran to Farm Engagement

An AI-generated image of a young disabled veteran went viral on Facebook. A lot of folks — particularly older men — think she's the real deal.

An image, posted this week to a Facebook page called "Summer Vibes," shows a smiling young woman with brunette hair. She's dressed in Army fatigues — although, puzzlingly, she's not wearing pants, and the mangled American flag patch on the arm of her jacket has only six stripes and zero stars. She's white. She's conventionally attractive. And crucially, this grinning young woman is seated in a wheelchair, implying that she's an injured or disabled veteran.

"Please don't swip [sic] up without giving some love," reads the image's garbled caption. "Without heroes,we [sic] are all plain people,and [sic] don't know how far we can go." The caption is then followed by a string of hashtags listing the names of famous actresses like Anne Hathaway, Megan Fox, and Jennifer Lopez (as well as Christian Bale and Chris Evans, for some reason.)

Needless to say, the woman isn't real. She's AI-generated, and to many, that's obvious. In addition to the woefully incorrect American flag tacked onto the uniform, the last name that would normally appear on a soldier's pocket is an illegible clump of blobs that, when zoomed out, gives off only the semblance of lettering. Her teeth, eyes, and ears are also blurry and uncanny, as are her poorly defined hands.

And yet, despite these obvious flaws, the image has gone viral: at present, it has more than 62,000 reactions, nearly 5,000 comments, and 2,500 shares. And judging by the comments section? A lot of folks — particularly older men — absolutely think she's the real deal.

"Thank you for your sacrificial service to America and its citizens to maintain, [sic] our Republic, our Constitution and our God given [sic] rights and freedoms!" wrote one commenter, noting that he served in the military during the Vietnam war. "Thank you Summer, you are a beautiful, brave young lady!" he added, rounding the post out with heart, American flag, Statue of Liberty, and bald eagle emojis.

"Thank you my sister in arms," wrote another older man, "bless you for your service and dedication."

"Beautiful," added yet another. "Thank you for your service and prayers for healing and mercies and comfort from our Lord Jesus Christ Amen."

"Miss Beautiful USA!" yet another older guy chimed in. "THANK YOU FOR YOUR SERVICE."

Though the title of the Facebook page — "Summer Vibes" — would suggest a feed of poolside shots and cocktails with tiny umbrellas, its posts are neither summery nor even vibey. It's a spam page, dedicated to posting what's likely an automated stream of images and graphics featuring battle-wounded war veterans; each post is outfitted with that same error-packed caption imploring users not to scroll through without "giving some love." Despite the page's continued pleas to support the folks in the many photos and graphics, it doesn't give any information about them, or link to any charities or donations. The page instead keeps posting image after image, begging for likes, comments, and shares.

The vast majority of Summer Vibes' images appear to be of real veterans. However, most of these posts don't get a ton of reach — some gain a sparse handful of likes and comments, others might rake in a few hundred on a good day. But like Facebook's now-infamous Shrimp Jesus AI images, not to mention countless other AI outputs that have recently gone viral on the platform, it seems that the pantsless AI vet was scooped up by Facebook's recommendation algorithm and took off from there.

It's concerning for a few reasons. On the one hand, we have plenty of real disabled veterans who deserve care, respect, and medical and financial help. Distracting from these actual humans — who have been historically neglected upon their return from service — with a viral image of a fake one immediately raises ethical red flags. And broadly speaking, images like this clogging up the internet, where real people are still trying to share information and communicate, isn't great. (We reached out to Summer Vibes, but haven't received any response.)

Then, of course, there's the reaction to the image. As far as AI images go, this one isn't even particularly good or convincing. But some Facebook users — again, mostly older people, and especially older men — fell hook, line, and sinker for the fake photo. Some were even persuaded enough to push back against the few commenters who pointed out that the image was AI-made.

"What makes u so sure of that??" read one such retort.

This kind of reaction also has implications beyond synthetic clickbait. In March, a BBC report revealed that MAGA influencers and pundits were using AI to generate fabricated images of former president Donald Trump posing with groups of Black voters, a demographic among whom the presidential candidate hopes to shore up support in his ongoing 2024 campaign.

When we looked at the comments on one of these fake photos, which was posted to Facebook by a far-right media personality as part of their effort to sell a Rush Limbaugh-inspired Christmas book — yes, seriously — we noticed lauding, clueless comments. Some users praised Trump; some users issued prayers; others simply remarked on the "beautiful photo." Sound familiar?

And that's just one example of AI's convincing use in political content. AI is creeping further into political campaigns and election cycles worldwide, the United States' 2024 race included, and experts have repeatedly warned of the associated risks. Spamming the web with photos of attractive fake veterans, though an objectively lousy thing to do, is one thing. But after spending some time in the cursed land that is Facebook comments, it's hard not to come away with the uneasy sense that enough fake images could make a genuine dent in what a large group of individuals believes to be true.

In a consequential year, it might just matter that old dudes on the internet are looking straight past this extremely fake brunette's mangled fingers and messed-up uniform and thanking her for her service.

More on AI and misinformation: Researchers Say Russia Is Using AI to Predict Terrorism at Paris Olympics

The post Facebook Page Uses AI-Generated Image of Disabled Veteran to Farm Engagement appeared first on Futurism.

You Might Cry When You Read This Study About What’s Happening to the Oceans

Beware the three horsemen of the ocean apocalypse, according to a new study: extreme heat, acidification, and deoxygenation.

Aquatic Omens

Beware the three horsemen of the ocean apocalypse: extreme heat, acidification, and deoxygenation. New research, published in the journal AGU Advances, has shown how this "triple threat" has drastically intensified over the past several decades, pushing our oceans ever closer to the brink in what is yet another clear consequence of climate change.

Though nothing's set in stone, the findings exhibit eerie parallels to the precursors of previous mass extinctions.

"If you look at the fossil record you can see there was this same pattern at the end of the Permian, where two-thirds of marine genera became extinct," Andrea Dutton, a climate scientist at the University of Wisconisin-Madison who was not involved in the study, told The Guardian. "We don't have identical conditions to that now, but it's worth pointing out that the environmental changes going on are similar."

New Extremes

Extreme heat, acidification, and deoxygenation are all fearsome forces on their own. Combine two or more of them and they can be catastrophic, causing what's known as column-compound extreme events (CCX), which render affected areas of the ocean virtually uninhabitable.

The research, which focused on the effects in the upper one thousand feet of the ocean, found that these compound events are growing, and now threaten up to 20 percent of global ocean volume. The waters of the North Pacific and the tropics are the hardest hit, as the only areas faced with full-blown triple CCX — at least so far.

To make matters worse, the events are only getting more extreme, lasting three times longer — up to 30 days — and are six times more intense compared to the 1960s, per the Guardian. And wherever they occur, they can cut down the amount of habitable space by up to 75 percent.

"The impacts of this have already been seen and felt," study lead author Joel Wong, a researcher at ETH Zurich, told the newspaper.  "Intense extreme events like these are likely to happen again in the future and will disrupt marine ecosystems and fisheries around the world."

Sinking Feeling

Oceans are the world's largest carbon sinks, absorbing the greenhouse gas and keeping it out of the atmosphere — and this immense burden, worsened due to climate change driven by human emissions, is taking its toll.

As the oceans absorb more carbon, their seawater becomes more acidic, damaging marine life. It also has the effect of crowding out oxygen molecules, straining aquatic populations.

Marine biomes are also enormous heat sinks. As expected, soaring global temperatures are putting them under incredible stress. But last year, the oceans also experienced a spike in warming that outpaced even the most pessimistic predictions, bewildering scientists. Who knows, then, just how extreme these compounding catastrophes can get.

More on oceans: Scientists Find Plastic-Eating Fungus Feasting on Great Pacific Garbage Patch

The post You Might Cry When You Read This Study About What's Happening to the Oceans appeared first on Futurism.

Widow Astonished by Options After Husband Dies: "Space?! I Can Shoot Him Into Space?"

After her husband died, one woman found the perfect way to send off his cremated remains after discovering space burial company Celestis.

Rest in Space

Jeremiah Corner was a lifelong fan of "Star Trek."

In 2022, he succumbed to an aggressive autoimmune disease, leaving his wife Uli to decide what to do with his cremated remains, as KOMO News reports.

After doing some research, Corner discovered space burial company Celestis — and with it, the perfect solution.

"Space?! I can shoot him into space?" Corner recalled in an interview with KOMO News.

Fitting End

Having a loved one's cremated remains launched into space doesn't come cheap, costing anywhere from $3,500 to $13,000 in the case of a deep space mission.

"I thought to myself, 'If he was alive, I'd be like, honey, do you want to keep your car or do you want to go to space?'" Corner told KOMO. "Space! So, I sold his car and sent him to space."

Since 1997, Celestis has been rocketing the ashes of the deceased into space. Over the decades, it's delivered the remains of "Star Trek" creator Gene Roddenberry, legendary physicist Dr. Gerard O’Neill, and Apollo-era astronaut Philip Chapman.

In total, the company has completed 17 memorial spaceflights, including one that impacted the Moon.

But not everybody agrees with the practice. In January, Navajo Nation president Buu Nygren filed a formal objection with NASA and the US Department of Transportation, decrying plans to deliver ashes to the lunar surface as part of US-based space startup Astrobotic's Peregrine Mission 1 as an "act of desecration."

Fortunately for Nygren — and unfortunately for Roddenberry's family — Peregrine never made it to the Moon and crash-landed in the Pacific Ocean after spending six days in orbit.

Corner's husband Jeremiah, however, fared much better. His remains flew on Celestis' Enterprise mission into deep space — named in honor of "Star Trek," of course — which launched on January 8 aboard the same United Launch Alliance Vulcan rocket as Celestis' moonbound Tranquility mission.

Corner was in good company, to say the least. Joining his ashes were the DNA of American presidents George Washington, Dwight Eisenhower, and John Kennedy, as well as some of the remains of several cast and crew members from the original "Star Trek" series.

"It felt very spiritual in a way because you're watching someone ascend, literally ascend into the heavens," Corner told KOMO News. "One of the things I wrote on his memorial was 'I give you the universe.' I loved him that much, and so I wanted to do that for him."

More on Celestis: Native Americans Say New Mission Will Desecrate the Moon

The Diamond Industry Is Withering as Beautiful Lab-Grown Diamonds Drive Down Prices

Lab-grown diamonds are taking the jewelry industry by storm — and those invested in the natural-grown kind are none too pleased.

Not Forever

As CNBC reports, consumers have developed a taste for less-expensive lab-grown diamonds, which are identical to those forged within the Earth's pressurized mantle but take only a few hours to make, rather than a few billion years.

Said to be up to 85 percent cheaper than mined diamonds, lab-grown diamonds have seen a huge jump in demand as frugal consumers seek to save money and — let's be honest — patronize sellers who don't have blood on their hands.

According to data provided to CNBC by diamond industry analyst Paul Zimnisky, lab-grown diamonds represented more than 18 percent of the diamonds sold in 2023, up from just two percent in 2017. Overall diamond prices, meanwhile, have fallen 5.7 percent this year alone, the analyst said.

This sea change has made major waves in the diamond industry, as evidenced by the current debacle at De Beers, the company that in 1948 coined the slogan "diamonds are forever." After seeing major revenue losses in the hundreds of millions of dollars, De Beers is now in a tense breakup with its majority shareholder, the mining company Anglo American — and is recommitting itself to mined diamonds in the midst of it all.

Crazy Diamond

While the drama at the diamond industry's most prestigious institutions rages on, smaller companies are left dealing with the fallout.

Take it from Ankur Daga, CEO and founder of the e-commerce jeweler Angara, who pointed to analysis suggesting that half of the engagement rings bought this year will feature lab-grown diamonds. As a survey commissioned by The Knot wedding magazine found, that figure is almost quadruple the 12 percent of shoppers who said they'd buy lab-grown in 2019.

"The diamond industry is in trouble," said Daga, with the "core issue" at hand being the "rapid growth of lab-grown diamonds."

Anish Aggarwal, the cofounder of the diamond advisory firm Gemdex, told CNBC that he believes the industry is up to the challenge — and that its own short-sightedness is likely the cause for its current crisis, anyway.

"The industry has not done large-scale... marketing for almost 20 years," Aggarwal noted. "And we’re seeing the aftermath of that."

To recapture the public's infatuation with diamonds, the industry clearly needs to get on board with the times — and, perhaps, take the L when it comes to consumers wanting to save on luxury goods during an ongoing global recession.

More on luxury: Orcas Strike Again, Sinking Yacht as Oil Tanker Called for Rescue

Remote Amazon Tribe Finally Gets Internet, Gets Hooked on Porn and Social Media

Starlink allows the Marubo people, an Amazon tribe, to have internet even in the heart of the rainforest — but it comes at a cost.

Five Bars

A remote tribe in the Amazon rainforest is getting to experience the wonders of the internet for the first time, thanks to Elon Musk's satellite network Starlink. But, by connecting to the rest of the world, it sounds like the Marubo people are beginning to pick up some of our modern bad habits.

The New York Times reports on what may sound a bit familiar: young people poring over social media feeds, streaming soccer games, and of course, gossiping over WhatsApp. Evenings are spent lounging around on their phones and playing first-person shooters and other video games.

"When it arrived, everyone was happy," said Tsainama Marubo, 73. "But now, things have gotten worse."

Some of the young men are especially getting a kick out of it. Alfredo Marubo, a leader of an association of the tribe's villages, lamented that the boys, now with their own group chats, were sharing porn and other explicit videos — unprecedented in a culture that considers even kissing in public taboo.

"We're worried young people are going to want to try it," Alfredo told the NYT, referring to what they see in porn.

Culture Rot

The Marubo have been using Starlink since September, after an American woman bought them some antennas to connect to the satellite network.

Now, some in the tribe fear that the internet poses an existential threat to their culture. Young people kill time fiddling with their smartphones instead of socializing the old-fashioned way, isolating them from their elders. Exposed to the outside world, some of the teenagers now dream of exploring it. Alfredo fears that the tribe's culture and history, which have been passed down orally, could be lost.

"Everyone is so connected that sometimes they don't even talk to their own family," he told the NYT.

Tsainama echoed those fears, but was more conflicted. "Young people have gotten lazy because of the internet," she said. "They're learning the ways of the white people. But please don't take our internet away."

A Tangled Web

The internet comes with its vices, and to combat them, leaders have imposed strict windows for using it, outside of which the connection is shut off. But they also recognize its undeniable benefits. In an area so remote that it takes several days of arduous hiking to reach, effortless and instant communication is life-changing.

New job opportunities have opened up. Villages can now easily coordinate over group chats, and also reach out to local authorities.

"It's already saved lives," Enoque Marubo, who was one of the first in the tribe to push for an internet connection, told the NYT, pointing to emergencies like venomous snakebites, which need immediate medical treatment.

"The leaders have been clear," he added. "We can't live without the internet."

More on: Something Fascinating Happens When You Take Smartphones Away From Narcissists
