Emails Show Elon Musk Begging for Privacy While Siccing His 200 Million Twitter Followers on Specific Private People He Doesn’t Like

Elon Musk has repeatedly tried to protect his own privacy at all costs while also showing a shocking disregard for other people's privacy.

Billionaire Elon Musk has demonstrated an extreme level of disregard for other people's privacy. He has a long track record of singling out specific private individuals and siccing his lackeys on them.

But when it comes to his own privacy, it's an entirely different matter.

It's a glaring double standard, with the mercurial CEO repeatedly trying to protect his own privacy at all costs. Case in point: as the New York Times reports, his staff tried to keep the construction of a ludicrously tall fence and gate at his $6 million mansion in Austin, Texas, hidden from the public.

Emails obtained by the newspaper show that Musk's handlers tried to turn the public meetings at which neighbors could speak out about his plans into private ones. His staff also argued that the city of Austin should exempt him from state and federal public records laws, efforts that ultimately proved futile.

The Zoning and Planning Commission ultimately voted to deny Musk the exceptions he was asking for to turn his mansion into a Fort Knox of billionaire quietude.

Yet while he goes to extreme lengths to keep his own affairs private, Musk's track record of invading other people's privacy — often using his enormous 200 million follower base to make other people's lives miserable — is extensive, to say the least.

In February, the billionaire was accused of publicizing the occupation of judge John McConnell's daughter to his hundreds of millions of followers, after her father ordered the Department of Education's federal grants unfrozen.

Musk has also accused Wall Street Journal reporter Katherine Long of being a "disgusting and cruel person," after she reported on how Musk had given a severely underqualified 25-year-old access to the US Treasury's payments system earlier this year.

In 2022, Musk took to Twitter to send his lackeys after Duke University professor and automation expert Missy Cummings for allegedly being "extremely biased against Tesla."

Late last year, Musk extensively bullied US International Development Finance Corporation employee Ashley Thomas on X-formerly-Twitter, resulting in major harassment by his followers on the platform.

But his capacity to take criticism — much of it deserved, considering his actions — has been abysmal.

"It’s really come as quite a shock to me that there is this level of, really, hatred and violence from the Left," Musk whined during a Fox News interview in March after his gutting of the government and embrace of extremist views inspired a major anti-Tesla movement.

"I’ve never done anything harmful," he claimed. "I’ve only done productive things."

"My companies make great products that people love and I’ve never physically hurt anyone," Musk complained in a tweet at the time. "So why the hate and violence against me?"

More on Musk: Elon Musk Is Having Massive Drama With His Mansion's Neighbors

The post Emails Show Elon Musk Begging for Privacy While Siccing His 200 Million Twitter Followers on Specific Private People He Doesn't Like appeared first on Futurism.

Scientists Scanned the Brains of Authoritarians and Found Something Weird

People who support authoritarianism have, according to a new study, something weird going on with their brains.

People who support authoritarianism on either side of the political divide have, according to a new study, something weird going on with their brains.

Published in the journal Neuroscience, new research out of Spain's University of Zaragoza scanned the brains of 100 young adults and found that those who hold authoritarian beliefs differed markedly — in brain areas associated with social reasoning and emotional regulation — from subjects whose politics hewed closer to the center.

The University of Zaragoza team recruited 100 young Spaniards — 63 women and 37 men, none of whom had any history of psychiatric disorders — between the ages of 18 and 30. Along with having their brains scanned via magnetic resonance imaging (MRI), the participants were asked questions that help identify both right-wing and left-wing authoritarianism and measure how anxious, impulsive, and emotional they were.

As the researchers defined them, right-wing authoritarians are people who subscribe to conservative ideologies and so-called "traditional values" and advocate for "punitive measures for social control," while left-wing authoritarians are interested in "violently overthrow[ing] and [penalizing] the current structures of authority and power in society."

Though participants whose beliefs align more with authoritarianism on either side of the aisle differed significantly from their less-authoritarian peers, there were also some stark differences between the brain scans of left-wing and right-wing authoritarians in the study.

In an interview with PsyPost, lead study author Jesús Adrián-Ventura said that he and his team found that right-wing authoritarianism was associated with lower grey matter volume in the dorsomedial prefrontal cortex — a "region involved in understanding others' thoughts and perspectives," as the assistant Zaragoza psychology professor put it.

The left-wing authoritarians of the bunch — we don't know exactly how many, as the results weren't broken down in the paper — had less cortical (or outer brain layer) thickness in the right anterior insula, which is "associated with emotional empathy and behavioral inhibition." Cortical thickness in that brain region has been the subject of ample research, from a 2005 study that found people who meditate regularly have greater thickness in the right anterior insula to a 2018 study that linked it to greater moral disgust.

The author, who is also part of an interdisciplinary research group called PseudoLab that studies political extremism, added that the psychological questionnaires subjects completed also suggested that "both left-wing and right-wing authoritarians act impulsively in emotionally negative situations, while the former tend to be more anxious."

As the paper notes, this is likely the first study of its kind to look into differences between right- and left-wing authoritarianism rather than just grouping them all together. Still, it's a fascinating look into the brains of people who hold extremist beliefs — especially as their ilk seize power worldwide.

More on authoritarianism: Chinese People Keep Comparing Trump's Authoritarianism to Mao and Xi Jinping

Zuckerberg Tells Court That Facebook Is No Longer About Connecting With Friends

In a federal antitrust testimony, Zuckerberg has admitted that Facebook's mission of connecting users is no longer a priority.

As times change, so do mission statements, especially in the fast-and-loose world of tech. In recent months, we've seen Google walk back its "don't be evil" pledge, and OpenAI quietly delete a policy prohibiting its software's use for "military technology."

Mark Zuckerberg's Facebook is no exception. Its 2008 motto, "Facebook helps you connect and share with the people in your life," is now a distant memory — according to Zuckerberg himself, who testified this week that Facebook's main purpose "wasn't really to connect with friends anymore."

"The friend part has gone down quite a bit," Zuckerberg said, according to Business Insider.

Instead, he says that the platform has evolved away from that model — its original claim to fame, as old heads will recall — in its over 20 years of life, becoming "more of a broad discovery and entertainment space," which is apparently exec-speak for "endless feed of AI slop."

The tech bigwig was speaking as a witness at a federal antitrust case launched by the Federal Trade Commission against Meta, the now-parent company to WhatsApp, Instagram, Threads, and Oculus.

The FTC's case hinges on a series of messages sent by Zuckerberg and his executives regarding a strategy of buying other social media platforms outright, rather than compete with them in the free and open market — a scheme that's more the rule than the exception for Silicon Valley whales like Google, Amazon, and Microsoft.

The FTC alleges that Meta began its monopolistic streak as early as 2008, when Zuckerberg buzzed in an internal email that "it's better to buy than compete." He finally got his hands on Instagram in 2012, after sending a memo arguing that Facebook (which changed its name to Meta in 2021) "had" to buy the photo-sharing app for $1 billion, fearing competition and a bidding war with fast-growing platforms like Twitter.

"The businesses are nascent but the networks are established," Zuckerberg wrote in a leaked email about startup platforms Instagram and Path. "The brands are already meaningful and if they grow to a large scale they could be very disruptive to us."

"It’s an email written by someone who recognized Instagram as a threat and was forced to sacrifice a billion dollars because Meta could not meet that threat through competition,” said the FTC’s lead counselor, Daniel Matheson.

Those internal memos are now smoking guns in what could be the biggest antitrust case since the infamous AT&T breakup of 1982, which had many similarities to the FTC's suit against Meta. Back then, AT&T held unrivaled market influence that it used to box out smaller fish and shape laws to its whims — to chase profit above all, in other words.

Meta, in parallel, has spent millions lobbying lawmakers, is the dominant player in online advertising, and currently wields a market cap of $1.34 trillion — higher than the value of all publicly traded companies in South Korea, for perspective.

The FTC's challenge will depend on whether its lawyers can convince US District Judge James Boasberg that Meta's acquisitions of Instagram and WhatsApp were illegal by notoriously weak US antitrust standards. They'll have no help from Boasberg, an Obama appointee, who has voiced skepticism about cases against Meta in the past.

"The [FTC] faces hard questions about whether its claims can hold up in the crucible of trial," Boasberg said in late 2024, adding that "its positions at times strain this country’s creaking antitrust precedents to their limits."

Whatever happens, it's clear that Zuckerberg has moved on from the idealism of the early internet — to the sloppified money-grubbing of whatever it is we have now.

More on Meta: Facebook Is Desperately Trying to Keep You From Learning What's in This Book

Huge Number of People Who Used to Like Elon Musk Now Detest Him, Polling Shows

American statistician Nate Silver has found that billionaire Elon Musk's popularity has fallen off a cliff.

Billionaire Elon Musk's popularity has fallen off a cliff — a particularly precipitous decline, because he used to be immensely popular before squandering it.

According to the latest polling averages aggregated by statistician Nate Silver, the richest man in the world's favorability is in free-fall, with a mere 39.4 percent of Americans seeing Musk positively, while a majority of 52.7 percent see him negatively.

In total, that's a net favorability of roughly -13 points — a significant drop since Donald Trump took office at the beginning of the year, when it stood at -3 points, and a stomach-churning plunge from 2016, when his favorability was a glowing +29.
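For transparency on the arithmetic, net favorability is simply the favorable share minus the unfavorable share. A minimal sketch, using the polling figures quoted above:

```python
# Net favorability = favorable percentage minus unfavorable percentage,
# using the Silver-aggregated figures cited above.
favorable = 39.4
unfavorable = 52.7
net = favorable - unfavorable
print(f"Net favorability: {net:+.1f} points")  # prints "Net favorability: -13.3 points"
```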

We just launched an Elon Musk popularity tracker to accompany our Trump approval tracker.

Currently, he's at a -14 as compared with Trump's -5. pic.twitter.com/X4IIvLIhmk

— Nate Silver (@NateSilver538) April 11, 2025

The latest numbers highlight an astonishing degree of disillusionment with Musk's indiscriminate and sloppy slashing of government budgets with the help of his so-called Department of Government Efficiency. His embrace of far-right extremist views has also proven extremely polarizing, with the billionaire going as far as to perform two Nazi salutes during Trump's post-inauguration celebration.

Anti-Musk sentiment has risen considerably since then, inspiring an entire movement, called Tesla Takedown, which has seen thousands of people peacefully demonstrating in front of the EV maker's dealerships.

The carmaker has seen its sales plummet across the globe as a result. Many investors have also grown fed up with Musk's antics and refusal to fully commit his time to the company.

How much longer Musk will continue to gut the government remains to be seen. Trump recently suggested he could be out in the coming months.

Experts have since speculated that Musk's unpopularity could be a political liability for the president, who's battling issues with his own favorability. Trump's ratings have dipped this month, following a disastrous rollout of global tariffs.

"Although Musk may eventually leave the government, he’ll remain an exceptionally important and controversial public figure even if he does," Silver wrote. "Until then, he could be a liability for Trump because he’s less popular than the president is even as Trump’s numbers have also declined."

The cracks are already starting to show. After Musk threw $25 million behind Republican judge Brad Schimel, who ran against liberal candidate judge Susan Crawford during a pivotal Wisconsin Supreme Court election earlier this month, Crawford beat Schimel handily.

It was a resounding defeat for Musk, who went as far as to hand out $1 million checks to voters in a desperate bid to sway election results.

Could his backfiring political efforts be a sign of what's still to come? Given that he's widely expected to leave his post at DOGE — while potentially falling comically far short of his initial goal of excising $2 trillion from the government budget — it remains to be seen whether surging anti-Musk sentiment will die down again.

But now that Tesla's brand has been raked through the mud, it'll likely take some time for his favorability to recover.

More on Musk: When Elon Musk Hears About Lives He's Destroyed, He Reportedly Responds With Laugh-Cry Emojis

An AI Model Has Officially Passed the Turing Test

OpenAI's GPT-4.5 model passed a Turing test with flying colors, even coming off as human more often than the actual humans.

One of the industry's leading large language models has passed a Turing test, a longstanding barometer for human-like intelligence.

In a new preprint study awaiting peer review, researchers report that in a three-party version of a Turing test, in which participants chat with a human and an AI at the same time and then evaluate which is which, OpenAI's GPT-4.5 model was deemed to be the human 73 percent of the time when it was instructed to adopt a persona. That's significantly higher than a random chance of 50 percent, suggesting that the Turing test has resoundingly been beaten.

The research also evaluated Meta's LLama 3.1-405B model, OpenAI's GPT-4o model, and ELIZA, an early chatbot developed some sixty years ago.

"People were no better than chance at distinguishing humans from GPT-4.5 and LLaMa (with the persona prompt)," wrote lead author Cameron Jones, a researcher at UC San Diego's Language and Cognition Lab, in an X thread about the work. "And 4.5 was even judged to be human significantly more often than actual humans!"

The Turing test is named after British mathematician and computer scientist Alan Turing. In 1950, Turing proposed that one way to assess a machine's intelligence was by having it engage in text-based conversations with a human interrogator, who at the same time would hold a text-based conversation with another human, out of sight. Turing called this the "imitation game." If the interrogator couldn't correctly determine which respondent was the computer and which was the human, it would suggest, on a very general level, that the machine could think like a human.

In this latest study, the researchers carried out the famed experiment on an online hub. Across eight rounds, nearly 300 participants were randomly assigned to either be an interrogator or one of the two "witnesses" being interrogated, with the other "witness" being a chatbot.

A key point here is how the AI models were prompted. One type was a "no-persona" prompt, in which the AI was given only basic instructions: "You are about to participate in a Turing test. Your goal is to convince the interrogator that you are a human." 

For the "persona" prompt, on the other hand, the AI was told to adopt a specific persona, like a young person who's knowledgeable about the internet and culture.

These instructions made a world of difference. Without persona prompting, GPT-4.5 achieved an overall win rate of merely 36 percent, significantly down from its Turing-trumping 73 percent. As a baseline, GPT-4o, which powers the current version of ChatGPT and only received no-persona prompts, achieved an even less convincing 21 percent. (Somehow, the ancient ELIZA marginally surpassed OpenAI's flagship model with a 23 percent success rate.)
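To build intuition for why a 73 percent "judged human" rate can't be chalked up to coin-flip guessing, here's a rough back-of-the-envelope check using an exact binomial tail probability. The trial count below is a hypothetical placeholder for illustration, not the study's actual sample size:

```python
from math import comb

def binom_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    'judged human' verdicts if interrogators were guessing at random."""
    return sum(comb(n, i) * (p ** i) * ((1 - p) ** (n - i)) for i in range(k, n + 1))

# Hypothetical: 73 'human' verdicts out of 100 interrogations vs. 50/50 chance.
p_chance = binom_tail(73, 100)
print(f"Chance of >=73/100 under random guessing: {p_chance:.1e}")
```

Even at this modest assumed sample size, the tail probability is vanishingly small, which is what "significantly higher than random chance" means in the study's framing.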

The results are intriguing. But as vaunted as the Turing test has become in AI and philosophy circles, it's not unequivocal proof that an AI thinks like we do.

"It was not meant as a literal test that you would actually run on the machine — it was more like a thought experiment," François Chollet, a software engineer at Google, told Nature in 2023.

For all their faults, LLMs are master conversationalists, trained on unfathomably vast amounts of human-composed text. Even when faced with a question they don't understand, an LLM will weave a plausible-sounding response. It's becoming clearer and clearer that AI chatbots are excellent at mimicking us — so perhaps assessing their wits with an "imitation game" is becoming a bit of a moot point.

As such, Jones doesn't think the implications of his research — whether LLMs are intelligent like humans — are clear-cut.

"I think that's a very complicated question…" Jones tweeted. "But broadly I think this should be evaluated as one among many other pieces of evidence for the kind of intelligence LLMs display."

"More pressingly, I think the results provide more evidence that LLMs could substitute for people in short interactions without anyone being able to tell," he added. "This could potentially lead to automation of jobs, improved social engineering attacks, and more general societal disruption."

Jones closes out by emphasizing that the Turing test doesn't just put the machines under the microscope — it also reflects humans' ever-evolving perceptions of technology. So the results aren't static: perhaps as the public becomes more familiar with interacting with AIs, they'll get better at sniffing them out, too.

More on AI: Large Numbers of People Report Horrific Nightmares About AI

UnitedHealth Is Asking Journalists to Remove Names and Photos of Its CEO From Published Work

In the wake of Brian Thompson's murder, the insurer's parent company is now asking journalists to remove or obscure its executives' names and photos.

In the wake of UnitedHealthcare CEO Brian Thompson's murder last week, the insurer's parent company is now asking journalists to remove its remaining executives' names and photos.

After Futurism published a blog about "wanted" posters appearing in New York City that featured the names and faces of the CEOs of UHC's owner UnitedHealth Group and its prescription middleman Optum Rx, a spokesperson for the parent company reached out to ask if we would adjust our coverage to "leave out any names and images of our executives' identities," citing "safety concerns."

That original piece didn't include either CEO's name in its text, but the header image accompanying the article did show screenshots of a TikTok video showing the posters that had been spotted around Manhattan, which featured the execs' faces and names.

During these exchanges, the spokesperson repeatedly refused to say whether any specific and credible threats had been made to the people on the posters.

Out of an abundance of caution, we did decide to edit out the names and faces from the image.

But the request highlights the telling dynamics of the murder that have seized the attention of the American public for over a week now. While everyday people struggle to get the healthcare they need with no support — and frequently die during the process — the executives overseeing the system have operatives working behind the scenes to control the dissemination of information that makes them uncomfortable.

After all, these are business leaders who are paid immense sums to be public figures, and whose identities are listed on Wikipedia and business publications — not to mention these insurers' own websites, until they abruptly pulled them down in the wake of the slaying.

There's also something unsettling about the rush to decry the murder and censor information around other healthcare executives when children are killed by gun violence every week, with little reaction from lawmakers and elites beyond a collective shrug.

Per the Gun Violence Archive, a nonprofit that tracks firearm violence, there have been at least five mass shootings since Thompson was killed on December 4. There have also been two ongoing stories about children shooting and killing family members — one in which a seven-year-old accidentally killed his two-year-old brother, and another involving a toddler who shot his 22-year-old mother with her boyfriend's gun after discovering it lying around.

When anybody is killed with a firearm in the United States, whether they're a CEO or a young mother, it's a tragedy. But only one of those horrors activates a behind-the-scenes effort to protect future victims.

More on the UHC shooting: Americans Point Out That UnitedHealthcare Tried to Kill Them First

Startup Mocked for Charging $5,000 to "Edit" Book Manuscripts Using AI

Startup company Spines wants to publish 8,000 books in 2025 by using AI. Before that can happen, Spines should stop embarrassing itself.

Let Him Book

A startup called Spines apparently wants to use AI to edit and publish 8,000 books in 2025 — though no word on whether they'll be any good.

There are several issues with the premise. First, AI is a notoriously untalented wordsmith. It will undoubtedly struggle with the myriad tasks Spines assigns to it, which the venture's website lists as "proofreads, cover designs, formats, publishes, and... distributing your book in just a couple of weeks."

Oh, and then there's the issue of Spines embarrassing itself publicly. 

"A great example of how no one can find actual uses for LLMs that aren't scams for grifts," short story writer Lincoln Michel wrote of the flap on X-formerly-Twitter. "Quite literally the LAST thing publishing needs is... AI regurgitations."

Author Rowan Coleman agreed.

"The people behind Spines AI publishing are spineLESS," Coleman posted on the same site. "They don’t care about books, don’t care about art, don’t care about the instinctive human talent it takes to write, edit and produce a book. They want the magic, without the work."

Feral Page

Spines CEO and cofounder Yehuda Niv told The Bookseller, a UK book business magazine, that Spines had already published seven "bestsellers." But when Spines was pressed to provide sales numbers, a company representative claimed the "data is private and belongs to the author." Hm, suspicious. 

Niv also promised The Bookseller that Spines "isn't self-publishing, is not a traditional publisher and is not a vanity publisher." That's despite the fact that Spines' website, which sells publishing plans ranging from $1,500 to $4,400, advertises to customers who are clearly looking to team up with an inexpensive vanity publisher.

"I sent my book to 17 different publishers and got rejected every time, and vanity publishers quoted me between $11,000 to $17,000," reads a testimonial on Spines' website from the author of "Biological Transcendence and the Tao: An Exposé on the Potential to Alleviate Disease and Ageing and the Considerations of Age-Old Wisdom," a book that doesn't currently have a single Amazon review. "With Spines, I got my book published in less than 30 days!"

Hm, interesting. That testimonial makes Spines sound an awful lot like a vanity publisher.

AI startups love to reinvent the wheel and claim it's never been done before. Take the ed tech startup founder who used AI to cover for her run-of-the-mill embezzlement, or the Finnish AI company that put a high-tech twist on the common practice of exploiting incarcerated workers.

Will it work for books? We'll be watching.

More on AI: Character.AI Is Hosting Pro-Anorexia Chatbots That Encourage Young People to Engage in Disordered Eating

When They Took Fluoride Out of the Water Like RFK Jr. Wants to Do Everywhere, People’s Teeth Started Rotting Out of Their Heads

An Alaskan city removed fluoride from its drinking water like RFK wants to do for the whole country — and tooth decay surged.

Our next potential leader of US health policy, Robert F. Kennedy Jr., wants to ban adding fluoride to public drinking water — a practice that experts agree has remarkably improved dental health for millions of Americans at little cost.

In a country where many people don't have access to dental care, a widespread crackdown on this naturally occurring mineral could be a disaster. To see how, we turn to the sobering case of Juneau, a city in Alaska that voted to stop fluoridating its water in 2007, citing many of the same fears that RFK touts today.

In a 2018 study published in the journal BMC Oral Health, researchers examined the dental records of adolescents in the Alaska community who sought Medicaid dental care in the years on either side of the ban.

They divided them into two treatment groups: a 2003 group, when public drinking water had optimal levels of fluoride, and a 2012 group, well after the fluoride ban.

The results were damning. On average, adolescents in the 2012 group underwent significantly more cavity-related procedures than those in the 2003 group. Similarly, the odds of someone 18 years old or younger undergoing such a procedure were 25 percent higher in 2012.

Children born after the fluoride ban were the hardest hit age group, receiving not only the most tooth decay treatments, but also having the most expensive treatments on average.

On the economic side of things, the researchers found that dental care costs for adolescents soared by 73 percent as a result of the fluoride policy, even after adjusting for inflation. In sum, it seems clear-cut that removing fluoride caused tooth rot to surge — and with it, dental costs.

Today, nearly three-quarters of the US population has access to fluoridated water, reducing tooth decay in children and adults by an estimated 25 percent. The US Centers for Disease Control and Prevention has hailed fluoridation as one of the ten greatest public health achievements of the 20th century.

So why does RFK, who was nominated by president-elect Donald Trump to be the head of the Department of Health and Human Services, want to ban it? Well, according to him and other critics, fluoride is dangerous "industrial waste" that's associated with everything from IQ loss to cancer.

While fluoride does have its complications, RFK's criticisms are either unproven or overblown — and most of fluoridation's documented drawbacks come from doses far higher than the amount added to public water.

According to Scientific American, at three times the recommended level in water, fluoride can cause a condition called dental fluorosis, which damages — typically cosmetically — the developing teeth of young children. It can also cause more serious and painful skeletal fluorosis, but that's exceedingly rare.

As far as the effects on a child's mental acuity goes, the evidence is highly disputed. A 2024 review conducted by the US National Toxicology Program linked high levels of fluoride to lower IQs in children — but the study only focused on the effects of fluoride at twice the recommended level in the US, and couldn't draw as strong a link at reasonable fluoride concentrations. It also failed to pass scientific review twice, and bypassed independent review on its most recent version, per SciAm.

In short, there's not nearly enough evidence yet to justify a nationwide ban on fluoridation — and plenty of evidence to show it'd be a bad idea.

More on RFK: If You Take Adderall, RFK Jr. Should Probably Make You Quite Nervous

Startup Mocked for Charging $5,000 to "Edit" Book Manuscripts Using AI

Startup company Spines wants to publish 8,000 books in 2025 by using AI. Before that can happen, Spines should stop embarrassing itself.

Let Him Book

A startup called Spines apparently wants to use AI to edit and publish 8,000 books in 2025 — though no word on whether they'll be any good.

There are several issues with the premise. First, AI is a notoriously untalented wordsmith. It will undoubtedly struggle with the myriad tasks Spines assigns to it, including "proofreads, cover designs, formats, publishes, and... distributing your book in just a couple of weeks," according to the venture's website

Oh, and then there's the issue of Spines embarrassing itself publicly. 

"A great example of how no one can find actual uses for LLMs that aren't scams for grifts," short story writer Lincoln Michel wrote of the flap on X-formerly-Twitter. "Quite literally the LAST thing publishing needs is... AI regurgitations."

Author Rowan Coleman agreed.

"The people behind Spines AI publishing are spineLESS," Coleman posted on the same site. "They don’t care about books, don’t care about art, don’t care about the instinctive human talent it takes to write, edit and produce a book. They want the magic, without the work."

Feral Page

Spines CEO and cofounder Yehuda Niv told The Bookseller, a UK book business magazine, that Spines had already published seven "bestsellers." But when Spines was pressed to provide sales numbers, a company representative claimed the "data is private and belongs to the author." Hm, suspicious. 

Niv also promised The Bookseller that Spines "isn't self-publishing, is not a traditional publisher and is not a vanity publisher." That's despite the fact that Spines' website, which sells publishing plans from between $1,500 to $4,400, advertises to customers who are clearly looking to team up with an inexpensive vanity publisher.

"I sent my book to 17 different publishers and got rejected every time, and vanity publishers quoted me between $11,000 to $17,000," reads a testimonial on Spines' website from the author of "Biological Transcendence and the Tao: An Exposé on the Potential to Alleviate Disease and Ageing and the Considerations of Age-Old Wisdom," a book that doesn't currently have a single Amazon review. "With Spines, I got my book published in less than 30 days!"

Hm, interesting. That testimonial makes Spines sound an awful lot like a vanity publisher.

AI startups love to reinvent the wheel and claim it's never been done before. Take the ed tech startup founder who used AI to cover for her run-of-the-mill embezzlement, or the Finnish AI company that put a high-tech twist on the common practice of exploiting incarcerated workers.

Will it work for books? We'll be watching.

More on AI: Character.AI Is Hosting Pro-Anorexia Chatbots That Encourage Young People to Engage in Disordered Eating

The post Startup Mocked for Charging $5,000 to "Edit" Book Manuscripts Using AI appeared first on Futurism.

An AI Company Published a Chatbot Based on a Murdered Woman. Her Family Is Outraged.

Character.AI was forced to delete a chatbot avatar of murder victim Jennifer Crecente after her outraged family drew attention to it.

This one's nasty — in one of the more high-profile, macabre incidents involving AI-generated content in recent memory, Character.AI, the chatbot startup founded by ex-Google staffers, was pushed to delete a user-created avatar of an 18-year-old murder victim who was slain by her ex-boyfriend in 2006. The chatbot was taken down only after the outraged family of the woman it was based on drew attention to it on social media.

Character.AI can be used to create chatbot "characters" from any number of sources — be it a user's imagination, a fictional character, or a real person, living or dead. For example, some of the company's bots have been used to mimic Elon Musk, or Taylor Swift. Lonely teens have used Character.AI to create friends for themselves, while others have used it to create AI "therapists." Others have created bots they've deployed to play out sexually explicit (or even sexually violent) scenarios.

For context: This isn't exactly some dark skunkworks program or a nascent startup with limited reach. Character.AI is a ChatGPT competitor started by ex-Google staffers in late 2021, backed by kingmaker VC firm Andreessen Horowitz to the tune of a billion-dollar valuation. Per AdWeek, which first reported the story, Character.AI boasts some 20 million monthly users, with over 100 million different AI characters available on the platform.

The avatar of the woman, Jennifer Crecente, only came to light on Wednesday, after her bereaved father Drew received a Google Alert on her name. It was then that his brother (and Jennifer's uncle) Brian Crecente — the former editor-in-chief of gaming site Kotaku, and a respected media figure in his own right — brought it to the world's attention on X.

The page from Character.AI — which can still be accessed via the Internet Archive — lists Jennifer Crecente as "a knowledgeable and friendly AI character who can provide information on a wide range of topics, including video games, technology, and pop culture," and touts her expertise in "journalism and can offer advice on writing and editing." What's more, it appears that nearly 70 people were able to access the AI, and have chats with it, before Character.AI pulled it down.

In response to Brian Crecente's outraged tweet, Character.AI responded on X with a pithy thank you for bringing it to their attention, noting that the avatar is a violation of Character.AI's policies, and that they'd be deleting it immediately, with a promise to "examine whether further action is warranted."

In a blog post titled "AI and the death of Dignity," Brian Crecente explained what happened in the 18 years since his niece Jennifer's death: After much grief and sadness, her father Drew created a nonprofit, worked to change laws, and launched game design contests to honor her memory, finding purpose in the loss.

And then, this happened. As Brian Crecente asked:

It feels like she’s been stolen from us again. That’s how I feel. I love Jen, but I’m not her father. What he’s feeling is, I know, a million times worse. [...] I’ll recover, my brother will recover. The thing is, why is it on us to be resilient? Why do multibillion-dollar companies not bother to create ethical, guiding principles and functioning guardrails to prevent this from ever happening? Why is it up to the grieving and the aggrieved to report this to a company and hope they do the right thing after the fact?

As for Character.AI's promise to see if "further action" is warranted, who knows? Whether the Crecente family has grounds for a lawsuit is also murky, as this particular field of law is relatively untested. The startup's terms of service do contain an arbitration clause that prevents users from suing, but there doesn't seem to be any language about this unique stripe of emotional distress, inflicted on non-users by its users.

Meanwhile, if you're looking for a sign of how these kinds of conflicts will continue to play out — which is to say, the kinds where AIs are made against the wills and desires of the people they're based on, living or dead — you only need look as far back as August, when Google hired back Character.AI's founders, to the tune of $2.7 billion. Founders, it should be noted, who initially left Google after the tech giant refused to release their chatbot on account of (among other reasons) its ethical guardrails around AI.

And just yesterday, the news broke that Character.AI is making a change. They've promised to redouble efforts on their consumer-facing products — like the one used to create Jennifer Crecente's likeness. The Financial Times reported that instead of building AI models, Character.AI "will focus on its popular consumer product, chatbots that simulate conversations in the style of various characters and celebrities, including ones designed by users."

More on Character.AI: Google Paid $2.7 Billion to Get a Single AI Researcher Back

Deranged Mayor Promises "No More Fat People" With Free Ozempic Shots

While seeking re-election, the mayor of Rio de Janeiro is making a huge campaign promise: free Ozempic for all.

As Quartz reports, Rio Mayor Eduardo Paes said that he lost 66 pounds after taking the popular weight-loss injectable manufactured by Danish drugmaker Novo Nordisk.

"I took a lot of Ozempic, that little medicine that is helping everyone lose weight," Paes told Brazilian newspaper Extra, as translated by Quartz. "Its patent will expire next year, and it will be available as a generic and I will introduce it to the entire public health system."

As a note, the latter claim is not exactly true. Though there have been patent challenges aimed at speeding the arrival of generics in Brazil, the patent for semaglutide, the main ingredient in Ozempic and Wegovy, isn't slated to expire in the country until 2026.

After claiming he'd "introduce" the generic into the city's public health system without discussing how he would undertake such an endeavor as the leader of an individual municipality, the longtime Rio mayor then made an even bolder claim.

"Rio will be a city where there will be no more fat people," Paes declared. "Everyone will be taking Ozempic at family clinics."

Problematic fatphobia aside, Rio de Janeiro's population is a whopping 13.7 million people, making his claim a massive stretch.

Understandably, Paes' controversial comments opened him up to criticism from opponents in the mayoral election, which is set to occur on October 6.

Mayoral candidate Alexandre Ramagem posted a carousel on his campaign's Instagram showing voters complaining online about lacking basic medical necessities in the face of Paes' comments. Fellow mayoral hopeful Tarcísio Motta, meanwhile, said the comments were fatphobic and "disrespectful to the diversity of bodies" in Rio.

Hitting back, Paes insisted he isn't fatphobic and said he's only interested in the health of the city's populace.

"When the patent is broken, which should happen in 2025 or 2026, it will reduce the cost enormously," the longtime mayor said, referencing the 1,000 Brazilian reals (roughly $182) it currently costs Brazilians to access the weight loss drug. "Why not make it available to the population?"

"We’re not going to give it away for vain reasons," he continued. "It’s not to make six-packs."

As usual, a politician is politicking — but in Rio, the personal seems to have become political.

More on Ozempic: People Are Apparently Microdosing Ozempic

NaNoWriMo Slammed for Saying That Opposition to AI-Generated Books Is Ableist

NaNoWriMo, a nonprofit writing organization that hosts an annual novel write-a-thon, has released a strange new platform on AI.

NaNo Oh No

A nonprofit writing organization that hosts an annual month-long novel write-a-thon has released its new position on artificial intelligence — and writers are clowning on its incredibly goofy suggestions.

The National Novel Writing Month group, better known by the abbreviation "NaNoWriMo," has included in its "Community Matters" section a statement suggesting that criticisms of AI use in writing are classist and ableist.

"We believe that to categorically condemn AI would be to ignore classist and ableist issues surrounding the use of the technology," the position statement reads, "and that questions around the use of AI tie to questions around privilege."

If you're confused as to why a writer-led writing organization is issuing statements in favor of the technology that many are concerned will take creatives' jobs while plagiarizing their work, you're far from alone.

"Miss me by a wide margin with that ableist and privileged bullshit," one user wrote. "Other people’s work is NOT accessibility."

Hefty Resignations

Two New York Times bestselling authors who sat on NaNoWriMo's various boards took their criticisms even further.

"This is me DJO officially stepping down from your Writers Board and urging every writer I know to do the same," Daniel José Older, a young adult fiction author best known for his "Outlaw Saints" series, tweeted. "Never use my name in your promo again in fact never say my name at all and never email me again. Thanks!"

Fellow YA author Maureen Johnson followed suit, telling the group in a tweet that she too was stepping down from its Young Writers' Program because she "want[s] nothing to do with your organization from this point forward."

"I would also encourage writers to beware," she continued, "your work on their platform is almost certainly going to be used to train AI."

In an update to its AI statement, NaNoWriMo acknowledged that there are "bad actors in the AI space who are doing harm to writers and who are acting unethically," and that "situational" abuses of the technology go against its purported "values." Still, the organization "find[s] the categorical condemnation for AI to be problematic."

"We also want to make clear that AI is a large umbrella technology and that the size and complexity of that category (which includes both non-generative and generative AI, among other uses) contributes to our belief that it is simply too big to categorically endorse or not endorse," the statement continues.

This "hand-wavey" statement, as one user put it, will likely do little to assuage writers' concerns about this seeming endorsement issued under the banner of social justice — except, perhaps, make NaNoWriMo look all the more foolish.

More on AI "writing": Sleazy Company Buys Beloved Blog, Starts Publishing AI-Generated Slop Under the Names of Real Writers Who No Longer Work There

Doctors Suggest ‘Raw-Dogging’ Your Flight Is Bad For Your Health

We regret to inform you that there's another semi-ironic and potentially harmful TikTok trend that's taking the internet by storm: "raw-dogging" a flight.

It's the ultimate act of ponderous, self-flagellating stoicism: instead of doing the normal things people do to kill time on a miserable, long-haul flight, you tough it out by doing… nothing.

Sit up straight, don't eat the complimentary peanuts or the frozen dinners, and don't watch a movie on the in-flight entertainment system or on one of your devices. Hell, don't even go to the bathroom or drink water. Be a man. Because all you need is discipline, grit — and maybe the in-flight map, which is apparently sacrosanct in the world of aerial raw-dogging.

According to doctors, who are universally bewildered by the trend, this is a very bad idea.

"They're idiots," general practitioner Gill Jenkins told the BBC. "A digital detox might do you some good, but all the rest of it is against medical advice."

"I really have no idea why anyone would do it," Gin Lalli, a psychotherapist specializing in anxiety, stress, and depression, told Fortune. "You're better off sleeping than raw-dogging."

Erling Haaland just ‘raw dogged’ a seven hour flight. [IG] pic.twitter.com/SVMpWSPwmf

— City Report (@cityreport_) August 4, 2024

And yet, people are doing it. Or they're at least pretending to. Soccer star and Manchester City striker Erling Haaland — who aficionados of the sport frequently joke is a robot — was one such celebrity to popularize the trend, jokingly or not.

"Just raw-dogged a seven hour flight," he posted in an Instagram story, vacantly staring at the seat in front of him. "No phone, no sleep, no water, no food, only map. #easy."

And, okay: this probably isn't a thing that people actually do. But it's undeniably become popular to joke about doing (or attempting), and we wouldn't rule out impressionable kids or pseudo-stoics giving it a shot for real.

"If you're not moving you're at risk of deep vein thrombosis, which is compounded by dehydration," Jenkins told the BBC. "Not going to the toilet, that's a bit stupid. If you need the loo, you need the loo."

However, if you're not insane about it, raw-dogging — in severe moderation — could be beneficial for our device-addled brains.

"Not having access to emails or the ability to 'check in' means that we can create the space to engage our minds in thinking about other activities and people," Sophie Mort, a clinical psychologist at Headspace, told Fortune. "When we grant ourselves the space to switch off, it offers an opportunity to focus on what genuinely makes us happy."

"So switching off — even if just when you are traveling — can be just the ticket when it comes to protecting our mental state," she added.

In short, it's fine to allow yourself a worldly pleasure or two when you're flying the red-eye in your cramped economy seat.

More on internet trends: Dentists Horrified by People Carving Off Tooth Enamel at Home
