FDA Approves Gene-Hacked CRISPR Pigs for Human Consumption

The US Food and Drug Administration has approved a type of CRISPR gene-edited pig for human consumption.

As MIT Technology Review reports, only a handful of gene-modified animals have been cleared by regulators to be eaten in the United States, including a transgenic salmon with an extra gene that makes it grow faster, and heat-tolerant beef cattle.

And now a type of illness-resistant pig could soon join their ranks. British company Genus used the popular gene-editing technique CRISPR to make pigs immune to a virus that causes an illness called porcine reproductive and respiratory syndrome (PRRS).

It's the same technology that's been used to gene-hack human babies — experiments that have proven far more controversial — and develop medicine in the form of gene therapies.

The PRRS virus can spread easily in US factory farms, causing reproductive failure, increasing the number of stillborn piglets, and triggering respiratory complications, including pneumonia.

It's been called the "most economically important disease" affecting pig producers, since it can have a devastating effect on their bottom lines. According to MIT Tech, it causes losses of more than $300 million a year in the US alone.

Genus' gene-editing efforts have proven highly successful so far, with the pigs appearing immune to 99 percent of known versions of the virus.

Using CRISPR, the company knocked out a receptor that allowed the PRRS virus to enter cells, effectively barring it from infecting its host.

Beyond the respiratory illness, scientists are using gene-editing to make pigs less vulnerable or even immune to other infections, including swine fever.

But before we can eat a pork chop from a gene-edited pig, Genus says it will also have to lock down regulatory approval in Mexico, Canada, Japan, and China, the United States' biggest export markets for pork, as MIT Tech reports.

The company is hoping gene-edited pork could land in the US market as soon as next year.

But whether you'll actually know if you're eating meat from a pig that had a virus receptor turned off using a cutting-edge DNA modification technique is unclear.

"We aren't aware of any labelling requirement," Genus subsidiary Pig Improvement Company CEO Matt Culbertson told MIT Tech.

More on CRISPR: Scientist Who Gene-Hacked Human Babies Says Ethics Are "Holding Back" Scientific Progress

The post FDA Approves Gene-Hacked CRISPR Pigs for Human Consumption appeared first on Futurism.

California Nuclear Power Plant Deploys Generative AI Safety System

America's first nuclear power plant to use artificial intelligence is, ironically, the last operational one in California.

As CalMatters reports, the Diablo Canyon power plant is slated to be decommissioned by the end of this decade. In the interim, the plant's owner, Pacific Gas & Electric (PG&E), says it's deploying its "Neutron Enterprise" tool — making Diablo Canyon the first nuclear plant in the nation to use AI — in a series of escalating stages.

Less than 18 months ago, Diablo Canyon was hurtling headlong toward a decommissioning that would have begun in 2024 and ended this year. In late 2023, however, the California Public Utilities Commission voted to stay its execution for five years, kicking the can on the closure of its two reactors to 2029 and 2030, respectively.

Just under a year after that vote, PG&E announced that it was teaming up with a startup called Atomic Canyon, which was founded with the plant in mind and is also based in the coastal Central California town of San Luis Obispo. That partnership, and the first "stage" of the tool's deployment, brought some of Nvidia's high-powered H100 AI chips to the dying nuclear plant, and with them the compute power needed for generative artificial intelligence.

Neutron Enterprise runs on an internal server without cloud access, and its biggest use case, much like that of so-called AI "search engines," is summarizing a massive trove of millions of regulatory documents that have been fed into it. According to Atomic Canyon CEO and cofounder Trey Lauderdale, this isn't risky — though anyone who has used AI to summarize information knows better, because the tech still often makes factual mistakes.

Speaking to CalMatters, PG&E executive Maureen Zalawick insisted that the AI program will be more of a "copilot" than a "decision-maker," meant to assist flesh-and-blood employees rather than replace them.

"We probably spend about 15,000 hours a year searching through our multiple databases and records and procedures," Zalawick explained. "And that’s going to shrink that time way down."

Lauderdale put it in even simpler terms.

"You can put this on the record," he told CalMatters. "The AI guy in nuclear says there is no way in hell I want AI running my nuclear power plant right now."

If that "right now" caveat gives you pause, you're not alone. Given the shifting timelines for the closure of Diablo Canyon in a state that has been painstakingly phasing out its nuclear facilities since the 1970s over concerns about toxic waste — and the fact that Lauderdale claims to be talking to other plants in other states — there's ample cause for concern.

"The idea that you could just use generative AI for one specific kind of task at the nuclear power plant and then call it a day," cautioned Tamara Kneese of the tech watchdog Data & Society, "I don’t really trust that it would stop there."

As head of Data & Society's Climate, Technology, and Justice program, Kneese said that while using AI to help sift through tomes of documents is worthwhile, "trusting PG&E to safely use generative AI in a nuclear setting is something that is deserving of more scrutiny." This is the same company whose polluting propensities were exposed by the real-life Erin Brockovich in the 1990s, after all.

California lawmakers, meanwhile, were impressed by the tailored usage Atomic Canyon and PG&E propose for the program — but it remains to be seen whether that narrow scope will hold.

More on AI and energy: Former Google CEO Tells Congress That 99 Percent of All Electricity Will Be Used to Power Superintelligent AI

It’s Interesting How Truth Social Moved to Sell Stock Right Before Trump’s Tariffs Were Announced

Just before announcing a major escalation in his tariff war on Wednesday evening — followed by a major stock market wipeout the following morning — President Donald Trump freed up the sale of his Truth Social shares.

As the Financial Times reports, Trump Media and Technology Group (TMTG) revealed that it was planning to sell more than 142 million shares in a late Tuesday filing with the Securities and Exchange Commission.

Most notably, the shares listed in the document include Trump's 114-million-share stake, which is worth roughly $2.3 billion and held in a trust controlled by his son Donald Trump Jr. Shares held by other insiders, including a crypto exchange-traded fund and a 106,000-share stake held by US attorney general Pam Bondi, were also included in the latest filing.

While the filing doesn't guarantee any future sale of shares, investors weren't exactly smitten with the optics. Shares plunged eight percent in light of the news, according to the FT, and are down over 45 percent this year amid Trump's escalating trade war.

The timing of the SEC filing is certainly suspect. Trump's "liberation day" tariff announcement on Wednesday triggered a major selloff, causing shares of multinational companies and stock futures to crater.

Trump also vowed in September that he wasn't planning to sell any of his TMTG shares, which caused their value to spike temporarily at the time.

Now that the shares are up for grabs, the president has seemingly had a change of heart — or, perhaps, is getting cold feet now that the economy is feeling the brunt of his catastrophic economic policymaking. It's also possible Trump was always planning to cash out and leave investors exposed.

Meanwhile, Trump Media released a statement on Wednesday, accusing "legacy media outlets" of "spreading a fake story suggesting that a TMTG filing today is paving the way for the Trump trust to sell its shares in TMTG." The company said this week's filing was "routine."

Experts have long pointed out that if Trump were to sell, it could lead to TMTG spiraling.

It's still unclear whether the company — which reported a staggering $400 million loss in 2024 while netting a pitiful $3.6 million in revenue — will realize the mass sale of millions of shares.

But even just the suggestion appears to have spooked investors.

"In this offering it says the Trump trust could sell shares — it doesn't necessarily mean that they will," Morningstar analyst Seth Goldstein told ABC News. "It signals to the market that they could."

"This leaves it up in the air if and when a share sale will happen," he added.

In short, instead of building a viable business that generates meaningful revenue to reflect its valuation, TMTG still feels more like an enrichment scheme for Trump and his closest associates.

"Trump Media has been pretty unsuccessful at creating an operating business model, but they have been quite successful at selling their stock," University of Florida finance professor Jay Ritter told ABC News.

More on TMTG: Trump's Failing Truth Social Was Doing Much Better Under Biden

Scientists Gene Hack Bacteria That Breaks Down Plastic Waste

The scientists gene-edited the bacteria to prove which enzyme it uses to degrade PET plastics into bioavailable carbon.

Bottom Feeders

We may have a way of literally eating away at our planet's pollution crisis.

As part of a new study published in the journal Environmental Science and Technology, researchers have shed additional light on a possibly game-changing bacteria that grows on common polyethylene terephthalate (PET) plastics, confirming that it can break down and eat the polymers that make up the waste.

Scientists have long been interested in the plastic-decomposing abilities of the bacterium, Comamonas testosteroni. But this is the first time that the mechanisms behind that process have been fully documented, according to study senior author Ludmilla Aristilde.

"The machinery in environmental microbes is still a largely untapped potential for uncovering sustainable solutions we can exploit," Aristilde, an associate professor of civil and environmental engineering at Northwestern University in Illinois, told The Washington Post.

Enzyme or Reason

To observe its plastic-devouring ability, the researchers isolated a bacterium sample, grew it on shards of PET plastics, and then used advanced microscopic imaging to look for changes inside the microbe, in the plastic, and in the surrounding water.

Later, they identified the specific enzyme that helped break down the plastic. To prove it was the one, they edited the genes of the bacteria so that it wouldn't secrete the enzyme and found that without it, the bacteria's plastic degrading abilities were markedly diminished.

That gene-hacking trick completed the full picture of what goes on. First, the bacteria more or less chew on the plastic to break it into microscopic particles. Then they use the enzyme to degrade the tiny pieces into their monomer building blocks, which provide a bioavailable source of carbon.

"It is amazing that this bacterium can perform that entire process, and we identified a key enzyme responsible for breaking down the plastic materials," Aristilde said in a statement about the work. "This could be optimized and exploited to help get rid of plastics in the environment."

PET Project

PET plastics, which are often used in water bottles, account for 12 percent of global solid waste, the researchers said. They also account for up to 50 percent of the microplastics found in wastewater.

That happens to be the environment that C. testosteroni thrives in, opening up the possibility of tailoring the bacteria to clean up our sewage before it's dumped into the ocean, for example.

But we'll need to understand more about the bacteria before that can happen.

"There's a lot of different kinds of plastic, and there are just as many potential solutions to reducing the environmental harm of plastic pollution," Timothy Hoellein, a professor of biology at Loyola University Chicago who was not involved with the study, told WaPo. "We're best positioned to pursue all options at the same time."

More on pollution: A Shocking Percentage of Our Brains Are Made of Microplastics, Scientists Find

NaNoWriMo Slammed for Saying That Opposition to AI-Generated Books Is Ableist

NaNoWriMo, a nonprofit writing organization that hosts an annual novel write-a-thon, has released a strange new platform on AI.

NaNo Oh No

A nonprofit writing organization that hosts an annual month-long novel write-a-thon has released its new position on artificial intelligence — and writers are clowning on its incredibly goofy suggestions.

The National Novel Writing Month group, better known by the abbreviation "NaNoWriMo," has included in its "Community Matters" section a statement suggesting that criticisms of AI use in writing are classist and ableist.

"We believe that to categorically condemn AI would be to ignore classist and ableist issues surrounding the use of the technology," the position statement reads, "and that questions around the use of AI tie to questions around privilege."

If you're confused as to why a writer-led writing organization is issuing statements in favor of the technology that many are concerned will take creatives' jobs while plagiarizing their work, you're far from alone.

"Miss me by a wide margin with that ableist and privileged bullshit," one user wrote. "Other people’s work is NOT accessibility."

Hefty Resignations

Two New York Times bestselling authors who sat on NaNoWriMo's various boards took their criticisms even further.

"This is me DJO officially stepping down from your Writers Board and urging every writer I know to do the same," Daniel José Older, a young adult fiction author best known for his "Outlaw Saints" series, tweeted. "Never use my name in your promo again in fact never say my name at all and never email me again. Thanks!"

Fellow YA author Maureen Johnson followed suit, telling the group in a tweet that she too was stepping down from its Young Writers' Program because she "want[s] nothing to do with your organization from this point forward."

"I would also encourage writers to beware," she continued, "your work on their platform is almost certainly going to be used to train AI."

In an update to its AI statement, NaNoWriMo acknowledged that there are "bad actors in the AI space who are doing harm to writers and who are acting unethically" and that "situational" abuses of the technology go against its purported "values," but maintained that the organization still "find[s] the categorical condemnation for AI to be problematic."

"We also want to make clear that AI is a large umbrella technology and that the size and complexity of that category (which includes both non-generative and generative AI, among other uses) contributes to our belief that it is simply too big to categorically endorse or not endorse," the statement continues.

This "hand-wavey" statement, as one user put it, will likely do little to assuage writers' concerns about this seeming endorsement issued under the banner of social justice — except, perhaps, make NaNoWriMo look all the more foolish.

More on AI "writing": Sleazy Company Buys Beloved Blog, Starts Publishing AI-Generated Slop Under the Names of Real Writers Who No Longer Work There

Government Test Finds That AI Wildly Underperforms Compared to Human Employees

A series of blind assessments found that human-written summaries scored significantly better than summaries generated by AI.

Sums It Up

Generative AI is absolutely terrible at summarizing information compared to humans, according to the findings of a trial for the Australian Securities and Investments Commission (ASIC) spotted by Australian outlet Crikey.

The trial, conducted by Amazon Web Services, was commissioned by the government regulator as a proof of concept for generative AI's capabilities, and in particular its potential to be used in business settings.

That potential, the trial found, is not looking promising.

In a series of blind assessments, the generative AI summaries of real government documents scored a dire 47 percent on aggregate based on the trial's rubric, and were decisively outdone by the human-made summaries, which scored 81 percent.

The findings echo a common theme in reckonings with the current spate of generative AI technology: not only are AI models a poor replacement for human workers, but their awful reliability means it's unclear if they'll have any practical use in the workplace for the majority of organizations.

Signature Shoddiness

The assessment used Meta's open-source Llama2-70B, which isn't the newest model out there, but at 70 billion parameters, it's certainly a capable one.

The AI model was instructed to summarize documents submitted to a parliamentary inquiry, and specifically to focus on what was related to ASIC, such as where the organization was mentioned, and to include references and page numbers. Alongside the AI, human employees at ASIC were asked to write summaries of their own.

Then five evaluators were asked to assess the human and the AI-generated summaries after reading the original documents. These were done blindly — the summaries were simply labeled A and B — and scorers had no clue that AI was involved at all.

Or at least, they weren't supposed to. At the end, when the assessors had finished up and were told about the true nature of the experiment, three said that they suspected they were looking at AI outputs, which is pretty damning on its own.
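The blind A/B setup described above can be sketched in a few lines of Python. This is an illustrative toy, not ASIC's actual rubric or tooling: the `blind_scores` function, the keyword-based `rubric`, and the sample texts are all invented here to show how per-source aggregate percentages (like the trial's 47 versus 81 percent) can be tallied while scorers only ever see anonymized "A" and "B" labels.

```python
import random

def blind_scores(human_summaries, ai_summaries, rubric, seed=0):
    """Tally rubric scores for paired summaries scored under blind labels.

    Each document's two summaries are shuffled before being presented as
    "A" and "B", so a scorer can't tell which came from a human and which
    from the model; totals are still tracked per source behind the scenes.
    """
    rng = random.Random(seed)
    totals = {"human": 0.0, "ai": 0.0}
    for human, ai in zip(human_summaries, ai_summaries):
        pair = [("human", human), ("ai", ai)]
        rng.shuffle(pair)  # blinding: presentation order is randomized
        for label, (source, text) in zip("AB", pair):
            # a real scorer would see only `label` and `text`
            totals[source] += rubric(text)
    n = len(human_summaries)
    return {source: 100 * total / n for source, total in totals.items()}

# Toy rubric: fraction of required elements present in the summary.
required = ["ASIC", "page"]
rubric = lambda text: sum(el in text for el in required) / len(required)

human = ["ASIC is discussed on page 4.", "ASIC's role, see page 12."]
ai = ["The regulator is mentioned somewhere.", "ASIC appears in the text."]
print(blind_scores(human, ai, rubric))
```

Tracking the source alongside the shuffled presentation order is what lets evaluators stay blind per document while the aggregate comparison between sources still comes out cleanly at the end.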

Sucks On All Counts

All in all, the AI summaries scored worse on every criterion compared to the human ones, the report said.

Strike one: the AI model was flat-out incapable of providing the page numbers of where it got its information.

That's something the report notes can be fixed with some tinkering with the AI model. But a more fundamental issue was that it regularly failed to pick up on nuance or context, and often made baffling choices about what to emphasize or highlight.

Beyond that, the AI summaries tended to include irrelevant and redundant information and were generally "waffly" and "wordy."

The upshot: the AI summaries were so bad that the assessors agreed using them could create more work down the line, because of the amount of fact-checking they require. If that's the case, then the purported upsides of using the technology — cost-cutting and time-saving — are seriously called into question.

More on AI: NaNoWriMo Slammed for Saying That Opposition to AI-Generated Books Is Ableist

Recruiters Are Getting Bombarded With Crappy, AI-Generated CVs

Companies and recruiters are getting flooded with AI-generated job applications, many of them badly written.

Trash Mountain

Companies and recruiters are getting flooded with AI-generated job applications — and predictably, many of them are badly written and generic-sounding, the Financial Times reports.

The use of AI has reached such a fever pitch that about half of job seekers are using AI tools like OpenAI's ChatGPT or Google's Gemini to churn out cover letters and resumes, and to fill out job assessment forms. FT used interviews with recruiters and employers, in addition to several surveys, to arrive at that estimate.

And it's seriously annoying people who need to fill positions.

"We’re definitely seeing higher volume and lower quality, which means it is harder to sift through," Khyati Sundaram, chief executive at recruitment website Applied, told FT. "A candidate can copy and paste any application question into ChatGPT, and then can copy and paste that back into that application form."

Productivity Killer

Several surveys have also found that job applicants are making ample use of the tech, such as a recent poll by Canva in which 45 percent of 5,000 people surveyed said they had used AI to "build, update, or improve their resumes."

Worst of all, many applicants are clearly not going over the text they send out.

"Without proper editing, the language will be clunky and generic, and hiring managers can detect this," Victoria McLean, CEO of career consultancy company CityCV, told FT. "CVs need to show the candidate’s personality, their passions, their story, and that is something AI simply can’t do."

With no clear solution to this problem in sight, employers will have to rely heavily on in-person interviews to assess a candidate, recruiters told FT, which goes to show that AI isn't making everybody's jobs easier.

It's not just recruiters: AI has also made educators' jobs harder. It has become practically impossible to detect AI-generated writing in student work, requiring teachers to assess pupils in other ways — such as in-class assignments.

A recent Upwork survey revealed that 77 percent of workers who had used AI found the technology cumbersome and said it hampered their productivity.

What's clear from these disparate tales is that AI may not be the magic bullet its proponents claim it to be, especially when it comes to the job market.

More on AI: OpenAI Exec Says AI Will Kill Creative Jobs That "Shouldn't Have Been There in the First Place"
