Tesla Factory Accused of Spewing Illegal Amounts of Air Pollution

An environmental group has slapped Tesla with a lawsuit this week for spewing pollution and violating the Clean Air Act.

Smogging Gun

In an awkward turn, an environmental group has slapped Tesla with a lawsuit this week, CNBC reports, for spewing pollution from its factory in Fremont, California and violating the Clean Air Act.

Despite Tesla touting that its factories are conscious about limiting waste, the California nonprofit group Environmental Democracy Project alleges in its lawsuit that the electric vehicle company has disregarded the Clean Air Act "hundreds of times since January 2021, emitting harmful pollution into the neighborhoods surrounding the Factory," as reported by CNBC.

The litigants say the pollution has continued to this day, with the factory spewing "excess amounts of air pollution, including nitrogen oxides, arsenic, cadmium, and other harmful chemicals."

This comes on the heels of the local air pollution control agency, the Bay Area Air Quality Management District, announcing that it's seeking to stop Tesla from releasing more pollutants into the community — and dinging it for some 112 notices of violation since 2019.

"Each of these violations can emit as much as 750 pounds of illegal air pollution, according to some estimates," the agency wrote in a statement earlier this month. "The violations are frequent, recurring, and can negatively affect public health and the environment."

Factory Hazard

The Tesla factory in California isn't the only one facing criticism. In Germany, hardline environmental activists recently breached the barriers around a factory just outside Berlin and tried to storm the plant, upset that Tesla felled more than 200 acres of trees.

"Companies like Tesla are there to save the car industry, they’re not there to save the climate," one anti-Tesla activist in Germany told a Wired reporter.

This is a persistent criticism of Tesla and other electric vehicle makers that aim to save the environment while turning a profit selling a product: are they really that green?

After all, manufacturing an electric battery consumes an enormous amount of fossil fuels and requires the mining of lithium, cobalt, and other minerals.

However, life cycle analyses of electric vehicles versus ones that run on fossil fuels show that EVs win the race on lifetime emissions.

But that doesn't excuse the pollution allegations at Tesla factories, which have also earned scorn in California for hazardous waste violations.

More on Tesla: Investor Predicts Tesla Could "Go Bust"

The post Tesla Factory Accused of Spewing Illegal Amounts of Air Pollution appeared first on Futurism.


Doctors Administer Oxytocin Nasal Spray to Lonely People

Doctors administered the feel-good hormone oxytocin to lonely people as a nasal spray, with intriguing results.

We might like to think of ourselves as rational creatures, but the fact is that the whole experience of being human is basically the result of a bunch of swirling chemicals in the brain.

Case in point? A team of European and Israeli doctors just released an intriguing study, published in the journal Psychotherapy and Psychosomatics, in which they administered oxytocin — that's the much-hyped feel-good hormone released by physical intimacy, among other activities — to lonely people as a nasal spray.

Take a beat to get over the premise of giving people in social distress direct doses of what's known to many researchers as the "love hormone," because the results were pretty interesting.

While the subjects didn't report a reduction in perceived loneliness, perceived stress, or quality of life, they did report a reduction in acute feelings of loneliness — a narrow distinction, but one that was clearly tantalizing to the researchers, especially because the effect seemed to linger for months after treatment.

"The psychological intervention was associated with a reduced perception of stress and an improvement in general loneliness in all treatment groups, which was still visible at the follow-up examination after three months," said the paper's senior author Jana Lieberz, a faculty member at Germany's University of Bonn, in a press release about the research.

Perhaps more intuitively — oxytocin is strongly associated with bonding — the researchers also found that subjects dosed with the hormone had an easier time connecting with others during group therapy sessions in which they were enrolled.

"This is a very important observation that we made — oxytocin was able to strengthen the positive relationship with the other group members and reduce acute feelings of loneliness right from the start," Lieberz said. "It could therefore be helpful to support patients with this at the start of psychotherapy. This is because we know that patients can initially feel worse than before starting therapy as soon as problems are named. The observed effects of administering oxytocin can in turn help those affected to stay on the ball and continue."

Further research is clearly needed; the trial size was limited, at just 78 participants, and it's difficult to parse the exact difference between "perceived" and "acute" loneliness they reported.

But the doctors behind the study are clearly intrigued, writing in the press release that the work "could help to alleviate loneliness," which is "associated with many mental and physical illnesses."

While Lieberz "emphasizes that oxytocin should not be seen as a panacea," the release continues, the "results of the study suggest that oxytocin can be used to achieve positive effects during interventions."

With the rush of academic and commercial interest we've seen in the potential pharmaceutical benefits of everything from ketamine to MDMA, don't be surprised if we see a rush of interest in oxytocin over the next few years.

More on oxytocin: Scientists Discover That Dogs Cry Tears of Joy When Reunited With Owners


OpenAI Employees Forced to Sign NDA Preventing Them From Ever Criticizing Company

OpenAI employees are forced to sign a restrictive nondisclosure agreement (NDA) forbidding them from ever criticizing the company.

Cone of Silence

ChatGPT creator OpenAI might have "open" in the name, but its business practices seem diametrically opposed to the idea of open dialogue.

Take this fascinating scoop from Vox, which pulls back the curtain on the restrictive nondisclosure agreement (NDA) that employees at the Sam Altman-helmed company are forced to sign to retain equity. Here's what Vox's Kelsey Piper wrote of the legal documents:

It turns out there’s a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI "due to losing confidence that it would behave responsibly around the time of AGI," has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.

Signature Flourish

How egregious the NDA is depends on your industry and view of employees' rights. But what's certain is that it flies directly in the face of the "open" in OpenAI's name, as well as much of its rhetoric around what it frames as the responsible and transparent development of advanced AI.

For its part, OpenAI issued a cryptic denial after Vox published its story that seems to contradict what Kokotajlo has said about having to give up equity when he left.

"We have never canceled any current or former employee’s vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit," it said. When Vox asked if that was a policy change, OpenAI replied only that the statement "reflects reality."

It's possible to imagine a world in which the development of AI was guided by universities and publicly funded instead of being pushed forward by impulsive and profit-seeking corporations. But that's not the timeline we've ended up in — and how that reality influences the outcome of AI research is anyone's guess.

More on OpenAI: OpenAI Researcher Quits, Flames Company for Axing Team Working to Prevent Superintelligent AI From Turning Against Humankind


OpenAI Researcher Quits, Flames Company for Axing Team Working to Prevent Superintelligent AI From Turning Against Humankind

It might sound like a joke, but OpenAI has dissolved the team responsible for making sure advanced AI doesn't turn against humankind.

Shut Down

It might sound like a joke, but OpenAI has dissolved the team responsible for making sure advanced AI doesn't turn against humankind.

Yes, you read that right. The objective of the team, formed just last summer, was to "steer and control AI systems much smarter than us."

"To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20 percent of the compute we’ve secured to date to this effort," the company wrote at the time. "We’re looking for excellent ML researchers and engineers to join us."

If those two names sound familiar, it's because Sutskever departed the company under a dark cloud this week, prompting Leike to quit in disgust.

And now the other shoe has dropped: as first reported by Wired, the entire team has now been dissolved.

Terminator Prequel

Sutskever, who was intimately involved with last year's plot to out OpenAI CEO Sam Altman, has remained largely mum this week. But Leike has been publicly sounding off.

"I joined because I thought OpenAI would be the best place in the world to do this research," he wrote on X-formerly-Twitter. "However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

Among his gripes: that the company wasn't living up to its promises to dedicate technical resources to the effort.

"Over the past few months my team has been sailing against the wind," he continued. "Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done."

But his objections also sound more existential than just company politics.

"Building smarter-than-human machines is an inherently dangerous endeavor," Leike wrote. "OpenAI is shouldering an enormous responsibility on behalf of all of humanity."

"But over the past years, safety culture and processes have taken a backseat to shiny products," he alleged.

OpenAI, for its part, has been busy doing exactly that: this week, it showed off a new version of ChatGPT that can respond to live video through a user's smartphone camera in an emotionally inflected voice that Altman compared to the 2013 romantic tragedy "Her."

"It’s a process of trust collapsing bit by bit, like dominoes falling one by one," one OpenAI employee told Vox.

More on OpenAI: The Person Who Was in Charge of OpenAI's $175 Million Fund Appears to Be Fake


New Law Would Force Government to Declassify Every UFO Document

In an age of growing UFO belief, a handful of Congress members are calling on the government to show its cards. 

Show Us Receipts

In an age of widespread public interest in UFOs, a handful of Congress members are calling on the government to show its cards.

Titled the "UAP Transparency Act," a reference to the government's new preferred term for UFOs, "unidentified aerial/anomalous phenomena," the new bill introduced by Tennessee Republican Tim Burchett would require that all documents about these phenomena be declassified.

"It's simple," Burchett told Fox News of his bill. "They spend all this time telling us they don't exist, then release the files, dagnabbit."

One of the legislative branch's most outspoken believers in extraterrestrials, the Tennessee congressman once claimed that "UFOs were in the Bible," specifically citing the book of Ezekiel as evidence.

As such, he's long used his power and podium to urge the government to share more information with the public about any knowledge of UFOs.

"This bill isn’t all about finding little green men or flying saucers, it’s about forcing the Pentagon and federal agencies to be transparent with the American people," Burchett said in the bill's press release. "I’m sick of hearing bureaucrats telling me these things don’t exist while we’ve spent millions of taxpayer dollars on studying them for decades."

Checkered Past

The GOP congressman — who, we should note, made a transphobic comment so vile that he was shut down by a Fox host — is joined by Reps. Jared Moskowitz (D-FL), Anna Paulina Luna (R-FL), and Eric Burlison (R-MO) in cosponsoring the bill, which would require the Pentagon to declassify all its UAP documents within 270 days of its passage.

Behind this declassification push is Burchett's belief that the government has sustained a long term coverup of its knowledge and use of UFO technology.

"The devil has been in our way through this thing," the congressman, who sits on the House of Representatives' oversight subcommittee, said during a hearing last year. "We’ve run into roadblocks from members from the intelligence community, the Pentagon."

Inspired by testimony from whistleblower David Grusch last year, Burchett and his bill cosponsors sent a letter to the intelligence community's inspector general requesting more information about claims that the government had retrieved and reverse-engineered alien technology. This year, the Pentagon's UFO office released a 63-page document saying it had no such records, which may or may not have been a response to that letter.

Given both the strange religiosity and the general discrediting of Grusch and his testimony, we're taking this bill with a grain of salt — but it will be interesting to see what happens with it, even if its chances of passing currently seem like a long shot.

More on UFOs: The Head of the SETI Institute Says He's Seen Zero Evidence of Alien Technology


Sam Altman Clearly Freaked Out by Reaction to News of OpenAI Silencing Former Employees

OpenAI got a lot of criticism this week for departures and a restrictive nondisclosure agreement. Sam Altman is clearly freaked.

Damage Plan

OpenAI has had a tough week.

Its reveal of the upcoming GPT-4o — that's an "o" for "omni," not a "40" as in the number — was impressive on a technical level, but CEO Sam Altman made an unforced error by comparing the talking AI to the dark 2013 romance film "Her," kicking off a wave of negative headlines. Then things got even more tumultuous when a pair of high-powered researchers left the company in departures that looked like something between "getting pushed out" and "quitting in disgust."

The optics were especially appalling because both had been working on OpenAI's team dedicated to controlling any future superintelligent AI the company might create, with one of the two harshly criticizing OpenAI's purported lack of commitment in that domain. And then things got even more embarrassing for the company when Vox reported that any workers leaving OpenAI had to sign a draconian nondisclosure agreement to retain their equity stipulating, among other things, that they could never criticize the company in the future.

It looks like the wall-to-wall criticism of all that has gotten under the skin of OpenAI's leadership, because the company's head is now in full damage control mode.

Sam I Am

Altman, as usual, is taking center stage in the company's response — though so far he's seemed to strategically focus on the equity side of the equation rather than the explosive claim that OpenAI is silencing former employees who might have ethical concerns about its work.

"We have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop," he wrote on X-formerly-Twitter. "There was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication."

"This is on me and one of the few times I've been genuinely embarrassed running OpenAI; I did not know this was happening and I should have," he continued. "The team was already in the process of fixing the standard exit paperwork over the past month or so. if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. Very sorry about this."

Meanwhile, OpenAI president Greg Brockman published his own lengthy response to the situation — signed off by him and Altman — that managed to say very little in about 500 words.

"We know we can't imagine every possible future scenario," read the statement. "So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities. We will keep doing safety research targeting different timescales. We are also continuing to collaborate with governments and many stakeholders on safety."

It's worth pointing out that neither of these statements gets quite as far as the non-financial takeaway of Vox's reporting: are Altman and Brockman saying that former employees can now sound off about the company's approach to hot-button issues? Or are they just trying to defuse some of the outrage before the company continues to try to keep its ex-workers quiet when things die down?

We'll definitely be watching to see which it is.

More on OpenAI: OpenAI Secretly Trained GPT-4 With More Than a Million Hours of Transcribed YouTube Videos


Tesla’s Firing Even More Employees as Crisis Deepens

Tesla is slashing another 600 jobs at its facilities in California, running the gamut from factory workers to highly qualified engineers.

Head Count

Electric vehicle maker Tesla has been hit by round after round after round of layoffs over the past month or so, to the degree that it's somewhat hazy exactly how many employees have been affected.

And the cuts keep piling up, with CNBC now obtaining government paperwork through a public records request that shows the company is slashing another 600 jobs at its facilities in California, running the gamut from factory workers to highly qualified engineers.

The implication of CNBC's reporting is that the full extent of Tesla's cuts may end up being drastically higher than the 10 percent headcount reduction CEO Elon Musk warned about at the outset of the cullings back in April. As the network points out, Musk claimed in an April earnings call that "inefficiency" at the company was running between 20 and 30 percent, suggesting that the seemingly ongoing sackings could come to total nearly a third of the company's previous workforce of more than 140,000 at the end of last year.

Masterful Gambit

It's striking to watch Tesla writhe and contract, because just a few years ago it was on top of the world, having seemingly demonstrated not only that it could make electric vehicles a red-hot status symbol but that it could produce them at scale.

Things have soured since then, though. Maintenance and reliability for the vehicles have proven to be more painful than initially expected, sapping consumer interest, at the same time that cheaper alternatives are emerging overseas.

And hovering over the entire mess is Musk himself, whose antics since buying X-formerly-Twitter have raised questions about his business acumen. At the same time, he's aggressively pushed Tesla's incomplete self-driving and robotics efforts while very publicly embracing a menagerie of conspiratorial and rightwing positions that seem to have alienated the company's core buyers.

"It got to the point where we felt like we were driving around in a QAnon-MAGA hat, as if Tesla had become a symbol of white supremacy," one former Tesla owner explained last year.

More on Tesla: 98 Percent of Drivers Who Try Tesla's Full Self-Driving Demo Ditch It After the Trial Period Is Over


Strange Photos Show NASA Astronauts Testing Spacesuits With No Arms or Visors

Arms Race

New photos from NASA show the space agency's astronauts testing spacesuits in the Arizona desert — but we're not sure these things are quite spaceworthy yet.

Why? Because they're missing, among other things, arms, legs, and visors, leading to an entertaining photoshoot in which astronauts Kate Rubins and Andre Douglas trudge around completing Moonish tasks while garbed half in space gear and half in fairly regular-looking hiking clothes, including sunglasses that look comically out of place with the off-world getups.

NASA's writeup doesn't quite explain the eccentric spacesuit design, but it does specify that the outfits are mockups. Reading between the lines, it sounds like the agency is basically rehearsing parts of the Artemis 3 mission — slated to return astronauts to the lunar surface in a few years — even if the suits aren't fully cooked yet, to practice and pin down any shortcomings in the design back on the safety of Earth.

"Field tests play a critical role in helping us test all of the systems, hardware, and technology we’ll need to conduct successful lunar operations during Artemis missions," said Barbara Janoiko, director for the field test at Johnson. "Our engineering and science teams have worked together seamlessly to ensure we are prepared every step of the way for when astronauts step foot on the Moon again."

Moon Walkers

It also sounds like the astronauts are being prepared for the geological research they'll need to conduct on the Moon. That's well-precedented; the Apollo astronauts were so highly trained that by the time they landed, it's estimated that each had the equivalent of a master's degree in geology.

"During Artemis III, the astronauts will be our science operators on the lunar surface with an entire science team supporting them from here on Earth," said NASA Goddard Space Flight Center science officer Cherie Achilles in the writeup. "This simulation gives us an opportunity to practice conducting geology from afar in real time."

And big picture, it's just another fascinating glimpse into the exhaustive preparation that NASA and its Artemis astronauts are now undertaking to prepare for humankind's first crewed lunar landing since 1972.

"The test will evaluate gaps and challenges associated with lunar South Pole operations, including data collection and communications between the flight control team and science team in Houston for rapid decision-making protocols," reads the blurb. "At the conclusion of each simulated moonwalk, the science team, flight control team, crewmembers, and field experts will come together to discuss and record lessons learned."

More on NASA: NASA Admits Space Station Junk Crashed Through Man's Roof


Man Steals Cybertruck, Leads Cops on World’s Lamest Car Chase

The accused, 41-year-old Corey Cohee, allegedly stole the Cybertruck but was quickly located by cops using the vehicle's tracking app.

Cyber Crimes

Chalk it up to media scrutiny or whatever, but it sure seems like Cybertruck drivers get caught up in a lot of stupid stuff. This time though, the owner wasn't to blame: on Saturday morning, a man in Delaware allegedly stole one of those distinctly shaped Tesla pickups, resulting in what must have been a comical-looking car chase that proved short-lived.

The accused is 41-year-old Corey Cohee, who allegedly jacked the Cybertruck from someone's home in the small town of Lincoln, Sussex County. The truck had a temporary New Jersey registration permit, so it must've been brand new. Extra bad luck for the owner.

Per a statement from the Delaware State Police, state troopers responded to a stolen car report around 8 am. Once they arrived at the property, the cops were able to make use of the Cybertruck's tracking app, a standard feature of Teslas, to locate the vehicle.

They found it on a dirt road not much later. The thief hadn't bailed; he was still inside the car, just idling there for some reason. But when the cops approached, he sped off in the Cybertruck, perhaps feeling invincible inside the minimalist confines of the "bulletproof" behemoth.

Grand Heft Auto

And so a chase ensued. The police said the driver ignored all signals to pull over. One imagines the otherworldly-looking stainless steel coffin, which weighs close to 7,000 pounds, piggishly bumbling its way across unpaved roads with powerful jerks of acceleration, gracing those backwoods like a meandering UFO.

But with the cops in hot pursuit, the driver, later identified as Cohee, apparently decided to give up. He pulled over, and the cops arrested him "without incident." Cohee has been committed to Sussex Correctional Institution on a $4,002 secured bond, the police statement said. It's unclear if the Cybertruck was damaged.

Needless to say, this was a pretty hare-brained crime, audacious as it was. You couldn't name a more conspicuous-looking vehicle on the road right now to steal, never mind one that can be easily tracked with an app. But there's just something about the Tesla's bold styling that seems to provoke even bolder behavior.

More on Tesla: Tesla Video Showing Cybertruck Beating Porsche 911 While Towing Porsche 911 Was a Lie, Independent Test Demonstrates


Tesla Employees "Walking on Eggshells" as Furious Musk Continues Layoffs

Amid recurring rounds of layoffs, Tesla workers are dreading their own terminations — and it's unclear when the cuts are going to end.

Around and Around

Amid recurring rounds of layoffs, Tesla workers are now dreading their own terminations — exacerbated by the company not saying when the cuts are going to end.

Insider sources tell Bloomberg that the rolling layoffs, which are part of CEO Elon Musk's plans to cut upwards of 10 percent of staff to save money, will likely continue until at least June.

For those who've already borne the brunt of the reductions, the horror doesn't yet seem to be over.

"It's difficult to imagine the feeling of walking on eggshells every day at work, uncertain whether or not you'll be able to pay your bills or feed your family," former Tesla sales rep Michael Minick wrote in a LinkedIn post. "For those of us who were part of the first wave of layoffs, it was almost like waking up to a bandaid being ripped off."

Laid off in April, Minick expressed solidarity with those still at the company as they await their fates.

"Is it too much to ask for a company to hold some accountability and put an end to the uncertainty?" he continued. "It would be a relief to know that they can breathe and focus on their work, without the gray cloud of uncertainty looming over. People deserve clarity if an end to the layoffs will have a stop date."

Cut It Up

At this point, it's unclear just how many rounds of layoffs Tesla has done since it began job-slashing in April and announced its 10 percent global workforce cut. By our count, there have been at least four, but as news of each hits our feeds, it's hard to tell which round we're even talking about anymore, or what proportion of the workforce has been affected.

Hanging over these cuts are Tesla's massive sales woes as it struggles to recapture the already-depressed electric vehicle market, which it once dominated. As Reuters reports, the company's latest tactic in crisis-control mode is offering discounts to European car rental companies in hopes of making at least some sales.

With shares down, as Bloomberg notes, by 29 percent this year, things are looking very bad for Tesla — and as usual, the workers who built the company are bearing the brunt of its problems.

More on Tesla: Man Buys Used Tesla, Discovers Horrendous Issue


Neuralink Jamming Wires Deeper Into Second Patient’s Brain

Despite issues manifesting in its first implantee, the FDA has given Neuralink the green light to implant a brain chip in a second patient.

Despite issues manifesting in its first implantee, the Food and Drug Administration has given Neuralink the green light to implant a brain chip in a second human test subject.

As the Wall Street Journal reports, Neuralink appears to have gotten the go-ahead because it proposed a fix for the problems suffered by its first patient, 29-year-old quadriplegic Noland Arbaugh.

As previous reports indicate, a majority of the 64 thread-like wires connected to Arbaugh's chip began to come loose, causing him to lose some of the implant's functionality just a month after it was inserted earlier this year.

To fix it, an anonymous insider told the newspaper, the Elon Musk-owned company plans to jam the wires, each thinner than a human hair, even deeper into the next subject's brain — which yes, sounds pretty gruesome to us, especially considering the monkey death toll associated with the company.

Despite how freaky all of that sounds, the descriptions of what's happening to Neuralink's first test subject are more emotionally tragic than physically disturbing, and there appears to be some hope that the fix for the second implant might improve upon the failures of the first go-round.

In interviews with Bloomberg and the WSJ, Arbaugh described the incredible highs of having experiences restored to him, such as being able to better communicate with friends and play video games using his mind, only to start unexpectedly losing them.

"[Neuralink] told me that the threads were getting pulled out of my brain," Arbaugh told Bloomberg. "It was really hard to hear. I thought I’d gotten to use it for maybe a month, and then my journey was coming to an end."

As he noted in both interviews, that rapid loss of functionality so soon after it had been bestowed upon him took a huge emotional toll.

"I was on such a high and then to be brought down that low. It was very, very hard," he told the WSJ. "I cried."

According to Bloomberg, Neuralink has implemented algorithmic workarounds to better interpret the data from Arbaugh's implant, as only 15 percent of the threads remain intact and continue transmitting data remotely.

With FDA approval in place, the company's next step will be to select one of the more than 1,000 people who've applied, according to the WSJ's source. The company hopes to perform the implantation sometime in June, that insider added.

While it's good that things haven't gone worse for Arbaugh, it's clear now that Neuralink is applying Silicon Valley's "move fast and break things" adage, coined by Musk's nemesis Mark Zuckerberg, to its approach to human test subjects.

More on Neuralink: Neuralink Knew Its Implant Likely to Malfunction in First Human Patient, Did Brain Surgery Anyway

The post Neuralink Jamming Wires Deeper Into Second Patient's Brain appeared first on Futurism.

Blue Origin Astronauts Trapped by Foliage After Capsule Touchdown

When the Blue Origin space capsule touched down on Earth after a flight to the edge of space, it landed in a thicket of bushes.

Bush League

By all accounts, billionaire Jeff Bezos' space outfit Blue Origin had a successful crewed roundtrip flight to the edge of space on Sunday — with everything happening as expected except for one little problem that the company's aerospace engineers and scientists probably didn't foresee: some pesky shrubbery.

When the capsule finally touched down on Earth with its six passengers, it landed in a thicket of bushes in the middle of West Texas scrubland.

In livestreamed footage of the roundtrip flight, Blue Origin staffers can be seen at the 50-minute mark trying to stamp down some stubborn shrubbery around the capsule while the space tourists inside peer from the windows. Two staffers brought along a blue metal two-step ladder for the crew members to use to disembark, but it took several long minutes for the team to kick back the shrubs surrounding the vessel and position the ladder on the uneven ground.

Finally, when the ladder was in place and the shrubs briefly tamed, the space tourists exited the hatch, with arms raised in triumph.

Feeling Blue

Some in the online peanut gallery took the opportunity to gently poke fun at the shrub incident. In r/SpaceXMasterrace, one Redditor posted a meme with two pictures: a Blue Origin rocket launching into space and an image capture of the capsule on Sunday surrounded by shrubbery. The meme is titled "Who would win? Giant Dick Ship [versus] A Few Planty Bois." The Redditor labeled the post: "Unexpected Foliage Contingency."

Besides the foliage issue, this flight made history: one of its crew members, Ed Dwight, became the oldest astronaut in history at 90 years old. Dwight was also the first Black astronaut candidate in America's space program back in the 1960s, but was passed over — making Sunday his chance for a spectacular redo.

The Sunday flight was also a triumph for Blue Origin after a two-year hiatus. The company temporarily grounded operations in 2022 after its reusable rocket, New Shepard, suffered a booster malfunction mid-flight and was forced to eject its capsule of NASA experiments. Thankfully, that flight had no passengers.

The incident drew the scrutiny of the Federal Aviation Administration, which issued "21 corrective actions," including a redesign to some engine parts.

More on Blue Origin: Jeff Bezos' Blue Origin Rocket Tests Spew Enough Methane to Be Spotted From Space

Man Drinks Poison Oak Smoothie in Bid to Develop Resistance

Jeff Horwitz, reporter for The Wall Street Journal, has developed an immunity to poison oak after eating the leaves in smoothies.

How far would you go to avoid a rash from a common pest on hikes?

Well, one reporter for The Wall Street Journal has gone as far as blending poison oak into smoothies and mixing it into his salad bowl — all in a bid to develop an immunity to the chemical irritants found in the plant's leaves.

Jeff Horwitz, who usually reports on technology, wrote about his slightly mad mission for a feature article in the Saturday newspaper.

"I started eating poison oak in January, when the first buds began to swell on the hazardous plant’s bare stems," he wrote, explaining that he was sick of getting poison oak rashes during mushroom foraging trips in California.

And surprisingly, despite some stern written warnings he came across during his research, Horwitz's newfound habit of eating poison oak seems to have built up a resistance to the shrub's resin, urushiol — the rash-causing irritant also found in poison ivy and sumac.

After ingesting increasing amounts of poison oak leaves in his smoothies and salads — the "taste of young poison oak is surprisingly mild, grassy and only a little bit tart," he notes — his body showed few signs of stress from the experiment, aside from the occasional red rash. He also experienced an itchy butt — presumably from pooping out the remnants.

At the end of his experiment, Horwitz says he could rub a poison oak leaf on his skin and not experience any rash breakouts.

"My poison-oak salad days are over, but I do intend to nibble a few leaves here and there when hiking around the Bay Area in an effort to maintain my resistance on a permanent basis," he wrote.

Horwitz got his idea from reading about how California's Indigenous tribes would make tea from poison oak roots and eat the leaves to develop immunity. He also read online forums where outdoors enthusiasts discussed how noshing on poison ivy or poison oak helped them develop a resistance, though much of the literature he consulted warned against eating the plants.

In the first half of the 20th century, pharmaceutical companies capitalized on this folk remedy and sold poison ivy pills and shots to the public to prevent spring and summertime rashes, according to Horwitz. But for unknown reasons, Big Pharma stopped making these urushiol extract medicines, and the public largely forgot there's a preventative treatment for the rash beyond a good shower, antihistamine pills, or hydrocortisone cream.

But before you reach for your blender or ask Erewhon to drop a couple of poison oak leaves into your smoothie order, Horwitz reports that pharmacologist Mahmoud ElSohly, working with medical startup Hapten Sciences, has developed a new urushiol drug intended to prevent poison ivy and poison oak rashes.

The medication could be available to the public as soon as 2026.

More on poisons: Venomous vs Poisonous: What Is the Difference Between Venom, Poison, and Toxins?

OpenAI Removing Voice From ChatGPT That Sounds Too Much Like Scarlett Johansson

The latest ChatGPT update drew comparisons to the film "Her" — and now OpenAI is pausing one of its voices.

Movie Madness

OpenAI is apparently no longer feeling flattered by those comparisons to the movie "Her." On Sunday, the Microsoft-backed startup announced that it was pausing the use of Sky, a voice available for the latest version of ChatGPT that can hold spoken conversations in real time, after users pointed out that it sounded a lot like the actress Scarlett Johansson.

In "Her," Johansson voices an AI chatbot named Samantha that the film's melancholic protagonist, played by Joaquin Phoenix, falls in love with after talking to her through his phone and computer.

Those parallels didn't go unnoticed by users of GPT-4o's flagship "Voice Mode," who have joked that the Sky voice should be called Samantha. But OpenAI, in its latest blog post, insisted that the similarities are merely a coincidence.

"We believe that AI voices should not deliberately mimic a celebrity's distinctive voice — Sky's voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice," the blog post read. "To protect their privacy, we cannot share the names of our voice talents."

We’ve heard questions about how we chose the voices in ChatGPT, especially Sky. We are working to pause the use of Sky while we address them.

Read more about how we chose these voices: https://t.co/R8wwZjU36L

— OpenAI (@OpenAI) May 20, 2024

Coy Copycat

The denial raises more questions than it answers. If Sky isn't an imitation of ScarJo, then why pause the use of the voice? It would seem that this is less a case of buckling to community scrutiny, and more of OpenAI walking on legal eggshells. Johansson, after all, hasn't balked at suing companies as massive as Disney in the past.

OpenAI points to the fact that Sky is voiced by a different actress as evidence of its innocence. Of course, that doesn't preclude the actress having been directed to evoke Johansson's likeness in the performance.

Whether or not that's the case, OpenAI CEO Sam Altman has only fueled the comparisons. He's spoken about how "Her" is one of his favorite sci-fi movies, calling it "incredibly prophetic." And on the day that GPT-4o was released with "Voice Mode," Altman cheekily tweeted the single word "her" — directly linking the update with the movie.

We suspect that if legal trouble is brewing here — the company's plausible-deniability-speak may be evidence of that — OpenAI will want to handle it behind closed doors. It certainly has enough lawsuits on its plate already.

More on OpenAI: Machine Learning Researcher Links OpenAI to Drug-Fueled Sex Parties

Google’s Top Autocomplete Suggestion for "How to Edge" Is Wild

If you search the words "how to edge" on Google, the top autocomplete suggestion is wild.

It is our somber duty to report, dear reader, that if you search the words "how to edge" on Google, the top autocomplete suggestion isn't how to edge a beard, or a lawn, or a snowboard.

Instead, Google suggests that its users are looking for how to edge "in class."

For those blessedly innocent enough not to know what the colloquial term "edging" means, allow us to explain: as WebMD describes it, edging occurs when a person — often a cisgender man, though not exclusively — gets aroused just to the brink, or edge, of an orgasm, but then backs off stimulation so as not to finish too quickly. It's essentially a way to prevent premature ejaculation and lengthen the experience of pleasure.

While that's all well and good, it's obviously unacceptable to edge oneself in public, much less in a classroom. Nevertheless, not only does Google Search surface it as a top query, but the company's generative AI-assisted search option returns both a YouTube video and a Change.org petition about it. What gives?

Per our not-so-scientific research — i.e., just Googling around a bit — it appears that "how to edge in class and not get caught" is something of a TikTok meme. As such, people looking for explainers on the video platform could, theoretically speaking, be using Google to find them.

As one might expect, the hopefully facetious videos are seemingly filmed by horny boys advising viewers to watch a bunch of porn — they often describe Pornhub as "the black and yellow site" or "yellow YouTube" to circumvent TikTok's censorship algorithms — wear bulky jackets, and sit in the back of the classroom.

Despite there not being all that many videos on the topic, edging in class seems to have become a meme in 2023, coinciding with the uptick in jokes about "gooning," a form of prolonged masturbation in which one gets into the ecstatic arousal state before orgasm for hours on end. Tantra, eat your heart out.

As with other Manosphere-adjacent memes like looksmaxxing and bone-smashing, any "edging in class" content should be taken with a heaping grain of salt, as the people behind these joke hoaxes traffic in convincing those without context that they're legit.

That said, Google may want to take a look at why its search engine and AI are surfacing info about this dumbass viral trend.

More on memes: GameStop Stock Is Crashing Catastrophically After Meme Hype

Machine Learning Researcher Links OpenAI to Drug-Fueled Sex Parties

An ML researcher is claiming to have knowledge of kinky drug-fueled orgies linked to OpenAI in Silicon Valley's storied hacker houses.

A machine learning researcher is claiming to have knowledge of kinky drug-fueled orgies in Silicon Valley's storied hacker houses — and appears to be linking those parties, and the culture surrounding them, to OpenAI.

"The thing about being active in the hacker house scene is you are accidentally signing up for a career as a shadow politician in the Silicon Valley startup scene," begins the lengthy X-formerly-Twitter post by Sonia Joseph, a former Princeton ML researcher who's now affiliated with the deep learning institute Mila Quebec.

What follows is a vague and anecdotal diatribe about the "dark side" of startup culture — made particularly explosive by Joseph's reference to so-called "consensual non-consent" sex parties that she says took place within the artificial general intelligence (AGI) enthusiast community in the valley.

The jumping-off point, as far as we can tell, was a thread announcing that OpenAI superalignment chief Jan Leike was leaving the company as it dissolved his team, which was meant to prevent advanced AI from going rogue.

At the end of his X thread, Leike encouraged remaining employees to "feel the AGI," a phrase also ascribed to newly exited OpenAI cofounder Ilya Sutskever during seemingly cultish rituals revealed in an Atlantic exposé last year — but nothing in that piece, nor in the superalignment chief's tweets, suggests anything having to do with sex, drugs, or kink.

Still, Joseph addressed her second viral, memo-length tweet "to the journalists contacting me about the AGI consensual non-consensual (cnc) sex parties." In the post, she said she'd witnessed "some troubling things" in Silicon Valley's "community house scene" when she was in her early 20s and new to the tech industry.

"It is not my place to speak as to why Jan Leike and the superalignment team resigned. I have no idea why and cannot make any claims," wrote the researcher, who is not affiliated with OpenAI. "However, I do believe my cultural observations of the SF AI scene are more broadly relevant to the AI industry."

"I don't think events like the consensual non-consensual (cnc) sex parties and heavy LSD use of some elite AI researchers have been good for women," Joseph continued. "They create a climate that can be very bad for female AI researchers... I believe they are somewhat emblematic of broader problems: a coercive climate that normalizes recklessness and crossing boundaries, which we are seeing playing out more broadly in the industry today. Move fast and break things, applied to people."

While she said she doesn't think there's anything generally wrong with "sex parties and heavy LSD use," she also charged that the culture surrounding these alleged parties "leads to some of the most coercive and fucked up social dynamics that I have ever seen."

"I have seen people repeatedly get shut down for pointing out these problems," Joseph wrote. "Once, when trying to point out these problems, I had three OpenAI and Anthropic researchers debate whether I was mentally ill on a Google document. I have no history of mental illness; and this incident stuck with me as an example of blindspots/groupthink."

"It’s likely these problems are not really on OpenAI but symptomatic of a much deeper rot in the Valley," she added. "I wish I could say more, but probably shouldn’t."

Overall, it's hard to make heads or tails of these claims. We've reached out to Joseph and OpenAI for more info.

"I'm not under an NDA. I never worked for OpenAI," Joseph wrote. "I just observed the surrounding AI culture through the community house scene in SF, as a fly-on-the-wall, hearing insider information and backroom deals, befriending dozens of women and allies and well-meaning parties, and watching many [of] them get burned."

More on OpenAI: Sam Altman Clearly Freaked Out by Reaction to News of OpenAI Silencing Former Employees

Godfather of AI Says There’s an Expert Consensus AI Will Soon Exceed Human Intelligence

Geoffrey Hinton fears that we aren't doing enough to ensure the safe development of AI, with military applications posing the biggest threat.

Impending Gloom

Geoffrey Hinton, one of the "godfathers" of AI, is adamant that AI will surpass human intelligence — and worries that we aren't being safe enough about its development.

This isn't just his opinion, though that certainly carries weight on its own. In an interview with the BBC's Newsnight program, Hinton claimed that the inevitability of AI surpassing human intelligence is in fact the consensus view among leaders in the field.

"Very few of the experts are in doubt about that," Hinton told the BBC. "Almost everybody I know who is an expert on AI believes that they will exceed human intelligence — it's just a question of when."

Rogue Robots

Hinton is one of three "godfathers" of AI, an appellation he shares with Université de Montréal's Yoshua Bengio and Meta's Yann LeCun — the latter of whom Hinton characterizes in the interview as thinking that an AI superintelligence will be "no problem."

In 2023, Hinton quit his position at Google and, in a remark that has become characteristic of his newfound role as the industry's Oppenheimer, said that he regretted his life's work while warning of the existential risks posed by the technology — a line he doubled down on during the BBC interview.

"Given this big spectrum of opinions, I think it's wise to be cautious" about developing and regulating AI, Hinton said. "I think there's a chance they'll take control. And it's a significant chance — it's not like one percent, it's much more," he added. "Whether AI goes rogue and tries to take over, is something we may be able to control or we may not, we don't know."

As it stands, military applications of the technology — such as the Israel Defense Forces reportedly using an AI system to pick out airstrike targets in Gaza — are what seem to worry Hinton the most.

"What I'm most concerned about is when these [AIs] can autonomously make the decision to kill people," he told the BBC, admonishing world governments for their lack of willingness to regulate this area.

Jobs Poorly Done

A believer in universal basic income, Hinton also said he's "worried about AI taking over mundane jobs." Automation would boost productivity, Hinton added, but the gains in wealth would go disproportionately to the wealthy rather than to those whose jobs were destroyed.

If it's any consolation, Hinton doesn't think that a rogue AI takeover of humanity is a totally foregone conclusion — only that AI will eventually be smarter than us. Still, you could argue that the profit-driven companies that are developing AI models aren't the most trustworthy stewards of the tech's safe development.

OpenAI, which has a history of ethical flip-flopping, was recently criticized by a former safety worker after he lost faith that the company would responsibly develop a superintelligent AI. So even if Hinton is a little guilty of doom and gloom, he's certainly not alone.

More on AI: The New ChatGPT Has a Huge Problem in Chinese

Microplastics Found in Every Human Testicle

No ifs, ands, or nuts about it — microplastics are everywhere, including in every human testicle tested for a new study.

What do the pyramids, the oceans, the blood of newborns, and human and canine testicles all have in common?

They've all been found to host cancer-causing microplastics — which, scientists hypothesize, may also be why sperm counts have been diminishing for decades.

A new paper published in the journal Toxicological Sciences describes alarming results from a study that tested testicle samples from 23 humans and 47 pet dogs, finding microplastics in every single subject: an average of 330 micrograms of microplastics per gram of tissue in the human samples, and 123 micrograms per gram in the dogs.

"At the beginning, I doubted whether microplastics could penetrate the reproductive system," paper coauthor Xiaozhong Yu told The Guardian. "When I first received the results for dogs I was surprised. I was even more surprised when I received the results for humans."

Besides the jarring prevalence, the team was also concerned about the heightened concentration of polyethylene and PVC found in the human samples, which came from postmortem subjects ranging in age from 16 to 88.

Though this isn't the first study to find microplastics in human testes and semen, the comparison of concentrations between the human and canine samples is novel — and not in a good way.

Though the correlation isn't yet fully understood, some recent mouse studies have found a link between microplastics exposure and reduced sperm counts, and the chemicals released by the pollutants may be associated with hormonal abnormalities and disruptions as well.

That's likely because PVC in particular is, well, super freakin' toxic.

"PVC can release a lot of chemicals that interfere with spermatogenesis," Yu explained, "and it contains chemicals that cause endocrine disruption."

More research is needed, but one thing's for sure: our degradation of the environment has come home to our own bodies, and we're only starting to understand how that will affect us all.

More on nuts: Scientists Grow Teeny Tiny Testicles in Laboratory

Girl Awestruck After Capturing Brilliant Blue Comet By Chance on Camera

In Portugal, a comet shot through the sky and illuminated it turquoise — and one lucky girl captured the entire thing on camera.

Shine Bright

In Portugal, a comet streaked through the sky, illuminating it turquoise — and one lucky girl captured the entire extravaganza on camera.

Posted on X-formerly-Twitter and Instagram, the stunning fireball footage from Portuguese content creator Milena Refacho has gone incredibly viral, with her original post garnering more than six million views as netizens marveled with her at the spectacle.

O meteorito na tuga ["The meteorite in Portugal"] pic.twitter.com/4ZxJ50ZFIo

— Milena Refacho (@milarefacho) May 19, 2024

In the video, the 19-year-old Refacho's friends can be heard exclaiming to the heavens — and, in one hilarious instance, to hell — in Portuguese at the surprise light show.

Later, the European Space Agency confirmed that a comet fragment had indeed paraded through the skies of Portugal and Spain, and that it appeared to have burned up in the atmosphere, resulting in the fantastical blue-green explosion that lit up the sky near midnight local time. It's unlikely that any of the fragments survived the fiery descent, the ESA added.

Comet Conundrum

While it's certainly not uncommon for such space projectiles to leave brilliant tails behind them as they burn up in our planet's atmosphere, this comet fragment's descent was extra bright, the New York Times explains, because it was careening along at around 100,000 miles per hour — twice the average speed of a rocky asteroid, which appears to have made for a correspondingly brighter display.

As the NYT adds in its write-up, the ice-rock composition of most comets suggests they were born at the dawn of our Solar System, which makes this one's incredible final display all the more special.

In an interview with the newspaper, planetary astronomer Meg Schwamb of Queen's University Belfast said that although there are "notable meteor showers throughout the year, which are the result of the Earth crossing debris clouds of specific comets," the brilliance of this comet may tell scientists something about its size.

This chunk, Schwamb said, "is likely a bit bigger than a good fraction of the meteors we see during meteor showers, so this just made a bigger light show."

"It’s an unexpected interplanetary fireworks show," the astronomer added.

More on hunks of space: Giant Piece of Space Junk Crashes Down on Farm of Canadian, Who Intends to Sell It and Spend Money on Hockey Rink

The New ChatGPT Has a Huge Problem in Chinese

A data training failure has resulted in OpenAI's new GPT-4o model spitting out spam and porn-littered Chinese-language responses.

Dirty Data

A pollution problem with OpenAI training data has rendered its new chatbot's Chinese outputs chock-full of porn and spam, the MIT Technology Review reports.

Last week, OpenAI released GPT-4o, a decidedly flirty new large language model (LLM) equipped with new and advanced capabilities — for example, the ability to "see" through users' device cameras, as well as the power to converse out loud in real time. But for all of GPT-4o's apparent advancements, it seems to have at least one massive blind spot: the Chinese language.

To train AI models, you need tokens: units of data that the model uses to "read" and learn. According to MIT Tech, AI researchers were quick to discover that nearly all of the 100 longest Chinese-language tokens the AI uses to decipher Chinese prompts were composed of spammy porn and gambling content — resulting in bizarre, smut- and spam-ridden responses to completely run-of-the-mill queries.
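Why do long spam phrases become tokens at all? Tokenizer vocabularies are learned from the most frequent character sequences in the training corpus, so a phrase that floods scraped web data can get merged into one long token. Here's a minimal sketch of the idea using a toy greedy longest-match tokenizer with a hypothetical vocabulary — not OpenAI's actual tokenizer:

```python
# Toy illustration of how a frequent spam phrase can collapse into a
# single long token. The vocabularies below are hypothetical examples,
# not real tokenizer vocabularies.

def tokenize(text, vocab):
    """Greedily match the longest known token at each position."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest candidate first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

clean_vocab = {"how", " are", " you"}
# A corpus flooded with spam would teach the tokenizer the whole phrase:
spammy_vocab = clean_vocab | {"free casino bonus"}

print(tokenize("how are you", clean_vocab))        # → ['how', ' are', ' you']
print(tokenize("free casino bonus", spammy_vocab)) # → ['free casino bonus']
```

In a real byte-pair-encoding setup, those merges are learned automatically from corpus frequencies — which is exactly why uncleaned, spam-heavy Chinese web data could produce the long polluted tokens researchers found.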

"This is sort of ridiculous," Tianle Cai, an AI researcher and PhD candidate at Princeton, wrote in a Github post showcasing the polluted tokens.

Unforced Error

The worst part? According to experts, the problem of uncleaned data is a well-known AI training hurdle — and likely wouldn't have been too hard to fix.

"Every spam problem has a solution," Deedy Das, an AI investor at Menlo Ventures who formerly worked on Google's Search team, told MIT Tech, adding that just auto-translating tokenized content to detect certain problematic keywords could feasibly "get you 60 percent of the way" to a clean dataset.

"At the end of the day," he continued, "I just don't think they did the work in this case."

"The English tokens seem fine," Cai, the Princeton researcher, told MIT Tech, "but the Chinese ones are not."

In other words, the likeliest reason for OpenAI's error is that ensuring its Chinese-language tokens were mostly free of porn and gambling spam just didn't make the to-do list.

It's a bad look for OpenAI. Chinese has more native speakers than any other language on the planet. And numbers aside, if the future of the internet does indeed center on AI-generated material — as opposed to human-built websites, communities, and worlds — then errors like failing to ensure that a premier chatbot can parse the native language of over one billion humans mean that those people, not to mention entire cultures, inherently get left out.

That is to say, let's hope this is a learning moment.

More on AI and non-English languages: Huge Proportion of Internet Is AI-Generated Slime, Researchers Find
