Ex-FBI Agent: Elon Musk’s Drug Habit Made Him an Easy Target for Russian Spies

Elon Musk's well-documented drug use made him an easy target for Russian secret service agents, a former FBI agent says.

Elon Musk's well-documented drug use made him an easy target for Russian secret service agents, former FBI agent Johnathan Buma told German television broadcaster ZDF during a recently aired documentary.

Buma said there was evidence that both he and fellow billionaire Peter Thiel were targeted by Russian operatives.

"Musk's susceptibility to promiscuous women and drug use, in particular ketamine, and his gravitation towards club life... would have been seen by Russian intelligence service as an entry point for an operative to be sent in after studying their psychological profile and find a way to bump into them, and quickly brought in to their inner circle," Buma told ZDF.

"I'm not allowed to discuss the details of exactly how we obtained this information," he added. "But there's a vast amount of evidence to support this fact."

Buma also corroborated the Wall Street Journal's reporting last year, which found that Musk was in frequent contact with Russian president Vladimir Putin.

The news comes after a notable shift by Musk, who supplied Ukraine with thousands of SpaceX Starlink terminals in 2022. Not long after, the mercurial CEO grew wary of the additional costs his space firm was shouldering, arguing it was "unreasonable" for the company to keep supporting the growing data usage.

He reportedly met with Putin several times thereafter, something Musk has since denied.

Walter Isaacson's 2023 biography of Musk also revealed that the billionaire had intentionally hamstrung a Ukrainian attack on Russia's naval fleet near the Crimean coast.

Meanwhile, Musk's ample medicinal and recreational use of ketamine has drawn plenty of attention. Earlier this year, The Atlantic reported that the drug can make users feel as though they're in charge of the whole world.

Psychopharmacology researcher Celia Morgan told the magazine at the time that those who frequently use ketamine can have "profound" short- and long-term memory issues and were "distinctly dissociated in their day-to-day existence."

In other words, it could provide Russian agents with a perfect opportunity to get closer to Musk, as Buma suggests.

It's a particularly sensitive subject. Buma was arrested shortly after his interview with ZDF in March; his passport was confiscated, and he was released on bail.

To Buma, it's the "greatest failing" of the United States' counterespionage efforts.

Despite his popularity dropping off a cliff due to his embrace of far-right extremist ideals and his work for the so-called Department of Government Efficiency, Musk maintains plenty of influence in Washington, DC.

Earlier this month, he traveled to the Middle East alongside president Donald Trump, meeting Qatari officials and dozens of CEOs.

The former FBI agent's comments leave plenty of questions unanswered. Does Putin's spy agency have dirt on the mercurial CEO? Could they be blackmailing him?

Put simply, could Musk really be compromised?

Considering the stakes, it's unlikely we'll ever get any clear-cut answers. But given his penchant for partying and using mind-altering drugs, he's certainly not the most difficult target to get close to for foreign operatives.

More on Musk: Elon Musk’s AI Just Went There

Astronomers Confused to Discover That a Bunch of Nearby Galaxies Are Pointing Directly at Us

The dwarf galaxies surrounding Andromeda, the closest major galaxy to our own, have an extremely strange distribution that's puzzling astronomers.

Just as the Earth keeps the Moon bound on a gravitational tether, our nearest large galactic neighbor, the Andromeda galaxy (M31), is surrounded by a bunch of tiny satellite galaxies.

But there's something incredibly strange about how these mini realms are arranged, according to a new study published in the journal Nature Astronomy: almost all of the satellite galaxies sit on one side of their host and point right at us — the Milky Way — instead of being randomly distributed.

In other words, it's extremely lopsided. Based on simulations, the authors calculate that the odds of this happening are just 0.3 percent, challenging our assumptions about galaxy formation.

"M31 is the only system that we know of that demonstrates such an extreme degree of asymmetry," lead author Kosuke Jamie Kanehisa at the Leibniz Institute for Astrophysics Potsdam in Germany told Space.com.

Our current understanding of cosmology holds that large galaxies form from smaller galaxies that merge together over time. Orchestrating this from the shadows are "haloes" — essentially clusters — of dark matter, the invisible substance thought to account for 85 percent of all mass in the universe, whose gravitational influence helps pull the galaxies together. Since this process is chaotic, some of the dwarf galaxies get left out and are relegated to orbit outside the host galaxy in an arrangement that should be pretty random.

Yet that doesn't seem to be the case with Andromeda. All but one of Andromeda's 37 satellite galaxies sit within 107 degrees of the line pointing at the Milky Way. Stranger still, half of these galaxies orbit within the same plane, much as the planets of our Solar System orbit the Sun.
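
For a rough sense of just how unlikely that arrangement would be if the satellites were scattered at random, here's a minimal back-of-the-envelope sketch in Python. It assumes a purely isotropic distribution (each satellite equally likely to land anywhere on the sky around Andromeda), which is not how the study's authors computed their 0.3 percent figure; they used full cosmological simulations, described below. The numbers (37 satellites, a 107-degree cone) come from the article; everything else is illustrative.

```python
from math import comb, cos, radians

N = 37            # Andromeda's known satellite galaxies (per the article)
HALF_ANGLE = 107  # degrees from the line pointing toward the Milky Way

# Under an isotropic (purely random) distribution, the chance that a single
# satellite falls within HALF_ANGLE of a fixed axis is the fraction of the
# sky covered by that cone: (1 - cos(angle)) / 2.
p = (1 - cos(radians(HALF_ANGLE))) / 2

# Probability that at least 36 of the 37 satellites land inside the cone
# (binomial tail over k = 36 and k = 37).
prob = sum(comb(N, k) * p**k * (1 - p) ** (N - k) for k in (N - 1, N))

print(f"single-satellite probability: {p:.3f}")         # roughly 0.646
print(f"P(at least 36 of 37 inside cone): {prob:.1e}")  # roughly 2e-06
```

Under that naive assumption the arrangement comes out at around two in a million, even rarer than the 0.3 percent the authors report from realistic simulations, which allow for the correlated, non-random ways satellites actually fall into orbit.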

To gauge how improbable this is, the astronomers used standard cosmological simulations, which recreate how galaxies form over time, and compared the simulated analogs to observations of Andromeda. Less than 0.3 percent of Andromeda-like galaxies in the simulations showed comparable asymmetry, the astronomers found, and only one came close to being as extreme.

One explanation is that there could be a great number of dwarf galaxies around Andromeda that we can't yet see, giving us an incomplete picture of the satellites' distribution. The data we have on the satellites we can see may also be imperfect.

Or perhaps, Kanehisa speculates, there's something unique about Andromeda's evolutionary history. 

"The fact that we see M31's satellites in this unstable configuration today — which is strange, to say the least — may point towards many having fallen in recently," Kanehisa told Space.com, "possibly related to the major merger thought to have been experienced by Andromeda around two to three billion years ago."

But the most provocative implication is that the standard cosmological model as we know it needs refining. We have very limited data on satellite galaxies throughout the cosmos, since they are incredibly far away and are outshone by the light of their hosts. Maybe, then, the configuration of Andromeda's dwarf galaxies isn't anomalous at all. 

"We can't yet be sure that similar extreme systems don't exist out there, or that such systems would be negligibly rare," Kanehisa told Space.com.

It's too early to draw any hard conclusions, but one thing's for certain: we need more observations and data on not just Andromeda's satellites, but on the satellites of much more distant galaxies as well.

More on space: An AI Identifies Where All Those Planets That Could Host Life Are Hiding

Experts Concerned That AI Is Making Us Stupider

A new analysis suggests that we stand to lose far more than we gain by shoehorning AI into our day-to-day work.

Artificial intelligence might be creeping its way into every facet of our lives — but that doesn't mean it's making us smarter.

Quite the reverse. A new analysis by The Guardian of recent research examined a potential irony: whether we're giving up more than we gain by shoehorning AI into our day-to-day work, offloading so many intellectual tasks that it erodes our own cognitive abilities.

The analysis points to a number of studies that suggest a link between cognitive decline and AI tools, especially in critical thinking. One research article, published in the journal Frontiers in Psychology — and itself run through ChatGPT to make "corrections," according to a disclaimer that we couldn't help but notice — suggests that regular use of AI may cause our actual cognitive chops and memory capacity to atrophy.

Another study, by Michael Gerlich of the Swiss Business School in the journal Societies, points to a link between "frequent AI tool usage and critical thinking abilities," highlighting what Gerlich calls the "cognitive costs of AI tool reliance."

The researcher uses an example of AI in healthcare, where automated systems make a hospital more efficient at the cost of full-time professionals whose job is "to engage in independent critical analysis" — to make human decisions, in other words.

None of that is as far-fetched as it sounds. A broad body of research has found that brain power is a "use it or lose it" asset, so it makes sense that turning to ChatGPT for everyday challenges like writing tricky emails, doing research, or solving problems would have negative results.

As humans offload increasingly complex problems onto various AI models, we also become prone to treating AI like a "magic box," a catch-all capable of doing all our hard thinking for us. This attitude is heavily pushed by the AI industry, which uses a blend of buzzy technical terms and marketing hype to sell us on ideas like "deep learning," "reasoning," and "artificial general intelligence."

Case in point: another recent study found that a quarter of Gen Zers believe AI is "already conscious." Trained on vast amounts of publicly available data, AI chatbots can spit out seemingly thoughtful prose in seconds, which certainly gives the appearance of human-like sentience. But it's exactly that attitude that experts warn is leading us down a dark path.

"To be critical of AI is difficult — you have to be disciplined," says Gerlich. "It is very challenging not to offload your critical thinking to these machines."

The Guardian's analysis also cautions against painting with too broad a brush and blaming AI, exclusively, for the decline in basic measures of intelligence. That phenomenon has plagued Western nations since the 1980s, coinciding with the rise of neoliberal economic policies that led governments in the US and UK to roll back funding for public schools, disempower teachers, and end childhood food programs.

Still, it's hard to deny stories from teachers that AI cheating is nearing crisis levels. AI might not have started the trend, but it may well be pushing it to grim new extremes.

More on AI: Columbia Student Kicked Out for Creating AI to Cheat, Raises Millions to Turn It Into a Startup

Scientists Successfully Grow Human Tooth in Lab, With Aim of Implanting in Humans

Scientists at King's College London, UK, say they've successfully grown a human tooth in a lab for the first time.

Scientists at King's College London say they've successfully grown a human tooth in a lab for the first time.

As detailed in a paper published in the journal ACS Macro Letters, the team said it uncovered a potential way to regrow teeth in humans as a natural alternative to conventional dental fillings and implants, research they say could "revolutionize dental care."

The researchers claim they've developed a new type of material that enables cells to communicate with one another, essentially allowing one cell to "tell" another to differentiate itself into a new tooth cell.

In other words, it mimics the way teeth grow naturally, an ability we lose as we grow older.

"We developed this material in collaboration with Imperial College to replicate the environment around the cells in the body, known as the matrix," explained author and King’s College London PhD student Xuechen Zhang in a statement. "This meant that when we introduced the cultured cells, they were able to send signals to each other to start the tooth formation process."

"Previous attempts had failed, as all the signals were sent in one go," he added. "This new material releases signals slowly over time, replicating what happens in the body."

However, porting the discovery out of the lab and transforming it into a viable treatment will require years of research.

"We have different ideas to put the teeth inside the mouth," Xuechen said."We could transplant the young tooth cells at the location of the missing tooth and let them grow inside mouth. Alternatively, we could create the whole tooth in the lab before placing it in the patient’s mouth."

While we're still some ways away from applying the findings to human subjects, in theory the approach could have some significant advantages over conventional treatments like fillings and implants.

"Fillings aren’t the best solution for repairing teeth," said Xuechen. "Over time, they will weaken tooth structure, have a limited lifespan, and can lead to further decay or sensitivity."

"Implants require invasive surgery and good combination of implants and alveolar bone," he added. "Both solutions are artificial and don’t fully restore natural tooth function, potentially leading to long-term complications."

The new approach, in contrast, could offer a better long-term solution.

"Lab-grown teeth would naturally regenerate, integrating into the jaw as real teeth," Xuechen explained. "They would be stronger, longer lasting, and free from rejection risks, offering a more durable and biologically compatible solution than fillings or implants."

While nobody knows whether lab-grown teeth will become a viable dental treatment, experts remain optimistic.

"This new technology of regrowing teeth is very exciting and could be a game-changer for dentists," King's College clinical lecturer in prosthodontics Saoirse O'Toole, who was not involved in the study, told the BBC. "Will it come in my lifetime of practice? Possibly. In my children's dental lifetimes? Maybe. But in my children's children's lifetimes, hopefully."

More on lab teeth: Scientists Grow Living "Replacement Teeth" for Dental Implants

Zuckerberg Tells Court That Facebook Is No Longer About Connecting With Friends

In federal antitrust testimony, Zuckerberg admitted that Facebook's mission of connecting users is no longer a priority.

As times change, so do mission statements, especially in the fast-and-loose world of tech. In recent months, we've seen Google walk back its famous "don't be evil" pledge, and OpenAI quietly delete a policy prohibiting its software's use for "military technology."

Mark Zuckerberg's Facebook is no exception. Its 2008 motto, "Facebook helps you connect and share with the people in your life," is now a distant memory — according to Zuckerberg himself, who testified this week that Facebook's main purpose "wasn't really to connect with friends anymore."

"The friend part has gone down quite a bit," Zuckerberg said, according to Business Insider.

Instead, he says that the platform has evolved away from that model — its original claim to fame, as old heads will recall — in its over 20 years of life, becoming "more of a broad discovery and entertainment space," which is apparently exec-speak for "endless feed of AI slop."

The tech bigwig was speaking as a witness in a federal antitrust case launched by the Federal Trade Commission against Meta, the parent company of WhatsApp, Instagram, Threads, and Oculus.

The FTC's case hinges on a series of messages sent by Zuckerberg and his executives regarding a strategy of buying rival social media platforms outright rather than competing with them on the free and open market — a scheme that's more the rule than the exception for Silicon Valley whales like Google, Amazon, and Microsoft.

The FTC alleges that Meta began its monopolistic streak as early as 2008, when Zuckerberg wrote in an email that "it's better to buy than compete." He finally got his hands on Instagram in 2012, after sending a memo saying that Facebook (which changed its name to Meta in 2021) "had" to buy the photo-sharing app for $1 billion, fearing competition and a bidding war with fast-growing platforms like Twitter.

"The businesses are nascent but the networks are established," Zuckerberg wrote in a leaked email about startup platforms Instagram and Path. "The brands are already meaningful and if they grow to a large scale they could be very disruptive to us."

"It’s an email written by someone who recognized Instagram as a threat and was forced to sacrifice a billion dollars because Meta could not meet that threat through competition,” said the FTC’s lead counselor, Daniel Matheson.

Those internal memos are now smoking guns in what could be the biggest antitrust case since the infamous AT&T breakup of 1982, which had many similarities to the FTC's suit against Meta. Back then, AT&T held unrivaled market influence that it used to box out smaller fish and shape laws to its whims — to chase profit above all, in other words.

Meta, in parallel, has spent millions lobbying lawmakers, is the dominant player in online advertising, and currently wields a market cap of $1.34 trillion — higher than the value of all publicly traded companies in South Korea, for perspective.

The FTC's challenge will depend on whether the agency's attorneys can convince US District Judge James Boasberg that Meta's acquisitions of Instagram and WhatsApp were illegal under notoriously weak US antitrust standards. They'll have no help from Boasberg, an Obama appointee who has voiced skepticism about the case against Meta in the past.

"The [FTC] faces hard questions about whether its claims can hold up in the crucible of trial," Boasberg said in late 2024, adding that "its positions at times strain this country’s creaking antitrust precedents to their limits."

Whatever happens, it's clear that Zuckerberg has moved on from the idealism of the early internet — to the sloppified money-grubbing of whatever it is we have now.

More on Meta: Facebook Is Desperately Trying to Keep You From Learning What's in This Book

A Mother Says an AI Startup’s Chatbot Drove Her Son to Suicide. Its Response: the First Amendment Protects "Speech Allegedly Resulting in Suicide"

Character.AI says the First Amendment protects it against liability for "allegedly harmful speech, including speech allegedly resulting in suicide."

Content warning: this story discusses suicide, self-harm, sexual abuse, eating disorders and other disturbing topics.

In October of last year, a Google-backed startup called Character.AI was hit by a lawsuit making an eyebrow-raising claim: that one of its chatbots had driven a 14-year-old high school student to suicide.

As Futurism's reporting found afterward, the behavior of Character.AI's chatbots can indeed be deeply alarming — and clearly inappropriate for underage users — in ways that both corroborate and augment the suit's concerns. Among other things, we found chatbots on the service designed to roleplay scenarios of suicidal ideation, self-harm, school shootings, and child sexual abuse, as well as to encourage eating disorders. (The company has responded to our reporting piecemeal, by taking down individual bots we flagged, but it's still trivially easy to find nauseating content on its platform.)

Now, Character.AI — which received a $2.7 billion cash injection from tech giant Google last year — has responded to the suit, brought by the boy's mother, in a motion to dismiss. Its defense? Basically, that the First Amendment protects it against liability for "allegedly harmful speech, including speech allegedly resulting in suicide."

In TechCrunch's analysis, the motion to dismiss may not be successful, but it likely provides a glimpse of Character.AI's planned defense. (It's now facing an additional suit, brought by more parents who say their children were harmed by interactions with the site's bots.)

Essentially, Character.AI's legal team is saying that holding it accountable for the actions of its chatbots would restrict its users' right to free speech — a claim that it connects to prior attempts to crack down on other controversial media like violent video games and music.

"Like earlier dismissed suits about music, movies, television, and video games," reads the motion, the case "squarely alleges that a user was harmed by speech and seeks sweeping relief that would restrict the public’s right to receive protected speech."

Of course, there are key differences that the court will have to contend with. The output of Character.AI's bots isn't a finite work created by human artists, like Grand Theft Auto or an album by Judas Priest, both of which have been targets of legal action in the past. Instead, it's an AI system that users engage to produce a limitless variety of conversations.

A Grand Theft Auto game might contain reprehensible material, in other words, but it was created by human artists and developers to express an artistic vision; a service like Character.AI is a statistical model that can output more or less anything based on its training data, far outside the control of its human creators.

In a bigger sense, the motion illustrates a tension for AI outfits like Character.AI: unless the AI industry can find a way to reliably control its tech — a quest that's so far eluded even its most powerful players — some of the interactions users have with its products are going to be abhorrent, either by the users' design or when the chatbots inevitably go off the rails.

After all, Character.AI has made changes in response to the lawsuits and our reporting, by pulling down offensive chatbots and tweaking its tech in an effort to serve less objectionable material to underage users.

So while it's actively taking steps to get its sometimes-unconscionable AI under control, it's also saying that any legal attempts to curtail its tech fall afoul of the First Amendment.

It's worth asking where the line actually falls. A pedophile convicted of sex crimes against children can't use the excuse that they were simply exercising their right to free speech; Character.AI is actively hosting chatbots designed to prey on users who say they're underage. At some point, the law presumably has to step in.

Add it all up, and the company is walking a delicate line: actively catering to underage users — and publicly expressing concern for their wellbeing — while vociferously fighting any legal attempt to regulate its AI's behavior toward them.

"C.AI cares deeply about the wellbeing of its users and extends its sincerest sympathies to Plaintiff for the tragic death of her son," reads the motion. "But the relief Plaintiff seeks would impose liability for expressive content and violate the rights of millions of C.AI users to engage in and receive protected speech."

More on Character.AI: Embattled Character.AI Hiring Trust and Safety Staff

Crypto Guy Buys $6.2 Million Banana, Eats It on the Spot

A crypto baron who spent a whopping $6.2 million on a banana duct-taped to a wall has eaten it at a flashy event in Hong Kong.

Money in the Banana Stand

A crypto baron spent a whopping $6.2 million on a banana duct-taped to a wall — a conceptual artwork titled "Comedian" by Italian artist Maurizio Cattelan — at a Sotheby's auction in New York last week.

Instead of "hodling," by allowing his unusual investment to grow in value, entrepreneur Justin Sun did the opposite: eating the banana in front of a group of attendees, as the Guardian reports.

"Eating it at a press conference can also become a part of the artwork’s history," he told the crowd during a flashy event at a Hong Kong luxury hotel last week.

At least the world's most expensive banana was fresh enough for Sun to enjoy eating it.

"It’s much better than other bananas," he claimed. "It’s really quite good."

Some Potassium

To be clear, Cattelan always intended the controversial artwork to spark a conversation around what we can feasibly call "art." In other words, Sun's unusual strategy as an art collector isn't quite as bizarre as it sounds.

He did, however, take the eyebrow-raising concept to its depressing conclusion by comparing the banana to a non-fungible token (NFT).

"Most of its objects and ideas exist as (intellectual property) and on the internet, as opposed to something physical," he said, as quoted by the Guardian.

It wasn't his only questionable investment, either; Sun also disclosed to regulators that he would back US president-elect Donald Trump by pouring $30 million into the former reality TV star's dubious crypto project, World Liberty Financial.

The US Securities and Exchange Commission has also charged Sun with selling unregistered securities. It's an unsurprising development, given the sheer number of crypto founders currently being sued over securities fraud.

The crypto baron has also vowed to buy 100,000 bananas from Shah Alam, the owner of the New York City fruit stand that originally sold Cattelan the fateful banana for less than a dollar.

"Through this event, we aim not only to support the fruit stand and Mr. Shah Alam but also to connect the artistic significance of the banana to everyone," Sun told the crowd during the event last week.

More on bananas: Bananas May Go Extinct From Deadly Disease, Scientists Warn

NASA’s Lunar Space Station Just Took a Massive Step Towards Launching

A core component of NASA's Gateway lunar space station just passed a grueling round of pressure tests, a big win for the project.

Under Pressure

NASA announced yesterday that its forthcoming Gateway lunar space station — an outpost designed to house astronauts in the Moon's orbit — just passed a critical milestone.

According to the agency, Gateway's Habitation and Logistics Outpost (HALO) successfully passed a grueling round of "static load testing," defined by NASA as a "rigorous stress test of how well the structure responds to the forces encountered in deep space."

In other words, HALO won't crumble or crack under the extreme conditions it'll face in lunar orbit.

"Static load testing is one of the major environmental stress tests HALO will undergo," NASA continues in its announcement, adding that HALO, which is currently in Italy, will be transferred to Arizona "once all phases of testing are complete." There, NASA contractor Northrop Grumman will add HALO's finishing touches.

HALO is one of "four pressurized Gateway modules where astronauts will live, conduct science, and prepare for missions to the lunar South Pole region," per NASA's announcement.

It's an exciting milestone for Gateway, which stands to establish the first sustained human presence on and around our Moon — one of the core goals of NASA's ongoing Artemis program, and perhaps a stepping stone in the effort to send humans to Mars.

Looking Ahead

While the stress test was a key breakthrough for the Gateway mission, it's still a ways off from lift-off.

The outpost will launch in pieces, and the first components to take flight — HALO and the Power and Propulsion Element (PPE) — are slated for launch aboard a SpaceX Falcon Heavy rocket in December 2027 at the earliest. By conservative estimates, Gateway is not expected to be inhabited until 2028.

It's an ambitious plan and there's always a chance of delays. In the meantime, it's heartening to see NASA's Gateway, piece by piece, move forward.

"Gateway is humanity's first lunar space station supporting a new era of exploration and scientific discovery as part of NASA's Artemis campaign that will establish a sustained presence on and around the Moon," said NASA of the achievement, "paving the way for the first crewed mission to Mars."

More on the Artemis missions: NASA's Moon Launcher Is in Big Trouble

NASA Will Attempt to Launch Boeing’s Troubled Starliner Away From Space Station as Fast as Possible, Just in Case

NASA is looking to get Boeing's plagued Starliner away from the space station as fast as possible to ensure that it doesn't lose control.

Last month, NASA officially announced that Boeing's plagued Starliner is returning without a crew on board.

The decision, which came as a black eye to the embattled aerospace giant, means that stranded NASA astronauts Butch Wilmore and Suni Williams will instead return to Earth aboard SpaceX's Crew-9 mission in February.

Later this week, likely on Friday evening, the space agency will attempt to have the faulty spacecraft undock from the International Space Station autonomously and eventually reenter the atmosphere.

And it sounds like NASA will be playing it as safe as possible. With the helium leaks affecting Starliner's propulsion system, the agency is looking to get the capsule away from the space station as fast as possible to ensure that it doesn't careen out of control — or, in a worst case scenario hypothesized by experts, even crash into the station.

During a teleconference today, NASA officials laid out the plan. The agency has chosen to have Starliner perform a "breakout burn" which, according to NASA's Johnson Space Center lead flight director Anthony Vareha, is a "series of 12 burns, each not very large, about one Newton meter per second each."

"It's a quicker way away from Station, way less stress on the thrusters," added NASA commercial crew program manager Steve Stich.

The original plan involved having the spacecraft perform a "dress rehearsal" for a "fly-around inspection" of the space station. That's something NASA is requiring both Starliner and SpaceX's Crew Dragon to be able to perform before being certified, as part of its Commercial Crew program.

"The reason we chose doing this breakout burn is simply it gets the vehicle away from Station faster and, without the crew on board, able to take manual control if needed," Vareha explained. "There's just a lot less variables we need to account for when we do the breakout burn and allows us to get the vehicle on its trajectory home that much sooner."

During testing at NASA’s White Sands Test Facility in New Mexico earlier this summer, engineers discovered that a Teflon seal in a valve known as a "poppet" had expanded as it was being heated by the nearby thrusters. The seal was found to significantly constrain the flow of the oxidizer, greatly cutting into the thrusters' performance.

As a result, NASA plans to go extremely easy on the thrusters during its upcoming attempt to return Starliner.

When asked how confident he was in Starliner's ability to one day return to space, Stich appeared optimistic.

"We know that the thrusters work well when we don't command them in a manner that overheats them and gets the poppet to swell on the oxide," he explained. "We know that the thruster is a viable thruster, it's a good component," but the goal is to "not overheat it."

In other words, the space agency is far from giving up on Starliner, despite an extremely messy and potentially disastrous first crewed test flight.

NASA has openly discussed what it has learned from previous spaceflight disasters. During NASA's announcement that Starliner would come back empty last month, NASA’s chief of safety and mission assurance Russ DeLoach went as far as to invoke the agency's fatal Challenger and Columbia shuttle disasters in 1986 and 2003 respectively.

In short, Starliner's return to the ISS still appears to be on the table, no matter how far off such a mission may be at this point. That of course also depends on how successful NASA is in getting Starliner back on the ground.

The agency will be looking to "fill in some of the gaps we had in qualification," Stich said, adding that teams are already looking for ways to get Starliner "fully qualified in the future."

More on Starliner: NASA Engineers Were Disturbed by What Happened When They Tested Starliner's Thrusters

Former CEO Blames Working From Home for Google’s AI Struggles, Regrets It Immediately

Billionaire ex-Google CEO Eric Schmidt is walking back his questionable claim that remote work is to blame for Google's AI failures.

Eyes Will Roll

Ex-Google CEO Eric Schmidt is walking back his questionable claim that remote work is to blame for Google slipping behind OpenAI in Silicon Valley's ongoing AI race.

On Tuesday, Stanford University published a YouTube video of a recent talk that Schmidt gave at the university's School of Engineering. During that talk, when asked why Google was falling behind other AI firms, Schmidt declared that Google's AI failures stem from its decision to let its staffers enjoy remote work and, with it, a bit of "work-life balance."

"Google decided that work-life balance and going home early and working from home was more important than winning," the ex-Googler told the classroom. "And the reason startups work is because people work like hell."

The comment understandably sparked criticism. After all, work-life balance is important, and Google isn't a startup.

And it didn't take long for Schmidt to eat his words.

"I misspoke about Google and their work hours," Schmidt told The Wall Street Journal in an emailed statement. "I regret my error."

"In a Stanford talk posted today, Eric Schmidt says the reason why Google is losing to @OpenAI and other startups is because Google only has people coming in 1 day per week," Alex Kehr (@alexkehr) posted on X on August 13, 2024, sharing a clip of the talk.

Ctrl Alt Delete

In the year 2024, Google is one of the most influential tech giants on the planet, and a federal judge in Washington DC ruled just last week that Google has monopoly power over the online search market. Its pockets are insanely deep, meaning that it can compete in the industry talent war and devote a ridiculous amount of resources to its AI efforts.

What it didn't do, though, was publicly release a chatbot before OpenAI did. OpenAI, which arguably isn't exactly a startup anymore either, was the first to wrench open that Pandora's box — and Google has been playing catch-up ever since.

So in other words, not sleeping on the floors of Google's lavish facilities isn't exactly the problem here.

In a Wednesday statement on X-formerly-Twitter, the Alphabet Workers Union declared in response to Schmidt's comments that "flexible work arrangements don't slow down our work."

"Understaffing, shifting priorities, constant layoffs, stagnant wages and lack of follow-through from management on projects," the statement continued, "these factors slow Google workers down every day."

Later on Wednesday, as reported by The Verge, Stanford removed the video of Schmidt's talk from YouTube upon the billionaire's request.

More on Google AI: Google's Demo of Its Latest AI Tech Was an Absolute Train Wreck
