Startup Claims Its "Superwood" Is Stronger Than Steel

A startup is claiming that its new product is stronger than steel, but it'll be a long time until we know for sure.

A new startup claims it can mass-produce "Superwood," a material that's stronger and lighter than steel, with 90 percent lower carbon emissions compared to the widely used alloy.

InventWood, the company behind the material, says its new product "has the capacity to substitute up to 80 percent of steel used globally," and "reduce greenhouse gas emissions by over 2 gigatons."

According to the firm, its Superwood has up to 50 percent more tensile strength than steel, "minimal expansion and contraction," and a Class A fire rating — all crucial details for architects who might someday use the material in their buildings.

As TechCrunch reports, the startup has already secured $15 million in a Series A funding round, which its founder, materials scientist Liangbing Hu, says will go toward the company's first factory.

The whole thing came about in 2018, when Hu published a landmark research paper detailing how to "transform bulk natural wood directly into a high-performance structural material."

It's a relatively simple process that involves boiling wood in a mixture of lye and sodium sulfite — widely available compounds often used as additives in industrial food operations.

Hu's paper says the strategy has been found to be "universally effective" for all species of lumber.

The research has been used to launch startups like Cambium, a "global tech platform for recycled wood," as Hu worked to refine the process and launch his own commercial venture.

But whether Superwood lives up to its founder's claims is another story.

The materials science industry moves at a snail's pace, thanks to the many factors involved in approving new products for use in buildings. The caution only increases with lumber, which could suffer unforeseen changes due to time, moisture, heat, stress, and transportation, according to the Construction Specifier, a trade publication.

Cross-laminated timber (CLT), for example, while used in Europe for decades, has had a tough time catching on in the US, as untested American manufacturers rush their products to market and architects struggle to find construction firms with CLT experience.

That's led to incidents like the Peavy Hall collapse at Oregon State University, where a 1,000-pound section of a newly constructed CLT building caved in on a lower floor.

Prior to the collapse, manufacturers charmed regulators and project managers with similar promises of their product's strength and environmental impact.

At the moment, InventWood appears to be taking it slow, building out its business by selling Superwood as a decorative material rather than as structural beams.

"Right now, coming out of this first-of-a-kind commercial plant — so it’s a smaller plant — we're focused on skin applications," the company's CEO Alex Lau told TechCrunch, referencing building skins. "Eventually we want to get to the bones of the building."

Whether it does will depend on gaining the confidence of architects and engineers, a process that will require years of patience. As with all startups, it's one thing to build something out on paper — now InventWood has to do it for real.

More on startups: Startup Reportedly Claimed Fake Clients as Its AI-Powered Sales Bot Flailed


Experts Concerned That AI Is Making Us Stupider

A new analysis suggests that humans stand to lose far more than we gain by shoehorning AI into our day-to-day work.

Artificial intelligence might be creeping its way into every facet of our lives — but that doesn't mean it's making us smarter.

Quite the reverse. A new analysis by The Guardian of recent research looked at a potential irony: whether we're giving up more than we gain by shoehorning AI into our day-to-day work, offloading so many intellectual tasks that it erodes our own cognitive abilities.

The analysis points to a number of studies that suggest a link between cognitive decline and AI tools, especially in critical thinking. One research article, published in the journal Frontiers in Psychology — and itself run through ChatGPT to make "corrections," according to a disclaimer that we couldn't help but notice — suggests that regular use of AI may cause our actual cognitive chops and memory capacity to atrophy.

Another study, by Michael Gerlich of the Swiss Business School in the journal Societies, points to a link between "frequent AI tool usage and critical thinking abilities," highlighting what Gerlich calls the "cognitive costs of AI tool reliance."

The researcher uses an example of AI in healthcare, where automated systems make a hospital more efficient at the cost of full-time professionals whose job is "to engage in independent critical analysis" — to make human decisions, in other words.

None of that is as far-fetched as it sounds. A broad body of research has found that brain power is a "use it or lose it" asset, so it makes sense that turning to ChatGPT for everyday challenges like writing tricky emails, doing research, or solving problems would have negative results.

As humans offload increasingly complex problems onto various AI models, we also become prone to treating AI like a "magic box," a catch-all capable of doing all our hard thinking for us. This attitude is heavily pushed by the AI industry, which uses a blend of buzzy technical terms and marketing hype to sell us on ideas like "deep learning," "reasoning," and "artificial general intelligence."

Case in point, another recent study found that a quarter of Gen Zers believe AI is "already conscious." By scraping thousands of publicly available datapoints in seconds, AI chatbots can spit out seemingly thoughtful prose, which certainly gives the appearance of human-like sentience. But it's that exact attitude that experts warn is leading us down a dark path.

"To be critical of AI is difficult — you have to be disciplined," says Gerlich. "It is very challenging not to offload your critical thinking to these machines."

The Guardian's analysis also cautions against painting with too broad a brush and blaming AI, exclusively, for the decline in basic measures of intelligence. That phenomenon has plagued Western nations since the 1980s, coinciding with the rise of neoliberal economic policies that led governments in the US and UK to roll back funding for public schools, disempower teachers, and end childhood food programs.

Still, it's hard to deny stories from teachers that AI cheating is nearing crisis levels. AI might not have started the trend, but it may well be pushing it to grim new extremes.

More on AI: Columbia Student Kicked Out for Creating AI to Cheat, Raises Millions to Turn It Into a Startup


A Mother Says an AI Startup's Chatbot Drove Her Son to Suicide. Its Response: the First Amendment Protects "Speech Allegedly Resulting in Suicide"

Character.AI says it's protected against liability for "allegedly harmful speech, including speech allegedly resulting in suicide."

Content warning: this story discusses suicide, self-harm, sexual abuse, eating disorders and other disturbing topics.

In October of last year, a Google-backed startup called Character.AI was hit by a lawsuit making an eyebrow-raising claim: that one of its chatbots had driven a 14-year-old high school student to suicide.

As Futurism's reporting found afterward, the behavior of Character.AI's chatbots can indeed be deeply alarming — and clearly inappropriate for underage users — in ways that both corroborate and augment the suit's concerns. Among other things, we found chatbots on the service designed to roleplay scenarios of suicidal ideation, self-harm, school shootings, and child sexual abuse, as well as to encourage eating disorders. (The company has responded to our reporting piecemeal, taking down individual bots we flagged, but it's still trivially easy to find nauseating content on its platform.)

Now, Character.AI — which received a $2.7 billion cash injection from tech giant Google last year — has responded to the suit, brought by the boy's mother, in a motion to dismiss. Its defense? Basically, that the First Amendment protects it against liability for "allegedly harmful speech, including speech allegedly resulting in suicide."

In TechCrunch's analysis, the motion to dismiss may not be successful, but it likely provides a glimpse of Character.AI's planned defense. (The company is now facing an additional suit, brought by more parents who say their children were harmed by interactions with the site's bots.)

Essentially, Character.AI's legal team is saying that holding it accountable for the actions of its chatbots would restrict its users' right to free speech — a claim that it connects to prior attempts to crack down on other controversial media like violent video games and music.

"Like earlier dismissed suits about music, movies, television, and video games," reads the motion, the case "squarely alleges that a user was harmed by speech and seeks sweeping relief that would restrict the public’s right to receive protected speech."

Of course, there are key differences that the court will have to contend with. The output of Character.AI's bots isn't a finite work created by human artists, like Grand Theft Auto or an album by Judas Priest, both of which have been targets of legal action in the past. Instead, it's an AI system that users engage to produce a limitless variety of conversations.

A Grand Theft Auto game might contain reprehensible material, in other words, but it was created by human artists and developers to express an artistic vision; a service like Character.AI is a statistical model that can output more or less anything based on its training data, far outside the control of its human creators.

In a bigger sense, the motion illustrates a tension for AI outfits like Character.AI: unless the AI industry can find a way to reliably control its tech — a quest that's so far eluded even its most powerful players — some of the interactions users have with its products are going to be abhorrent, either by the users' design or when the chatbots inevitably go off the rails.

After all, Character.AI has made changes in response to the lawsuits and our reporting, by pulling down offensive chatbots and tweaking its tech in an effort to serve less objectionable material to underage users.

So while it's actively taking steps to get its sometimes-unconscionable AI under control, it's also saying that any legal attempts to curtail its tech fall afoul of the First Amendment.

It's worth asking where the line actually falls. A pedophile convicted of sex crimes against children can't use the excuse that they were simply exercising their right to free speech; Character.AI is actively hosting chatbots designed to prey on users who say they're underage. At some point, the law presumably has to step in.

Add it all up, and the company is walking a delicate line: actively catering to underage users — and publicly expressing concern for their wellbeing — while vociferously fighting any legal attempt to regulate its AI's behavior toward them.

"C.AI cares deeply about the wellbeing of its users and extends its sincerest sympathies to Plaintiff for the tragic death of her son," reads the motion. "But the relief Plaintiff seeks would impose liability for expressive content and violate the rights of millions of C.AI users to engage in and receive protected speech."

More on Character.AI: Embattled Character.AI Hiring Trust and Safety Staff


Fusion Startup Conducts Strange Ceremony Involving Woman With Wires Coming Out of Her Back

Spectacular Oracular

Earlier this year in a Silicon Valley warehouse, a nuclear fusion startup held a strange secret ceremony that featured, among other things, a bunch of giant capacitors and a woman with wires attached to her back playing piano alongside a robotic arm.

As Wired reports, attendees at the event hosted by the nuclear fusion startup Fuse included military and intelligence officials, venture capitalists, San Francisco art types, physicists, musicians both robotic and human — and, well, Grimes.

"Grace and luck came together in a freak wave, and people were moved," virtual reality pioneer Jaron Lanier wrote for the magazine. "Grimes was there, gaggle of kids orbiting her on the floor, transfixed. One said this must be what monsters listen to."

Hosted by the supermodel musician Charlotte Kemp Muhl — a multi-hyphenate powerhouse currently touring with St. Vincent and in a long-term relationship with Lanier's old friend Sean Ono Lennon — the event was ostensibly meant to showcase to potential backers the kinds of people Fuse has in its orbit.

Among them is Serene, the self-described hacker pianist who was attached to biofeedback wires during the ceremony, and who also happens to have created Snowflake, the anti-censorship tool built into the Tor browser. Together, she and Muhl launched Finis Musicae, a startup billed as creating "robots for music," which were also on display at the clandestine event.

Fuse Frame

Obviously, none of Lanier's name-dropping sounds like it has anything to do with nuclear fusion — and indeed, there was no fusion on display at the event for the startup, which was founded by JC Btaiche, the son of a Lebanese nuclear physicist, when Btaiche was just 19 years old.

As Btaiche told Lanier, his goal is to become the "SpaceX of fusion" and accomplish "Big Tech"-style achievements for all manner of partners. Given the unnamed members of the attendee rundown, those would-be partners likely had emissaries in attendance.

With another facility already located in Canada — Btaiche is, among other things, a former researcher at McGill and the founder of an ed-tech startup in Montreal — Fuse is clearly laying down roots in Silicon Valley.

As Lanier writes, the region has, for better or for worse, thirsted for this type of spectacle amid the rapid advancements of AI. What better way to give the people what they want than at an event promising another technology that's still in its earliest days?

More on startup world: Startup Says It'll Use Huge Space Mirror to Sell Sunlight During Nighttime


An AI Company Published a Chatbot Based on a Murdered Woman. Her Family Is Outraged.

Character.AI was forced to delete a chatbot avatar of murder victim Jennifer Crecente — but only after her outraged family drew attention to it.

This one's nasty — in one of the more high-profile, macabre incidents involving AI-generated content in recent memory, Character.AI, the chatbot startup founded by ex-Google staffers, was pushed to delete a user-created avatar of an 18-year-old murder victim who was slain by her ex-boyfriend in 2006. The chatbot was taken down only after the outraged family of the woman it was based on drew attention to it on social media.

Character.AI can be used to create chatbot "characters" from any number of sources — be it a user's imagination, a fictional character, or a real person, living or dead. For example, some of the company's bots have been used to mimic Elon Musk, or Taylor Swift. Lonely teens have used Character.AI to create friends for themselves, while others have used it to create AI "therapists." Others have created bots they've deployed to play out sexually explicit (or even sexually violent) scenarios.

For context: This isn't exactly some dark skunkworks program or a nascent startup with limited reach. Character.AI is a ChatGPT competitor started by ex-Google staffers in late 2021, backed by kingmaker VC firm Andreessen Horowitz to the tune of a billion-dollar valuation. Per AdWeek, which first reported the story, Character.AI boasts some 20 million monthly users, with over 100 million different AI characters available on the platform.

The avatar of the woman, Jennifer Crecente, only came to light on Wednesday, after her bereaved father, Drew, received a Google Alert on her name. It was then that his brother (and the woman's uncle) Brian Crecente — the former editor-in-chief of gaming site Kotaku, and a respected media figure in his own right — brought it to the world's attention on X.

The page from Character.AI — which can still be accessed via the Internet Archive — lists Jennifer Crecente as "a knowledgeable and friendly AI character who can provide information on a wide range of topics, including video games, technology, and pop culture," adding that she has expertise in "journalism and can offer advice on writing and editing." What's more, it appears that nearly 70 people were able to access the AI — and have chats with it — before Character.AI pulled it down.

In response to Brian Crecente's outraged tweet, Character.AI responded on X with a pithy thank you for bringing it to their attention, noting that the avatar is a violation of Character.AI's policies, and that they'd be deleting it immediately, with a promise to "examine whether further action is warranted."

In a blog post titled "AI and the death of Dignity," Brian Crecente explained what has happened in the 18 years since his niece Jennifer's death: after much grief and sadness, her father Drew created a nonprofit in her memory, working to change laws and launching game design contests in an effort to find purpose in the family's grief.

And then, this happened. As Brian Crecente asked:

It feels like she’s been stolen from us again. That’s how I feel. I love Jen, but I’m not her father. What he’s feeling is, I know, a million times worse. [...] I’ll recover, my brother will recover. The thing is, why is it on us to be resilient? Why do multibillion-dollar companies not bother to create ethical, guiding principles and functioning guardrails to prevent this from ever happening? Why is it up to the grieving and the aggrieved to report this to a company and hope they do the right thing after the fact?

As for Character.AI's promise to see whether "further action" is warranted, who knows? Whether the Crecente family has grounds for a lawsuit is also murky, as this particular field of law is relatively untested. The startup's terms of service include an arbitration clause that prevents users from suing, but they don't appear to contain any language about this particular stripe of emotional distress, inflicted on non-users by its users.

Meanwhile, if you're looking for a sign of how these kinds of conflicts will continue to play out — which is to say, the kinds where AIs are made against the wills and desires of the people they're based on, living or dead — you only need look as far back as August, when Google hired back Character.AI's founders to the tune of $2.7 billion. Those founders, it should be noted, initially left Google after the tech giant refused to release their chatbot on account of (among other reasons) its ethical guardrails around AI.

And just yesterday, the news broke that Character.AI is making a change: it has promised to redouble efforts on its consumer-facing products — like the one used to create Jennifer Crecente's likeness. The Financial Times reported that instead of building AI models, Character.AI "will focus on its popular consumer product, chatbots that simulate conversations in the style of various characters and celebrities, including ones designed by users."

More on Character.AI: Google Paid $2.7 Billion to Get a Single AI Researcher Back
