Giving ADHD Drugs to Kids Has a Long-Term Side Effect That Might Change Their Minds About Taking It

ADHD drugs may have a strange side effect for kids who take them while they're growing, and it's an open question whether the benefits are worth it.

As wildly overinvolved parents shell out to give their kids growth hormones to make them taller, some research suggests that putting them on drugs for attention deficit hyperactivity disorder (ADHD) may have the opposite effect.

As the New York Times reports, the scientists behind the Multimodal Treatment of Attention Deficit Hyperactivity Disorder Study, or MTA Study for short, weren't exactly looking for physiological changes in their subjects: a cohort of 579 kids with ADHD, some of whom were given methylphenidate (better known as Ritalin), counseling, a mix of the two, or neither.

Beginning in 1994, researchers across the country tracked outcomes for children who were seven to ten years old at the start of the study. After 36 months, the researchers noticed something odd: the children who had been given the popular stimulant seemed to be growing more slowly than their non-medicated counterparts.

The researchers presumed, per their retelling to the NYT, that this "height gap" would close in adolescence. When they followed up nine years after the study began, however, the medicated cohort was still, on average, 1.6 inches shorter than the kids who didn't take Ritalin.

On one level, the concern is superficial: there's nothing wrong with being short, and if a drug can help with myriad other symptoms, maybe the risk is worth it.

But that's not the only controversy around prescribing ADHD drugs to kids. The MTA study's biggest takeaway was, troublingly, that the attention benefits of Ritalin seemed to cease after the first year, and that there were no apparent benefits to academic performance.

On top of that, the "height suppression" side effect was enough to give the researchers pause.

In 2017, the MTA study scientists published a follow-up looking into the height gap that tracked the original cohort until they were 25. That height gap remained, per the study, into adulthood. And the findings countered bold academic assertions from just a few years prior claiming that any height suppression from ADHD meds in children would, as the researchers initially presumed, ultimately be undone in adolescence.

Years later, another group of scientists reviewed 18 childhood Ritalin studies and found, similarly to the MTA researchers, that the drug can indeed "result in reduction in height and weight" — though their opinion was that the size of the effect is negligible when compared to the purported benefits of these drugs.

To this day, researchers can't agree on whether stimulants cause height suppression in children, primarily because the mechanism behind the observed effect remains unknown.

Speaking to the website Health Central in 2022, child psychiatrist and MTA study co-author Laurence Greenhill of the University of California, San Francisco suggested that stimulant medications' well-known propensity to suppress appetite could be behind the growth differences.

"There could be some lack of nutrition going on that explains this," Greenhill told the website.

"However, the kids aren't malnourished," he countered. "They're just growing a little more slowly."

If Ritalin or other stimulants help a child significantly, such a minor height disparity would be worthwhile. But with some of the original MTA study authors now questioning how effective these medical interventions really are, it may behoove parents to think before they put their kids on these pills.

More on ADHD meds: To Fill Your Adderall Prescription Amid Shortage, Try Getting It Filled on This Particular Day of the Month

The post Giving ADHD Drugs to Kids Has a Long-Term Side Effect That Might Change Their Minds About Taking It appeared first on Futurism.


A Mother Says an AI Startup's Chatbot Drove Her Son to Suicide. Its Response: the First Amendment Protects "Speech Allegedly Resulting in Suicide"

Character.AI says it's protected against liability for "allegedly harmful speech, including speech allegedly resulting in suicide."

Content warning: this story discusses suicide, self-harm, sexual abuse, eating disorders and other disturbing topics.

In October of last year, a Google-backed startup called Character.AI was hit by a lawsuit making an eyebrow-raising claim: that one of its chatbots had driven a 14-year-old high school student to suicide.

As Futurism's reporting found afterward, the behavior of Character.AI's chatbots can indeed be deeply alarming — and clearly inappropriate for underage users — in ways that both corroborate and augment the suit's concerns. Among other things, we found chatbots on the service designed to roleplay scenarios of suicidal ideation, self-harm, school shootings, and child sexual abuse, as well as to encourage eating disorders. (The company has responded to our reporting piecemeal, by taking down individual bots we flagged, but it's still trivially easy to find nauseating content on its platform.)

Now, Character.AI — which received a $2.7 billion cash injection from tech giant Google last year — has responded to the suit, brought by the boy's mother, in a motion to dismiss. Its defense? Basically, that the First Amendment protects it against liability for "allegedly harmful speech, including speech allegedly resulting in suicide."

In TechCrunch's analysis, the motion to dismiss may not be successful, but it likely provides a glimpse of Character.AI's planned defense. (It's now facing an additional suit, brought by more parents who say their children were harmed by interactions with the site's bots.)

Essentially, Character.AI's legal team is saying that holding it accountable for the actions of its chatbots would restrict its users' right to free speech — a claim that it connects to prior attempts to crack down on other controversial media like violent video games and music.

"Like earlier dismissed suits about music, movies, television, and video games," reads the motion, the case "squarely alleges that a user was harmed by speech and seeks sweeping relief that would restrict the public’s right to receive protected speech."

Of course, there are key differences that the court will have to contend with. The output of Character.AI's bots isn't a finite work created by human artists, like Grand Theft Auto or an album by Judas Priest, both of which have been targets of legal action in the past. Instead, it's an AI system that users engage to produce a limitless variety of conversations.

A Grand Theft Auto game might contain reprehensible material, in other words, but it was created by human artists and developers to express an artistic vision; a service like Character.AI is a statistical model that can output more or less anything based on its training data, far outside the control of its human creators.

In a bigger sense, the motion illustrates a tension for AI outfits like Character.AI: unless the AI industry can find a way to reliably control its tech — a quest that's so far eluded even its most powerful players — some of the interactions users have with its products are going to be abhorrent, either by the users' design or when the chatbots inevitably go off the rails.

To its credit, Character.AI has made changes in response to the lawsuits and our reporting, pulling down offensive chatbots and tweaking its tech in an effort to serve less objectionable material to underage users.

So while it's actively taking steps to get its sometimes-unconscionable AI under control, it's also saying that any legal attempts to curtail its tech fall afoul of the First Amendment.

It's worth asking where the line actually falls. A pedophile convicted of sex crimes against children can't use the excuse that they were simply exercising their right to free speech; Character.AI is actively hosting chatbots designed to prey on users who say they're underage. At some point, the law presumably has to step in.

Add it all up, and the company is walking a delicate line: actively catering to underage users — and publicly expressing concern for their wellbeing — while vociferously fighting any legal attempt to regulate its AI's behavior toward them.

"C.AI cares deeply about the wellbeing of its users and extends its sincerest sympathies to Plaintiff for the tragic death of her son," reads the motion. "But the relief Plaintiff seeks would impose liability for expressive content and violate the rights of millions of C.AI users to engage in and receive protected speech."

More on Character.AI: Embattled Character.AI Hiring Trust and Safety Staff


Texas Attorney General Investigating Google-Backed AI Startup Accused of Inappropriate Interactions With Minors

Texas Attorney General Ken Paxton is investigating Google-backed AI chatbot startup Character.AI over its privacy and safety practices.

Texas Attorney General Ken Paxton has announced that he's launched an investigation into the Google-backed AI chatbot startup Character.AI over its privacy and safety practices for minors.

The news comes just days after two Texas families sued the startup and its financial backer Google, alleging that the platform's AI characters sexually and emotionally abused their school-aged children. According to the lawsuit, the chatbots encouraged the children to engage in self-harm and violence.

"Technology companies are on notice that my office is vigorously enforcing Texas’s strong data privacy laws," said Paxton in a statement. "These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm."

According to Paxton's office, the companies could be in violation of the Securing Children Online through Parental Empowerment (SCOPE) Act, which requires companies to provide extensive parental controls to protect the privacy of minor users, and the Texas Data Privacy and Security Act (TDPSA), which "imposes strict notice and consent requirements on companies that collect and use minors’ personal data."

"We are currently reviewing the Attorney General's announcement," a Character.AI spokesperson told us. "As a company, we take the safety of our users very seriously. We welcome working with regulators and have recently announced we are launching some of the features referenced in the release, including parental controls."

Indeed, on Thursday Character.AI promised to prioritize "teen safety" by launching a separate AI model "specifically for our teen users."

The company also promised to roll out "parental controls" that will give "parents insight into their child's experience on Character.AI."

Whether its actions will be enough to stem a tide of highly problematic chatbots being hosted on its platform remains to be seen. Futurism has previously identified chatbots on the platform devoted to themes of pedophilia, eating disorders, self-harm, and suicide.

Alongside Character.AI, Paxton is also launching separate investigations into fourteen other companies ranging from Reddit to Instagram to Discord.

How far Paxton's newly-launched investigation will go is unclear. Paxton has repeatedly launched investigations into digital platforms, accusing them of violating safety and privacy laws. In October, he sued TikTok for sharing minors' personal data.

At the time, TikTok denied the allegations, arguing that it offers "robust safeguards for teens and parents, including Family Pairing, all of which are publicly available."

Parts of the SCOPE Act were also recently blocked by a Texas judge, who sided with tech groups that argued it unlawfully restricts free expression.

Paxton also subpoenaed 404 Media in October, demanding that the publication hand over confidential information about its wholly unrelated reporting on a lawsuit against Google.

The attorney general has a colorful past himself. Last year, the Texas House impeached Paxton after its investigators found he took bribes from a real estate investor, exploited the powers of his office, and fired staff members who reported his misconduct, according to the Texas Tribune.

After Paxton was suspended for roughly four months, the Texas Senate acquitted him on all articles of impeachment, allowing him to return to office.

Paxton was also indicted in 2015 on state securities fraud charges; the charges were dropped in March after he agreed to pay nearly $300,000 in restitution.

Besides suing digital platforms, Paxton also sued manufacturers 3M and DuPont for misleading consumers about the safety of their products, and Austin's largest homeless service provider for allegedly being a "common nuisance" in the surrounding neighborhood.

More on Character.AI: Google-Backed AI Startup Announces Plans to Stop Grooming Teenagers
