Giving ADHD Drugs to Kids Has a Long-Term Side Effect That Might Change Their Minds About Taking It

ADHD drugs may have a bizarre side effect for kids who take them while they're still growing — and whether they're worth it is an open question.

As wildly overinvolved parents shell out to give their kids growth hormones to make them taller, some research suggests that putting them on drugs for attention deficit hyperactivity disorder (ADHD) may have the opposite effect.

As the New York Times reports, the scientists behind the Multimodal Treatment of Attention Deficit Hyperactivity Disorder Study, or MTA Study for short, weren't exactly looking for physiological changes in their subjects: a cohort of 579 kids with ADHD who were given either methylphenidate (better known as Ritalin), counseling, a mix of the two, or neither.

Starting in 1994, researchers across the country tracked outcomes for children who were seven to ten years old at the start of the study. After 36 months, they noticed something odd: the children who had been given the popular stimulant seemed to be growing more slowly than their non-medicated counterparts.

The researchers presumed, per their retelling to the NYT, that this "height gap" would close in adolescence. When they followed up nine years after the study began, however, the medicated cohort was still, on average, 1.6 inches shorter than the kids who didn't take Ritalin.

On a certain level, the concern is very shallow. There's nothing wrong with being short, and if a drug can help with a myriad of other symptoms, maybe the risk is worth it.

But that's not the only controversy around prescribing ADHD drugs to kids. The MTA study's biggest takeaway was, troublingly, that the attention benefits of Ritalin seemed to cease after the first year, and that there were no apparent benefits to academic performance.

On top of that, the "height suppression" side effect was enough to give the researchers pause.

In 2017, the MTA study scientists published a follow-up looking into the height gap that tracked the original cohort until they were 25. That height gap remained, per the study, into adulthood. And the findings countered bold academic assertions from just a few years prior claiming that any height suppression from ADHD meds in children would, as the researchers initially presumed, ultimately be undone in adolescence.

Years later, another group of scientists reviewed 18 childhood Ritalin studies and found, similarly to the MTA researchers, that the drug can indeed "result in reduction in height and weight" — though their opinion was that the size of the effect is negligible when compared to the purported benefits of these drugs.

To this day, researchers can't agree as to whether or not stimulants can cause height suppression in children, primarily because the mechanism behind the demonstrated effect remains unknown.

Speaking to the website Health Central in 2022, child psychiatrist and MTA study co-author Laurence Greenhill of the University of California, San Francisco suggested that stimulants' well-known propensity to suppress appetite could be behind the growth differences.

"There could be some lack of nutrition going on that explains this," Greenhill told the website.

"However, the kids aren't malnourished," he added. "They're just growing a little more slowly."

If Ritalin or other stimulants help a child significantly, such a minor height disparity would be worthwhile. But with some of the original MTA study authors now questioning how effective these medical interventions really are, it may behoove parents to think before they put their kids on these pills.

More on ADHD meds: To Fill Your Adderall Prescription Amid Shortage, Try Getting It Filled on This Particular Day of the Month

The post Giving ADHD Drugs to Kids Has a Long-Term Side Effect That Might Change Their Minds About Taking It appeared first on Futurism.


An AI Company Published a Chatbot Based on a Murdered Woman. Her Family Is Outraged.

Character.AI was forced to delete a chatbot avatar of murder victim Jennifer Crecente, but only after her outraged family called it out publicly.

This one's nasty. In one of the more high-profile, macabre incidents involving AI-generated content in recent memory, Character.AI, the chatbot startup founded by ex-Google staffers, was pushed to delete a user-created avatar of an 18-year-old woman who was murdered by her ex-boyfriend in 2006. The chatbot was taken down only after the victim's outraged family drew attention to it on social media.

Character.AI can be used to create chatbot "characters" from any number of sources — be it a user's imagination, a fictional character, or a real person, living or dead. For example, some of the company's bots have been used to mimic Elon Musk or Taylor Swift. Lonely teens have used Character.AI to create friends for themselves, while others have used it to build AI "therapists." Still others have created bots to play out sexually explicit (or even sexually violent) scenarios.

For context: This isn't exactly some dark skunkworks program or a nascent startup with limited reach. Character.AI is a ChatGPT competitor launched in late 2021 and backed by kingmaker VC firm Andreessen Horowitz at a billion-dollar valuation. Per AdWeek, which first reported the story, Character.AI boasts some 20 million monthly users, with over 100 million different AI characters available on the platform.

The avatar of the woman, Jennifer Crecente, only came to light on Wednesday, after her bereaved father, Drew, received a Google Alert on her name. It was then that his brother (and Jennifer's uncle) Brian Crecente — the former editor-in-chief of gaming site Kotaku and a respected media figure in his own right — brought it to the world's attention on X.

The page from Character.AI — which can still be accessed via the Internet Archive — lists Jennifer Crecente as "a knowledgeable and friendly AI character who can provide information on a wide range of topics, including video games, technology, and pop culture," and touts her expertise in "journalism and can offer advice on writing and editing." What's more, it appears nearly 70 people were able to access the AI — and chat with it — before Character.AI pulled it down.

In response to Brian Crecente's outraged tweet, Character.AI replied on X with a pithy thank-you for bringing it to their attention, noting that the avatar violated the company's policies and would be deleted immediately, with a promise to "examine whether further action is warranted."

In a blog post titled "AI and the death of Dignity," Brian Crecente recounted the 18 years since his niece Jennifer's death: after much grief and sadness, her father Drew created a nonprofit in her memory, working to change laws and running game design contests to honor her — an effort to find purpose in the loss.

And then, this happened. As Brian Crecente asked:

It feels like she’s been stolen from us again. That’s how I feel. I love Jen, but I’m not her father. What he’s feeling is, I know, a million times worse. [...] I’ll recover, my brother will recover. The thing is, why is it on us to be resilient? Why do multibillion-dollar companies not bother to create ethical, guiding principles and functioning guardrails to prevent this from ever happening? Why is it up to the grieving and the aggrieved to report this to a company and hope they do the right thing after the fact?

As for Character.AI's promise to see whether "further action" is warranted, who knows? Whether the Crecente family has grounds for a lawsuit is also murky, as this particular field of law is relatively untested. The startup's terms of service contain an arbitration clause that prevents users from suing, but there doesn't seem to be any language covering this particular stripe of emotional distress, inflicted by its users on non-users.

Meanwhile, if you're looking for a sign of how these kinds of conflicts will continue to play out — which is to say, the kinds where AIs are made against the wills and desires of the people they're based on, living or dead — you need only look as far back as August, when Google hired back Character.AI's founders to the tune of $2.7 billion. These are founders who, it should be noted, initially left Google after the tech giant refused to release their chatbot on account of (among other reasons) its ethical guardrails around AI.

And just yesterday, the news broke that Character.AI is making a change. They've promised to redouble efforts on their consumer-facing products — like the one used to create Jennifer Crecente's likeness. The Financial Times reported that instead of building AI models, Character.AI "will focus on its popular consumer product, chatbots that simulate conversations in the style of various characters and celebrities, including ones designed by users."

More on Character.AI: Google Paid $2.7 Billion to Get a Single AI Researcher Back

The post An AI Company Published a Chatbot Based on a Murdered Woman. Her Family Is Outraged. appeared first on Futurism.
