Artificial intelligence is infiltrating health care. We shouldn't let it make all the decisions. – MIT Technology Review

Posted: April 27, 2023 at 2:53 pm

This article is from The Checkup, MIT Technology Review's weekly biotech newsletter. To receive it in your inbox every Thursday, sign up here.

Would you trust medical advice generated by artificial intelligence? It's a question I've been thinking over this week, in view of yet more headlines proclaiming that AI technologies can diagnose a range of diseases. The implication is often that they're better, faster, and cheaper than medically trained professionals.

Many of these technologies have well-known problems. They're trained on limited or biased data, and they often don't work as well for women and people of color as they do for white men. Not only that, but some of the data these systems are trained on are downright wrong.

There's another problem. As these technologies begin to infiltrate health-care settings, researchers say we're seeing a rise in what's known as AI paternalism. Paternalism in medicine has been problematic since the dawn of the profession. But now, doctors may be inclined to trust AI at the expense of a patient's own lived experiences, as well as their own clinical judgment.

AI is already being used in health care. Some hospitals use the technology to help triage patients. Some use it to aid diagnosis, or to develop treatment plans. But the true extent of AI adoption is unclear, says Sandra Wachter, a professor of technology and regulation at the University of Oxford in the UK.

"Sometimes we don't actually know what kinds of systems are being used," says Wachter. But we do know that their adoption is likely to increase as the technology improves and as health-care systems look for ways to reduce costs, she says.

Research suggests that doctors may already be putting a lot of faith in these technologies. In a study published a few years ago, oncologists were asked to compare their diagnoses of skin cancer with the conclusions of an AI system. Many of them accepted the AI's results, even when those results contradicted their own clinical opinion.

There's a very real risk that we'll come to rely on these technologies to a greater extent than we should. And here's where paternalism could come in.

Paternalism is "captured by the idiom 'the doctor knows best,'" write Melissa McCradden and Roxanne Kirsch of the Hospital for Sick Children in Ontario, Canada, in a recent scientific journal paper. The idea is that medical training makes a doctor the best person to make a decision for the person being treated, regardless of that person's feelings, beliefs, culture, and anything else that might influence the choices any of us make.

Paternalism can be recapitulated when AI is positioned as "the highest form of evidence, replacing the all-knowing doctor with the all-knowing AI," McCradden and Kirsch continue. They say there is "a rising trend toward algorithmic paternalism." This would be problematic for a whole host of reasons.

For a start, as mentioned above, AI isn't infallible. These technologies are trained on historical data sets that come with their own flaws. "You're not sending an algorithm to med school and teaching it how to learn about the human body and illnesses," says Wachter.

As a result, AI "cannot understand, only predict," write McCradden and Kirsch. An AI could be trained to learn which patterns in skin cell biopsies have been associated with a cancer diagnosis in the past, for example. But the doctors who made those past diagnoses and collected that data might have been more likely to miss cases in people of color.

And identifying past trends won't necessarily tell doctors everything they need to know about how a patient's treatment should continue. Today, doctors and patients should collaborate in treatment decisions. Advances in AI use shouldn't diminish patient autonomy.

So how can we prevent that from happening? One potential solution involves designing new technologies that are trained on better data. An algorithm could be trained on information about the beliefs and wishes of various communities, as well as diverse biological data, for instance. Before we can do that, we need to actually go out and collect that data: an expensive endeavor that probably won't appeal to those who are looking to use AI to cut costs, says Wachter.

Designers of these AI systems should carefully consider the needs of the people who will be assessed by them. And they need to bear in mind that technologies that work for some groups won't necessarily work for others, whether that's because of their biology or their beliefs. "Humans are not the same everywhere," says Wachter.

The best course of action might be to use these new technologies in the same way we use well-established ones. X-rays and MRIs are used to help inform a diagnosis, alongside other health information. People should be able to choose whether they want a scan, and what they would like to do with their results. We can make use of AI without ceding our autonomy to it.

Philip Nitschke, otherwise known as Dr. Death, is developing an AI that can help people end their own lives. My colleague Will Douglas Heaven explored the messy morality of letting AI make life-and-death decisions in this feature from the mortality issue of our magazine.

In 2020, hundreds of AI tools were developed to aid the diagnosis of covid-19 or predict how severe specific cases would be. None of them worked, as Will reported a couple of years ago.

Will has also covered how AI that works really well in a lab setting can fail in the real world.

My colleague Melissa Heikkilä has explored whether AI systems need to come with cigarette-pack-style health warnings in a recent edition of her newsletter, The Algorithm.

Tech companies are keen to describe their AI tools as ethical. Karen Hao put together a list of the top 50 or so words companies can use to show they care without incriminating themselves.

Scientists have used an imaging technique to reveal the long-hidden contents of six sealed ancient Egyptian animal coffins. They found broken bones, a lizard skull, and bits of fabric. (Scientific Reports)

Genetic analyses can suggest targeted treatments for people with colorectal cancer, but people with African ancestry are less likely than those with European ancestry to have the mutations these treatments target. The finding highlights how important it is for researchers to use data from diverse populations. (American Association for Cancer Research)

Sri Lanka is considering exporting 100,000 endemic monkeys to a private company in China. A cabinet spokesperson has said the monkeys are destined for Chinese zoos, but conservationists are worried that the animals will end up in research labs. (Reuters)

Would you want to have electrodes inserted into your brain if they could help treat dementia? Most people who have a known risk of developing the disease seem to be open to the possibility, according to a small study. (Brain Stimulation)

A gene therapy for a devastating disease that affects the muscles of some young boys could be approved following a decision due in the coming weeks, despite not having completed clinical testing. (STAT)
