The messy morality of letting AI make life-and-death decisions – MIT Technology Review

By the 2000s, an algorithm had been developed in the US to identify recipients for donated kidneys. But some people were unhappy with how the algorithm had been designed. In 2007, Clive Grawe, a kidney transplant candidate from Los Angeles, told a room full of medical experts that their algorithm was biased against older people like him. The algorithm had been designed to allocate kidneys in a way that maximized years of life saved. This favored younger, wealthier, and whiter patients, Grawe and other patients argued.

Such bias in algorithms is common. What's less common is for the designers of those algorithms to agree that there is a problem. After years of consultation with laypeople like Grawe, the designers found a less biased way to maximize the number of years saved: among other things, by considering overall health in addition to age. One key change was that the majority of donors, who are often people who have died young, would no longer be matched only to recipients in the same age bracket. Some of those kidneys could now go to older people if they were otherwise healthy. As with Scribner's committee, the algorithm still wouldn't make decisions that everyone would agree with. But the process by which it was developed is harder to fault.
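To make that shift concrete, here is a minimal sketch in Python of the two objectives. Everything in it is invented for illustration: the candidate fields, the scoring formulas, and the health_weight value are hypothetical, and the real US allocation system weighs far more factors. The point is only that adding a health term can change who comes out on top.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    expected_life_years_gained: float  # projected survival benefit from a transplant
    health_score: float                # overall health, 0.0 (frail) to 1.0 (excellent)

def score_old(c: Candidate) -> float:
    # Original objective: rank purely by projected life-years saved,
    # which systematically favors younger candidates.
    return c.expected_life_years_gained

def score_revised(c: Candidate, health_weight: float = 20.0) -> float:
    # Revised objective (invented weights): projected life-years plus a
    # bonus for overall health, so a fit older candidate can outrank a
    # frailer younger one.
    return c.expected_life_years_gained + health_weight * c.health_score

candidates = [
    Candidate("younger, frail", expected_life_years_gained=20.0, health_score=0.4),
    Candidate("older, healthy", expected_life_years_gained=12.0, health_score=0.9),
]

print(max(candidates, key=score_old).name)      # "younger, frail" (20.0 vs 12.0)
print(max(candidates, key=score_revised).name)  # "older, healthy" (28.0 vs 30.0)
```

Where to set a weight like that is itself a value judgment, not a technical one, which is exactly why the consultation with patients like Grawe mattered.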

"I didn't want to sit there and give the injection. If you want it, you press the button."

Nitschke, too, is asking hard questions.

A former doctor who burned his medical license after a years-long legal dispute with the Australian Medical Board, Nitschke has the distinction of being the first person to legally administer a voluntary lethal injection to another human. In the nine months between July 1996, when the Northern Territory of Australia brought in a law that legalized euthanasia, and March 1997, when Australia's federal government overturned it, Nitschke helped four of his patients to kill themselves.

The first, a 66-year-old carpenter named Bob Dent, who had suffered from prostate cancer for five years, explained his decision in an open letter: "If I were to keep a pet animal in the same condition I am in, I would be prosecuted."

Nitschke wanted to support his patients' decisions. Even so, he was uncomfortable with the role they were asking him to play. So he made a machine to take his place. "I didn't want to sit there and give the injection," he says. "If you want it, you press the button."

The machine wasn't much to look at: it was essentially a laptop hooked up to a syringe. But it achieved its purpose. The Sarco is an iteration of that original device, which was later acquired by the Science Museum in London. Nitschke hopes an algorithm that can carry out a psychiatric assessment will be the next step.

But there's a good chance those hopes will be dashed. Creating a program that can assess someone's mental health is an unsolved problem, and a controversial one. As Nitschke himself notes, doctors do not agree on what it means for a person of sound mind to choose to die. "You can get a dozen different answers from a dozen different psychiatrists," he says. In other words, there is no common ground on which an algorithm could even be built.
