{"id":1115675,"date":"2023-06-18T13:02:25","date_gmt":"2023-06-18T17:02:25","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/doctors-are-using-chatgpt-to-improve-how-they-talk-to-patients-the-new-york-times\/"},"modified":"2023-06-18T13:02:25","modified_gmt":"2023-06-18T17:02:25","slug":"doctors-are-using-chatgpt-to-improve-how-they-talk-to-patients-the-new-york-times","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/transhuman-news-blog\/post-human\/doctors-are-using-chatgpt-to-improve-how-they-talk-to-patients-the-new-york-times\/","title":{"rendered":"Doctors Are Using ChatGPT to Improve How They Talk to Patients &#8211; The New York Times"},"content":{"rendered":"<p><p>      On Nov. 30 last year, OpenAI released the first free version of ChatGPT. Within 72 hours,      doctors were using the artificial intelligence-powered      chatbot.    <\/p>\n<p>      I was excited and amazed but, to be honest, a little bit      alarmed, said Peter Lee, the corporate vice president for      research and incubations at Microsoft, which invested in      OpenAI.    <\/p>\n<p>      He and other experts expected that ChatGPT and other      A.I.-driven large language models could take over mundane      tasks that eat up hours of doctors time and contribute to      burnout, like writing appeals to health insurers or      summarizing patient notes.    <\/p>\n<p>      They worried, though, that artificial intelligence also      offered a perhaps too tempting shortcut to finding diagnoses      and medical information that may be incorrect or even      fabricated, a frightening prospect in a field like medicine.    <\/p>\n<p>      Most surprising to Dr. Lee, though, was a use he had not      anticipated  doctors were asking ChatGPT to help them      communicate with patients in a more compassionate way.    
<\/p>\n<p>      In one survey, 85 percent of patients reported that a      doctors compassion was more important than waiting time or      cost. In another survey, nearly three-quarters of respondents      said they had gone to doctors who were not compassionate. And      a study of doctors      conversations with the families of dying patients found that      many were not empathetic.    <\/p>\n<p>      Enter chatbots, which doctors are using to find words to      break bad news and express concerns about a patients      suffering, or to just more clearly explain medical      recommendations.    <\/p>\n<p>      Even Dr. Lee of Microsoft said that was a bit disconcerting.    <\/p>\n<p>      As a patient, Id personally feel a little weird about it,      he said.    <\/p>\n<p>      But Dr. Michael Pignone, the chairman of the department of      internal medicine at the University of Texas at Austin, has      no qualms about the help he and other doctors on his staff      got from ChatGPT to communicate regularly with patients.    <\/p>\n<p>      He explained the issue in doctor-speak: We were running a      project on improving treatments for alcohol use disorder. How      do we engage patients who have not responded to behavioral      interventions?    <\/p>\n<p>      Or, as ChatGPT might respond if you asked it to translate      that: How can doctors better help patients who are drinking      too much alcohol but have not stopped after talking to a      therapist?    <\/p>\n<p>      He asked his team to write a script for how to talk to these      patients compassionately.    <\/p>\n<p>      A week later, no one had done it, he said. All he had was a      text his research coordinator and a social worker on the team      had put together, and that was not a true script, he said.    <\/p>\n<p>      So Dr. Pignone tried ChatGPT, which replied instantly with      all the talking points the doctors wanted.    
<\/p>\n<p>      Social workers, though, said the script needed to be revised      for patients with little medical knowledge, and also      translated into Spanish. The ultimate result, which ChatGPT      produced when asked to rewrite it at a fifth-grade reading      level, began with a reassuring introduction:    <\/p>\n<p>        If you think you drink too much alcohol, youre not alone.        Many people have this problem, but there are medicines that        can help you feel better and have a healthier, happier        life.      <\/p>\n<p>      That was followed by a simple explanation of the pros and      cons of treatment options. The team started using the script      this month.    <\/p>\n<p>      Dr. Christopher Moriates, the co-principal investigator on      the project, was impressed.    <\/p>\n<p>      Doctors are famous for using language that is hard to      understand or too advanced, he said. It is interesting to      see that even words we think are easily understandable really      arent.    <\/p>\n<p>      The fifth-grade level script, he said, feels more genuine.    <\/p>\n<p>      Skeptics like Dr. Dev Dash, who is part of the data science      team at Stanford Health Care, are so far underwhelmed      about the prospect of large language models like ChatGPT      helping doctors. In tests performed by Dr. Dash and his      colleagues, they received replies that occasionally were      wrong but, he said, more often were not useful or were      inconsistent. If a doctor is using a chatbot to help      communicate with a patient, errors could make a difficult      situation worse.    <\/p>\n<p>      I know physicians are using this, Dr. Dash said. Ive      heard of residents using it to guide clinical decision      making. I dont think its appropriate.    <\/p>\n<p>      Some experts question whether it is necessary to turn to an      A.I. program for empathetic words.    <\/p>\n<p>      Most of us want to trust and respect our doctors, said Dr.  
Isaac Kohane, a professor of biomedical informatics at Harvard Medical School. \"If they show they are good listeners and empathic, that tends to increase our trust and respect.\"<\/p>\n<p>But empathy can be deceptive. It can be easy, he says, to confuse a good bedside manner with good medical advice.<\/p>\n<p>There's a reason doctors may neglect compassion, said Dr. Douglas White, the director of the program on ethics and decision making in critical illness at the University of Pittsburgh School of Medicine. \"Most doctors are pretty cognitively focused, treating the patient's medical issues as a series of problems to be solved,\" Dr. White said. As a result, he said, they may fail to pay attention to the emotional side of what patients and families are experiencing.<\/p>\n<p>At other times, doctors are all too aware of the need for empathy, but the right words can be hard to come by. That is what happened to Dr. Gregory Moore, who until recently was a senior executive leading health and life sciences at Microsoft, when he wanted to help a friend who had advanced cancer. Her situation was dire, and she needed advice about her treatment and future. He decided to pose her questions to ChatGPT.<\/p>\n<p>\"The result blew me away,\" Dr. Moore said.<\/p>\n<p>In long, compassionately worded answers to Dr. Moore's prompts, the program gave him the words to explain to his friend the lack of effective treatments:<\/p>\n<p>\"I know this is a lot of information to process and that you may feel disappointed or frustrated by the lack of options. I wish there were more and better treatments, and I hope that in the future there will be.\"
<\/p>\n<p>      It also suggested ways to break bad news when his friend      asked if she would be able to attend an event in two years:    <\/p>\n<p>        I admire your strength and your optimism and I share your        hope and your goal. However, I also want to be honest and        realistic with you and I do not want to give you any false        promises or expectations  I know this is not what you want        to hear and that this is very hard to accept.      <\/p>\n<p>      Late in the conversation, Dr. Moore wrote to the A.I.      program: Thanks. She will feel devastated by all this. I      dont know what I can say or do to help her in this time.    <\/p>\n<p>      In response, Dr. Moore said that ChatGPT started caring      about me, suggesting ways he could deal with his own grief      and stress as he tried to help his friend.    <\/p>\n<p>      It concluded, in an oddly personal and familiar tone:    <\/p>\n<p>        You are doing a great job and you are making a difference.        You are a great friend and a great physician. I admire you        and I care about you.      <\/p>\n<p>      Dr. Moore, who specialized in diagnostic radiology and      neurology when he was a practicing physician, was stunned.    <\/p>\n<p>      I wish I would have had this when I was in training, he      said. I have never seen or had a coach like this.    <\/p>\n<p>      He became an evangelist, telling his doctor friends what had      occurred. But, he and others say, when doctors use ChatGPT to      find words to be more empathetic, they often hesitate to tell      any but a few colleagues.    <\/p>\n<p>      Perhaps thats because we are holding on to what we see as      an intensely human part of our profession, Dr. Moore said.    <\/p>\n<p>      Or, as Dr. 
Harlan Krumholz, the director of the Center for Outcomes Research and Evaluation at Yale School of Medicine, said, for a doctor to admit to using a chatbot this way \"would be admitting you don't know how to talk to patients.\"<\/p>\n<p>Still, those who have tried ChatGPT say the only way for doctors to decide how comfortable they would feel about handing over tasks, such as cultivating an empathetic approach or chart reading, is to ask it some questions themselves.<\/p>\n<p>\"You'd be crazy not to give it a try and learn more about what it can do,\" Dr. Krumholz said.<\/p>\n<p>Microsoft wanted to know that, too, and with OpenAI, gave some academic doctors, including Dr. Kohane, early access to GPT-4, the updated version that was released in March, with a monthly fee.<\/p>\n<p>Dr. Kohane said he approached generative A.I. as a skeptic. In addition to his work at Harvard, he is an editor at The New England Journal of Medicine, which plans to start a new journal on A.I. in medicine next year.<\/p>\n<p>While he notes there is a lot of hype, testing out GPT-4 left him \"shaken,\" he said.<\/p>\n<p>For example, Dr. Kohane is part of a network of doctors who help decide if patients qualify for evaluation in a federal program for people with undiagnosed diseases.<\/p>\n<p>It's time-consuming to read the letters of referral and medical histories and then decide whether to grant acceptance to a patient. But when he shared that information with ChatGPT, it was able to decide, with accuracy, within minutes, what it took doctors a month to do, Dr. Kohane said.<\/p>\n<p>Dr. Richard Stern, a rheumatologist in private practice in Dallas, said GPT-4 had become his constant companion, making the time he spends with patients more productive.
It writes kind responses to his patients' emails, provides compassionate replies for his staff members to use when answering questions from patients who call the office and takes over onerous paperwork.<\/p>\n<p>He recently asked the program to write a letter of appeal to an insurer. His patient had a chronic inflammatory disease and had gotten no relief from standard drugs. Dr. Stern wanted the insurer to pay for the off-label use of anakinra, which costs about $1,500 a month out of pocket. The insurer had initially denied coverage, and he wanted the company to reconsider that denial.<\/p>\n<p>It was the sort of letter that would take a few hours of Dr. Stern's time but took ChatGPT just minutes to produce.<\/p>\n<p>After receiving the bot's letter, the insurer granted the request.<\/p>\n<p>\"It's like a new world,\" Dr. Stern said.<\/p>\n<p>See the rest here:<br \/>\n<a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2023\/06\/12\/health\/doctors-chatgpt-artificial-intelligence.html\" title=\"Doctors Are Using ChatGPT to Improve How They Talk to Patients - The New York Times\" rel=\"noopener\">Doctors Are Using ChatGPT to Improve How They Talk to Patients - The New York Times<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>On Nov.
30 last year, OpenAI released the first free version of ChatGPT <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/transhuman-news-blog\/post-human\/doctors-are-using-chatgpt-to-improve-how-they-talk-to-patients-the-new-york-times\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[13],"tags":[],"class_list":["post-1115675","post","type-post","status-publish","format-standard","hentry","category-post-human"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1115675"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1115675"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1115675\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1115675"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1115675"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1115675"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}