To understand AI advancements in health care, there are two storylines we must follow – Diginomica

If ever there were an industry poised to reap the benefits of AI, it is healthcare. The case for adopting this technology to make medicine better is obvious. With that adoption, however, comes a slew of ethical issues.

Let's start with some numbers: In 2018, the US spent $3.65 trillion on healthcare. That works out to $11,121 per capita, a 4.4% increase over 2017. In addition:

Per capita spending in other Western economies was 50% or less of the US figure, with the exception of Switzerland at about 80%. The worse news is that the US has slipped to 36th in the world in quality of healthcare. (The above data is from the Centers for Medicare & Medicaid Services and the CIA World Factbook.)
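
As a quick sanity check, the per capita figure follows from dividing total spending by population. A minimal sketch, assuming a 2018 US population of roughly 328 million (a figure not stated in the article):

```python
# Back-of-the-envelope check of the per capita figure cited above.
# Assumption: a 2018 US population of roughly 328 million.
total_spend = 3.65e12   # 2018 US healthcare spending, USD
population = 328.2e6    # assumed 2018 US population

per_capita = total_spend / population
print(f"Per capita spend: ${per_capita:,.0f}")  # ~ $11,121

# Rough comparison points implied by the article (shares of the US figure):
print(f"Switzerland (~80%): ${0.80 * per_capita:,.0f}")
print(f"Other Western economies (<=50%): ${0.50 * per_capita:,.0f} or less")
```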

Another lesser-known statistic is the magnitude of iatrogenic disease. From Wikipedia: an iatrogenic disorder occurs when the deleterious effects of the therapeutic or diagnostic regimen cause pathology independent of the condition for which the regimen is advised.

In other words, patients are harmed by medical practice itself. According to a Johns Hopkins study, 251,454 deaths per year stem from medical error, making it the third leading cause of death in the US, just behind heart disease and cancer.

All industries face the problem of deciding where to apply AI. In an article in Healthcare IT News, the advice for the healthcare industry was: while AI may have the potential to discover new treatment methods, the report finds strongly entrenched ways of working in the healthcare industry that are resistant to change. The authors warn that simply adding AI applications to a fragmented system will not create sustainable change. Good advice for any industry.

Writing for Nature partner journal Digital Medicine, Trishan Panch, Heather Mattie and Leo Anthony Celi outline the obstacles healthcare faces in implementing AI solutions:

- Data is balkanized along organizational boundaries, severely constraining the ability to provide services to patients across a care continuum within one organization or across organizations.
- However, the inconvenient truth is that at present the algorithms that feature prominently in research literature are in fact not, for the most part, executable at the frontlines of clinical practice.
- AI innovations by themselves do not re-engineer the incentives that support existing ways of working.
- Most healthcare organizations lack the data infrastructure required to collect the data needed to optimally train algorithms to (a) fit the local population and/or the local practice patterns.

What industry isn't facing the same obstacles? Eric Topol is a cardiologist and geneticist, Executive Vice-President of Scripps Research, and founder of a new medical school. His current book, just out, is Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. His previous books are The Creative Destruction of Medicine (a take-off on Schumpeter's "creative destruction" in economics) and The Patient Will See You Now. Dr. Topol sees a more hopeful future for AI, but also cautions about its drawbacks and impediments.

The common thread running through Topol's books is that medicine is a mess and technology will save it. I admire Topol for taking a stand on the state of medicine, but his breathless enthusiasm for technology overlooks how difficult it is to effect change in a $3+ trillion industry.

As an example of where AI shows promise in medicine, Topol explains that machine learning of mammography images from more than 1,000 patients, coupled with biopsy results indicating risk of cancer, showed that more than 30 percent of breast surgeries could be avoided. But will doctors comply? Eye disease and radiology are two medical areas getting priority, with extensive research and deep-learning algorithm development. The problem is that not many physicians are practicing deep medicine; instead, doctors can be overconfident, condescending, arrogant, or simply uncaring.
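
To make the mammography example concrete, here is a minimal sketch of that general setup: a classifier trained on image-derived features against biopsy labels, with very low-risk cases flagged as candidates for avoiding surgery. Everything here, from the synthetic data to the 5% risk threshold, is an illustrative assumption, not the study's actual pipeline.

```python
# Sketch: train a classifier on imaging-derived features labeled by biopsy
# outcome, then flag low-risk cases where surgery might be avoided.
# All data here is synthetic; this is not the published study's method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 8))  # stand-in for mammography-derived features
# 1 = malignant on biopsy; synthetic labels with some noise
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0.7).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Treat cases with very low predicted malignancy risk as surgery-avoidable.
risk = model.predict_proba(X_test)[:, 1]
avoidable = risk < 0.05  # illustrative risk threshold, not the study's
print(f"Flagged as potentially avoidable: {avoidable.mean():.0%} of test cases")
```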

The industry is moving ahead with AI drug discovery and AI mental health (Instagram filters apparently say much about one's mental state), but Topol points out the deep liabilities AI brings. "Too commonly we ascribe the capability of machines to 'read' scans or slides," Topol writes, "when they really can't read." Machines' lack of understanding cannot be emphasized enough. Recognition is not understanding; there is zero context.

A rather horrifying example of those deep liabilities is the "Dying Algorithm," a digital neural network 18 layers thick, based on the electronic health records of nearly 160,000 people. It was, he writes, able to predict the time until death on a test population of 40,000 patient records with remarkable accuracy. Google and a trio of medical centers are now working with 47 billion data points to predict whether a patient will die, the length of a hospital stay, unexpected hospital admissions, and final discharge diagnoses.

This is a perfect example of something that could be extremely useful, but is entirely, thoroughly unethical.
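
For a structural sense of what an "18 layers thick" network over EHR records looks like, here is a minimal sketch in PyTorch on synthetic data. The layer width, feature count, and training setup are all assumptions; the actual model and data are not reproduced here.

```python
# Sketch of a deep feed-forward network over tabular EHR features that
# outputs a mortality-risk logit. Synthetic data; illustrative only.
import torch
import torch.nn as nn

n_features = 64              # assumed size of an encoded EHR record
width = 256                  # assumed hidden-layer width
layers = []
in_dim = n_features
for _ in range(18):          # "18 layers thick," per Topol's description
    layers += [nn.Linear(in_dim, width), nn.ReLU()]
    in_dim = width
layers += [nn.Linear(width, 1)]  # final logit for mortality risk
model = nn.Sequential(*layers)

# One training step on synthetic records.
X = torch.randn(512, n_features)            # stand-in for encoded EHR records
y = torch.randint(0, 2, (512, 1)).float()   # stand-in mortality labels
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

opt.zero_grad()
loss = loss_fn(model(X), y)
loss.backward()
opt.step()
print(f"training loss after one step: {loss.item():.3f}")
```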

With all of the AI-driven scans, surgical assistants, et al. that are mildly interesting, one application he points out as actually available now is speech recognition and transcription, so doctors can talk to their patients instead of typing into the EHR without ever making eye contact. This isn't super-science like robots doing brain surgery or models predicting the moment of my death. The technology behind NLP is already viable, and organizations might want to find applications for it before looking for the "game-changer."
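
A minimal sketch of that kind of transcription pipeline, assuming the open-source SpeechRecognition package; the audio file name and the append_to_ehr helper are hypothetical placeholders, and a real clinical system would use a medical-vocabulary speech model rather than a generic web endpoint.

```python
# Sketch: transcribe a recorded doctor-patient conversation so the note
# lands in the EHR without the doctor typing during the visit.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("visit_recording.wav") as source:  # hypothetical audio file
    audio = recognizer.record(source)

# Uses the library's free Google Web Speech endpoint for illustration;
# a production clinical system would use a medical-vocabulary model.
transcript = recognizer.recognize_google(audio)

def append_to_ehr(patient_id: str, note: str) -> None:
    # Hypothetical placeholder for an organization's EHR integration.
    print(f"[EHR {patient_id}] {note[:80]}...")

append_to_ehr("patient-123", transcript)
```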

Topol confines himself mostly to doctors and hospitals, but there is a multitude of opportunities for AI in healthcare. In just about every customer-facing business, augmented intelligence is a current and promising approach. For Topol: "There are about 10,000 human diseases, and there's not a doctor who could recall any significant fraction of them. If doctors can't remember a possible diagnosis when making up a differential, then they will diagnose according to the possibilities that are mentally 'available' to them, and an error can result." He is critical of IBM's Watson (as am I). He says Watson does ingest abstracts, but it doesn't transform all that data into a structured database that would be useful to a working doctor.

Lesson there: ask an expert before you believe the hype.
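
To illustrate what even a minimal "structured database useful to a working doctor" could look like, here is a toy sketch that ranks candidate diagnoses by symptom overlap, so the differential is not limited to what is mentally "available." The disease-symptom table is invented for illustration, not clinical data.

```python
# Toy sketch: rank candidate diagnoses by symptom overlap.
# The disease-symptom table below is invented for illustration.
DISEASE_SYMPTOMS = {
    "influenza": {"fever", "cough", "myalgia", "fatigue"},
    "pneumonia": {"fever", "cough", "dyspnea", "chest pain"},
    "pulmonary embolism": {"dyspnea", "chest pain", "tachycardia"},
    "pericarditis": {"chest pain", "fever", "fatigue"},
}

def differential(presenting: set[str], top_k: int = 3) -> list[tuple[str, float]]:
    """Score each disease by the fraction of its listed symptoms present."""
    scores = {
        disease: len(presenting & symptoms) / len(symptoms)
        for disease, symptoms in DISEASE_SYMPTOMS.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Example: surface candidates a clinician might not have recalled.
print(differential({"fever", "cough", "dyspnea"}))
```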

I like Topol. I met him a few years ago at a medical device conference. He was talking about mobility, not yet AI. He took out a card, put it in his jacket pocket, held up an iPhone and showed on the big screen for the audience, live, his EKG and other vitals (which were impressive). Then he explained that a doctor in Europe was looking at this at the same time we were.

There were some ooh's and ah's, but in retrospect, I wonder what happened to that data, and the data from the perhaps hundreds of thousands of other people who would use the device. Who owns that data? And what if hundreds of thousands of streaming EKG data feeds like these got sold and resold to another company that married them to the mountain of unregulated personal data, ultimately to deny you credit, housing, or education using models you will never see?

This is my point about AI for healthcare (or anything). There are two storylines to consider: the usefulness of the application, and its ultimate effect, often unintended, on people.
