Cerner AI expert discusses important ‘misconceptions’ about the technology

Dr. Tanuj Gupta, vice president at Cerner Intelligence, is an expert in healthcare artificial intelligence and machine learning. Part of his job is explaining, from his expert point of view, what he considers misconceptions about AI, especially in healthcare.

In this interview with Healthcare IT News, Gupta discusses what he says are popular misconceptions about gender and racial bias in algorithms, AI replacing clinicians, and the regulation of AI in healthcare.

Q. In general terms, why do you think there are misconceptions about AI in healthcare, and why do they persist?

A. I've given more than 100 presentations on AI and ML in the past year. There's no doubt these technologies are hot topics in healthcare that usher in great hope for the advancement of our industry.

While they have the potential to transform patient care, quality and outcomes, there also are concerns about the negative impact these technologies could have on human interaction, as well as the burden they could place on clinicians and health systems.

Q. Should we be concerned about gender and racial bias in ML algorithms?

A. Traditionally, healthcare providers consider a patient's unique situation when making decisions, along with information sources such as their clinical training and experience, as well as published medical research.

Now, with ML, we can be more efficient and improve our ability to examine large amounts of data, flag potential problems and suggest next steps for treatment. While this technology is promising, there are some risks. Although AI and ML are just tools, they have many points of entry that are vulnerable to bias, from inception to end use.

As ML learns and adapts, it's vulnerable to potentially biased input and patterns. Existing prejudices, especially if they're unknown, and data that reflects societal or historical inequities can result in bias being baked into the data that's used to train an algorithm or ML model to predict outcomes. If not identified and mitigated, clinical decision-making based on that bias could negatively impact patient care and outcomes. When bias is introduced into an algorithm, certain groups can be targeted unintentionally.

Gender and racial biases have been identified in commercial facial-recognition systems, which are known to falsely identify Black and Asian faces 10 to 100 times more often than Caucasian faces, and to have more difficulty identifying women than men. Bias is also seen in natural language processing that identifies topic, opinion and emotion.

If the systems in which our AI and ML tools are developed or implemented are biased, then their resulting health outcomes can be biased, which can perpetuate health disparities. While breaking down systemic bias can be challenging, it's important that we do all we can to identify and correct it in all its manifestations. This is the only way we can optimize AI and ML in healthcare and ensure the highest quality of patient experience.
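To make "identifying and correcting" bias a bit more concrete, here is a minimal sketch of one common check: comparing a model's false positive rates across demographic groups. The records, group labels and logic below are purely hypothetical illustrations, not drawn from any Cerner product or real patient population.

from collections import defaultdict

# Hypothetical records: (demographic group, true outcome, model prediction)
records = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

false_positives = defaultdict(int)  # actual negatives flagged positive, per group
negatives = defaultdict(int)        # actual negatives, per group

for group, truth, prediction in records:
    if truth == 0:
        negatives[group] += 1
        if prediction == 1:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate = {rate:.2f}")

# A material gap between groups is one signal that the model, or the data
# it was trained on, deserves closer review before clinical use.

The same kind of per-group comparison can be run on other error measures, such as false negative rates or calibration, depending on which harms matter most for the clinical use case.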

Q. Could AI replace clinicians?

A. The short answer is no. AI and ML will not replace clinician judgment. Providers will always have to be involved in the decision-making process, because we hold them accountable for patient care and outcomes.

We already have some successful guardrails in other areas of healthcare that we'll likely evolve to for AI and ML. For example, one parallel is verbal orders. If a doctor gives a nurse a verbal order for a medication, the nurse repeats it back to them before entering it in the chart, and the doctor must sign off on it. If that medication ends up causing harm to the patient, the doctor can't say the nurse is at fault.

Additionally, any standing protocol orders that a hospital wants to institute must be approved by a committee of physicians who then have a regular review period to ensure the protocols are still safe and effective. That way, if the nurse executes a protocol order and there's a patient-safety issue, that medical committee is responsible and accountable, not the nurse.

The same will hold true for AI and ML algorithms. There won't be an algorithm arbitrarily running on a tool or machine, treating a patient without doctor oversight.

If we throw a bunch of algorithms into the electronic health record that say, "treat the patient this way" or "diagnose him with this," we'll have to hold the clinician, and possibly the algorithm maker if it becomes regulated by the U.S. Food and Drug Administration, accountable for the outcomes. I can't imagine a situation where that would change.

Clinicians can use, and are using, AI and ML to improve care and maybe make healthcare even more human than it is today. AI and ML could also allow physicians to enhance the quality of time spent with patients.

Bottom line, I think we as the healthcare industry should embrace AI and ML technology. It won't replace us; it will just become a new and effective toolset to use with our patients. And using this technology responsibly means always staying on top of any potential patient safety risks.

Q. What should we know about the regulation of AI in healthcare?

A. AI introduces some important concerns around data ownership, safety and security. Without a standard for how to handle these issues, there's the potential to cause harm, either to the healthcare system or to the individual patient.

For these reasons, important regulations should be expected. The pharmaceutical, clinical treatment and medical device industries provide a precedent for how to protect data rights, privacy, and security, and drive innovation in an AI-empowered healthcare system.

Let's start with data rights. When people use an at-home DNA testing kit, they have likely given broad consent for their data to be used for research purposes, as defined by the U.S. Department of Health and Human Services in a 2017 guidance document.

While that guidance establishes rules for giving consent, it also creates the process for withdrawing consent. Handling consent in an AI-empowered healthcare system may be a challenge, but there's precedent for thinking through this issue to both protect rights and drive innovation.

With regard to patient safety concerns, the Food and Drug Administration has published two documents to address the issue: Draft Guidance on Clinical Decision Support Software and Draft Guidance on Software as a Medical Device. The first guidance sets a framework for determining if an ML algorithm is a medical device.

Once you've determined your ML algorithm is in fact a device, the second guidance provides "good machine learning practices." Similar FDA regulations on diagnostics and therapeutics have kept us safe from harm without getting in the way of innovation. We should expect the same outcome for AI and ML in healthcare.

Finally, let's look at data security and privacy. The industry wants to protect data privacy while unlocking more value in healthcare. For example, HHS has long relied on the Health Insurance Portability and Accountability Act, which was signed into law in 1996.

While HIPAA is designed to safeguard protected health information, growing innovation in healthcare, particularly regarding privacy, led to HHS' recently issued proposed rule to prevent information blocking and encourage healthcare innovation.

It's safe to conclude that AI and ML in healthcare will be regulated. But that doesn't mean these tools won't be useful. In fact, we should expect the continued growth of AI applications for healthcare as more uses and benefits of the technology surface.
