Responsible AI in health care starts at the top, but it's everyone's responsibility (VB Live) – VentureBeat

Posted: March 18, 2021 at 12:39 am

Presented by Optum

Health care's Quadruple Aim is to improve health outcomes, enhance the experiences of patients and providers, and reduce costs, and AI can help with each. In this VB Live event, learn how stakeholders can use AI responsibly, ethically, and equitably to ensure all populations benefit.

Register here for free.

Breakthroughs in the application of machine learning and other forms of artificial intelligence (AI) in health care are advancing rapidly, creating advantages in both the clinical and administrative realms of the field. It's on the administrative side (think workflows or back-office processes) where the technology has been more fully adopted. Using AI to simplify those processes creates efficiencies that reduce the amount of work it takes to deliver health care and improves the experiences of both patients and providers.

But it's increasingly clear that applying AI responsibly needs to be a central focus for organizations that use data and information to improve outcomes and the overall experience.

"Advanced analytics and AI have a significant impact on how important decisions are made across the health care ecosystem," says Sanji Fernando, SVP of artificial intelligence and analytics platforms at Optum. That's why the company maintains guidelines for the responsible use of advanced analytics and AI across all of UnitedHealth Group.

"It's important for us to have a framework, not only for the data scientists and machine learning engineers, but for everyone in our organization (operations, clinicians, product managers, marketing) to better understand expectations and how we want to drive breakthroughs to better support our customers, patients, and the wider health care system," he says. "We view the promise of AI and its responsible use as part of our shared responsibility to use these breakthroughs appropriately for patients, providers, and our customers."

The guidelines focus on making sure everyone considers how to appropriately use advanced analytics and AI, how these models are trained, and how they are monitored and evaluated over time, he adds.

Machine learning models, by definition, learn from the data being created throughout the health care system. Inequities in that system may be reflected in the data and, in turn, in the predictions that machine learning models return. "It's important for everyone to be aware that health inequity may exist and that models may reflect it," he explains.

"By consistently evaluating how models may classify or infer, and looking at how that affects folks of different races, ethnicities, and ages, we can be more aware of where some models may require consistent examination to best ensure they are working the way we'd like them to," he says. "The reality is that there's no magic bullet to fix an ML model automatically, but it's important for us to understand and consistently learn where these models may impact different groups."
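The per-group evaluation Fernando describes can be made concrete. Below is a minimal sketch (not Optum's actual tooling; the data and function name are hypothetical) that computes the false positive rate separately for each demographic group, one of the standard checks a bias audit performs. A large gap between groups would flag the model for closer review.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Compute the false positive rate separately per demographic group.

    records: iterable of (group, y_true, y_pred) tuples with 0/1 labels.
    Returns {group: FPR}, where FPR = false positives / actual negatives.
    """
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for group, y_true, y_pred in records:
        if y_true == 0:
            neg[group] += 1
            if y_pred == 1:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Hypothetical audit records: (group, actual outcome, model prediction)
records = [
    ("A", 0, 0), ("A", 0, 1), ("A", 1, 1), ("A", 0, 0),
    ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]
rates = false_positive_rate_by_group(records)
# Group B's model errors fall disproportionately on actual negatives
# compared with group A, which is exactly the kind of disparity that
# should trigger the "consistent examination" described above.
```

The same pattern extends to other metrics (false negative rate, selection rate) and other attributes such as age bands; the point is to slice every evaluation by group rather than reporting a single aggregate number.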

Transparency is a key factor in delivering responsible AI. That includes being very clear about how you're training your models, the appropriate use of the data used to train an algorithm, and data privacy. When possible, it also means understanding how specific features are being identified or leveraged within the model. Basics like an age or a date are straightforward features, but the challenge arises with paragraphs of natural language and unstructured text. Each word, phrase, or paragraph can be considered a feature, creating an enormous number of combinations to consider.

But understanding feature importance, meaning which features matter most to the model, "is important to provide better insight into how the model may actually be working," he explains. "It's not true mathematical interpretability, but it gives us a better awareness."
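The article does not say which method Optum uses, but one common, model-agnostic way to estimate feature importance is permutation importance: shuffle a single feature's values and measure how much the model's accuracy drops. A self-contained sketch with a hypothetical toy model:

```python
import random

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Estimate each feature's importance as the average drop in accuracy
    when that feature's column is randomly shuffled across rows."""
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    importances = []
    for j in range(len(X[0])):
        drop = 0.0
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            # Rebuild the dataset with only column j permuted.
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drop += baseline - accuracy(model, X_perm, y)
        importances.append(drop / n_repeats)
    return importances

# Hypothetical model that depends only on feature 0.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.1, 0.9], [0.9, 0.1], [0.2, 0.8], [0.8, 0.2], [0.3, 0.7], [0.7, 0.3]]
y = [0, 1, 0, 1, 0, 1]
importances = permutation_importance(model, X, y)
# Feature 0 shows a large accuracy drop; feature 1, which the model
# ignores, shows none.
```

As Fernando notes, this is "not true mathematical interpretability": it ranks features by their effect on predictions rather than explaining the model's internal logic, which is why it scales poorly to unstructured text where every phrase is a candidate feature.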

Another important factor is being able to reproduce the performance and results of a model. Results can change when you train or retrain an algorithm, so you want to be able to trace that history by reproducing results over time. This ensures the consistency and appropriateness of the model remains constant (and allows for potential adjustments should they be needed).
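One common way to make a training run traceable (a minimal sketch with hypothetical names, and a trivial stand-in for real model training) is to record the random seed and a fingerprint of the training data alongside the result, so that the same inputs can be shown to reproduce the same model later:

```python
import hashlib
import json
import random

def train_with_provenance(data, seed):
    """Run a (toy) training step and record everything needed to
    reproduce it: the seed and a hash fingerprint of the input data."""
    rng = random.Random(seed)
    sample = rng.sample(data, k=len(data) // 2)  # e.g., a train split
    model = sum(sample) / len(sample)            # stand-in for real training
    return {
        "seed": seed,
        "data_sha256": hashlib.sha256(json.dumps(data).encode()).hexdigest(),
        "model": model,
    }

data = list(range(100))
run1 = train_with_provenance(data, seed=42)
run2 = train_with_provenance(data, seed=42)
# Same seed and same data produce an identical record, so an auditor can
# rerun training months later and verify the model hasn't drifted.
```

Real pipelines add versioned code and hyperparameters to the same record, but the principle is identical: every result carries enough metadata to regenerate it.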

There's no shortage of tools and capabilities available across the field of responsible AI, because there are so many people who are passionate about making sure we all use AI responsibly. For example, Optum uses an open-source bias audit tool from the University of Chicago. "But there are any number of approaches and great thinking from a tooling perspective," Fernando says, "so it's really becoming an industry best practice to implement a policy of responsible AI."

The other piece of the puzzle requires work and commitment from everyone in the ecosystem: making responsible use everyone's responsibility, not just that of the machine learning engineer or data scientist.

"Our aspiration is that every employee understands these responsibilities and takes ownership of them," he says. "Whether UHG employees are using ML-driven recommendations in their day-to-day work, designing new products and services, or they're the data scientists and ML engineers who can evaluate models and understand output class distributions, we all have a shared responsibility to ensure these tools are achieving the best and most equitable results for the people we serve."

To learn more about the ways AI is impacting the delivery and administration of health care across the ecosystem, the benefits of machine learning for cost savings and efficiency, and the importance of responsible AI for every worker, don't miss this VB Live event.

Don't miss out!

Register here for free.

