AI Can Be Both Accurate and Transparent – HBR.org Daily

Posted: May 18, 2023 at 2:01 am

In 2019, Apple's credit card business came under fire for offering a woman one-twentieth the credit limit offered to her husband. When she complained, Apple representatives reportedly told her, "I don't know why, but I swear we're not discriminating. It's just the algorithm."

Today, more and more decisions are made by opaque, unexplainable algorithms like this, often with similarly problematic results. From credit approvals to customized product and promotion recommendations to résumé readers to fault detection for infrastructure maintenance, organizations across a wide range of industries are investing in automated tools whose decisions are often acted upon with little to no insight into how they are made.

This approach creates real risk. Research has shown that a lack of explainability is one of executives' most common concerns about AI, and that it substantially undermines users' trust in and willingness to use AI products, not to mention their safety.

And yet, despite the downsides, many organizations continue to invest in these systems, because decision-makers assume that unexplainable algorithms are intrinsically superior to simpler, explainable ones. This perception is known as the accuracy-explainability tradeoff: Tech leaders have historically assumed that the better a human can understand an algorithm, the less accurate it will be.

Specifically, data scientists draw a distinction between so-called black-box and white-box AI models: White-box models typically include just a few simple rules, presented for example as a decision tree or a simple linear model with limited parameters. Because of the small number of rules or parameters, the processes behind these algorithms can typically be understood by humans.

In contrast, black-box models use hundreds or even thousands of decision trees (known as random forests), or billions of parameters (as deep learning models do), to inform their outputs. Cognitive load theory has shown that humans can only comprehend models with up to about seven rules or nodes, making it functionally impossible for observers to explain the decisions made by black-box systems. But does their complexity necessarily make black-box models more accurate?

To explore this question, we conducted a rigorous, large-scale analysis of how black- and white-box models performed on a broad array of nearly 100 representative datasets (known as benchmark classification datasets), spanning domains such as pricing, medical diagnosis, bankruptcy prediction, and purchasing behavior. We found that for almost 70% of the datasets, the black-box and white-box models produced similarly accurate results. In other words, more often than not, there was no tradeoff between accuracy and explainability: A more-explainable model could be used without sacrificing accuracy.

This is consistent with other emerging research exploring the potential of explainable AI models, as well as our own experience working on case studies and projects with companies across diverse industries, geographies, and use cases. For example, it has been repeatedly demonstrated that COMPAS, the complicated black-box tool that's widely used in the U.S. justice system to predict the likelihood of future arrests, is no more accurate than a simple predictive model that looks only at age and criminal history. Similarly, a research team created a model for predicting the likelihood of defaulting on a loan that was simple enough for average banking customers to understand, and found that it was less than 1% less accurate than an equivalent black-box model, a difference within the margin of error.

Of course, there are some cases in which black-box models are still beneficial. But in light of the downsides, our research suggests several steps companies should take before adopting a black-box approach:

As a rule of thumb, white-box models should be used as benchmarks to assess whether black-box models are necessary. Before choosing a type of model, organizations should test both; if the difference in performance is insignificant, they should select the white-box option. A minimal version of that head-to-head comparison is sketched below.
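
The sketch below shows one way such a benchmark could look in code, comparing a shallow decision tree (white box) against a random forest (black box). The dataset, specific models, and the one-point tolerance are illustrative assumptions, not the authors' actual protocol.

```python
# Sketch: benchmark a white-box model against a black-box one before committing.
# Assumptions: scikit-learn is available; the bundled breast-cancer dataset and
# the 1-point accuracy tolerance are stand-ins for your own data and threshold.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# White box: a shallow tree a human can read end to end.
white_box = DecisionTreeClassifier(max_depth=3, random_state=0)
# Black box: hundreds of trees, effectively unreadable as a whole.
black_box = RandomForestClassifier(n_estimators=300, random_state=0)

white_acc = cross_val_score(white_box, X, y, cv=5).mean()
black_acc = cross_val_score(black_box, X, y, cv=5).mean()

print(f"white-box accuracy: {white_acc:.3f}")
print(f"black-box accuracy: {black_acc:.3f}")

# If the gap is within your tolerance, prefer the explainable model.
TOLERANCE = 0.01
if black_acc - white_acc <= TOLERANCE:
    print("Difference is insignificant -> choose the white-box model.")
else:
    print("Black box adds real accuracy -> weigh that against explainability.")
```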

One of the main factors that will determine whether a black-box model is necessary is the data involved. First, the decision depends on the quality of the data. When data is noisy (i.e., when it includes a lot of erroneous or meaningless information), relatively simple white-box methods tend to be effective. For example, we spoke with analysts at Morgan Stanley who found that for their highly noisy financial datasets, simple trading rules such as "buy a stock if the company is undervalued, has underperformed recently, and is not too large" worked well.
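
A rule of that form is simple enough to write out directly, which is precisely what makes it a white box. The hypothetical screen below illustrates the idea; the column names and thresholds are invented for illustration and are not Morgan Stanley's actual rule.

```python
# Hypothetical white-box trading screen in the spirit of the rule quoted above.
# Column names and thresholds are illustrative assumptions, not a real strategy.
import pandas as pd

def buy_signal(stocks: pd.DataFrame) -> pd.Series:
    """Return True for the stocks the simple rule would buy."""
    undervalued = stocks["price_to_book"] < 1.0        # "company is undervalued"
    underperformed = stocks["return_12m"] < 0.0        # "underperformed recently"
    not_too_large = stocks["market_cap_usd"] < 10e9    # "is not too large"
    return undervalued & underperformed & not_too_large

# Every decision the rule makes traces back to three readable conditions.
```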

Second, the type of data also affects the decision. For applications that involve multimedia data such as images, audio, and video, black-box models may offer superior performance. For instance, we worked with a company that was developing AI models to help airport staff predict security risk based on images of air cargo. They found that black-box models had a higher chance of detecting high-risk cargo items that could pose a security threat than equivalent white-box models did. These black-box tools enabled inspection teams to save thousands of hours by focusing on high-risk cargo, substantially boosting the organization's performance on security metrics. In similarly complex applications such as face detection for cameras, vision systems in autonomous vehicles, facial recognition, image-based medical diagnostic devices, illegal/toxic content detection, and most recently, generative AI tools like ChatGPT and DALL-E, a black-box approach may be advantageous or even the only feasible option.

Transparency is always important for building and maintaining trust, but it's especially critical for particularly sensitive use cases. In situations where a fair decision-making process is of utmost importance to your users, or in which some form of procedural justice is a requirement, it may make sense to prioritize explainability even if your data might otherwise lend itself to a black-box approach, or if you've found that less-explainable models are slightly more accurate.

For instance, in domains such as hiring, allocation of organs for transplant, and legal decisions, opting for a simple, rule-based, white-box AI system will reduce risk to both the organization and its users. Many leaders have discovered these risks the hard way: In 2015, Amazon found that its automated candidate-screening system was biased against female software developers, while a Dutch AI welfare-fraud detection tool was shut down in 2018 after critics decried it as "a large and non-transparent black hole."

An organizations choice between white or black-box AI also depends on its own level of AI readiness. For organizations that are less digitally developed, in which employees tend to have less trust in or understanding of AI, it may be best to start with simpler models before progressing to more complex solutions. That typically means implementing a white-box model that everyone can easily understand, and only exploring black-box options once teams have become more accustomed to using these tools.

For example, we worked with a global beverage company that launched a simple white-box AI system to help employees optimize their daily workflows. The system offered limited recommendations, such as which products should be promoted and how much of different products should be restocked. Then, as the organization matured in its use of and trust in AI, managers began to test out whether more complex, black-box alternatives might offer advantages in any of these applications.

In certain domains, explainability might be a legal requirement, not a nice-to-have. For instance, in the U.S., the Equal Credit Opportunity Act requires financial institutions to be able to explain the reasons why credit has been denied to a loan applicant. Similarly, Europe's General Data Protection Regulation (GDPR) suggests that employers should be able to explain how candidates' data has been used to inform hiring decisions. When organizations are required by law to be able to explain the decisions made by their AI models, white-box models are the only option.
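
White-box models make that kind of explanation straightforward to produce. As a hypothetical illustration, in a linear scoring model each denial can be traced to the applicant's most negative feature contributions, which can double as adverse-action reasons; the feature names and numbers below are made up for the example.

```python
# Sketch: deriving "reason codes" for a credit denial from a white-box model.
# In a logistic regression, each feature's contribution is coefficient * value,
# so the most negative contributions serve as human-readable denial reasons.
# Feature names and numbers are hypothetical, not a real scoring model.
import numpy as np

feature_names = ["debt_to_income", "recent_delinquencies", "credit_history_years"]
coefficients = np.array([-2.1, -1.4, 0.8])   # learned weights (illustrative)
applicant = np.array([0.55, 2.0, 1.5])       # one standardized applicant

contributions = coefficients * applicant
# Sort features by how strongly they pushed the score toward denial.
order = np.argsort(contributions)
reasons = [feature_names[i] for i in order if contributions[i] < 0][:2]

print("Top adverse-action reasons:", reasons)
# e.g. ['recent_delinquencies', 'debt_to_income'] -- each reason maps to a
# single, auditable term in the model, which is what makes it explainable.
```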

Finally, there are of course contexts in which black-box models are both undeniably more accurate (as was the case in 30% of the datasets we tested in our study) and acceptable with respect to regulatory, organizational, or user-specific concerns. For example, applications such as computer vision for medical diagnoses, fraud detection, and cargo management all benefit greatly from black-box models, and the legal or logistical hurdles they pose tend to be more manageable. In cases like these, if an organization does decide to implement an opaque AI model, it should take steps to address the trust and safety risks associated with a lack of explainability.

In some cases, it is possible to develop an explainable white-box proxy that clarifies, in approximate terms, how a black-box model reaches its decisions. Even if this explanation isn't fully accurate or complete, it can go a long way toward building trust, reducing biases, and increasing adoption. In addition, a greater (if imperfect) understanding of the model can help developers refine it further, adding more value for these businesses and their end users.
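
One common way to build such a proxy is a global surrogate: train a simple, readable model to imitate the black box's predictions rather than the original labels, then measure how faithfully it does so. A minimal sketch, assuming scikit-learn and a stand-in dataset:

```python
# Sketch: a global surrogate -- a shallow tree trained to imitate a black box.
# The dataset and model choices are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# 1. Train the opaque model on the real labels.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# 2. Train a readable proxy on the black box's *predictions*, not the labels.
proxy = DecisionTreeClassifier(max_depth=3, random_state=0)
proxy.fit(X, black_box.predict(X))

# 3. Fidelity: how often the proxy reproduces the black box's decision.
fidelity = accuracy_score(black_box.predict(X), proxy.predict(X))
print(f"Surrogate fidelity: {fidelity:.2%}")

# The proxy itself is small enough to print and discuss with stakeholders.
print(export_text(proxy, feature_names=list(data.feature_names)))
```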

In other cases, organizations may truly have very limited insight into why a model makes the decisions it does. If an approximate explanation isn't possible, leaders can still prioritize transparency in how they talk about the model both internally and externally, openly acknowledging the risks and working to address them.

***

Ultimately, there is no one-size-fits-all solution to AI implementation. All new technology comes with risks, and the choice of how to balance those risks with the potential rewards will depend on the specific business context and data. But our research demonstrates that in many cases, simple, interpretable AI models perform just as well as black-box alternatives, without sacrificing the trust of users or allowing hidden biases to drive decisions.

The authors would like to acknowledge Gaurav Jha and Sofie Goethals for their contribution.
