IBM's Arin Bhowmick explains why AI trust is hard to achieve in the enterprise – VentureBeat

While appreciation of the potential impact AI can have on business processes has been building for some time, progress has been nowhere near as quick as initial forecasts led many organizations to expect.

Arin Bhowmick, chief design officer for IBM, explained to VentureBeat what needs to be done to achieve the level of AI explainability that will be required to take AI to the next level in the enterprise.

This interview has been edited for clarity and brevity.

VentureBeat: It seems a lot of organizations still don't trust AI. Do you think that's improving?

Arin Bhowmick: I do think it's improved, or is getting better. But we still have a long way to go. We haven't historically been able to bake trust, fairness, and explainable AI into the products and experiences. From an IBM standpoint, we are trying to create reliable technology that can augment [but] not really replace human decision-making. We feel that trust is essential to the adoption. It allows organizations to understand and explain recommendations and outcomes.

What we are essentially trying to do is akin to a nutritional label. We're looking to have a similar kind of transparency in AI systems. There is still some hesitation in adoption of AI because of a lack of trust. Roughly 80% to 85% of the professionals from different organizations who took part in an IBM survey said their organization has been negatively impacted by problems such as bias, especially in the data. I would say 80% or more agree that consumers are more likely to choose services from a company that offers transparency and an ethical framework for how its AI models are built.

VentureBeat: As an AI model runs, it can generate different results as the algorithms learn more about the data. How much does that lack of consistency impact trust?

Bhowmick: The AI model used to do the prediction is as good as the data. It's not just models. It's about what it does and the insight it provides at that point in time that develops trust. Does it tell the user why the recommendation is made or is significant, how it came up with the recommendations, and how confident it is? AI tends to be a black box. The trick around developing trust is to unravel the black box.
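For a simple model, "unraveling the black box" can mean showing the user each feature's contribution to a prediction alongside a confidence figure. The sketch below uses a hypothetical linear loan-scoring model (the weights and feature names are invented for illustration); explainability tools such as SHAP or LIME generalize the same idea to models whose internals cannot be inspected directly.

```python
import math

# Hypothetical linear scoring model: weight per feature, plus a bias term.
weights = {"late_payments": -1.2, "income_k": 0.04, "tenure_years": 0.3}
bias = -0.5

def explain(applicant):
    """Return a confidence score and each feature's contribution.

    For a linear model, contribution = weight * feature value, so the
    per-feature breakdown sums (with the bias) to the raw score."""
    contributions = {f: w * applicant[f] for f, w in weights.items()}
    score = bias + sum(contributions.values())
    confidence = 1 / (1 + math.exp(-score))  # logistic squash to [0, 1]
    return confidence, contributions

conf, parts = explain({"late_payments": 2, "income_k": 80, "tenure_years": 5})
# Show the largest drivers of the decision first, then the confidence.
for feature, value in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {value:+.2f}")
print(f"confidence: {conf:.2f}")
```

The point is the user-facing shape of the output, not the model: the prediction arrives with its reasons and its confidence, rather than as a bare number.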

VentureBeat: How do we achieve that level of AI explainability?

Bhowmick: It's hard. Sometimes it's hard to even judge the root cause of a prediction and insight. It depends on how the model was constructed. Explainability is also hard because when it is provided to the end user, it's full of technical mumbo jumbo. It's not in the voice and tone that the user actually understands.

Sometimes explainability is also a little bit about the why, rather than the what. Giving an example of explainability in the context of the tasks that the user is doing is really, really hard. Unless the developers who are creating these AI-based [and] infused systems actually follow the business process, the context is not going to be there.

VentureBeat: How do we even measure this?

Bhowmick: There is a fairness score and a bias score. There is a concept of model accuracy. Most tools that are available do not provide a realistic score of the element of bias. Obviously, the higher the bias, the worse your model is. It's pretty clear to us that a lot of the source of the bias happens to be in the data and the assumptions that are used to create the model.
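One concrete example of such a fairness score is the disparate impact ratio: the favorable-outcome rate for an unprivileged group divided by the rate for a privileged group. The sketch below uses invented loan-approval data and group labels; toolkits such as IBM's AI Fairness 360 compute this metric (among many others), but the underlying arithmetic is just this.

```python
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates between two groups.

    A value near 1.0 suggests parity; values below roughly 0.8 are
    often flagged as potential bias (the "four-fifths rule")."""
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
outcomes = [1, 0, 1, 0, 1, 1, 1, 0, 1, 1]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

# Group "a" is approved 60% of the time, group "b" 80% of the time,
# so the ratio is 0.75 -- below the 0.8 threshold, a possible red flag.
print(disparate_impact(outcomes, groups, unprivileged="a", privileged="b"))
```

A score like this is what a bias check can surface against the training data itself, before any model is deployed.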

What we've tried to do is bake some bias detection and explainability into the tooling itself. It will look at the profile of the data and match it against other items and other AI models. We'll be able to tell you that what you're trying to produce already has built-in bias, and here's what you can do to fix it.

VentureBeat: That then becomes part of the user experience?

Bhowmick: Yes, and that's very, very important. Whatever bias feeds into the system has huge ramifications. We are creating ethical design practices across the company. We have developed specific design thinking exercises and workshops. We run workshops to make sure that we are considering ethics at the very beginning of our business process planning and design cycle. We're also using AI to improve AI. If we can build in bias and explainable AI checkpoints along the way, inherently we will scale better. That's sort of the game plan here.

VentureBeat: Is every application going to have an AI model embedded within it?

Bhowmick: It's not about the application, it's about whether there are things within that application that AI can help with. If the answer is yes, most applications will have infused AI in them. It will be unlikely that applications will not have AI.

VentureBeat: Will most organizations embed AI engines in their applications or simply involve external AI capabilities via an application programming interface (API)?

Bhowmick: Both will be true. I think the API would be good for people who are getting started. But as the level of AI maturity increases, there will be more information that is specific to a problem statement that is specific to an audience. For that, they will likely have to build custom AI models. They might leverage APIs and other tooling, but to have an application that really understands the user and really gets at the crux of the problem, I think it's important that it's built in-house.

VentureBeat: Overall, what's your best AI advice to organizations?

Bhowmick: I still find that our level of awareness of what AI is, what it can do, and how it can help us is not high. When we talk to customers, all of them want to go into AI. But when you ask them what the use cases are, they sometimes are not able to articulate that.

I think adoption is somewhat lagging because of people's understanding and acceptance of AI. But there's enough information on AI principles to read up on. As you develop an understanding, then look into tooling. It really comes down to awareness.

I think we're in the hype cycle. Some industries are ahead, but if I could give one piece of advice to everyone, it would be: don't force-fit AI. Make sure you design AI in your system in a way that makes sense for the problem you're trying to solve.
