Why transparency is key to promoting trust in artificial intelligence

Artificial intelligence (AI) is inescapable. In our daily lives we probably encounter it and its best friend machine learning much more frequently than we think. Did you buy something online yesterday, use face login on your smartphone, check your Facebook, look for something on Google, or use Google Maps? AI was right there.

When AI is helping us find the most efficient route home, we're often quite happy to let it do its job. But this technology already does far more, from helping to decide whether we're granted a bank loan and diagnosing our illnesses, to serving us targeted advertising.

As AI becomes more deeply embedded in our lives and helps make decisions of growing significance to us, we're rightly concerned about transparency. When big news stories like the Cambridge Analytica scandal, or the ongoing discussion around inherent biases in facial recognition, hit the headlines, we worry about bias (intentional or otherwise), and our trust in AI takes a hit.

Explainable AI gives us a route to greater trust in AI. It is designed to help us learn more about how AI works in any given situation. Instead of simply giving us an answer to a question, the AI shows us how it arrived at that answer. The alternative is the so-called black box, where an AI uses an unspecified range of information and algorithms to reach an answer but makes none of this transparent.

In theory, explainable AI gives us confidence in the conclusions an AI system draws. Dr Terence Tse, Associate Professor of Finance at ESCP Business School, gives the following example: "Imagine you want to obtain a loan and the approval is purely determined by an algorithm. Your loan gets rejected. If the algorithm in question is a black box, it's an issue for all parties. The bank cannot say why this is happening, and you don't know what to do in order to obtain the loan. Having explainable AI will help."
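
To make the idea concrete, here is a minimal, illustrative sketch of what an explanation for a single loan decision might look like. It is not taken from the article or from any bank's system: it trains a simple logistic regression on synthetic data, then lists each feature's contribution to one applicant's outcome. All feature names and figures are invented for illustration.

```python
# Illustrative sketch only: synthetic data and made-up feature names,
# not any real lender's model or scoring method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "years_employed", "missed_payments"]

# Synthetic applicants: approval loosely depends on income and missed payments.
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Pick the applicant the model is least likely to approve and break the
# decision down into per-feature contributions (coefficient * scaled value).
probs = model.predict_proba(scaler.transform(X))[:, 1]
idx = int(np.argmin(probs))
applicant = scaler.transform(X[idx:idx + 1])
contributions = model.coef_[0] * applicant[0]

print("decision:", "approved" if model.predict(applicant)[0] == 1 else "rejected")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name:>16}: {c:+.2f}")
```

Even this toy output gives the applicant something a black box cannot: a ranked list of the factors that pushed the decision one way or the other.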

Explainable AI is also vital for understanding an AI's competence in producing any particular set of outputs. Mark Stefik, Research Fellow and Lead of Explainable AI at PARC, a Xerox company, tells IT Pro: "Typically, when people interact with AIs and the systems do the right thing, then people overestimate the AI's competence. They assume that the machines think like people, which they do not. They assume that machines have common sense, which they do not."

In fact, AI does not think like humans at all. We use 'think' in relation to AI to describe a way of working that is, in reality, quite different from that of our own brains. AI uses algorithms and machine learning to draw conclusions from the data it is given, or from insights it generates. By showing how an AI has reached its decision, explainable AI can help uncover biases, and in doing so not only give individuals a route to redress, as in the banking example above, but also help refine the AI system itself.

Oleg Rogynskyy, Founder and CEO of People.ai, says: "A lack of explainability on how the machine learning model thinks can result in biases. If there is a bias hidden in the data set a machine learning model is trained on, it will consider the bias a ground truth."

Explainability techniques can be used to detect and then remove those biases, and so establish a level of trust between machine and user.
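
As a rough illustration of what such a technique might look like in practice, the sketch below uses permutation importance, one common, generic approach and not necessarily the method Rogynskyy has in mind, to flag a model that has learned to lean on a proxy for a sensitive attribute hidden in its training data. The data, feature names and labelling are all synthetic assumptions.

```python
# Illustrative sketch only: permutation importance as one generic way to
# spot a model relying on a proxy feature. Data and names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["income", "debt_ratio", "postcode_group", "age_band"]

# Biased training data: the label is partly driven by "postcode_group",
# standing in for a proxy variable the model should not be relying on.
X = rng.normal(size=(1000, 4))
y = (0.8 * X[:, 0] + 0.6 * X[:, 2] + rng.normal(scale=0.3, size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by how much shuffling them hurts the score; a high score
# for the proxy feature is a red flag worth investigating, not a verdict.
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    flag = "  <- proxy feature, investigate" if name == "postcode_group" else ""
    print(f"{name:>16}: {imp:.3f}{flag}")
```

If a feature the business has no legitimate reason to rely on ranks near the top, that is a prompt to re-examine the training data rather than proof of bias on its own.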

As AI takes an increasingly important role in our everyday lives, we are becoming more and more concerned about whether we can trust it. As Stefik puts it: "The need for explainable AI increases if we want to use the systems in critical situations, where there are real consequences for good and bad decisions. People want to know when they can trust the systems before they rely on them."

The industry recognises this need. In a recent IBM survey of 4,500 IT decision makers, 83% of respondents said being able to explain how an AI arrived at a decision was important. That figure rose to 92% among those already deploying AI, compared with 75% of those still considering a deployment.

Rogynskyy is unequivocal in his message, saying: "Explainable AI must be prevalent everywhere." Tse is similarly forthright, adding: "If we want to gain public trust in the deployment of AI, we have to make explainable AI a priority."

Stefik, however, has reservations, particularly when it comes to how we define terms like 'trust' and 'explainable', which he argues are nuanced and complex concepts. Nevertheless, he hasn't written explainable AI off completely, saying: "It is not ready as a complete (or well-defined) approach to making trustworthy systems, but it will be part of the solution."
