ThoughtSpot Co-Founder Ajeet Singh – we need trust in AI, but what kind of trust is just as important


Can we trust Artificial Intelligence (AI)? It's one of the big questions of our age, but the best response is first to understand that trust in AI takes many forms.

For example: trust that the algorithm is not automating historic or systemic bias after being trained on flawed or biased data; trust that its processes are transparent and explainable, rather than locked in a black box; trust that the system is well designed and reflects the diversity of the real world; and trust that its users have not abdicated common sense and personal responsibility by adopting it.

We also need to trust that these systems and their makers have our best interests at heart and are not excluding groups or individuals, deliberately or accidentally; trust that the picture an AI presents is fair and accurate, and will not create a cascade of problems in our lives; trust that it will help us, perhaps, to be fitter, healthier, and happier; and simply trust that it's reliable and it works.

Hysterical media coverage of malignant AIs and job-stealing robots has contributed to public unease about the rise of intelligence in technology, abetted by decades of sci-fi dystopias. The latter invariably present the march of progress as something to fear, rather than as something that might, say, speed cures for killer diseases or help us use natural resources more sustainably and responsibly. A lack of trust helps no one.

Ajeet Singh is Executive Chairman and Co-Founder of AI company ThoughtSpot, which he set up to be a self-styled 'Google for numbers'. The aim was to democratize organizations' structured data by putting the underlying facts and figures in the hands of decision makers. It was a tall order, he explains:

It's easier said than done, because it is not about just taking some off-the-shelf NLP libraries and slapping them on top of a database. Search and NLP have been done for unstructured data, but when you apply them to structured data, those technologies don't work for search.

Google ranks answers, but the onus is on the user to pick the right one. When you're dealing with numbers, you have to provide people with precise answers.
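To make the contrast concrete, here is a deliberately minimal Python sketch, not how ThoughtSpot actually works: the table, columns, and question-to-SQL mapping are all hypothetical. It shows why a query over structured data has to resolve to one exact figure rather than a ranked list of candidate documents.

```python
# A toy illustration (not ThoughtSpot's engine) of the distinction Singh
# draws: document search returns ranked candidates, but a question asked
# of structured data must resolve to one precise aggregate.
# The table, columns, and question mapping below are all hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 95.0)],
)

# The "NLP" layer here is reduced to one known question shape. The hard
# part a real system faces is parsing arbitrary phrasings and choosing
# the right measures, joins, and filters before anything is executed.
QUESTION_TO_SQL = {
    "total revenue by region":
        "SELECT region, SUM(revenue) FROM sales "
        "GROUP BY region ORDER BY region",
}

def answer(question: str) -> list:
    sql = QUESTION_TO_SQL[question]      # one exact query, no ranked guesses
    return conn.execute(sql).fetchall()  # a precise answer, not candidates

print(answer("total revenue by region"))
# [('APAC', 95.0), ('EMEA', 200.0)]
```

The point of the sketch is that there is no fallback to "here are ten results, pick one": if the question is mapped to the wrong query, the user gets a wrong number presented as fact, which is why precision, and trust, matter more here than in document search.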

This is another part of the challenge of building trust in, and into, intelligence-infused technologies. While Google may have been set up to make information easier to find, the knock-on effect has been people making information easier for Google to find, gaming its algorithms and distorting the nature of information in the process. Google itself is an advertising behemoth; none of this engenders trust.

Yet when it comes to the kind of structured data that ThoughtSpot aims to address, there is a different set of challenges, says Singh. One is that analysts and data scientists waste too much of their time organizing and cleaning data, and spend precious little of it diving deep to extract its value:

Business people should be able to answer their own questions. There are a couple of million analysts in the world. They tend to be well-educated, well-trained people, but often their job is reduced to just building a pie chart. They are grossly under-utilized; they should be doing deeper analysis and finding new opportunities. Meanwhile, the business keeps waiting. So we said, if we empower the business directly, we can uplift both of these communities, the business user community as well as the analysts.

Trust is core to this issue, says Singh, because new technologies succeed or fail on whether people trust them in all of the ways outlined above:

We know in real life that we build relationships with humans that we trust. I fundamentally believe that, of the AI-based technologies being created now, the ones that can create that trust with the users (a virtuous cycle of trust) are the ones that will grow. If they don't have that trust, they will die. I don't subscribe to a dystopian view of the world. I think that it's going to be a partnership between humans and machines. But there needs to be that trust.

For Singh, there are four principles to building trust in AI and related technologies, which he calls the STAR model: security, transparency, accuracy, and relevance. After all, another element of building trust in intelligent systems involves them not filling our lives with noise:

Users need to trust that you're going to treat the data with the respect it deserves, which means it needs to be safe and secure with you. If you're using AI to build, say, healthcare applications where you're actually making medical recommendations, the case for transparency is very strong. You need to explain how the AI is working, how it is coming up with recommendations or decisions.

The end user may not be in a position to understand a complex technology, so a certain level of transparency is essential in how it is working, how the decisions are being made, and how you're training it.

Technology innovation has been ahead of societal understanding of how information is being used and how it is being processed, as we saw in the case of Facebook. Sometimes computer scientists don't understand the implications of their own creations, because technologies can now get adopted at a scale that has never happened before.

Not everything should be regulated, because then we will slow down progress. But there needs to be accountability, and I would think that transparency is more important than regulation, as long as there is transparency in the system.

But is slowing down the headlong rush to AI necessarily a bad thing? Wouldn't it be better to adopt it smartly, appropriately, manageably, and sensibly, rather than in a tactical arms race? He says:

I always like to make the distinction between driving fast and driving rash. You can drive fast, but you're still within limits, you're still aware of others, you have a responsibility to others who are driving on the road with you.

Take AI in healthcare. There is a huge opportunity, such as what is going on right now. If we [the technology community] can use AI to accelerate the discovery and development of a vaccine, that would be an amazing thing, and I wouldn't want that to be slowed down at all.

It doesn't mean that technology companies should not move rationally, or that society should not move rationally; but we should move fast. I really believe that there is still so much to be done as a society, particularly when it comes to healthcare, or education in developing countries. There is a lot of opportunity to uplift society at large.

I feel that the regulators and the government have been very reactive, but the technology industry hasn't taken the responsibility of bringing them along. And they also haven't made the proactive effort to say, 'We need to be partners. We can make a very positive impact on the world.' At least, that's my personal view.
