How DeepMind thinks it can make chatbots safer – MIT Technology Review

Posted: September 27, 2022 at 7:44 am

Some technologists hope that one day we will develop a superintelligent AI system that people will be able to have conversations with. Ask it a question, and it will offer an answer that sounds like something composed by a human expert. You could use it to ask for medical advice, or to help plan a holiday. Well, that's the idea, at least.

In reality, we're still a long way away from that. Even the most sophisticated systems of today are pretty dumb. I once got Meta's AI chatbot BlenderBot to tell me that a prominent Dutch politician was a terrorist. In experiments where AI-powered chatbots were used to offer medical advice, they told pretend patients to kill themselves. Doesn't fill you with a lot of optimism, does it?

That's why AI labs are working hard to make their conversational AIs safer and more helpful before turning them loose in the real world. I just published a story about Alphabet-owned AI lab DeepMind's latest effort: a new chatbot called Sparrow.

DeepMind's new trick for making a good AI-powered chatbot was to have humans tell it how to behave, and to force it to back up its claims using Google search. Human participants were then asked to evaluate how plausible the AI system's answers were. The idea is to keep training the AI using dialogue between humans and machines.
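The feedback loop described above can be sketched in a few lines. This is a hypothetical toy illustration, not DeepMind's actual code: the model proposes candidate answer styles, humans judge whether each answer seems plausible, and the accumulated judgements steer which behaviour the system prefers. The names (`FeedbackTrainer`, `answer_with_citation`) are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackTrainer:
    """Toy stand-in for training from human judgements of plausibility."""
    # Running score per answer style; higher means humans rated it plausible more often.
    scores: dict = field(default_factory=dict)

    def record(self, answer_id: str, plausible: bool) -> None:
        """Record one human judgement: +1 if rated plausible, -1 otherwise."""
        self.scores[answer_id] = self.scores.get(answer_id, 0) + (1 if plausible else -1)

    def best(self) -> str:
        """Return the answer style humans have rated most plausible so far."""
        return max(self.scores, key=self.scores.get)

trainer = FeedbackTrainer()
# Answers backed by search evidence get rated plausible; unsupported ones don't.
trainer.record("answer_with_citation", True)
trainer.record("answer_with_citation", True)
trainer.record("answer_unsupported", False)
print(trainer.best())  # answer_with_citation
```

In the real system this role is played by a learned reward model and reinforcement learning, but the principle is the same: human ratings of the dialogue decide which behaviours get reinforced.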

In reporting the story, I spoke to Sara Hooker, who leads Cohere for AI, a nonprofit AI research lab.

She told me that one of the biggest hurdles in safely deploying conversational AI systems is their brittleness: they perform brilliantly until they are pushed into unfamiliar territory, where they behave unpredictably.

"It is also a difficult problem to solve because any two people might disagree on whether a conversation is inappropriate. And even if we agree that something is appropriate right now, this may change over time, or rely on shared context that can be subjective," Hooker says.

Despite that, DeepMind's findings underline that AI safety is not just a technical fix. You need humans in the loop.

