Software that swaps out words can now fool the AI behind Alexa and Siri – MIT Technology Review

The news: Software called TextFooler can trick natural-language processing (NLP) systems into misunderstanding text just by replacing certain words in a sentence with synonyms. In tests, it dramatically reduced the accuracy of three state-of-the-art NLP systems. For example, Google's powerful BERT neural net became five to seven times worse at identifying whether reviews on Yelp were positive or negative.

How it works: The software, developed by a team at MIT, looks for the words in a sentence that are most important to an NLP classifier and replaces them with a synonym that a human would find natural. For example, changing the sentence "The characters, cast in impossibly contrived situations, are totally estranged from reality" to "The characters, cast in impossibly engineered circumstances, are fully estranged from reality" makes no real difference to how we read it. But the tweaks made an AI interpret the sentences completely differently.
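To make the idea concrete, here is a minimal Python sketch of this style of attack: rank words by how much deleting them changes the classifier's confidence, then greedily swap the highest-ranked words for synonyms until the predicted label flips. The classify function and SYNONYMS table below are toy stand-ins invented for illustration; TextFooler itself attacks real trained models and picks replacements using word embeddings plus part-of-speech and sentence-similarity checks.

```python
# Illustrative sketch of a TextFooler-style attack: rank words by how much
# deleting them changes the classifier's confidence, then greedily swap the
# most important words for synonyms until the predicted label flips.

def classify(text):
    """Toy stand-in for a sentiment model: probability the text is negative."""
    negative_cues = {"contrived", "estranged", "totally", "boring", "awful"}
    hits = sum(w.strip(",.").lower() in negative_cues for w in text.split())
    return min(1.0, 0.2 + 0.3 * hits)

# Hand-written stand-in for an embedding-based synonym search.
SYNONYMS = {
    "contrived": ["engineered", "manufactured"],
    "situations": ["circumstances"],
    "totally": ["fully", "completely"],
    "estranged": ["alienated"],
}

def word_importance(words, base_prob):
    """Score each word by the confidence drop observed when it is deleted."""
    scores = []
    for i in range(len(words)):
        without = " ".join(words[:i] + words[i + 1:])
        scores.append((base_prob - classify(without), i))
    return sorted(scores, reverse=True)  # most influential words first

def attack(text, threshold=0.5):
    words = text.split()
    base_prob = classify(text)
    for _, i in word_importance(words, base_prob):
        core = words[i].strip(",.")
        suffix = words[i][len(core):]           # keep trailing punctuation
        for candidate in SYNONYMS.get(core.lower(), []):
            trial = words[:i] + [candidate + suffix] + words[i + 1:]
            new_prob = classify(" ".join(trial))
            if new_prob < threshold <= base_prob:
                return " ".join(trial)          # label flipped: attack worked
            if new_prob < classify(" ".join(words)):
                words = trial                   # keep the most damaging swap
    return " ".join(words)

sentence = ("The characters, cast in impossibly contrived situations, "
            "are totally estranged from reality")
print(attack(sentence))
```

On this toy classifier, the greedy search flips the example review from negative to positive after a few synonym swaps, loosely mirroring the behavior the MIT team reported against real models.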

Why it matters: We have seen many examples of such adversarial attacks, most often with image recognition systems, where tiny alterations to the input can flummox an AI and make it misclassify what it sees. TextFooler shows that this style of attack also breaks NLP, the AI behind virtual assistants such as Siri, Alexa, and Google Home, as well as other language classifiers like spam filters and hate-speech detectors. The researchers say that tools like TextFooler can help make NLP systems more robust by revealing their weaknesses.

Continued here: Software that swaps out words can now fool the AI behind Alexa and Siri – MIT Technology Review
