AI Software That Analyzes Human Emotions Criticized By Researchers – Mashable India

Posted: December 13, 2019 at 3:24 pm

When you think of Artificial Intelligence, you think of all the great technological advancements that are outperforming humans at a growing number of tasks. That is impressive, but many AI apps on the market today make you question the societal and human impact of this technology. Speaking of which, a group of prominent researchers, alarmed by the harmful societal impact of AI, has called for a ban on automated analysis of facial expressions in hiring and other related decisions, Reuters reports.

SEE ALSO: Facebook Has Made An Artificial Intelligence Assistant For Minecraft

As an example of problematic and harmful AI, the researchers cited the company HireVue, which sells systems for remote video interviews to employers like Hilton and Unilever. HireVue's AI-based software analyzes facial movements, tone of voice and speech patterns without disclosing the resulting scores to the candidates who have applied for the job. The nonprofit Electronic Privacy Information Center has also filed a complaint about HireVue with the U.S. Federal Trade Commission (FTC).

AI Now, a New York-based research institute, released its fourth annual report on the effects of artificial intelligence tools, noting that job screening is one of the areas where such software is used without accountability and tends to favor privileged groups. The institute also said that acting against software-driven affect recognition should be a priority, since the science does not justify its use and the technology has not yet been widely adopted. AI Now has also criticized Amazon for its Rekognition software.

HireVue said that it wasn't aware of the AI Now report and didn't respond to questions about the criticism or the complaint. "Many job candidates have benefited from HireVue's technology to help remove the very significant human bias in the existing hiring process," said spokeswoman Kim Paone.

SEE ALSO: Chromes New Feature Uses AI To Describe Images For Blind And Low-Vision Users

Several other AI apps have come under the scanner over concerns about harmful uses of Artificial Intelligence. For instance, Predictim, an AI-based babysitting app, analyzes the risk level attached to a babysitter: it gives you a risk score as well as detailed information on the babysitter by scanning their social media accounts. Another AI app that drew flak recently was ImageNet Roulette, an online tool that used racist and offensive labels to describe and classify humans.
