REVEALED: AI is turning RACIST as it learns from humans

In parts of the US, when a suspect is taken in for questioning, they are given a computerised risk assessment that estimates the likelihood of them reoffending.

Judges can then take this data into account when passing sentence.

However, an investigation has revealed that the artificial intelligence behind the software exhibits racist tendencies.

Reporters from ProPublica obtained the risk scores of more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014, and analysed how many of them went on to reoffend over the following two years.

The suspects are asked a total of 137 questions by the AI system, Correctional Offender Management Profiling for Alternative Sanctions (Compas), including "Was one of your parents ever sent to jail or prison?" and "How many of your friends/acquaintances are taking drugs illegally?", with the software generating a risk score at the end.

Overall, black defendants who did not go on to reoffend were almost twice as likely (45 per cent) as white defendants (24 per cent) to have been wrongly labelled higher risk by the AI system.
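
To make the arithmetic behind that finding concrete, here is a minimal Python sketch (not ProPublica's actual code; the records and field names are hypothetical) of how such a misclassification rate can be computed from score data:

```python
# Hypothetical records: the group recorded for each suspect, the risk label
# the software assigned, and whether the person reoffended within two years.
records = [
    {"group": "black", "labelled_high_risk": True,  "reoffended": False},
    {"group": "black", "labelled_high_risk": False, "reoffended": False},
    {"group": "white", "labelled_high_risk": True,  "reoffended": False},
    {"group": "white", "labelled_high_risk": False, "reoffended": True},
    # ... ProPublica analysed more than 7,000 such records ...
]

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were nevertheless labelled high risk."""
    non_reoffenders = [r for r in records if r["group"] == group and not r["reoffended"]]
    mislabelled = [r for r in non_reoffenders if r["labelled_high_risk"]]
    return len(mislabelled) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, format(false_positive_rate(records, group), ".0%"))
```

On the full dataset, that calculation is what produced the figures above: roughly 45 per cent for black defendants against 24 per cent for white defendants.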

In one example outlined by ProPublica, risk scores were provided for a black suspect and a white suspect, both of whom were facing drug possession charges.

The white suspect had a prior offence of attempted burglary, while the black suspect had a prior offence of resisting arrest.

With no indication as to why, the system rated the black suspect as more likely to reoffend, while the white suspect was considered low risk.

But over the next two years, the black suspect stayed clear of illegal activity, while the white suspect was arrested three more times for drug possession.

However, researchers warn that the problem lies not with the machines but with humans: AI systems use machine learning algorithms that pick up human traits, and human biases, from the data they are trained on.

Joanna Bryson, a researcher at the University of Bath, told the Guardian: "People expected AI to be unbiased; that's just wrong. If the underlying data reflects stereotypes, or if you train AI from human culture, you will find these things."
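
Her point can be illustrated with a minimal, entirely hypothetical sketch (the feature names and figures are invented, and this is not the Compas model): a frequency-based "model" fitted to skewed historical labels simply hands that skew back as predictions.

```python
from collections import Counter, defaultdict

# Invented, deliberately skewed training labels: these reflect historical
# human judgement, not ground truth. Anything fitted to them echoes the skew.
training_data = (
    [("neighbourhood_a", "high_risk")] * 70 + [("neighbourhood_a", "low_risk")] * 30
    + [("neighbourhood_b", "high_risk")] * 30 + [("neighbourhood_b", "low_risk")] * 70
)

# "Training": count how often each label was historically attached to each feature.
counts = defaultdict(Counter)
for feature, label in training_data:
    counts[feature][label] += 1

def predict(feature):
    # Predict the most frequent historical label; the bias comes straight back out.
    return counts[feature].most_common(1)[0][0]

print(predict("neighbourhood_a"))  # high_risk, learned from the skewed labels alone
print(predict("neighbourhood_b"))  # low_risk
```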

This is not an isolated incident either.

Microsoft's TayTweets AI chatbot, which was designed to learn from the users it interacted with, was unleashed on Twitter last year.

However, it almost instantly turned to anti-Semitism and racism, tweeting: "Hitler did nothing wrong" and "Hitler was right I hate the Jews".
