Tech Expert Warns That AI Could Become A Fascist’s Dream

Posted: March 17, 2017 at 6:40 am

AI: Ripe For Abuse

In her March 12 talk at the 2017 SXSW Conference, Kate Crawford of Microsoft Research warned that artificial intelligence is ripe for abuse. Crawford, who researches the social impact of large-scale data systems and machine learning, described how encoded biases in AI systems could be abused to target certain populations and centralize power in the hands of authoritarian regimes.

"Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism," she said in her SXSW session, "Dark Days: AI and the Rise of Fascism."

Crawford believes the issue is that AI is often invisibly coded with human biases, and that its capabilities map onto the hallmarks of fascist movements: demonizing outsiders, tracking populations, centralizing power, and claiming neutrality and authority without accountability. In the wrong hands, AI can be a potent tool for achieving exactly those goals.

As an example of this kind of biased coding, Crawford described research from China's Shanghai Jiao Tong University. The authors claimed to have built a bias-free system, trained on Chinese government ID photos, that could predict criminality from facial features. The research concluded that "criminal" faces were more unusual in appearance than law-abiding ones; the interpretation is that law enforcement is less likely to trust people whose physical appearance deviates from the norm.

Crawford's concerns center on the use of AI as a black box of algorithms that masks discrimination. AI could also be misused to build registries, which could in turn be used to target specific populations. To this end, Crawford cited IBM's Hollerith machine, used by Nazi Germany to track ethnic groups, and the Book of Life's role in South African apartheid.

Back in 2013, researchers could already predict a user's religious and political affiliations from their Facebook likes with more than 80 percent accuracy, and AI has advanced by leaps and bounds since then.

In the U.S., an AI system that could assist in mass deportations has been in the works at Palantir since 2014, and the company's co-founder, Peter Thiel, is an advisor to President Trump. Crawford believes predictive policing has already failed, too: research has shown that it results in unfair targeting of minorities and excessive force against them.

To avoid building biased systems on bad data, we must make AI more accountable and transparent, so that unintended effects can be mapped despite the systems' complexity. "We want to make these systems as ethical as possible and free from unseen biases," Crawford says, and she is also putting in the work: she founded the research community AI Now, which focuses on AI's social impacts, with these goals in mind.
