Why artificial intelligence is far too human – The Boston Globe

Illustration: Lucy Naland for The Boston Globe

Have you ever wondered how the Waze app knows shortcuts in your neighborhood better than you do? It's because Waze acts like a superhuman air traffic controller: it measures distance and traffic patterns, it listens to feedback from drivers, and it compiles massive data sets to get you to your location as quickly as possible.

Even as we grow more reliant on these kinds of innovations, we still want assurances that we're in charge, because we still believe our humanity elevates us above computers. Movies such as "2001: A Space Odyssey" and the Terminator franchise teach us to fear computers programmed without any understanding of humanity; when a human sobs, Arnold Schwarzenegger's robotic character asks, "What's wrong with your eyes?" They always end with the machines turning on their makers.

What most people don't know is that artificial intelligence ethicists worry the opposite is happening: We are putting too much of ourselves, not too little, into the decision-making machines of our future.

God created humans in his own image, if you believe the scriptures. Now humans are hard at work scripting artificial intelligence in much the same way: in their own image. Indeed, today's AI can be just as biased and imperfect as the humans who engineer it. Perhaps even more so.

We already assign responsibility to artificial intelligence programs more widely than is commonly understood. People are diagnosed with diseases, kept in prison, hired for jobs, extended housing loans, and placed on terrorist watch lists, in part or in full, as a result of AI programs we've empowered to decide for us. Sure, humans might have the final word. But computers can control how the evidence is weighed.

And no one has asked you what you want.

That was by design. Automation was embraced in part to remove human bias from the equation. So why does a computer algorithm reviewing bank loans exhibit racial prejudice against applicants?

It turns out that algorithms, the building blocks of AI, acquire bias the same way that humans do: through instruction. In other words, they've got to be taught.

Computer models can learn by analyzing data sets for relationships. For example, if you want to train a computer to understand how words relate to each other, you can feed it the entire English-language Web and let the machine assign relational values to words based on how often they appear next to other words; the closer together, the greater the value. Through this pattern recognition, the computer begins to paint a picture of what words mean.
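A minimal Python sketch of that idea, using a tiny made-up corpus in place of the Web (the sentences, window size, and scoring rule are illustrative assumptions, not any production system's method):

    from collections import Counter

    # Toy corpus standing in for "the entire English-language Web."
    corpus = [
        "the nurse helped the patient",
        "the engineer fixed the machine",
        "the nurse and the doctor talked",
        "the engineer and the programmer talked",
    ]

    WINDOW = 3  # words within this distance count as appearing "next to" each other
    pair_counts = Counter()

    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            for j in range(i + 1, min(i + 1 + WINDOW, len(words))):
                pair_counts[frozenset((w, words[j]))] += 1

    # The more often two words co-occur, the higher their relational value.
    def relatedness(a, b):
        return pair_counts[frozenset((a, b))]

    print(relatedness("nurse", "doctor"))   # greater than 0: the words share a window
    print(relatedness("nurse", "machine"))  # 0: they never appear near each other

The computer has no idea what a nurse or a doctor is; it only sees that the words keep turning up together, and it treats that proximity as meaning.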

Teaching computers to think keeps getting easier. But there's a serious miseducation problem as well. While humans can be taught to differentiate between implicit and explicit bias, and recognize both in themselves, a machine simply follows a series of if-then statements. When those instructions reflect the biases and dubious assumptions of their creators, a computer will execute them faithfully while still looking superficially neutral. "What we have to stop doing is assuming things are objective and start assuming things are biased. Because that's what our actual evidence has been so far," says Cathy O'Neil, data scientist and author of the recent book "Weapons of Math Destruction."
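For instance, an if-then rule can read as perfectly neutral while quietly encoding its author's assumptions through a proxy variable. A hypothetical loan-screening sketch (the income cutoff and zip codes are invented for illustration, not drawn from any real lender):

    # A rule that never mentions race can still track it through geography.
    FLAGGED_ZIP_CODES = {"00001", "00002"}  # placeholder values for historically redlined areas

    def approve_loan(income, zip_code):
        if income < 40_000:
            return False
        if zip_code in FLAGGED_ZIP_CODES:  # looks like a neutral geography check...
            return False                   # ...but penalizes whoever happens to live there
        return True

    print(approve_loan(income=55_000, zip_code="00003"))  # True
    print(approve_loan(income=55_000, zip_code="00001"))  # False: same income, different neighborhood

The machine executes the rule faithfully either way; the bias lives in the choice of inputs, not in any step of the computation.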

As with humans, bias starts with the building blocks of socialization: language. The magazine Science recently reported on a study showing that implicit associations, including prejudices, are communicated through our language. "Language necessarily contains human biases, and the paradigm of training machine learning on language corpora means that AI will inevitably imbibe these biases as well," writes Arvind Narayanan, co-author of the study.

The scientists found that words like "flower" are more closely associated with pleasantness than words like "insect." Female words were more closely associated with the home and arts than with career, math, and science. Likewise, African-American names were more frequently associated with unpleasant terms than names more common among white people were.
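At bottom, these associations are comparisons of distance in the learned vector space. A minimal Python sketch of the measurement, using made-up two-dimensional vectors in place of real trained embeddings (the numbers and word list are assumptions for illustration):

    import numpy as np

    # Toy "embeddings"; real ones have hundreds of dimensions learned from text.
    vectors = {
        "flower":     np.array([0.9, 0.1]),
        "insect":     np.array([0.2, 0.8]),
        "pleasant":   np.array([1.0, 0.0]),
        "unpleasant": np.array([0.0, 1.0]),
    }

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def association(word):
        # Positive: the word sits closer to "pleasant" than to "unpleasant."
        return cosine(vectors[word], vectors["pleasant"]) - cosine(vectors[word], vectors["unpleasant"])

    print(association("flower"))  # positive
    print(association("insect"))  # negative

Swap in personal names or gendered words for "flower" and "insect," and the same arithmetic reproduces the prejudices the study documented.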

This becomes an issue when job-recruiting programs trained on language sets like this are used to select resumes for interviews. If the program associates African-American names with unpleasant characteristics, it will be more likely to select candidates with European names. Likewise, if the job-recruiting AI is told to search for strong leaders, it will be less likely to select women, because their names are associated with homemaking and mothering.

The scientists took their findings a step further and found a 90 percent correlation between how feminine or masculine a job title ranked in their word-embedding research and the actual number of men versus women employed in 50 different professions, according to Department of Labor statistics. The biases expressed in language directly relate to the roles we play in life.
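That 90 percent figure is an ordinary correlation between two lists of numbers, one derived from the embeddings and one from government employment data. A sketch of the calculation with invented values (neither column below is the study's or the Labor Department's actual data):

    import numpy as np

    # Hypothetical scores for five professions.
    embedding_femininity = np.array([0.82, 0.64, 0.31, 0.12, -0.05])  # from word vectors
    share_of_women       = np.array([0.90, 0.71, 0.40, 0.18, 0.08])   # from employment statistics

    r = np.corrcoef(embedding_femininity, share_of_women)[0, 1]
    print(f"correlation: {r:.2f}")  # near 1.0 for these made-up numbers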

"AI is just an extension of our culture," says co-author Joanna Bryson, a computer scientist at the University of Bath in the United Kingdom and Princeton University. "It's not that robots are evil. It's that the robots are just us."

Even AI giants like Google can't escape the impact of bias. In 2015, the company's facial recognition software tagged dark-skinned people as gorillas. Executives at FaceApp, a photo-editing program, recently apologized for building an algorithm that whitened users' skin in their pictures. The company had dubbed it the "hotness filter."

In these cases, the error grew from data sets that didn't have enough dark-skinned people, which limited the machine's ability to learn variation within darker skin tones. Typically, a programmer instructs a machine with a series of commands, and the computer follows along. But if the programmer tests the design only on his peer group, coworkers, and family, he limits what the machine can learn and imbues it with whichever biases shape his own life.

Photo apps are one thing, but when their foundational algorithms creep into other areas of human interaction, the impacts can be as profound as they are lasting.

The faces of one in two adult Americans have been processed through facial recognition software. Law enforcement agencies across the country are using this gathered data with little oversight. Commercial facial-recognition algorithms have generally been better at telling white men apart than at distinguishing women and people of other races, and law enforcement agencies offer few details indicating that their own systems work substantially better. Our justice system has not decided whether these sweeping programs constitute a search, which would restrict them under the Fourth Amendment. Law enforcement may end up making life-altering decisions based on biased investigatory tools with minimal safeguards.

Meanwhile, judges in almost every state are using algorithms to assist in decisions about bail, probation, sentencing, and parole. Massachusetts was sued several years ago because an algorithm it uses to predict recidivism among sex offenders didn't consider a convict's gender. Since women are less likely to reoffend, an algorithm that did not consider gender likely overestimated recidivism by female sex offenders. The intent of the scores was to replace human bias and increase efficiency in an overburdened judicial system. But, as journalist Julia Angwin reported in ProPublica, these algorithms use biased questionnaires to come to their determinations and yield flawed results.

A ProPublica study of the recidivism algorithm used in Fort Lauderdale found that 23.5 percent of white men were labeled as being at an elevated risk for getting into trouble again but didn't re-offend. Meanwhile, 44.9 percent of black men were labeled higher risk for future offenses but didn't re-offend, showing how these scores are inaccurate and favor white men.
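The comparison behind those percentages is, in effect, a false-positive rate computed separately for each group: among people who did not go on to re-offend, what share had been labeled high risk. A small pandas sketch with invented records (these six rows are not ProPublica's data):

    import pandas as pd

    # Invented records in the shape ProPublica analyzed: a race label, the tool's
    # risk label, and whether the person actually re-offended.
    records = pd.DataFrame({
        "race":       ["white", "white", "black", "black", "black", "white"],
        "high_risk":  [True,    False,   True,    True,    False,   False],
        "reoffended": [False,   False,   False,   True,    False,   False],
    })

    # Among people who did NOT re-offend, what share had been labeled high risk?
    non_reoffenders = records[~records["reoffended"]]
    false_positive_rate = non_reoffenders.groupby("race")["high_risk"].mean()
    print(false_positive_rate)  # higher for "black" than "white" in this toy data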

While the questionnaires don't ask specifically about skin color, data scientists say they "back into" race by asking questions like: "When was your first encounter with police?"

The assumption is that someone who comes into contact with police as a young teenager is more prone to criminal activity than someone who doesn't. But this hypothesis doesn't take into consideration that policing practices vary, and therefore so do the police's interactions with youth. If someone lives in an area where the police routinely stop and frisk people, he will be statistically more likely to have had an early encounter with the police. Stop-and-frisk is more common in urban areas, where African-Americans are more likely than whites to live. This measure doesn't calculate guilt or criminal tendencies, but it becomes a penalty when AI calculates risk. In this example, the AI is not just weighing the individual's behavior; it is also factoring in the police's behavior.
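A toy illustration of that mechanism, with an invented scoring rule (the weights and inputs are assumptions, not the formula of any real risk tool): two people with identical records get different scores purely because one grew up where early police contact is routine.

    def risk_score(age_at_first_police_contact, prior_convictions):
        # Invented weights: earlier police contact and more convictions raise the score.
        return max(0, 25 - age_at_first_police_contact) * 0.4 + prior_convictions * 2.0

    # Same behavior (no convictions), different neighborhoods.
    lightly_policed = risk_score(age_at_first_police_contact=22, prior_convictions=0)
    heavily_policed = risk_score(age_at_first_police_contact=14, prior_convictions=0)

    print(f"{lightly_policed:.1f}")  # 1.2
    print(f"{heavily_policed:.1f}")  # 4.4

The second score is higher not because the person behaved differently, but because the police did.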

"I've talked to prosecutors who say, 'Well, it's actually really handy to have these risk scores because you don't have to take responsibility if someone gets out on bail and they shoot someone. It's the machine, right?'" says Joi Ito, director of the Media Lab at MIT.

It's even easier to blame a computer when the guts of the machine are trade secrets. Building algorithms is big business, and suppliers guard their intellectual property tightly. Even when these algorithms are used in the public sphere, their inner workings are seldom open for inspection. "Unlike humans, these machine algorithms are much harder to interrogate because you don't actually know what they know," Ito says.

Whether such a process is fair is difficult to discern if a defendant doesn't know what went into the algorithm. With little transparency, there is limited ability to appeal the computer's conclusions. "The worst thing is the algorithms where we don't really even know what they've done and they're just selling it to police and they're claiming it's effective," says Bryson, co-author of the word-embedding study.

Most mathematicians understand that the algorithms should improve over time. As they're updated, they learn more if they're presented with the right data. In the end, the relatively few people who manage these algorithms have an enormous impact on the future. They control the decisions about who gets a loan, who gets a job, and, in turn, who can move up in society. And yet from the outside, the formulas that determine the trajectories of so many lives remain as inscrutable as the will of the divine.
