{"id":226510,"date":"2017-07-08T18:43:59","date_gmt":"2017-07-08T22:43:59","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/why-artificial-intelligence-is-far-too-human-the-boston-globe.php"},"modified":"2017-07-08T18:43:59","modified_gmt":"2017-07-08T22:43:59","slug":"why-artificial-intelligence-is-far-too-human-the-boston-globe","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/why-artificial-intelligence-is-far-too-human-the-boston-globe.php","title":{"rendered":"Why artificial intelligence is far too human &#8211; The Boston Globe"},"content":{"rendered":"<p><p>    LUCY NALAND FOR THE BOSTON GLOBE  <\/p>\n<p>    Have you ever wondered how the Waze app knows shortcuts in your    neighborhood better than you? Its because Waze acts like a    superhuman air traffic controller  it measures distance and    traffic patterns, it listens to feedback from drivers, and it    compiles massive data set to get you to your location as    quickly as possible.  <\/p>\n<p>    Even as we grow more reliant on these kinds of innovations, we    still want assurances that were in charge, because we still    believe our humanity elevates us above computers. Movies such    as 2001: A Space Odyssey and the Terminator franchise teach    us to fear computers programmed without any understanding of    humanity; when a human sobs, Arnold Schwarzeneggers robotic    character asks, Whats wrong with your eyes? They always end    with the machines turning on their makers.  <\/p>\n<p>    Advertisement  <\/p>\n<p>    What most people dont know is that artificial intelligence    ethicists worry the opposite is happening: We are putting too    much of ourselves, not too little, into the decision-making    machines of our future.  <\/p>\n<p>    God created humans in his own image, if you believe the    scriptures. Now humans are hard at work scripting artificial    intelligence in much the same way  in their own image. Indeed,    todays AI can be just as biased and imperfect as the humans    who engineer it. Perhaps even more so.  <\/p>\n<p>        Get This Week in        Opinion in your inbox:      <\/p>\n<p>        Globe Opinion's must-reads, delivered to you every Sunday.      <\/p>\n<p>    We already assign responsibility to artificial intelligence    programs more widely than is commonly understood. People are    diagnosed with diseases, kept in prison, hired for jobs,    extended housing loans, and placed on terrorist watch lists, in    part or in full, as a result of, AI programs weve empowered to decide for us. Sure, humans might have    the final word. But computers can control how the evidence is    weighed.  <\/p>\n<p>        And and no one has asked you what you want.      <\/p>\n<p>    That was by design. Automation was done in part to remove human    bias from the equation. So why does a computer algorithm    reviewing bank loans exhibit racial prejudice against    applicants?  <\/p>\n<p>    It turns out that algorithms, which are the building blocks of    AI  acquire bias the same way that humans do  through    instruction. In other words, theyve got to be taught.  <\/p>\n<p>    Advertisement       <\/p>\n<p>    Computer models can learn by analyzing data sets for    relationships. 
Teaching computers to think keeps getting easier. But there's a serious miseducation problem as well. While humans can be taught to differentiate between implicit and explicit bias, and to recognize both in themselves, a machine simply follows a series of if-then statements. When those instructions reflect the biases and dubious assumptions of their creators, a computer will execute them faithfully while still looking superficially neutral. "What we have to stop doing is assuming things are objective and start assuming things are biased. Because that's what our actual evidence has been so far," says Cathy O'Neil, data scientist and author of the recent book "Weapons of Math Destruction."

As with humans, bias starts with the building blocks of socialization: language. The magazine Science recently reported on a study showing that implicit associations, including prejudices, are communicated through our language. "Language necessarily contains human biases, and the paradigm of training machine learning on language corpora means that AI will inevitably imbibe these biases as well," writes Arvind Narayanan, co-author of the study.

The scientists found that words like "flower" are more closely associated with pleasantness than "insect." Female words were more closely associated with the home and the arts than with career, math, and science. Likewise, African-American names were more frequently associated with unpleasant terms than names more common among white people were.
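The shape of that measurement can be sketched briefly: the study's test asks whether a target word's vector sits closer to a set of "pleasant" attribute words than to a set of "unpleasant" ones. The tiny hand-made vectors below are placeholder assumptions chosen just to make the arithmetic concrete; the real study ran this over embeddings trained on large web corpora.

```python
# Sketch of a word-embedding association score: mean similarity to
# pleasant attribute words minus mean similarity to unpleasant ones.
import numpy as np

# Hypothetical 3-d embeddings, invented for illustration only.
vec = {
    "flower": np.array([0.9, 0.1, 0.0]),
    "insect": np.array([0.1, 0.9, 0.0]),
    "love":   np.array([0.8, 0.2, 0.1]),  # pleasant attribute word
    "peace":  np.array([0.7, 0.1, 0.2]),  # pleasant attribute word
    "filth":  np.array([0.2, 0.8, 0.1]),  # unpleasant attribute word
    "rotten": np.array([0.1, 0.7, 0.3]),  # unpleasant attribute word
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, pleasant, unpleasant):
    """Positive result: the word leans 'pleasant'; negative: 'unpleasant'."""
    p = np.mean([cos(vec[word], vec[a]) for a in pleasant])
    u = np.mean([cos(vec[word], vec[b]) for b in unpleasant])
    return p - u

pleasant, unpleasant = ["love", "peace"], ["filth", "rotten"]
print(association("flower", pleasant, unpleasant))  # > 0, leans pleasant
print(association("insect", pleasant, unpleasant))  # < 0, leans unpleasant
```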
This becomes an issue when job-recruiting programs trained on language sets like this are used to select resumes for interviews. If the program associates African-American names with unpleasant characteristics, its algorithmic training will make it more likely to select European-named candidates. Likewise, if the job-recruiting AI is told to search for strong leaders, it will be less likely to select women, because their names are associated with homemaking and mothering.

The scientists took their findings a step further and found a 90 percent correlation between how feminine or masculine a job title ranked in their word-embedding research and the actual number of men versus women employed in 50 different professions, according to Department of Labor statistics. The biases expressed in language directly relate to the roles we play in life.

"AI is just an extension of our culture," says co-author Joanna Bryson, a computer scientist at the University of Bath in the United Kingdom and Princeton University. "It's not that robots are evil. It's that the robots are just us."

Even AI giants like Google can't escape the impact of bias. In 2015, the company's facial recognition software tagged dark-skinned people as gorillas. Executives at FaceApp, a photo-editing program, recently apologized for building an algorithm that whitened users' skin in their pictures. The company had dubbed it the "hotness filter."

In these cases, the error grew from data sets that didn't have enough dark-skinned people, which limited the machine's ability to learn variation within darker skin tones. Typically, a programmer instructs a machine with a series of commands, and the computer follows along. But if the programmer tests the design only on his peer group, coworkers, and family, he has limited what the machine can learn and imbued it with whichever biases shape his own life.

Photo apps are one thing, but when their foundational algorithms creep into other areas of human interaction, the impacts can be as profound as they are lasting.

The faces of one in two adult Americans have been processed through facial-recognition software. Law enforcement agencies across the country are using this gathered data with little oversight. Commercial facial-recognition algorithms have generally done a better job of telling white men apart than they do with women and people of other races, and law enforcement agencies offer few details indicating that their systems work substantially better. Our justice system has not decided whether these sweeping programs constitute a search, which would restrict them under the Fourth Amendment. Law enforcement may end up making life-altering decisions based on biased investigatory tools with minimal safeguards.

Meanwhile, judges in almost every state are using algorithms to assist in decisions about bail, probation, sentencing, and parole. Massachusetts was sued several years ago because an algorithm it uses to predict recidivism among sex offenders didn't consider a convict's gender. Since women are less likely to reoffend, an algorithm that did not consider gender likely overestimated recidivism by female sex offenders. The intent of the scores was to replace human bias and increase efficiency in an overburdened judicial system. But, as investigative journalist Julia Angwin reported in ProPublica, these algorithms are using biased questionnaires to come to their determinations and yielding flawed results.

A ProPublica study of the recidivism algorithm used in Fort Lauderdale found that 23.5 percent of white men who did not re-offend had been labeled as being at elevated risk of getting into trouble again, while 44.9 percent of black men who did not re-offend had been labeled higher risk, showing how these scores are inaccurate and favor white men.
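Those two percentages are per-group false positive rates: among defendants who did not go on to re-offend, the share who had nonetheless been labeled higher risk. A minimal sketch of that calculation, using hypothetical records sized to echo the article's numbers:

```python
# Group-wise false positive rate: of the people who did NOT re-offend,
# what fraction had been flagged as higher risk?
def false_positive_rate(records):
    """records: list of (labeled_high_risk, reoffended) boolean pairs."""
    did_not_reoffend = [r for r in records if not r[1]]
    flagged = [r for r in did_not_reoffend if r[0]]
    return len(flagged) / len(did_not_reoffend)

# Invented records shaped to match the disparity the article reports.
white = [(True, False)] * 235 + [(False, False)] * 765 + [(True, True)] * 400
black = [(True, False)] * 449 + [(False, False)] * 551 + [(True, True)] * 400

print(f"white FPR: {false_positive_rate(white):.1%}")  # 23.5%
print(f"black FPR: {false_positive_rate(black):.1%}")  # 44.9%
```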
While the questionnaires don't ask specifically about skin color, data scientists say they back into race by asking questions like: When was your first encounter with police?

The assumption is that someone who comes into contact with police as a young teenager is more prone to criminal activity than someone who doesn't. But this hypothesis doesn't take into consideration that policing practices vary, and therefore so does the police's interaction with youth. If someone lives in an area where the police routinely stop and frisk people, he will be statistically more likely to have had an early encounter with the police. Stop-and-frisk is more common in urban areas, where African-Americans are more likely to live than whites. This measure doesn't calculate guilt or criminal tendencies, but it becomes a penalty when AI calculates risk. In this example, the AI is not just computing for the individual's behavior; it is also considering the police's behavior.
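A toy simulation shows why such a question leaks group information even though it never mentions race. Every number below is invented for illustration; the only premise carried over from the article is that one group's neighborhoods are policed more heavily, so first contact with police comes earlier there regardless of individual behavior.

```python
# Hypothetical proxy effect: "age at first police contact" looks neutral,
# but heavier policing in one area shifts the feature, and a risk score
# built on it then penalizes that whole group.
import random

random.seed(0)

def first_contact_age(heavily_policed: bool) -> float:
    # Identical underlying behavior in both groups; only exposure differs.
    return random.gauss(22, 4) - (6 if heavily_policed else 0)

def risk_score(age: float) -> float:
    # Common questionnaire logic: earlier contact -> higher "risk".
    return max(0.0, (25 - age) / 10)

group_a = [risk_score(first_contact_age(True)) for _ in range(10_000)]
group_b = [risk_score(first_contact_age(False)) for _ in range(10_000)]

print(f"mean risk, heavily policed area: {sum(group_a) / len(group_a):.2f}")
print(f"mean risk, lightly policed area: {sum(group_b) / len(group_b):.2f}")
```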
"I've talked to prosecutors who say, 'Well, it's actually really handy to have these risk scores, because you don't have to take responsibility if someone gets out on bail and they shoot someone. It's the machine, right?'" says Joi Ito, director of the Media Lab at MIT.

It's even easier to blame a computer when the guts of the machine are trade secrets. Building algorithms is big business, and suppliers guard their intellectual property tightly. Even when these algorithms are used in the public sphere, their inner workings are seldom open for inspection. "Unlike humans, these machine algorithms are much harder to interrogate, because you don't actually know what they know," Ito says.

Whether such a process is fair is difficult to discern if a defendant doesn't know what went into the algorithm. With little transparency, there is limited ability to appeal the computer's conclusions. "The worst thing is the algorithms where we don't really even know what they've done, and they're just selling it to police and they're claiming it's effective," says Bryson, co-author of the word-embedding study.

Most mathematicians understand that the algorithms should improve over time. As they're updated, they learn more, if they're presented with the right data. In the end, the relatively few people who manage these algorithms have an enormous impact on the future. They control the decisions about who gets a loan, who gets a job, and, in turn, who can move up in society. And yet from the outside, the formulas that determine the trajectories of so many lives remain as inscrutable as the will of the divine.

Link: Why artificial intelligence is far too human - The Boston Globe
https://www.bostonglobe.com/ideas/2017/07/07/why-artificial-intelligence-far-too-human/jvG77QR5xPbpwBL2ApAFAN/story.html