{"id":193638,"date":"2017-05-18T14:26:03","date_gmt":"2017-05-18T18:26:03","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/pushy-ai-bots-nudge-humans-to-change-behavior-scientific-scientific-american\/"},"modified":"2017-05-18T14:26:03","modified_gmt":"2017-05-18T18:26:03","slug":"pushy-ai-bots-nudge-humans-to-change-behavior-scientific-scientific-american","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/pushy-ai-bots-nudge-humans-to-change-behavior-scientific-scientific-american\/","title":{"rendered":"Pushy AI Bots Nudge Humans to Change Behavior &#8211; Scientific &#8230; &#8211; Scientific American"},"content":{"rendered":"<p>When people work together on a project, they often come to think they've figured out the problems in their own respective spheres. If trouble persists, it's somebody else (engineering, say, or the marketing department) who is screwing up. That local focus means finding the best way forward for the overall project is often a struggle. But what if adding artificial intelligence to the conversation, in the form of a computer program called a bot, could actually make people in groups more productive?<\/p>\n<p>This is the tantalizing implication of a study published Wednesday in Nature. Hirokazu Shirado and Nicholas Christakis, researchers at Yale University's Institute for Network Science, were wondering what would happen if they looked at artificial intelligence (AI) not in the usual way, as a potential replacement for people, but instead as a useful companion and helper, particularly for altering human social behavior in groups.
<\/p>\n<p>First the researchers asked paid volunteers arranged in online networks, each occupying one of 20 connected positions, or nodes, to solve a simple problem: choose one of three colors (green, orange or purple), with the individual, or local, goal of having a different color from immediate neighbors, and the collective goal of ensuring that every node in the network was a different color from all of its neighbors. Subjects' pay improved if they solved the problem quickly. Two thirds of the groups reached a solution in the allotted five minutes, and the average time to a solution was just under four minutes. But a third of the groups were still stymied at the deadline.<\/p>\n<p>The researchers then put a bot (basically a computer program that can execute simple commands) in three of the 20 nodes in each network. When the bots were programmed to act like humans and focused logically on resolving conflicts with their immediate neighbors, they didn't make much difference. But when the researchers gave the bots just enough AI to behave in a slightly noisy fashion, randomly choosing a color regardless of neighboring choices, the groups they were in solved the problem 85 percent of the time, and in 1.7 minutes on average, 55.6 percent faster than humans alone.<\/p>\n<p>Being just noisy enough (making random color choices about 10 percent of the time) made all the difference, the study suggests. When a bot got much noisier than that, the benefit soon vanished. A bot's influence also varied depending on whether it was positioned at the center of a network with lots of neighbors or on the periphery.<\/p>\n<p>So why would making what looks like the wrong choice (in other words, a mistake) improve a group's performance? The immediate result, predictably, was short-term conflict, with the bot's neighbors in effect muttering, \"Why are you suddenly disagreeing with me?\"
\"But that conflict served to nudge neighboring humans to change their behavior in ways that appear to have further facilitated a global solution,\" the co-authors wrote. The humans began to play the game differently.<\/p>\n<p>Errors, it seems, do not entirely deserve their bad reputation. \"There are many, many natural processes where noise is paradoxically beneficial,\" Christakis says. \"The best example is mutation. If you had a species in which every individual was perfectly adapted to its environment, then when the environment changed, it would die.\" Instead, random mutations can help a species sidestep extinction.<\/p>\n<p>\"We're beginning to find that error, and noisy individuals that we would previously assume add nothing, actually improve collective decision-making,\" says Iain Couzin, who studies group behavior in humans and other species at the Max Planck Institute for Ornithology and was not involved in the new work. He praises the deliberately simplified model used in the Nature study for enabling the co-authors to study group decision-making in great detail, \"because they have control over the connectivity.\" The resulting ability to minutely track how humans and algorithms collectively make decisions, Couzin says, \"is really going to be the future of quantitative social science.\"<\/p>\n<p>But how realistic is it to think human groups will want to collaborate with algorithms or bots, especially slightly noisy ones, in making decisions? Shirado and Christakis informed some of their test groups that they would be partnering with bots. Perhaps surprisingly, it made no difference. The attitude was, \"I don't care that you're a bot if you're helping me do my job,\" Christakis says. Many people are already accustomed to talking with a computer when they call an airline or a bank, he adds, and the machine often does a pretty good job.
Such collaborations are almost certain to become more common amid the increasing integration of the internet with physical devices, from automobiles to coffee makers.<\/p>\n<p>Real-world, bot-assisted company meetings might not be too far behind. Business conferences already tout blended digital and in-person events, featuring what one conference planner describes as integrated online and offline catalysts that use virtual reality, augmented reality and artificial intelligence. Shirado and Christakis suggest slightly noisy bots are also likely to turn up in crowdsourcing applications; for instance, to speed up citizen science assessment of archaeological or astronomical images. They say such bots could also be useful in social media, to discourage racist remarks, for example.<\/p>\n<p>But last year when Microsoft introduced a Twitter bot with simple AI, other users quickly turned it into an epithet-spouting bigot. And the opposite concern is that mixing humans and machines to improve group decision-making could enable businesses (or bots) to manipulate people. \"I've thought a lot about this,\" Christakis says. \"You can invent a gun to hunt for food or to kill people. You can develop nuclear energy to generate electric power or make the atomic bomb. All scientific advances have this Janus-like potential for evil or good.\"<\/p>\n<p>\"The important thing is to understand the behavior involved, so we can use it to good ends and also be aware of the potential for manipulation,\" Couzin says. Hopefully this new research will encourage other researchers to pick up on this idea and apply it to their own scenarios. \"I don't think it can be just thrown out there and used willy-nilly.\"
<\/p>\n<p>Original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/www.scientificamerican.com\/article\/pushy-ai-bots-nudge-humans-to-change-behavior\/\" title=\"Pushy AI Bots Nudge Humans to Change Behavior - Scientific ... - Scientific American\">Pushy AI Bots Nudge Humans to Change Behavior - Scientific ... - Scientific American<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>When people work together on a project, they often come to think they've figured out the problems in their own respective spheres. If trouble persists, it's somebody else (engineering, say, or the marketing department) who is screwing up. <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/ai\/pushy-ai-bots-nudge-humans-to-change-behavior-scientific-scientific-american\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":8,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187743],"tags":[],"class_list":["post-193638","post","type-post","status-publish","format-standard","hentry","category-ai"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/193638"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=193638"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp
\/v2\/posts\/193638\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=193638"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=193638"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=193638"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}