{"id":203821,"date":"2017-07-05T23:12:25","date_gmt":"2017-07-06T03:12:25","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-stupidity-learning-to-trust-artificial-intelligence-sometimes-breaking-defense\/"},"modified":"2017-07-05T23:12:25","modified_gmt":"2017-07-06T03:12:25","slug":"artificial-stupidity-learning-to-trust-artificial-intelligence-sometimes-breaking-defense","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/artificial-stupidity-learning-to-trust-artificial-intelligence-sometimes-breaking-defense\/","title":{"rendered":"Artificial Stupidity: Learning To Trust Artificial Intelligence (Sometimes) &#8211; Breaking Defense"},"content":{"rendered":"<p>A young Marine reaches out for a hand-launched drone.<\/p>\n<p>In science fiction and real life alike, there are plenty of horror stories where humans trust artificial intelligence too much. They range from letting the fictional Skynet control our nuclear weapons to letting Patriots shoot down friendly planes or letting Tesla Autopilot crash into a truck. At the same time, though, there’s also a danger of not trusting AI enough.<\/p>\n<p>As conflict on earth, in space, and in cyberspace becomes increasingly fast-paced and complex, the Pentagon’s Third Offset initiative is counting on artificial intelligence to help commanders, combatants, and analysts chart a course through chaos: what we’ve dubbed the War Algorithm (click here for the full series). But if the software itself is too complex, too opaque, or too unpredictable for its users to understand, they’ll just turn it off and do things manually. At least, they’ll try: What worked for Luke Skywalker against the first Death Star probably won’t work in real life.
Humans can’t respond to cyberattacks in microseconds or coordinate defense against a massive missile strike in real time. With Russia and China both investing in AI systems, deactivating our own AI may amount to unilateral disarmament.<\/p>\n<p>Abandoning AI is not an option. Neither is abandoning human input. The challenge is to create an artificial intelligence that can earn the human’s trust, an AI that seems transparent or even human.<\/p>\n<p>Robert Work<\/p>\n<p>Tradeoffs for Trust<\/p>\n<p>“Clausewitz had a term called coup d’oeil,” a great commander’s intuitive grasp of opportunity and danger on the battlefield, said Robert Work, the outgoing Deputy Secretary of Defense and father of the Third Offset, at a Johns Hopkins AI conference in May. “Learning machines are going to give more and more commanders coup d’oeil.”<\/p>\n<p>Conversely, AI can speak the ugly truths that human subordinates may not. “There are not many captains that are going to tell a four-star COCOM (combatant commander) that idea sucks,” Work said, “(but) the machine will say, you are an idiot, there is a 99 percent probability that you are going to get your ass handed to you.”<\/p>\n<p>Before commanders will take an AI’s insights as useful, however, Work emphasized, they need to trust and understand how it works. That requires intensive operational test and evaluation, “where you convince yourself that the machines will do exactly what you expect them to,” reliably and repeatedly, he said. “This goes back to trust.”<\/p>\n<p>Trust is so important, in fact, that two experts we heard from said they were willing to accept some tradeoffs in performance in order to get it: A less advanced and versatile AI, even a less capable one, is better than a brilliant machine you can’t trust.
<\/p>\n<p>Army command post<\/p>\n<p>The intelligence community, for instance, is keenly interested in AI that can help its analysts make sense of mind-numbing masses of data. But the AI has to help the analysts explain how it came to its conclusions, or they can never brief them to their bosses, explained Jason Matheny, director of the Intelligence Advanced Research Projects Agency. IARPA is the intelligence equivalent of DARPA, which is running its own explainable AI project. So, when IARPA held one recent contest for analysis software, Matheny told the AI conference, it barred entry to programs whose reasoning could not be explained in plain English.<\/p>\n<p>“From the start of this program, (there) was a requirement that all the systems be explainable in natural language,” Matheny said. “That ended up consuming about half the effort of the researchers, and they were really irritated … because it meant they couldn’t in most cases use the best deep neural net approaches to solve this problem; they had to use kernel-based methods that were easier to explain.”<\/p>\n<p>Compared to cutting-edge but harder-to-understand software, Matheny said, “we got a 20-30 percent performance loss but these tools were actually adopted. They were used by analysts because they were explainable.”<\/p>\n<p>Transparent, predictable software isn’t only important for analysts: It’s also vital for pilots, said Igor Cherepinsky, director of autonomy programs at Sikorsky. Sikorsky’s goal for its MATRIX automated helicopter is that the AI prove itself as reliable as flight controls for manned aircraft, failing only once in a billion flight hours. “It’s the same probability as the wing falling off,” Cherepinsky told me in an interview.
By contrast, traditional autopilots are permitted much higher rates of failure, on the assumption a competent human pilot will take over if there’s a problem.<\/p>\n<p>Sikorsky’s experimental unmanned UH-60 Black Hawk<\/p>\n<p>To reach that higher standard, and, just as important, to be able to prove they’d reached it, the Sikorsky team ruled out the latest AI techniques, just as IARPA had done, in favor of more old-fashioned deterministic programming. While deep learning AI can surprise its human users with flashes of brilliance, or stupidity, deterministic software always produces the same output from a given input.<\/p>\n<p>“Machine learning cannot be verified and certified,” Cherepinsky said. Some algorithms (in use elsewhere) “we chose not to use even though they work on the surface; they’re not certifiable, verifiable, and testable.”<\/p>\n<p>Sikorsky has used some deep learning algorithms in its flying laboratory, Cherepinsky said, and he’s far from giving up on the technology, but he doesn’t think it’s ready for real-world use: “The current state of the art (is) they’re not explainable yet.”<\/p>\n<p>Robots With A Human Face<\/p>\n<p>Explainable, tested, transparent algorithms are necessary but hardly sufficient for making an artificial intelligence that people will trust. They help address our rational concerns about AI, but if humans were purely rational, we might not need AI in the first place. It’s one thing to build AI that’s trustworthy in general and in the abstract, quite another to get actual individual humans to trust it. The AI needs to communicate effectively with humans, which means it needs to communicate the way humans do, even think the way a human does.
<\/p>\n<p>“You see in artificial intelligence an increasing trend towards lifelike agents and a demand for those agents, like Siri, Cortana, and Alexa, to be more emotionally responsive, to be more nuanced in ways that are human-like,” David Hanson, CEO of Hong Kong-based Hanson Robotics, told the Johns Hopkins conference. When we deal with AI and robots, he said, “intuitively, we think of them as life forms.”<\/p>\n<p>David Hanson with his Einstein robot.<\/p>\n<p>Hanson makes AI toys like a talking Einstein doll and expressive talking heads like Han and Sophia, but he’s looking far beyond such gadgets to the future of ever-more powerful AI. “How can we, if we make them intelligent, make them caring and safe?” he asked. “We need a global initiative to create benevolent super intelligence.”<\/p>\n<p>There’s a danger here, however. It’s called anthropomorphization, and we do it all the time. People chronically attribute human-like thoughts and emotions to our cats, dogs, and other animals, ignoring how they are really very different from us. But at least cats and dogs, and birds, and fish, and scorpions, and worms, are, like us, animals. They think with neurons and neurotransmitters, they breathe air and eat food and drink water, they mate and breed, are born and die. An artificial intelligence has none of these things in common with us, and programming it to imitate humanity doesn’t make it human. The old phrase “putting lipstick on a pig” understates the problem, because a pig is biochemically pretty similar to us. Think instead of putting lipstick on a squid, except a squid is a close cousin to humanity compared to an AI.<\/p>\n<p>With these worries in mind, I sought out Hanson after his panel and asked him about humanizing AI.
There are three reasons, he told me: Humanizing AI makes it more useful, because it can communicate better with its human users; it makes AI smarter, because the human mind is the only template of intelligence we have; and it makes AI safer, because we can teach our machines not only to act more human but to be more human. “These three things combined give us better hope of developing truly intelligent adaptive machines sooner and making sure that they’re safe when they do happen,” he said.<\/p>\n<p>This squid’s thought process is less alien to you than an artificial intelligence’s would be.<\/p>\n<p>Usefulness: On the most basic level, Hanson said, using robots and intelligent virtual agents with a human-like form makes them appealing. “It creates a lot of uses for communicating and for providing value.”<\/p>\n<p>Intelligence: Consider convergent evolution in nature, Hanson told me. Bats, birds, and bugs all have wings, although they grow and work differently. Intelligence may evolve the same way, with AI starting in a very different place from humans but ending up awfully similar.<\/p>\n<p>“We may converge on human-level intelligence in machines by modeling the human organism,” Hanson said. “AI originally was an effort to match the capacities of the human mind in the broadest sense, (with) creativity, consciousness, and self-determination, and we found that that was really hard, (but still) there’s no better example of mind that we know of than the human mind.”<\/p>\n<p>Safety: Beyond convergent evolution is co-evolution, where two species shape each other over time, as humans have bred wolves into dogs and irascible aurochs into placid cows.
As people and AI interact, Hanson said, people will naturally select for features that are desirable and can be understood by humans, which then puts a pressure on the machines to “get smarter, more capable, more understanding, more trustworthy.”<\/p>\n<p>Sorry, real robots won’t be this cute and friendly.<\/p>\n<p>By contrast, Hanson warned, if we fear AI and keep it at arm’s length, it may develop unexpectedly deep in our networks, in some internet backbone or industrial control system where it has not co-evolved in constant contact with humanity. Putting them out of sight, out of mind, “means we’re developing aliens,” he said, and “if they do become truly alive, and intelligent, creative, conscious, adaptive, but they’re alien, they don’t care about us.”<\/p>\n<p>“You may contain your machine so that it’s safe, but what about your neighbor’s machine? What about the neighbor nation’s? What about some hackers who are off the grid?” Hanson told me. “I would say it will happen, we don’t know when. My feeling is that if we can get there first with a machine that we can understand, that proves itself trustworthy, that forms a positive relationship with us, that would be better.”
<\/p>\n<p>    Click to read the previous stories in the series:  <\/p>\n<p>        Artificial Stupidity: When Artificial Intelligence + Human =    Disaster  <\/p>\n<p>        Artificial Stupidity: Fumbling The Handoff From AI To Human    Control  <\/p>\n<p><!-- Auto Generated --><\/p>\n<p>Here is the original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/breakingdefense.com\/2017\/07\/artificial-stupidity-learning-to-trust-the-machine\/\" title=\"Artificial Stupidity: Learning To Trust Artificial Intelligence (Sometimes) - Breaking Defense\">Artificial Stupidity: Learning To Trust Artificial Intelligence (Sometimes) - Breaking Defense<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> A young Marine reaches out for a hand-launched drone.  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/artificial-stupidity-learning-to-trust-artificial-intelligence-sometimes-breaking-defense\/\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187742],"tags":[],"class_list":["post-203821","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/203821"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=203821"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/203821\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=203821"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=203821"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=203821"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}