{"id":211877,"date":"2017-08-15T12:17:11","date_gmt":"2017-08-15T16:17:11","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/what-an-artificial-intelligence-researcher-fears-about-ai-iflscience\/"},"modified":"2017-08-15T12:17:11","modified_gmt":"2017-08-15T16:17:11","slug":"what-an-artificial-intelligence-researcher-fears-about-ai-iflscience","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/what-an-artificial-intelligence-researcher-fears-about-ai-iflscience\/","title":{"rendered":"What An Artificial Intelligence Researcher Fears About AI &#8211; IFLScience"},"content":{"rendered":"<p><p>    As an artificial intelligence researcher, I often come across    the idea that many     people are afraid of what AI might bring. Its perhaps    unsurprising, given both history and the entertainment    industry, that     we might be afraid of a cybernetic takeover that forces us    to live locked away, Matrix-like, as     some sort of human battery.  <\/p>\n<p>    And yet it is hard for me to look up from the     evolutionary computer models I use to develop AI, to think    about how the innocent virtual creatures on my screen might    become the monsters of the future. Might I become the    destroyer of worlds, as Oppenheimer lamented after    spearheading the construction of the first nuclear bomb?  <\/p>\n<p>    I would take the fame, I suppose, but perhaps the critics are    right. Maybe I shouldnt avoid asking: As an AI expert, what do    I fear about artificial intelligence?  <\/p>\n<p>    Fear of the unforeseen  <\/p>\n<p>    The HAL 9000 computer, dreamed up byscience    fiction author Arthur C. Clarkeand brought to life    bymovie    director Stanley Kubrickin 2001: A Space Odyssey,    is a good example of a system that fails because of unintended    consequences. 
In many complex systems, such as the RMS Titanic, NASA's space shuttle and the Chernobyl nuclear power plant, engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.<\/p>\n<\/p>\n<p>That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia), a set of relatively small failures combined to create a catastrophe.<\/p>\n<p>I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.<\/p>\n<p>Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.<\/p>\n<p>But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.<\/p>\n<p>Fear of misuse<\/p>\n<p>I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. 
The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.<\/p>\n<p>Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.<\/p>\n<p>Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.<\/p>\n<p>Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution, and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.<\/p>\n<p>While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.  
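The evolutionary loop described above (evaluate each creature's performance, select the best to reproduce, repeat over generations) can be sketched as a toy program. This is an illustrative sketch only, not the author's actual neuroevolution system; the genome representation, fitness task and every parameter here are invented for illustration:

```python
import random

GENOME_LEN = 8     # illustrative: a "creature" is just a list of weights
POP_SIZE = 20
GENERATIONS = 30

def fitness(genome):
    # Toy evaluation task: how close the weights are to a fixed target.
    # Higher (less negative) is better.
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, rate=0.1):
    # Offspring are noisy copies of a parent.
    return [g + random.gauss(0, rate) for g in genome]

def evolve():
    # Random initial population.
    population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Evaluate performance; the best performers are selected to reproduce.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 4]
        # Next generation: parents survive, the rest are mutated offspring.
        offspring = [mutate(random.choice(parents))
                     for _ in range(POP_SIZE - len(parents))]
        population = parents + offspring
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(f"best fitness: {fitness(best):.3f}")
```

Real neuroevolution systems evolve neural-network weights (and sometimes topologies) against simulated tasks rather than a fixed target vector, but the evaluate-select-reproduce structure is the same.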
<\/p>\n<p>Read more: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/www.iflscience.com\/technology\/what-an-artificial-intelligence-researcher-fears-about-ai\/\" title=\"What An Artificial Intelligence Researcher Fears About AI - IFLScience\">What An Artificial Intelligence Researcher Fears About AI - IFLScience<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring.  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/what-an-artificial-intelligence-researcher-fears-about-ai-iflscience\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":7,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187742],"tags":[],"class_list":["post-211877","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/211877"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=211877"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/211877\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism
-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=211877"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=211877"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=211877"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}