{"id":212065,"date":"2017-08-16T18:16:35","date_gmt":"2017-08-16T22:16:35","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/elon-musk-is-wrong-again-ai-isnt-more-dangerous-than-north-korea-fortune\/"},"modified":"2017-08-16T18:16:35","modified_gmt":"2017-08-16T22:16:35","slug":"elon-musk-is-wrong-again-ai-isnt-more-dangerous-than-north-korea-fortune","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/elon-musk-is-wrong-again-ai-isnt-more-dangerous-than-north-korea-fortune\/","title":{"rendered":"Elon Musk Is Wrong Again. AI Isn&#8217;t More Dangerous Than North Korea. &#8211; Fortune"},"content":{"rendered":"<p>Elon Musk's recent remark on Twitter that artificial intelligence (AI) is more dangerous than North Korea is based on his bedrock belief in the power of thought. But this philosophy has a dark side.<\/p>\n<p>If you believe that a good idea can take over the world and if you conjecture that computers can or will have ideas, then you have to consider the possibility that computers may one day take over the world. This logic has taken root in Musk's mind and, as someone who turns ideas into action for a living, he wants to make sure you get on board too. But he's wrong, and you shouldn't believe his apocalyptic warnings.<\/p>\n<p>Here's the story Musk wants you to know but hasn't been able to boil down to a single tweet. By dint of clever ideas, hard work, and significant investment, computers are getting faster and more capable. In the last few years, some famously hard computational problems have been mastered, including identifying objects in images, recognizing the words that people say, and outsmarting human champions in games like Go.
If machine learning researchers can create programs that can replace captioners, transcriptionists, and board game masters, maybe it won't be long before they can replace themselves. And, once computer programs are in the business of redesigning themselves, each time they make themselves better, they make themselves better at making themselves better.<\/p>\n<p>The resulting intelligence explosion would leave computers in a position of power, where they, not humans, control our future. Their objectives, even if benign when the machines were young, could be threatening to our very existence in the hands of an intellect dwarfing our own. That's why Musk thinks this issue is so much bigger than war with North Korea. The loss of a handful of major cities wouldn't be permanent, whereas human extinction by a system seeking to improve its own capabilities by turning us into computational components in its mega-brain; that would be forever.<\/p>\n<p>Musk's comparison, however, grossly overestimates the likelihood of an intelligence explosion. His primary mistake is in extrapolating from recent successes of machine learning the eventual development of general intelligence. But machine learning is not as dangerous as it might look on the surface.<\/p>\n<p>For example, you may see a machine perform a task that appears to be superhuman and immediately be impressed. When people learn to understand speech or play games, they do so in the context of the full range of human experiences. Thus when you see something that can respond to questions or beat you soundly in a board game, it is not unreasonable to infer that it also possesses a range of other human capacities. But that's not how these systems work.
<\/p>\n<p>In a nutshell, here's the methodology that has been successful for building advanced systems of late: First, people decide what problem they want to solve and express it in the form of a piece of code called an objective function, a way for the system to score itself on the task. They then assemble perhaps millions of examples of precisely the kind of behavior they want their system to exhibit. After that, they design the structure of their AI system and tune it to maximize the objective function through a combination of human insight and powerful optimization algorithms.<\/p>\n<p>At the end of this process, they get a system that, often, can exhibit superhuman performance. But the performance is on the particular task that was selected at the beginning. If you want the system to do something else, you probably will need to start the whole process over from scratch. Moreover, the game of life does not have a clear objective function; current methodologies are not suited to creating a broadly intelligent machine.<\/p>\n<p>Someday we may inhabit a world with intelligent machines. But we will develop together and will have a billion decisions to make that shape how that world develops. We shouldn't let our fears prevent us from moving forward technologically.<\/p>\n<p>Michael L. Littman is a professor of computer science at Brown University and co-director of Brown's Humanity Centered Robotics Initiative.<\/p>\n<p>Original post: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/fortune.com\/2017\/08\/15\/elon-musk-ai-artificial-intelligence-threat-twitter-north-korea\/\" title=\"Elon Musk Is Wrong Again. AI Isn't More Dangerous Than North Korea. - Fortune\">Elon Musk Is Wrong Again. AI Isn't More Dangerous Than North Korea.
- Fortune<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Elon Musk's recent remark on Twitter that artificial intelligence (AI) is more dangerous than North Korea is based on his bedrock belief in the power of thought. But this philosophy has a dark side. If you believe that a good idea can take over the world and if you conjecture that computers can or will have ideas, then you have to consider the possibility that computers may one day take over the world.  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/elon-musk-is-wrong-again-ai-isnt-more-dangerous-than-north-korea-fortune\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":7,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187742],"tags":[],"class_list":["post-212065","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/212065"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=212065"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/212065\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=212065"}
],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=212065"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=212065"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}