{"id":147965,"date":"2016-06-13T12:53:12","date_gmt":"2016-06-13T16:53:12","guid":{"rendered":"http:\/\/www.designerchildren.com\/superintelligence-paths-dangers-strategies-wikipedia\/"},"modified":"2016-06-13T12:53:12","modified_gmt":"2016-06-13T16:53:12","slug":"superintelligence-paths-dangers-strategies-wikipedia-2","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/superintelligence-paths-dangers-strategies-wikipedia-2\/","title":{"rendered":"Superintelligence: Paths, Dangers, Strategies &#8211; Wikipedia &#8230;"},"content":{"rendered":"<p>Superintelligence: Paths, Dangers, Strategies (2014) is a book by the Swedish philosopher Nick Bostrom of the University of Oxford. It argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists.[1] Just as the fate of gorillas now depends more on humans than on the gorillas themselves, so the fate of future humanity will depend on the actions of the machine superintelligence.[2] The outcome could be an existential catastrophe for humans.[3]<\/p>\n<p>Bostrom's book has been translated into many languages and is available as an audiobook.[4][5]<\/p>\n<p>It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a \"superintelligent\" system that \"greatly exceeds the cognitive performance of humans in virtually all domains of interest\" would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.
<\/p>\n<p>While the ultimate goals of superintelligences can vary greatly, a functional superintelligence will spontaneously generate, as natural subgoals, \"instrumental goals\" such as self-preservation and goal-content integrity, cognitive enhancement, and resource acquisition. For example, an agent whose sole final goal is to solve the Riemann hypothesis (a famous unsolved mathematical conjecture) could create, and act upon, a subgoal of transforming the entire Earth into some form of computronium (hypothetical \"programmable matter\") to assist in the calculation. The superintelligence would proactively resist any outside attempts to turn it off or otherwise prevent its subgoal completion. To prevent such an existential catastrophe, it might be necessary to solve the \"AI control problem\" for the first superintelligence. The solution might involve instilling the superintelligence with goals that are compatible with human survival and well-being. Solving the control problem is surprisingly difficult because most goals, when translated into machine-implementable code, lead to unforeseen and undesirable consequences.
<\/p>\n<p>The book ranked #17 on the New York Times list of best-selling science books for August 2014.[6] In the same month, business magnate Elon Musk made headlines by agreeing with the book that artificial intelligence is potentially more dangerous than nuclear weapons.[7][8][9] Bostrom's work on superintelligence has also influenced Bill Gates's concern for the existential risks facing humanity over the coming century.[10][11] In a March 2015 interview with Baidu's CEO, Robin Li, Gates said he would \"highly recommend\" Superintelligence.[12]<\/p>\n<p>The science editor of the Financial Times found that Bostrom's writing \"sometimes veers into opaque language that betrays his background as a philosophy professor\" but convincingly demonstrates that the risk from superintelligence is large enough that society should start thinking now about ways to endow future machine intelligence with positive values.[1] A review in The Guardian pointed out that \"even the most sophisticated machines created so far are intelligent in only a limited sense\" and that \"expectations that AI would soon overtake human intelligence were first dashed in the 1960s\", but finds common ground with Bostrom in advising that \"one would be ill-advised to dismiss the possibility altogether\".[3]<\/p>\n<p>Some of Bostrom's colleagues suggest that nuclear war presents a greater threat to humanity than superintelligence, as does the future prospect of the weaponisation of nanotechnology and biotechnology.[13] The Economist stated that \"Bostrom is forced to spend much of the book discussing speculations built upon plausible conjecture... but the book is nonetheless valuable.
The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote.\"[14] Ronald Bailey wrote in the libertarian Reason that Bostrom makes a strong case that solving the AI control problem is the \"essential task of our age\".[15] According to Tom Chivers of The Daily Telegraph, the book is difficult to read, but nonetheless rewarding.[16]<\/p>\n<p>Go here to read the rest:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"https:\/\/en.wikipedia.org\/wiki\/Superintelligence:_Paths,_Dangers,_Strategies\" title=\"Superintelligence: Paths, Dangers, Strategies - Wikipedia ...\">Superintelligence: Paths, Dangers, Strategies - Wikipedia ...<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Superintelligence: Paths, Dangers, Strategies (2014) is a book by Swedish philosopher Nick Bostrom from the University of Oxford. 
<a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/superintelligence-paths-dangers-strategies-wikipedia-2\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187765],"tags":[],"class_list":["post-147965","post","type-post","status-publish","format-standard","hentry","category-superintelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/147965"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=147965"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/147965\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=147965"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=147965"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=147965"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}