{"id":194855,"date":"2015-03-25T01:41:00","date_gmt":"2015-03-25T05:41:00","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/artificial-intelligence-systems-more-apt-to-fail-than-to-destroy.php"},"modified":"2015-03-25T01:41:00","modified_gmt":"2015-03-25T05:41:00","slug":"artificial-intelligence-systems-more-apt-to-fail-than-to-destroy","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/artificial-intelligence-systems-more-apt-to-fail-than-to-destroy.php","title":{"rendered":"Artificial intelligence systems more apt to fail than to destroy"},"content":{"rendered":"<p>17 hours ago by David Stauth<\/p>\n<p>The most realistic risks posed by artificial intelligence are basic mistakes, breakdowns and cyber attacks, an expert in the field says, more so than machines that become super powerful, run amok and try to destroy the human race.<\/p>\n<p>Thomas Dietterich, president of the Association for the Advancement of Artificial Intelligence and a distinguished professor of computer science at Oregon State University, said that the recent contribution of $10 million by Elon Musk to the Future of Life Institute will help support some important and needed efforts to ensure AI safety.<\/p>\n<p>But the real risks may not be as dramatic as some people visualize, he said.<\/p>\n<p>\"For a long time the risks of artificial intelligence have mostly been discussed in a few small, academic circles, and now they are getting some long-overdue attention,\" Dietterich said. \"That attention, and funding to support it, is a very important step.\"<\/p>\n<p>Dietterich's perspective on problems with AI, however, is a little more pedestrian than most: not so much that it will overwhelm humanity, but that, like most complex engineered systems, it may not always work.
<\/p>\n<p>\"We're now talking about doing some pretty difficult and exciting things with AI, such as automobiles that drive themselves, or robots that can effect rescues or operate weapons,\" Dietterich said. \"These are high-stakes tasks that will depend on enormously complex algorithms.<\/p>\n<p>\"The biggest risk is that those algorithms may not always work,\" he added. \"We need to be conscious of this risk and create systems that can still function safely even when the AI components commit errors.\"<\/p>\n<p>Dietterich said he considers machines becoming self-aware and trying to exterminate humans to be more science fiction than scientific fact. But to the extent that computer systems are given increasingly dangerous tasks, and asked to learn from and interpret their experiences, he says they may simply make mistakes.<\/p>\n<p>\"Computer systems can already beat humans at chess, but that doesn't mean they can't make a wrong move,\" he said. \"They can reason, but that doesn't mean they always get the right answer. And they may be powerful, but that's not the same thing as saying they will develop superpowers.\"<\/p>\n<p>Read more: <\/p>\n<p><a target=\"_blank\" href=\"http:\/\/phys.org\/news346402935.html\" title=\"Artificial intelligence systems more apt to fail than to destroy\">Artificial intelligence systems more apt to fail than to destroy<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>17 hours ago by David Stauth. The most realistic risks posed by artificial intelligence are basic mistakes, breakdowns and cyber attacks, an expert in the field says, more so than machines that become super powerful, run amok and try to destroy the human race. 
Thomas Dietterich, president of the Association for the Advancement of Artificial Intelligence and a distinguished professor of computer science at Oregon State University, said that the recent contribution of $10 million by Elon Musk to the Future of Life Institute will help support some important and needed efforts to ensure AI safety. But the real risks may not be as dramatic as some people visualize, he said. <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/artificial-intelligence-systems-more-apt-to-fail-than-to-destroy.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-194855","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/194855"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=194855"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/194855\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=194855"}],"wp:term":[{"taxonomy":"catego
ry","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=194855"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=194855"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}