{"id":199149,"date":"2015-04-07T11:41:25","date_gmt":"2015-04-07T15:41:25","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/ai-doomsayer-says-his-ideas-are-catching-on.php"},"modified":"2015-04-07T11:41:25","modified_gmt":"2015-04-07T15:41:25","slug":"ai-doomsayer-says-his-ideas-are-catching-on","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/ai-doomsayer-says-his-ideas-are-catching-on.php","title":{"rendered":"AI Doomsayer Says His Ideas Are Catching On"},"content":{"rendered":"<p>Philosopher Nick Bostrom says major tech companies are listening to his warnings about investing in AI safety research.<\/p>\n<p>Nick Bostrom<\/p>\n<p>Over the past year, Oxford University philosophy professor Nick Bostrom has gained visibility for warning about the potential risks posed by more advanced forms of artificial intelligence. He now says that his warnings are earning the attention of companies pushing the boundaries of artificial intelligence research.<\/p>\n<p>Many people working on AI remain skeptical of or even hostile to Bostrom's ideas. But since his book on the subject, Superintelligence, appeared last summer, some prominent technologists and scientists, including Elon Musk, Stephen Hawking, and Bill Gates, have echoed some of his concerns. Google is even assembling an ethics committee to oversee its artificial intelligence work.<\/p>\n<p>Bostrom met last week with MIT Technology Review's San Francisco bureau chief, Tom Simonite, to discuss his effort to get artificial intelligence researchers to consider the dangers of their work (see \"Our Fear of Artificial Intelligence\").<\/p>\n<p>How did you come to believe that artificial intelligence was a more pressing problem for the world than, say, nuclear holocaust or a major pandemic?
<\/p>\n<p>A lot of things could cause catastrophes, but relatively few could actually threaten the entire future of Earth-inhabiting intelligent life. I think artificial intelligence is one of the biggest, and it seems to be one where the efforts of a small number of people, or one extra unit of resources, might make a nontrivial difference. With nuclear war, a lot of big, powerful groups are already interested in that.<\/p>\n<p>What about climate change, which is widely seen as the biggest threat facing humanity at the moment?<\/p>\n<p>It's a very, very small existential risk. For it to be one, our current models would have to be wrong; even the worst scenarios [only] mean the climate in some parts of the world would be a bit more unfavorable. Then we would have to be incapable of remediating that through some geoengineering, which also looks unlikely.<\/p>\n<p>Certain ethical theories imply that existential risk is just way more important. All things considered, existential risk mitigation should be much bigger than it is today. The world spends way more on developing new forms of lipstick than on existential risk.<\/p>\n<p>More:<\/p>\n<p><a target=\"_blank\" href=\"http:\/\/www.technologyreview.com\/news\/536381\/ai-doomsayer-says-his-ideas-are-catching-on\" title=\"AI Doomsayer Says His Ideas Are Catching On\">AI Doomsayer Says His Ideas Are Catching On<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Philosopher Nick Bostrom says major tech companies are listening to his warnings about investing in AI safety research. Nick Bostrom Over the past year, Oxford University philosophy professor Nick Bostrom has gained visibility for warning about the potential risks posed by more advanced forms of artificial intelligence. 
He now says that his warnings are earning the attention of companies pushing the boundaries of artificial intelligence research. <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/ai-doomsayer-says-his-ideas-are-catching-on.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-199149","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/199149"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=199149"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/199149\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=199149"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=199149"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=199149"}],"curies":[{"name":"wp","href"
:"https:\/\/api.w.org\/{rel}","templated":true}]}}