{"id":229620,"date":"2017-07-22T03:33:21","date_gmt":"2017-07-22T07:33:21","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/will-we-be-wiped-out-by-machine-overlords-maybe-we-need-a-pbs-newshour.php"},"modified":"2017-07-22T03:33:21","modified_gmt":"2017-07-22T07:33:21","slug":"will-we-be-wiped-out-by-machine-overlords-maybe-we-need-a-pbs-newshour","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/superintelligence\/will-we-be-wiped-out-by-machine-overlords-maybe-we-need-a-pbs-newshour.php","title":{"rendered":"Will we be wiped out by machine overlords? Maybe we need a &#8230; &#8211; PBS NewsHour"},"content":{"rendered":"<p>JUDY WOODRUFF: Now: the fears around the development of artificial intelligence.<\/p>\n<p>Computer superintelligence is a long, long way from the stuff of sci-fi movies, but several high-profile leaders and thinkers have been worrying quite publicly about what they see as the risks to come.<\/p>\n<p>Our economics correspondent, Paul Solman, explores that. It's part of his weekly series, Making Sense.<\/p>\n<p>ACTOR: I want to talk to you about the greatest scientific event in the history of man.<\/p>\n<p>ACTOR: Are you building an A.I.?<\/p>\n<p>PAUL SOLMAN: A.I., artificial intelligence.<\/p>\n<p>ACTRESS: Do you think I might be switched off?<\/p>\n<p>ACTOR: It's not up to me.<\/p>\n<p>ACTRESS: Why is it up to anyone?<\/p>\n<p>PAUL SOLMAN: Some version of this scenario has had prominent tech luminaries and scientists worried for years.<\/p>\n<p>In 2014, cosmologist Stephen Hawking told the BBC:<\/p>\n<p>STEPHEN HAWKING, Scientist (through computer voice): I think the development of full artificial intelligence could spell the end of the human race.
<\/p>\n<p>PAUL SOLMAN: And just this week, Tesla and SpaceX entrepreneur Elon Musk told the National Governors Association:<\/p>\n<p>ELON MUSK, CEO, Tesla Motors: A.I. is a fundamental existential risk for human civilization. And I don't think people fully appreciate that.<\/p>\n<p>PAUL SOLMAN: OK, but what's the economics angle? Well, at Oxford University's Future of Humanity Institute, founding director Nick Bostrom leads a team trying to figure out how best to invest in, well, the future of humanity.<\/p>\n<p>NICK BOSTROM, Director, Future of Humanity Institute: We are in this very peculiar situation of looking back at the history of our species, 100,000 years old, and now finding ourselves just before the threshold to what looks like it will be this transition to some post-human era of superintelligence that can colonize the universe, and then maybe last for billions of years.<\/p>\n<p>PAUL SOLMAN: Philosopher Bostrom has for many years been perhaps the most prominent thinker about the benefits and dangers to humanity of what he calls superintelligence.<\/p>\n<p>NICK BOSTROM: Once there is superintelligence, the fate of humanity may depend on what that superintelligence does.<\/p>\n<p>PAUL SOLMAN: There are plenty of ways to invest in humanity, he says, giving money to anti-disease charities, for example.<\/p>\n<p>But Bostrom thinks longer-term, about investing to lessen existential risks, those that threaten to wipe out the human species entirely. Global warming might be one. But plenty of other people are worrying about that, he says. So, he thinks about other risks.<\/p>\n<p>What are the greatest of those risks?
<\/p>\n<p>NICK BOSTROM: The greatest existential risks arise from certain anticipated technological breakthroughs that we might make, in particular, machine superintelligence, nanotechnology, and synthetic biology, fundamentally because we don't have the ability to uninvent anything that we invent.<\/p>\n<p>We don't, as a human civilization, have the ability to put the genie back into the bottle. Once something has been published, then we are stuck with that knowledge.<\/p>\n<p>PAUL SOLMAN: So Bostrom wants money invested in how to manage A.I.<\/p>\n<p>NICK BOSTROM: Specifically on the question, if and when in the future you could build machines that were really smart, maybe superintelligent, smarter than humans, how could you then ensure that you could control what those machines do, that they were beneficial, that they were aligned with human intentions?<\/p>\n<p>PAUL SOLMAN: How likely is it that machines would develop basically a mind of their own, which is what you're saying, right?<\/p>\n<p>NICK BOSTROM: I do think that advanced A.I., including superintelligence, is a sort of portal through which humanity will have passage, assuming we don't destroy ourselves prematurely in some other way.<\/p>\n<p>Right now, the human brain is where it's at. It's the source of almost all of the technologies we have.<\/p>\n<p>PAUL SOLMAN: I'm relieved to hear that.<\/p>\n<p>(LAUGHTER)<\/p>\n<p>NICK BOSTROM: And the complex social organization we have.<\/p>\n<p>PAUL SOLMAN: Right.<\/p>\n<p>NICK BOSTROM: It's why the modern condition is so different from the way that the chimpanzees live.<\/p>\n<p>It's all through the human brain's ability to discover and communicate.
But there is no reason to think that human intelligence is anywhere near the greatest possible level of intelligence that could exist, that we are sort of the smartest possible species.<\/p>\n<p>I think, rather, that we are the stupidest possible species that is capable of creating technological civilization.<\/p>\n<p>PAUL SOLMAN: And capable of creating technology that has begun to surpass us, first in chess, then in Jeopardy, now in the supposedly impossible game for a machine to win, Go.<\/p>\n<p>This is just task-oriented software, some have argued, and not really intelligence at all. Moreover, whatever you call it, there will be enormous benefits, says Bostrom.<\/p>\n<p>On the other hand, if we approach real intelligence, it could also become a threat. Think of Ex Machina or The Matrix, or Elon Musk's fantasy fear this week about advanced A.I.<\/p>\n<p>ELON MUSK: Well, it could start a war by create, by doing fake news and spoofing e-mail accounts and fake press releases, and just by, you know, manipulating information. The pen is mightier than the sword.<\/p>\n<p>PAUL SOLMAN: So, this is going to be a cat-and-mouse game between us and the intelligence?<\/p>\n<p>NICK BOSTROM: That would be one model. One line of attack is to try to leverage the A.I.'s intelligence to learn what it is that we value and what we want it to do.<\/p>\n<p>PAUL SOLMAN: In order to protect ourselves from what could be a truly existential risk.<\/p>\n<p>So, how do you get the greatest good for the greatest number of present and future human beings? It might be to invest now in controlling the evolution of artificial intelligence.<\/p>\n<p>For the PBS NewsHour, this is economics correspondent Paul Solman, reporting from Oxford, England.
<\/p>\n<p>View original post here: <\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/www.pbs.org\/newshour\/bb\/will-wiped-machine-overlords-maybe-need-game-plan-now\/\" title=\"Will we be wiped out by machine overlords? Maybe we need a ... - PBS NewsHour\">Will we be wiped out by machine overlords? Maybe we need a ... - PBS NewsHour<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> JUDY WOODRUFF: Now: the fears around the development of artificial intelligence.  <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/superintelligence\/will-we-be-wiped-out-by-machine-overlords-maybe-we-need-a-pbs-newshour.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[431612],"tags":[],"class_list":["post-229620","post","type-post","status-publish","format-standard","hentry","category-superintelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/229620"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=229620"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/229620\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.
com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=229620"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=229620"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=229620"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}