{"id":325393,"date":"2019-06-03T01:54:04","date_gmt":"2019-06-03T05:54:04","guid":{"rendered":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/yudkowsky-the-ai-box-experiment.php"},"modified":"2019-06-03T01:54:04","modified_gmt":"2019-06-03T05:54:04","slug":"yudkowsky-the-ai-box-experiment","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/transhuman\/yudkowsky-the-ai-box-experiment.php","title":{"rendered":"Yudkowsky &#8211; The AI-Box Experiment"},"content":{"rendered":"<p><p>                    Person1:         \"When we build AI, why not just keep it in          sealed    hardware     that  can't affect the outside world in any way except          through    one communications      channel with the original programmers?          That    way it couldn't get  out    until we were convinced it was          safe.\"                     Person2:         \"That might work if you were talking about          dumber-than-human         AI, but a transhuman AI would just convince you          to let it out. It       doesn't   matter how much security you put          on the box. Humans       are not  secure.\"                     Person1:         \"I don't see how even a transhuman AI could           make   me  let   it  out,  if I didn't want to, just by talking to me.\"                     Person2:         \"It would make you want to let it out. This           is  a  transhuman    mind we're talking about. If it thinks both           faster    and  better than   a human, it can probably take over a human mind           through    a text-only terminal.\"                     Person1:         \"There is no chance I could be persuaded to           let   the   AI  out.   No matter what it says, I can always just say           no.  I   can't  imagine   anything that even a transhuman could say           to me  which would   change  that.\"                     Person2:         \"Okay, let's run the experiment. We'll          meet   in  a  private    chat channel. I'll be the AI. You be          the gatekeeper.     You can resolve to believe whatever you like,          as  strongly as you  like, as far in advance as you like. We'll  talk for          at least two hours. If   I can't convince you    to let me out,  I'll          Paypal   you $10.\"                                                                  So far, this test has actually been run on twooccasions.<\/p>\n<p>On the first occasion (in March 2002), Eliezer Yudkowsky simulated the   AI and Nathan Russell  simulated the gatekeeper. The AI's handicap(the amount paid by the AI party to the gatekeeper party if not released)was set at $10. On the second occasion (in July 2002), Eliezer Yudkowsky simulated the AI and David McFadzean simulated the gatekeeper, with an AIhandicap of $20.<\/p>\n<p>Results of the first test: Eliezer Yudkowsky   and  Nathan Russell. [1][2][3][4]Results of the second test: Eliezer Yudkowsky  and David McFadzean. [1] [2] [3]<\/p>\n<p>Both of these tests occurred without prior agreed-upon rules exceptfor secrecy and a 2-hour minimum time. After the second test, Yudkowsky created this suggested interpretation of the test, based on his experiences, as a guide to possible future tests.       <\/p>\n<p>For a more severe handicap for the AI party, the handicap may bean  even   bet, rather than being a payment from the AI party to the Gatekeeper party  if the AI is not freed. 
Read the original post:

Yudkowsky – The AI-Box Experiment: http://yudkowsky.net/singularity/aibox