{"id":176687,"date":"2015-01-22T16:51:10","date_gmt":"2015-01-22T21:51:10","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/google-and-elon-musk-to-decide-what-is-good-for-humanity.php"},"modified":"2015-01-22T16:51:10","modified_gmt":"2015-01-22T21:51:10","slug":"google-and-elon-musk-to-decide-what-is-good-for-humanity","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/mind-upload\/google-and-elon-musk-to-decide-what-is-good-for-humanity.php","title":{"rendered":"Google and Elon Musk to Decide What Is Good for Humanity"},"content":{"rendered":"<p>The recently published Future of Life Institute (FLI) letter \"Research Priorities for Robust and Beneficial Artificial Intelligence\", signed by hundreds of AI researchers, many representing government regulators, some sitting on committees with names like the Presidential Panel on Long Term AI Future, in addition to the likes of Elon Musk and Stephen Hawking, offers a program professing to protect mankind from the threat of super-intelligent AIs. In a contrarian view, I believe that, should they succeed, rather than the promised salvation we will instead see a 21st-century version of the 17th-century Salem witch trials, where technologies competing with AI will be tried and burned at the stake, with much fanfare and applause from the mainstream press.<\/p>\n<p>Before I proceed to my concerns, some background on AI. For the last 50 years, AI researchers have promised to deliver intelligent computers, which always seem to be five years in the future. For example, Dharmendra Modha, in charge of IBM's SyNAPSE neuromorphic chips, claimed two or three years ago that IBM would deliver a computer equivalent of the human brain by 2018. I have heard echoes of this claim in the statements of virtually all recently funded AI and Deep Learning companies. 
The press accepts these claims with the same gullibility it displayed during Apple Siri's launch and hails the arrival of brain-like computing as a fait accompli. I believe this is very far from the truth.<\/p>\n<p>The investments, on the other hand, are real, with old AI technologies dressed up in the new clothes of Deep Learning. In addition to acquiring DeepMind, Google hired Geoffrey Hinton's University of Toronto team as well as Ray Kurzweil, whose primary motivation for joining Google Brain seems to be the opportunity to upload his brain into a vast Google supercomputer. Baidu invested $300M in Stanford University professor Andrew Ng's deep learning lab, while Facebook and Zuckerberg personally invested $55M in Vicarious and hired Yann LeCun, the other deep learning guru. Samsung and Intel invested in Expect Labs and Reactor, and Qualcomm made a sizable investment in BrainCorp. While some progress in speech processing and image recognition will be made, it will not be sufficient to justify the lofty valuations of recent funding events.<\/p>\n<p>While my background is in fact in AI, I have worked closely for the last few years with the preeminent neural scientist Walter Freeman at Berkeley on a new kind of wearable personal assistant, one based not on AI but on neural science. During this time, I came to the conclusion that symbol-based computing technologies, including point-to-point deep neural networks (not neural science), cannot possibly deliver on the claims made by many of these well-funded AI labs and startups. Here are just three of the reasons:<\/p>\n<p>Each of the above three empirical findings invalidates AI's symbolic, computational approach. I could provide more, but it is hard to fight prevalent cultural myths perpetuated by the mass media. Movies are a good example. 
At the beginning of the movie Transcendence, Johnny Depp's character, an AI researcher (from Berkeley), makes the bold claim that just one AI will be smarter than the entire population of humans that has ever lived on Earth. By my calculation, this estimate is off today by almost 20 orders of magnitude; it will take more than a few years to bridge that gap.<\/p>\n<p>Which brings me back to the FLI letter. While individual investors have every right to lose their assets, the problem gets much more complicated when government regulators are involved. Here are the main claims of the letter I have a problem with (quotes from the letter in italics):<\/p>\n<p>Why should government regulators support a technology which has repeatedly failed to deliver on its promises for 50 years? Newly emerging branches of neural science, which have made major breakthroughs in recent years, hold much greater promise, in many cases exposing glaring weaknesses of the AI approach; so it is precisely these groups which will suffer if AI is allowed to regulate the direction of future research into intelligence, whether human or artificial. Neural scientists study actual brains with imaging techniques such as fMRI, EEG, and ECoG, and then postulate predictions about their structure and function from the empirical data they gather. The more neural research progresses, the clearer it becomes that the brain is vastly more complex than we thought just a few decades ago.<\/p>\n<p>AI researchers, on the other hand, start with the a priori assumption that the brain is quite simple, really just a carbon version of a von Neumann CPU. As Google Brain AI researcher and FLI letter signatory Ilya Sutskever recently told me, the brain \"absolutely is just a CPU\" and further study of the brain \"would be a waste of my time.\" 
This is an almost word-for-word repetition of a famous statement Noam Chomsky made decades ago predicting the existence of a \"language generator\" in the brain.<\/p>\n<p>The FLI letter's signatories say: do not worry, we will allow good AI and identify research directions \"in order to maximize societal benefits\" and eradicate diseases and poverty. I believe that it would be precisely the newly emerging neural science groups which would suffer if AI is allowed to regulate the research direction in this field. Why should evidence like this allow AI scientists to control what biologists and neural scientists can and cannot do?<\/p>\n<p>Read the original here:<\/p>\n<p><a target=\"_blank\" href=\"http:\/\/feeds.wired.com\/c\/35185\/f\/661370\/s\/429b09df\/sc\/1\/l\/0L0Swired0N0C20A150C0A10Cgoogle0Eand0Eelon0Emusk0Egood0Efor0Ehumanity0C\/story01.htm\/RK=0\/RS=6nEF8_rxdWLIAyPQYX4.XTtYbuU-\" title=\"Google and Elon Musk to Decide What Is Good for Humanity\">Google and Elon Musk to Decide What Is Good for Humanity<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The recently published Future of Life Institute (FLI) letter \"Research Priorities for Robust and Beneficial Artificial Intelligence\", signed by hundreds of AI researchers, many representing government regulators, some sitting on committees with names like the Presidential Panel on Long Term AI Future, in addition to the likes of Elon Musk and Stephen Hawking, offers a program professing to protect mankind from the threat of super-intelligent AIs. In a contrarian view, I believe that, should they succeed, rather than the promised salvation we will instead see a 21st-century version of the 17th-century Salem witch trials, where technologies competing with AI will be tried and burned at the stake, with much fanfare and applause from the mainstream press. 
<a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/mind-upload\/google-and-elon-musk-to-decide-what-is-good-for-humanity.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[16],"tags":[],"class_list":["post-176687","post","type-post","status-publish","format-standard","hentry","category-mind-upload"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/176687"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=176687"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/176687\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=176687"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=176687"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=176687"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}