Google and Elon Musk to Decide What Is Good for Humanity

The recently published Future of Life Institute (FLI) letter, Research Priorities for Robust and Beneficial Artificial Intelligence, signed by hundreds of AI researchers, many representing government regulators and some sitting on committees with names like Presidential Panel on Long Term AI Future, in addition to the likes of Elon Musk and Stephen Hawking, offers a program professing to protect mankind from the threat of super-intelligent AIs. In a contrarian view, I believe that should they succeed, rather than the promised salvation we will see a 21st-century version of the 17th-century Salem witch trials, in which technologies competing with AI are tried and burned at the stake, to much fanfare and applause from the mainstream press.

Before I proceed to my concerns, some background on AI. For the last 50 years, AI researchers have promised to deliver intelligent computers, which always seem to be five years in the future. For example, Dharmendra Modha, in charge of IBM's SyNAPSE neuromorphic chips, claimed two or three years ago that IBM will deliver a computer equivalent of the human brain by 2018. I have heard echoes of this claim in the statements of virtually all recently funded AI and Deep Learning companies. The press accepts these claims with the same gullibility it displayed during Apple Siri's launch and hails the arrival of brain-like computing as a fait accompli. I believe this is very far from the truth.

The investments, on the other hand, are real, with old AI technologies dressed up in the new clothes of Deep Learning. In addition to acquiring DeepMind, Google hired Geoffrey Hinton's University of Toronto team as well as Ray Kurzweil, whose primary motivation for joining Google Brain seems to be the opportunity to upload his brain into the vast Google supercomputer. Baidu invested $300M in the Deep Learning lab of Stanford University's Andrew Ng, Facebook and Zuckerberg personally invested $55M in Vicarious and hired Yann LeCun, the other deep learning guru, Samsung and Intel invested in Expect Labs and Reactor, and Qualcomm made a sizable investment in BrainCorp. While some progress in speech processing and image recognition will be made, it will not be sufficient to justify the lofty valuations of recent funding events.

While my background is in fact in AI, I have worked closely for the last few years with the preeminent neuroscientist Walter Freeman at Berkeley on a new kind of wearable personal assistant, one based not on AI but on neural science. During this time, I came to the conclusion that symbol-based computing technologies, including point-to-point deep neural networks (not neural science), cannot possibly deliver on the claims made by many of these well-funded AI labs and startups. Here are just three of the reasons:

Each of the above three empirical findings invalidates AI's symbolic, computational approach. I could provide more, but it is hard to fight the prevalent cultural myths perpetuated by mass media. Movies are a good example. At the beginning of the movie Transcendence, Johnny Depp's character, an AI researcher (from Berkeley :), makes the bold claim that just one AI will be smarter than the entire population of humans that ever lived on Earth. By my calculation this estimate is off today by almost 20 orders of magnitude, and it will take more than a few years to bridge that gap.
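
To make the scale of this concrete, here is a rough back-of-the-envelope sketch in Python. The figures in it (roughly 100 billion humans ever born, on the order of 10^14 synapses per brain, and an artificial network of about 10 billion connections) are illustrative assumptions, not measurements, and they are deliberately cruder than the calculation behind the 20-orders figure above:

    import math

    # Back-of-the-envelope comparison: every human brain that ever existed
    # versus one large artificial neural network. All numbers are rough,
    # illustrative assumptions, not measurements.
    humans_ever_lived  = 1e11   # demographic estimate: roughly 100 billion people
    synapses_per_brain = 1e14   # order-of-magnitude synapse count for one human brain
    ann_connections    = 1e10   # a generously sized artificial network, ~10 billion weights

    biological_total = humans_ever_lived * synapses_per_brain  # ~1e25 synapses
    gap_orders = math.log10(biological_total / ann_connections)

    print(f"all human synapses ever: ~1e{math.log10(biological_total):.0f}")
    print(f"gap vs. one artificial network: ~{gap_orders:.0f} orders of magnitude")

Even with these deliberately generous assumptions in the machine's favor, the crude count leaves a gap of roughly fifteen orders of magnitude; counting the dynamics of real synapses rather than static connections only widens it.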

Which brings me back to the FLI letter. While individual investors have every right to lose their assets, the problem gets much more complicated when government regulators are involved. Here are the main claims of the letter I have a problem with (quotes from the letter in italics):

Why should government regulators support a technology which has repeatedly failed to deliver on its promises for 50 years? Newly emerging branches of neural science, which have made major breakthroughs in recent years, hold much greater promise, in many cases exposing glaring weaknesses of the AI approach, so it is precisely these groups that will suffer if AI is allowed to regulate the direction of future research on intellect, whether human or artificial. Neural scientists study actual brains with imaging techniques such as fMRI, EEG, and ECoG, and then postulate predictions about their structure and function from the empirical data they gather. The more neural research progresses, the clearer it becomes that the brain is vastly more complex than we thought just a few decades ago.
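
As a toy illustration of that data-first workflow (a generic sketch of my own, not taken from any of the groups mentioned), consider estimating which rhythm dominates a recorded signal and only then theorizing about its origin:

    import numpy as np

    # A minimal sketch of the empirical workflow described above: start from a
    # recorded signal and let the data drive the hypothesis. A synthetic
    # "EEG-like" trace stands in for a real recording; actual work would use
    # measured fMRI/EEG/ECoG data, not a toy signal.
    fs = 256                                    # assumed sampling rate in Hz
    t = np.arange(0, 10, 1 / fs)                # ten seconds of signal
    signal = (np.sin(2 * np.pi * 10 * t)        # a 10 Hz, alpha-band component
              + 0.5 * np.random.randn(t.size))  # measurement noise

    # Estimate the power spectrum and ask the data which rhythm dominates.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    print(f"dominant rhythm in the recording: ~{dominant:.1f} Hz")

The point of the toy is only the direction of inference: the hypothesis comes after the measurement, not before it.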

AI researchers, on the other hand, start with the a priori assumption that the brain is quite simple, really just a carbon version of a von Neumann CPU. As Google Brain AI researcher and FLI letter signatory Ilya Sutskever recently told me, the brain "absolutely is just a CPU" and further study of the brain "would be a waste of my time." This is an almost word-for-word repetition of a famous statement Noam Chomsky made decades ago predicting the existence of a language generator in the brain.

The FLI letter signatories say: do not worry, we will allow good AI and identify research directions in order to maximize societal benefits, eradicating diseases and poverty. I believe it is precisely the newly emerging neural science groups that would suffer if AI is allowed to regulate research direction in this field. Why should evidence like this allow AI scientists to control what biologists and neural scientists can and cannot do?
