{"id":146243,"date":"2015-09-24T23:47:17","date_gmt":"2015-09-25T03:47:17","guid":{"rendered":"http:\/\/www.designerchildren.com\/paul-allen-the-singularity-isnt-near-mit-technology-review\/"},"modified":"2015-09-24T23:47:17","modified_gmt":"2015-09-25T03:47:17","slug":"paul-allen-the-singularity-isnt-near-mit-technology-review","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/the-singularity\/paul-allen-the-singularity-isnt-near-mit-technology-review\/","title":{"rendered":"Paul Allen: The Singularity Isn&#8217;t Near | MIT Technology Review"},"content":{"rendered":"<p>The Singularity Summit approaches this weekend in New York. But the Microsoft cofounder and a colleague say the singularity itself is a long way off.<\/p>\n<p>Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they'll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It's heady stuff.<\/p>\n<p>While we suppose this kind of singularity might one day occur, we don't think it is near. 
In fact, we think it will be a very long time coming. Kurzweil disagrees, based on his extrapolations about the rate of relevant scientific and technical progress. He reasons that the rate of progress toward the singularity isn't just a progression of steadily increasing capability, but is in fact exponentially accelerating: what Kurzweil calls the Law of Accelerating Returns. He writes that:<\/p>\n<p>So we won't experience 100 years of progress in the 21st century; it will be more like 20,000 years of progress (at today's rate). The returns, such as chip speed and cost-effectiveness, also increase exponentially. There's even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity. [1]<\/p>\n<p>By working through a set of models and historical data, Kurzweil famously calculates that the singularity will arrive around 2045.<\/p>\n<p>This prediction seems to us quite far-fetched. Of course, we are aware that the history of science and technology is littered with people who confidently assert that some event can't happen, only to be later proven wrong, often in spectacular fashion. We acknowledge that it is possible but highly unlikely that Kurzweil will eventually be vindicated. An adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress.<\/p>\n<p>Kurzweil's reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. 
Therefore, like other attempts to forecast the future from the past, these laws will work until they don't. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer's hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn't enough to just run today's software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this.<\/p>\n<p>This prior need to understand the basic science of cognition is where the 'singularity is near' arguments fail to persuade us. It is true that computer hardware technology can develop amazingly quickly once we have a solid scientific framework and adequate economic incentives. However, creating the software for a real singularity-level computer intelligence will require fundamental scientific progress beyond where we are today. This kind of progress is very different from the Moore's Law-style evolution of computer hardware capabilities that inspired Kurzweil and Vinge. Building the complex software that would allow the singularity to happen requires us to first have a detailed scientific understanding of how the human brain works that we can use as an architectural guide, or else create it all de novo. 
This means not just knowing the physical structure of the brain, but also how the brain reacts and changes, and how billions of parallel neuron interactions can result in human consciousness and original thought. Getting this kind of comprehensive understanding of the brain is not impossible. If the singularity is going to occur on anything like Kurzweil's timeline, though, then we absolutely require a massive acceleration of our scientific progress in understanding every facet of the human brain.<\/p>\n<p>But history tells us that the process of original scientific discovery just doesn't behave this way, especially in complex areas like neuroscience, nuclear fusion, or cancer research. Overall scientific progress in understanding the brain rarely resembles an orderly, inexorable march to the truth, let alone an exponentially accelerating one. Instead, scientific advances are often irregular, with unpredictable flashes of insight punctuating the slow grind-it-out lab work of creating and testing theories that can fit with experimental observations. Truly significant conceptual breakthroughs don't arrive when predicted, and every so often new scientific paradigms sweep through the field and cause scientists to reevaluate portions of what they thought they had settled. We see this in neuroscience with the discovery of long-term potentiation, the columnar organization of cortical areas, and neuroplasticity. These kinds of fundamental shifts don't support the overall Moore's Law-style acceleration needed to get to the singularity on Kurzweil's schedule.<\/p>\n<p>The Complexity Brake<\/p>\n<p>The foregoing points at a basic issue with how quickly a scientifically adequate account of human intelligence can be developed. We call this issue the complexity brake. 
As we go deeper and deeper in our understanding of natural systems, we typically find that we require more and more specialized knowledge to characterize them, and we are forced to continuously expand our scientific theories in more and more complex ways. Understanding the detailed mechanisms of human cognition is a task that is subject to this complexity brake. Just think about what is required to thoroughly understand the human brain at a micro level. The complexity of the brain is simply awesome. Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors. The closer we look at the brain, the greater the degree of neural variation we find. Understanding the neural structure of the human brain is getting harder as we learn more. Put another way, the more we learn, the more we realize there is to know, and the more we have to go back and revise our earlier understandings. We believe that one day this steady increase in complexity will end; the brain is, after all, a finite set of neurons and operates according to physical principles. But for the foreseeable future, it is the complexity brake and arrival of powerful new theories, rather than the Law of Accelerating Returns, that will govern the pace of scientific progress required to achieve the singularity.<\/p>\n<p>So, while we think a fine-grained understanding of the neural structure of the brain is ultimately achievable, it has not shown itself to be the kind of area in which we can make exponentially accelerating progress. 
But suppose scientists make some brilliant new advance in brain scanning technology. Singularity proponents often claim that we can achieve computer intelligence just by numerically simulating the brain bottom-up from a detailed neural-level picture. For example, Kurzweil predicts the development of nondestructive brain scanners that will allow us to precisely take a snapshot of a person's living brain at the subneuron level. He suggests that these scanners would most likely operate from inside the brain via millions of injectable medical nanobots. But, regardless of whether nanobot-based scanning succeeds (and we aren't even close to knowing if this is possible), Kurzweil essentially argues that this is the needed scientific advance that will gate the singularity: computers could exhibit human-level intelligence simply by loading the state and connectivity of each of a brain's neurons inside a massive digital brain simulator, hooking up inputs and outputs, and pressing start.<\/p>\n<p>However, the difficulty of building human-level software goes deeper than computationally modeling the structural connections and biology of each of our neurons. Brain duplication strategies like these presuppose that there is no fundamental issue in getting to human cognition other than having sufficient computer power and neuron structure maps to do the simulation.[2] While this may be true theoretically, it has not worked out that way in practice, because it doesn't address everything that is actually needed to build the software. For example, if we wanted to build software to simulate a bird's ability to fly in various conditions, simply having a complete diagram of bird anatomy isn't sufficient. To fully simulate the flight of an actual bird, we also need to know how everything functions together. In neuroscience, there is a parallel situation. 
Hundreds of attempts have been made (using many different organisms) to chain together simulations of different neurons along with their chemical environment. The uniform result of these attempts is that in order to create an adequate simulation of the real ongoing neural activity of an organism, you also need a vast amount of knowledge about the functional role that these neurons play, how their connection patterns evolve, how they are structured into groups to turn raw stimuli into information, and how neural information processing ultimately affects an organism's behavior. Without this information, it has proven impossible to construct effective computer-based simulation models. Especially for the cognitive neuroscience of humans, we are not close to the requisite level of functional knowledge. Brain simulation projects underway today model only a small fraction of what neurons do and lack the detail to fully simulate what occurs in a brain. The pace of research in this area, while encouraging, hardly seems to be exponential. Again, as we learn more and more about the actual complexity of how the brain functions, the main thing we find is that the problem is actually getting harder.<\/p>\n<p>The AI Approach<\/p>\n<p>Singularity proponents occasionally appeal to developments in artificial intelligence (AI) as a way to get around the slow rate of overall scientific progress in bottom-up, neuroscience-based approaches to cognition. It is true that AI has had great successes in duplicating certain isolated cognitive tasks, most recently with IBM's Watson system for Jeopardy! question answering. But when we step back, we can see that overall AI-based capabilities haven't been exponentially increasing either, at least when measured against the creation of a fully general human intelligence. 
While we have learned a great deal about how to build individual AI systems that do seemingly intelligent things, our systems have always remained brittle: their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific focus areas. A computer program that plays excellent chess can't leverage its skill to play other games. The best medical diagnosis programs contain immensely detailed knowledge of the human body but can't deduce that a tightrope walker would have a great sense of balance.<\/p>\n<p>Why has it proven so difficult for AI researchers to build human-like intelligence, even at a small scale? One answer involves the basic scientific framework that AI researchers use. As humans grow from infants to adults, they begin by acquiring a general knowledge about the world, and then continuously augment and refine this general knowledge with specific knowledge about different areas and contexts. AI researchers have typically tried to do the opposite: they have built systems with deep knowledge of narrow areas, and tried to create a more general capability by combining these systems. This strategy has not generally been successful, although Watson's performance on Jeopardy! indicates paths like this may yet have promise. The few attempts that have been made to directly create a large amount of general knowledge of the world, and then add the specialized knowledge of a domain (for example, the work of Cycorp), have also met with only limited success. 
And in any case, AI researchers are only just beginning to theorize about how to effectively model the complex phenomena that give human cognition its unique flexibility: uncertainty, contextual sensitivity, rules of thumb, self-reflection, and the flashes of insight that are essential to higher-level thought. Just as in neuroscience, the AI-based route to achieving singularity-level computer intelligence seems to require many more discoveries, some new Nobel-quality theories, and probably even whole new research approaches that are incommensurate with what we believe now. This kind of basic scientific progress doesn't happen on a reliable exponential growth curve. So although developments in AI might ultimately end up being the route to the singularity, again the complexity brake slows our rate of progress, and pushes the singularity considerably into the future.<\/p>\n<p>The amazing intricacy of human cognition should serve as a caution to those who claim the singularity is close. Without having a scientifically deep understanding of cognition, we can't create the software that could spark the singularity. Rather than the ever-accelerating advancement predicted by Kurzweil, we believe that progress toward this understanding is fundamentally slowed by the complexity brake. Our ability to achieve this understanding, via either the AI or the neuroscience approaches, is itself a human cognitive act, arising from the unpredictable nature of human ingenuity and discovery. Progress here is deeply affected by the ways in which our brains absorb and process new information, and by the creativity of researchers in dreaming up new theories. It is also governed by the ways that we socially organize research work in these fields, and disseminate the knowledge that results. 
At Vulcan and at the Allen Institute for Brain Science, we are working on advanced tools to help researchers deal with this daunting complexity, and speed them in their research. Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.<\/p>\n<p>Paul G. Allen, who cofounded Microsoft in 1975, is a philanthropist and chairman of Vulcan, which invests in an array of technology, aerospace, entertainment, and sports businesses. Mark Greaves is a computer scientist who serves as Vulcan's director for knowledge systems.<\/p>\n<p>[1] Kurzweil, The Law of Accelerating Returns, March 2001.<\/p>\n<p>[2] We are beginning to get within range of the computer power we might need to support this kind of massive brain simulation. Petaflop-class computers (such as IBM's BlueGene\/P that was used in the Watson system) are now available commercially. Exaflop-class computers are currently on the drawing boards. These systems could probably deploy the raw computational capability needed to simulate the firing patterns for all of a brain's neurons, though currently it happens many times more slowly than would happen in an actual brain.<\/p>\n<p>UPDATE: Ray Kurzweil responds here.<\/p>\n<p>See the rest here:<br \/>\n<a target=\"_blank\" href=\"http:\/\/www.technologyreview.com\/view\/425733\/paul-allen-the-singularity-isnt-near\/\" title=\"Paul Allen: The Singularity Isn't Near | MIT Technology Review\">Paul Allen: The Singularity Isn't Near | MIT Technology Review<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> The Singularity Summit approaches this weekend in New York. 
But the Microsoft cofounder and a colleague say the singularity itself is a long way off. Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/the-singularity\/paul-allen-the-singularity-isnt-near-mit-technology-review\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[214963],"tags":[],"class_list":["post-146243","post","type-post","status-publish","format-standard","hentry","category-the-singularity"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/146243"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=146243"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/146243\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=146243"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=146243"},{"taxonomy":"post_tag","embedd
able":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=146243"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}