{"id":220998,"date":"2017-06-19T23:50:46","date_gmt":"2017-06-20T03:50:46","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/what-is-the-future-of-computers-moores-law.php"},"modified":"2017-06-19T23:50:46","modified_gmt":"2017-06-20T03:50:46","slug":"what-is-the-future-of-computers-moores-law","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/moores-law\/what-is-the-future-of-computers-moores-law.php","title":{"rendered":"What Is the Future of Computers? | Moore&#8217;s Law"},"content":{"rendered":"<p><p> Integrated circuit from an EPROM memory microchip showing the memory blocks and supporting circuitry.<\/p>\n<p> In 1958, a Texas Instruments engineer named Jack Kilby cast a pattern onto the surface of an 11-millimeter-long \"chip\" of semiconducting germanium, creating the first-ever integrated circuit. Because the circuit contained a single transistor (a sort of miniature switch), the chip could hold one \"bit\" of data: either a 1 or a 0, depending on the transistor's configuration. <\/p>\n<p> Since then, and with unflagging consistency, engineers have managed to double the number of transistors they can fit on computer chips every two years. They do it by regularly halving the size of transistors. Today, after dozens of iterations of this doubling-and-halving rule, transistors measure just a few atoms across, and a typical computer chip holds 9 million of them per square millimeter. Computers with more transistors can perform more computations per second (because there are more transistors available for firing), and are therefore more powerful. The doubling of computing power every two years is known as \"Moore's law,\" after Gordon Moore, who first described the trend in 1965 and later co-founded Intel. 
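As a back-of-the-envelope illustration of the doubling rule (the function name and the 10-year horizon are mine, not the article's), the arithmetic can be sketched in a few lines of Python, starting from the article's figure of roughly 9 million transistors per square millimeter:

```python
# Sketch of the Moore's-law arithmetic described above: one doubling of
# transistor density every two years. The starting density is the article's
# figure (~9 million transistors per mm^2); the horizon is illustrative,
# not a real forecast.
def projected_density(density_now: int, years_ahead: int, period: int = 2) -> int:
    """Density after `years_ahead` years, doubling once per `period` years."""
    doublings = years_ahead // period
    return density_now * 2 ** doublings

print(projected_density(9_000_000, 10))  # 5 doublings: 288,000,000 per mm^2
```

Run in reverse, the same rule compounds quickly: five doublings already multiply the density by 32, which is why a decade of the trend makes earlier hardware look antique.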
<\/p>\n<p> Moore's law renders last year's laptop models defunct, and it will undoubtedly make next year's tech devices breathtakingly small and fast compared to today's. But consumerism aside, where is the exponential growth in computing power ultimately headed? Will computers eventually outsmart humans? And will they ever stop becoming more powerful? <\/p>\n<p> The singularity <\/p>\n<p> Many scientists believe the exponential growth in computing power leads inevitably to a future moment when computers will attain human-level intelligence: an event known as the \"singularity.\" And according to some, the time is nigh. <\/p>\n<p> Physicist, author and self-described \"futurist\" Ray Kurzweil has predicted that computers will reach parity with humans within two decades. He told Time Magazine last year that engineers will successfully reverse-engineer the human brain by the mid-2020s, and that by the end of that decade, computers will be capable of human-level intelligence. <\/p>\n<p> The conclusion follows from projecting Moore's law into the future. If the doubling of computing power every two years continues to hold, \"then by 2030 whatever technology we're using will be sufficiently small that we can fit all the computing power that's in a human brain into a physical volume the size of a brain,\" explained Peter Denning, distinguished professor of computer science at the Naval Postgraduate School and an expert on innovation in computing. \"Futurists believe that's what you need for artificial intelligence. At that point, the computer starts thinking for itself.\" [How to Build a Human Brain] <\/p>\n<p> What happens next is uncertain, and has been the subject of speculation since the dawn of computing. 
<\/p>\n<p>    \"Once the machine thinking method has started, it would not    take long to outstrip our feeble powers,\" Alan Turing said in    1951 at a talk entitled \"Intelligent Machinery: A heretical    theory,\" presented at the University of Manchester in the    United Kingdom. \"At some stage therefore we should have to    expect the machines to take control.\" The British mathematician    I.J. Good hypothesized that \"ultraintelligent\" machines, once    created, could design even better machines. \"There would then    unquestionably be an 'intelligence explosion,' and the    intelligence of man would be left far behind. Thus the first    ultraintelligent machine is the last invention that man need    ever make,\" he wrote.  <\/p>\n<p>    Buzz about the coming singularity has escalated to such a pitch    that there's even a book coming out next month, called    \"Singularity Rising\" (BenBella Books), by James Miller, an    associate professor of economics at Smith College, about how to    survive in a post-singularity world. [Could    the Internet Ever Be Destroyed?]  <\/p>\n<p>    Brain-like processing  <\/p>\n<p>    But not everyone puts stock in this notion of a singularity, or    thinks we'll ever reach it. \"A lot of brain scientists now    believe the complexity of the brain is so vast that even if we    could build a computer that mimics the structure, we still    don't know if the thing we build would be able to function as a    brain,\" Denning told Life's Little Mysteries. Perhaps without    sensory inputs from the outside world, computers could never    become self-aware.  <\/p>\n<p>    Others argue that Moore's law will soon start to break down, or    that it has already. The argument stems from the fact that    engineers can't miniaturize transistors much more than they    already have, because they're already pushing atomic limits.    
\"When there are only a few atoms in a transistor, you can no    longer guarantee that a few atoms behave as they're supposed    to,\" Denning explained. On the atomic scale, bizarre     quantum effects set in. Transistors no longer maintain a    single state represented by a \"1\" or a \"0,\" but instead    vacillate unpredictably between the two states, rendering    circuits and data storage unreliable. The other limiting    factor, Denning says, is that transistors give off heat when    they switch between states, and when too many transistors,    regardless of their size, are crammed together onto a single    silicon chip, the heat they collectively emit melts the chip.  <\/p>\n<p>    For these reasons, some scientists say computing power is    approaching its zenith. \"Already we see a slowing down of    Moore's law,\" the theoretical physicist Michio Kaku said in    a BigThink lecture in    May.  <\/p>\n<p>    But if that's the case, it's news to many. Doyne Farmer, a    professor of mathematics at Oxford University who studies the    evolution of technology, says there is little evidence for an    end to Moore's law. \"I am willing to bet that there is    insufficient data to draw a conclusion that a slowing down [of    Moore's law] has been observed,\" Farmer told Life's Little    Mysteries. He says computers continue to grow more powerful as    they become more brain-like.  <\/p>\n<p>    Computers can already perform individual operations orders of    magnitude faster than humans can, Farmer said; meanwhile, the    human brain remains far superior at parallel processing, or    performing multiple operations at once. For most of the past    half-century, engineers made computers faster by increasing the    number of transistors in their processors, but they only    recently began \"parallelizing\" computer processors. 
To work around the fact that individual processors can't be packed with extra transistors, engineers have begun upping computing power by building multi-core processors, or systems of chips that perform calculations in parallel. \"This controls the heat problem, because you can slow down the clock,\" Denning explained. \"Imagine that every time the processor's clock ticks, the transistors fire. So instead of trying to speed up the clock to run all these transistors at faster rates, you can keep the clock slow and have parallel activity on all the chips.\" He says Moore's law will probably continue because the number of cores in computer processors will go on doubling every two years. <\/p>\n<p> And because parallelization is the key to complexity, \"in a sense, multi-core processors make computers work more like the brain,\" Farmer told Life's Little Mysteries. <\/p>\n<p> And then there's the future possibility of quantum computing, a relatively new field that attempts to harness the uncertainty inherent in quantum states in order to perform vastly more complex calculations than are feasible with today's computers. Whereas conventional computers store information in bits, quantum computers store information in qubits: particles, such as atoms or photons, whose states are \"entangled\" with one another, so that a change to one of the particles affects the states of all the others. Through entanglement, a single operation performed on a quantum computer theoretically allows the instantaneous performance of an inconceivably huge number of calculations, and each additional particle added to the system of entangled particles doubles the performance capabilities of the computer. <\/p>\n<p> If physicists manage to harness the potential of quantum computers (something they are still struggling to do), Moore's law will certainly hold far into the future, they say. 
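The \"doubling per added particle\" claim corresponds to a standard fact about qubits: describing n entangled qubits classically requires 2**n complex amplitudes, so each extra qubit doubles the size of the state space. A minimal sketch (the function name is mine, for illustration):

```python
# The state of n entangled qubits is described by 2**n complex amplitudes,
# so each additional qubit doubles the size of the state space -- the
# doubling of capability the article describes.
def state_space_size(n_qubits: int) -> int:
    return 2 ** n_qubits

for n in (1, 2, 3, 10):
    print(n, state_space_size(n))  # 2, 4, 8, 1024
```

By around 50 qubits the state space (2**50, roughly 10**15 amplitudes) already strains classical simulation, which is one way to see why the approach is attractive if it can be engineered.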
<\/p>\n<p> Ultimate limit <\/p>\n<p> If Moore's law does hold, and computing power continues to rise exponentially (either through human ingenuity or under its own ultraintelligent steam), is there a point when the progress will be forced to stop? Physicists Lawrence Krauss and Glenn Starkman say \"yes.\" In 2005, they calculated that Moore's law can only hold so long before computers actually run out of matter and energy in the universe to use as bits. Ultimately, computers will not be able to expand further; they will not be able to co-opt enough material to double their number of bits every two years, because the universe will be accelerating apart too fast for them to catch up and encompass more of it. <\/p>\n<p> So, if Moore's law continues to hold as accurately as it has so far, when do Krauss and Starkman say computers must stop growing? Projections indicate that the computer would encompass the entire reachable universe, turning every bit of matter and energy into a part of its circuitry, in about 600 years' time. <\/p>\n<p> That might sound surprisingly soon. \"Nevertheless, Moore's law is an exponential law,\" Starkman, a physicist at Case Western Reserve University, told Life's Little Mysteries. You can only double the number of bits so many times before you require the entire universe. <\/p>\n<p> Personally, Starkman thinks Moore's law will break down long before the ultimate computer eats the universe. In fact, he thinks computers will stop getting more powerful in about 30 years. Ultimately, there's no telling what will happen. We might reach the singularity: the point when computers become conscious, take over, and then start to self-improve. Or maybe we won't. This month, Denning has a new paper out in the journal Communications of the ACM, called \"Don't feel bad if you can't predict the future.\" It's about all the people who have tried to do so in the past, and failed. 
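The 600-year horizon can be sanity-checked with simple exponential arithmetic: doubling every two years for 600 years is 300 doublings, and 2**300 is on the order of 10**90. The 10**90-bit figure below is a commonly cited order-of-magnitude estimate for the information capacity of the observable universe, an assumption of this sketch rather than a number from the article:

```python
import math

# Assumption (not from the article): the reachable universe can encode
# on the order of 10**90 bits.
universe_bits = 10 ** 90
doubling_period_years = 2

# Years for a single bit, doubled every two years, to exceed that capacity:
years = math.log2(universe_bits) * doubling_period_years
print(round(years))  # ~598, matching the roughly 600-year horizon
```

The exact bound is Krauss and Starkman's; the point of the sketch is only that an exponential eats even a 10**90-fold head start in a few centuries.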
<\/p>\n<p> This story was provided by Life's Little Mysteries, a sister site to LiveScience. Follow Natalie Wolchover on Twitter @nattyover or Life's Little Mysteries @llmysteries. We're also on Facebook & Google+. <\/p>\n<p>Read this article: <\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.livescience.com\/23074-future-computers.html\" title=\"What Is the Future of Computers? | Moore's Law\">What Is the Future of Computers? | Moore's Law<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Integrated circuit from an EPROM memory microchip showing the memory blocks and supporting circuitry. In 1958, a Texas Instruments engineer named Jack Kilby cast a pattern onto the surface of an 11-millimeter-long \"chip\" of semiconducting germanium, creating the first-ever integrated circuit.  <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/moores-law\/what-is-the-future-of-computers-moores-law.php\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[14],"tags":[],"class_list":["post-220998","post","type-post","status-publish","format-standard","hentry","category-moores-law"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/220998"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=220998"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/220998\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=220998"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=220998"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=220998"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}