{"id":202246,"date":"2015-10-18T20:40:58","date_gmt":"2015-10-19T00:40:58","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/artificial-intelligence-wait-but-why.php"},"modified":"2015-10-18T20:40:58","modified_gmt":"2015-10-19T00:40:58","slug":"artificial-intelligence-wait-but-why","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/artificial-intelligence-wait-but-why.php","title":{"rendered":"Artificial Intelligence &#8211; Wait But Why"},"content":{"rendered":"<p><p>    Note: The reason this post took three    weeks to finish is that as I dug into research on Artificial    Intelligence, I could not believe what I was    reading. It hit me pretty quickly that whats happening in the    world of AI is not just an important topic, but by far THE most    important topic for our future. So I wanted to learn as much as    I could about it, and once I did that, I wanted to make sure I    wrote a post that really explained this whole situation and why    it matters so much. Not shockingly, that became outrageously    long, so I broke it into two parts. This is Part 1Part 2 is        here.  <\/p>\n<p>    _______________  <\/p>\n<p>    We are on the edge of change comparable to the rise of    human life on Earth.  Vernor Vinge  <\/p>\n<\/p>\n<p>    What does it feel like to stand here?  <\/p>\n<\/p>\n<p>    It seems like a pretty intense place to be standingbut then    you have to remember something about what its like to stand on    a time graph: you cant see whats to your right. So heres how    it actually feels to stand there:  <\/p>\n<\/p>\n<p>    Which probably feels pretty normal  <\/p>\n<p>    _______________  <\/p>\n<p>    Imagine taking a time machine back to 1750a time when the    world was in a permanent power outage, long-distance    communication meant either yelling loudly or firing a cannon in    the air, and all transportation ran on hay. When you get there,    you retrieve a dude, bring him to 2015, and then walk him    around and watch him react to everything. Its impossible for    us to understand what it would be like for him to see shiny    capsules racing by on a highway, talk to people who had been on    the other side of the ocean earlier in the day, watch sports    that were being played 1,000 miles away, hear a musical    performance that happened 50 years ago, and play with my    magical wizard rectangle that he could use to capture a    real-life image or record a living moment, generate a map with    a paranormal moving blue dot that shows him where he is, look    at someones face and chat with them even though theyre on the    other side of the country, and worlds of other inconceivable    sorcery. This is all before you show him the internet or    explain things like the International Space Station, the Large    Hadron Collider, nuclear weapons, or general relativity.  <\/p>\n<p>    This experience for him wouldnt be surprising or shocking or    even mind-blowingthose words arent big enough. He might    actually die.  <\/p>\n<p>    But heres the interesting thingif he then went back to 1750    and got jealous that we got to see his reaction and decided he    wanted to try the same thing, hed take the time machine and go    back the same distance, get someone from around the year 1500,    bring him to 1750, and show him everything. And the 1500 guy    would be shocked by a lot of thingsbut he wouldnt die. 
It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 and 2015. The 1500 guy would learn some mind-bending shit about space and physics, he'd be impressed with how committed Europe turned out to be with that new imperialism fad, and he'd have to do some major revisions of his world map conception. But watching everyday life go by in 1750 (transportation, communication, etc.) definitely wouldn't make him die.

No, in order for the 1750 guy to have as much fun as we had with him, he'd have to go much farther back, maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world, from a time when humans were, more or less, just another animal species, saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being "inside," and their enormous mountain of collective, accumulated human knowledge and discovery, he'd likely die.

And then what if, after dying, he got jealous and wanted to do the same thing? If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he'd show the guy everything and the guy would be like, "Okay, what's your point? Who cares?" For the 12,000 BC guy to have the same fun, he'd have to go back over 100,000 years and get someone he could show fire and language to for the first time.

In order for someone to be transported into the future and die from the level of shock they'd experience, they have to go enough years ahead that a "die level" of progress, or a Die Progress Unit (DPU), has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.

This pattern (human progress moving quicker and quicker as time goes on) is what futurist Ray Kurzweil calls human history's Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies, because they're more advanced. 19th-century humanity knew more and had better technology than 15th-century humanity, so it's no surprise that humanity made far more advances in the 19th century than in the 15th century; 15th-century humanity was no match for 19th-century humanity.

This works on smaller scales too. The movie Back to the Future came out in 1985, and "the past" took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes, but if the movie were made today and "the past" took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones; today's Marty McFly, a teenager born in the late '90s, would be much more out of place in 1985 than the movie's Marty McFly was in 1955.
This is for the same reason we just discussed: the Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985, because the former was a more advanced world, so much more change happened in the most recent 30 years than in the prior 30.

So: advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?

Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000; in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century's worth of progress happened between 2000 and 2014 and that another 20th century's worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century's worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.

If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015 (i.e., the next DPU might only take a couple decades), and the world in 2050 might be so vastly different from today's world that we would barely recognize it.

This isn't science fiction. It's what many scientists smarter and more knowledgeable than you or I firmly believe, and if you look at history, it's what we should logically predict.

So then why, when you hear me say something like "the world 35 years from now might be totally unrecognizable," are you thinking, "Cool... but nahhhhhhh"? Three reasons we're skeptical of outlandish forecasts of the future:

1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th-century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It's most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They'd be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they're moving now.

2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn't totally smooth and uniform.
Kurzweil explains that progress happens in S-curves:

An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases:

1. Slow growth (the early phase of exponential growth)
2. Rapid growth (the late, explosive phase of exponential growth)
3. A leveling off as the particular paradigm matures
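If it helps to see those three phases in numbers, here's a minimal sketch using a logistic function (my own choice of illustration; Kurzweil's curves aren't literally this equation). The local growth rate starts small, explodes in the middle, and dies back down as the curve levels off:

```python
import math

def s_curve(t, ceiling=1.0, steepness=1.0, midpoint=0.0):
    # Logistic function: a standard way to draw an S-curve.
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

# Sample the curve: slow growth early (phase 1), explosive growth in
# the middle (phase 2), and a leveling off as it matures (phase 3).
for t in range(-6, 7, 2):
    level = s_curve(t)
    rate = (s_curve(t + 0.01) - s_curve(t)) / 0.01  # local growth rate
    print(f"t={t:+d}  level={level:.3f}  rate={rate:.3f}")
```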
If you look only at very recent history, the part of the S-curve you're on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smartphones. That was Phase 2: the growth-spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that's missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.

3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as "the way things happen." We're also limited by our imagination, which takes our experience and uses it to conjure future predictions; but often, what we know simply doesn't give us the tools to think accurately about the future. When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, "That's stupid; if there's one thing I know from history, it's that everybody dies." And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.

So while nahhhhh might feel right as you read this post, it's probably actually wrong. The fact is, if we're being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they'll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human; kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what's going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that's coming next.

_______________

If you're like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you've been hearing it mentioned by serious people, and you don't really quite get it.

There are three reasons a lot of people are confused about the term AI:

1) We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.

2) AI is a broad topic. It ranges from your phone's calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.

3) We use AI all the time in our daily lives, but we often don't realize it's AI. John McCarthy, who coined the term "Artificial Intelligence" in 1956, complained that "as soon as it works, no one calls it AI anymore." Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to "insisting that the Internet died in the dot-com bust of the early 2000s."

So let's clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not, but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body, if it even has a body. For example, the software and data behind Siri is AI, the woman's voice we hear is a personification of that AI, and there's no robot involved at all.

Secondly, you've probably heard the term "singularity" or "technological singularity." This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It's been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don't apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology's intelligence exceeds our own, a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we'll be living in a whole new world. I found that many of today's AI thinkers have stopped using the term, and it's confusing anyway, so I won't use it much here (even though we'll be focusing on that idea throughout).

Finally, while there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI's caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There's AI that can beat the world chess champion in chess, but that's the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it'll look at you blankly.
AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board: a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we've yet to do it. Professor Linda Gottfredson describes intelligence as "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience." AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Artificial Superintelligence ranges from a computer that's just a little smarter than a human to one that's trillions of times smarter, across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words "immortality" and "extinction" will both appear in these posts multiple times.

As of now, humans have conquered the lowest caliber of AI (ANI) in many ways, and it's everywhere. The AI Revolution is the road from ANI, through AGI, to ASI; a road we may or may not survive but that, either way, will change everything.

Let's take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples:

ANI systems as they are now aren't especially scary. At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash, when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).

But while ANI doesn't have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively-harmless ANI as a precursor of the world-altering hurricane that's on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world's ANI systems "are like the amino acids in the early Earth's primordial ooze": the inanimate stuff of life that, one unexpected day, woke up.

Why It's So Hard

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down: all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.
What's interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you'd think they are. Build a computer that can multiply two ten-digit numbers in a split second? Incredibly easy. Build one that can look at a dog and answer whether it's a dog or a cat? Spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old's picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things (like calculus, financial market strategy, and language translation) are mind-numbingly easy for a computer, while easy things (like vision, motion, movement, and perception) are insanely hard for it. Or, as computer scientist Donald Knuth puts it, "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking.'"

What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it's not that software is dumb for not being able to figure out the slanty word recognition test when you sign up for a new account on a site; it's that your brain is super impressive for being able to.

On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven't had any time to evolve a proficiency at them, so a computer doesn't need to work too hard to beat us. Think about it: which would you rather do, build a program that could multiply big numbers or one that could understand the essence of a B well enough that you could show it a B in any one of thousands of unpredictable fonts or handwriting and it could instantly know it was a B?
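To make that asymmetry concrete, here's a toy sketch (the bitmaps and names are invented for illustration): the "hard-looking" task is one line of code, while even a trivial version of the "easy-looking" task breaks the moment the letter shifts by one pixel.

```python
def multiply(a, b):
    # The "hard-looking" task: trivial for a computer.
    return a * b

print(multiply(1234567890, 9876543210))  # instant

# The "easy-looking" task: a naive recognizer that only accepts one
# exact bitmap of "B" fails on the same letter nudged one pixel right.
TEMPLATE_B = ["###.", "#..#", "###.", "#..#", "###."]
SHIFTED_B  = [".###", ".#.#", ".###", ".#.#", ".###"]

def is_b(bitmap):
    return bitmap == TEMPLATE_B  # no tolerance for fonts, size, or position

print(is_b(TEMPLATE_B))  # True
print(is_b(SHIFTED_B))   # False -- brittle, unlike your visual cortex
```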
One fun example: when you look at this, you and a computer both can figure out that it's a rectangle with two distinct shades, alternating:

Tied so far. But if you pick up the black and reveal the whole image...

...you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees (a variety of two-dimensional shapes in several different shades), which is actually what's there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray. And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is: a photo of an entirely-black, 3-D rock:

Credit: Matthew Lloyd

And everything we just mentioned is still only taking in stagnant information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.

Daunting.

So how do we get there?

First Key to Creating AGI: Increasing Computational Power

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it'll need to equal the brain's raw computing capacity.

One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.

Ray Kurzweil came up with a shortcut by taking someone's professional estimate for the cps of one structure and that structure's weight compared to that of the whole brain and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark: around 10^16, or 10 quadrillion cps.

Currently, the world's fastest supercomputer, China's Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.

Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human-level (10 quadrillion cps), then that'll mean AGI could become a very real part of life.

Moore's Law is a historically-reliable rule that the world's maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil's cps/$1,000 metric, we're currently at about 10 trillion cps/$1,000, right on pace with this graph's predicted trajectory:

So the world's $1,000 computers are now beating the mouse brain and they're at about a thousandth of human level. This doesn't sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.
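The arithmetic behind both of those estimates fits in a few lines. Here's a minimal sketch (the per-structure numbers are invented placeholders, not Kurzweil's actual inputs). Note that a flat two-year doubling on its own would put brain-level $1,000 hardware closer to 2035; the 2025 figure assumes, per Kurzweil's chart, that the doubling interval itself keeps shrinking.

```python
import math

BRAIN_CPS = 1e16              # Kurzweil's ballpark for the whole brain
CPS_PER_1000_IN_2015 = 1e13   # about a thousandth of human level

# Kurzweil's shortcut: scale one structure's cps estimate by its share
# of total brain weight. Placeholder numbers, for illustration only.
structure_cps = 1e15
structure_weight_fraction = 0.10
print(f"whole-brain estimate: {structure_cps / structure_weight_fraction:.0e} cps")

# How many doublings from a 2015-era $1,000 computer to brain level?
doublings = math.log2(BRAIN_CPS / CPS_PER_1000_IN_2015)  # ~10
print(f"{doublings:.1f} doublings needed")
print(f"year at fixed 2-year doublings: {2015 + 2 * doublings:.0f}")
```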
So on the hardware side, the raw power needed for AGI is technically available now, in China, and we'll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn't make a computer generally intelligent; the next question is, how do we bring human-level intelligence to all that power?

Second Key to Creating AGI: Making it Smart

This is the icky part. The truth is, no one really knows how to make it smart; we're still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there and at some point, one of them will work. Here are the three most common strategies I came across:

This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can't do nearly as well as that kid, and then they finally decide "k fuck it I'm just gonna copy that kid's answers." It makes sense: we're stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing; optimistic estimates say we can do this by 2030. Once we do that, we'll know all the secrets of how the brain runs so powerfully and efficiently and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor "neurons," connected to each other with inputs and outputs, and it knows nothing, like an infant brain. The way it "learns" is it tries to do a task, say handwriting recognition, and at first, its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it's told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it's told it was wrong, those pathways' connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we're discovering ingenious new ways to take advantage of neural circuitry.
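That strengthen-or-weaken loop fits in a few lines of code. Below is a minimal sketch of a single artificial "neuron" (a classic perceptron on an invented toy task, standing in for the handwriting example): it guesses, and its connection strengths get nudged up or down depending on whether the guess was right.

```python
import random

# Toy task: learn to output 1 when either input is on (logical OR).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]  # random "wiring"
bias = random.uniform(-1, 1)
LEARNING_RATE = 0.1

def guess(x1, x2):
    return 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0

for _ in range(100):                      # many rounds of trial and feedback
    for (x1, x2), target in examples:
        error = target - guess(x1, x2)    # 0 if right, +/-1 if wrong
        # Strengthen or weaken the connections responsible for the answer.
        weights[0] += LEARNING_RATE * error * x1
        weights[1] += LEARNING_RATE * error * x2
        bias += LEARNING_RATE * error

for (x1, x2), target in examples:
    print((x1, x2), "->", guess(x1, x2), f"(want {target})")
```

Real networks stack millions of these units and use subtler update rules, but the learn-by-feedback principle is the same.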
More extreme plagiarism involves a strategy called "whole brain emulation," where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We'd then have a computer officially capable of everything the brain is capable of; it would just need to learn and gather information. If engineers get really good, they'd be able to emulate a real brain with such exact accuracy that the brain's full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he'd probably be really excited about.

How far are we from achieving whole brain emulation? Well, so far, we've just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress: now that we've conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.

So if we decide the smart kid's test is too hard to copy, we can try to copy the way he studies for the tests instead.

Here's something we know. Building a computer as powerful as the brain is possible; our own brain's evolution is proof. And if the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird's wing-flapping motions; often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.

So how can we simulate evolution to build AGI? The method, called "genetic algorithms," would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures "perform" by living life and are "evaluated" by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.
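That evaluate-breed-eliminate cycle is easy to see in miniature. Here's a minimal sketch where each "computer" is just a bit string and the fitness test is an invented stand-in task (matching a target pattern):

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # stand-in for "doing the task well"

def fitness(individual):
    # Evaluation: how many bits match the target.
    return sum(a == b for a, b in zip(individual, TARGET))

def breed(mom, dad):
    # Merge half of each parent's "programming," with a rare mutation.
    cut = len(mom) // 2
    child = mom[:cut] + dad[cut:]
    if random.random() < 0.1:
        child[random.randrange(len(child))] ^= 1
    return child

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # eliminate the rest
    children = [breed(random.choice(survivors), random.choice(survivors))
                for _ in range(10)]
    population = survivors + children

best = max(population, key=fitness)
print("best individual:", best, "fitness:", fitness(best))
```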
The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.

But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly; it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Secondly, evolution doesn't aim for anything, including intelligence; sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence (like revamping the ways cells produce energy) when we can remove those extra burdens and use things like electricity. It's no doubt we'd be much, much faster than evolution, but it's still not clear whether we'll be able to improve upon evolution enough to make this a viable strategy.

This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.

The idea is that we'd build a computer whose two major skills would be doing research on AI and coding changes into itself, allowing it to not only learn but to improve its own architecture. We'd teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job: figuring out how to make themselves smarter. More on this later.
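Why would that bootstrap run away rather than plod along? Because the output of each research round feeds back into the researcher. A crude sketch, with all the numbers invented for illustration:

```python
capability = 1.0        # ability to do AI research, in arbitrary units
GAIN_PER_ROUND = 0.10   # each round improves capability by 10% of itself

for round_number in range(1, 51):
    capability += GAIN_PER_ROUND * capability   # a better researcher next round
    if round_number % 10 == 0:
        print(f"round {round_number}: capability {capability:.1f}")

# Growth that looks negligible in the first few rounds ends up multiplying
# capability ~117x by round 50; that compounding is the whole point.
```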
Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:

1) Exponential growth is intense and what seems like a snail's pace of advancement can quickly race upwards; this GIF illustrates the concept nicely:

2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.

At some point, we'll have achieved AGI: computers with human-level general intelligence. Just a bunch of people and computers living together in equality.

Oh actually not at all.

The thing is, AGI with an identical level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:

Hardware:

Software:

AI, which will likely get to AGI by being programmed to self-improve, wouldn't see "human-level intelligence" as some important milestone (it's only a relevant marker from our point of view) and wouldn't have any reason to "stop" at our level. And given the advantages over us that even human intelligence-equivalent AGI would have, it's pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.

This may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we're aware of about any animal's intelligence is that it's far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:

So as AI zooms upward in intelligence toward us, we'll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity (Nick Bostrom uses the term "the village idiot"), we'll be like, "Oh wow, it's like a dumb human. Cute!" The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range; so just after hitting village idiot level and being declared to be AGI, it'll suddenly be smarter than Einstein and we won't know what hit us:

Original post: Artificial Intelligence - Wait But Why (http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html)