
Category Archives: Superintelligence

Integrating disciplines ‘key to dealing with digital revolution’ – Times Higher Education (THE)

Posted: July 4, 2017 at 8:30 am

Universities must embrace cross-disciplinary education and research in order to deal with the megatrends of the fourth industrial revolution, according to the president of the Korea Advanced Institute of Science and Technology (KAIST).

Speaking at the Times Higher Education Research Excellence Summit in Taichung, Taiwan, Sung-Chul Shin said that these challenges were particularly pressing in South Korea, which he described as being at a stall point where it can either continue developing as an advanced nation or get stuck with a stagnant economy.

The fourth industrial revolution, also known as the digital revolution, is predicted to change the way we live, work and relate to each other. It represents the integration of technology between the physical, digital and biological worlds.

Professor Shin said that there are three megatrends that will drive the fourth industrial revolution: hyperconnectivity, which is the integration of the physical and digital fields; superintelligence, which will draw on artificial intelligence and computer science; and the convergence of science and technology.

Universities should play a central role in developing the fourth industrial revolution, he said.

In meeting the challenges posed by the digital revolution, universities will need to bring research across various disciplines together, in order to achieve better results than when professors work in a single specialism, Professor Shin said. Research involving the areas of artificial intelligence, brain research, data science and computer science will be key, he added.

International collaboration was also vital, he continued, pointing out that Korea only invests a fraction of its research budget in brain science research when compared with the US and Japan. "We cannot compete so we have to collaborate," Professor Shin said.

"Concerning all megatrends, university reform is urgent," he added.

Professor Shin argued that students will need an education in the humanities and social sciences in addition to strong training in basic science and engineering, in order to improve their creative talents.

"Team-based learning and the flipped classroom is very important to fostering these skills in the next generation," he told the conference.

South Korea's major research and development policies for the future include expanding investment and improving the efficiency of research by streamlining the planning, management and evaluation of research projects, the conference heard. The government is also developing a series of strategic research priorities.

As the Korean government adopts the fourth industrial revolution, KAIST will play a pivotal role, Professor Shin said. "Korea is near a stall point; it is either destined to solidify its place as an advanced nation or be caught in the middle-income trap with a stagnant economy."

holly.else@timeshighereducation.com

Read the original post:

Integrating disciplines 'key to dealing with digital revolution' - Times Higher Education (THE)


The AI Revolution: The Road to Superintelligence | Inverse

Posted: July 3, 2017 at 8:29 am

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are.

This article originally appeared on Wait But Why by Tim Urban. This is Part 1; Part 2 is here.

We are on the edge of change comparable to the rise of human life on Earth.

-Vernor Vinge

What does it feel like to stand here?

It seems like a pretty intense place to be standing, but then you have to remember something about what it's like to stand on a time graph: you can't see what's to your right. So here's how it actually feels to stand there:

Which probably feels pretty normal

Imagine taking a time machine back to 1750, a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It's impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone's face and chat with them even though they're on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn't be surprising or shocking or even mind-blowing; those words aren't big enough. He might actually die.

But here's the interesting thing: if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he'd take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things, but he wouldn't die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he'd be impressed with how committed Europe turned out to be with that new imperialism fad, and he'd have to do some major revisions of his world map conception. But watching everyday life go by in 1750 (transportation, communication, etc.) definitely wouldn't make him die.

No, in order for the 1750 guy to have as much fun as we had with him, he'd have to go much farther back, maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world, from a time when humans were, more or less, just another animal species, saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being inside, and their enormous mountain of collective, accumulated human knowledge and discovery, he'd likely die.

And then what if, after dying, he got jealous and wanted to do the same thing? If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he'd show the guy everything and the guy would be like, "Okay, what's your point? Who cares?" For the 12,000 BC guy to have the same fun, he'd have to go back over 100,000 years and get someone he could show fire and language to for the first time.

In order for someone to be transported into the future and die from the level of shock they'd experience, they have to go enough years ahead that a "die level" of progress, or a Die Progress Unit (DPU), has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.

This pattern, human progress moving quicker and quicker as time goes on, is what futurist Ray Kurzweil calls human history's Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies, because they're more advanced. 19th century humanity knew more and had better technology than 15th century humanity, so it's no surprise that humanity made far more advances in the 19th century than in the 15th century; 15th century humanity was no match for 19th century humanity.

This works on smaller scales too. The movie Back to the Future came out in 1985, and the past took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes, but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones; today's Marty McFly, a teenager born in the late 90s, would be much more out of place in 1985 than the movie's Marty McFly was in 1955.

This is for the same reason we just discussed: the Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985, because the former was a more advanced world; so much more change happened in the most recent 30 years than in the prior 30.

So advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?

Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000; in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century's worth of progress happened between 2000 and 2014, and that another 20th century's worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century's worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.

If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015, i.e. the next DPU might only take a couple decades, and the world in 2050 might be so vastly different than today's world that we would barely recognize it.

This isn't science fiction. It's what many scientists smarter and more knowledgeable than you or I firmly believe, and if you look at history, it's what we should logically predict.

So then why, when you hear me say something like "the world 35 years from now might be totally unrecognizable," are you thinking, "Cool. But nahhhhhhh"? Three reasons we're skeptical of outlandish forecasts of the future:

1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It's most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They'd be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they're moving now. (A small numeric sketch of the difference follows this list.)

2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn't totally smooth and uniform. Kurzweil explains that progress happens in S-curves:

An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases:

If you look only at very recent history, the part of the S-curve you're on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smart phones. That was Phase 2: the growth spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that's missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.

3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as "the way things happen." We're also limited by our imagination, which takes our experience and uses it to conjure future predictions, but often, what we know simply doesn't give us the tools to think accurately about the future. When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, "That's stupid; if there's one thing I know from history, it's that everybody dies." And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.
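Here is a minimal numeric sketch of the gap between those two habits of mind; every figure is made up purely for illustration:

```python
# Toy comparison of linear vs. exponential projection of "progress".
# All numbers are arbitrary; only the shape of the comparison matters.
progress_30_years_ago = 1.0   # cumulative progress in 1985, in made-up units
progress_today = 8.0          # assume 8x as much cumulative progress by 2015

# Linear habit: assume the next 30 years add the same amount as the last 30.
linear_guess_2045 = progress_today + (progress_today - progress_30_years_ago)

# Exponential habit: assume the same growth *factor* repeats.
growth_factor = progress_today / progress_30_years_ago
exponential_guess_2045 = progress_today * growth_factor

print(linear_guess_2045)       # 15.0
print(exponential_guess_2045)  # 64.0 -- several times the linear guess
```

The point is not the particular numbers but that the two methods diverge more and more the further out you project.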

So while "nahhhh" might feel right as you read this post, it's probably actually wrong. The fact is, if we're being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they'll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human, kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what's going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that's coming next.

If you're like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you've been hearing it mentioned by serious people, and you don't really quite get it.

There are three reasons a lot of people are confused about the term AI:

1) We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.

2) AI is a broad topic. It ranges from your phone's calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.

3) We use AI all the time in our daily lives, but we often don't realize it's AI. John McCarthy, who coined the term "Artificial Intelligence" in 1956, complained that "as soon as it works, no one calls it AI anymore." Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to insisting that the Internet died in the dot-com bust of the early 2000s.

So let's clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not, but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body, if it even has a body. For example, the software and data behind Siri is AI, the woman's voice we hear is a personification of that AI, and there's no robot involved at all.

Secondly, you've probably heard the term "singularity" or "technological singularity." This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It's been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don't apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology's intelligence exceeds our own, a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly infinite pace, and after which we'll be living in a whole new world. I found that many of today's AI thinkers have stopped using the term, and it's confusing anyway, so I won't use it much here (even though we'll be focusing on that idea throughout).

Finally, while there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI's caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There's AI that can beat the world chess champion in chess, but that's the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it'll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board, a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we're yet to do it. Professor Linda Gottfredson describes intelligence as "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience." AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Artificial Superintelligence ranges from a computer that's just a little smarter than a human to one that's trillions of times smarter across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words "immortality" and "extinction" will both appear in these posts multiple times.

As of now, humans have conquered the lowest caliber of AI, ANI, in many ways, and it's everywhere. The AI Revolution is the road from ANI, through AGI, to ASI, a road we may or may not survive but that, either way, will change everything.

Let's take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples:

ANI systems as they are now aren't especially scary. At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash, when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).

But while ANI doesn't have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively-harmless ANI as a precursor of the world-altering hurricane that's on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world's ANI systems are like the amino acids in the early Earth's primordial ooze: the inanimate stuff of life that, one unexpected day, woke up.

Why It's So Hard

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down: all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.

What's interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you'd think they are. Build a computer that can multiply two ten-digit numbers in a split second? Incredibly easy. Build one that can look at a dog and answer whether it's a dog or a cat? Spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old's picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things like calculus, financial market strategy, and language translation are mind-numbingly easy for a computer, while easy things like vision, motion, movement, and perception are insanely hard for it. Or, as computer scientist Donald Knuth puts it, "AI has by now succeeded in doing essentially everything that requires thinking but has failed to do most of what people and animals do without thinking."

What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it's not that malware is dumb for not being able to figure out the slanty word recognition test when you sign up for a new account on a site; it's that your brain is super impressive for being able to.

On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven't had any time to evolve a proficiency at them, so a computer doesn't need to work too hard to beat us. Think about it: which would you rather do, build a program that could multiply big numbers or one that could understand the essence of a B well enough that you could show it a B in any one of thousands of unpredictable fonts or handwriting and it could instantly know it was a B?

One fun example: when you look at this, you and a computer both can figure out that it's a rectangle with two distinct shades, alternating:

Tied so far. But if you pick up the black and reveal the whole image

you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees (a variety of two-dimensional shapes in several different shades), which is actually what's there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray. And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is: a photo of an entirely black, 3-D rock:

And everything we just mentioned is still only taking in stagnant information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.

Daunting.

So how do we get there?

First Key to Creating AGI: Increasing Computational Power

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it'll need to equal the brain's raw computing capacity.

One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.

Ray Kurzweil came up with a shortcut by taking someone's professional estimate for the cps of one structure and that structure's weight compared to that of the whole brain, and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark, around 10^16, or 10 quadrillion, cps.
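A back-of-the-envelope version of that shortcut, with made-up numbers standing in for the professional estimates Kurzweil actually used:

```python
# Kurzweil-style extrapolation: take one structure's estimated cps and scale it
# up by that structure's share of the whole brain. The figures below are
# hypothetical placeholders, not real measurements.
structure_cps = 1.0e15            # assumed estimate for one brain structure
structure_mass_fraction = 0.10    # assume the structure is ~10% of the brain

whole_brain_cps = structure_cps / structure_mass_fraction
print(f"{whole_brain_cps:.0e} cps")   # 1e+16 -- the ballpark quoted above
```

Repeating this with independent estimates for different structures and landing in the same order of magnitude each time is what gives the 10^16 figure whatever credibility it has.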

Currently, the world's fastest supercomputer, China's Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.

Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human level (10 quadrillion cps), then that'll mean AGI could become a very real part of life.

Moore's Law is a historically-reliable rule that the world's maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil's cps/$1,000 metric, we're currently at about 10 trillion cps/$1,000, right on pace with this graph's predicted trajectory:

So the world's $1,000 computers are now beating the mouse brain, and they're at about a thousandth of human level. This doesn't sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.
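That decade-by-decade pattern (roughly a factor of 1,000 every ten years, which is faster than a plain two-year doubling) can be extrapolated directly; a rough sketch using only the figures quoted in the paragraph above:

```python
# Extrapolate the thousand-fold-per-decade trend in cps per $1,000:
# a trillionth of human level in 1985, a billionth in 1995, a millionth
# in 2005, a thousandth in 2015, against the ~10^16 cps brain estimate.
human_level_cps = 1e16
cps_per_1000_dollars = human_level_cps / 1000   # "a thousandth of human level" in 2015
year = 2015

while cps_per_1000_dollars < human_level_cps:
    year += 10
    cps_per_1000_dollars *= 1000                # one more decade, one more factor of ~1,000

print(year)   # 2025 -- the "affordable human-level hardware" date in the text
```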

So on the hardware side, the raw power needed for AGI is technically available now, in China, and we'll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn't make a computer generally intelligent; the next question is, how do we bring human-level intelligence to all that power?

Second Key to Creating AGI: Making it Smart

This is the icky part. The truth is, no one really knows how to make it smart; we're still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there, and at some point, one of them will work. Here are the three most common strategies I came across:

1) Plagiarize the brain.

This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can't do nearly as well as that kid, and then they finally decide "k fuck it I'm just gonna copy that kid's answers." It makes sense; we're stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing; optimistic estimates say we can do this by 2030. Once we do that, we'll know all the secrets of how the brain runs so powerfully and efficiently, and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor "neurons," connected to each other with inputs and outputs, and it knows nothing, like an infant brain. The way it learns is it tries to do a task, say handwriting recognition, and at first, its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it's told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it's told it was wrong, those pathways' connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we're discovering ingenious new ways to take advantage of neural circuitry.
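Here is a deliberately tiny sketch of that trial-and-feedback loop: a single artificial neuron learning the logical AND of two inputs instead of handwriting. The task and every parameter are stand-ins chosen only to keep the example short, and the update rule is a simplified version of the strengthen/weaken idea described above.

```python
import random

# One artificial "neuron": two weighted input connections plus a threshold.
# When a guess is wrong, the connections are nudged toward the right answer;
# when it is right, nothing changes.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
learning_rate = 0.1

for _ in range(100):                       # many rounds of trial and feedback
    for (x1, x2), target in examples:
        guess = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - guess             # 0 if correct, +1 or -1 if wrong
        weights[0] += learning_rate * error * x1   # adjust each connection
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

print(weights, bias)   # the trained connections now implement AND
```

A real neural network stacks many such units in layers and tunes millions of connections, but the principle is the same: reinforce the pathways that produced right answers and dampen the ones that produced wrong ones.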

More extreme plagiarism involves a strategy called whole brain emulation, where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We'd then have a computer officially capable of everything the brain is capable of; it would just need to learn and gather information. If engineers get really good, they'd be able to emulate a real brain with such exact accuracy that the brain's full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he'd probably be really excited about.

How far are we from achieving whole brain emulation? Well, so far, we've just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress; now that we've conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.

2) Try to make evolution do what it did before but for us this time.

So if we decide the smart kid's test is too hard to copy, we can try to copy the way he studies for the tests instead.

Here's something we know: building a computer as powerful as the brain is possible; our own brain's evolution is proof. And if the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird's wing-flapping motions; often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.

So how can we simulate evolution to build AGI? The method, called genetic algorithms, would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures perform by living life and are evaluated by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.
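A bare-bones sketch of that performance-and-evaluation loop. Here the "task" is just accumulating 1-bits in a string, a stand-in for any behavior you can score, and every parameter is arbitrary:

```python
import random

def fitness(genome):
    """The performance test: more 1s means a better score."""
    return sum(genome)

def breed(parent_a, parent_b):
    """Merge half of each parent's 'programming', with occasional mutation."""
    half = len(parent_a) // 2
    child = parent_a[:half] + parent_b[half:]
    if random.random() < 0.1:                    # rare random mutation
        i = random.randrange(len(child))
        child[i] = 1 - child[i]
    return child

# A population of random candidate "computers".
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                  # the less successful are eliminated
    children = [breed(random.choice(survivors), random.choice(survivors))
                for _ in range(20)]
    population = survivors + children

print(max(fitness(g) for g in population))       # climbs toward 20 over generations
```

The hard part in the real version is exactly what the paragraph says: automating an evaluation step that actually measures something like intelligence, so the cycle can run without a human in the loop.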

The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.

But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly; it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Secondly, evolution doesn't aim for anything, including intelligence; sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence, like revamping the ways cells produce energy, when we can remove those extra burdens and use things like electricity. It's no doubt we'd be much, much faster than evolution, but it's still not clear whether we'll be able to improve upon evolution enough to make this a viable strategy.

3) Make this whole thing the computers problem, not ours.

This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.

The idea is that we'd build a computer whose two major skills would be doing research on AI and coding changes into itself, allowing it to not only learn but to improve its own architecture. We'd teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job: figuring out how to make themselves smarter. More on this later.

All of This Could Happen Soon

Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:

1) Exponential growth is intense, and what seems like a snail's pace of advancement can quickly race upwards; this GIF illustrates this concept nicely:

2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.

At some point, we'll have achieved AGI: computers with human-level general intelligence. Just a bunch of people and computers living together in equality.

Oh actually not at all.

The thing is, AGI with an identical level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:

AI, which will likely get to AGI by being programmed to self-improve, wouldn't see human-level intelligence as some important milestone (it's only a relevant marker from our point of view) and wouldn't have any reason to stop at our level. And given the advantages over us that even human intelligence-equivalent AGI would have, it's pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.

This may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we're aware of about any animal's intelligence is that it's far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:

So as AI zooms upward in intelligence toward us, we'll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity (Nick Bostrom uses the term "the village idiot"), we'll be like, "Oh wow, it's like a dumb human. Cute!" The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range, so just after hitting village idiot level and being declared to be AGI, it'll suddenly be smarter than Einstein and we won't know what hit us:

And what happens after that?

I hope you enjoyed normal time, because this is when this topic gets unnormal and scary, and it's gonna stay that way from here forward. I want to pause here to remind you that every single thing I'm going to say is real: real science and real forecasts of the future from a large array of the most respected thinkers and scientists. Just keep remembering that.

Anyway, as I said above, most of our current models for getting to AGI involve the AI getting there by self-improvement. And once it gets to AGI, even systems that formed and grew through methods that didn't involve self-improvement would now be smart enough to begin self-improving if they wanted to.

And here's where we get to an intense concept: recursive self-improvement. It works like this:

An AI system at a certain level, let's say human village idiot, is programmed with the goal of improving its own intelligence. Once it does, it's smarter, maybe at this point it's at Einstein's level, so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion, and it's the ultimate example of The Law of Accelerating Returns.
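A toy model of why that loop runs away. The only assumption built in is the one from the paragraph above, that the smarter the system already is, the bigger its next self-improvement step; the numbers themselves are arbitrary:

```python
# Each cycle, the system improves itself in proportion to its current level,
# so the absolute size of each leap keeps growing.
intelligence = 1.0        # say 1.0 = "village idiot" on an arbitrary scale
gain_per_cycle = 0.5      # each cycle adds 50% of the current level

for cycle in range(1, 21):
    intelligence *= 1 + gain_per_cycle
    print(f"cycle {cycle:2d}: intelligence {intelligence:10.1f}")

# The step from 1.0 to ~2.0 takes two cycles; by cycle 20 the system sits
# more than 3,000 times above where it started.
```

Linear growth would add the same fixed amount every cycle; making the gain proportional to the current level is what turns steady tinkering into an explosion.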

There is some debate about how soon AI will reach human-level general intelligence. The median year on a survey of hundreds of scientists about when they believed we'd be more likely than not to have reached AGI was 2040; that's only 25 years from now, which doesn't sound that huge until you consider that many of the thinkers in this field think it's likely that the progression from AGI to ASI happens very quickly. Like, this could happen:

It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ; we don't have a word for an IQ of 12,952.

What we do know is that humans' utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim, and this might happen in the next few decades.

Read more from the original source:

The AI Revolution: The Road to Superintelligence | Inverse


No need to fear Artificial Intelligence – Livemint – Livemint

Posted: June 29, 2017 at 11:31 am

Love it or fear it. Call it useful or dismiss it as plain hype. Whatever your stand is, Artificial Intelligence, or AI, will remain the overarching theme of the digital story that individuals and companies will be discussing for quite some time to come.

It is important, however, to understand the nature of AI: what it can and cannot do.

Unfortunately, individuals and companies often fail to understand this.

For instance, most of the AI we see around us caters to narrow, specific areas, and hence is categorised as weak AI. Examples include most of the AI chatbots, AI personal assistants and smart home assistants that we see, including Apple's Siri, Microsoft's Cortana, Google's Allo, Amazon's Alexa or Echo and Google's Home.

Driverless cars and trucks, however impressive they sound, are still higher manifestations of weak AI.

In other words, weak AI lacks human consciousness. Moreover, though we talk about the use of artificial neural networks (ANNs) in deep learning (a subset of machine learning), ANNs do not behave like the human brain. They are loosely modelled on the human brain.

As an example, just because a plane flies in the air, you do not call it a bird. Yet, we are comfortable with the idea that a plane is not a bird and we fly across the world in planes.

Why, then, do we fear AI? Part of the reason is that most of us confuse weak AI with strong AI. Machines with strong AI will have a brain as powerful as the human brain.

Such machines will be able to teach themselves, learn from others, perceive, emote; in other words, do everything that human beings do and more. It's the "more" aspect that we fear most.

Strong AI, also called true intelligence or artificial general intelligence (AGI), is still a long way off.

Some use the term Artificial Superintelligence (ASI) to describe a system with the capabilities of an AGI, without the physical limitations of humans, that would learn and improve far beyond human level.

Many people, including experts, fear that if strong AI becomes a reality, AI may become more intelligent than humans. When this will happen is anyones guess.

Till then, we have our work cut out to figure out how we can make the best use of AI in our lives and companies.

The Accenture Technology Vision 2017 report, for instance, points out that AI is the new user interface (UI), since AI is making every interface both simple and smart. In this edition of Mint's Technology Quarterly, we also have experts from Boston Consulting Group talking about value creation in digital, and other experts discussing the path from self-healing grids to self-healing factories.

This edition also features articles on the transformative power of genome editing, and how smart cars are evolving.

Your feedback, as usual, will help us make the forthcoming editions better.

First Published: Wed, Jun 28 2017. 11:18 PM IST

Originally posted here:

No need to fear Artificial Intelligence - Livemint - Livemint


The bots are coming – The New Indian Express

Posted: June 22, 2017 at 5:26 am

Facebook's artificial intelligence research lab was training chatbots to negotiate. The bots soon learned the usefulness of deceit.

Deceit & dealmaking: The bots feigned interest in a valueless item, so they could later compromise by conceding it. Deceit is a complex skill that requires hypothesising the other's beliefs, and is learnt late in child development. "Our agents learnt to deceive without explicit human design, simply by trying to achieve their goals," says the research paper.

Has Terminator's Skynet arrived? The bots also developed their own language. The researchers had to take measures to prevent them from using non-human language. But the bots weren't better at negotiating than humans. So the Terminator's Skynet is not here yet, but should we be worried?

The fable of the sparrows: A colony of sparrows found it tough to manage on their own. After another day of long, hard work, they decided to steal an owl egg. They reckoned an owl would help them build their nests, look after their young and keep an eye on predators.

But one sparrow disagreed: "This will be our undoing. Should we not first give some thought to owl-domestication?" But the rest went on with their plan. They felt it would be difficult to find an owl egg. "After succeeding in raising an owl, we can think about this," they said. Nick Bostrom narrates this in his book Superintelligence to illustrate the dangers of AI.

Visit link:

The bots are coming - The New Indian Express


Effective Altruism Says You Can Save the Future by Making Money – Motherboard

Posted: June 21, 2017 at 4:29 am

There is no contradiction in claiming that, as Steven Pinker argues, the world is getting better in many important respects and also that the world is a complete mess. Sure, your chances of being murdered may be lower than at any time before in human history, but one could riposte that given the size of the human population today, there has never been more total disutility, or suffering/injustice/evil, engulfing our planet.

Just consider that about 3.1 million children died of hunger in 2013, averaging nearly 8,500 each day. Along these lines, about 66 million children attend class hungry in the developing world; roughly 161 million kids under five are nutritionally stunted; 99 million are underweight; and 51 million suffer from wasting. Similarly, an estimated 1.4 billion people live on less than $1.25 per day while roughly 2.5 billion earn less than $2 per day, and in 2015 about 212 million people were diagnosed with malaria, with some 429,000 dying.

The idea is to optimize the total amount of good that one can do in the world

This is a low-resolution snapshot of the global predicament of humanity today, one that doesn't even count the frustration, pain, and misery caused by sexism, racism, factory farming, terrorism, climate change, and war. So the question is: how can we make the world more livable for sentient life? What actions can we take to alleviate the truly massive amounts of suffering that plague our pale blue dot? And to what extent should we care about the many future generations that could come into existence?

I recently attended a conference at Harvard University about a fledgling movement called effective altruism (EA), popularized by philosophers like William MacAskill and Facebook cofounder Dustin Moskovitz. Whereas many philanthropically inclined individuals make decisions to donate based on which causes tug at their heartstrings, this movement takes a highly data-driven approach to charitable giving. The idea is to optimize the total amount of good that one can do in the world, even if it's counterintuitive.

For example, one might think that donating money to buy books for schools in low-income communities across Africa is a great way to improve the education of children victimized by poverty, but it turns out that spending this money on deworming programs could be a better way of improving outcomes. Studies show that deworming can reduce the rate of absenteeism in schools by 25 percent (a problem that buying more books fails to address), and that "the children who had been de-wormed earned 20% more than those who hadn't."

Similarly, many people in the developed world feel compelled to donate money to disaster relief following natural catastrophes like earthquakes and tsunamis. While this is hardly immoral, data reveals the money donated could have more tangible impact if spent on insecticide-treated mosquito nets for people in malaria-prone regions of Africa.

Another surprising, and controversial, suggestion within effective altruism is that boycotting sweatshops in the developing world often does more harm than good. The idea is that, however squalid the working conditions of sweatshops are, they usually provide the very best jobs around. If a sweatshop worker were forced to take a different job (and there's no guarantee that another job would even be available), it would almost certainly involve much more laborious work for lower wages. As the New York Times quotes a woman in Cambodia who scavenges garbage dumps for a living, "I'd love to get a job in a factory. At least that work is in the shade. Here is where it's hot."

There are, of course, notable criticisms of this approach. Consider the story of Matt Wage. After earning an undergraduate degree at Princeton, he was accepted by the University of Oxford to earn a doctorate in philosophy. But instead of attending this program (one of the very best in the world), he opted to get a job on Wall Street making a six-figure salary. Why? Because, he reasoned, if he were to save 100 children from a burning building, it would be the best day of his life. As it happens, he could save the same number of children over the course of his life as a professional philosopher who donates a large portion of his salary to charity. But, crunching the numbers, if he were to get a high-paying job at, say, an arbitrage trading firm and donate half of his earnings to, say, the Against Malaria Foundation, he could potentially save hundreds of children from dying "within the first year or two of his working life and every year thereafter."

Some people think superintelligence is too far away to be of concern

The criticism leveled at this idea is that Wall Street may itself be a potent source of badness in the world, and thus participating in the machine as a cog might actually contribute net harm. But effective altruists would respond that what matters isn't just what one does, but what would have happened if one hadn't acted in a particular way. If Wage hadn't gotten the job on Wall Street, someone else would have: someone who wasn't as concerned about the plight of African children, whereas Wage earns to give money that saves thousands of disadvantaged people.

Another objection is that many effective altruists are too concerned about the potential risks associated with machine superintelligence. Some people think superintelligence is too far away to be of concern or unlikely to pose any serious threats to human survival. They maintain that spending money to research what's called the "AI control problem" is misguided, if not a complete waste of resources. But the fact is that there are good arguments for thinking that, as Stephen Hawking puts it, if superintelligence isn't the worst thing to happen to humanity, it will likely be the very best. And effective altruists (and I) would argue that designing a "human friendly" superintelligence is a highly worthwhile task, even if the first superintelligent machine won't make its debut on Earth until the end of this century. In sum, the expected value of solving the AI control problem could be astronomically high.

Perhaps the most interesting idea within the effective altruism movement is that we should not just worry about present day humans but future humans as well. According to one study published in the journal Sustainability, "most individuals' abilities to imagine the future goes 'dark' at the ten-year horizon." This likely stems from our cognitive evolution in an ancient environment (like the African savanna) in which long-term thinking was not only unnecessary for survival but might actually have been disadvantageous.

Yet many philosophers believe that, from a moral perspective, this "bias for the short-term" is completely unjustified. They argue that when one is born should have no bearing on one's intrinsic valuethat is to say, "time discounting," or valuing the future less than the present, should not apply to human lives.

First, there is the symmetry issue: if future lives are worth less than present lives, then are past lives worth less as well? Or, from the perspective of past people, are our lives worth less than theirs? Second, consider that using a time discounting annual rate of 10 percent, a single person today would be equal in value to an unimaginable 4.96 x 10^20 people 500 years hence. Does that strike one as morally defensible? Is it right that one person dying today constitutes an equivalent moral tragedy to a global holocaust that kills 4.96 x 10^20 people in five centuries?
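The arithmetic behind that 4.96 x 10^20 figure is plain compound discounting; a quick check, assuming the 10 percent rate is applied annually over the full 500 years:

```python
# One present person, expressed in equivalent future persons under a
# 10% annual discount rate compounded over 500 years.
discount_rate = 0.10
years = 500

equivalent_future_people = (1 + discount_rate) ** years
print(f"{equivalent_future_people:.2e}")   # ~4.97e+20, essentially the figure quoted above
```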

And finally, our best estimates of how many people could come to exist in the future indicate that this number could be exceptionally large. For example, the Oxford philosopher Nick Bostrom estimates that some 10^16 people with normal lifespans could exist on Earth before the sun sterilizes it in a billion years or so. Yet another educated guess is that "a hundred thousand billion billion billion", that is 100,000,000,000,000,000,000,000,000,000,000 (10^32), people could someday populate the visible universe. To date, there have been approximately 60 billion humans on Earth, or about 6 x 10^10, meaning that the human (or posthuman, if our progeny evolves into technologically enhanced cyborgs) story may have only just begun.

Read More: Today's Kids Could Live Through Machine Superintelligence, Martian Colonies and a Nuclear Attack

Caring about the far future leads some effective altruists to focus specifically on what Bostrom calls "existential risks," or events that would either trip our species into the eternal grave of extinction or irreversibly catapult us back to the Paleolithic.

Since the end of World War II, there has been an unsettling increase in both the total number of existential risks (such as nuclear conflict, climate change, global biodiversity loss, engineered pandemics, grey goo, geoengineering, physics experiments, and machine superintelligence) and the overall probability of civilizational collapse, or worse, occurring. For example, the cosmologist Lord Martin Rees puts the likelihood of civilization imploding at 50 percent this century, and Bostrom argues that an existential catastrophe has an equal to or greater than 25 percent chance of happening. It follows that, as Stephen Hawking recently put it, humanity has never lived in more dangerous times.

This is why I believe that the movement's emphasis on the deep future is a very good thing. Our world is one in which contemplating what lies ahead often extends no further than quarterly reports and the next political election. Yet, as suggested above, the future could contain astronomical amounts of value if only we manage to slalom through the obstacle course of natural and anthropogenic hazards before us. While contemporary issues like global poverty, disease, and animal welfare weigh heavily on the minds of many effective altruists, it is encouraging to see a growing number of people taking seriously the issue of humanity's long-term future.

This article draws from Phil Torres's forthcoming book Morality, Foresight, and Human Flourishing: An Introduction to Existential Risk Studies.

Here is the original post:

Effective Altruism Says You Can Save the Future by Making Money - Motherboard


Facebook Chatbots Spontaneously Invent Their Own Non-Human … – Interesting Engineering

Posted: June 18, 2017 at 11:20 am

Facebook chatbot agents have spontaneously created their own non-human language. Researchers were developing negotiating chatbots when they found out the bots had developed a unique way to communicate with each other. Facebook chatbots have accidentally given us a glimpse into the future of language.

[Image Source: FAIR]

Researchers from the Facebook Artificial Intelligence Research lab (FAIR) have released a report that describes training their chatbots to negotiate using machine learning. The chatbots were actually very successful negotiators, but researchers soon realized they needed to change their modes because when the bots were allowed to communicate among themselves, they started to develop their own unique negotiating language.

[Image Source: FAIR]

This unique and spontaneous development of non-human language was an incredible surprise for the researchers who had to redevelop their modes of teaching to allow for less unstructured and unsupervised bot-to-bot time.

The chatbots surprised their developers in other ways too, proving to excel at the art of negotiation, going as far as to use advanced negotiation techniques such as feigning interest in something valueless in order to concede it later in the negotiations, ensuring the best outcome. Co-author of the report, Dhruv Batra, said, "No human [programmed] that strategy, this is just something they discovered by themselves that leads to high rewards."

But don't panic: the accidental discovery of some basic communication between chatbots isn't about to trigger the singularity. The singularity, if you are not up on the doomsday techno jargon, is the term used for the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.

But these chatty bots definitely provide some solid opportunities for thinking about the way we understand language, particularly the common view that language is our exclusive domain as humans.

The research also highlights the fact that we have a long way to go in understanding machine learning. Right now there is a lot of guesswork, which often involves examining how the machine "thinks" by evaluating the output after feeding a neural net a massive meal of data.

The idea that the machine can create its own language highlights just how many holes there are in our knowledge of machine learning, even for the experts designing the systems.

The findings got the team at Facebook fired up. They write: "There remains much potential for future work, particularly in exploring other reasoning strategies, and in improving the diversity of utterances without diverging from human language."

Chatbots are widespread across the customer service industry, using common keywords to answer FAQ-type inquiries. Often these bots have a short run time before the requests get too complicated. Facebook has been investing heavily in chatbot technology, and other large corporations are set to follow. While there isn't a strong indication of how these negotiating bots will be used by Facebook, other projects are getting big results.

A bot called DoNotPay has helped over 250,000 people overturn more than 160,000 parking tickets in New York and London by working out whether an appeal to a ticket is possible through a series of simple questions, and then guiding the user through the appeal process.

Sources: FAIR, The Verge, The Guardian, The Atlantic, Futurism

Featured Image Source: Pixabay

Read the original post:

Facebook Chatbots Spontaneously Invent Their Own Non-Human ... - Interesting Engineering

Posted in Superintelligence | Comments Off on Facebook Chatbots Spontaneously Invent Their Own Non-Human … – Interesting Engineering

Cars 3 gets back to what made the franchise adequate – Vox

Posted: June 12, 2017 at 8:21 pm

To call the Cars movies the black sheep of Pixar's filmography does a disservice to black sheep. The first one (released in 2006) is considered the one major black mark in the animation studio's killer run from 1995's Toy Story to 2010's Toy Story 3, and 2011's Cars 2 is the only Pixar film with a rotten score on Rotten Tomatoes.

And, okay, I won't speak up too heartily for Cars 2 (may it always be the worst Pixar movie), but the original Cars is a good-natured, even-keeled sort of film, one that celebrates taking it slow every once in a while. It's no Incredibles or Wall-E, but few movies are. Its heart is in the right place.

Thus, it's a relief that Cars 3 skews more toward the original flavor than the sequel (a spy movie-inflected mess that revealed a Pixar slightly out of its depth with something so action-heavy). It's not on the level of that first film, but its amiable, ambling nature keeps it from becoming too boxed in by its needlessly contorted plot (which all but spoils its own ending very early on, then spends roughly an hour futilely avoiding said ending).

Like all Pixar movies, Cars 3 is gorgeous: the landscapes the characters race through are more photorealistic than ever, recalling The Good Dinosaur (another recent Pixar misfire that nonetheless looked great). But like most of the studio's 2010s output, its storytelling is perhaps too complicated to really register. The movie is constantly trying to outmaneuver itself, leading to a film that's pleasant but not much more.

Still, that doesn't mean it's devoid of value. Here are six useful ways of thinking about Cars 3.

This is the angle Disney is pushing most in the trailers for the film. Lightning McQueen (Owen Wilson), the hotshot race car who learned to take it easy in Cars, has succumbed to the ravages of time, as we all must. Newer, sleeker race cars are outpacing him on the track, and he's desperate to make a comeback.

But Cars 3 resists the most feel-good version of that story, to its credit. Lightning isn't going to suddenly become faster in his middle age. If he wants to beat the young whippersnappers, he'll have to either outsmart them or out-train them. But Lightning isn't one for high-tech gadgets that might help him eke out a few more miles per hour from his chassis. Instead, he goes on a random tour of the American South, visiting hallowed racetracks.

It gives the movie a tried-and-true spine (old-fashioned know-how versus new tech), but it also means that every time the story seems to be gaining momentum, it veers completely off course in a new direction. Pixar used this tendency to let its stories swerve all over the place to great effect in 2012's Brave and 2013's Monsters University, but Cars 3 has maybe a few too many head fakes. By the time Lightning tries to tap into his roots by visiting legendary racers in North Carolina, I felt slightly checked out.

Seriously! This is a major part of Cars 3's climax!

The movie argues that the best thing Lightning (who's always been coded as a good ol' Texas boy) can do to help preserve his legacy is to find ways to hold open doors for cars that are not at all like himself. And the leader of the new class of racers, Jackson Storm, is voiced by Armie Hammer as a sleek, might-makes-right bully who never nods to the fact that he's so much faster because he's got access to a lot of great technology.

A major scene at the film's midpoint involves Lightning learning that his trainer, Cruz Ramirez (voiced by the comedian Cristela Alonzo), always wanted to be a racer herself, but felt intimidated by how she wasn't like the other race cars the one time she tried out.

How did Lightning build up the confidence to race? Cruz asks. Lightning shrugs. He doesn't know. He's just always had it.

Just the description of this scene (or the even earlier scene where Cruz dominates a simulated race) probably telegraphs where all of this is headed. But it's still neat that Pixar used its most little-boy-friendly franchise to make an argument for more level, more diverse playing fields. Except...

The Cars movies have always moved merchandise, and even if all involved parties insist they continue to make Cars movies for reasons other than because they sell toys: c'mon. The fact that the movie's major new character is an explicitly female car, who gets a variety of new paint jobs throughout the film, no less, feels like somebody in a boardroom somewhere said, "Yes, but what if we had a way to make the toys from these movies appeal to little girls as well?"

(And that's to say nothing of the numerous other new characters introduced throughout the film, all of whom your children will simply have to own the action figures for. My favorite was a school bus named Miss Fritter who dominates demolition derbies.)

So it goes with Disney, one of the best companies out there when it comes to diversifying the points of view represented in its films, but always, as the cynics among us are prone to assume, because it sees those points of view as a way to sell you more stuff.

Kerry Washington plays a new character named Natalie Certain, a journalist who pops up every so often to point out how her data can't lie and how Jackson Storm has a 96 percent probability of winning the film's climactic race. I'll let you draw your own conclusions from there.

When Pixar made Cars 2, it faced a major challenge. The first film, dealing with Lightning's slow embrace of small-town life, didn't leave much room for another story, and its second-most-important character, Doc Hudson, was voiced by Paul Newman, who died between the two films.

So Cars 2 made a hard pivot into spy movie action, ramped up the role of kiddie favorite Tow Mater (voiced by Larry the Cable Guy), and largely lost the soul of the first film.

Cars 3 is most successful when it finds ways to reintegrate Lightning into the tone and world of the first film, as he tries to grapple with his legacy and realizes Doc (who appears in flashbacks that seem as if they might have been cobbled together from outtakes and deleted scenes Newman recorded for the first film) might offer him wisdom even from beyond the grave. (Since cars can't really die, Doc is just not around anymore. But, again, c'mon.)

However, because Lightning already learned his lesson about appreciating life and taking it easy, there's just not a lot to mine here. Cars 3 makes some awkward attempts to suggest technology is no replacement for really experiencing life, and Lightning visits other famous race cars, even detouring to hang out in a bar with famous, groundbreaking cars voiced by Margo Martindale and Isiah Whitlock Jr.

But the movie struggles to figure out how to make all of this mesh, right up until the very end, when it finally nods toward keeping one eye on the past but always letting the future take precedence.

Many thinkers who consider the question of what happens when human beings finally create an artificial intelligence on the same level as the human brain have concluded that it will not take very long for such a being to evolve into a superintelligence, which is any artificial intelligence that's even a smidgen smarter than the smartest human. And from there, they will continue to improve, and we will be left in the dust, ruled, effectively, by our robot successors.

Anyway, the Cars movies don't take place in an explicitly post-human future, but this is the biggest c'mon of them all. At some point, self-driving cars rose up, they killed us all, and now they long for the good old days, not realizing those days are impossible to return to.

Thus, the rise of Jackson and his pals allows the film to broach the subject of those early days of artificial superintelligence, with Lightning in the role of humanity. What will happen when we try to keep up with beings that are simply made better than us? Will we accept our obsolescence with grace? Or will we push back with all we have? Cars 3 suggests no easy answers.

Lou is better than, say, Lava (the odious singing volcano short attached to Inside Out). With that said, it is also about how all of the toys in a lost-and-found box become a sort of toy golem that wanders a playground, returning toys to children and making sure bullies pay for their misdeeds.

The audience I saw Lou with ate it up, but reader, I found it terrifying. If Toy Story posited a world where toys wake up when you're not around, Lou posits a world where toys have no knowledge of what it means to be human but are cursed to make an attempt all the same: strange, shambling beasts from outside of time, wandering our playgrounds.

Make it stop. Kill it with fire.

Cars 3 opens in theaters Friday, June 16, with early screenings on the evening of Thursday, June 15.

See the article here:

Cars 3 gets back to what made the franchise adequate - Vox

Posted in Superintelligence | Comments Off on Cars 3 gets back to what made the franchise adequate – Vox

Using AI to unlock human potential – EJ Insight

Posted: June 9, 2017 at 1:29 pm

The artificial intelligence (AI) wave has spread to Hong Kong. Hang Seng Bank plans to introduce Chatbot, with natural language processing and mechanical learning ability, to provide customers with credit card offers and banking services information; while Bank of China (Hong Kong) is studying robot technology that can help improve handling of customer queries.

BOCHK also plans to create a shared big brain to provide customers with personal services. The Chartered Financial Analyst Institute, meanwhile, is reported to be planning to incorporate topics on AI, robotics consultants and big data analysis methods in its 2019 membership qualification examination, so as to meet future market needs.

AI has a broad meaning. From a technical perspective, it includes: deep learning (learning from a large pool of data to assimilate human intelligence, such as AlphaGo in the game of Go), robotics (handling pre-determined, extremely delicate or dangerous tasks, e.g. surgery, dismantling bombs, surveying damaged nuclear power plants), digital personal assistants (such as Apple's Siri and Facebook's M), and querying (finding information in a huge database speedily and accurately).

However, how does AI progress? In The AI Revolution: The Road to Superintelligence, the author points out that there are three stages of development in AI:

Basic: Artificial Narrow Intelligence, or Weak AI, i.e. AI that specializes in a certain scope. AlphaGo can beat the world's best players at Go, but I am afraid it's unable to guide you to nearby restaurants or book a hotel room for you. The same logic applies to a bomb-disposal robot or an AI that identifies cancer cells within seconds.

Advanced: Artificial General Intelligence, or Strong AI, i.e. the computer thinks and operates like a human being. How does a human think? I have just read a column from a connoisseur. There are many factors affecting our choice of a place to eat, like our mood, the type and taste of food, price, and time, and the deciding factors are not the same every time. See, it is really complicated. This is where the so-called Turing Test comes in. Alan Turing, a British scientist born over 100 years ago, said: if a computer makes you believe that it is human, it is artificial intelligence.

Super advanced: Artificial Superintelligence. Nick Bostrom, a philosopher at Oxford University, has been thinking about AI's relationship with mankind for years. He defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."

Although we are still in the stage of Artificial Narrow Intelligence, AI application is already very extensive. Combined with big data, it is applied in areas like finance (wealth management, fraud detection), medicine (diagnosis, drug development), media and advertising (face-recognition advertising, tailor-made services), retail (Amazon.com), law (rapid information retrieval), education (virtual teachers), manufacturing, transportation, agriculture and so on.

MarketsandMarkets, a research company, estimated that AI's market value would reach US$16 billion (about HK$124 billion) by 2022, with an average compound annual growth rate of more than 60 percent.

However, relative to the mainland, this estimate is very conservative, according to Jiang Guangzhi, a key member of the Beijing Municipal Commission of Economy and Information Technology. Counting Beijing alone, AI and related industries have already exceeded 150 billion yuan (about HK$170 billion).

In any case, AI definitely holds a key position in the field of science and technology, and it has substantial potential. However, I have just read a report in which a top international AI expert and an HKUST professor complained that the government and private sector are too passive about scientific research in Hong Kong, resulting in a brain drain.

Experts helped Huawei, a Chinese telecom and technology giant, set up a laboratory in Hong Kong, but they said they found it difficult to recruit even 50 people in the city with the required skills.

This is really alarming. Hong Kong has world-class teachers to attract the elite to study here.

However, if we do not strive to create an environment where talent can pursue careers here, I am afraid we will keep experiencing talent flight.

Recently, the Financial Development Board published two research reports that deal with issues related to development and application of financial technology in Hong Kong. I hope the government and society will work together and speed up efforts to bring out the potential of Hong Kong in AI.


AC/RC

Follow this link:

Using AI to unlock human potential - EJ Insight

Posted in Superintelligence | Comments Off on Using AI to unlock human potential – EJ Insight

Are You Ready for the AI Revolution and the Rise of Superintelligence? – TrendinTech

Posted: June 7, 2017 at 5:30 pm

We've come a long way as a whole over the past few centuries. Take a time machine back to 1750 and life would be very different indeed. There was no electricity, communicating with someone long distance was virtually impossible, and there were no gas stations or supermarkets anywhere. Bring someone from that era to today's world and they would almost certainly have some form of breakdown. How would they cope seeing capsules with wheels whizz around the roads, electrical devices everywhere they look, and people talking to someone on the other side of the world in real time? These are all simple things that we take for granted, but someone from a few centuries ago would probably think it was all witchcraft, and could possibly even die from the shock.

But then imagine that person went back to 1750 and became jealous that we got to see their reaction of awe and amazement. They might want to re-create that feeling in someone else. So, what would they do? They would take the time machine and go back to, say, 1500 and bring someone from that era forward to their own. The jump from 1500 to 1750 would certainly produce some surprises, but nothing as extreme as the jump from 1750 to today. The 1500 person would be shocked by a few things, but it's highly unlikely they would die. So, for the 1750 person to see the same kind of reaction we would get, they would need to travel back much, much farther, to say 24,000 BC.

For someone to actually die from the shock of being transported into the future, they'd need to go far enough ahead that a Die Progress Unit (DPU) is achieved. In hunter-gatherer times, a DPU took over 100,000 years; after the Agricultural Revolution it took around 12,000 years. Nowadays, because of the rate of advancement following the Industrial Revolution, a DPU would happen after being transported just a couple of hundred years forward. Futurist Ray Kurzweil calls this pattern of human progress speeding up as time goes on the Law of Accelerating Returns, and it is all down to technology.

This theory also works on smaller scales. Cast your mind back to that great 1985 movie, Back to the Future. In the movie, the past era they went back to was 1955, where there were various differences, of course. But if we were to remake the same movie today and use 1985 as the past era, the differences would be more dramatic. Again, this all comes down to the Law of Accelerating Returns. Between 1985 and 2015 the average rate of advancement was much higher than between 1955 and 1985. Kurzweil suggests that by 2000 the rate of progress was five times faster than the average rate during the 20th century. He also suggests that between 2000 and 2014 another 20th century's worth of progress happened, and that by 2021 another will have happened, taking just seven years. This means that, keeping to the same pattern, in a couple of decades a 20th century's worth of progress will happen multiple times in one year, and eventually in one month.

If Kurzweil is right, then by the time 2030 gets here we may all be blown away by the technology around us, and by 2050 we may not even recognize anything. But many people are skeptical of this for three main reasons:

1. Our own experiences make us stubborn about the future. Our imagination takes our experience and uses it to predict future outcomes. The problem is that we're limited in what we know, and when we hear a prediction that goes against what we've been led to believe, we often have trouble accepting it as the truth. For example, if someone were to tell you that you'd live to be 200, you would think that was ridiculous because of what you've been taught. But at the end of the day, there has to be a first time for everything, and no one knew airplanes would fly until they gave it a go one day.

2. We think in straight lines when we think about history. When trying to project what will happen in the next 30 years, we tend to look back at the past 30 years and use that as a guideline for what's to come. But in doing that we aren't considering the Law of Accelerating Returns. Instead of thinking linearly, we need to be thinking exponentially. In order to predict anything about the future, we need to picture things advancing at a much faster rate than they are today.

3. The trajectory of recent history tells a distorted story. Exponential growth isn't smooth, and progress in this area happens in S-curves. An S-curve is created when the wave of progress of a new paradigm sweeps the world, and it happens in three phases: slow growth, rapid growth, and a leveling off as the paradigm matures. If you view only a small section of the S-curve, you'll get a distorted sense of how fast things are progressing.

What do we mean by AI?

Artificial intelligence (AI) is big right now; bigger than it has ever been. But there are still many people out there who are confused by the term, for various reasons. One is that in the past we've associated AI with movies like Star Wars, Terminator, and even The Jetsons. Because these are all fictional, AI can still seem like a sci-fi concept. Also, AI is such a broad topic, ranging from self-driving cars to your phone's calculator, that getting to grips with all it entails is not easy. Another reason it's confusing is that we often don't even realize when we're using AI.

So, to try and clear things up and give yourself a better idea of what AI is, first stop thinking about robots. Robots are simply shells that can encompass AI. Secondly, consider the term "singularity." Vernor Vinge wrote an essay in 1993 in which this term was applied to the moment in the future when the intelligence of our technology exceeds our own. However, that idea was later blurred by Kurzweil defining the singularity as the time when the Law of Accelerating Returns gets so fast that we'll find ourselves living in a whole new world.

To try and narrow AI down a bit, try to think of it as being separated into three major categories:

1. Artificial Narrow Intelligence (ANI): This is sometimes referred to as Weak AI and is a type of AI that specializes in one particular area. An example of ANI is a chess-playing AI. It may be great at winning chess, but that is literally all it can do.

2. Artificial General Intelligence (AGI): Often known as Strong AI or Human-Level AI, AGI refers to a computer that has the intelligence of a human across the board and is much harder to create than ANI.

3. Artificial Superintelligence (ASI): ASI ranges from a computer that's just a little smarter than a human to one that's billions of times smarter in every way. This is the type of AI that is most feared and will often be associated with the words "immortality" and "extinction."

Right now, we're progressing steadily through the AI revolution and are living in a world of ANI. Cars are full of ANI systems, from the computer that tells the car when the ABS should kick in to the various self-driving cars that are around. Phones are another product that's bursting with ANI. Whenever you're receiving music recommendations from Pandora or using your map app to navigate, among various other activities, you're using ANI. An email spam filter is another form of ANI because it learns what's spam and what's not (a minimal sketch of that idea follows below). Google Translate and voice recognition systems are also examples of ANI. And some of the best checkers and chess players in the world are ANI systems too.
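To make the spam-filter example a little more concrete, here is a minimal sketch of how a narrow system can "learn" what spam looks like simply by counting words in labelled examples. The tiny training set and scoring rule are invented for illustration; real filters are far more sophisticated.

```python
from collections import Counter

# Tiny labelled training set, invented purely for illustration.
spam_examples = ["win money now", "cheap pills win big"]
ham_examples = ["meeting moved to noon", "see you at lunch"]

spam_counts = Counter(w for msg in spam_examples for w in msg.split())
ham_counts = Counter(w for msg in ham_examples for w in msg.split())

def spam_score(message):
    """Fraction of words seen more often in spam than in legitimate mail."""
    words = message.split()
    if not words:
        return 0.0
    spammy = sum(1 for w in words if spam_counts[w] > ham_counts[w])
    return spammy / len(words)

print(spam_score("win cheap money"))  # high score: looks like spam
print(spam_score("lunch at noon"))    # low score: looks legitimate
```

The point is only that the system gets better at its one narrow task as it sees more examples; it has no idea what "spam," "money," or "lunch" actually mean.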

So, as you can see, ANI systems are all around us already, but luckily these types of systems don't have the capability to pose any real threat to humanity. Still, each new ANI system that is created is simply another step towards AGI and ASI. However, trying to create a computer that is at least as intelligent as ourselves is no easy feat, and the hard parts are probably not what you were imagining. Building a computer that can calculate sums quickly is simple, but building a computer that can tell the difference between a cat and a dog is much harder. As summed up by computer scientist Donald Knuth, "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking.'"

The next step toward making AGI a possibility and competing with the human brain is to increase the power of computer hardware. One way to express this capacity is in the calculations per second (cps) the brain can handle. Kurzweil created a shortcut for estimating this: take an estimate of the cps of one brain structure and its weight, compare that weight to the weight of the whole brain, then scale up proportionally until you have an estimate for the total. After carrying out this calculation several times with different structures, Kurzweil always got roughly the same answer of around 10^16, or 10 quadrillion cps.
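As a rough illustration of the shortcut described above, the arithmetic looks something like the following sketch; the figures for the example structure are invented placeholders rather than Kurzweil's actual numbers, but the scaling step is the same.

```python
# Illustrative only: estimate whole-brain calculations per second (cps) by
# scaling one structure's estimated cps by the fraction of the brain it makes up.
structure_cps = 1.0e11       # assumed cps estimate for a single brain structure
structure_fraction = 1.0e-5  # assumed share of the whole brain it represents

whole_brain_cps = structure_cps / structure_fraction
print(f"Estimated whole-brain capacity: {whole_brain_cps:.0e} cps")  # 1e+16 cps
```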

The world's fastest supercomputer is currently China's Tianhe-2, which has clocked in at around 34 quadrillion cps. But that's hardly a surprise when it uses 24 megawatts of power, takes up 720 square meters of space, and cost $390 million to build. Perhaps if we were to scale that down slightly to 10 quadrillion cps (the human level) we would have a more workable model, and AGI could then become a part of everyday life. Currently, the world's $1,000 computers are at about a thousandth of the human level, and while that may not sound like much, it's actually a huge leap forward; in 1985 we were at only about a trillionth of human level. If we keep progressing in the same manner, then by 2025 we should have an affordable computer that rivals the raw power of the human brain. Then it's just a case of merging all that power with human-level intelligence.
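As a back-of-the-envelope check on that extrapolation (illustrative assumptions only, not a forecast), a machine at a thousandth of the 10-quadrillion-cps figure needs about ten doublings to close the gap, so annual doubling of price-performance would get there in roughly a decade:

```python
import math

human_cps = 1.0e16                 # the 10 quadrillion cps figure used above
current_cps = human_cps / 1000.0   # "about a thousandth of the human level"
annual_growth = 2.0                # assumed: price-performance doubles each year

years = math.log(human_cps / current_cps, annual_growth)
print(f"~{years:.0f} years of annual doubling to close a 1,000x gap")  # ~10
```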

However, that's much easier said than done. No one really knows how to make computers that smart, but here are the most popular strategies we've come across so far:

1. Make everything the computer's problem. This is usually a scientist's last resort and involves building a computer whose main skill is to carry out research on AI and code the improvements into itself.

2. Plagiarize the brain. It makes sense to copy the best of what's already available, and currently scientists are working hard to uncover all we can about the mighty organ. As soon as we know how the human brain can run so efficiently, we can begin to replicate its workings in the form of AI. Artificial neural networks do this already, in that they loosely mimic networks of neurons, but there is still a long way to go before they are anywhere near as sophisticated or effective as the human brain. A more extreme example of plagiarism involves what's known as whole brain emulation. Here the aim is to slice a brain into layers, scan each one, create an accurate 3D model, and then implement that model on a computer. We'd then have a fully working computer with a brain as capable as our own.

3. Try to make history and evolution repeat themselves in our favor. If building a computer as powerful as the human brain is too hard to do directly, we could instead try to mimic the evolution that produced it. This is the idea behind genetic algorithms. They work through a performance-and-evaluation process that happens over and over: when a task is completed successfully, a computer is bred with another just as capable, in an attempt to merge them and create a better computer. This natural-selection process is repeated many times until we finally have the result we wanted (a minimal sketch of this loop appears after this list). The downside is that this process could take billions of years.
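Here is the minimal sketch of that select-and-breed loop mentioned in the third strategy. The "designs" are just bit strings and the fitness function is invented, so this shows the shape of a genetic algorithm rather than anything close to evolving real intelligence.

```python
import random

TARGET = [1] * 16  # stand-in for a "perfect design"; purely illustrative

def fitness(design):
    """Score a candidate by how many bits match the target design."""
    return sum(d == t for d, t in zip(design, TARGET))

def breed(a, b):
    """Merge two parent designs (uniform crossover) with a small mutation rate."""
    child = [random.choice(pair) for pair in zip(a, b)]
    return [bit ^ 1 if random.random() < 0.05 else bit for bit in child]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
for generation in range(50):
    # Evaluate everyone, keep the fitter half, refill by breeding the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    children = [breed(random.choice(survivors), random.choice(survivors))
                for _ in range(15)]
    population = survivors + children

print(fitness(population[0]), "of 16 bits correct after 50 generations")
```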

Various advancements in technology are happening so quickly that AGI could be here before we know it, for two main reasons:

1. Exponential growth is very intense and so much can happen in such a short space of time.

2. Even minute software changes can make a big difference. Just one tweak could have the potential to make it 1,000 times more effective.

Once AGI has been achieved and people are comfortable living alongside human-level AGI, we'll then move on to ASI. But just to clarify: even though an AGI would (theoretically) have the same level of intelligence as a human, it would still have several advantages over us, including:

Speed: Today's microprocessors can run at speeds 10 million times faster than our own neurons, and they can also communicate optically at the speed of light.

Size and storage: Unlike our brains, computers can expand to any size, allowing for a larger working memory and long-term memory that will outperform us any day.

Reliability and durability: Computer transistors are far more accurate than biological neurons and are easily repaired too.

Editability: Computer software can be easily tweaked to allow for updates and fixes.

Collective capability: Humans are great at building up a huge store of collective intelligence, which is one of the main reasons we've survived so long as a species and advanced so far. A computer designed to mimic the human brain will be even better at this, as it could regularly sync with copies of itself so that anything one machine learned would be instantly uploaded to the whole network.

Most current models for reaching AGI focus on the AI achieving these goals via self-improvement. Once a system can self-improve, the next concept to consider is recursive self-improvement. This is where a system that has already improved itself, and is therefore considerably smarter than it was originally, finds it easier to improve itself further; because it is smarter, each round of improvement comes faster and takes bigger leaps. Soon the AGI's intelligence exceeds that of a human, and that's when you get a superintelligent ASI system. This process is called an intelligence explosion and is a prime example of the Law of Accelerating Returns (a toy numerical sketch of the loop is given below). How soon we will reach this level is still very much in debate.
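As a purely illustrative toy model of that feedback loop (not a prediction, and with invented numbers), the key feature is that each improvement step scales with the system's current capability, so the curve bends upward:

```python
# Toy model of recursive self-improvement: each cycle's gain is proportional
# to current capability, so progress accelerates. All numbers are invented.
capability = 1.0       # start at "human level" (arbitrary units)
gain_per_cycle = 0.10  # assumed fractional improvement per cycle

for cycle in range(1, 51):
    capability *= 1 + gain_per_cycle   # a smarter system makes a bigger leap
    if cycle % 10 == 0:
        print(f"after {cycle:2d} cycles: {capability:7.1f}x starting level")
```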



Go here to read the rest:

Are You Ready for the AI Revolution and the Rise of Superintelligence? - TrendinTech

Posted in Superintelligence | Comments Off on Are You Ready for the AI Revolution and the Rise of Superintelligence? – TrendinTech

A reply to Wait But Why on machine superintelligence

Posted: June 3, 2017 at 12:41 pm

Tim Urban of the wonderful Wait But Why blog recently wrote two posts on machine superintelligence: The Road to Superintelligence and Our Immortality or Extinction. These posts are probably now among the most-read introductions to the topic since Ray Kurzweil's 2006 book.

In general I agree with Tim's posts, but I think lots of details in his summary of the topic deserve to be corrected or clarified. Below, I'll quote passages from his two posts, roughly in the order they appear, and then give my own brief reactions. Some of my comments are fairly nit-picky but I decided to share them anyway; perhaps my most important clarification comes at the end.

The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985 because the former was a more advanced world, so much more change happened in the most recent 30 years than in the prior 30.

Readers should know this claim is heavily debated, and its truth depends on what Tim means by "rate of advancement." If he's talking about the rate of progress in information technology, the claim might be true. But it might be false for most other areas of technology, for example energy and transportation technology. Cowen, Thiel, Gordon, and Huebner argue that technological innovation more generally has slowed. Meanwhile, Alexander, Smart, Gilder, and others critique some of those arguments.

Anyway, most of what Tim says in these posts doesn't depend much on the outcome of these debates.

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing.

Well, that's the goal. But lots of current ANI systems don't yet equal human capability or efficiency at their given task. To pick an easy example from game-playing AIs: chess computers reliably beat humans, and Go computers don't (but they will soon).

Each new ANI innovation quietly adds another brick onto the road to AGI and ASI.

I know Tim is speaking loosely, but I should note that many ANI innovations (probably most, depending on how you count) won't end up contributing to progress toward AGI. Many ANI methods will end up being dead ends after some initial success, and their performance on the target task will be superseded by other methods. That's how the history of AI has worked so far, and how it will likely continue to work.

the human brain is the most complex object in the known universe.

Well, not really. For example, the brain of an African elephant has three times as many neurons.

Hard things like calculus, financial market strategy, and language translation are mind-numbingly easy for a computer, while easy things like vision, motion, movement, and perception are insanely hard for it.

Yes, Moravec's paradox is roughly true, but I wouldn't say that getting AI systems to perform well in asset trading or language translation has been "mind-numbingly easy." E.g. machine translation is useful for getting the gist of a foreign-language text, but billions of dollars of effort still hasn't produced a machine translation system as good as a mid-level human translator, and I expect this will remain true for at least another 10 years.

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it'll need to equal the brain's raw computing capacity.

Because computing power is increasing so rapidly, we probably will have more computing power than the human brain (speaking loosely) before we know how to build AGI, but I just want to flag that this isn't conceptually necessary. In principle, an AGI design could be very different than the brain's design, just like a plane isn't designed much like a bird. Depending on the efficiency of the AGI design, it might be able to surpass human-level performance in all relevant domains using much less computing power than the human brain does, especially since evolution is a very dumb designer.

So, we don't necessarily need human brain-ish amounts of computing power to build AGI, but the more computing power we have available, the dumber (less efficient) our AGI design can afford to be.

One way to express this capacity is in the total calculations per second (cps) the brain could manage

Just an aside: TEPS is probably another good metric to think about.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing; optimistic estimates say we can do this by 2030.

I suspect that approximately zero neuroscientists think we can reverse-engineer the brain to the degree being discussed in this paragraph by 2030. To get a sense of current and near-future progress in reverse-engineering the brain, see The Future of the Brain (2014).

One example of computer architecture that mimics the brain is the artificial neural network.

This probably isn't a good example of the kind of brain-inspired insights we'd need to build AGI. Artificial neural networks arguably go back to the 1940s, and they mimic the brain only in the most basic sense. TD learning would be a more specific example, except in that case computer scientists were using the algorithm before we discovered the brain also uses it.
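To make the TD-learning reference concrete, here is a minimal sketch of the TD(0) value update on a made-up three-state chain; the states, reward, and parameters are all invented for illustration and are not drawn from any neuroscience result.

```python
# TD(0) value learning on an invented chain A -> B -> C, with reward 1 at C.
values = {"A": 0.0, "B": 0.0, "C": 0.0}
alpha, gamma = 0.1, 0.9   # learning rate and discount factor (assumed)
transitions = {"A": "B", "B": "C"}

for episode in range(500):
    state = "A"
    while state != "C":
        next_state = transitions[state]
        reward = 1.0 if next_state == "C" else 0.0
        # Core TD(0) update: nudge the value toward reward + discounted next value.
        values[state] += alpha * (reward + gamma * values[next_state] - values[state])
        state = next_state

print(values)  # roughly {'A': 0.9, 'B': 1.0, 'C': 0.0}
```

The prediction-error term inside that update is the part later found to resemble reward-prediction signals in the brain, which is the point being made here.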

[We have] just recently been able to emulate a 1mm-long flatworm brain

No, we haven't.

The human brain contains 100 billion [neurons].

Good news! Thanks to a new technique we now have a more precise estimate: 86 billion neurons.

If that makes [whole brain emulation] seem like a hopeless project, remember the power of exponential progress; now that we've conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.

Because computing power advances so quickly, it probably won't be the limiting factor on brain emulation technology. Scanning resolution and neuroscience knowledge are likely to lag far behind computing power: see chapter 2 of Superintelligence.

most of our current models for getting to AGI involve the AI getting there by self-improvement.

They do? Says who?

I think the path from AGI to superintelligence is mostly or entirely about self-improvement, but the path from current AI systems to AGI is mostly about human engineering work, probably until relatively shortly before the leading AI project reaches a level of capability worth calling AGI.

the median year on a survey of hundreds of scientists about when they believed we'd be more likely than not to have reached AGI was 2040

That's the number you get when you combine the estimates from several different recent surveys, including surveys of people who were mostly not AI scientists. If you stick to the survey of the top-cited living AI scientists (the one called "TOP100" here), the median estimate for a 50% probability of AGI is 2050. (Not a big difference, though.)

many of the thinkers in this field think it's likely that the progression from AGI to ASI [will happen] very quickly

True, but it should be noted this is still a minority position, as one can see in Tim's 2nd post, or in section 3.3 of the source paper.

90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

Remember that lots of knowledge and intelligence comes from interacting with the world, not just from running computational processes more quickly or efficiently. Sometimes learning requires that you wait on some slow natural process to unfold. (In this context, even a 1-second experimental test is slow.)

So the median participant thinks it's more likely than not that we'll have AGI 25 years from now.

Again, I think it's better to use the numbers for the TOP100 survey from that paper, rather than the combined numbers.

Due to something called cognitive biases, we have a hard time believing something is real until we see proof.

There are dozens of cognitive biases, so this is about as informative as saying "due to something called psychology, we..."

The specific cognitive bias Tim seems to be discussing in this paragraph is the availability heuristic, or maybe the absurdity heuristic. Also see "Cognitive Biases Potentially Affecting Judgment of Global Risks."

[Kurzweil is] well-known for his bold predictions and has a pretty good record of having them come true

The linked article says Ray Kurzweil's predictions are right 86% of the time. That statistic is from a self-assessment Kurzweil published in 2010. Not surprisingly, when independent parties try to grade the accuracy of Kurzweil's predictions, they arrive at a much lower accuracy score: see page 21 of this paper.

How good is this compared to other futurists? Unfortunately, we have no idea. The problem is that nobody else has bothered to write down so many specific technological forecasts over the course of multiple decades. So, give Kurzweil credit for daring to make lots of predictions.

My own vague guess is that Kurzweil's track record is actually pretty impressive, but not as impressive as his own self-assessment suggests.

Kurzweil predicts that we'll get [advanced nanotech] by the 2020s.

I'm not sure which Kurzweil prediction about nanotech Tim is referring to, because the associated footnote points to a page of The Singularity is Near that isn't about nanotech. But if he's talking about advanced Drexlerian nanotech, then I suspect approximately zero nanotechnologists would agree with this forecast.

I expected [Kurzweil's] critics to be saying, "Obviously that stuff can't happen," but instead they were saying things like, "Yes, all of that can happen if we safely transition to ASI, but that's the hard part." Bostrom, one of the most prominent voices warning us about the dangers of AI, still acknowledges...

Yeah, but Bostrom and Kurzweil are both famous futurists. There are plenty of non-futurist critics of Kurzweil who would say "Obviously that stuff can't happen." I happen to agree with Kurzweil and Bostrom about the radical goods within reach of a human-aligned superintelligence, but let's not forget that most AI scientists, and most PhD-carrying members of society in general, probably would say "Obviously that stuff can't happen" in response to Kurzweil.

The people on Anxious Avenue aren't in Panicked Prairie or Hopeless Hills (both of which are regions on the far left of the chart) but they're nervous and they're tense.

Actually, the people Tim is talking about here are often more pessimistic about societal outcomes than Tim is suggesting. Many of them are, roughly speaking, 65%-85% confident that machine superintelligence will lead to human extinction, and that it's only in a small minority of possible worlds that humanity rises to the challenge and gets a machine superintelligence robustly aligned with humane values.

Of course, it's also true that many of the people who write about the importance of AGI risk mitigation are more optimistic than the range shown in Tim's graph of Anxious Avenue. For example, one researcher I know thinks it's maybe 65% likely we get really good outcomes from machine superintelligence. But he notes that a ~35% chance of human friggin' extinction is totally worth trying to mitigate as much as we can, including by funding hundreds of smart scientists to study potential solutions decades in advance of the worst-case scenarios, like we already do with regard to global warming, a much smaller problem. (Global warming is a big problem on a normal person's scale of things to worry about, but even climate scientists don't think it's capable of human extinction in the next couple of centuries.)

Or, as Stuart Russell, author of the leading AI textbook, likes to put it: "If a superior alien civilization sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here, we'll leave the lights on'? Probably not, but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes."

[In the movies] AI becomes as or more intelligent than humans, then decides to turn against us and take over. Here's what I need you to be clear on for the rest of this post: None of the people warning us about AI are talking about this. Evil is a human concept, and applying human concepts to non-human things is called anthropomorphizing.

Thank you. Jesus Christ, I am tired of clearing up that very basic confusion, even for many AI scientists.

Turry started off as Friendly AI, but at some point, she turned Unfriendly, causing the greatest possible negative impact on our species.

Just FYI, at MIRI we've started to move away from the "Friendly AI" language recently, since people think "Oh, like C-3PO?" MIRI's recent papers use phrases like "superintelligence alignment" instead.

In any case, my real comment here is that the quoted sentence above doesn't use the terms "Friendly" or "Unfriendly" the way they've been used traditionally. In the usual parlance, a Friendly AI doesn't turn Unfriendly. If it becomes Unfriendly at some point, then it was always an Unfriendly AI, it just wasn't powerful enough yet to be a harm to you.

Tim does sorta fix this much later in the same post when he writes: "So Turry didn't turn against us or switch from Friendly AI to Unfriendly AI; she just kept doing her thing as she became more and more advanced."

When we're talking about ASI, the same concept applies: it would become superintelligent, but it would be no more human than your laptop is.

Well, this depends on how the AI is designed. If the ASI is an uploaded human, it'll be pretty similar to a human in lots of ways. If it's not an uploaded human, it could still be purposely designed to be human-like in many different ways. But mimicking human psychology in any kind of detail almost certainly isn't the quickest way to AGI/ASI (just like mimicking bird flight in lots of detail wasn't how we built planes), so practically speaking yes, the first AGI(s) will likely be very alien from our perspective.

What motivates an AI system? The answer is simple: its motivation is whatever we programmed its motivation to be. AI systems are given goals by their creators: your GPS's goal is to give you the most efficient driving directions; Watson's goal is to answer questions accurately. And fulfilling those goals as well as possible is their motivation.

Some AI programs today are goal-driven, but most are not. Siri isn't trying to maximize some goal like "be useful to the user of this iPhone" or anything like that. It just has a long list of rules about what kind of output to provide in response to different kinds of commands and questions. Various sub-components of Siri might be sorta goal-oriented (e.g. there's an evaluation function trying to pick the most likely accurate transcription of your spoken words), but the system as a whole isn't goal-oriented. (Or at least, this is how it seems to work. Apple hasn't shown me Siri's source code.)

As AI systems become more autonomous, giving them goals becomes more important, because you can't feasibly specify how the AI should react in every possible arrangement of the environment; instead, you need to give it goals and let it do its own on-the-fly planning for how it's going to achieve those goals in unexpected environmental conditions.

The programming for a Predator drone doesn't include a list of instructions to follow for every possible combination of takeoff points, destinations, and wind conditions, because that list would be impossibly long. Rather, the operator gives the Predator drone a goal destination and the drone figures out how to get there on its own.
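To illustrate the contrast in the preceding two paragraphs (a fixed list of canned responses versus a goal the system plans toward on the fly), here is a minimal sketch of goal-directed route planning using breadth-first search over an invented map; it is of course nothing like real drone software.

```python
from collections import deque

# Invented map of waypoints; nothing here reflects any real system.
links = {
    "base":   ["ridge", "valley"],
    "ridge":  ["pass"],
    "valley": ["pass", "lake"],
    "pass":   ["target"],
    "lake":   ["target"],
}

def plan_route(start, goal):
    """Given only a goal, search for a route to it on the fly (breadth-first)."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route found

print(plan_route("base", "target"))  # e.g. ['base', 'ridge', 'pass', 'target']
```

Change the map or the goal and the same code still finds a route, which is the practical difference between hand-listing responses and giving the system a goal.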

when [Turry] wasn't yet that smart, doing her best to achieve her final goal meant simple instrumental goals like learning to scan handwriting samples more quickly. She caused no harm to humans and was, by definition, Friendly AI.

Again, I'll mention that's not how the term has traditionally been used, but whatever.

But there are all kinds of governments, companies, militaries, science labs, and black market organizations working on all kinds of AI. Many of them are trying to build AI that can improve on its own

This isn't true unless by "AI that can improve on its own" you just mean machine learning. Almost nobody in AI is working on the kind of recursive self-improvement you'd need to get an intelligence explosion. Lots of people are working on systems that could eventually provide some piece of the foundational architecture for a self-improving AGI, but almost nobody is working directly on the recursive self-improvement problem right now, because it's too far beyond current capabilities.

because many techniques to build innovative AI systems don't require a large amount of capital, development can take place in the nooks and crannies of society, unmonitored.

True, it's much harder to monitor potential AGI projects than it is to track uranium enrichment facilities. But you can at least track AI research talent. Right now it doesn't take a ton of work to identify a set of 500 AI researchers that probably contains the most talented ~150 AI researchers in the world. Then you can just track all 500 of them.

This is similar to back when physicists were starting to realize that a nuclear fission bomb might be feasible. Suddenly a few of the most talented researchers stopped presenting their work at the usual conferences, and the other nuclear physicists pretty quickly deduced: "Oh, shit, they're probably working on a secret government fission bomb." If Geoff Hinton or even the much younger Ilya Sutskever suddenly went underground tomorrow, a lot of AI people would notice.

Of course, such a tracking effort might not be so feasible 30-60 years from now, when serious AGI projects will be more numerous and greater proportions of world GDP and human cognitive talent will be devoted to AI efforts.

On the contrary, what [AI developers are] probably doing is programming their early systems with a very simple, reductionist goal (like writing a simple note with a pen on paper) to just get the AI to work. Down the road, once they've figured out how to build a strong level of intelligence in a computer, they figure they can always go back and revise the goal with safety in mind. Right?

Again, I note that most AI systems today are not goal-directed.

I also note that, sadly, it probably wouldn't just be a matter of going back to revise the goal with safety in mind after a certain level of AI capability is reached. Most proto-AGI designs probably aren't even the kind of systems you can make robustly safe, no matter what goals you program into them.

To illustrate what I mean, imagine a hypothetical computer security expert named Bruce. You tell Bruce that he and his team have just 3 years to modify the latest version of Microsoft Windows so that it can't be hacked in any way, even by the smartest hackers on Earth. If he fails, Earth will be destroyed because reasons.

Bruce just stares at you and says, "Well, that's impossible, so I guess we're all fucked."

The problem, Bruce explains, is that Microsoft Windows was never designed to be anything remotely like unhackable. It was designed to be easily useable, and compatible with lots of software, and flexible, and affordable, and just barely secure enough to be marketable, and you can't just slap on a special Unhackability Module at the last minute.

To get a system that even has a chance at being robustly unhackable, Bruce explains, you've got to design an entirely different hardware + software system that was designed from the ground up to be unhackable. And that system must be designed in an entirely different way than Microsoft Windows is, and no team in the world could do everything that is required for that in a mere 3 years. So, we're fucked.

But! By a stroke of luck, Bruce learns that some teams outside Microsoft have been working on a theoretically unhackable hardware + software system for the past several decades (high reliability is hard), people like Greg Morrisett (SAFE) and Gerwin Klein (seL4). Bruce says he might be able to take their work and add the features you need, while preserving the strong security guarantees of the original highly secure system. Bruce sets Microsoft Windows aside and gets to work on trying to make this other system satisfy the mysterious reasons while remaining unhackable. He and his team succeed just in time to save the day.

This is an oversimplified and comically romantic way to illustrate what MIRI is trying to do in the area of long-term AI safety. We're trying to think through what properties an AGI would need to have if it was going to very reliably act in accordance with humane values even as it rewrote its own code a hundred times on its way to machine superintelligence. We're asking: What would it look like if somebody tried to design an AGI that was designed from the ground up not for affordability, or for speed of development, or for economic benefit at every increment of progress, but for reliably beneficial behavior even under conditions of radical self-improvement? What does the computationally unbounded solution to that problem look like, so we can gain conceptual insights useful for later efforts to build a computationally tractable self-improving system reliably aligned with humane interests?

So if you're reading this, and you happen to be a highly gifted mathematician or computer scientist, and you want a full-time job working on the most important challenge of the 21st century, well, we're hiring. (I will also try to appeal to your vanity: Please note that because so little work has been done in this area, you've still got a decent chance to contribute to what will eventually be recognized as the early, foundational results of the most important field of research in human history.)

My thanks to Tim Urban for his very nice posts on machine superintelligence. Be sure to read his ongoing series about Elon Musk.

Read the original:

A reply to Wait But Why on machine superintelligence

Posted in Superintelligence | Comments Off on A reply to Wait But Why on machine superintelligence
