The AI Revolution: The Road to Superintelligence | Inverse

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are.

This article originally appeared on Wait But Why by Tim Urban. This is Part 1; Part 2 is here.

We are on the edge of change comparable to the rise of human life on Earth.

-Vernor Vinge

What does it feel like to stand here?

It seems like a pretty intense place to be standing, but then you have to remember something about what it's like to stand on a time graph: you can't see what's to your right. So here's how it actually feels to stand there:

Which probably feels pretty normal...

Imagine taking a time machine back to 1750, a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It's impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone's face and chat with them even though they're on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn't be surprising or shocking or even mind-blowing; those words aren't big enough. He might actually die.

But here's the interesting thing: if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he'd take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things, but he wouldn't die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he'd be impressed with how committed Europe turned out to be with that new imperialism fad, and he'd have to do some major revisions of his world map conception. But watching everyday life go by in 1750 (transportation, communication, etc.) definitely wouldn't make him die.

No, in order for the 1750 guy to have as much fun as we had with him, he'd have to go much farther back, maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world, from a time when humans were, more or less, just another animal species, saw the vast human empires of 1750, with their towering churches, their ocean-crossing ships, their concept of being "inside," and their enormous mountain of collective, accumulated human knowledge and discovery, he'd likely die.

And then what if, after dying, he got jealous and wanted to do the same thing. If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he'd show the guy everything and the guy would be like, "Okay, what's your point? Who cares?" For the 12,000 BC guy to have the same fun, he'd have to go back over 100,000 years and get someone he could show fire and language to for the first time.

In order for someone to be transported into the future and die from the level of shock they'd experience, they have to go enough years ahead that a "die level" of progress, or a Die Progress Unit (DPU), has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.

This pattern, human progress moving quicker and quicker as time goes on, is what futurist Ray Kurzweil calls human history's Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies, because they're more advanced. 19th-century humanity knew more and had better technology than 15th-century humanity, so it's no surprise that humanity made far more advances in the 19th century than in the 15th century; 15th-century humanity was no match for 19th-century humanity.

This works on smaller scales too. The movie Back to the Future came out in 1985, and the past took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes, but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones; today's Marty McFly, a teenager born in the late 90s, would be much more out of place in 1985 than the movie's Marty McFly was in 1955.

This is for the same reason we just discussed: the Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985, because the former was a more advanced world, so much more change happened in the most recent 30 years than in the prior 30.

So advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?

Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000; in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century's worth of progress happened between 2000 and 2014, and that another 20th century's worth of progress will happen by 2021, in only seven years. A couple of decades later, he believes a 20th century's worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.
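To get a feel for how those numbers compound, here's a toy model of accelerating returns (my own illustration with a made-up doubling rate, not Kurzweil's actual methodology): assume the rate of progress doubles every decade, and measure everything in units of the year-2000 rate.

```python
# Toy model of the Law of Accelerating Returns (an illustration,
# not Kurzweil's actual methodology): assume the rate of progress
# doubles every decade, measured in units of the year-2000 rate.
def rate(t):
    """Rate of progress t years after 2000."""
    return 2 ** (t / 10)

def progress(start, end, step=0.01):
    """Total progress accumulated between two years (numeric sum)."""
    total, t = 0.0, start
    while t < end:
        total += rate(t) * step
        t += step
    return total

# One decade at this pace packs in ~14 "years" of year-2000 progress;
# the century as a whole packs in thousands of them.
decade = progress(0, 10)
century = progress(0, 100)
```

Under this toy assumption, a single decade delivers about 14 "years" of year-2000-rate progress, and the century delivers over ten thousand, which is the flavor of Kurzweil's claim even if his exact figures come from a different model.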

If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015 (i.e. the next DPU might only take a couple of decades), and the world in 2050 might be so vastly different from today's world that we would barely recognize it.

This isn't science fiction. It's what many scientists smarter and more knowledgeable than you or I firmly believe, and if you look at history, it's what we should logically predict.

So then why, when you hear me say something like "the world 35 years from now might be totally unrecognizable," are you thinking, "Cool... but nahhhhhhh"? Three reasons we're skeptical of outlandish forecasts of the future:

1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th-century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It's most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They'd be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they're moving now.
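The gap between the linear and the exponential prediction is easy to put numbers on. Here's a toy comparison (illustrative only, assuming progress doubles every decade):

```python
# Linear vs exponential forecasting with toy numbers: one "unit" of
# progress in the first decade, doubling every decade after that.
def decade_progress(decade_index):
    return 2 ** decade_index

past_30 = sum(decade_progress(d) for d in (0, 1, 2))   # 1+2+4 = 7 units
next_30 = sum(decade_progress(d) for d in (3, 4, 5))   # 8+16+32 = 56 units

linear_guess = past_30   # straight-line thinking: "7 more units"
# Exponential reality: 56 units, 8x what the linear guess predicts.
```

Even the "clever" forecaster who extrapolates the current rate (4 units per decade, so 12 units over 30 years) still lands at less than a quarter of the exponential answer.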

2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way that if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn't totally smooth and uniform. Kurzweil explains that progress happens in "S-curves":

An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases: slow growth (the early phase of exponential growth), rapid growth (the late, explosive phase of exponential growth), and a leveling off as the particular paradigm matures.

If you look only at very recent history, the part of the S-curve you're on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smartphones. That was Phase 2: the growth-spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that's missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.
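The S-curve is roughly the shape of a logistic function, and the three phases fall right out of it numerically (a generic logistic sketch, not Kurzweil's own model):

```python
import math

# A logistic S-curve: slow growth, then a growth spurt, then leveling off.
def s_curve(t):
    return 1 / (1 + math.exp(-t))

# Phase 1 (slow growth): the rise between t=-6 and t=-4 is tiny.
phase1 = s_curve(-4) - s_curve(-6)
# Phase 2 (growth spurt): a same-sized window around the midpoint
# covers most of the curve's entire rise.
phase2 = s_curve(1) - s_curve(-1)
# Phase 3 (leveling off): the rise between t=4 and t=6 is tiny again.
phase3 = s_curve(6) - s_curve(4)
```

Sample the curve in Phase 1 or Phase 3 and you'd conclude almost nothing is happening; sample it in Phase 2 and you'd conclude the world is exploding. Both samples miss the shape of the whole curve.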

3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as "the way things happen." We're also limited by our imagination, which takes our experience and uses it to conjure future predictions, but often, what we know simply doesn't give us the tools to think accurately about the future. When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, "That's stupid; if there's one thing I know from history, it's that everybody dies." And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.

So while "nahhhhh" might feel right as you read this post, it's probably actually wrong. The fact is, if we're being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they'll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human, kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what's going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that's coming next.

If you're like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you've been hearing it mentioned by serious people, and you don't really quite get it.

There are three reasons a lot of people are confused about the term AI:

1) We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.

2) AI is a broad topic. It ranges from your phone's calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.

3) We use AI all the time in our daily lives, but we often don't realize it's AI. John McCarthy, who coined the term "Artificial Intelligence" in 1956, complained that "as soon as it works, no one calls it AI anymore." Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to "insisting that the Internet died in the dot-com bust of the early 2000s."

So let's clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not, but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body, if it even has a body. For example, the software and data behind Siri is AI, the woman's voice we hear is a personification of that AI, and there's no robot involved at all.

Secondly, you've probably heard the term "singularity" or "technological singularity." This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It's been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don't apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology's intelligence exceeds our own, a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly infinite pace, and after which we'll be living in a whole new world. I found that many of today's AI thinkers have stopped using the term, and it's confusing anyway, so I won't use it much here (even though we'll be focusing on that idea throughout).

Finally, while there are many different types or forms of AI, since AI is a broad concept, the critical categories we need to think about are based on an AI's caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There's AI that can beat the world chess champion in chess, but that's the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it'll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board, a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we're yet to do it. Professor Linda Gottfredson describes intelligence as "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience." AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Artificial Superintelligence ranges from a computer that's just a little smarter than a human to one that's trillions of times smarter, across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words "immortality" and "extinction" will both appear in these posts multiple times.

As of now, humans have conquered the lowest caliber of AI, ANI, in many ways, and it's everywhere. The AI Revolution is the road from ANI, through AGI, to ASI, a road we may or may not survive but that, either way, will change everything.

Let's take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples:

ANI systems as they are now aren't especially scary. At worst, a glitchy or badly programmed ANI can cause an isolated catastrophe, like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash, when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).

But while ANI doesn't have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively harmless ANI as a precursor of the world-altering hurricane that's on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world's ANI systems "are like the amino acids in the early Earth's primordial ooze," the inanimate stuff of life that, one unexpected day, woke up.

Why It's So Hard

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down: all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.

What's interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you'd think they are. Build a computer that can multiply two ten-digit numbers in a split second? Incredibly easy. Build one that can look at a dog and answer whether it's a dog or a cat? Spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old's picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things, like calculus, financial market strategy, and language translation, are mind-numbingly easy for a computer, while easy things, like vision, motion, movement, and perception, are insanely hard for it. Or, as computer scientist Donald Knuth puts it, "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking.'"

What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it's not that malware is dumb for not being able to figure out the slanty-word recognition test when you sign up for a new account on a site; it's that your brain is super impressive for being able to.

On the other hand, multiplying big numbers or playing chess are new activities for biological creatures, and we haven't had any time to evolve a proficiency at them, so a computer doesn't need to work too hard to beat us. Think about it: which would you rather do, build a program that could multiply big numbers, or one that could understand the essence of a B well enough that you could show it a B in any one of thousands of unpredictable fonts or handwriting styles and it could instantly know it was a B?

One fun example: when you look at this, you and a computer both can figure out that it's a rectangle with two distinct shades, alternating:

Tied so far. But if you pick up the black and reveal the whole image...

you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees, a variety of two-dimensional shapes in several different shades, which is actually what's there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray. And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is: a photo of an entirely black, 3-D rock:

And everything we just mentioned is still only taking in stagnant information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.

Daunting.

So how do we get there?

First Key to Creating AGI: Increasing Computational Power

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it'll need to equal the brain's raw computing capacity.

One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.

Ray Kurzweil came up with a shortcut: take someone's professional estimate for the cps of one structure and that structure's weight compared to that of the whole brain, then multiply proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark: around 10^16, or 10 quadrillion cps.
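The shortcut itself is just a proportional scale-up. Here's a sketch, with placeholder numbers rather than the actual expert estimates Kurzweil used:

```python
# Kurzweil-style scale-up (placeholder numbers, not his real inputs):
# take an expert estimate for one brain structure's cps and the
# fraction of the brain that structure makes up, then scale up.
def whole_brain_cps(structure_cps, structure_fraction):
    return structure_cps / structure_fraction

# e.g. a region making up ~1% of the brain, estimated at 10^14 cps,
# implies ~10^16 cps for the whole brain, Kurzweil's ballpark.
estimate = whole_brain_cps(1e14, 0.01)
```

The point of repeating this with several regions is that if every scale-up lands near 10^16, the estimate is probably not wildly off, even though any single region's number is iffy.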

Currently, the world's fastest supercomputer, China's Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.

Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human level, 10 quadrillion cps, then that'll mean AGI could become a very real part of life.

Moore's Law is a historically reliable rule that the world's maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil's cps/$1,000 metric, we're currently at about 10 trillion cps/$1,000, right on pace with this graph's predicted trajectory:

So the world's $1,000 computers are now beating the mouse brain, and they're at about a thousandth of human level. This doesn't sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.
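That progression, a factor of about 1,000 every decade, can be written down directly. This is a back-of-envelope extrapolation of the figures above, not a precise forecast:

```python
# Back-of-envelope extrapolation of the article's figures: $1,000 of
# compute went from a trillionth of a human brain (1985) to a
# thousandth (2015), i.e. a factor of ~1,000 per decade.
def fraction_of_brain(year):
    decades_since_1985 = (year - 1985) / 10
    return 1e-12 * 1000 ** decades_since_1985

frac_2015 = fraction_of_brain(2015)   # ~a thousandth of a brain
frac_2025 = fraction_of_brain(2025)   # ~1.0: brain-level for $1,000
```

Note that a factor of 1,000 per decade is a doubling roughly every year, faster than classic Moore's Law; Kurzweil argues the price-performance of computing has indeed been outpacing the two-year doubling.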

So on the hardware side, the raw power needed for AGI is technically available now, in China, and we'll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn't make a computer generally intelligent; the next question is, how do we bring human-level intelligence to all that power?

Second Key to Creating AGI: Making it Smart

This is the icky part. The truth is, no one really knows how to make it smart; we're still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there, and at some point, one of them will work. Here are the three most common strategies I came across:

1) Plagiarize the brain.

This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can't do nearly as well as that kid, and then they finally decide "k fuck it I'm just gonna copy that kid's answers." It makes sense: we're stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing; optimistic estimates say we can do this by 2030. Once we do that, we'll know all the secrets of how the brain runs so powerfully and efficiently, and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor "neurons," connected to each other with inputs and outputs, and it knows nothing, like an infant brain. The way it "learns" is it tries to do a task, say handwriting recognition, and at first, its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it's told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it's told it was wrong, those pathways' connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we're discovering ingenious new ways to take advantage of neural circuitry.
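That strengthen-or-weaken feedback loop can be sketched in a few lines. Below, a single artificial "neuron" learns a toy task using the classic perceptron rule; it's a bare-bones illustration of the trial-and-feedback idea, not a modern neural network:

```python
# A single artificial "neuron" learning by trial and feedback
# (perceptron rule): connections behind right answers are strengthened,
# connections behind wrong answers are weakened.
weights = [0.0, 0.0]
bias = 0.0

def predict(x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# Toy task: fire (1) only when both inputs are strong.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0),
        ([1, 1], 1), ([0.9, 0.9], 1), ([0.1, 0.2], 0)]

for _ in range(50):                        # lots of trial and feedback
    for x, target in data:
        error = target - predict(x)        # 0 if right, +/-1 if wrong
        for i in range(len(weights)):
            weights[i] += 0.1 * error * x[i]   # strengthen / weaken
        bias += 0.1 * error

# After training, the neuron classifies every example correctly.
```

Real networks stack many layers of these neurons and use subtler update rules, but the core loop, guess, get feedback, adjust connection strengths, is the same.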

More extreme plagiarism involves a strategy called "whole brain emulation," where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We'd then have a computer officially capable of everything the brain is capable of; it would just need to learn and gather information. If engineers get really good, they'd be able to emulate a real brain with such exact accuracy that the brain's full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim(?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he'd probably be really excited about.

How far are we from achieving whole brain emulation? Well, so far, we've only just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress; now that we've conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.
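The exponential framing is what makes that gap less hopeless than it sounds:

```python
import math

# From a 302-neuron flatworm to a ~100-billion-neuron human brain is
# a factor of ~300 million, which sounds hopeless, but it's only
# ~28 doublings. At a steady doubling pace, that's a manageable
# number of steps, not 300 million of them.
worm_neurons = 302
human_neurons = 100_000_000_000
doublings = math.log2(human_neurons / worm_neurons)
```

(Neuron count alone understates the job, since connections matter more than neurons, but the doubling arithmetic is the reason exponential-progress optimists aren't fazed by the raw ratio.)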

2) Try to make evolution do what it did before but for us this time.

So if we decide the smart kid's test is too hard to copy, we can try to copy the way he studies for the tests instead.

Here's something we know: building a computer as powerful as the brain is possible; our own brain's evolution is proof. And if the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird's wing-flapping motions; often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.

So how can we simulate evolution to build AGI? The method, called "genetic algorithms," would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures "perform" by living life and are "evaluated" by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.
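A minimal version of that performance-and-evaluation loop looks like this. It's a deliberately silly setup (the "computers" are bit strings and the "task" is just having lots of 1 bits), but the evaluate-eliminate-breed mechanics are the real thing:

```python
import random

# Toy genetic algorithm: "computers" are bit strings, "performance"
# is the number of 1 bits, the fittest survive and are bred by
# merging half of each parent's programming.
random.seed(42)
GENOME = 20

def fitness(genome):
    return sum(genome)

def breed(a, b):
    child = a[:GENOME // 2] + b[GENOME // 2:]   # merge two halves
    if random.random() < 0.2:                   # occasional mutation
        i = random.randrange(GENOME)
        child[i] ^= 1
    return child

population = [[random.randint(0, 1) for _ in range(GENOME)]
              for _ in range(30)]
initial_best = max(map(fitness, population))

for generation in range(60):       # evaluate, eliminate, breed, repeat
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]    # the rest are eliminated
    population = survivors + [breed(random.choice(survivors),
                                    random.choice(survivors))
                              for _ in range(20)]

final_best = max(map(fitness, population))
```

Because the top performers always survive to the next generation, the best fitness never goes backwards, and mutations that happen to help get locked in: selection without foresight, exactly as in biology.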

The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.

But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly; it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Secondly, evolution doesn't aim for anything, including intelligence; sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence, like revamping the ways cells produce energy, when we can remove those extra burdens and use things like electricity. It's no doubt we'd be much, much faster than evolution, but it's still not clear whether we'll be able to improve upon evolution enough to make this a viable strategy.

3) Make this whole thing the computer's problem, not ours.

This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.

The idea is that we'd build a computer whose two major skills would be doing research on AI and coding changes into itself, allowing it to not only learn but to improve its own architecture. We'd teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job: figuring out how to make themselves smarter. More on this later.
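A crude way to see why this strategy is so powerful is to make a system's growth rate depend on its current level. This is a pure toy model, with arbitrary units and constants:

```python
# Toy model of recursive self-improvement: the system's growth rate
# depends on its current intelligence, so progress feeds on itself.
# (Arbitrary units and constants, purely illustrative.)
intelligence = 1.0       # starting level
target = 100.0           # say, "human level"
history = []

while intelligence < target:
    # each step, the system improves itself in proportion
    # to how smart it already is
    intelligence *= 1 + 0.01 * intelligence
    history.append(intelligence)

# The climb crawls along at ~1% per step for most of the run, then
# the last few steps cover more ground than everything before them.
```

That end-of-run explosion, slow for ages and then suddenly vertical, is the takeoff dynamic the rest of this post keeps circling back to.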

All of This Could Happen Soon

Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:

1) Exponential growth is intense, and what seems like a snail's pace of advancement can quickly race upwards; this GIF illustrates the concept nicely:

2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.

At some point, we'll have achieved AGI: computers with human-level general intelligence. Just a bunch of people and computers living together in equality.

Oh, actually, not at all.

The thing is, AGI with an identical level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:

AI, which will likely get to AGI by being programmed to self-improve, wouldn't see human-level intelligence as some important milestone; it's only a relevant marker from our point of view, and it wouldn't have any reason to stop at our level. And given the advantages over us that even human intelligence-equivalent AGI would have, it's pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.

This may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we're aware of about any animal's intelligence is that it's far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:

So as AI zooms upward in intelligence toward us, we'll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity (Nick Bostrom uses the term "the village idiot"), we'll be like, "Oh wow, it's like a dumb human. Cute!" The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range, so just after hitting village idiot-level and being declared to be AGI, it'll suddenly be smarter than Einstein and we won't know what hit us:

And what happens after that?

I hope you enjoyed normal time, because this is when this topic gets unnormal and scary, and it's gonna stay that way from here forward. I want to pause here to remind you that every single thing I'm going to say is real: real science and real forecasts of the future from a large array of the most respected thinkers and scientists. Just keep remembering that.

Anyway, as I said above, most of our current models for getting to AGI involve the AI getting there by self-improvement. And once it gets to AGI, even systems that formed and grew through methods that didn't involve self-improvement would now be smart enough to begin self-improving if they wanted to.

And here's where we get to an intense concept: recursive self-improvement. It works like this:

An AI system at a certain level, let's say human village idiot, is programmed with the goal of improving its own intelligence. Once it does, it's smarter; maybe at this point it's at Einstein's level, so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion, and it's the ultimate example of The Law of Accelerating Returns.
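The feedback loop described above can be sketched in a few lines. In this toy model (the 50%-per-cycle improvement rate is an arbitrary assumption, purely illustrative), each cycle's gain is proportional to the system's current capability, which is exactly what makes the trajectory explode:

```python
# Toy model of recursive self-improvement: smarter systems make bigger
# leaps, because each cycle's gain scales with current capability.
# The rate constant is an arbitrary illustrative assumption.
def improve(capability, rate=0.5):
    """One self-improvement cycle."""
    return capability * (1 + rate)

level = 1.0            # baseline ("village idiot" on an arbitrary scale)
trajectory = [level]
for _ in range(20):
    level = improve(level)
    trajectory.append(level)

print(round(trajectory[1], 2))  # 1.5   -- the first leap looks modest
print(round(trajectory[20]))    # 3325  -- twenty cycles later, an explosion
```

Note that nothing in the loop changes between the modest early leaps and the enormous late ones; the explosion falls straight out of gains compounding on gains.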

There is some debate about how soon AI will reach human-level general intelligence. The median year on a survey of hundreds of scientists about when they believed we'd be more likely than not to have reached AGI was 2040; that's only 25 years from now, which doesn't sound that huge until you consider that many of the thinkers in this field think it's likely that the progression from AGI to ASI happens very quickly. Like, this could happen:

It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian economics. In our world, smart means a 130 IQ and stupid means an 85 IQ; we don't have a word for an IQ of 12,952.

What we do know is that humans' utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim, and this might happen in the next few decades.

Read the original here:

The AI Revolution: The Road to Superintelligence | Inverse

Friendly artificial intelligence – Wikipedia

A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive rather than negative effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained.

The term was coined by Eliezer Yudkowsky[1] to discuss superintelligent artificial agents that reliably implement human values. Stuart J. Russell and Peter Norvig’s leading artificial intelligence textbook, Artificial Intelligence: A Modern Approach, describes the idea:[2]

Yudkowsky (2008) goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design: to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.

‘Friendly’ is used in this context as technical terminology, and picks out agents that are safe and useful, not necessarily ones that are “friendly” in the colloquial sense. The concept is primarily invoked in the context of discussions of recursively self-improving artificial agents that rapidly explode in intelligence, on the grounds that this hypothetical technology would have a large, rapid, and difficult-to-control impact on human society.[3]

The roots of concern about artificial intelligence are very old. Kevin LaGrandeur showed that the dangers specific to AI can be seen in ancient literature concerning artificial humanoid servants such as the golem, or the proto-robots of Gerbert of Aurillac and Roger Bacon. In those stories, the extreme intelligence and power of these humanoid creations clash with their status as slaves (which by nature are seen as sub-human), and cause disastrous conflict.[4] By 1942 these themes prompted Isaac Asimov to create the "Three Laws of Robotics": principles hard-wired into all the robots in his fiction, intended to prevent them from turning on their creators or allowing humans to come to harm.[5]

In modern times as the prospect of superintelligent AI looms nearer, philosopher Nick Bostrom has said that superintelligent AI systems with goals that are not aligned with human ethics are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. He put it this way:

Basically we should assume that a ‘superintelligence’ would be able to achieve whatever goals it has. Therefore, it is extremely important that the goals we endow it with, and its entire motivation system, is ‘human friendly.’

Ryszard Michalski, a pioneer of machine learning, taught his Ph.D. students decades ago that any truly alien mind, including a machine mind, was unknowable and therefore dangerous to humans.[citation needed]

More recently, Eliezer Yudkowsky has called for the creation of friendly AI to mitigate existential risk from advanced artificial intelligence. He explains: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”[6]

Steve Omohundro says that a sufficiently advanced AI system will, unless explicitly counteracted, exhibit a number of basic “drives”, such as resource acquisition, because of the intrinsic nature of goal-driven systems and that these drives will, “without special precautions”, cause the AI to exhibit undesired behavior.[7][8]

Alexander Wissner-Gross says that AIs driven to maximize their future freedom of action (or causal path entropy) might be considered friendly if their planning horizon is longer than a certain threshold, and unfriendly if their planning horizon is shorter than that threshold.[9][10]

Luke Muehlhauser, writing for the Machine Intelligence Research Institute, recommends that machine ethics researchers adopt what Bruce Schneier has called the “security mindset”: Rather than thinking about how a system will work, imagine how it could fail. For instance, he suggests even an AI that only makes accurate predictions and communicates via a text interface might cause unintended harm.[11]

Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, coherent extrapolated volition is people’s choices and the actions people would collectively take if “we knew more, thought faster, were more the people we wished we were, and had grown up closer together.”[12]

Rather than a Friendly AI being designed directly by human programmers, it is to be designed by a “seed AI” programmed to first study human nature and then produce the AI which humanity would want, given sufficient time and insight, to arrive at a satisfactory answer.[12] The appeal to an objective though contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of “Friendliness”, is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.

Ben Goertzel, an artificial general intelligence researcher, believes that friendly AI cannot be created with current human knowledge. Goertzel suggests humans may instead decide to create an “AI Nanny” with “mildly superhuman intelligence and surveillance powers”, to protect the human race from existential risks like nanotechnology and to delay the development of other (unfriendly) artificial intelligences until and unless the safety issues are solved.[13]

Steve Omohundro has proposed a “scaffolding” approach to AI safety, in which one provably safe AI generation helps build the next provably safe generation.[14]

Stefan Pernar argues along the lines of Meno’s paradox to point out that attempting to solve the FAI problem is either pointless or hopeless depending on whether one assumes a universe that exhibits moral realism or not. In the former case a transhuman AI would independently reason itself into the proper goal system and assuming the latter, designing a friendly AI would be futile to begin with since morals can not be reasoned about.[15]

James Barrat, author of Our Final Invention, suggested that "a public-private partnership has to be created to bring A.I.-makers together to share ideas about security, something like the International Atomic Energy Agency, but in partnership with corporations." He urges AI researchers to convene a meeting similar to the Asilomar Conference on Recombinant DNA, which discussed risks of biotechnology.[14]

John McGinnis encourages governments to accelerate friendly AI research. Because the goalposts of friendly AI aren’t necessarily clear, he suggests a model more like the National Institutes of Health, where “Peer review panels of computer and cognitive scientists would sift through projects and choose those that are designed both to advance AI and assure that such advances would be accompanied by appropriate safeguards.” McGinnis feels that peer review is better “than regulation to address technical issues that are not possible to capture through bureaucratic mandates”. McGinnis notes that his proposal stands in contrast to that of the Machine Intelligence Research Institute, which generally aims to avoid government involvement in friendly AI.[16]

According to Gary Marcus, the annual amount of money being spent on developing machine morality is tiny.[17]

Some critics believe that both human-level AI and superintelligence are unlikely, and that therefore friendly AI is unlikely. Writing in The Guardian, Alan Winfield compares human-level artificial intelligence with faster-than-light travel in terms of difficulty, and states that while we need to be "cautious and prepared" given the stakes involved, we "don't need to be obsessing" about the risks of superintelligence.[18]

Some philosophers claim that any truly "rational" agent, whether artificial or human, will naturally be benevolent; in this view, deliberate safeguards designed to produce a friendly AI could be unnecessary or even harmful.[19] Other critics question whether it is possible for an artificial intelligence to be friendly. Adam Keiper and Ari N. Schulman, editors of the technology journal The New Atlantis, say that it will be impossible to ever guarantee "friendly" behavior in AIs because problems of ethical complexity will not yield to software advances or increases in computing power. They write that the criteria upon which friendly AI theories are based work "only when one has not only great powers of prediction about the likelihood of myriad possible outcomes, but certainty and consensus on how one values the different outcomes."[20]

Read the original here:

Friendly artificial intelligence – Wikipedia

Why won’t everyone listen to Elon Musk about the robot apocalypse? – Ladders

When Elon Musk is not the billionaire CEO running three companies, he has a side hustle as our greatest living prophet of the upcoming war between humans and machines.

In his latest public testimony about the dark future that awaits us all, Musk urged the United Nations to ban artificially intelligent killer robots. And he and other fellow prophets emphasized that we have no time. No time.

In an open letter to the U.N., Musk, along with 115 other experts in robotics, co-signed a grim future where artificial superintelligence would lead to lethal autonomous weapons that would bring "the third revolution in warfare."

And to add some urgency to the matter, the letter said that this future wasn't distant science fiction; it was a near and present danger.

"Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend," the letter states. "We do not have long to act. Once this Pandora's box is opened, it will be hard to close."

Although lethal autonomous weapons are not mainstream yet, they do already exist. Samsung's SGR-A1 sentry robot is reportedly used by the South Korean army to monitor the Korean Demilitarized Zone with guns capable of autonomous firing. Taranis, an unmanned combat air vehicle, is being developed by the U.K. So autonomous weapons are already here. It remains to be seen, however, if this brings a new World War.

This is not the first time Musk has sounded the alarm on machines taking over. Here's a look at all the ways Musk has tried to convince humanity of its impending doom.

And he hasn't been mild in his warnings. If you're going to get people to pay attention to your robot visions, you need to raise the stakes.

That's what Musk did when he told Massachusetts Institute of Technology students in 2014 that artificial intelligence was "our biggest existential threat." And in case he didn't get the students' attention there, Musk compared artificial intelligence research to a metaphor of Good and Evil.

"With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like, yeah, he's sure he can control the demon. Didn't work out," Musk said.

So what can humans use as prayer beads against these robotic demons? Musk thinks that we'll need to use artificial intelligence to beat artificial intelligence. In a Vanity Fair profile of his artificial intelligence ambitions, Musk said that the human A.I. collective could beat rogue algorithms that could arise with artificial superintelligence.

If you're not sold on A.I. being an existential threat to humanity, are you more alarmed when you consider a world where our robot overlords treat us like pets? This is an argument Musk tried in 2017.

When Musk founded his brain-implant company, Neuralink, in 2017, he needed to explain why developing a connection between brains and machines was necessary. As part of this press tour, he talked with Wait But Why about the background behind his latest company and the existential risk were facing with artificial intelligence.

"We're going to have the choice of either being left behind and being effectively useless, or like a pet, you know, like a house cat or something, or eventually figuring out some way to be symbiotic and merge with AI," Musk told the blog. "A house cat's a good outcome, by the way."

Musk meant that being housecats for the demonic robot overlords is the best possible outcome, of course. But its also worth considering that housecats are not only well-treated and largely adored, but also, by acclamation, came to dominate the internet. Humanity could do worse.

Our impending irrelevance means we'll have to become cyborgs to stay useful to this world, according to Musk. While computers can communicate at a trillion bits per second, Musk has said, we flawed humans are built with much slower bandwidth: our puny brains and sluggish fingers process information far more slowly. We will need to evolve past this to stay useful.

To do this, humans and robots will need to form a merger so that we can achieve "a symbiosis between human and machine intelligence, and maybe solves the control problem and the usefulness problem," Musk told an audience at the World Government Summit in Dubai in 2017, according to CNBC.

In other words, one day in the future, humans will have to join forces with artificial intelligence to keep up with the times or become the collared felines Musk fears well become without intervention.

What will it take for our robot prophet to be heard, so that his proclamations don't keep falling on deaf ears?

Although Musk may seem like a minority opinion now, his ideas around the threat of artificial intelligence are becoming more mainstream. For instance, his idea that we are living right now in a computer simulation staged by future scientists has been widely adopted.

Although Facebook CEO Mark Zuckerberg disagrees with Musk's dark future, more tech leaders are siding with Musk when it comes to killer robots. Alphabet's artificial intelligence expert, Mustafa Suleyman, was one of the U.N. open letter's signatories. In the past, Bill Gates has said that the intelligence in A.I. is strong enough to be a concern.

So we can laugh now at these outlandish science fiction worlds where we're robots' domestic pets. But Musk has been sounding the alarm for years and he has held firm to his beliefs. What may be one man's outlier theory now may become a reality in the future. If nothing else, he's making sure you listen.

Monica Torres is a reporter for Ladders. She is based in New York City and can be reached at mtorres@theladders.com.

Visit link:

Why won’t everyone listen to Elon Musk about the robot apocalypse? – Ladders

Being human in the age of artificial intelligence – Science Weekly podcast – The Guardian

In 2014, a new research and outreach organisation was born in Boston. Calling itself The Future of Life Institute, its founders included Jaan Tallinn – who helped create Skype – and a physicist from Massachusetts Institute of Technology. That physicist was Professor Max Tegmark.

With a mission to help safeguard life and develop optimistic visions of the future, the Institute has focused largely on Artificial Intelligence (AI). Of particular concern is the potential for AI to leapfrog humans and achieve so-called superintelligence, something discussed in depth in Tegmark's latest book, Life 3.0. This week Ian Sample asks the physicist and author what would happen if we did manage to create superintelligent AI. Do we even know how to build human-level AI? And with no sign of computers outsmarting us yet, why talk about it now?

The rest is here:

Being human in the age of artificial intelligence – Science Weekly podcast – The Guardian

Infographic: Visualizing the Massive $15.7 Trillion Impact of AI – Visual Capitalist (blog)

on August 21, 2017 at 12:24 pm

For the people most immersed in the tech sector, it's hard to think of a more controversial topic than the ultimate impact of artificial intelligence (AI) on society.

By eventually empowering machines with a level of superintelligence, there are many different possible outcomes, ranging from Kurzweil's technological singularity to the more dire predictions popularized by Elon Musk.

Despite this wide gap in potential outcomes, most technologists do agree on one thing: AI will have a profound impact on society and the way we do business.

Today's infographic comes from the Extraordinary Future 2017, a new conference in Vancouver, BC, that focuses on emerging technologies such as AI, autonomous vehicles, fintech, and blockchain tech.

In the infographic below, we look at recent projections from PwC and Accenture regarding AI's economic impact, as well as the industries and countries that will be most profoundly affected.

According to PwC's most recent report on the topic, the impact of artificial intelligence (AI) will be transformative.

By 2030, AI is expected to provide a $15.7 trillion boost to GDP worldwide, the equivalent of adding 13 new Australias to the global economy.

Where will AIs impact be most pronounced?

According to PwC, China will be the region receiving the most economic benefit ($7.0 trillion) from AI being integrated into various industries:

Further, the global growth from AI can be divided into two major areas, according to PwC: labor productivity improvements ($6.6 trillion) and increased consumer demand ($9.1 trillion).
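The cited figures are easy to sanity-check with a quick arithmetic pass; the Australian GDP figure below is inferred from the "13 new Australias" comparison, not stated in the report:

```python
# Sanity-checking the PwC projections cited above (trillions of USD, by 2030).
productivity_gain = 6.6   # labor productivity improvements
demand_gain = 9.1         # increased consumer demand

total = round(productivity_gain + demand_gain, 1)
print(total)  # 15.7 -- the two components sum to the headline figure

# "13 new Australias" implies an assumed Australian GDP of roughly $1.2T.
print(round(total / 13, 2))  # 1.21
```

The two PwC components do add up to the headline $15.7 trillion, and dividing by 13 lands near Australia's GDP at the time, so the comparisons in the article are internally consistent.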

But how will AI impact industries on an individual level?

For that, we turn to Accenture's recent report, which breaks down a similar projection of $14 trillion of gross value added (GVA) by 2035, with estimates for AI's impact on specific industries.

Manufacturing will see nearly $4 trillion in growth from AI alone and many other industries will undergo significant changes as well.

To learn more about other tech that will have a big impact on our future, see a Timeline of Future Technology.

Jeff Desjardins is a founder and editor of Visual Capitalist, a media website that creates and curates visual content on investing and business.

See original here:

Infographic: Visualizing the Massive $15.7 Trillion Impact of AI – Visual Capitalist (blog)

The Musk/Zuckerberg Dustup Represents a Growing Schism in AI – Motherboard

Frank White is the author of The Overview Effect: Space Exploration and Human Evolution. He is working on a book about artificial intelligence.

Recently, two tech heavyweights stepped into the social media ring and threw a couple of haymakers at one another. The topic: artificial intelligence (AI) and whether it is a boon to humanity or an existential threat.

Elon Musk, founder and CEO of SpaceX and Tesla, has been warning of the dangers posed by AI for some time and called for its regulation at a conference of state governors in July. In the past, he has likened AI to “summoning the demon,” and he founded an organization called OpenAI to mitigate the risks posed by artificial intelligence.

Facebook founder Mark Zuckerberg took a moment while sitting in his backyard and roasting some meat to broadcast a Facebook Live message expressing his support for artificial intelligence, suggesting that those urging caution were “irresponsible.”

Musk then tweeted that Zuckerberg’s understanding of AI was “limited.”

The Musk/Zuckerberg tiff points to something far more important than a disagreement between two young billionaires. There are two distinct perspectives on AI emerging, represented by Musk and Zuckerberg, but the discussion is by no means limited to them.

This debate has been brewing under the surface for some time, but has not received the attention it deserves. AI is making rapid strides and its advent raises a number of significant public policy questions, such as whether developments in this field should be evaluated in regard to their impact on society, and perhaps regulated. It will doubtless have a tremendous impact on the workplace, for example. Let’s examine the underlying issues and how we might address them.

Perhaps the easiest way to sort out this debate is to consider, broadly, the positive and negative scenarios for AI in terms of its impact on humankind.

The negative scenario, which has been personified by Musk, goes something like this: What we have today is specialized AI, which can accomplish specific tasks as well as, if not better than, humans. This is not a matter of concern in and of itself. However, some believe it will likely lead to artificial general intelligence (AGI) that is not only equal to human intelligence but also able to master any discipline, from picking stocks to diagnosing diseases. This is uncharted territory, but the concern is that AGI will almost inevitably lead to Superintelligence, a system that will outperform humans in every domain and perhaps even have its own goals, over which we will have no control.

At that point, known as the Singularity, we will no longer be the most intelligent species on the planet and no longer masters of our own fate.

In the scariest visions of the post-Singularity future, the hypothesized Superintelligence may decide that humans are a threat and wipe us out. More hopeful, but still disturbing views, such as that of Apple co-founder Steve Wozniak, suggest that we humans will eventually become the “pets” of robots.

The positive scenario, recently associated with Zuckerberg, goes in a different direction: It emphasizes more strongly that specialized AI is already benefiting humanity, and we can expect more of the same. For example, AIs are being applied to diagnosing diseases and they are often doing a better job than human doctors. Why, ask the optimists, do we care who does the work, if it benefits patients? Then we have mainstream applications of AI assistants like Siri and Alexa, which (or who?) are helping people manage their lives and learn more about the world just by asking.

Optimistic observers believe that AGI will be difficult to achieve (it won't happen overnight) and that we can build in plenty of safeguards before it emerges. Others suggest that AGI, and anything beyond it, is a myth.

If we can achieve AGI, the optimistic view is that we will build on previous successes and deploy technologies like driverless cars, which will save thousands of human lives every year. As for the Singularity and Superintelligence, advocates of the positive scenario see these developments as more an article of faith than a scientific reality. And again, we have plenty of time to prepare for these eventualities.

The AI pessimists and optimists may seem locked into their own worldviews, with little apparent overlap between their projected futures. This leaves us with tweetstorms and Facebook Live jabs rather than a collaborative effort to manage a powerful technology.

However, there is one topic on which both sides tend to agree: AI is already having, and will continue to have, tremendous impact on jobs.

Speaking recently at a Harvard Business School event, Andrew Ng, the cofounder of Coursera and former chief scientist at Baidu, said that based on his experience as an “AI insider,” he did not “see a clear path for AI to surpass human-level intelligence.”

On the other hand, he asserted that job displacement was a huge problem, and “the one that I wish we could focus on, rather than be distracted by these science fiction-ish, dystopian elements.”

Ng seems to confirm the optimistic view that Superintelligence is unlikely, and therefore the thrust of his comments center on the future of work and whether we are adequately prepared. Looking at just one sector of the economy, transport, it isn’t hard to see that he has a point. If driverless cars and trucks do become the norm, thousands if not millions of people who drive for a living will be out of work. What will they do?

As the Musk/Zuckerberg argument unfolds, let’s hope it sheds light on a significant challenge that has gone unnoticed for far too long. Forging a public policy response represents an opportunity for the optimists and pessimists to collaborate rather than debate.

More here:

The Musk/Zuckerberg Dustup Represents a Growing Schism in AI – Motherboard

The end of humanity as we know it is ‘coming in 2045’ and Google is preparing for it – Metro

Robots will reach human intelligence by 2029 and life as we know it will end in 2045.

This isn't the prediction of a conspiracy theorist, a blind dead woman or an octopus, but of Google's chief of engineering, Ray Kurzweil.

Kurzweil has said that the work happening now will change the nature of humanity itself.

Tech company Softbank's CEO Masayoshi Son predicts it will happen in 2047.

And it's all down to the many complexities of artificial intelligence (AI).

AI is currently limited to Siri- or Alexa-like voice assistants that learn from humans, Amazon's "things you might also like" recommendations, machines like Deep Blue, which has beaten grandmasters at chess, and a few other examples.

But the Turing test, where a machine exhibits intelligence indistinguishable from a human, has still not been fully passed.

Not yet, at least.

What we have at the moment is known as narrow AI: intelligent at doing one thing, or a narrow selection of tasks, very well.

General AI, where humans and robots are comparable, is expected to show breakthroughs over the next decade.

They become adaptable and able to turn their hand to a wider variety of tasks, in the same way as humans have areas of strength but can accomplish many things outside those areas.

This is when the Turing Test will truly be passed.

The third step is ASI, artificial super-intelligence.

ASI is the thing that the movies are so obsessed with, where machines are more intelligent and stronger than humans. It always felt like a distant dream, but predictions are that it's getting closer.

People will be able to upload their consciousness into a machine, it is said, by 2029, when the machine will be as powerful as the human brain; ASI, or the singularity, will happen, Google predicts, in 2045.

There are many different theories about what this could mean, some more scary than others.

The technological singularity, as it is called, is the moment when artificial intelligence takes off into artificial superintelligence and becomes exponentially more intelligent, more quickly.

As self-improvement becomes more efficient, it would get quicker and quicker at improvement until the machine became infinitely more intelligent infinitely quickly.

In essence, the extreme end of this theory concludes with a machine of God-like abilities recreating itself, infinitely more powerfully, an infinite number of times, in less than the blink of an eye.
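The runaway feedback loop described here can be made concrete with a toy model. This is purely illustrative: the growth and speed-up factors below are invented numbers, not anyone's prediction.

```python
# Toy model of the "intelligence explosion" idea (illustrative only).
# Each self-improvement cycle multiplies capability by `gain`, and,
# because a smarter system improves itself faster, the next cycle
# takes 1/`speedup` as long as the one before it.

def intelligence_explosion(cycles=10, gain=2.0, speedup=2.0):
    """Return a list of (capability, elapsed_time) after each cycle."""
    capability, cycle_time, elapsed = 1.0, 1.0, 0.0
    history = []
    for _ in range(cycles):
        elapsed += cycle_time   # this cycle takes `cycle_time` units
        capability *= gain      # ...and multiplies capability
        cycle_time /= speedup   # ...while making the next cycle faster
        history.append((capability, elapsed))
    return history

history = intelligence_explosion()
# Capability grows geometrically (2**10 = 1024x after ten cycles), while
# total elapsed time converges toward a finite limit (here, 2 time units):
# the "infinitely more intelligent, infinitely quickly" intuition.
print(history[-1])
```

With these parameters the model reaches 1024x capability in just under 2 time units; the point is the shape of the curve, not the particular numbers.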

"We project our own humanist delusions on what life might be like [when artificial intelligence reaches maturity]," philosopher Slavoj Žižek says.

The very basics of what a human being will be will change.

But technology never stands on its own. It's always in a set of relations and part of society.

Society, however it develops, will need to catch up with technology. If it doesn't, there is a risk that technology will overtake it and make human society irrelevant at best and extinct at worst.

One of the theories asserts that once we upload our consciousness into a machine, we become immortal and remove the need to have a physical body.

Another has us being unable to keep up with true artificial intelligence, so humanity is left behind as infinitely intelligent AI explores the Earth and/or the universe without us.

The third, and perhaps the scariest, is the sci-fi one: once machines become aware of humanity's predilection to destroy anything it is scared of, AI acts first to preserve itself at the expense of humans, and humanity is wiped out.

All this conjures up images of Blade Runner, of I, Robot, and all sorts of Terminator-like dystopian nightmares.

"In my lifetime, the singularity will happen," Alison Lowndes, head of AI developer relations at technology company Nvidia, tells Metro.co.uk at the AI Summit.

"But why does everyone think they'd be hostile?

"That's our brain assuming it's evil. Why on earth would it need to be? People are just terrified of change."

These people still struggle with the idea that your fridge might know what it contains.

Self-driving cars, which will teach themselves the nuances of each road, still frighten a lot of people.

And this is still just narrow AI.

Letting a car drive for us is one thing, letting a machine think for us is quite another.

"The pace of innovation and the pace of impact on the population is getting quicker," Letitia Cailleteau, global head of AI at consultancy Accenture, tells Metro.co.uk.

"If you take cars, for example, it took around 50 years to get 50 million cars on the road.

"If you look at the latest innovations, it only takes a couple of years, like Facebook, to have the same impact.

"The pace of innovation is quicker. AI will innovate quickly, even though it is hard to predict how quickly that will be."

But, as with all doomsday predictions, there is a lot of uncertainty. It turns out predicting the future is hard:

"Computer scientists, philosophers and journalists have never been shy to offer their own definite prognostics, claiming AI to be impossible or just around the corner or anything in between," the Machine Intelligence Research Institute wrote.

Steven Pinker, a cognitive scientist at Harvard, puts it more simply:

"The increase in understanding of the brain or evolutionary genetics has followed nothing like [the pace of technological innovation]," Pinker has said.

"I don't see any sign that we'll attain it."

Yet there are already those who think we're part of the way there.

"We're already in a state of transhumanism," author and journalist Will Self says.

"Technology happens to humans rather than humans playing a part in it."

The body can already be augmented with machinery, either internally or externally, and microchips have been inserted into a workforce.

On a more everyday level, when you see people just staring at their phones, are we really that far away from a point when humans and machines are one and the same?

Will we be wiped out by machine overlords? Maybe we need a … – PBS NewsHour

JUDY WOODRUFF: Now: the fears around the development of artificial intelligence.

Computer superintelligence is a long, long way from the stuff of sci-fi movies, but several high-profile leaders and thinkers have been worrying quite publicly about what they see as the risks to come.

Our economics correspondent, Paul Solman, explores that. It's part of his weekly series, Making Sense.

ACTOR: I want to talk to you about the greatest scientific event in the history of man.

ACTOR: Are you building an A.I.?

PAUL SOLMAN: A.I., artificial intelligence.

ACTRESS: Do you think I might be switched off?

ACTOR: It's not up to me.

ACTRESS: Why is it up to anyone?

PAUL SOLMAN: Some version of this scenario has had prominent tech luminaries and scientists worried for years.

In 2014, cosmologist Stephen Hawking told the BBC:

STEPHEN HAWKING, Scientist (through computer voice): I think the development of full artificial intelligence could spell the end of the human race.

PAUL SOLMAN: And just this week, Tesla and SpaceX entrepreneur Elon Musk told the National Governors Association:

ELON MUSK, CEO, Tesla Motors: A.I. is a fundamental existential risk for human civilization. And I don't think people fully appreciate that.

PAUL SOLMAN: OK, but what's the economics angle? Well, at Oxford University's Future of Humanity Institute, founding director Nick Bostrom leads a team trying to figure out how best to invest in, well, the future of humanity.

NICK BOSTROM, Director, Future of Humanity Institute: We are in this very peculiar situation of looking back at the history of our species, 100,000 years old, and now finding ourselves just before the threshold to what looks like it will be this transition to some post-human era of superintelligence that can colonize the universe, and then maybe last for billions of years.

PAUL SOLMAN: Philosopher Bostrom has been perhaps the most prominent thinker about the benefits and dangers to humanity of what he calls superintelligence for many years.

NICK BOSTROM: Once there is superintelligence, the fate of humanity may depend on what that superintelligence does.

PAUL SOLMAN: There are plenty of ways to invest in humanity, he says, giving money to anti-disease charities, for example.

But Bostrom thinks longer-term, about investing to lessen existential risks, those that threaten to wipe out the human species entirely. Global warming might be one. But plenty of other people are worrying about that, he says. So, he thinks about other risks.

What are the greatest of those risks?

NICK BOSTROM: The greatest existential risks arise from certain anticipated technological breakthroughs that we might make, in particular, machine superintelligence, nanotechnology, and synthetic biology, fundamentally because we don't have the ability to uninvent anything that we invent.

We don't, as a human civilization, have the ability to put the genie back into the bottle. Once something has been published, then we are stuck with that knowledge.

PAUL SOLMAN: So Bostrom wants money invested in how to manage A.I.

NICK BOSTROM: Specifically on the question, if and when in the future you could build machines that were really smart, maybe superintelligent, smarter than humans, how could you then ensure that you could control what those machines do, that they were beneficial, that they were aligned with human intentions?

PAUL SOLMAN: How likely is it that machines would develop basically a mind of their own, which is what you're saying, right?

NICK BOSTROM: I do think that advanced A.I., including superintelligence, is a sort of portal through which humanity will have passage, assuming we don't destroy ourselves prematurely in some other way.

Right now, the human brain is where it's at. It's the source of almost all of the technologies we have.

PAUL SOLMAN: I'm relieved to hear that.

(LAUGHTER)

NICK BOSTROM: And the complex social organization we have.

PAUL SOLMAN: Right.

NICK BOSTROM: It's why the modern condition is so different from the way that the chimpanzees live.

It's all through the human brain's ability to discover and communicate. But there is no reason to think that human intelligence is anywhere near the greatest possible level of intelligence that could exist, that we are sort of the smartest possible species.

I think, rather, that we are the stupidest possible species that is capable of creating technological civilization.

PAUL SOLMAN: And capable of creating technology that has begun to surpass us, first in chess, then in Jeopardy, now in the supposedly impossible game for a machine to win, Go.

This is just task-oriented software, some have argued, and not really intelligence at all. Moreover, whatever you call it, there will be enormous benefits, says Bostrom.

On the other hand, if we approach real intelligence, it could also become a threat. Think of Ex Machina or The Matrix or Elon Musk's fantasy fear this week about advanced A.I.

ELON MUSK: Well, it could start a war by creating fake news and spoofing e-mail accounts and fake press releases, and just by, you know, manipulating information. The pen is mightier than the sword.

PAUL SOLMAN: So, this is going to be a cat-and-mouse game between us and the intelligence?

NICK BOSTROM: That would be one model. One line of attack is to try to leverage the A.I.'s intelligence to learn what it is that we value and what we want it to do.

PAUL SOLMAN: In order to protect ourselves from what could be a truly existential risk.

So, how do you get the greatest good for the greatest number of present and future human beings? It might be to invest now in controlling the evolution of artificial intelligence.

For the PBS NewsHour, this is economics correspondent Paul Solman, reporting from Oxford, England.

Giving Up the Fags: A Self-Reflexive Speech on Critical Auto-ethnography About the Shame of Growing up Gay/Sexual … – The Good Men Project (blog)

Editor's note: In British English, the word "fag" means cigarette.

I am studying for my Ph.D. in the College of Education at Victoria University, Melbourne. I'll introduce with a quote from Springgay, from the handbook Being with A/r/tography (my methodology), where she writes "there is no need to separate the personal from the professional any more than we can separate the dancer from the dance" (Springgay, p. 5).

Hopefully, by the end of this essay, you will see why that is important to me in terms of critical auto-ethnographical and autobiographical, practice-led writing and research.

I am working with young people for my Ph.D. Specifically, year eleven students. What will it mean to be human through the lens of technology in the near future? is the broad central theme. I am writing a six-week curriculum exploring artificial intelligence, and the anticipated superintelligence that will further enable transhumanism. What do young people ethically think of living in a post-human world?

But that is not what this essay is about.

In my youth and adolescence, I felt I had no non-prejudiced person to validate my emotional or ethical life. As a now forty-four-year-old adult, I want to be that person for these kids, allowing them to voice their concerns and for them to be heard.

My intuition that led me to want to work with young people is multifaceted and, as it turns out, complex. In the first instance, I have worked with young people before discussing mental health issues (as per my lived experience of schizophrenia), and drug use and abuse in many pedagogical settings in the past. I have valued and enjoyed hearing young peoples candidness. I have no children of my own.

I was exposed to things of a sexual nature from two abusive peers that I need not have seen.

For my presentation, I would like to read an abridged and sometimes for me emotional introduction to my exegesis. Through auto-ethnographical and autobiographical, practice-led writing, it has led to some intensely personal and stunning revelations. I feel this adds to my justifications of working with young people and needed to be addressed before my research commenced.

Just before I start this narrative piece, I would like to quote Jones: "Autoethnography uses the researcher's personal experiences as primary data."

Just before Christmas in 2016, I gave up smoking. This was for health reasons, as I was getting unfit and short of breath. Another reason was to avoid feeling ostracized with the proliferation of non-smoking zones. Being ostracized is also a feeling I have felt throughout my life. It was also to save money and literally have enough prosperity so that I could put a roof over my head to finish this Ph.D.

I only expected to give up smoking. What happened next was totally unexpected. It is a bit like the outcome of this novella I am writing for my Ph.D., for the result is beyond an event horizon in which no one knows the outcome.

The occurrence of giving up smoking, however, wove itself into this Ph.D. narrative and is a vehicle by which I can place my more self-actualised identity within the framework of my study.

It also goes part way to justify why it is that I want to work with young people, apart from the fact they are familiar with technology and will inherit this fast changing technological world.

As a young queer person with a mental illness, I did not think I ever received much validation. I did not have the capacity nor the opportunity to express myself in many ways, and with the onset of depression, addiction and psychosis, that coupled itself with isolation and ostracisation, I did not ever have the opportunity to.

This being said, I had wonderful parents in many ways growing up, and other well-meaning relatives around. However, growing up in the eighties AIDS crisis, to feel like anything other than heteronormative was difficult. The television broadcast the shock tactics of the Grim Reaper killing people with AIDS. Adults and children alike had eyes and ears during my formative years. We had a close family, they were all wonderful, but to be gay, that was bad.

I recall Mum at the park when I was young: "Don't go near those toilets without me, bad men go there." Mum was caring and expressing herself from a well of love and protectiveness. She was a great Mum.

With my developing self-awareness, I further want to be a non-prejudiced and open person for young people to relate to with candidness and openness.

When I gave up smoking, unconsciously I went into self-destruct mode for a while, a sort of self-medicating and hedonistic coping mechanism. After some months, it suddenly dawned on me that I had undergone inappropriate sexual abuse and sexual exposure when I was a child.

Two abusive peers exposed me to things of a sexual nature that I need not have seen. I had also been flashed and was shown an adults genitals by someone very close to my home whom I and the family trusted.

The memories started to rush in. At another, separate event, which I can't quite remember and don't want to, an incident occurred at the toilets at little athletics when I was about eight years old. I only put weight to this sketchy memory because, even though I loved little aths and was good at it, I never went back after the incident despite my father's pleas.

After that incident at little aths, I remember being so scared of, and avoiding the toilet so much, that I recall going home one afternoon from little aths having not urinated all day and Dad popping into the milk bar to buy the paper as he used to.

Having avoided the scene of the indecency, I could not hold on anymore, so I pissed in a McDonald's cup in the front seat of our family Volkswagen, snuck out of the car and put it in the bin before Dad came back, such was my shame.

Bad people go there. To be gay was bad. This meant that I was bad. This was ingrained from a young age.

I carried that guilt and shame for most of my childhood, all my adolescence and adult life.

I had always remembered the abuse, yet I did not ever consciously give it voice or give it any weight. However, as I wrote more, I received counsel from my psychologist for the additional memories. For the longest time, my whole life, in fact, I had made decisions as an adolescent and an adult that had their genesis in the non-validation of the abuse.

This included drug-taking and other risky behavior, constantly changing the location of where I lived, running away, squatting in disheveled housing at times, being jobless, not confident and not knowing why, financially bereft, emotionally traumatized, and overactive sexual misadventures.

It also manifested in choosing life partners and company which I settled for, yet deserved much more. I have no doubt that my self-denial of what had happened to me added to and exacerbated my diagnosis of schizophrenia from age twenty over my lifetime.

Smoking for me was literally a smokescreen for nearly 23 years.

It was the reason not to remember, the affirmation that I as a person was not worthy. I did not care for myself. At the start, it was rebellious; it was also something I started to do when I was young that I knew I was not allowed to: it was taboo. I as a young person had known taboo with abuse and prejudice, but the taboo of smoking was something that I myself was in control of.

This was the antithesis of the abusive and inappropriate events that happened to me growing up; of being vulnerable and exposed, and then not having the opportunity to express or validate what had happened.

Such was my lack of self-esteem, I knew it would kill me: it said so on the pack! This self-destructive beast took over my life from age thirteen.

It had become my addiction and best friend. It was a smokescreen for the memories that I had pushed deep into the wells of my subconscious. I remember, throughout many psychoses and depressive episodes in my adolescence and adulthood, wanting and wishing I could die.

There were also a couple of brazen attempts, which thankfully did not work.

Ethnographically, on our televisions and on the news, gay people died of AIDS. Even in primary school, I had crushes on the boys and crushes on the girls. What if I was gay? Maybe I deserved to die? Have another smoke!

I did not really answer that question of "Was I gay?" with certainty and confidence until I was twenty-five, had moved out of home, and got myself a job as an artist and illustrator for a major Melbourne newspaper. I needed a place to be safe when I finally did come out.

Smoking the fags meant:

I did not deserve to live (because it would kill me),

or be prosperous, (because it cost so much).

Then, I gave them up.

A change occurred that made me feel like I was a worthy person. I uncovered all the memories of the sexual abuse, of the complex family relationships within a complex time and how this had manifested into my adult life.

This surprising re-birth happened fast.

Giving up the fags was a journey of healing, and this short speech is a testament to that. It is the process of owning your experiences (both conscious and subconscious) and being responsible for your greatest happiness and highest good.

To be a self-actualized adult you must be aware of your history, your make-up, your relationships and your memories, and be fully conscious of it all; yet for me, the illusion of the smokescreen of smoking kept me from this.

In essence, to validate and be reborn from a troubling past, I had to confront the self within an autobiographical and autoethnographic narrative. This is the essential practice-led writing that has unblocked me from moving forward within my Ph.D. and within my personal life.

This public statement, writing and talking both frees me and also encourages my future happiness, and dare I say prosperity and security in a multitude of ways. This is the piece of writing, and the public testimony, that exalts me and sets me free. It will also make me a better teacher and more self-actualized researcher.

My psychologist wrote something down for me a couple of months ago, something I said which he skillfully reminded me of:

31/01/2017

I deserve a future,

I deserve a life,

I am worthy.

I deserve to be heard, and to live with wealth, happiness and prosperity.

Giving up the fags was a revelation, if a late one, at age forty-four. However, I am sure we all know some people don't make it. But to feel self-worth and be listened to?

This is what the young people in my Ph.D. study, and young people everywhere, deserve to feel. We owe it to them as mentors, parents, and teachers.

So, I am no longer a smoker. I do still vape, though. This essay has been important to me as a public statement because I rightly and justly reclaimed my worth.

These were the words I needed to say which came from me and no one else, in order to move forward with my autobiographic writing of reflecting on being a young person, so I can be of service to my students and go on to co-contribute to produce global knowledge from local settings.

These challengingly spoken words of intimacy and trauma had existed, kicking and screaming, in subliminality, right up into and strongly influencing my adult life.

This writing, my decisions, and this speech are a release, a healing, a process, a validation. Also, a manifesto of sorts for the role I will play in listening to and validating young people's concerns in terms of my Ph.D. topic.

If I could right now, I'd take a drag on my vape. And I'm on my way.

Thank you for reading.

ON CRITICAL AUTOETHNOGRAPHY:

To quote an early text from C. Wright Mills (1959), from Jones's Handbook of Autoethnography, before the term autoethnography existed:

"The sociological imagination enables us to grasp history and biography and the relations between the two in society. The challenge is to develop a methodology that allows us to examine how the private troubles of individuals are connected to public issues and to public responses to these troubles. That is its task and its promise. Individuals can understand their own experience and gauge their own fate only by locating themselves within their historical period" (pp. 56, slight paraphrase).

(Jones 1,2,3)

Jones, Stacy H. Handbook of Autoethnography. Routledge, 2016. VitalBook file.

Furthermore, Carolyn Ellis (2004) defines autoethnography as "research, writing, story, and method that connect the autobiographical and personal to the cultural, social, and political" (p. xix).

Please share this article if it resonated with you. Thank you.

See more about Rich McLean at his website, www.richmclean.com.au

This article originally appeared on LinkedIn

Photo credit: Getty Images

AI researcher: Why should a superintelligence keep us around? – TNW

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become the destroyer of worlds, as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in 2001: A Space Odyssey, is a good example of a system that fails because of unintended consequences. In many complex systems, the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant, engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster, sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia, a set of relatively small failures combined to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.

I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.
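The generational loop described here, evaluate, select the best, reproduce with variation, can be sketched in a few lines. This is a generic, hypothetical illustration of neuroevolution-style selection, not the author's actual system; the three-number "genome" and the target-matching fitness function are invented for the example.

```python
import random

random.seed(0)  # make the illustrative run repeatable

TARGET = [0.5, -0.25, 0.75]  # hypothetical "ideal behavior" parameters

def fitness(genome):
    # Higher is better: negative squared distance to the target behavior.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Reproduce with small random variation.
    return [g + random.gauss(0, rate) for g in genome]

def evolve(pop_size=50, generations=100, elite=10):
    # Random initial population of tiny "brains" (weight vectors).
    population = [[random.uniform(-1, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate performance and keep the best performers (selection).
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]
        # Fill the next generation with mutated copies of the elite.
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - elite)]
    return max(population, key=fitness)

best = evolve()
print(best)  # after many generations, close to TARGET; fitness near 0
```

The property the article relies on falls out of the loop itself: a genome that mishandles the task scores poorly and simply never gets selected, so errors are weeded out generation by generation.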

Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and a factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected and get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self, together with the rest of humanity, may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

If this guy comes for you, how will you convince him to let you live?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankinds existence in it probably doesnt matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldnt just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we dont find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.

This article was originally published on The Conversation. Read the original article.

Science and Technology News on The Conversation

Read next: Xiaomi’s tablet-sized Mi-Max 2 desperately wants to be a phone

Read the original post:

AI researcher: Why should a superintelligence keep us around? – TNW

What an artificial intelligence researcher fears about AI – Huron Daily … – Huron Daily Tribune

(The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.)

Arend Hintze, Michigan State University

(THE CONVERSATION) As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become the destroyer of worlds, as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in 2001: A Space Odyssey, is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant), engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia), a set of relatively small failures combined to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM's Watson and Google's AlphaGo equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.

I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
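
The evaluate-select-reproduce loop described above can be sketched in a few lines. This is a minimal toy illustration of the evolutionary approach, not Hintze's actual research code; the single-weight "brain" and the doubling task are invented for the example.

```python
import random

TASK_INPUTS = [1, 2, 3, 4]

def fitness(brain):
    # Toy "brain": a single weight w, scored on how closely its
    # output w*x matches a target behavior (here, doubling the input).
    return -sum((2 * x - brain[0] * x) ** 2 for x in TASK_INPUTS)

def evolve(generations=100, pop_size=20):
    # Random initial population of one-weight brains.
    population = [[random.uniform(-1.0, 3.0)] for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate and select: the best-performing half survives.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduce with mutation to form the next generation.
        children = [[b[0] + random.gauss(0.0, 0.1)] for b in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(best[0])  # drifts toward 2.0, the weight that solves the task
```

Real neuroevolution systems evolve full neural-network weights (and sometimes topologies) in exactly this way: evaluate, select, mutate, repeat.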

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.

Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and a factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected and get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self together with the rest of humanity may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time, somewhere between 50 and 250 years depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only the very few who possess all the means of production.

This article was originally published on The Conversation. Read the original article here: http://theconversation.com/what-an-artificial-intelligence-researcher-fears-about-ai-78655.


Integrating disciplines 'key to dealing with digital revolution' – Times Higher Education (THE)

Universities must embrace cross-disciplinary education and research in order to deal with the megatrends of the fourth industrial revolution, according to the president of the Korea Advanced Institute of Science and Technology (KAIST).

Speaking at the Times Higher Education Research Excellence Summit in Taichung, Taiwan, Sung-Chul Shin said that these challenges were particularly pressing in South Korea, which he described as being at a "stall point" where it can either continue developing as an advanced nation or get stuck with a stagnant economy.

The fourth industrial revolution, also known as the digital revolution, is predicted to change the way we live, work and relate to each other. It represents the integration of technology between the physical, digital and biological worlds.

Professor Shin said that there are three megatrends that will drive the fourth industrial revolution: hyperconnectivity, which is the integration of the physical and digital fields; superintelligence, which will draw on artificial intelligence and computer science; and the convergence of science and technology.

Universities should play a central role in developing the fourth industrial revolution, he said.

In meeting the challenges posed by the digital revolution, universities will need to bring research across various disciplines together, in order to achieve better results than when professors work in a single specialism, Professor Shin said. Research involving the areas of artificial intelligence, brain research, data science and computer science will be key, he added.

International collaboration was also vital, he continued, pointing out that Korea invests only a fraction of its research budget in brain science research compared with the US and Japan. "We cannot compete so we have to collaborate," Professor Shin said.

Concerning all megatrends, university reform is urgent, he added.

Professor Shin argued that students will need an education in the humanities and social sciences in addition to strong training in basic science and engineering, in order to improve their creative talents.

"Team-based learning and the flipped classroom are very important to fostering these skills in the next generation," he told the conference.

South Korea's major research and development policies for the future include expanding investment and improving the efficiency of research by streamlining the planning, management and evaluation of research projects, the conference heard. The government is also developing a series of strategic research priorities.

"As the Korean government adopts the fourth industrial revolution, KAIST will play a pivotal role," Professor Shin said. "Korea is near a stall point; it is either destined to solidify its place as an advanced nation or be caught in the middle-income trap with a stagnant economy."

holly.else@timeshighereducation.com


The AI Revolution: The Road to Superintelligence | Inverse

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are.

This article originally appeared on Wait But Why by Tim Urban. This is Part 1; Part 2 is here.

We are on the edge of change comparable to the rise of human life on Earth.

-Vernor Vinge

What does it feel like to stand here?

It seems like a pretty intense place to be standing, but then you have to remember something about what it's like to stand on a time graph: You can't see what's to your right. So here's how it actually feels to stand there:

Which probably feels pretty normal

Imagine taking a time machine back to 1750, a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It's impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone's face and chat with them even though they're on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn't be surprising or shocking or even mind-blowing; those words aren't big enough. He might actually die.

But here's the interesting thing: if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he'd take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things, but he wouldn't die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 and 2015. The 1500 guy would learn some mind-bending shit about space and physics, he'd be impressed with how committed Europe turned out to be with that new imperialism fad, and he'd have to do some major revisions of his world map conception. But watching everyday life go by in 1750 (transportation, communication, etc.) definitely wouldn't make him die.

No, in order for the 1750 guy to have as much fun as we had with him, he'd have to go much farther back, maybe all the way to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world, from a time when humans were, more or less, just another animal species, saw the vast human empires of 1750, with their towering churches, their ocean-crossing ships, their concept of being inside, and their enormous mountain of collective, accumulated human knowledge and discovery, he'd likely die.

And then what if, after dying, he got jealous and wanted to do the same thing? If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he'd show the guy everything and the guy would be like, "Okay, what's your point? Who cares?" For the 12,000 BC guy to have the same fun, he'd have to go back over 100,000 years and get someone he could show fire and language to for the first time.

In order for someone to be transported into the future and die from the level of shock they'd experience, they have to go enough years ahead that a "die level" of progress, or a Die Progress Unit (DPU), has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.

This pattern, human progress moving quicker and quicker as time goes on, is what futurist Ray Kurzweil calls human history's Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies, because they're more advanced. 19th-century humanity knew more and had better technology than 15th-century humanity, so it's no surprise that humanity made far more advances in the 19th century than in the 15th century; 15th-century humanity was no match for 19th-century humanity.

This works on smaller scales too. The movie Back to the Future came out in 1985, and "the past" took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes, but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones; today's Marty McFly, a teenager born in the late '90s, would be much more out of place in 1985 than the movie's Marty McFly was in 1955.

This is for the same reason we just discussed, the Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985, because the former was a more advanced world, so much more change happened in the most recent 30 years than in the prior 30.

So advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?

Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000; in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century's worth of progress happened between 2000 and 2014, and that another 20th century's worth of progress will happen by 2021, in only seven years. A couple of decades later, he believes a 20th century's worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.
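
The arithmetic behind that thousand-fold claim can be checked with a toy model. This sketch is my own back-of-envelope illustration (the "rate doubles every decade" assumption is chosen to make the numbers work out, not taken from Kurzweil's published calculations):

```python
def century_progress(start_rate):
    # Accumulate progress decade by decade, with the rate of
    # progress doubling each decade (Law of Accelerating Returns).
    total, rate = 0.0, start_rate
    for _ in range(10):
        total += rate  # progress made during this decade
        rate *= 2      # the next decade moves twice as fast
    return total

c20 = century_progress(1.0)       # the 20th century
c21 = century_progress(2 ** 10)   # the 21st starts 10 doublings later
print(c21 / c20)  # prints 1024.0: roughly Kurzweil's 1,000x
```

Under this assumption the ratio is exactly 2 to the 10th power, about a thousand, which is the shape of Kurzweil's claim: the multiplier comes from compounding, not from any single dramatic decade.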

If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015, i.e., the next DPU might only take a couple of decades, and the world in 2050 might be so vastly different than today's world that we would barely recognize it.

This isn't science fiction. It's what many scientists smarter and more knowledgeable than you or I firmly believe, and if you look at history, it's what we should logically predict.

So then why, when you hear me say something like "the world 35 years from now might be totally unrecognizable," are you thinking, "Cool... but nahhhhhhh"? Three reasons we're skeptical of outlandish forecasts of the future:

1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th-century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It's most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They'd be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they're moving now.

2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way a little segment of a huge circle looks almost like a straight line up close. Second, exponential growth isn't totally smooth and uniform. Kurzweil explains that progress happens in S-curves:

An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases: slow growth (the early phase of exponential growth), rapid growth (the late, explosive phase of exponential growth), and a leveling off as the particular paradigm matures.

If you look only at very recent history, the part of the S-curve you're on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smartphones. That was Phase 2: the growth-spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that's missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.
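
The "tiny slice looks linear" distortion is easy to see numerically. The sketch below is my own illustration; the logistic function is the standard mathematical model of an S-curve, not anything specific to Kurzweil. It samples a narrow slice of the curve and shows that its successive steps are nearly identical, which is exactly why a short stretch of history reads as steady, linear progress:

```python
import math

def s_curve(t):
    # Logistic function: the classic S-curve of a paradigm's progress,
    # with a slow start, a growth spurt, and a plateau.
    return 1 / (1 + math.exp(-t))

# Sample a tiny slice of the curve and look at the step-to-step change.
slice_vals = [s_curve(0.01 * i) for i in range(5)]
diffs = [b - a for a, b in zip(slice_vals, slice_vals[1:])]
print(diffs)  # the steps are nearly equal: the slice looks linear
```

Widen the sampling window to span the whole curve and the steps stop being equal: first they grow (the spurt), then they shrink (the plateau).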

3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as "the way things happen." We're also limited by our imagination, which takes our experience and uses it to conjure future predictions, but often, what we know simply doesn't give us the tools to think accurately about the future. When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, "That's stupid; if there's one thing I know from history, it's that everybody dies." And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.

So while "nahhhhh" might feel right as you read this post, it's probably actually wrong. The fact is, if we're being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they'll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human, kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what's going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that's coming next.

If you're like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you've been hearing it mentioned by serious people, and you don't really quite get it.

There are three reasons a lot of people are confused about the term AI:

1) We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.

2) AI is a broad topic. It ranges from your phone's calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.

3) We use AI all the time in our daily lives, but we often don't realize it's AI. John McCarthy, who coined the term "Artificial Intelligence" in 1956, complained that "as soon as it works, no one calls it AI anymore." Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to "insisting that the Internet died in the dot-com bust of the early 2000s."

So let's clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not, but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body, if it even has a body. For example, the software and data behind Siri is AI, the woman's voice we hear is a personification of that AI, and there's no robot involved at all.

Secondly, you've probably heard the term "singularity" or "technological singularity." This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It's been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don't apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology's intelligence exceeds our own, a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly infinite pace, after which we'll be living in a whole new world. I found that many of today's AI thinkers have stopped using the term, and it's confusing anyway, so I won't use it much here (even though we'll be focusing on that idea throughout).

Finally, while there are many different types or forms of AI, since AI is a broad concept, the critical categories we need to think about are based on an AI's caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There's AI that can beat the world chess champion in chess, but that's the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it'll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board, a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we have yet to do it. Professor Linda Gottfredson describes intelligence as "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience." AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Artificial Superintelligence ranges from a computer that's just a little smarter than a human to one that's trillions of times smarter, across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words "immortality" and "extinction" will both appear in these posts multiple times.

As of now, humans have conquered the lowest caliber of AI, ANI, in many ways, and it's everywhere. The AI Revolution is the road from ANI, through AGI, to ASI, a road we may or may not survive but that, either way, will change everything.

Let's take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples:

ANI systems as they are now aren't especially scary. At worst, a glitchy or badly programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash, when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).

But while ANI doesn't have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively harmless ANI as a precursor of the world-altering hurricane that's on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world's ANI systems "are like the amino acids in the early Earth's primordial ooze," the inanimate stuff of life that, one unexpected day, woke up.

Why Its So Hard

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down: all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.

What's interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you'd think they are. Build a computer that can multiply two ten-digit numbers in a split second? Incredibly easy. Build one that can look at a dog and answer whether it's a dog or a cat? Spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old's picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things like calculus, financial market strategy, and language translation are mind-numbingly easy for a computer, while easy things like vision, motion, movement, and perception are insanely hard for it. Or, as computer scientist Donald Knuth puts it, "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking.'"

What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it's not that software is dumb for not being able to figure out the slanty word recognition test when you sign up for a new account on a site; it's that your brain is super impressive for being able to.

On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven't had any time to evolve a proficiency at them, so a computer doesn't need to work too hard to beat us. Think about it: which would you rather do, build a program that could multiply big numbers or one that could understand the essence of a B well enough that you could show it a B in any one of thousands of unpredictable fonts or handwriting and it could instantly know it was a B?

One fun example: when you look at this, you and a computer both can figure out that it's a rectangle with two distinct shades, alternating:

Tied so far. But if you pick up the black and reveal the whole image…

you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees, a variety of two-dimensional shapes in several different shades, which is actually what's there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray. And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is: a photo of an entirely-black, 3-D rock:

And everything we just mentioned is still only taking in stagnant information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.

Daunting.

So how do we get there?

First Key to Creating AGI: Increasing Computational Power

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it'll need to equal the brain's raw computing capacity.

One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.

Ray Kurzweil came up with a shortcut: take someone's professional estimate for the cps of one structure and that structure's weight compared to that of the whole brain, then multiply proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark: around 10^16, or 10 quadrillion cps.
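Kurzweil's shortcut is just proportional scaling, so it's easy to sketch. Here's a minimal version in Python; the specific region numbers below are made up for illustration, and only the ~10^16 ballpark comes from the article:

```python
# A sketch of Kurzweil's proportional-scaling shortcut: take a cps
# estimate for one brain structure, divide by that structure's share
# of total brain mass, and extrapolate to the whole brain.
# (Illustrative numbers; only the ~10^16 ballpark is from the article.)

def whole_brain_cps(structure_cps, structure_mass_g, brain_mass_g=1400):
    """Scale one structure's calculations per second up to the whole brain."""
    return structure_cps * (brain_mass_g / structure_mass_g)

# Hypothetical region: 1.96 g of a ~1,400 g brain, estimated at ~1.4e13 cps.
estimate = whole_brain_cps(structure_cps=1.4e13, structure_mass_g=1.96)
print(f"{estimate:.1e} cps")  # lands in the 10^16 ballpark
```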

Currently, the world's fastest supercomputer, China's Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.

Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human level, 10 quadrillion cps, that'll mean AGI could become a very real part of life.

Moore's Law is a historically-reliable rule that the world's maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil's cps/$1,000 metric, we're currently at about 10 trillion cps/$1,000, right on pace with this graph's predicted trajectory:

So the world's $1,000 computers are now beating the mouse brain, and they're at about a thousandth of human level. This doesn't sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.
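Those milestones, a trillionth in 1985, a billionth in 1995, a millionth in 2005, and a thousandth in 2015, imply roughly a 1,000x gain per decade. A quick back-of-the-envelope sketch, assuming that rate simply continues (which is the article's premise, not a certainty):

```python
# Projecting the cps-per-$1,000 trajectory described above, assuming
# a steady 1,000x gain per decade from 10 trillion cps/$1,000 in 2015.

HUMAN_CPS = 1e16  # Kurzweil's ~10 quadrillion cps estimate

def cps_per_1000_dollars(year, base_year=2015, base_cps=1e13):
    return base_cps * 1000 ** ((year - base_year) / 10)

for year in (1985, 1995, 2005, 2015, 2025):
    frac = cps_per_1000_dollars(year) / HUMAN_CPS
    print(year, f"{frac:.0e} of human level")
# 1985 comes out a trillionth, 1995 a billionth, 2005 a millionth,
# 2015 a thousandth, and 2025 reaches human level, matching the text.
```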

So on the hardware side, the raw power needed for AGI is technically available now, in China, and we'll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn't make a computer generally intelligent; the next question is, how do we bring human-level intelligence to all that power?

Second Key to Creating AGI: Making it Smart

This is the icky part. The truth is, no one really knows how to make it smart; we're still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there, and at some point, one of them will work. Here are the three most common strategies I came across:

1) Plagiarize the brain.

This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can't do nearly as well as that kid, and then they finally decide "k fuck it I'm just gonna copy that kid's answers." It makes sense; we're stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing; optimistic estimates say we can do this by 2030. Once we do that, we'll know all the secrets of how the brain runs so powerfully and efficiently and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor "neurons," connected to each other with inputs and outputs, and it knows nothing, like an infant brain. The way it "learns" is it tries to do a task, say handwriting recognition, and at first, its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it's told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it's told it was wrong, those pathways' connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we're discovering ingenious new ways to take advantage of neural circuitry.
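To make the strengthen/weaken idea concrete, here's a tiny self-contained sketch: a single artificial neuron learning the AND function by trial and feedback. Real networks for handwriting recognition are vastly larger and trained with fancier rules, but the reinforce-what-worked loop is the same spirit:

```python
import random

# A minimal sketch of trial-and-feedback learning: one artificial neuron
# learns the AND function. Connections (weights) that contributed to a
# wrong answer are pushed the other way; over many rounds the neuron
# becomes optimized for the task.

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]  # random initial "wiring"
bias = 0.0
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def guess(x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

for _ in range(20):  # many rounds of trial and feedback
    for x, target in examples:
        error = target - guess(x)  # feedback: +1, 0, or -1
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

print([guess(x) for x, _ in examples])  # after training: [0, 0, 0, 1]
```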

More extreme plagiarism involves a strategy called "whole brain emulation," where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We'd then have a computer officially capable of everything the brain is capable of; it would just need to learn and gather information. If engineers get really good, they'd be able to emulate a real brain with such exact accuracy that the brain's full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he'd probably be really excited about.

How far are we from achieving whole brain emulation? Well so far, we've just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress; now that we've conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.

2) Try to make evolution do what it did before but for us this time.

So if we decide the smart kid's test is too hard to copy, we can try to copy the way he studies for the tests instead.

Here's something we know. Building a computer as powerful as the brain is possible; our own brain's evolution is proof. And if the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird's wing-flapping motions; often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.

So how can we simulate evolution to build AGI? The method, called "genetic algorithms," would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures "perform" by living life and are "evaluated" by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.
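Here's a toy version of that perform-evaluate-breed loop. The "task" is trivially simple (score a bit string by how many 1s it has), the crossover is the half-from-each-parent merge the paragraph describes, and the bottom half is eliminated each round; real genetic algorithms vary all of these details:

```python
import random

# A toy genetic algorithm. Each "computer" is a bit string; its fitness
# is how many target bits it gets right (a stand-in for doing a task
# well). The top half survive and breed, each child taking half its
# "programming" from each parent; the bottom half are eliminated.

random.seed(1)
GENOME_LEN, POP = 20, 30

def fitness(genome):
    return sum(genome)  # evaluation: count of correct (1) bits

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP)]

for generation in range(60):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]          # the rest are eliminated
    children = []
    while len(survivors) + len(children) < POP:
        a, b = random.sample(survivors, 2)
        cut = GENOME_LEN // 2                   # half from each parent
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:               # occasional mutation
            i = random.randrange(GENOME_LEN)
            child[i] = 1 - child[i]
        children.append(child)
    population = survivors + children

best = max(population, key=fitness)
print(fitness(best), "/", GENOME_LEN)  # fitness climbs toward 20 over the generations
```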

The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.

But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly; it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Secondly, evolution doesn't aim for anything, including intelligence; sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence, like revamping the ways cells produce energy, when we can remove those extra burdens and use things like electricity. It's no doubt we'd be much, much faster than evolution, but it's still not clear whether we'll be able to improve upon evolution enough to make this a viable strategy.

3) Make this whole thing the computers problem, not ours.

This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.

The idea is that we'd build a computer whose two major skills would be doing research on AI and coding changes into itself, allowing it to not only learn but to improve its own architecture. We'd teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job: figuring out how to make themselves smarter. More on this later.

All of This Could Happen Soon

Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:

1) Exponential growth is intense, and what seems like a snail's pace of advancement can quickly race upwards; this GIF illustrates the concept nicely:

2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.

At some point, we'll have achieved AGI: computers with human-level general intelligence. Just a bunch of people and computers living together in equality.

Oh actually not at all.

The thing is, AGI with an identical level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:

AI, which will likely get to AGI by being programmed to self-improve, wouldn't see "human-level intelligence" as some important milestone; it's only a relevant marker from our point of view, and it wouldn't have any reason to "stop" at our level. And given the advantages over us that even human intelligence-equivalent AGI would have, it's pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.

This may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we're aware of about any animal's intelligence is that it's far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:

So as AI zooms upward in intelligence toward us, we'll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity (Nick Bostrom uses the term "the village idiot"), we'll be like, "Oh wow, it's like a dumb human. Cute!" The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range, so just after hitting village idiot-level and being declared to be AGI, it'll suddenly be smarter than Einstein and we won't know what hit us:

And what happens after that?

I hope you enjoyed normal time, because this is when this topic gets unnormal and scary, and it's gonna stay that way from here forward. I want to pause here to remind you that every single thing I'm going to say is real: real science and real forecasts of the future from a large array of the most respected thinkers and scientists. Just keep remembering that.

Anyway, as I said above, most of our current models for getting to AGI involve the AI getting there by self-improvement. And once it gets to AGI, even systems that formed and grew through methods that didn't involve self-improvement would now be smart enough to begin self-improving if they wanted to.

And here's where we get to an intense concept: recursive self-improvement. It works like this:

An AI system at a certain level, let's say human village idiot, is programmed with the goal of improving its own intelligence. Once it does, it's smarter, maybe at this point it's at Einstein's level, so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion, and it's the ultimate example of The Law of Accelerating Returns.
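The math behind the intelligence-explosion idea is just compounding: if each leap's size scales with current intelligence, growth is geometric. A toy model, with all numbers arbitrary and 1.0 standing in for human level:

```python
# A toy model of recursive self-improvement: each round's gain is
# proportional to current intelligence, so the leaps grow as the
# system grows. Units and rates are arbitrary illustrations.

intelligence = 0.6          # starting point: "village idiot" tier
improvement_rate = 0.5      # smarter systems make bigger leaps

steps = 0
while intelligence < 1000:  # an arbitrary "vastly superhuman" mark
    intelligence += improvement_rate * intelligence  # leap scales with intellect
    steps += 1
    if steps in (1, 2, 5, 10, 15):
        print(f"step {steps}: {intelligence:.1f}x human")

print("superintelligent after", steps, "steps")  # 19 steps in this toy run
```

The point isn't the particular numbers; it's that under this assumption, the gap between "village idiot" and "vastly superhuman" closes in a handful of steps, not a long slog.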

There is some debate about how soon AI will reach human-level general intelligence. The median year on a survey of hundreds of scientists about when they believed we'd be more likely than not to have reached AGI was 2040; that's only 25 years from now, which doesn't sound that huge until you consider that many of the thinkers in this field think it's likely that the progression from AGI to ASI happens very quickly. Like, this could happen:

It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ; we don't have a word for an IQ of 12,952.

What we do know is that humans' utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim, and this might happen in the next few decades.

Originally posted here:

The AI Revolution: The Road to Superintelligence | Inverse

No need to fear Artificial Intelligence – Livemint – Livemint

Love it or fear it. Call it useful or dismiss it as plain hype. Whatever your stand is, Artificial Intelligence, or AI, will remain the overarching theme of the digital story that individuals and companies will be discussing for quite some time to come.

It is important, however, to understand the nature of AI: what it can and cannot do.

Unfortunately, individuals and companies often fail to understand this.

For instance, most of the AI we see around caters to narrow, specific areas, and hence is categorised as weak AI. Examples include most of the AI chatbots, AI personal assistants and smart home assistants that we see, including Apple's Siri, Microsoft's Cortana, Google's Allo, Amazon's Alexa or Echo and Google's Home.

Driverless cars and trucks, however impressive they sound, are still higher manifestations of weak AI.

In other words, weak AI lacks human consciousness. Moreover, though we talk about the use of artificial neural networks (ANNs) in deep learning, a subset of machine learning, ANNs do not behave like the human brain. They are loosely modelled on the human brain.

As an example, just because a plane flies in the air, you do not call it a bird. Yet, we are comfortable with the idea that a plane is not a bird and we fly across the world in planes.

Why, then, do we fear AI? Part of the reason is that most of us confuse weak AI with strong AI. Machines with strong AI will have a brain as powerful as the human brain.

Such machines will be able to teach themselves, learn from others, perceive, emote; in other words, do everything that human beings do and more. It's the "more" aspect that we fear most.

Strong AI, also called true intelligence or artificial general intelligence (AGI), is still a long way off.

Some use the term Artificial Superintelligence (ASI) to describe a system with the capabilities of an AGI, without the physical limitations of humans, that would learn and improve far beyond human level.

Many people, including experts, fear that if strong AI becomes a reality, AI may become more intelligent than humans. When this will happen is anyones guess.

Till then, we have our work cut out to figure out how we can make the best use of AI in our lives and companies.

The Accenture Technology Vision 2017 report, for instance, points out that AI is the new user interface (UI), since AI is making every interface both simple and smart. In this edition of Mint's Technology Quarterly, we also have experts from Boston Consulting Group talking about value creation in digital, and other experts discussing the path from self-healing grids to self-healing factories.

This edition also features articles on the transformative power of genome editing, and how smart cars are evolving.

Your feedback, as usual, will help us make the forthcoming editions better.

First Published: Wed, Jun 28 2017, 11:18 PM IST


Effective Altruism Says You Can Save the Future by Making Money – Motherboard

There is no contradiction in claiming that, as Steven Pinker argues, the world is getting better in many important respects, and also that the world is a complete mess. Sure, your chances of being murdered may be lower than at any time before in human history, but one could riposte that, given the size of the human population today, there has never been more total disutility, or suffering/injustice/evil, engulfing our planet.

Just consider that about 3.1 million children died of hunger in 2013, averaging nearly 8,500 each day. Along these lines, about 66 million children attend class hungry in the developing world; roughly 161 million kids under five are nutritionally stunted; 99 million are underweight; and 51 million suffer from wasting. Similarly, an estimated 1.4 billion people live on less than $1.25 per day while roughly 2.5 billion earn less than $2 per day, and in 2015 about 212 million people were diagnosed with malaria, with some 429,000 dying.

The idea is to optimize the total amount of good that one can do in the world

This is a low-resolution snapshot of the global predicament of humanity today, one that doesn't even count the frustration, pain, and misery caused by sexism, racism, factory farming, terrorism, climate change, and war. So the question is: how can we make the world more livable for sentient life? What actions can we take to alleviate the truly massive amounts of suffering that plague our pale blue dot? And to what extent should we care about the many future generations that could come into existence?

I recently attended a conference at Harvard University about a fledgling movement called effective altruism (EA), popularized by philosophers like William MacAskill and Facebook cofounder Dustin Moskovitz. Whereas many philanthropically inclined individuals make decisions to donate based on which causes tug at their heartstrings, this movement takes a highly data-driven approach to charitable giving. The idea is to optimize the total amount of good that one can do in the world, even if it's counterintuitive.

For example, one might think that donating money to buy books for schools in low-income communities across Africa is a great way to improve the education of children victimized by poverty, but it turns out that spending this money on deworming programs could be a better way of improving outcomes. Studies show that deworming can reduce the rate of absenteeism in schools by 25 percent, a problem that buying more books fails to address, and that "the children who had been de-wormed earned 20% more than those who hadn't."

Similarly, many people in the developed world feel compelled to donate money to disaster relief following natural catastrophes like earthquakes and tsunamis. While this is hardly immoral, data reveals the money donated could have more tangible impact if spent on insecticide-treated mosquito nets for people in malaria-prone regions of Africa.

Another surprising, and controversial, suggestion within effective altruism is that boycotting sweatshops in the developing world often does more harm than good. The idea is that, however squalid the working conditions of sweatshops are, they usually provide the very best jobs around. If a sweatshop worker were forced to take a different job, and there's no guarantee that another job would even be available, it would almost certainly involve much more laborious work for lower wages. As the New York Times quotes a woman in Cambodia who scavenges garbage dumps for a living: "I'd love to get a job in a factory… At least that work is in the shade. Here is where it's hot."

There are, of course, notable criticisms of this approach. Consider the story of Matt Wage. After earning an undergraduate degree at Princeton, he was accepted by the University of Oxford to earn a doctorate in philosophy. But instead of attending this program, one of the very best in the world, he opted to get a job on Wall Street making a six-figure salary. Why? Because, he reasoned, if he were to save 100 children from a burning building, it would be the best day of his life. As it happens, he could save the same number of children over the course of his life as a professional philosopher who donates a large portion of his salary to charity. But, crunching the numbers, if he were to get a high-paying job at, say, an arbitrage trading firm and donate half of his earnings to, say, the Against Malaria Foundation, he could potentially save hundreds of children from dying "within the first year or two of his working life and every year thereafter."

Some people think superintelligence is too far away to be of concern

The criticism leveled at this idea is that Wall Street may itself be a potent source of badness in the world, and thus participating in the machine as a cog might actually contribute net harm. But effective altruists would respond that what matters isn't just what one does, but what would have happened if one hadn't acted in a particular way. If Wage hadn't gotten the job on Wall Street, someone else would have, someone who wasn't as concerned about the plight of African children, whereas Wage earns to give, donating money that saves thousands of disadvantaged people.

Another objection is that many effective altruists are too concerned about the potential risks associated with machine superintelligence. Some people think superintelligence is too far away to be of concern, or unlikely to pose any serious threats to human survival. They maintain that spending money to research what's called the "AI control problem" is misguided, if not a complete waste of resources. But the fact is that there are good arguments for thinking that, as Stephen Hawking puts it, if superintelligence isn't the worst thing to happen to humanity, it will likely be the very best. And effective altruists, and I, would argue that designing a "human friendly" superintelligence is a highly worthwhile task, even if the first superintelligent machine won't make its debut on Earth until the end of this century. In sum, the expected value of solving the AI control problem could be astronomically high.

Perhaps the most interesting idea within the effective altruism movement is that we should not just worry about present day humans but future humans as well. According to one study published in the journal Sustainability, “most individuals’ abilities to imagine the future goes ‘dark’ at the ten-year horizon.” This likely stems from our cognitive evolution in an ancient environment (like the African savanna) in which long-term thinking was not only unnecessary for survival but might actually have been disadvantageous.

Yet many philosophers believe that, from a moral perspective, this "bias for the short-term" is completely unjustified. They argue that when one is born should have no bearing on one's intrinsic value; that is to say, "time discounting," or valuing the future less than the present, should not apply to human lives.

First, there is the symmetry issue: if future lives are worth less than present lives, then are past lives worth less as well? Or, from the perspective of past people, are our lives worth less than theirs? Second, consider that using a time discounting annual rate of 10 percent, a single person today would be equal in value to an unimaginable 4.96 × 10^20 people 500 years hence. Does that strike one as morally defensible? Is it right that one person dying today constitutes an equivalent moral tragedy to a global holocaust that kills 4.96 × 10^20 people in five centuries?
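That figure is just compound discounting run for 500 years, which is easy to verify:

```python
# Verifying the discounting arithmetic: at a 10% annual rate, one
# person today "equals" (1.1)^500 people 500 years from now.

discount_rate = 0.10
years = 500

equivalent_future_people = (1 + discount_rate) ** years
print(f"{equivalent_future_people:.2e}")  # prints 4.97e+20, in line with ~4.96 x 10^20
```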

And finally, our best estimates of how many people could come to exist in the future indicate that this number could be exceptionally large. For example, the Oxford philosopher Nick Bostrom estimates that some 10^16 people with normal lifespans could exist on Earth before the sun sterilizes it in a billion years or so. Yet another educated guess is that "a hundred thousand billion billion billion" (that is, 100,000,000,000,000,000,000,000,000,000,000) people could someday populate the visible universe. To date, there have been approximately 60 billion humans on Earth, or 6 × 10^9, meaning that the human (or posthuman, if our progeny evolves into technologically enhanced cyborgs) story may have only just begun.


Caring about the far future leads some effective altruists to focus specifically on what Bostrom calls "existential risks," or events that would either tip our species into the eternal grave of extinction or irreversibly catapult us back to the Paleolithic.

Since the end of World War II, there has been an unsettling increase in both the total number of existential risks, such as nuclear conflict, climate change, global biodiversity loss, engineered pandemics, grey goo, geoengineering, physics experiments, and machine superintelligence, and the overall probability of civilizational collapse, or worse, occurring. For example, the cosmologist Lord Martin Rees puts the likelihood of civilization imploding at 50 percent this century, and Bostrom argues that an existential catastrophe has an equal to or greater than 25 percent chance of happening. It follows that, as Stephen Hawking recently put it, humanity has never lived in more dangerous times.

This is why I believe that the movement’s emphasis on the deep future is a very good thing. Our world is one in which contemplating what lies ahead often extends no further than quarterly reports and the next political election. Yet, as suggested above, the future could contain astronomical amounts of value if only we manage to slalom through the obstacle course of natural and anthropogenic hazards before us. While contemporary issues like global poverty, disease, and animal welfare weigh heavily on the minds of many effective altruists, it is encouraging to see a growing number of people taking seriously the issue of humanity’s long-term future.

This article draws from Phil Torres's forthcoming book Morality, Foresight, and Human Flourishing: An Introduction to Existential Risk Studies.


The bots are coming – The New Indian Express

Facebook's artificial intelligence research lab was training chatbots to negotiate. The bots soon learned the usefulness of deceit.

Deceit & dealmaking: The bots feigned interest in a valueless item so they could later compromise by conceding it. Deceit is a complex skill that requires hypothesising the other's beliefs, and is learnt late in child development. "Our agents learnt to deceive without explicit human design, simply by trying to achieve their goals," says the research paper.

Has Terminator's Skynet arrived? The bots also developed their own language, and the researchers had to take measures to prevent them from using non-human language. But the bots weren't better at negotiating than humans, so the Terminator's Skynet is not here yet. Still, should we be worried?

The fable of the sparrows: A colony of sparrows found it tough to manage on their own. After another day of long, hard work, they decided to steal an owl egg. They reckoned an owl would help them build their nests, look after their young, and keep an eye out for predators.

But one sparrow disagreed: "This will be our undoing. Should we not first give some thought to owl-domestication?" But the rest went on with their plan, since they felt it would be difficult to find an owl egg. "After succeeding in raising an owl, we can think about this," they said. Nick Bostrom narrates this in his book Superintelligence to illustrate the dangers of AI.


U.S. Navy reaches out to gamers to troubleshoot post …

The Maritime Singularity simulation is yet another example of real-world value stemming from playing video games.

The next time someone tells you that playing video games doesn't have real-world applications, you might be able to say that your gaming skills assisted the U.S. Navy. As originally reported by Engadget, the U.S. Navy has put out a call for participants for its Maritime Singularity MMOWGLI (massively multiplayer online war game leveraging the internet).

The technological singularity hypothesis holds that if and when artificial superintelligence is invented, it will set off a swift chain reaction that will change human society forever, and not necessarily for the better. As it develops strategies for dealing with the possibility of a post-singularity world, the U.S. Navy thinks that gamers are ideal for problem-solving the future.

Dr. Eric Gulovsen, director of disruptive technology at the Office of Naval Research, claimed that technology has already reached the point where the singularity is in the foreseeable future. "What we can't see yet is what lies over that horizon. That's where we need help from players. This is a complex, open-ended problem, so we're looking for people from all walks of life, Navy and non-Navy, technologist and non-technologist, to help us design our Navy for a post-singularity world," he said.

If Maritime Singularity is set up like the Navy's previous MMOWGLIs, such as the recent effort to foster a more prosperous and secure South China Sea, participants will come up with opportunities and challenges pertaining to the singularity and play out various scenarios.

If the Navy's interest in the singularity doesn't sound enough like dystopian science fiction already, the game's blurb certainly sounds like it was ripped from the back cover of a William Gibson novel:

A tidal wave of change is rapidly approaching today's Navy. We can ride this wave and harness its energy, or get crushed by it. There is no middle ground. What is the nature of this change? The SINGULARITY. We can see the SINGULARITY on the horizon. What we can't see, YET, is what lies OVER that horizon. That's where you come in.

Maritime Singularity is open for signups now, and will run for a week beginning March 27. For more information, check out the overview video above.


Facebook Chatbots Spontaneously Invent Their Own Non-Human … – Interesting Engineering

Facebook chatbot agents have spontaneously created their own non-human language. Researchers were developing negotiating chatbots when they found out the bots had developed a unique way to communicate with each other. Facebook chatbots have accidentally given us a glimpse into the future of language.


Researchers from the Facebook Artificial Intelligence Research lab (FAIR) have released a report describing how they trained their chatbots to negotiate using machine learning. The chatbots were actually very successful negotiators, but the researchers soon realized they needed to change their models, because when the bots were allowed to communicate among themselves they started to develop their own unique negotiating language.


This unique and spontaneous development of non-human language came as an incredible surprise to the researchers, who had to rework their training methods to allow for less unstructured, unsupervised bot-to-bot time.

The chatbots surprised their developers in other ways too, proving to excel at the art of negotiation. They went as far as to use advanced negotiation techniques, such as feigning interest in something valueless in order to concede it later in the negotiations, ensuring the best outcome. Co-author of the report Dhruv Batra said, "No human [programmed] that strategy, this is just something they discovered by themselves that leads to high rewards."

But don't panic: the accidental discovery of some basic communication between chatbots isn't about to trigger the singularity. The singularity, if you are not up on the doomsday techno-jargon, is the term used for the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.

But these chatty bots definitely provide some solid opportunities for thinking about the way we understand language, particularly the general view that language is our domain and exclusive to humans.

The research also highlights the fact that we have a long way to go in understanding machine learning. Right now there is a lot of guesswork, which often involves examining how the machine thinks by evaluating the output after feeding a neural net a massive meal of data.

The idea that the machine can create its own language highlights just how many holes there are in our knowledge around machine learning, even for the experts designing the systems.

The findings got the team at Facebook fired up. They write, "There remains much potential for future work, particularly in exploring other reasoning strategies, and in improving the diversity of utterances without diverging from human language."

Chatbots are widespread across the customer service industry, using common keywords to answer FAQ-type inquiries. Often these bots have a short run time before the requests get too complicated. Facebook has been investing heavily in chatbot technology, and other large corporations are set to follow. While there isn't a strong indication of how Facebook will use these negotiating bots, other projects are getting big results.

One bot, called DoNotPay, has helped over 250,000 people in New York and London overturn more than 160,000 parking tickets by working out whether an appeal is possible through a series of simple questions, and then guiding the user through the appeal process.
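As a loose, hypothetical sketch (not DoNotPay's actual code, and with made-up question text and appeal grounds), a triage bot of this kind can be modeled as a fixed series of yes/no questions whose answers determine whether a ticket is worth appealing:

```python
# Hypothetical sketch of a DoNotPay-style triage bot: a fixed series of
# simple yes/no questions determines whether a parking ticket has any
# grounds for appeal. Questions and grounds here are illustrative only.

APPEAL_QUESTIONS = [
    ("Were the parking signs unclear or missing?", "unclear signage"),
    ("Were you actively loading or unloading at the time?", "active loading"),
    ("Does the ticket contain factual errors (plate, time, location)?", "clerical error"),
]

def triage(answers):
    """Given one True/False answer per question, return the list of
    appeal grounds the user's answers support (empty list = no appeal)."""
    grounds = []
    for (question, ground), answer in zip(APPEAL_QUESTIONS, answers):
        if answer:
            grounds.append(ground)
    return grounds

# A user who reports unclear signage and a clerical error:
print(triage([True, False, True]))  # ['unclear signage', 'clerical error']
```

The design choice is the point of the anecdote: because the question set is fixed and the logic is a simple lookup, such a bot scales to hundreds of thousands of users without any of the open-ended language understanding that trips up more ambitious chatbots.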

Sources: FAIR, The Verge, The Guardian, The Atlantic, Futurism

Featured Image Source: Pixabay
