Liquid metal tendons could give robots the ability to heal themselves – Digital Trends

Since fans first clapped eyes on the T-1000, the shape-shifting antagonist from 1991's Terminator 2: Judgment Day, many people have been eagerly anticipating the day when liquid metal robots become a reality. And by "eagerly anticipating," we mean "had the creeping sense that such a thing is a Skynet eventuality, so we might as well make the best of it."

Jump forward to the closing days of 2019 and, while robots haven't quite advanced to the level of the 2029 future sequences seen in T2, scientists are getting closer. In Japan, roboticists from the University of Tokyo's JSK Lab have created a prototype robot leg with a metal tendon "fuse" that's able to repair fractures. How does it do this? Simple: by autonomously melting itself and then reforming as a single piece. The work was presented at the recent 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

The self-healing module consists of two halves connected via magnets and springs. Each half of the module is filled with an alloy with a low melting point of just 50 degrees Celsius (122 degrees Fahrenheit). When the fuse breaks, cartridges heat the module, melting the alloy and allowing the two halves to fuse together again. While the re-fused joint is not as strong as it was before the break took place, the researchers have observed that gently vibrating the joint during melting and reforming yields a joint with up to 90% of its original strength. This could be further optimized in the future.

It's still very early in the development process. But the ultimate ambition is to develop ways for robots to heal themselves, rather than having to rely on external tools to do so. Since roboticists regularly borrow from nature for biomimetic solutions to problems, the idea of robots that can heal like biological creatures makes a lot of sense.

Just like breakthroughs in endeavors like artificial muscles and continued research toward creating superintelligence, it takes us one step closer to the world envisioned in Terminator. Where's John "savior of all humanity" Connor when you need him?


NIU expert: 4 leaps in technology to expect in the 2020s | NIU – NIU Newsroom

DeKalb, Ill. - Autopilot automobiles, wearable devices, services such as Uber and Lyft. Technological advances in the 2010s made headlines, and some made their way into our everyday lives.

So what should we expect from the roaring 2020s?

We put that question to NIU Professor David Gunkel, a communication technology expert and author of Robot Rights and How to Survive a Robot Invasion. Gunkel pointed to four areas where technology is poised to make an impact on the coming decade.

Robots - "By the mid-2020s, robots of one kind or another will be everywhere and doing virtually everything," Gunkel says. "This robot invasion will not transpire as we have imagined it in our science fiction, with a marauding army of evil-minded androids either descending from the heavens or rising up in revolt against their human masters. It will look less like Blade Runner, Terminator or Westworld and more like the Fall of Rome, as machines of various configurations and capabilities come to take up influential positions in our world through a slow but steady incursion."

Artificial Intelligence - "Innovations in artificial intelligence, especially with deep-learning algorithms, have made great strides in the previous decade. The 2020s will see AI in everything, from our handheld mobile devices to self-driving vehicles. These will be very capable but highly specialized AIs. We are creating a world full of idiot savants that will control every aspect of our lives. This might actually be more interesting, and possibly more terrifying, than superintelligence."

Things that Talk - "In 2018, Amazon put Alexa in the toilet, when it teamed up with Kohler at the Consumer Electronics Show. Manufacturers of these digital voice assistants, which also include the likes of Siri, Google Assistant and Bixby, are currently involved in an arms race to dominate the voice-activated, screenless Internet of the future. By mid-decade, everything will be talking to us, which will dramatically change how we think about social interaction. But they will also be listening to what we say and sharing all this personal data with their parent corporations."

The Empires Strike Back - "This past year has seen unprecedented investment in AI ethics and governance. The 2020s will see amplification of this effort as stakeholders in Europe, China and North America compete to dominate the AI policy and governance market. Europe might be the odds-on favorite, since it was first to exit the starting block, but China and the U.S. are not far behind. The technology of AI might be global in scope and controlled by borderless multinationals. But tech policy and governance is still a matter of nation-states, and the 2020s will see increasing involvement as the empires strike back."

Media Contact: Tom Parisi

About NIU

Northern Illinois University is a student-centered, nationally recognized public research university, with expertise that benefits its region and spans the globe in a wide variety of fields, including the sciences, humanities, arts, business, engineering, education, health and law. Through its main campus in DeKalb, Illinois, and education centers for students and working professionals in Chicago, Hoffman Estates, Naperville, Oregon and Rockford, NIU offers more than 100 areas of study while serving a diverse and international student body.


Playing Tetris Shows That True AI Is Impossible – Walter Bradley Center for Natural and Artificial Intelligence

Hi there! I recently put together an electroencephalogram (EEG), or in normal words, a brain wave reader, so you can see what goes on inside my brain!

I received a kit from OpenBCI, a successful Kickstarter project to make inexpensive brain wave readers available to the masses. Here's what it looks like:

Yes, it looks like something Calvin and Hobbes would invent.

Here is how it looks on my head:

A number of electrodes are touching my scalp and a wire is connected to my ear. The layout on my head looks like the following schematic:

The EEG is measuring the voltage between different points on my scalp and my earlobe. The positions on my scalp are receiving a current from my brain while my earlobe acts as the ground. The EEG is essentially a multimeter for my brain.

Brain waves are generated by ions building up inside the neurons. Once the neurons reach capacity, they release the ions in a cascade across the brain. This leads to the wave effect.

So can I see any connection between my brain waves and what I'm consciously experiencing in my mind?

To test that, inspired by the EEG hacker blog, I generated a graphic known as a spectrogram of my brain waves across a set of activities.

The spectrogram shows the range of brainwave frequencies in my brain at a given point in time. In the following plots, the horizontal axis is time, and the vertical axis is frequency. There are some artifacts in the plots, such as a middle band and a big pink blotch, so don't take all patterns as significant. The important thing to note is the overall texture of the plot.

The greens and reds are low-amplitude frequencies, and the blue and magenta are high-amplitude frequencies, meaning those brain waves are stronger. The spectrogram is generated from the readings of the #1 electrode in the schematic above.

I performed three different activities to see how they affect the spectrogram. Results and code are provided at https://github.com/yters/eeg_tetris.
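The full analysis lives in the linked repo; as a rough, self-contained illustration of what computing a spectrogram involves, here is a NumPy sketch on synthetic data. The sampling rate, window length, and the synthetic "EEG" signal are all assumptions for the example, not taken from the author's setup.

```python
import numpy as np

fs = 250  # sampling rate in Hz (an assumption; OpenBCI boards sample around this)

# Synthetic stand-in for one EEG channel: 60 seconds of noise plus a
# 10 Hz alpha-band component that appears only in the second half.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)
eeg = rng.standard_normal(t.size)
eeg[t.size // 2:] += 2 * np.sin(2 * np.pi * 10 * t[t.size // 2:])

# Short-time Fourier transform: slice the signal into 1-second windows
# and take the FFT of each. Rows are frequencies, columns are time, and
# the magnitude plays the role of the color in the post's plots.
win = fs
frames = eeg[: t.size - t.size % win].reshape(-1, win)
spec = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1)).T

freqs = np.fft.rfftfreq(win, d=1 / fs)  # one row per frequency bin

# The 10 Hz band should be much stronger in the second half of the recording.
alpha = spec[freqs == 10].ravel()
print(alpha[30:].mean() > alpha[:30].mean())  # True
```

Plotting `spec` with time on the horizontal axis and `freqs` on the vertical axis reproduces the kind of texture the post describes.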

First, I just absentmindedly tapped the Enter key on my keyboard. I did not focus on anything in particular, just pressed Enter whenever I felt like it. This is the EEG spectrogram that random tapping generated:

Second, I played a game of Tetris on very slow speed, using a GitHub repo.

Here's a video of the game speed:

This is the corresponding spectrogram:

Finally, I played Tetris much faster, and the spectrogram looked like this:

You can watch a video of the game speed here:

The big difference is that, as my activity became cognitively more difficult, the spectrogram became more blue and magenta, meaning that my brain waves became stronger.

What does this mean? It means that, at least at a high level, I can measure how cognitively difficult a mental task is.

Another interesting thing is the direction of causality. The intensity of my mental processing brought about an observable brain state. The causality did not go in the other direction; the magenta brain state did not increase my conscious process.

So my subjective mental experience brought about a change in my physical brain. In other words, my consciousness has a causal impact on my physical processing unit, the brain.

This type of observation causes a problem for those hoping to duplicate human intelligence in a computer program. This Tetris EEG experiment shows that conscious thought is essential to human intelligence. So, until we make conscious computers (which will most likely never happen), we will not have computers that display human intelligence.

Update: Someone online suggested it might just be my facial muscle tension. So I tested the idea by recording while I tensed my brow (where the electrode is placed). Results are at https://github.com/yters/eeg_tetris.

The result looked no different than the tapping EEG, so I consider the "just facial tension" hypothesis falsified.

If you enjoyed this item, here are some of Eric Holloway's other reflections on human consciousness and computer intelligence:

No materialist theory of consciousness is plausible: All such theories either deny the very thing they are trying to explain, result in absurd scenarios, or end up requiring an immaterial intervention

We need a better test for AI intelligence: Better than Turing or Lovelace. The difficulty is that intelligence, like randomness, is mathematically undefinable

and

Will artificial intelligence design artificial superintelligence? And then turn us all into super-geniuses, as some AI researchers hope? No, and here's why not


AI R&D is booming, but general intelligence is still out of reach – The Verge

Trying to get a handle on the progress of artificial intelligence is a daunting task, even for those enmeshed in the AI community. But the latest edition of the AI Index report, an annual rundown of machine learning data points now in its third year, does a good job of confirming what you probably already suspected: the AI world is booming across a range of metrics covering research, education, and technical achievements.

The AI Index covers a lot of ground, so much so that its creators, which include institutions like Harvard, Stanford, and OpenAI, have also released two new tools just to sift through the information it draws on. One tool is for searching AI research papers; the other is for investigating country-level data on research and investment.

Most of the 2019 report basically confirms the continuation of trends we've highlighted in previous years. But to save you from having to trudge through its 290 pages, here are some of the more interesting and pertinent points:

All this is impressive, but one big caveat applies: no matter how fast AI improves, it's never going to match the achievements accorded to it by pop culture and hyped headlines. This may seem pedantic or even obvious, but it's worth remembering that, while the world of artificial intelligence is booming, AI itself is still limited in some important ways.

The best demonstration of this comes from a timeline of human-level performance milestones featured in the AI Index report: a history of moments when AI has matched or surpassed human-level expertise.

The timeline starts in the 1990s, when programs first beat humans at checkers and chess, and accelerates with the recent machine learning boom, listing video games and board games where AI came, saw, and conquered (Go in 2016, Dota 2 in 2018, etc.). This is mixed with miscellaneous tasks like human-level classification of skin cancer images in 2017 and Chinese-to-English translation in 2018. (Many experts would take issue with that last achievement being included at all, and note that AI translation is still way behind humans.)

And while this list is impressive, it shouldn't lead you to believe that AI superintelligence is nigh.

For a start, the majority of these milestones come from defeating humans in video games and board games, domains that, because of their clear rules and easy simulation, are particularly amenable to AI training. Such training usually relies on AI agents sinking many lifetimes' worth of work into a single game, training for hundreds of years in a solar day: a fact that highlights how quickly humans learn compared to computers.

Similarly, each achievement was set in a single domain. With very few exceptions, AI systems trained at one task can't transfer what they've learned to another. A superhuman StarCraft II bot would lose to a five-year-old playing chess. And while an AI might be able to spot breast cancer tumors as accurately as an oncologist, it can't do the same for lung cancer (let alone write a prescription or deliver a diagnosis). In other words: AI systems are single-use tools, not flexible intelligences that are stand-ins for humans.

But (and yes, there's another "but") that doesn't mean AI isn't incredibly useful. As this report shows, despite the limitations of machine learning, the field continues to accelerate in terms of funding, interest, and technical achievements.

When thinking about AI limitations and promises, it's good to remember the words of machine learning pioneer Andrew Ng: "If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future." We're just beginning to find out what happens when those seconds are added up.


Doubting The AI Mystics: Dramatic Predictions About AI Obscure Its Concrete Benefits – Forbes


Artificial intelligence is advancing rapidly. In a few decades machines will achieve superintelligence and become self-improving. Soon after that happens we will launch a thousand ships into space. These probes will land on distant planets, moons, asteroids, and comets. Using AI and terabytes of code, they will then nanoassemble local particles into living organisms. Each probe will, in fact, contain the information needed to create an entire ecosystem. Thanks to AI and advanced biotechnology, the species in each place will be tailored to their particular plot of rock. People will thrive in low temperatures, dim light, high radiation, and weak gravity. Humanity will become an incredibly elastic concept. In time our distant progeny will build megastructures that surround stars and capture most of their energy. Then the power of entire galaxies will be harnessed. Then life and AI (long a common entity by this point) will construct a galaxy-sized computer. It will take a mind that large about a hundred thousand years to have a thought. But those thoughts will pierce the veil of reality. They will grasp things as they really are. All will be one. This is our destiny.

Then again, maybe not.

There are, of course, innumerable reasons to reject this fantastic tale out of hand. Here's a quick and dirty one built around Copernicus's discovery that we are not the center of the universe. Most times, places, people, and things are average. But if sentient beings from Earth are destined to spend eons multiplying and spreading across the heavens, then those of us alive today are special. We are among the very few of our kind to live in our cosmic infancy, confined in our planetary cradle. Because we probably are not special, we probably are not at an extreme tip of the human timeline; we're likely somewhere in the broad middle. Perhaps a hundred billion modern humans have existed, across a span of around 50,000 years. To claim in the teeth of these figures that our species is on the cusp of spending millions of years spreading trillions of individuals across this galaxy and others, you must engage in some wishful thinking. You must embrace the notion that we today are, in a sense, back at the center of the universe.

It is in any case more fashionable to speculate about imminent catastrophes. Technology again looms large. In the "gray goo" scenario, runaway self-replicating nanobots consume all of the Earth's biomass. Thinking along similar lines, philosopher Nick Bostrom imagines an AI-enhanced paperclip machine that, ruthlessly following its prime directive to make paperclips, liquidates mankind and converts the planet into a giant paperclip mill. Elon Musk, when he discusses this hypothetical, replaces paperclips with strawberries, so that he can worry about strawberry fields forever. What Bostrom and Musk are driving at is the fear that an advanced AI being will not share our values. We might accidentally give it a bad aim (e.g., paperclips at all costs). Or it might start setting its own aims. As Stephen Hawking noted shortly before his death, a machine that sees your intelligence the way you see a snail's might decide it has no need for you. Instead of using AI to colonize distant planets, we will use it to destroy ourselves.

When someone mentions AI these days, she is usually referring to deep neural networks. Such networks are far from the only form of AI, but they have been the source of most of the recent successes in the field. A deep neural network can recognize a complex pattern without relying on a large body of pre-set rules. It does this with algorithms that loosely mimic how a human brain tunes neural pathways.

The neurons, or units, in a deep neural network are layered. The first layer is an input layer that breaks incoming data into pieces. In a network that looks at black-and-white images, for instance, each of the first layer's units might link to a single pixel. Each input unit in this network will translate its pixel's grayscale brightness into a number. It might turn a white pixel into zero, a black pixel into one, and a gray pixel into some fraction in between. These numbers will then pass to the next layer of units. Each of the units there will generate a weighted sum of the values coming in from several of the previous layer's units. The next layer will do the same thing to that second layer, and so on through many layers more. The deeper the layer, the more pixels accounted for in each weighted sum.

An early-layer unit will produce a high weighted sum (it will "fire," like a neuron does) for a pattern as simple as a black pixel above a white pixel. A middle-layer unit will fire only when given a more complex pattern, like a line or a curve. An end-layer unit will fire only when the pattern (or, rather, the weighted sums of many other weighted sums) presented to it resembles a chair or a bonfire or a giraffe. At the end of the network is an output layer. If one of the units in this layer reliably fires only when the network has been fed an image with a giraffe in it, the network can be said to recognize giraffes.
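The layered weighted sums just described can be sketched in a few lines of NumPy. Everything here is illustrative: the layer sizes, random weights, and tanh nonlinearity are assumptions, not the design of any real image recognizer.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # Each unit computes a weighted sum of the previous layer's outputs,
    # then a nonlinearity decides how strongly it "fires".
    return np.tanh(w @ x + b)

# Input layer: a tiny 4x4 grayscale image, one unit per pixel,
# values already scaled so white = 0.0 and black = 1.0.
pixels = rng.random(16)

# Two hidden layers and an output layer (sizes are arbitrary).
w1, b1 = rng.standard_normal((8, 16)), rng.standard_normal(8)
w2, b2 = rng.standard_normal((4, 8)), rng.standard_normal(4)
w3, b3 = rng.standard_normal((1, 4)), rng.standard_normal(1)

h1 = layer(pixels, w1, b1)   # early layer: simple local patterns
h2 = layer(h1, w2, b2)       # middle layer: larger patterns
out = layer(h2, w3, b3)      # output unit: would fire for "giraffe"

print(out.shape)  # (1,)
```

With random weights the output is meaningless; the next step, training, is what tunes the weighting functions so that the output unit fires for the right images.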

A deep neural network is not born recognizing objects. The network just described would have to learn from pre-labeled examples. At first the network would produce random outputs. Each time it did, however, the correct answer for the labeled image would be run backward through the network. An algorithm would be used, in other words, to move the network's unit weighting functions closer to what they would need to be to recognize a given object. The more samples a network is fed, the more finely tuned and accurate it becomes.
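That error-driven adjustment can be shown in miniature with a single output unit. The toy data, learning rate, and sigmoid unit below are assumptions for illustration; real deep networks backpropagate the same kind of correction through many layers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy labeled examples: two-pixel "images" labeled 1 when the
# first pixel is brighter than the second, else 0.
x = rng.random((200, 2))
y = (x[:, 0] > x[:, 1]).astype(float)

w = rng.standard_normal(2)
b = 0.0
lr = 0.5  # learning rate (an assumed value)

for _ in range(500):
    pred = 1 / (1 + np.exp(-(x @ w + b)))  # sigmoid output unit
    err = pred - y
    # Nudge the weighting function toward what it would need
    # to be to label the examples correctly.
    w -= lr * (x.T @ err) / len(y)
    b -= lr * err.mean()

# Accuracy on the training examples after tuning.
acc = (((1 / (1 + np.exp(-(x @ w + b)))) > 0.5) == y).mean()
print(acc)
```

After a few hundred passes over the samples, the unit labels nearly all of the examples correctly, exactly the "more samples, more finely tuned" dynamic the paragraph describes.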

Some deep neural networks do not need spoon-fed examples. Say you want a program equipped with such networks to play chess. Give it the rules of the game, instruct it to seek points, and tell it that a checkmate is worth a hundred points. Then have it use a Monte Carlo method to randomly simulate games. Through trial and error, the program will stumble on moves that lead to a checkmate, and then on moves that lead to moves that lead to a checkmate, and so on. Over time the program will assign value to moves that simply tend to lead toward a checkmate. It will do this by constantly adjusting its networks' unit weighting functions; it will just use points instead of correctly labeled images. Once the networks are trained, the program can win discrete contests in much the way it learned to play in the first place. At each of its turns, the program will simulate games for each potential move it is considering. It will then choose the move that does best in the simulations. Thanks to constant fine-tuning, even these in-game simulations will get better and better.
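The "simulate games for each potential move, then pick the best" loop can be demonstrated on a game far simpler than chess. The sketch below is plain Monte Carlo rollout on the stone-taking game Nim, without the neural network guidance that AlphaZero layers on top; the game choice and simulation count are assumptions for the example.

```python
import random

random.seed(0)

def legal_moves(stones):
    # Nim variant: take 1-3 stones; whoever takes the last stone wins.
    return [n for n in (1, 2, 3) if n <= stones]

def rollout(stones, my_turn):
    # Finish the game with random moves; return 1 if the simulating
    # player wins, else 0.
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return 1 if my_turn else 0
        my_turn = not my_turn
    return 0

def choose_move(stones, sims=2000):
    # Estimate each candidate move's value by simulation, then pick
    # the move that does best in the simulations.
    def value(move):
        return sum(rollout(stones - move, my_turn=False) for _ in range(sims))
    return max(legal_moves(stones), key=value)

# From 5 stones, taking 1 leaves the opponent a losing position of 4.
print(choose_move(5))  # 1
```

The program was never told that multiples of 4 are losing positions; random simulation alone surfaces the winning move, which is the trial-and-error value assignment the paragraph describes.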

There is a chess program that operates more or less this way. It is called AlphaZero, and at present it is the best chess player on the planet. Unlike other chess supercomputers, it has never seen a game between humans. It learned to play by spending just a few hours simulating moves with itself. In 2017 it played a hundred games against Stockfish 8, one of the best chess programs to that point. Stockfish 8 examined 70 million moves per second. AlphaZero examined only 80,000. AlphaZero won 28 games, drew 72, and lost zero. It sometimes made baffling moves (to humans) that turned out to be masterstrokes. AlphaZero is not just a chess genius; it is an alien chess genius.

AlphaZero is at the cutting edge of AI, and it is very impressive. But its success is not a sign that AI will take us to the stars (or enslave us) any time soon. In Artificial Intelligence: A Guide for Thinking Humans, computer scientist Melanie Mitchell makes the case for AI sobriety. AI currently excels, she notes, only when there are clear rules, straightforward reward functions (for example, rewards for points gained or for winning), and relatively few possible actions (moves). Take IBM's Watson program. In 2011 it crushed the best human competitors on the quiz show Jeopardy!, leading IBM executives to declare that its successors would soon be making legal arguments and medical diagnoses. It has not worked out that way. Real-world questions and answers in real-world domains, Mitchell explains, have neither the simple short structure of Jeopardy! clues nor their well-defined responses.

Even in the narrow domains that most suit it, AI is brittle. A program that is a chess grandmaster cannot compete on a board with a slightly different configuration of squares or pieces. Unlike humans, Mitchell observes, none of these programs "can transfer anything it has learned about one game to help it learn a different game." Because the programs cannot generalize or abstract from what they know, they can function only within the exact parameters in which they have been trained.

A related point is that current AI does not understand even basic aspects of how the world works. Consider this sentence: "The city council refused the demonstrators a permit because they feared violence." Who feared violence, the city council or the demonstrators? Using what she knows about bureaucrats, protestors, and riots, a human can spot at once that the fear resides in the city council. When AI-driven language-processing programs are asked this kind of question, however, their responses are little better than random guesses. "When AI can't determine what 'it' refers to in a sentence," Mitchell writes, quoting computer scientist Oren Etzioni, "it's hard to believe that it will take over the world."

And it is not accurate to say, as many journalists do, that a program like AlphaZero "learns by itself." Humans must painstakingly decide how many layers a network should have, how much incoming data should link to each input unit, how fast data should aggregate as it passes through the layers, how much each unit weighting function should change in response to feedback, and much else. These settings and designs, adds Mitchell, must typically be decided anew for each task a network is trained on. It is hard to see nefarious unsupervised AI on the horizon.

The doom camp (AI will murder us) and the rapture camp (it will take us into the mind of God) share a common premise. Both groups extrapolate from past trends of exponential progress. Moore's law (which is not really a law, but an observation) says that the number of transistors we can fit on a computer chip doubles every two years or so. This enables computer processing speeds to increase at an exponential rate. The futurist Ray Kurzweil asserts that this trend of accelerating improvement stretches back to the emergence of life, the appearance of eukaryotic cells, and the Cambrian explosion. Looking forward, Kurzweil sees an AI singularity (the rise of self-improving machine superintelligence) on the trendline around 2045.

The political scientist Philip Tetlock has looked closely at whether experts are any good at predicting the future. The short answer is that they're terrible at it. But they're not hopeless. Borrowing an analogy from Isaiah Berlin, Tetlock divides thinkers into hedgehogs and foxes. A hedgehog knows one big thing, whereas a fox knows many small things. A hedgehog tries to fit what he sees into a sweeping theory. A fox is skeptical of such theories. He looks for facts that will show he is wrong. A hedgehog gives answers and says "moreover" a lot. A fox asks questions and says "however" a lot. Tetlock has found that foxes are better forecasters than hedgehogs. The more distant the subject of the prediction, the more the hedgehog's performance lags.

Using a theory of exponential growth to predict an impending AI singularity is classic hedgehog thinking. It is a bit like basing a prediction about human extinction on nothing more than the Copernican principle. Kurzweil's vision of the future is clever and provocative, but it is also hollow. It is almost as if huge obstacles to general AI will soon be overcome because the theory says so, rather than because the scientists on the ground will perform the necessary miracles. Gordon Moore himself acknowledges that his law will not hold much longer. (Quantum computers might pick up the baton. We'll see.) Regardless, increased processing capacity might be just a small piece of what's needed for the next big leaps in machine thinking.

When at Thanksgiving dinner you see Aunt Jane sigh after Uncle Bob tells a blue joke, you can form an understanding of what Jane thinks about what Bob thinks. For that matter, you get the joke, and you can imagine analogous jokes that would also annoy Jane. You can infer that your cousin Mary, who normally likes such jokes but is not laughing now, is probably still angry at Bob for spilling the gravy earlier. You know that although you can't see Bob's feet, they exist, under the table. No deep neural network can do any of this, and it's not at all clear that more layers or faster chips or larger training sets will close the gap. We probably need further advances that we have only just begun to contemplate. "Enabling machines to form humanlike conceptual abstractions," Mitchell declares, "is still an almost completely unsolved problem."

There has been some concern lately about the demise of the corporate laboratory. Mitchell gives the impression that, at least in the technology sector, the corporate basic-research division is alive and well. Over the course of her narrative, labs at Google, Microsoft, Facebook, and Uber make major breakthroughs in computer image recognition, decision making, and translation. In 2013, for example, researchers at Google trained a network to create vectors among a vast array of words. A vector set of this sort enables a language-processing program to define and use a word based on the other words with which it tends to appear. The researchers put their vector set online for public use. Google is in some ways the protagonist of Mitchell's story. It is now an "applied AI company," in Mitchell's words, that has placed machine thinking at the center of diverse products, services, and blue-sky research.
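The idea of defining a word by the other words it appears with can be sketched with simple co-occurrence counts and cosine similarity. The tiny corpus and one-word window below are invented for illustration; Google's actual system (word2vec) learns dense vectors with a neural network rather than counting.

```python
import numpy as np
from collections import defaultdict

corpus = ("the king rules the realm . the queen rules the realm . "
          "the dog chases the ball .").split()

# Represent each word by counts of the words appearing right next to it.
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}
vecs = defaultdict(lambda: np.zeros(len(vocab)))
for i, w in enumerate(corpus):
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            vecs[w][index[corpus[j]]] += 1

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Words used in similar contexts end up with similar vectors.
print(cosine(vecs["king"], vecs["queen"]) > cosine(vecs["king"], vecs["ball"]))  # True
```

Here "king" and "queen" share the same neighbors ("the", "rules"), so their vectors align far more closely than "king" and "ball" do, which is the sense in which such vectors let a program "define and use" a word.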

Google has hired Ray Kurzweil, a move that might be taken as an implicit endorsement of his views. It is pleasing to think that many Google engineers earnestly want to bring on the singularity. The grand theory may be illusory, but the treasures produced in pursuit of it will be real.


Melissa McCarthy And Ben Falcone Have Decided To Release ‘Superintelligence’ Via HBO Max Ins – Science Fiction


The new Melissa McCarthy sci-fi comedy Superintelligence will not open theatrically as planned. Instead, the comedian and her director husband, Ben Falcone, have decided to release the movie via the new HBO Max streaming service. Superintelligence had been slated for release during the busy holiday season, on December 20, but the pair has chosen a different route, at least in part to reach a wider audience.

McCarthy told Deadline:

It was actually Ben's idea; it came from the filmmaker himself. We had a release date, a full marketing plan, and I had all my press lined up. We were really ready to go. When the announcement came that HBO Max was really happening, Ben had this idea. And we thought, is this better? Different doesn't mean worse, and how are we watching films ourselves? To us, each movie is near and dear to our hearts. You just want people to see it and love it and you want them to feel good. Superintelligence at its core is, love wins, and people matter. I want that to get to as many people as it can. We need that today, and this seemed like the best way to do it. So no, [this wasn't imposed on us]. We were ready to go the other way, and we decided to make the detour.

Falcone added:

Honestly, you can release a mid-budget movie, and if we'd stayed in the theaters, we could have done incredibly well. There still are those examples of movies like this one that do. But for this movie, at this time, we felt like it was the best way to go. The PG rating, the fact they are starting this thing. All these streaming services are starting, and here, we are up there with Sesame Street, and Meryl Streep and JJ Abrams and Hugh Jackman and Jordan Peele. There are cool people doing this. So following my fear-based mentality, I thought it was the best move.

In addition to Superintelligence, HBO Max will also be offering Let Them All Talk from Steven Soderbergh and starring Meryl Streep, Greg Berlanti's Unpregnant, and Bad Education starring Hugh Jackman and Allison Janney, which HBO paid $17 million to acquire.

Carol Peters' life is turned upside down when she is selected for observation by the world's first superintelligence, a form of artificial intelligence that may or may not take over the world.

Superintelligence also stars Bobby Cannavale, Jean Smart, Michael Beach, Brian Tyree Henry, and the voice of James Corden as the titular Superintelligence. The release will now be delayed, as HBO Max isn't expected to launch until next spring.

Falcone and McCarthy are re-teaming for Thunder Force for Netflix, which also stars Octavia Spencer.


Read more:

Melissa McCarthy And Ben Falcone Have Decided To Release 'Superintelligence' Via HBO Max Ins - Science Fiction

Melissa McCarthy & Director Ben Falcone On Choosing HBO Max Bow Instead Of WB Xmas Release For Superintelligence – Deadline

EXCLUSIVE: In a move that could become more common as major studios lean in heavily toward their streaming launches, the Ben Falcone-directed Melissa McCarthy-starrer Superintelligence has exited its December 20 theatrical release date to instead become the first Warner Bros Pictures Group film to premiere on HBO Max.

This comes before an HBO Max presentation on October 29 where it is expected that other projects might become part of a streamer launch slate that now will have Superintelligence; the Steven Soderbergh-directed Meryl Streep-starrer Let Them All Talk; the Greg Berlanti-produced YA novel adaptation Unpregnant; and sooner or later Bad Education, the Hugh Jackman/Allison Janney-starrer bought at Toronto for north of $17 million to bow on HBO. The original programming will be part of a service that launches with WarnerMedia's own library titles including Friends and The Big Bang Theory and third-party acquisitions including Sesame Street.

Amid the high-stakes battle for subscription streaming service launches by WarnerMedia, Disney, Comcast and Apple to go along with Netflix and Amazon, it isn't hard to see how the prospect of being among the first marquee titles on HBO Max is enticing. Especially when mid-budget comedies and dramas are plagued by the optics of eight-figure P&A spends and heavy scrutiny on opening-weekend box office grosses. That doesn't exist if you are launching on an OTT to a wide audience.

McCarthy and Falcone, long married and longtime frequent creative collaborators (they are now making their first Netflix film, Thunder Force), said the decision to move out of theaters and onto HBO Max was theirs, and that it wasn't imposed on them by WarnerMedia, Warner Bros or New Line, which developed the comedy and shepherded the film through production.

It was actually Ben's idea, it came from the filmmaker himself, McCarthy told Deadline. We had a release date, a full marketing plan, and I had all my press lined up. We were really ready to go. When the announcement came that HBO Max was really happening, Ben had this idea. And we thought, is this better? Different doesn't mean worse, and how are we watching films ourselves? To us, each movie is near and dear to our hearts. You just want people to see it and love it and you want them to feel good. Superintelligence at its core is, love wins, and people matter. I want that to get to as many people as it can. We need that today, and this seemed like the best way to do it. So no, [this wasn't imposed on us]. We were ready to go the other way, and we decided to make the detour.

When I brought up the perilous track many theatrical releases face these days, Falcone acknowledged it is something a filmmaker thinks about.

I pride myself on living a fear-based life, and that won't stop, Falcone joked. I don't exactly remember the question, but I wanted to make that clear to you, and to everyone. Honestly, you can release a mid-budget movie, and if we'd stayed in the theaters, we could have done incredibly well. There still are those examples of movies like this one that do. But for this movie, at this time, we felt like it was the best way to go. The PG rating, the fact they are starting this thing. All these streaming services are starting, and here, we are up there with Sesame Street, and Meryl Streep and JJ Abrams and Hugh Jackman and Jordan Peele. There are cool people doing this. So following my fear-based mentality, I thought it was the best move.

McCarthy and Falcone also felt a thematic fit, as the film explores relationships against the backdrop of technological evolution. McCarthy's character finds herself getting messages from her TV, phone and microwave, and what she doesn't realize is she has been selected for observation by the world's first superintelligence, a form of artificial intelligence that is contemplating taking over the world. Steve Mallory wrote the script, James Corden voices the A.I., and Bobby Cannavale is playing her love interest.

We made the film for New Line and Warner Bros, and there are different challenges in the way people watch films, how and where they see them on different platforms, McCarthy said. We were all geared up to open theatrically, and Ben was the one who said, this would be better for HBO Max. What a way to reach a massive amount of people, and to be put in pretty amazing company. It seemed like a win-win. We have two young kids, and we thought about how we watch movies. Superintelligence is PG, and we thought about how we watch these movies with our kids. We still go to the theater, and we love going to the theater. I would cry if that ever went away. But we watch a lot of movies at home, and a lot of people do. This just seemed like an exciting new way to get it in front of a lot of people.

The move pushes the release of the film until sometime in the spring, and though a specific date hasn't been decided, the couple is really warming to the platform.

I urge you and all your friends to immediately subscribe to HBO Max, Falcone said.

Added McCarthy: Just give us your credit card, Mike, and we'd be happy to process it for you. And maybe give us your bank account numbers, too.

Read more here:

Melissa McCarthy & Director Ben Falcone On Choosing HBO Max Bow Instead Of WB Xmas Release For Superintelligence - Deadline

AMC Is Still In the Theater Business, But VOD Is a Funny Way of Showing It – IndieWire

Was I the only one who found it weird when AMC Theatres announced that it was getting into the streaming business with the launch of AMC Theatres On Demand? When it comes to places to buy and rent movies, we've got Apple, Amazon, Fandango, Vudu, Google Play, YouTube, and a few more that I don't need to remember because it's too many already.

I also thought it suggested some seriously mixed messaging, but maybe that was just me, until I got a call from an NBC affiliate who wanted to do an interview about AMC's new streaming service. That seemed like a curious topic for local news; why were they interested? The answer: They wanted to know if it meant AMC was getting out of the theater business.

Of course, AMC is very much dedicated to the theatrical business, but this is a funny way of showing it. Launching a platform for VOD transactions (something that runs counter to going out to the movies) is not what I'd expect a theater chain to worry about right now. There are far more pressing issues at hand, starting with the sacred cow of The Theatrical Experience.

It's the theme of every CinemaCon, repeated like a rosary as exhibitors and distributors take the Caesars Palace stage and talk about how worldwide audiences continue to share the primacy of the theatrical experience. However, that audience also has the option to stay home with their couches, pause buttons, and very large TV sets to watch an infinite number of entertainment options. By contrast, choosing to go to the theater means spending a lot of time, money, and effort on a very small selection of premium products. So whether you're going to the AMC to see Avengers, or to the Alamo to see Parasite, the act of going to the movies is now a bespoke experience.

But is that what chain theaters deliver? If you're Alamo, with the fun beers on tap and no commercials and weird short films, sure. If you're a chain that inspired the ire of Edward Norton, who encountered low-light projection and crappy sound while preparing for the November 2 nationwide release of Motherless Brooklyn, that would be no. It's the theater chains that are destroying the theatrical experience, he said. Period, full-stop. No one else. Meanwhile, he sang the praises of Netflix as it represents an unprecedented period of ripe opportunity for many more types of stories and voices to be heard. (Netflix is also looking at a long-term lease for the tony, single-screen Paris Theater in Manhattan. Oh, the irony.)

Netflix turned to the Paris, the Belasco, and the Egyptian as showcases for Oscar contenders Marriage Story and The Irishman because major chains won't let them book their theaters, but a much more significant threat to exhibitors is coming from inside the house. This week, Warners chose to move Melissa McCarthy's Christmas title, Superintelligence, out of theaters and on to its upcoming streaming platform, HBO Max, which is scheduled to launch sometime next spring.

Melissa McCarthy and Ben Falcone at the Warner Bros. Cinemacon presentation, April 2019

Rob Latour/Shutterstock

Speaking to Deadline, McCarthy spun it as all being the idea of her husband, director Ben Falcone:

It was actually Ben's idea, it came from the filmmaker himself. We had a release date, a full marketing plan, and I had all my press lined up. We were really ready to go. When the announcement came that HBO Max was really happening, Ben had this idea. And we thought, is this better? Different doesn't mean worse, and how are we watching films ourselves? To us, each movie is near and dear to our hearts. You just want people to see it and love it and you want them to feel good. Superintelligence at its core is, love wins, and people matter. I want that to get to as many people as it can. We need that today, and this seemed like the best way to do it. So no, [this wasn't imposed on us]. We were ready to go the other way, and we decided to make the detour.

Ultimately, it doesn't matter if the idea came from Falcone, or from the studio (sources told IndieWire that the film didn't test well). What matters is this is likely the first of many films in which a distributor weighs its options: Invest many millions and see what you get back from theatrical, or substantially fewer millions on a global streaming platform and see what you generate in subscribers? Studios may find themselves following in Netflix's footsteps and sorting their slates: These movies demand a theatrical investment, and these will do well on streaming.

Last May, when HBO Max was only a twinkle in the eye of current WarnerMedia CEO John Stankey, Warners released the McCarthy and Falcone comedy Life of the Party; at $53 million domestic, it wasn't a blockbuster. But with box office on track to fall nearly 6% behind 2018, exhibitors need every $53 million they can get. And with almost every major studio now tied to a streaming outlet, they now have a no-friction solution for theatrical releases that might struggle: What's dull on the big screen can look very shiny on the smaller ones. And, as McCarthy said: How are we watching films ourselves?

Increasingly, we're watching them at home. But probably not on AMC Theatres On Demand.

Here's some of the best work from this week on IndieWire:

Disney's Most Valuable Screenwriter Has Had Enough of the Strong Female Trope, by Kate Erbland

Linda Woolverton, the woman who brought Belle, Maleficent, and a billion-dollar animated movie to Disney, speaks her mind.

Large-Format Cameras Are Changing Film Language, From Joker to Midsommar, by Chris O'Falt

With the advent of cameras like the Alexa 65, a new generation of filmmakers is using the large format's immersive qualities in exciting ways.

Peak TV Is Only a Concern in the Gated Community of Hollywood, by Libby Hill

The average Joe doesn't care about The Morning Show. They already have all the TV they need and can afford.

Bombshell and Jojo Rabbit Share an Oscar Superpower: They're Made For the Mainstream, by Anne Thompson

Films like Parasite and Pain and Glory are critical darlings, but the truth is that when it comes to Oscar votes, popularity counts.

Is This Is Us Making You Seasick? You're Not Alone, by Leo Garcia

Digital image stabilization mixed with the show's penchant for shaky camera work makes it seem as if certain scenes were filmed out at sea.

Disney+: 200 Must-Watch TV Shows & Movies Available on Launch, by LaToya Ferguson

From the beloved Star Wars trilogies to the Marvel Cinematic Universe to Pixar's greatest achievements, here's the best of the content that will be available to subscribers for $6.99 a month.

Have a great weekend,

Dana


See the article here:

AMC Is Still In the Theater Business, But VOD Is a Funny Way of Showing It - IndieWire

Idiot Box: HBO Max joins the flood of streaming services – Weekly Alibi


Viewers of visual media can be forgiven for thinking that today's streaming services have turned into a veritable deluge. Every other week it seems like I'm educating/warning people about another streaming service with a catalogue of original programming, an archive of old TV shows and a random selection of movies available on your mobile devices for a low monthly subscription fee. Since I didn't talk about one last week, I guess I'm obliged to this week. Netflix, Hulu, Amazon Prime, Disney Plus, Apple TV+: Meet HBO Max.

Like a lot of Americans, you may be confused at this point. Isn't HBO already a pay-per-view station full of movies, TV shows and original content? Sure. And can't you already subscribe to HBO Now, a streaming service for portable devices that bypasses the need for cable or satellite? Yup. But HBO Max is a long-brewing corporate mash-up from AT&T-owned multinational mass media conglomerate WarnerMedia. Not only will it consist of HBO's normal slate of movies, miniseries and TV shows, it will also have access to all of WarnerMedia's corporate catalogue. Basically, whatever Disney doesn't own, WarnerMedia does (HBO, CNN, TBS, TNT, TruTV, Cartoon Network, Adult Swim, TCM, Warner Bros, New Line, Crunchyroll, Looney Tunes, The CW, DC Comics).

HBO Max, for example, will be the new home for the Warner Bros.-produced series Friends now that the beloved '90s sitcom is free from its $100 million contract with Netflix. Also lined up: The Fresh Prince of Bel-Air (which is owned by Warner Bros. Domestic Television Distribution) and any Warner Bros.-produced dramas on The CW Network (like, for example, Riverdale). Throw in some Bugs Bunny cartoons, all the Nightmare On Elm Street films (from New Line Cinema) and stuff like Full Frontal with Samantha Bee (that's TBS), and you've got a solid back catalogue on which to build.

In addition to everything WarnerMedia owns, HBO Max has signed contracts to re-air BBC shows including Doctor Who, The Office, Top Gear and Luther. The network also signed a deal with Japan's Studio Ghibli to secure US streaming rights to all of its animated films (My Neighbor Totoro, Princess Mononoke, Spirited Away, Ponyo, Howl's Moving Castle, Kiki's Delivery Service, to name a few). These deals add some impressive weight to HBO Max's lineup (while, at the same time, stealing these shows away from cable/streaming rivals).

As far as the new programming is concerned, the floodgates have already opened. Dozens of emails have been pouring into my inbox this week, touting HBO Max's new projects. Director Denis Villeneuve (Blade Runner 2049) will adapt Dune: The Sisterhood, a series based on Brian Herbert and Kevin Anderson's sequel to Frank Herbert's sci-fi classic. The classic 1984 horror-comedy Gremlins is being turned into an animated series. The Hos is a multigenerational docu-reality series about a rich Vietnamese-American family in Houston. Monica Lewinsky (yes, that Monica Lewinsky) executive produces 15 Minutes of Shame, a documentary series about the public shaming epidemic in our culture and our collective need to destroy one another. Brad and Gary Go To finds Hollywood power couple Brad Goreski and Gary Janetti traveling around the globe sampling international cuisine. The streaming service has also ordered up Grease: Rydell High, a musical spin-off which brings the 1978 film Grease to today's post-Glee audiences.

There will be original movies on tap as well. Emmy-winning comedian Amy Schumer climbs on board with Expecting Amy, a documentary about the funny lady's struggle to prepare for a stand-up comedy tour while pregnant. Melissa McCarthy (Spy, Bridesmaids) will star in Superintelligence, about an ordinary woman who is befriended by the world's first artificial intelligence with an attitude.

As far as when we can get a look at HBO Max, WarnerMedia has pushed the premiere date several times and is now simply saying spring 2020. What will it cost the consumer? Given that HBO Now costs $15 a month, and HBO Max will include all of HBO's streaming product (plus all that other stuff mentioned above), we can only assume that it will cost more than that. With Hulu starting at $6 a month, Disney+ banking on charging $6.99 a month and Netflix running $13 a month, HBO Max is looking kinda pricey. But what do you say, American consumers? Are you ready to fork out for one more monthly streaming service? It's the last one. I swear. (It's not. Not by a longshot.)

Read this article:

Idiot Box: HBO Max joins the flood of streaming services - Weekly Alibi

Here's How to Watch Watchmen, HBO's Next Game of Thrones – Cosmopolitan

The DC universe just keeps getting bigger, and the newest addition to the comic world is HBO's Watchmen, a series based on the 1986 graphic novel where the superheroes are the outlaws (don't worry, I'll explain what that even means in a bit).

You've probs already heard about it because it's being dubbed the new Game of Thrones, which means our hopes are high for the beginning and our expectations for the series' ending are at an all-time low.

The Watchmen graphic novel is about "superheroes." (Yes, that's in quotes for a reason.) These superheroes aren't born with crazy superhuman abilities but instead are really, really good at one specific thing, so they might have, say, extremely high intelligence or insane detective skills.

The comic takes place in a world where these everyday people would dress in superhero costumes and act as vigilantes, until the practice was outlawed in 1977 after a riot involving said vigilante superheroes. A lot of the former superheroes went to work for the government, using their powers for good, but some ignored the law (namely a man named Rorschach) and continued their work in a more anarchic way.

The show is being described as more of a continuation, not an adaptation. It picks up a little over 30 years after the novel ended.

Queen Regina King stars as the main character, a police officer in Tulsa who goes by the name Sister Night and is super protective over her husband and child. Also, she has a BADASS costume that is part Catwoman, part Xena Warrior Princess. Serious Halloween inspo.

Dr. Manhattan *might* be making a return. If you're not familiar, he's a blue guy and the only one in the series with actual superpowers. His godlike capabilities include teleportation, total clairvoyance, and telekinesis. At the end of the graphic novel, he leaves Earth to go to Mars, BUT he's in the HBO previews, so fingers crossed.

You'll definitely see Adrian Veidt (also known as Ozymandias), a retired superhero with superintelligence who is known for faking an alien invasion with a giant squid. (Ya, this show gets weird.)

Of course, Rorschach (who was killed by Dr. Manhattan at the end of the series) will return, but not exactly. His name, mask, and overall evil mission will be carried on by a group of white supremacists.

You can catch the series on HBO or HBO Now every Sunday at 9 p.m. ET. But if you can't be held to a strict TV-watching schedule, it can also be streamed with an HBO Go account! TG for streaming services.

Link:

Here's How to Watch Watchmen, HBO's Next Game of Thrones - Cosmopolitan

The Best Artificial Intelligence Books you Need to Read Today – Edgy Labs

If you're looking for a selection of the top artificial intelligence books, the offerings could be overwhelming. But we're here to help with that.

Artificial intelligence is slowly and steadily making its way through pretty much every system humans have created.

AI-powered agents are getting increasingly smarter as they hone their problem-solving and decision-making skills.

On the other hand, humans avail themselves of AI as much as possible. But they're called to adapt and learn to coexist with machines if they are to thrive at best, or survive at worst.

As far as humans are concerned, intelligent agents cut both ways.

Thankfully, the world's leading scientists and thinkers help us understand what's at stake and the best damage control measures to take if need be.

Many books deal with AI theory, modern AI sciences, and the technologys future implications.

The ones listed below are some of the best artificial intelligence books today that dissect all of these areas.

1. Introduction to Artificial Intelligence

As befits the topic, we start our list with a comprehensive introduction to AI technology: Introduction to Artificial Intelligence. Written by Phillip C. Jackson, Jr., the book is one of the classics that's still read by experts in the field and non-specialists alike.

This book provides a summary of the previous two decades of research into the science of computer reasoning, and where it could be heading. Published in 1985, some of the information might be outdated, but if nothing else, the book could serve as a valuable historical document.

2. Artificial Intelligence: A Modern Approach

Another classic is Artificial Intelligence: A Modern Approach, written by Stuart Russell and Peter Norvig.

No list on the best artificial intelligence books can fail to mention this bestseller that has become a standard book for AI students. Used as a textbook in hundreds of universities around the world, the book was first published in 1995. A third edition came out in 2009.

You may want to check this book to know why it's described as the most popular artificial intelligence textbook in the world.

3. Life 3.0

This book is one of my personal favorites, by one of the leading physicists and cosmologists in the world, Max Tegmark, aka Mad Max.

Tegmark's Life 3.0: Being Human in the Age of Artificial Intelligence welcomes you to the most important conversation of our time. The MIT physics professor explores the future of AI and how it would reshape many facets of human life, from jobs to wars. He's among those who think AI is a double-edged sword, and that it's really up to us whether to give it free rein.

Elon Musk recommends this book as worth reading, noting that AI could be the best or worst thing.

4. How to Create a Mind

How to Create a Mind: The Secret of Human Thought Revealed is a book by famous futurist and tech visionary Ray Kurzweil.

Kurzweil discusses the notion of mind and how it emerges from the brain, and the attempts of scientists to recreate human intelligence. He predicts that by 2020, computers would be powerful enough to simulate an entire human brain.

Kurzweil offers some interesting thought experiments on thinking in the book. For example, most people can recite the alphabet correctly, but most would fail at reciting it backward as easily. The reason for this, according to the author, has to do with the memory formation process. The brain stores memories as hierarchical sequences, accessible only in the order they're remembered in.

5. Superintelligence: Paths, Dangers, Strategies

Oxford philosopher Nick Bostrom is known for his work on major existential risks. He includes the superintelligence threat among the bunch.


In Superintelligence: Paths, Dangers, Strategies, Bostrom questions whether smart algorithms would spell the end of humanity or be a catalyst for a better future.

In this New York Times bestseller, Bostrom argues that superintelligent machines, left unchecked, could replace humans as the dominant lifeform on Earth.

6. Weapons of Math Destruction

AI is all about Big Data, and the algorithms that work off of it. And that's the focus of the book titled Weapons of Math Destruction by Cathy O'Neil, a data scientist at Harvard University.

In the book, the author explores how math, at the heart of data and by extension AI, could be manipulated and biased. The author discusses the negative social implications of AI and how it could be a threat to democracy.

O'Neil identifies three factors (scale, secrecy, and destructiveness) that could turn an AI algorithm into a Weapon of Math Destruction.

7. Our Final Invention

It's thanks to their brains, not brawn, that humans dominated Earth and reigned supreme over other species. Now a human invention, AI, is posing a potential threat to this dominance.

Our Final Invention: Artificial Intelligence And The End Of The Human Era is a book by American documentary filmmaker James Barrat.

According to the author, while human intelligence stagnates, machines are getting smarter and would soon surpass humans' cognitive abilities. Superintelligent artificial species could develop survival drives that could eventually lead them to clash with humans.

8. The Sentient Machine

Unlike other books on this list, The Sentient Machine: The Coming Age of Artificial Intelligence provides a more optimistic look at AI.

In the book, inventor and techpreneur Amir Husain (unlike Bostrom, Tegmark, and Musk) thinks humans can thrive with AI, not just survive.

Weighing AI's risks and potential, Husain thinks we should embrace AI and let sentient machines lead us to a bright future. This isn't some idle utopian daydreaming! The author's approach is based on scientific, cultural, and historical arguments. He also provides a wide-ranging discussion of what makes us human and our role as creators in the world.

9. The Fourth Age

We find another optimistic take on AI in The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

In this book, author Byron Reese manages to both engage and entertain the reader with his insights into history and projections for the future. According to Reese, human civilization went through three major disruptions in its history: fire and language, agriculture, and finally writing and the wheel.

AI promises a fourth age, which the book discusses in detail.

10. AI Superpowers

The United States and China are at the forefront of AI research. In a context marked by a geopolitical and economic rivalry between the two countries, it stands to reason that AI would be weaponized in some way.

AI Superpowers: China, Silicon Valley, and the New World Order is a book by AI pioneer Kai-Fu Lee. China is racing with the U.S. to take the AI lead globally, and Lee thinks it will dominate the industry. If data is the new oil, says Lee, then China is the new Saudi Arabia.

Lee points out the factors that he thinks would help China win the AI arms race. He cites a high quantity of data, fewer data protection regulations, and a more aggressive AI startup culture as reasons giving China a potential edge.

These are our picks. Which artificial intelligence books worth reading left an impression on you?

Go here to read the rest:

The Best Artificial Intelligence Books you Need to Read Today - Edgy Labs

Aquinas’ Fifth Way: The Proof from Specification – Discovery Institute

Editor's note: See also Introducing Aquinas' Five Ways, by Michael Egnor. For Dr. Egnor's previous posts in this series on Aquinas' Five Ways, see here, here, here, and here. For more on Thomas Aquinas, intelligent design, and evolution, see the website Aquinas.Design.

Aquinas' Fifth Way is the proof of God's existence that is easiest to grasp in everyday life. The order of nature points to a Mind that gives it order. This obvious order is the substrate for all natural science; after all, without natural order, scientific study of nature would be an exercise in futility. And the natural order is the framework for everyday life. We could not take a breath unless our lungs and nerves worked consistently, and unless oxygen had the chemical properties that it has. Order in nature is ubiquitous. We have become so accustomed to it that we fail to notice how remarkable it is.

That this natural order points to God is obvious. But what are the characteristics of this order? In living things, ID theorists describe this order as specified complexity. Specified complexity means that a pattern has substantial independently specified information (specification) that has a low probability of occurrence by chance (complexity). Aquinas would agree that such specified complexity points to a designer, but he understands natural order in a way that is rather different from the understanding of many ID theorists.

For Aquinas, it is the specification, rather than the complexity, that is at the heart of the Fifth Way. Aquinas understands specification in an Aristotelian sense: as final cause (teleology). The Fifth Way is often called the proof from Final Cause, or the Teleological proof.

Final cause is fundamental to Aristotelian-Thomistic metaphysics. One may ask: What is the cause of a thing? St. Thomas answers that to completely understand a cause in nature, we really must know four causes:

Material cause: the matter out of which something is made. The material cause of a statue is the block of marble from which it is carved.

Efficient cause: the agent that gets the cause started. The efficient cause of a statue is the sculptor.

Formal cause: the structure of the system that is caused. The formal cause of a statue is the shape of the statue.

Final cause: the end or purpose for the cause. The final cause of a statue is the purpose in the mind of the sculptor to use the statue to decorate a garden, for example.

In nature, final causes and formal causes often overlap. The formal cause of an acorn growing into an oak tree is the form of the oak tree, which is also the final cause of the growth of the acorn; the end or telos of the growth of the acorn is the form of the oak tree it will become.

The four causes have reciprocal relations. Material cause and formal cause work together, in the sense that form provides structure to matter. Efficient cause and final cause work together, in the sense of a push-pull relationship. An efficient cause pushes while a final cause pulls simultaneously. Efficient causes point to ends: regular causes in nature tend to specific outcomes. When you strike a match (efficient cause), it bursts into flame (final cause). Efficient causation is incomprehensible without final cause: regular cause-and-effect in nature is directional, in the sense that cause is consistently from one specific state to another specific state. It makes no sense to speak of cause from unless we also speak of cause to. Causes have beginnings and ends.

For St. Thomas (following Aristotle), final cause is particularly important, because it provides direction to natural causes. Final cause is the essential principle by which causes in nature happen. We moderns tend to ignore final causes; we think in terms of cause as a push (efficient cause), rather than cause that pulls (final cause). For St. Thomas, it is the pull of final cause that is fundamental to the regularity of nature. Final cause is the cause of causes.

With this in mind, let's look at the proof from the Fifth Way. St. Thomas notes that causes in nature are more or less consistent. Causation is the actualization of potentiality, and causation follows patterns. Things fall down, not up. Cold weather causes water to freeze, not boil. Acorns become oaks, but oaks don't become acorns. Aquinas notes that the final cause of an acorn is in some sense in the acorn itself: that is, in order for an acorn to reliably grow into an oak tree, the form of the oak tree must have some sort of existence while the acorn is still an acorn. A process of change can't point to an end unless the end pre-exists in some sense. But how can an oak tree exist when it is merely an acorn?

What exists is the form of the oak tree. The form of the oak tree can exist in two ways. It can exist in an object as a substantial form; that is, the form can exist in the oak tree itself. This is the way forms ordinarily exist in objects.

A form can also exist in an intentional sense; that is, the form can exist in the mind of a person who thinks about it. When I know an oak tree, the form of that oak tree is in my mind as well as in the oak tree. That is, in fact, how I know it. My mind grasps its form.

For change to occur in nature, the form of the end-state of the change must in some way exist prior to the completion of the change. Otherwise, the change would have no direction; colloquially, the acorn wouldn't know what to grow into.

But of course most things in nature (and all inanimate things) don't know anything. An electron doesn't know quantum mechanics, but it moves in strict accordance with quantum mechanical laws. A rock knows nothing of Newton's law of gravity, but it falls in strict accordance with Newton's law. A plant knows nothing about photosynthesis, but it does it very well every day, with an expertise exceeding that of the best chemist.

Since the form of the final state of a process of change can't be in the thing being changed (the acorn is not yet the oak tree), and change routinely occurs in things that have no mind to look forward to the final state, where is the form of the final state of change in nature?

Aquinas asserts that the form of the final state (the telos, or final cause) must therefore be in the Mind of a Superintelligence that directs natural change. That is what all men call God.

So you can see that in the Thomistic Fifth Way, it is the specification of change, not its complexity, that is at the heart of the matter. It's reminiscent of the quip about a dog that can recite Shakespeare: it's not that the mutt knows Shakespeare that's remarkable; it's remarkable that he can talk at all. What's remarkable in nature is not so much that nature follows complex patterns, but that it follows any pattern at all. Any pattern in nature, even the simplest, cries out for explanation, and it is the fact of natural patterns that is the starting point of the Fifth Way.

From the Thomistic perspective, even the simplest natural process (a leaf falling to the ground) is proof of God's existence. The fall of the leaf is specified prior to the fall: leaves fall to the ground, rather than doing any of countless other things a natural object might do (like burst into flame or grow a tail). This specification, this telos, requires a Mind in which the fallen state of the leaf is conceived prior to the actual fall of the leaf. Change in nature requires a Mind to look ahead and direct it. Complexity (or simplicity) of the change is irrelevant.

It is the consistent directedness of change in nature that points to God. Atheists, with much handwaving and dubious science, claim to explain biological complexity by Darwinian stories. Yet, even on its own terms, Darwinism fails. Adaptation by natural selection may account on some level for the fixation of a particular phenotype in a population, but it offers no explanation for the fundamental fact of teleology in nature. In fact, Darwinian theory depends on teleology in nature. If natural causes were not consistent and mostly directed, there would be no consistency to evolution at all. There is no evolution in chaos. Without teleology, chance and necessity would be all chance and no necessity, and therefore no evolution.

Actually, atheists can't explain chance either. Chance is the accidental conjunction of teleological processes. A car accident may be by chance, but it necessarily occurs in a matrix of purpose and teleology: the cars move in accordance with laws of physics, the road was constructed according to plans, the cars are driven purposefully by drivers, etc. There can be no chance unless there is a system of regularity in which chance can occur. Chance by itself can't happen; it is, by definition, the accidental conjunction of teleological processes. Both chance and necessity point to God. Pure chance, without a framework of regularity, is unintelligible.

From the perspective of the Fifth Way, necessity permeates nature. But it is specification, rather than complexity, that characterizes necessity and points to God's existence. The specification need not be complex. The simplest motion of an inanimate object (a raindrop falling to the ground) is proof of God's existence.

Teleology is foresight, the ability of a natural process to proceed to an end not yet realized. Yet the end must be realized, in some real sense, for final cause to be a cause. The foresight inherent in teleology is in Gods Mind, and it is via His manifest foresight in teleology that we see Him at work all around us.

This rules out the God of deism. The God of the Fifth Way is no watchmaker who winds up the world and walks away. He is at work ceaselessly and everywhere. The evidence for a Designer is as clear in the simplest inanimate process as it is in the most complex living organism. The elegant, intricate complexity of cellular metabolism is certainly a manifestation of God's glory; the beauty of biological processes is breathtaking. But the proof of His existence is in every movement in nature: in every detail of cellular metabolism, of course, but also in every raindrop and in every blown grain of dust.

Photo: An oak tree, by Abrget47j [CC BY-SA 3.0], via Wikimedia Commons.


Aquinas' Fifth Way: The Proof from Specification - Discovery Institute

Elon Musk warns ‘advanced A.I.’ will soon manipulate social media – Big Think

Twitter bots in 2019 can perform some basic functions, like tweeting content, retweeting, following other users, quoting other users, liking tweets and even sending direct messages. But even though bots on Twitter and other social media seem to be getting smarter than previous iterations, these A.I. systems are still relatively unsophisticated in terms of how well they can manipulate social discourse.

But it's only a matter of time before more advanced A.I. begins manipulating the conversation on a large scale, according to Tesla and SpaceX CEO Elon Musk.

"If advanced A.I. (beyond basic bots) hasn't been applied to manipulate social media, it won't be long before it is," Musk tweeted on Thursday morning.

It's unclear exactly what Musk is referring to by "advanced A.I.," but his tweet came just hours after The New York Times published an article outlining a study showing that at least 70 countries have experienced digital disinformation campaigns over the past two years.

"In recent years, governments have used 'cyber troops' to shape public opinion, including networks of bots to amplify a message, groups of "trolls" to harass political dissidents or journalists, and scores of fake social media accounts to misrepresent how many people engaged with an issue," Davey Alba and Adam Satariano wrote for the Times. "The tactics are no longer limited to large countries. Smaller states can now easily set up internet influence operations as well."

Musk followed up his tweet by saying that "anonymous bot swarms" (presumably referring to coordinated activity by a large number of social media bots) should be investigated.

"If they're evolving rapidly, something's up," he tweeted.

Musk has long predicted a gloomy future with AI. In 2017, he told staff at Neuralink (Musk's company that's developing an implantable brain-computer interface) that he thinks there's about "a five to 10 percent chance" of making artificial intelligence safe. In the documentary "Do You Trust Your Computer?", Musk warned of the dangers of a single organization someday developing superintelligence.

"The least scary future I can think of is one where we have at least democratized AI because if one company or small group of people manages to develop godlike digital superintelligence, they could take over the world," Musk said.

"At least when there's an evil dictator, that human is going to die. But for an AI, there would be no death. It would live forever. And then you'd have an immortal dictator from which we can never escape."



Superintelligence – Wikipedia

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The program Fritz falls short of superintelligence even though it is much better than humans at chess because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity to, either as a single being or as a new species, become much more powerful than humans, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself, a feature called "recursive self-improvement". It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)." Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind that's run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
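Bostrom's "seven orders of magnitude" figure follows directly from the numbers quoted above. A quick arithmetic check (the 200 Hz, 2 GHz, and 120 m/s values are taken from the passage, not measured here):

```python
import math

neuron_hz = 200   # peak firing rate of a biological neuron, per Bostrom
cpu_hz = 2e9      # clock rate of a ~2 GHz microprocessor

speed_ratio = cpu_hz / neuron_hz
print(speed_ratio)              # 10000000.0, i.e. 10^7
print(math.log10(speed_ratio))  # 7.0 -- the "seven orders of magnitude"

# The signalling-speed gap is nearly as large: 120 m/s axons vs. light speed.
print(round(math.log10(3e8 / 120), 1))  # 6.4 orders of magnitude
```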

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[9] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[12] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAOIs, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
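The gains Bostrom cites come from order statistics: keeping the best of N draws from a distribution. A toy Monte Carlo sketch of that principle, assuming a normally distributed trait with SD 15 and (unrealistically) perfect prediction of each embryo's value; Bostrom's smaller published numbers account for imperfect genetic prediction, so this toy model overstates the gains:

```python
import random
import statistics

random.seed(0)
SD = 15  # IQ-style standard deviation

def mean_gain(n_embryos, trials=20000):
    """Average gain from keeping the best of n_embryos draws from N(0, SD)."""
    return statistics.mean(
        max(random.gauss(0, SD) for _ in range(n_embryos))
        for _ in range(trials)
    )

print(round(mean_gain(2), 1))           # close to 15/sqrt(pi), about 8.5
print(round(mean_gain(1000, 2000), 1))  # roughly 49 under perfect prediction
```

The best-of-1000 figure is roughly double Bostrom's 24.3 precisely because this sketch assumes selection sees the true trait value rather than a noisy genetic prediction.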

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain-computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[17]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.
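The gap between the survey's medians and means (e.g. 2070 vs. 2168 for the 90%-confidence answers) is the signature of a heavy right tail: a minority of far-future answers drags the mean upward while the median barely moves. A synthetic illustration of that effect (these are made-up numbers, not the survey's raw responses):

```python
import statistics

# Fabricated toy responses: most cluster mid-century, a few are far-future.
answers = [2035, 2040, 2045, 2050, 2050, 2055, 2060, 2075, 2150, 2400]

print(statistics.median(answers))        # 2052.5 -- robust to the outliers
print(round(statistics.mean(answers)))   # 2096 -- pulled up by the long tail
```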

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

Learning computers that rapidly become superintelligent may take unforeseen actions or robots might out-compete humanity (one potential technological singularity scenario).[21] Researchers have argued that, by way of an "intelligence explosion" sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.[22]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[23]

Eliezer Yudkowsky explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[24]

This presents the AI control problem: how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not designing control right "the first time" is that a misprogrammed superintelligence might rationally decide to "take over the world" and refuse to permit its programmers to modify it once it has been activated. Potential design strategies include "capability control" (preventing an AI from being able to pursue harmful plans) and "motivational control" (building an AI that wants to be helpful).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.


Superintelligence: Paths, Dangers, Strategies – Wikipedia

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the Swedish philosopher Nick Bostrom from the University of Oxford. It argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists,[2] and the outcome could be an existential catastrophe for humans.[3]

Bostrom's book has been translated into many languages and is available as an audiobook.[1][4]

It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.

While the ultimate goals of superintelligences can vary greatly, a functional superintelligence will spontaneously generate, as natural subgoals, "instrumental goals" such as self-preservation and goal-content integrity, cognitive enhancement, and resource acquisition. For example, an agent whose sole final goal is to solve the Riemann hypothesis (a famous unsolved mathematical conjecture) could create and act upon a subgoal of transforming the entire Earth into some form of computronium (hypothetical "programmable matter") to assist in the calculation. The superintelligence would proactively resist any outside attempts to turn the superintelligence off or otherwise prevent its subgoal completion. In order to prevent such an existential catastrophe, it might be necessary to successfully solve the "AI control problem" for the first superintelligence. The solution might involve instilling the superintelligence with goals that are compatible with human survival and well-being. Solving the control problem is surprisingly difficult because most goals, when translated into machine-implementable code, lead to unforeseen and undesirable consequences.

The book ranked #17 on the New York Times list of best selling science books for August 2014.[5] In the same month, business magnate Elon Musk made headlines by agreeing with the book that artificial intelligence is potentially more dangerous than nuclear weapons.[6][7][8] Bostrom's work on superintelligence has also influenced Bill Gates's concern for the existential risks facing humanity over the coming century.[9][10] In a March 2015 interview with Baidu's CEO, Robin Li, Gates said that he would "highly recommend" Superintelligence.[11]

The science editor of the Financial Times found that Bostrom's writing "sometimes veers into opaque language that betrays his background as a philosophy professor" but convincingly demonstrates that the risk from superintelligence is large enough that society should start thinking now about ways to endow future machine intelligence with positive values.[2] A review in The Guardian pointed out that "even the most sophisticated machines created so far are intelligent in only a limited sense" and that "expectations that AI would soon overtake human intelligence were first dashed in the 1960s", but finds common ground with Bostrom in advising that "one would be ill-advised to dismiss the possibility altogether".[3]

Some of Bostrom's colleagues suggest that nuclear war presents a greater threat to humanity than superintelligence, as does the future prospect of the weaponisation of nanotechnology and biotechnology.[3] The Economist stated that "Bostrom is forced to spend much of the book discussing speculations built upon plausible conjecture... but the book is nonetheless valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote."[12] Ronald Bailey wrote in the libertarian Reason that Bostrom makes a strong case that solving the AI control problem is the "essential task of our age".[13] According to Tom Chivers of The Daily Telegraph, the book is difficult to read, but nonetheless rewarding.[14]


Global Risks Report 2017 – Reports – World Economic Forum

Every step forward in artificial intelligence (AI) challenges assumptions about what machines can do. Myriad opportunities for economic benefit have created a stable flow of investment into AI research and development, but with the opportunities come risks to decision-making, security and governance. Increasingly intelligent systems supplanting both blue- and white-collar employees are exposing the fault lines in our economic and social systems and requiring policy-makers to look for measures that will build resilience to the impact of automation.

Leading entrepreneurs and scientists are also concerned about how to engineer intelligent systems as these systems begin implicitly taking on social obligations and responsibilities, and several of them penned an Open Letter on Research Priorities for Robust and Beneficial Artificial Intelligence in late 2015.[1] Whether or not we are comfortable with AI may already be moot: more pertinent questions might be whether we can and ought to build trust in systems that can make decisions beyond human oversight that may have irreversible consequences.

By providing new information and improving decision-making through data-driven strategies, AI could potentially help to solve some of the complex global challenges of the 21st century, from climate change and resource utilization to the impact of population growth and healthcare issues. Start-ups specializing in AI applications received US$2.4 billion in venture capital funding globally in 2015 and more than US$1.5 billion in the first half of 2016.[2] Government programmes and existing technology companies add further billions (Figure 3.2.1). Leading players are not just hiring from universities, they are hiring the universities: Amazon, Google and Microsoft have moved to funding professorships and directly acquiring university researchers in the search for competitive advantage.[3]

Machine learning techniques are now revealing valuable patterns in large data sets and adding value to enterprises by tackling problems at a scale beyond human capability. For example, Stanford's computational pathologist (C-Path) has highlighted unnoticed indicators for breast cancer by analysing thousands of cellular features on hundreds of tumour images,[4] while DeepMind increased the power usage efficiency of Alphabet Inc.'s data centres by 15%.[5] AI applications can reduce costs and improve diagnostics with staggering speed and surprising creativity.

The generic term AI covers a wide range of capabilities and potential capabilities. Some serious thinkers fear that AI could one day pose an existential threat: a superintelligence might pursue goals that prove not to be aligned with the continued existence of humankind. Such fears relate to strong AI or artificial general intelligence (AGI), which would be the equivalent of human-level awareness, but which does not yet exist.[6] Current AI applications are forms of weak or narrow AI or artificial specialized intelligence (ASI); they are directed at solving specific problems or taking actions within a limited set of parameters, some of which may be unknown and must be discovered and learned.

Tasks such as trading stocks, writing sports summaries, flying military planes and keeping a car within its lane on the highway are now all within the domain of ASI. As ASI applications expand, so do the risks of these applications operating in unforeseeable ways or outside the control of humans.[7] The 2010 and 2015 stock market flash crashes illustrate how ASI applications can have unanticipated real-world impacts, while AlphaGo shows how ASI can surprise human experts with novel but effective tactics (Box 3.2.1). In combination with robotics, AI applications are already affecting employment and shaping risks related to social inequality.[8]

AI has great potential to augment human decision-making by countering cognitive biases and making rapid sense of extremely large data sets: at least one venture capital firm has already appointed an AI application to help determine its financial decisions.[9] Gradually removing human oversight can increase efficiency and is necessary for some applications, such as automated vehicles. However, there are dangers in coming to depend entirely on the decisions of AI systems when we do not fully understand how the systems are making those decisions.[10]

by Jean-Marc Rickli, Geneva Centre for Security Policy

One sector that saw the huge disruptive potential of AI from an early stage is the military. The weaponization of AI will represent a paradigm shift in the way wars are fought, with profound consequences for international security and stability. Serious investment in autonomous weapon systems (AWS) began a few years ago; in July 2016 the Pentagon's Defense Science Board published its first study on autonomy, but there is no consensus yet on how to regulate the development of these weapons.

The international community started to debate the emerging technology of lethal autonomous weapons systems (LAWS) in the framework of the United Nations Convention on Certain Conventional Weapons (CCW) in 2014. Yet, so far, states have not agreed on how to proceed. Those calling for a ban on AWS fear that human beings will be removed from the loop, leaving decisions on the use of lethal force to machines, with ramifications we do not yet understand.

There are lessons here from non-military applications of AI. Consider the example of AlphaGo, the AI Go-player created by Google's DeepMind division, which in March last year beat the world's second-best human player. Some of AlphaGo's moves puzzled observers, because they did not fit usual human patterns of play. DeepMind CEO Demis Hassabis explained the reason for this difference as follows: unlike humans, the AlphaGo program aims to maximize the probability of winning rather than optimizing margins. If this binary logic, in which the only thing that matters is winning while the margin of victory is irrelevant, were built into an autonomous weapons system, it would lead to the violation of the principle of proportionality, because the algorithm would see no difference between victories that required it to kill one adversary or 1,000.
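Hassabis's point can be made concrete with a toy decision problem (the moves and numbers below are invented for illustration, not actual AlphaGo evaluations): the move that maximizes expected margin is not the move that maximizes the probability of winning.

```python
# Toy decision problem: each move leads to equally likely outcomes, given as
# final score margins (positive = win, negative = loss).
moves = {
    "safe":       [1, 1, 2, 1],      # tiny margins, but always winning
    "aggressive": [20, 15, -5, 25],  # big margins, but sometimes loses
}

def win_probability(outcomes):
    return sum(1 for m in outcomes if m > 0) / len(outcomes)

def expected_margin(outcomes):
    return sum(outcomes) / len(outcomes)

# A margin-maximizer picks the flashy move; a win-probability maximizer
# (AlphaGo's criterion, per Hassabis) picks the safe one.
print(max(moves, key=lambda m: expected_margin(moves[m])))   # aggressive
print(max(moves, key=lambda m: win_probability(moves[m])))   # safe
```

An autonomous weapon built on the first criterion would treat a narrow "win" and an overwhelming one as interchangeable, which is exactly the proportionality concern raised above.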

Autonomous weapons systems will also have an impact on strategic stability. Since 1945, the global strategic balance has prioritized defensive systems, a priority that has been conducive to stability because it has deterred attacks. However, the strategy of choice for AWS will be based on swarming, in which an adversary's defence system is overwhelmed with a concentrated barrage of coordinated simultaneous attacks. This risks upsetting the global equilibrium by neutralizing the defence systems on which it is founded. This would lead to a very unstable international configuration, encouraging escalation and arms races and the replacement of deterrence by pre-emption.

We may already have passed the tipping point for prohibiting the development of these weapons. An arms race in autonomous weapons systems is very likely in the near future. The international community should tackle this issue with the utmost urgency and seriousness because, once the first fully autonomous weapons are deployed, it will be too late to go back.

In any complex and chaotic system, including AI systems, potential dangers include mismanagement, design vulnerabilities, accidents and unforeseen occurrences.11 These pose serious challenges to ensuring the security and safety of individuals, governments and enterprises. It may be tolerable for a bug to cause an AI mobile phone application to freeze or misunderstand a request, for example, but when an AI weapons system or autonomous navigation system encounters a mistake in a line of code, the results could be lethal.

Machine-learning algorithms can also develop their own biases, depending on the data they analyse. For example, an experimental Twitter account run by an AI application ended up being taken down for making socially unacceptable remarks;12 search engine algorithms have also come under fire for undesirable race-related results.13 Decision-making that is either fully or partially dependent on AI systems will need to consider management protocols to avoid or remedy such outcomes.
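How a learner inherits bias from its training data can be shown with a deliberately minimal sketch. The data and the "model" below are invented toys (a per-group majority-label rule, far simpler than any production system), but the mechanism is the same: if historical data encodes a skewed pattern, the model faithfully reproduces it as a prediction.

```python
# Minimal sketch of a learner inheriting bias from skewed training data.
# Toy data and toy model; real systems fail in subtler ways.
from collections import Counter

def train_majority(examples):
    """Learn the most common label for each group in the training data."""
    by_group = {}
    for group, label in examples:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

# Historical records in which group "b" was mostly labelled "deny":
training = [("a", "approve")] * 90 + [("a", "deny")] * 10 \
         + [("b", "approve")] * 20 + [("b", "deny")] * 80

model = train_majority(training)
print(model)  # {'a': 'approve', 'b': 'deny'} -- the skew becomes the rule
```

Nothing in the algorithm is "biased"; the bias lives entirely in the data it was handed, which is why the management protocols mentioned above focus on auditing inputs and outcomes rather than code alone.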

AI systems in the Cloud are of particular concern because of issues of control and governance. Some experts propose that robust AI systems should run in a sandbox - an experimental space disconnected from external systems - but some cognitive services already depend on their connection to the internet. The AI legal assistant ROSS, for example, must have access to electronically available databases. IBM's Watson accesses electronic journals, delivers its services, and even teaches a university course via the internet.14 The data extraction program TextRunner is successful precisely because it is left to explore the web and draw its own conclusions unsupervised.15

On the other hand, AI can help solve cybersecurity challenges. Currently, AI applications are used to spot cyberattacks and potential fraud in internet transactions. Whether AI applications are better at learning to attack or defend will determine whether online systems become more secure or more prone to successful cyberattacks.16 AI systems are already analysing vast amounts of data from phone applications and wearables; as sensors find their way into our appliances and clothing, maintaining security over our data and our accounts will become an even more crucial priority. In the physical world, AI systems are also being used in surveillance and monitoring - analysing video and sound to spot crime, help with anti-terrorism and report unusual activity.17 How much they will come to reduce overall privacy is a real concern.

So far, AI development has occurred in the absence of almost any regulatory environment.18 As AI systems inhabit more technologies in daily life, calls for regulatory guidelines will increase. But can AI systems be sufficiently governed? Such governance would require multiple layers that include ethical standards, normative expectations of AI applications, implementation scenarios, and assessments of responsibility and accountability for actions taken by or on behalf of an autonomous AI system.

AI research and development presents issues that complicate standard approaches to governance, and can take place outside of traditional institutional frameworks, with both people and machines and in various locations. The developments in AI may not be well understood by policy-makers who do not have specialized knowledge of the field; and they may involve technologies that are not an issue on their own but that collectively present emergent properties that require attention.19 It would be difficult to regulate such things before they happen, and any unforeseeable consequences or control issues may be beyond governance once they occur (Box 3.2.2).

One option could be to regulate the technologies through which the systems work. For example, in response to the development of automated transportation that will require AI systems, the U.S. Department of Transportation has issued a 116-page policy guide.20 Although the policy guide does not address AI applications directly, it does put in place guidance frameworks for the developers of automated vehicles in terms of safety, control and testing.

Scholars, philosophers, futurists and tech enthusiasts vary in their predictions for the advent of artificial general intelligence (AGI), with timelines ranging from the 2030s to never. However, given the possibility of an AGI working out how to improve itself into a superintelligence, it may be prudent or even morally obligatory to consider potentially feasible scenarios, and how serious or even existential threats may be avoided.

The creation of AGI may depend on converging technologies and hybrid platforms. Much of human intelligence is developed by the use of a body and the occupation of physical space, and robotics provides such embodiment for experimental and exploratory AI applications. Proof-of-concept for muscle and brain-computer interfaces has already been established: Massachusetts Institute of Technology (MIT) scientists have shown that memories can be encoded in silicon,21 and Japanese researchers have used electroencephalogram (EEG) patterns to predict the next syllable someone will say with up to 90% accuracy, which may lead to the ability to control machines simply by thinking.22

Superintelligence could potentially also be achieved by augmenting human intelligence through smart systems, biotech, and robotics rather than by being embodied in a computational or robotic form.23 Potential barriers to integrating humans with intelligence-augmenting technology include people's cognitive load, physical acceptance and concepts of personal identity.24 Should these challenges be overcome, keeping watch over the state of converging technologies will become an ever more important task as AI capabilities grow and fuse with other technologies and organisms.

Advances in computing technologies such as quantum computing, parallel systems, and neurosynaptic computing research may create new opportunities for AI applications or unleash new unforeseen behaviours in computing systems.25 New computing technologies are already having an impact: for instance, IBM's TrueNorth chip - with a design inspired by the human brain and built for exascale computing - already has contracts from Lawrence Livermore National Laboratory in California to work on nuclear weapons security.26 While adding great benefit to scenario modelling today, the possibility of a superintelligence could turn this into a risk.

by Stuart Russell, University of California, Berkeley

Few in the field believe that there are intrinsic limits to machine intelligence, and even fewer argue for self-imposed limits. Thus it is prudent to anticipate the possibility that machines will exceed human capabilities, as Alan Turing posited in 1951: "If a machine can think, it might think more intelligently than we do. [T]his new danger is certainly something which can give us anxiety."

So far, the most general approach to creating generally intelligent machines is to provide them with our desired objectives and with algorithms for finding ways to achieve those objectives. Unfortunately, we may not specify our objectives in such a complete and well-calibrated fashion that a machine cannot find an undesirable way to achieve them. This is known as the value alignment problem, or the King Midas problem. Turing suggested turning off the power at strategic moments as a possible solution to discovering that a machine is misaligned with our true objectives, but a superintelligent machine is likely to have taken steps to prevent interruptions to its power supply.

How can we define problems in such a way that any solution the machine finds will be provably beneficial? One idea is to give a machine the objective of maximizing the true human objective, but without initially specifying that true objective: the machine has to gradually resolve its uncertainty by observing human actions, which reveal information about the true objective. This uncertainty should avoid the single-minded and potentially catastrophic pursuit of a partial or erroneous objective. It might even persuade a machine to leave open the possibility of allowing itself to be switched off.
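The incentive described here can be reduced to a few lines of arithmetic. The following is a toy version of the argument (invented numbers, loosely in the spirit of Hadfield-Menell et al.'s "off-switch" formulation, which the text does not name): a machine that is uncertain about the true value of an action, and that expects the human to veto genuinely harmful actions, can prefer deferring to acting.

```python
# Toy "off-switch" argument: a machine uncertain about the human's true
# objective can prefer to let itself be switched off. Invented numbers.

# The machine believes the action's true value U is one of these,
# each equally likely (one possibility is badly negative):
possible_values = [-10.0, 1.0, 2.0]

# Option 1: act immediately. Expected value is the mean over its beliefs.
act = sum(possible_values) / len(possible_values)

# Option 2: defer to the human, who knows U and permits the action only
# when U > 0; otherwise the machine is switched off, yielding 0.
defer = sum(max(u, 0.0) for u in possible_values) / len(possible_values)

print(act)    # -2.33... : acting blindly risks the catastrophic case
print(defer)  #  1.0     : deferring dominates, so the off switch is welcome
```

The machine's own uncertainty is what makes the human's veto valuable to it; with a fully specified (and possibly wrong) objective, the same calculation would tell it to disable the switch.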

There are complications: humans are irrational, inconsistent, weak-willed, computationally limited and heterogeneous, all of which conspire to make learning about human values from human behaviour a difficult (and perhaps not totally desirable) enterprise. However, these ideas provide a glimmer of hope that an engineering discipline can be developed around provably beneficial systems, allowing a safe way forward for AI. Near-term developments such as intelligent personal assistants and domestic robots will provide opportunities to develop incentives for AI systems to learn value alignment: assistants that book employees into US$20,000-a-night suites and robots that cook the cat for the family dinner are unlikely to prove popular.

Both existing AI systems and the plausibility of AGI demand mature consideration. Major firms such as Microsoft, Google, IBM, Facebook and Amazon have formed the Partnership on Artificial Intelligence to Benefit People and Society to focus on ethical issues and helping the public better understand AI.27 AI will become ever more integrated into daily life as businesses employ it in applications to provide interactive digital interfaces and services, increase efficiencies and lower costs.28 Superintelligent systems remain, for now, only a theoretical threat, but artificial intelligence is here to stay and it makes sense to see whether it can help us to create a better future. To ensure that AI stays within the boundaries that we set for it, we must continue to grapple with building trust in systems that will transform our social, political and business environments, make decisions for us, and become an indispensable faculty for interpreting the world around us.

Chapter 3.2 was contributed by Nicholas Davis, World Economic Forum, and Thomas Philbeck, World Economic Forum.

Armstrong, S. 2014. Smarter than Us: The Rise of Machine Intelligence. Berkeley, CA: Machine Intelligence Research Institute.

Bloomberg. 2016. Boston Marathon Security: Can A.I. Predict Crimes? Bloomberg News, Video, 21 April 2016. http://www.bloomberg.com/news/videos/b/d260fb95-751b-43d5-ab8d-26ca87fa8b83

Bostrom, N. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

CB Insights. 2016. Artificial intelligence explodes: New deal activity record for AI startups. Blog, 20 June 2016. https://www.cbinsights.com/blog/artificial-intelligence-funding-trends/

Chiel, E. 2016. Black teenagers vs. white teenagers: Why Googles algorithm displays racist results. Fusion, 10 June 2016. http://fusion.net/story/312527/google-image-search-algorithm-three-black-teenagers-vs-three-white-teenagers/

Clark, J. 2016. Google cuts its giant electricity bill with deepmind-powered AI. Bloomberg Technology, 19 July 2016. http://www.bloomberg.com/news/articles/2016-07-19/google-cuts-its-giant-electricity-bill-with-deepmind-powered-ai

Cohen, J. 2013. Memory implants: A maverick neuroscientist believes he has deciphered the code by which the brain forms long-term memories. MIT Technology Review. https://www.technologyreview.com/s/513681/memory-implants/

Frey, C. B. and M. A. Osborne. 2015. Technology at work: The future of innovation and employment. Citi GPS: Global Perspectives & Solutions, February 2015. http://www.oxfordmartin.ox.ac.uk/downloads/reports/Citi_GPS_Technology_Work.pdf

Hern, A. 2016. Partnership on AI formed by Google, Facebook, Amazon, IBM and Microsoft. The Guardian Online, 28 September 2016. https://www.theguardian.com/technology/2016/sep/28/google-facebook-amazon-ibm-microsoft-partnership-on-ai-tech-firms

Hunt, E. 2016. Tay, Microsofts AI chatbot, gets a crash course in racism from Twitter. The Guardian, 24 March 2016. https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter

Kelly, A. 2016. Will Artificial Intelligence read your mind? Scientific research analyzes brainwaves to predict words before you speak. iDigital Times, 9 January 2016. http://www.idigitaltimes.com/will-artificial-intelligence-read-your-mind-scientific-research-analyzes-brainwaves-502730

Kime, B. 2016. 3 Chatbots to deploy in your business. VentureBeat, 1 October 2016. http://venturebeat.com/2016/10/01/3-chatbots-to-deploy-in-your-business/

Lawrence Livermore National Laboratory. 2016. Lawrence Livermore and IBM collaborate to build new brain-inspired supercomputer, Press release, 29 March 2016. https://www.llnl.gov/news/lawrence-livermore-and-ibm-collaborate-build-new-brain-inspired-supercomputer

Maderer, J. 2016. Artificial Intelligence course creates AI teaching assistant. Georgia Tech News Center, 9 May 2016. http://www.news.gatech.edu/2016/05/09/artificial-intelligence-course-creates-ai-teaching-assistant

Martin, M. 2012. C-Path: Updating the art of pathology. Journal of the National Cancer Institute 104 (16): 1202-04. http://jnci.oxfordjournals.org/content/104/16/1202.full

Mizroch, A. 2015. Artificial-intelligence experts are in high demand. Wall Street Journal Online, 1 May 2015. http://www.wsj.com/articles/artificial-intelligence-experts-are-in-high-demand-1430472782

Russell, S., D. Dewey, and M. Tegmark. 2015. Research priorities for a robust and beneficial artificial intelligence. AI Magazine Winter 2015: 105-14.

Scherer, M. U. 2016. Regulating Artificial Intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology 29 (2): 354-98.

Sherpany. 2016. Artificial Intelligence: Bringing machines into the boardroom, 21 April 2016. https://www.sherpany.com/en/blog/2016/04/21/artificial-intelligence-bringing-machines-boardroom/

Talbot, D. 2009. Extracting meaning from millions of pages. MIT Technology Review, 10 June 2009. https://www.technologyreview.com/s/413767/extracting-meaning-from-millions-of-pages/

Turing, A. M. 1951. Can digital machines think? Lecture broadcast on BBC Third Programme; typescript at turingarchive.org

U.S. Department of Transportation. 2016. Federal Automated Vehicles Policy September 2016. Washington, DC: U.S. Department of Transportation. https://www.transportation.gov/AV/federal-automated-vehicles-policy-september-2016

Wallach, W. 2015. A Dangerous Master. New York: Basic Books.

Yirka, B. 2016. Researchers create organic nanowire synaptic transistors that emulate the working principles of biological synapses. TechXplore, 20 June 2016. https://techxplore.com/news/2016-06-nanowire-synaptic-transistors-emulate-principles.html


Global Risks Report 2017 - Reports - World Economic Forum


The Artificial Intelligence Revolution: Part 1 – Wait But Why


Note: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what's happening in the world of AI is not just an important topic, but by far THE most important topic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situation and why it matters so much. Not shockingly, that became outrageously long, so I broke it into two parts. This is Part 1; Part 2 is here.

_______________

"We are on the edge of change comparable to the rise of human life on Earth." - Vernor Vinge

What does it feel like to stand here?

It seems like a pretty intense place to be standing - but then you have to remember something about what it's like to stand on a time graph: you can't see what's to your right. So here's how it actually feels to stand there:

Which probably feels pretty normal

_______________

Imagine taking a time machine back to 1750 - a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It's impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone's face and chat with them even though they're on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn't be surprising or shocking or even mind-blowing - those words aren't big enough. He might actually die.

But here's the interesting thing - if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he'd take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things - but he wouldn't die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he'd be impressed with how committed Europe turned out to be with that new imperialism fad, and he'd have to do some major revisions of his world map conception. But watching everyday life go by in 1750 - transportation, communication, etc. - definitely wouldn't make him die.

No, in order for the 1750 guy to have as much fun as we had with him, he'd have to go much farther back - maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world - from a time when humans were, more or less, just another animal species - saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being "inside," and their enormous mountain of collective, accumulated human knowledge and discovery - he'd likely die.

And then what if, after dying, he got jealous and wanted to do the same thing. If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he'd show the guy everything and the guy would be like, "Okay what's your point who cares." For the 12,000 BC guy to have the same fun, he'd have to go back over 100,000 years and get someone he could show fire and language to for the first time.

In order for someone to be transported into the future and die from the level of shock they'd experience, they have to go enough years ahead that a "die level of progress," or a Die Progress Unit (DPU), has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.

This pattern - human progress moving quicker and quicker as time goes on - is what futurist Ray Kurzweil calls human history's Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies - because they're more advanced. 19th century humanity knew more and had better technology than 15th century humanity, so it's no surprise that humanity made far more advances in the 19th century than in the 15th century - 15th century humanity was no match for 19th century humanity.1

This works on smaller scales too. The movie Back to the Future came out in 1985, and "the past" took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes - but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones - today's Marty McFly, a teenager born in the late 90s, would be much more out of place in 1985 than the movie's Marty McFly was in 1955.

This is for the same reason we just discussed - the Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985 - because the former was a more advanced world - so much more change happened in the most recent 30 years than in the prior 30.

So - advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?

Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000 - in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century's worth of progress happened between 2000 and 2014 and that another 20th century's worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century's worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.2
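The 1,000x figure falls out of simple arithmetic once you pick a growth rate. The parameterization below (the rate of progress doubling every decade) is my own invented assumption, chosen only because it roughly reproduces Kurzweil's claim, not a number from his work:

```python
# Back-of-the-envelope model of the Law of Accelerating Returns:
# assume the yearly rate of progress doubles every decade
# (an invented parameterization, used only for illustration).

def progress(start_year, end_year, doubling_years=10):
    """Total progress over [start_year, end_year): sum of yearly rates."""
    return sum(2 ** ((year - 1900) / doubling_years)
               for year in range(start_year, end_year))

c20 = progress(1900, 2000)  # 20th-century total
c21 = progress(2000, 2100)  # 21st-century total

print(round(c21 / c20))  # 1024 -- roughly the "1,000 20th centuries" claim
```

With a doubling every decade, each year's rate is exactly 2^10 = 1024 times the rate a century earlier, so the century totals differ by the same factor; the point is not the specific number but that any steadily compounding rate makes century-over-century comparisons explode.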

If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015 - i.e. the next DPU might only take a couple decades - and the world in 2050 might be so vastly different than today's world that we would barely recognize it.

This isn't science fiction. It's what many scientists smarter and more knowledgeable than you or I firmly believe - and if you look at history, it's what we should logically predict.

So then why, when you hear me say something like "the world 35 years from now might be totally unrecognizable," are you thinking, "Cool....but nahhhhhhh"? Three reasons we're skeptical of outlandish forecasts of the future:

1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It's most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They'd be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they're moving now.
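The size of the "straight line" mistake is easy to quantify on a toy curve. The numbers below are purely illustrative (a progress curve that doubles every 10 time units, an assumption of mine, not a measured trend): project the next 30 years by repeating the last 30, then compare with the exponential continuing.

```python
# The "straight line" mistake in numbers: extrapolating the next 30 years
# from the last 30, versus letting the exponential trend continue.
# Purely illustrative curve; not real data.

def level(t, doubling=10):
    """Cumulative progress at time t, doubling every `doubling` units."""
    return 2 ** (t / doubling)

now = 100
past_30 = level(now) - level(now - 30)        # progress we just witnessed
linear_forecast = level(now) + past_30        # "same again" projection
actual = level(now + 30)                      # the trend simply continuing

print(linear_forecast)  # 1920.0
print(actual)           # 8192.0 -- the linear guess is off by more than 4x
```

The gap widens without bound the further out you project, which is the whole point of the paragraph above: even a generous linear forecast undershoots a compounding process.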

2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn't totally smooth and uniform. Kurzweil explains that progress happens in S-curves:

An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases:

1. Slow growth (the early phase of exponential growth)
2. Rapid growth (the late, explosive phase of exponential growth)
3. A leveling off as the particular paradigm matures3

If you look only at very recent history, the part of the S-curve you're on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smart phones. That was Phase 2: the growth spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that's missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.
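The three phases of the S-curve correspond exactly to the behaviour of a logistic function, which is the standard mathematical shape for this kind of adoption curve. The midpoint and steepness below are arbitrary choices of mine for the sketch:

```python
# Sketch of the S-curve as a logistic function: slow start,
# explosive middle, levelling off. Parameters are arbitrary.
import math

def s_curve(t, midpoint=50.0, steepness=0.2):
    """Logistic curve scaled to 0..1: progress of one paradigm over time."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

def growth(t):
    """Per-step growth: how much the curve rises from t to t+1."""
    return s_curve(t + 1) - s_curve(t)

print(growth(10))  # phase 1: tiny gains (early exponential)
print(growth(50))  # phase 2: the growth spurt around the midpoint
print(growth(90))  # phase 3: tiny gains again as the paradigm matures
```

Sampling the curve only in phase 1 or phase 3 (the analogue of looking at 2008-2015 alone) gives a badly misleading estimate of the overall rate, while the long-run exponential is what you get when successive S-curves from new paradigms stack on top of each other.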

3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as "the way things happen." We're also limited by our imagination, which takes our experience and uses it to conjure future predictions - but often, what we know simply doesn't give us the tools to think accurately about the future.2 When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, "That's stupid - if there's one thing I know from history, it's that everybody dies." And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.

So while nahhhhh might feel right as you read this post, it's probably actually wrong. The fact is, if we're being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they'll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human - kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what's going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that's coming next.

_______________

If you're like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you've been hearing it mentioned by serious people, and you don't really quite get it.

There are three reasons a lot of people are confused about the term AI:

1) We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.

2) AI is a broad topic. It ranges from your phone's calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.

3) We use AI all the time in our daily lives, but we often don't realize it's AI. John McCarthy, who coined the term "Artificial Intelligence" in 1956, complained that "as soon as it works, no one calls it AI anymore."4 Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to "insisting that the Internet died in the dot-com bust of the early 2000s."5

So let's clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not - but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body - if it even has a body. For example, the software and data behind Siri is AI, the woman's voice we hear is a personification of that AI, and there's no robot involved at all.

Secondly, you've probably heard the term "singularity" or "technological singularity." This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It's been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don't apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology's intelligence exceeds our own - a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we'll be living in a whole new world. I found that many of today's AI thinkers have stopped using the term, and it's confusing anyway, so I won't use it much here (even though we'll be focusing on that idea throughout).

Finally, while there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI's caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There's AI that can beat the world chess champion in chess, but that's the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it'll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board: a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we're yet to do it. Professor Linda Gottfredson describes intelligence as "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience." AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Artificial Superintelligence ranges from a computer that's just a little smarter than a human to one that's trillions of times smarter, across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words "immortality" and "extinction" will both appear in these posts multiple times.

As of now, humans have conquered the lowest caliber of AI, ANI, in many ways, and it's everywhere. The AI Revolution is the road from ANI, through AGI, to ASI: a road we may or may not survive but that, either way, will change everything.

Let's take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples:

ANI systems as they are now aren't especially scary. At worst, a glitchy or badly programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash, when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).

But while ANI doesn't have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively harmless ANI as a precursor of the world-altering hurricane that's on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world's ANI systems "are like the amino acids in the early Earth's primordial ooze": the inanimate stuff of life that, one unexpected day, woke up.

Why It's So Hard

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down: all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.

What's interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you'd think they are. Build a computer that can multiply two ten-digit numbers in a split second? Incredibly easy. Build one that can look at a dog and answer whether it's a dog or a cat? Spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old's picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things, like calculus, financial market strategy, and language translation, are mind-numbingly easy for a computer, while easy things, like vision, motion, movement, and perception, are insanely hard for it. Or, as computer scientist Donald Knuth puts it, "AI has by now succeeded in doing essentially everything that requires 'thinking' but has failed to do most of what people and animals do 'without thinking.'"7
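The "easy for computers" half of that asymmetry is easy to demonstrate: the ten-digit multiplication is one line of Python and finishes in far less than a split second, while the dog-vs-cat side has consumed decades of research. The specific numbers below are just illustrative.

```python
import time

start = time.perf_counter()
product = 1234567890 * 9876543210  # two arbitrary ten-digit numbers
elapsed = time.perf_counter() - start

print(product)          # 12193263111263526900
print(elapsed < 0.001)  # True: well under a millisecond
```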

What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it's not that software is dumb for not being able to figure out the slanty-word recognition test when you sign up for a new account on a site; it's that your brain is super impressive for being able to.

On the other hand, multiplying big numbers or playing chess are new activities for biological creatures, and we haven't had any time to evolve a proficiency at them, so a computer doesn't need to work too hard to beat us. Think about it: which would you rather do, build a program that could multiply big numbers or one that could understand the essence of a B well enough that you could show it a B in any one of thousands of unpredictable fonts or handwriting styles and it could instantly know it was a B?

One fun example: when you look at this, you and a computer both can figure out that it's a rectangle with two distinct shades, alternating:

Tied so far. But if you pick up the black and reveal the whole image...

you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees, a variety of two-dimensional shapes in several different shades, which is actually what's there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray.8 And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is: a photo of an entirely black, 3-D rock:

Credit: Matthew Lloyd

And everything we just mentioned is still only taking in stagnant information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.

Daunting.

So how do we get there?

First Key to Creating AGI: Increasing Computational Power

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it'll need to equal the brain's raw computing capacity.

One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.

Ray Kurzweil came up with a shortcut: take someone's professional estimate for the cps of one structure and that structure's weight compared to that of the whole brain, and then multiply proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark: around 10^16, or 10 quadrillion, cps.
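The proportional-scaling shortcut is simple enough to sketch in a couple of lines. The region numbers below are invented purely for illustration (the text only gives the ~10^16 cps conclusion); what's real is the method: divide one region's estimated cps by that region's share of total brain mass.

```python
# Hypothetical illustration of Kurzweil's scaling shortcut. The inputs
# are made-up example values, not real neuroscience estimates.

def total_cps_estimate(region_cps, region_mass_fraction):
    """Extrapolate whole-brain cps from one region's estimate,
    assuming cps scales with the region's share of brain mass."""
    return region_cps / region_mass_fraction

# Suppose some region were estimated at 1e14 cps and ~1% of brain mass:
print(f"{total_cps_estimate(1e14, 0.01):.0e} cps")  # 1e+16, i.e. 10 quadrillion
```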

Currently, the world's fastest supercomputer, China's Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.

Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human level (10 quadrillion cps), that'll mean AGI could become a very real part of life.

Moore's Law is a historically reliable rule that the world's maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil's cps/$1,000 metric, we're currently at about 10 trillion cps/$1,000, right on pace with this graph's predicted trajectory:9

So the world's $1,000 computers are now beating the mouse brain, and they're at about a thousandth of human level. This doesn't sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.
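You can check that arithmetic yourself. The trillionth-to-thousandth progression quoted above is a 1,000x gain per decade, which works out to roughly one doubling per year (2^10 ≈ 1,024); the sketch below assumes that rate continues. It's a toy model of the trend, not a hardware forecast.

```python
import math

# Starting point (1e13 cps per $1,000 in 2015) is from the text; the
# one-year doubling time is inferred from the 1,000x-per-decade trend.

def year_reaching(target_cps, start_cps=1e13, start_year=2015, doubling_years=1.0):
    doublings = math.log2(target_cps / start_cps)
    return start_year + doublings * doubling_years

human_level = 1e16  # ~10 quadrillion cps
print(round(year_reaching(human_level)))  # 2025
```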

So on the hardware side, the raw power needed for AGI is technically available now, in China, and we'll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn't make a computer generally intelligent; the next question is, how do we bring human-level intelligence to all that power?

Second Key to Creating AGI: Making It Smart

This is the icky part. The truth is, no one really knows how to make it smart; we're still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there, and at some point, one of them will work. Here are the three most common strategies I came across:

This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can't do nearly as well as that kid, and then they finally decide "k fuck it, I'm just gonna copy that kid's answers." It makes sense: we're stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing; optimistic estimates say we can do this by 2030. Once we do that, we'll know all the secrets of how the brain runs so powerfully and efficiently, and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor "neurons," connected to each other with inputs and outputs, and it knows nothing, like an infant brain. The way it learns is it tries to do a task, say handwriting recognition, and at first, its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it's told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it's told it was wrong, those pathways' connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we're discovering ingenious new ways to take advantage of neural circuitry.
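That strengthen-on-success, weaken-on-failure loop can be sketched with a single artificial neuron. This is a minimal illustration, not a real handwriting recognizer: the "network" is one neuron with two input connections, and the task (learning logical AND) stands in for letter recognition.

```python
import random

random.seed(0)  # reproducible random starting connections ("infant brain")
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
lr = 0.1  # how strongly each piece of feedback adjusts a connection

# Task: learn logical AND from repeated trial and feedback.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(x):
    s = bias + weights[0] * x[0] + weights[1] * x[1]
    return 1 if s > 0 else 0

for _ in range(20):  # many rounds of trial and feedback
    for x, target in examples:
        error = target - predict(x)  # 0 if right; +1 or -1 if wrong
        # strengthen (or weaken) the connections that contributed
        weights[0] += lr * error * x[0]
        weights[1] += lr * error * x[1]
        bias += lr * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```

After a handful of feedback rounds, the initially random connections settle into values that get every example right, which is the same trial-and-feedback idea at toy scale.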

More extreme plagiarism involves a strategy called "whole brain emulation," where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We'd then have a computer officially capable of everything the brain is capable of; it would just need to learn and gather information. If engineers get really good, they'd be able to emulate a real brain with such exact accuracy that the brain's full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he'd probably be really excited about.

How far are we from achieving whole brain emulation? Well, so far, we've just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress: now that we've conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.

So if we decide the smart kid's test is too hard to copy, we can try to copy the way he studies for the tests instead.

Here's something we know. Building a computer as powerful as the brain is possible; our own brain's evolution is proof. And if the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird's wing-flapping motions; often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.

So how can we simulate evolution to build AGI? The method, called genetic algorithms, would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures perform by living life and are evaluated by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.
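A minimal version of that performance-and-evaluation loop fits in a short script. Here the "computers" are just bit strings, fitness is simply the count of 1s, and breeding merges half of each parent; all of those choices are illustrative stand-ins for evaluating and combining real programs.

```python
import random

random.seed(42)
GENES, POP, GENERATIONS = 20, 30, 40

def fitness(ind):  # the "evaluation" step: count the 1s
    return sum(ind)

def breed(a, b):   # merge half of each parent's "programming"
    child = a[:GENES // 2] + b[GENES // 2:]
    if random.random() < 0.1:          # occasional random mutation
        i = random.randrange(GENES)
        child[i] ^= 1
    return child

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]  # eliminate the less successful half
    children = [breed(random.choice(survivors), random.choice(survivors))
                for _ in range(POP - len(survivors))]
    population = survivors + children

best = max(fitness(ind) for ind in population)
print(best)  # best score climbs toward the maximum of 20
```

Because the fittest half survives each round, the best score never decreases, and over the generations the population drifts toward the all-1s "organism": artificial selection in a few dozen lines.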

The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.

But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly; it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Secondly, evolution doesn't aim for anything, including intelligence; sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence, like revamping the ways cells produce energy, while we can remove those extra burdens and use things like electricity. There's no doubt we'd be much, much faster than evolution, but it's still not clear whether we'll be able to improve upon evolution enough to make this a viable strategy.

This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.

The idea is that we'd build a computer whose two major skills would be doing research on AI and coding changes into itself, allowing it not only to learn but to improve its own architecture. We'd teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job: figuring out how to make themselves smarter. More on this later.

Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:

1) Exponential growth is intense, and what seems like a snail's pace of advancement can quickly race upwards; this GIF illustrates the concept nicely:

2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.

At some point, we'll have achieved AGI: computers with human-level general intelligence. Just a bunch of people and computers living together in equality.

Oh actually not at all.

The thing is, AGI with an identical level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:

Hardware:

Software:

AI, which will likely get to AGI by being programmed to self-improve, wouldn't see human-level intelligence as some important milestone (it's only a relevant marker from our point of view) and wouldn't have any reason to stop at our level. And given the advantages over us that even a human-intelligence-equivalent AGI would have, it's pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.

This may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we're aware of about any animal's intelligence is that it's far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:

So as AI zooms upward in intelligence toward us, we'll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity (Nick Bostrom uses the term "the village idiot"), we'll be like, "Oh wow, it's like a dumb human. Cute!" The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range, so just after hitting village idiot level and being declared AGI, it'll suddenly be smarter than Einstein and we won't know what hit us:


The Artificial Intelligence Revolution: Part 1 - Wait But Why

Superintelligence: From Chapter Eight of Films from the …

This concern would often come out in conversations around meals. I'd be sitting next to some engaging person, having what seemed like a normal conversation, when they'd ask, "So, do you believe in superintelligence?" As something of an agnostic, I'd either prevaricate or express some doubts as to the plausibility of the idea. In most cases, they'd then proceed to challenge any doubts that I might express and try to convert me to becoming a superintelligence believer. I sometimes had to remind myself that I was at a scientific meeting, not a religious convention.

Part of my problem with these conversations was that, despite respecting Bostrom's brilliance as a philosopher, I don't fully buy into his notion of superintelligence, and I suspect that many of my overzealous dining companions could spot this a mile off. I certainly agree that the trends in AI-based technologies suggest we are approaching a tipping point in areas like machine learning and natural language processing. And the convergence we're seeing between AI-based algorithms, novel processing architectures, and advances in neurotechnology is likely to lead to some stunning advances over the next few years. But I struggle with what seems to me to be a very human idea that narrowly defined intelligence and a particular type of power will lead to world domination.

Here, I freely admit that I may be wrong. And to be sure, we're seeing far more sophisticated ideas begin to emerge around what the future of AI might look like; physicist Max Tegmark, for one, outlines a compelling vision in his book Life 3.0. The problem is, though, that we're all looking into a crystal ball as we gaze into the future of AI, trying to make sense of shadows and portents that, to be honest, none of us really understand. When it comes to some of the more extreme imaginings of superintelligence, two things in particular worry me. One is the challenge we face in differentiating between what is imaginable and what is plausible when we think about the future. The other, looking back to chapter five and the movie Limitless, is how we define and understand intelligence in the first place.

***

With a creative imagination, it is certainly possible to envision a future where AI takes over the world and crushes humanity. This is the Skynet scenario of the Terminator movies, or the constraining virtual reality of The Matrix. But our technological capabilities remain light-years away from being able to create such futures, even if we do create machines that can design future generations of smarter machines. And it's not just our inability to write clever-enough algorithms that's holding us back. For human-like intelligence to emerge from machines, we'd first have to come up with radically different computing substrates and architectures. Our quaint, two-dimensional digital circuits are about as useful to superintelligence as the brain cells of a flatworm are to solving the unified theory of everything; it's a good start, but there's a long way to go.

Here, what is plausible, rather than simply imaginable, is vitally important for grounding conversations around what AI will and won't be able to do in the near future. Bostrom's ideas of superintelligence are intellectually fascinating, but they're currently scientifically implausible. On the other hand, Max Tegmark and others are beginning to develop ideas that have more of a ring of plausibility to them, while still painting a picture of a radically different future from the world we live in now (and in Tegmark's case, one where there is a clear pathway to strong AGI leading to a vastly better future). But in all of these cases, future AI scenarios depend on an understanding of intelligence that may end up being deceptive.


Amazon.com: Superintelligence: Paths, Dangers, Strategies …

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence and biological cognitive enhancement; and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
