Thinking Beyond Flesh and Bones with AI – Ghana Latest Football News, Live Scores, Results – Ghanasoccernet.com

"The best way to predict the future is to invent it," as the quote goes. If you are someone who is interested in discovering and inventing things, then artificial intelligence is the right domain for you. It will not only make your life interesting, but it will also let you make other people's lives simple and easy!

What does thinking beyond flesh and bones mean? Artificial intelligence is not just about inventing robots and replacing humans; it is also about taking the slog out of every hard activity. For example, AI is used in medicine, civil engineering, military services, machine learning, and other fields. Simply put, artificial intelligence enables computers or software to reason about how a person behaves. As a result, the field is vast, and you can try your hand at whichever lane seems alluring to you.

The ultimate goal of AI is to achieve human goals through computer programming! AI is about mimicking human intelligence with a computer program and a little help from data: the way humans think, act, and respond to problems.

One of the most significant applications of AI is Israel's recent military robots, designed to stand in for human soldiers. This is not only effective, but it also reduces the loss of life caused by each war, and the robots themselves are designed to minimize damage. A sensitive, but knowledgeable and useful, invention! The future of the world depends on how easily any work can be done, and that future is nothing less than artificial intelligence!

Now, let us see how many types of AI there are!

Artificial Narrow Intelligence (ANI)

ANI generally refers to designing a computer or machine to perform a single task with high intelligence. It understands only the individual task that it must perform efficiently. It is considered the most rudimentary form of AI.

Artificial Superintelligence (ASI)

Artificial superintelligence is intelligence more powerful and sophisticated than human intelligence. While human intelligence is considered the most capable and developed, superintelligence can surpass it.

It will be able to perform abstractions that are impossible for human minds even to conceive. The human brain, after all, is constrained to some billions of neurons.

AI has the ability to mimic human thought; ASI goes a step beyond and acquires a cognitive ability superior to that of humans.

Artificial General Intelligence (AGI)

As the name suggests, AGI is designed for general purposes. Its smartness can be applied to a variety of tasks, and it can learn and improve itself. It is as intelligent as a human brain and, unlike ANI, can improve its own performance.

E.g.: AlphaGo is currently used only to play the game Go, but its intelligence could be applied at various levels and in various fields.

Scope of AI

The global demand for experts with relevant AI knowledge has doubled in the past three years and will continue to increase. There are ever more openings in voice recognition, expert systems, AI-enabled equipment, and more.

Artificial intelligence is, in the end, the future. So, why not be someone willing to contribute to the future of the planet? In recent years, AI jobs have increased by almost 129%. In the United States alone, the demand for AI-related jobs runs as high as 4,000!

Well, to catch the lightning-fast opportunities in AI, you need a bachelor's degree in computer science, data science, information science, math, or a related field. If you are an undergraduate, you can more easily land a job in the AI domain with a reputed online certification course in AI. Doing this, you can earn anywhere between 600,000 and 1,000,000 rupees in India, or US$50,000-US$100,000 in the United States.

In this smart world, it's easy to find online certification courses. Some focus only on the simple foundations of AI, while others offer professional tracks. All you have to do is choose the lane you want to follow and start your route.

You would be glad to know that Intellipaat offers an industry-recognized AI course program, meticulously designed to industry standards and conducted by subject-matter experts. This will not only enhance your knowledge but also help you apply what you have learned in the field.

You need to master certain skills to shine in this field, such as programming and the fundamentals behind robotics, autonomous cars, and space research. You will also be required to gain special skills in mathematics, statistics, analytics, and engineering. Good communication skills are always appreciated if you aspire to the business side, where you must explain things clearly and get the right message to people.

Learners fascinated by a profession in artificial intelligence will discover numerous options in the field. Up-and-coming careers in AI can be pursued in a variety of environments, such as finance, government, private agencies, healthcare, the arts, research, agriculture, and more. The range of jobs and opportunities in AI is very wide.

See more here:

Thinking Beyond Flesh and Bones with AI - Ghana Latest Football News, Live Scores, Results - Ghanasoccernet.com

Liquid metal tendons could give robots the ability to heal themselves – Digital Trends

Since fans first clapped eyes on the T-1000, the shape-shifting antagonist from 1991's Terminator 2: Judgment Day, many people have been eagerly anticipating the day when liquid metal robots become a reality. And by "eagerly anticipating," we mean they have had the creeping sense that such a thing is a Skynet eventuality, so we might as well make the best of it.

Jump forward to the closing days of 2019 and, while robots haven't quite advanced to the level of the 2029 future sequences seen in T2, scientists are getting closer. In Japan, roboticists from the University of Tokyo's JSK Lab have created a prototype robot leg with a metal tendon "fuse" that's able to repair fractures. How does it do this? Simple: by autonomously melting itself and then reforming as a single piece. The work was presented at the recent 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

The self-healing module is composed of two halves that are connected via magnets and springs. Each half of the module is filled with an alloy with a low melting point of just 50 degrees Celsius (122 degrees Fahrenheit). When the fuse breaks, the cartridges heat up, melting the alloy and allowing the two halves to fuse together again. While the re-fused joints are not as strong as they were before the break, the researchers have observed that gently vibrating the joint during melting and reforming yields a joint with up to 90% of its original strength. This could be further optimized in the future.

It's still very early in the development process. But the ultimate ambition is to develop ways for robots to heal themselves, rather than having to rely on external tools to do so. Since roboticists regularly borrow from nature for biomimetic solutions to problems, the idea of robots that can heal like biological creatures makes a lot of sense.

Just like breakthroughs in endeavors like artificial muscles and continued research toward creating superintelligence, it takes us one step closer to the world envisioned in Terminator. Where's John "savior of all humanity" Connor when you need him?

Link:

Liquid metal tendons could give robots the ability to heal themselves - Digital Trends

NIU expert: 4 leaps in technology to expect in the 2020s | NIU – NIU Newsroom

DeKalb, Ill. Autopilot automobiles, wearable devices, and services such as Uber and Lyft: technological advances in the 2010s made headlines, and some made their way into our everyday lives.

So what should we expect from the roaring 2020s?

We put that question to NIU Professor David Gunkel, a communication technology expert and author of Robot Rights and How to Survive a Robot Invasion. Gunkel pointed to four areas where technology is poised to make an impact on the coming decade.

Robots: "By the mid-2020s, robots of one kind or another will be everywhere and doing virtually everything," Gunkel says. "This robot invasion will not transpire as we have imagined it in our science fiction, with a marauding army of evil-minded androids either descending from the heavens or rising up in revolt against their human masters. It will look less like Blade Runner, Terminator or Westworld and more like the Fall of Rome, as machines of various configurations and capabilities come to take up influential positions in our world through a slow but steady incursion."

Artificial Intelligence: Innovations in artificial intelligence, especially with deep-learning algorithms, have made great strides in the previous decade. The 2020s will see AI in everything, from our handheld mobile devices to self-driving vehicles. These will be very capable but highly specialized AIs. We are creating a world full of idiot savants that will control every aspect of our lives. This might actually be more interesting, and possibly more terrifying, than superintelligence.

Things that Talk: In 2018, Amazon put Alexa in the toilet when it teamed up with Kohler at the Consumer Electronics Show. Manufacturers of these digital voice assistants, which also include the likes of Siri, Google Assistant and Bixby, are currently involved in an arms race to dominate the voice-activated, screenless Internet of the future. By mid-decade, everything will be talking to us, which will dramatically change how we think about social interaction. But these assistants will also be listening to what we say and sharing all this personal data with their parent corporations.

The Empires Strike Back: This past year has seen unprecedented investment in AI ethics and governance. The 2020s will see an amplification of this effort as stakeholders in Europe, China and North America compete to dominate the AI policy and governance market. Europe might be the odds-on favorite, since it was first out of the starting block, but China and the U.S. are not far behind. The technology of AI might be global in scope and controlled by borderless multinationals, but tech policy and governance are still a matter of nation-states, and the 2020s will see increasing involvement as the empires strike back.

Media Contact: Tom Parisi

About NIU

Northern Illinois University is a student-centered, nationally recognized public research university, with expertise that benefits its region and spans the globe in a wide variety of fields, including the sciences, humanities, arts, business, engineering, education, health and law. Through its main campus in DeKalb, Illinois, and education centers for students and working professionals in Chicago, Hoffman Estates, Naperville, Oregon and Rockford, NIU offers more than 100 areas of study while serving a diverse and international student body.

Continued here:

NIU expert: 4 leaps in technology to expect in the 2020s | NIU - NIU Newsroom

Playing Tetris Shows That True AI Is Impossible – Walter Bradley Center for Natural and Artificial Intelligence

Hi there! I recently put together an electroencephalogram (EEG), or in normal words, a brain wave reader, so you can see what goes on inside my brain!

I received a kit from OpenBCI, a successful Kickstarter project to make inexpensive brain wave readers available to the masses. Here's what it looks like:

Yes, it looks like something Calvin and Hobbes would invent.

Here is how it looks on my head:

A number of electrodes are touching my scalp and a wire is connected to my ear. The layout on my head looks like the following schematic:

The EEG is measuring the voltage between different points on my scalp and my earlobe. The positions on my scalp are receiving a current from my brain while my earlobe acts as the ground. The EEG is essentially a multimeter for my brain.

Brain waves are generated by ions building up inside the neurons. Once the neurons reach capacity, they release the ions in a cascade across the brain. This leads to the wave effect.

So can I see any connection between my brain waves and what I'm consciously experiencing in my mind?

To test that, inspired by the EEG hacker blog, I generated a graphic known as a spectrogram of my brain waves across a set of activities.

The spectrogram shows the range of brainwave frequencies in my brain at a given point in time. In the following plots, the horizontal axis is time, and the vertical axis is frequency. There are some artifacts in the plots, such as a middle band and a big pink blotch, so don't take all patterns as significant. The important thing to note is the overall texture of the plot.

The greens and reds are low amplitude frequencies, and the blue and magenta are high amplitude frequencies, meaning those brain waves are stronger. The spectrogram is generated from the readings of the #1 electrode in the schematic above.
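For readers curious how such a plot is produced, here is a minimal NumPy-only sketch of a spectrogram computation. The 250 Hz sampling rate and the synthetic 10 Hz signal are assumptions for illustration, not data from my actual recordings:

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 250                      # samples per second (an assumed rate; boards vary)
t = np.arange(0, 60, 1 / fs)  # one minute of recording
# synthetic stand-in for one EEG channel: a 10 Hz rhythm plus noise
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

win, hop = 256, 128  # window length and step between windows, in samples
# slice the trace into overlapping windows and take each window's power spectrum
frames = np.array([eeg[i:i + win] for i in range(0, eeg.size - win, hop)])
power = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1)) ** 2
freqs = np.fft.rfftfreq(win, d=1 / fs)  # the spectrogram's frequency axis
# row k of power is the spectrum at time-step k; plot power.T for the spectrogram
```

Stronger brain waves show up as larger values in `power`, which is exactly the blue-and-magenta intensity described below.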

I performed three different activities to see how they affect the spectrogram. Results and code are provided at https://github.com/yters/eeg_tetris.

First, I just absentmindedly tapped the Enter key on my keyboard. I did not focus on anything in particular, just pressed Enter whenever I felt like it. This is the EEG spectrogram that random tapping generated:

Second, I played a game of Tetris at very slow speed, using a GitHub repo.

Here's a video of the game speed:

This is the corresponding spectrogram:

Finally, I played Tetris much faster, and the spectrogram looked like this:

You can watch a video of the game speed here:

The big difference is that, as my activity became cognitively more difficult, the spectrogram became more blue and magenta, meaning that my brain waves became stronger.

What does this mean? It means that, at least at a high level, I can measure how cognitively difficult a mental task is.

Another interesting thing is the direction of causality. The intensity of my mental processing brought about an observable brain state. The causality did not go in the other direction; the magenta brain state did not intensify my conscious processing.

So my subjective mental experience brought about a change in my physical brain. In other words, my consciousness has a causal impact on my physical processing unit, the brain.

This type of observation causes a problem for those hoping to duplicate human intelligence in a computer program. This Tetris EEG experiment shows that conscious thought is essential to human intelligence. So, until we make conscious computers, which is most likely never, we will not have computers that display human intelligence.

Update: Someone online suggested it might just be my facial muscle tension. So I tested out the idea by recording while I tensed my brow (where the electrode is placed). https://github.com/yters/eeg_tetris

The result looked no different from the tapping EEG, so I consider the "just facial tension" hypothesis falsified.

If you enjoyed this item, here are some of Eric Holloway's other reflections on human consciousness and computer intelligence:

No materialist theory of consciousness is plausible: all such theories either deny the very thing they are trying to explain, result in absurd scenarios, or end up requiring an immaterial intervention.

We need a better test for AI intelligence, better than Turing or Lovelace. The difficulty is that intelligence, like randomness, is mathematically undefinable.

and

Will artificial intelligence design artificial superintelligence? And then turn us all into super-geniuses, as some AI researchers hope? No, and here's why not.

Read the original:

Playing Tetris Shows That True AI Is Impossible - Walter Bradley Center for Natural and Artificial Intelligence

AI R&D is booming, but general intelligence is still out of reach – The Verge

Trying to get a handle on the progress of artificial intelligence is a daunting task, even for those enmeshed in the AI community. But the latest edition of the AI Index report (an annual rundown of machine learning data points, now in its third year) does a good job of confirming what you probably already suspected: the AI world is booming across a range of metrics covering research, education, and technical achievements.

The AI Index covers a lot of ground, so much so that its creators, which include institutions like Harvard, Stanford, and OpenAI, have also released two new tools just to sift through the information they sourced. One tool is for searching AI research papers, and the other is for investigating country-level data on research and investment.

Most of the 2019 report basically confirms the continuation of trends we've highlighted in previous years. But to save you from having to trudge through its 290 pages, here are some of the more interesting and pertinent points:

All this is impressive, but one big caveat applies: no matter how fast AI improves, it's never going to match the achievements accorded to it by pop culture and hyped headlines. This may seem pedantic or even obvious, but it's worth remembering that, while the world of artificial intelligence is booming, AI itself is still limited in some important ways.

The best demonstration of this comes from a timeline of human-level performance milestones featured in the AI Index report: a history of moments when AI has matched or surpassed human-level expertise.

The timeline starts in the 1990s, when programs first beat humans at checkers and chess, and accelerates with the recent machine learning boom, listing video games and board games where AI came, saw, and conquered (Go in 2016, Dota 2 in 2018, etc.). This is mixed with miscellaneous tasks like human-level classification of skin cancer images in 2017 and Chinese-to-English translation in 2018. (Many experts would take issue with that last achievement being included at all, and note that AI translation is still way behind humans.)

And while this list is impressive, it shouldn't lead you to believe that AI superintelligence is nigh.

For a start, the majority of these milestones come from defeating humans in video games and board games, domains that, because of their clear rules and easy simulation, are particularly amenable to AI training. Such training usually relies on AI agents sinking many lifetimes' worth of work into a single game, training hundreds of years in a solar day: a fact that highlights how quickly humans learn compared to computers.

Similarly, each achievement was set in a single domain. With very few exceptions, AI systems trained at one task can't transfer what they've learned to another. A superhuman StarCraft II bot would lose to a five-year-old playing chess. And while an AI might be able to spot breast cancer tumors as accurately as an oncologist, it can't do the same for lung cancer (let alone write a prescription or deliver a diagnosis). In other words: AI systems are single-use tools, not flexible intelligences that can stand in for humans.

But (and yes, there's another "but") that doesn't mean AI isn't incredibly useful. As this report shows, despite the limitations of machine learning, it continues to accelerate in terms of funding, interest, and technical achievements.

When thinking about AI's limitations and promises, it's good to remember the words of machine learning pioneer Andrew Ng: "If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future." We're just beginning to find out what happens when those seconds are added up.

Read the rest here:

AI R&D is booming, but general intelligence is still out of reach - The Verge

Doubting The AI Mystics: Dramatic Predictions About AI Obscure Its Concrete Benefits – Forbes

Digital Human Brain Covered with Networks

Artificial intelligence is advancing rapidly. In a few decades machines will achieve superintelligence and become self-improving. Soon after that happens we will launch a thousand ships into space. These probes will land on distant planets, moons, asteroids, and comets. Using AI and terabytes of code, they will then nanoassemble local particles into living organisms. Each probe will, in fact, contain the information needed to create an entire ecosystem. Thanks to AI and advanced biotechnology, the species in each place will be tailored to their particular plot of rock. People will thrive in low temperatures, dim light, high radiation, and weak gravity. Humanity will become an incredibly elastic concept. In time our distant progeny will build megastructures that surround stars and capture most of their energy. Then the power of entire galaxies will be harnessed. Then life and AI (long a common entity by this point) will construct a galaxy-sized computer. It will take a mind that large about a hundred thousand years to have a thought. But those thoughts will pierce the veil of reality. They will grasp things as they really are. All will be one. This is our destiny.

Then again, maybe not.

There are, of course, innumerable reasons to reject this fantastic tale out of hand. Here's a quick and dirty one built around Copernicus's discovery that we are not the center of the universe. Most times, places, people, and things are average. But if sentient beings from Earth are destined to spend eons multiplying and spreading across the heavens, then those of us alive today are special. We are among the very few of our kind to live in our cosmic infancy, confined in our planetary cradle. Because we probably are not special, we probably are not at an extreme tip of the human timeline; we're likely somewhere in the broad middle. Perhaps a hundred billion modern humans have existed, across a span of around 50,000 years. To claim in the teeth of these figures that our species is on the cusp of spending millions of years spreading trillions of individuals across this galaxy and others, you must engage in some wishful thinking. You must embrace the notion that we today are, in a sense, back at the center of the universe.

It is in any case more fashionable to speculate about imminent catastrophes. Technology again looms large. In the gray goo scenario, runaway self-replicating nanobots consume all of the Earth's biomass. Thinking along similar lines, philosopher Nick Bostrom imagines an AI-enhanced paperclip machine that, ruthlessly following its prime directive to make paperclips, liquidates mankind and converts the planet into a giant paperclip mill. Elon Musk, when he discusses this hypothetical, replaces paperclips with strawberries, so that he can worry about strawberry fields forever. What Bostrom and Musk are driving at is the fear that an advanced AI being will not share our values. We might accidentally give it a bad aim (e.g., paperclips at all costs). Or it might start setting its own aims. As Stephen Hawking noted shortly before his death, a machine that sees your intelligence the way you see a snail's might decide it has no need for you. Instead of using AI to colonize distant planets, we will use it to destroy ourselves.

When someone mentions AI these days, she is usually referring to deep neural networks. Such networks are far from the only form of AI, but they have been the source of most of the recent successes in the field. A deep neural network can recognize a complex pattern without relying on a large body of pre-set rules. It does this with algorithms that loosely mimic how a human brain tunes neural pathways.

The neurons, or units, in a deep neural network are layered. The first layer is an input layer that breaks incoming data into pieces. In a network that looks at black-and-white images, for instance, each of the first layer's units might link to a single pixel. Each input unit in this network will translate its pixel's grayscale brightness into a number. It might turn a white pixel into zero, a black pixel into one, and a gray pixel into some fraction in between. These numbers will then pass to the next layer of units. Each of the units there will generate a weighted sum of the values coming in from several of the previous layer's units. The next layer will do the same thing to that second layer, and so on through many layers more. The deeper the layer, the more pixels accounted for in each weighted sum.

An early-layer unit will produce a high weighted sum (it will "fire," like a neuron does) for a pattern as simple as a black pixel above a white pixel. A middle-layer unit will fire only when given a more complex pattern, like a line or a curve. An end-layer unit will fire only when the pattern (or, rather, the weighted sums of many other weighted sums) presented to it resembles a chair or a bonfire or a giraffe. At the end of the network is an output layer. If one of the units in this layer reliably fires only when the network has been fed an image with a giraffe in it, the network can be said to recognize giraffes.
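The layered weighted-sum picture above can be sketched in a few lines of NumPy. The layer sizes, random weights, and three output classes below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    # each unit forms a weighted sum of its inputs, then "fires"
    # through a squashing nonlinearity (a sigmoid here)
    return 1 / (1 + np.exp(-(W @ x + b)))

pixels = rng.random(64)  # a tiny 8x8 grayscale image, flattened: 0 = white, 1 = black
W1, b1 = rng.normal(size=(16, 64)), np.zeros(16)  # hidden layer: 16 units, 64 inputs each
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)    # output layer: 3 "object" units
hidden = layer(pixels, W1, b1)
scores = layer(hidden, W2, b2)  # one score per object the network might recognize
```

With untrained random weights the scores are meaningless; training, described next, is what tunes `W1` and `W2` so that the right output unit fires.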

A deep neural network is not born recognizing objects. The network just described would have to learn from pre-labeled examples. At first the network would produce random outputs. Each time it did, however, the correct answer for the labeled image would be run backward through the network. An algorithm would be used, in other words, to move the network's unit weighting functions closer to what they would need to be to recognize a given object. The more samples a network is fed, the more finely tuned and accurate it becomes.
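A toy, single-unit version of that feedback loop might look like this; the one-number "images," labels, and learning rate are invented for illustration, and real backpropagation applies the same nudge across many layers:

```python
import numpy as np

rng = np.random.default_rng(1)
# toy labeled data: one "pixel" per example, labeled 1 when it is dark enough
X = rng.random(200)
y = (X > 0.5).astype(float)

w, b, lr = 0.0, 0.0, 1.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * X + b)))  # forward pass: the unit's current guesses
    # run the error backward: nudge the weight and bias toward what they
    # would need to be to produce the correct labels (gradient descent)
    w -= lr * np.mean((p - y) * X)
    b -= lr * np.mean(p - y)

accuracy = np.mean((p > 0.5) == (y == 1))
```

After a couple of thousand such nudges the unit reliably fires for dark inputs, which is the same tuning process a full network performs across millions of weights.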

Some deep neural networks do not need spoon-fed examples. Say you want a program equipped with such networks to play chess. Give it the rules of the game, instruct it to seek points, and tell it that a checkmate is worth a hundred points. Then have it use a Monte Carlo method to randomly simulate games. Through trial and error, the program will stumble on moves that lead to a checkmate, and then on moves that lead to moves that lead to a checkmate, and so on. Over time the program will assign value to moves that simply tend to lead toward a checkmate. It will do this by constantly adjusting its networks' unit weighting functions; it will just use points instead of correctly labeled images. Once the networks are trained, the program can win discrete contests in much the way it learned to play in the first place. At each of its turns, the program will simulate games for each potential move it is considering. It will then choose the move that does best in the simulations. Thanks to constant fine-tuning, even these in-game simulations will get better and better.
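A minimal sketch of that Monte Carlo idea, using a far simpler game than chess (a take-1-or-2-stones game, chosen here for illustration) and random rollouts in place of any neural network:

```python
import random

def rollout(pile, to_move):
    # finish the game with random moves; return the player (0 or 1)
    # who takes the last stone and wins
    while True:
        pile -= random.randint(1, min(2, pile))
        if pile == 0:
            return to_move
        to_move = 1 - to_move

def best_move(pile, sims=3000):
    # Monte Carlo move selection: after each candidate move, simulate
    # many random games and keep the move with the best win rate for player 0
    rates = {}
    for take in (1, 2):
        if take > pile:
            continue
        if take == pile:
            rates[take] = 1.0  # taking the last stone wins outright
        else:
            wins = sum(rollout(pile - take, to_move=1) == 0 for _ in range(sims))
            rates[take] = wins / sims
    return max(rates, key=rates.get)
```

Here random playouts stand in for the learned value estimates; AlphaZero-style programs replace the random playouts with network-guided search, but the simulate-then-pick-the-best loop is the same.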

There is a chess program that operates more or less this way. It is called AlphaZero, and at present it is the best chess player on the planet. Unlike other chess supercomputers, it has never seen a game between humans. It learned to play by spending just a few hours simulating moves against itself. In 2017 it played a hundred games against Stockfish 8, one of the best chess programs to that point. Stockfish 8 examined 70 million moves per second. AlphaZero examined only 80,000. AlphaZero won 28 games, drew 72, and lost zero. It sometimes made baffling moves (to humans) that turned out to be masterstrokes. AlphaZero is not just a chess genius; it is an alien chess genius.

AlphaZero is at the cutting edge of AI, and it is very impressive. But its success is not a sign that AI will take us to the stars (or enslave us) any time soon. In Artificial Intelligence: A Guide for Thinking Humans, computer scientist Melanie Mitchell makes the case for AI sobriety. AI currently excels, she notes, only when there are clear rules, straightforward reward functions (for example, rewards for points gained or for winning), and relatively few possible actions (moves). Take IBM's Watson program. In 2011 it crushed the best human competitors on the quiz show Jeopardy!, leading IBM executives to declare that its successors would soon be making legal arguments and medical diagnoses. It has not worked out that way. Real-world questions and answers in real-world domains, Mitchell explains, have neither the simple short structure of Jeopardy! clues nor their well-defined responses.

Even in the narrow domains that most suit it, AI is brittle. A program that is a chess grandmaster cannot compete on a board with a slightly different configuration of squares or pieces. Unlike humans, Mitchell observes, none of these programs can transfer anything it has learned about one game to help it learn a different game. Because the programs cannot generalize or abstract from what they know, they can function only within the exact parameters in which they have been trained.

A related point is that current AI does not understand even basic aspects of how the world works. Consider this sentence: "The city council refused the demonstrators a permit because they feared violence." Who feared violence, the city council or the demonstrators? Using what she knows about bureaucrats, protestors, and riots, a human can spot at once that the fear resides in the city council. When AI-driven language-processing programs are asked this kind of question, however, their responses are little better than random guesses. When AI can't determine what "it" refers to in a sentence, Mitchell writes, quoting computer scientist Oren Etzioni, it's hard to believe that it will take over the world.

And it is not accurate to say, as many journalists do, that a program like AlphaZero "learns by itself." Humans must painstakingly decide how many layers a network should have, how much incoming data should link to each input unit, how fast data should aggregate as it passes through the layers, how much each unit weighting function should change in response to feedback, and much else. These settings and designs, adds Mitchell, must typically be decided anew for each task a network is trained on. It is hard to see nefarious unsupervised AI on the horizon.

The doom camp (AI will murder us) and the rapture camp (it will take us into the mind of God) share a common premise. Both groups extrapolate from past trends of exponential progress. Moore's law (which is not really a law, but an observation) says that the number of transistors we can fit on a computer chip doubles every two years or so. This enables computer processing speeds to increase at an exponential rate. The futurist Ray Kurzweil asserts that this trend of accelerating improvement stretches back to the emergence of life, the appearance of eukaryotic cells, and the Cambrian explosion. Looking forward, Kurzweil sees an AI singularity (the rise of self-improving machine superintelligence) on the trendline around 2045.

The political scientist Philip Tetlock has looked closely at whether experts are any good at predicting the future. The short answer is that they're terrible at it. But they're not hopeless. Borrowing an analogy from Isaiah Berlin, Tetlock divides thinkers into hedgehogs and foxes. A hedgehog knows one big thing, whereas a fox knows many small things. A hedgehog tries to fit what he sees into a sweeping theory. A fox is skeptical of such theories. He looks for facts that will show he is wrong. A hedgehog gives answers and says "moreover" a lot. A fox asks questions and says "however" a lot. Tetlock has found that foxes are better forecasters than hedgehogs. The more distant the subject of the prediction, the more the hedgehog's performance lags.

Using a theory of exponential growth to predict an impending AI singularity is classic hedgehog thinking. It is a bit like basing a prediction about human extinction on nothing more than the Copernican principle. Kurzweil's vision of the future is clever and provocative, but it is also hollow. It is almost as if huge obstacles to general AI will soon be overcome because the theory says so, rather than because the scientists on the ground will perform the necessary miracles. Gordon Moore himself acknowledges that his law will not hold much longer. (Quantum computers might pick up the baton. We'll see.) Regardless, increased processing capacity might be just a small piece of what's needed for the next big leaps in machine thinking.

When at Thanksgiving dinner you see Aunt Jane sigh after Uncle Bob tells a blue joke, you can form an understanding of what Jane thinks about what Bob thinks. For that matter, you get the joke, and you can imagine analogous jokes that would also annoy Jane. You can infer that your cousin Mary, who normally likes such jokes but is not laughing now, is probably still angry at Bob for spilling the gravy earlier. You know that although you can't see Bob's feet, they exist, under the table. No deep neural network can do any of this, and it's not at all clear that more layers or faster chips or larger training sets will close the gap. We probably need further advances that we have only just begun to contemplate. Enabling machines to form humanlike conceptual abstractions, Mitchell declares, is still an almost completely unsolved problem.

There has been some concern lately about the demise of the corporate laboratory. Mitchell gives the impression that, at least in the technology sector, the corporate basic-research division is alive and well. Over the course of her narrative, labs at Google, Microsoft, Facebook, and Uber make major breakthroughs in computer image recognition, decision making, and translation. In 2013, for example, researchers at Google trained a network to create vectors among a vast array of words. A vector set of this sort enables a language-processing program to define and use a word based on the other words with which it tends to appear. The researchers put their vector set online for public use. Google is in some ways the protagonist of Mitchell's story. It is now an "applied AI company," in Mitchell's words, that has placed machine thinking at the center of diverse products, services, and blue-sky research.
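The vector idea can be sketched with toy numbers: each word becomes a list of numbers, and words that appear in similar contexts end up with nearby lists. The three-dimensional vectors and the cosine-similarity measure below are illustrative assumptions; real vector sets like the one Google released have hundreds of learned dimensions:

```python
import math

# Toy word vectors (values invented for illustration; real embeddings
# such as word2vec's are learned from text and have hundreds of dimensions).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Similarity of two vectors by the angle between them (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Words used in similar contexts get similar vectors, so a program can
# treat "near in vector space" as a stand-in for "similar in meaning":
assert cosine(vectors["king"], vectors["queen"]) > cosine(vectors["king"], vectors["apple"])
```

This is the sense in which a vector set lets a program "define" a word by the company it keeps.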

Google has hired Ray Kurzweil, a move that might be taken as an implicit endorsement of his views. It is pleasing to think that many Google engineers earnestly want to bring on the singularity. The grand theory may be illusory, but the treasures produced in pursuit of it will be real.

More:

Doubting The AI Mystics: Dramatic Predictions About AI Obscure Its Concrete Benefits - Forbes

Melissa McCarthy And Ben Falcone Have Decided To Release ‘Superintelligence’ Via HBO Max Ins – Science Fiction


The new Melissa McCarthy sci-fi comedy Superintelligence will not open theatrically as planned. Instead, the comedian and her director husband, Ben Falcone, have decided to release the movie via the new HBO Max streaming service. Superintelligence had been slated for release during the busy holiday season, on December 20, but the pair chose a different route, at least in part to reach a wider audience.

McCarthy told Deadline:

It was actually Ben's idea, it came from the filmmaker himself. We had a release date, a full marketing plan, and I had all my press lined up. We were really ready to go. When the announcement came that HBO Max was really happening, Ben had this idea. And we thought, is this better? Different doesn't mean worse, and how are we watching films ourselves? To us, each movie is near and dear to our hearts. You just want people to see it and love it and you want them to feel good. Superintelligence at its core is, love wins, and people matter. I want that to get to as many people as it can. We need that today, and this seemed like the best way to do it. So no, [this wasn't imposed on us]. We were ready to go the other way, and we decided to make the detour.

Falcone added:

Honestly, you can release a mid-budget movie, and if we'd stayed in the theaters, we could have done incredibly well. There still are those examples of movies like this one that do. But for this movie, at this time, we felt like it was the best way to go. The PG rating, the fact they are starting this thing. All these streaming services are starting, and here, we are up there with Sesame Street, and Meryl Streep and JJ Abrams and Hugh Jackman and Jordan Peele. There are cool people doing this. So following my fear-based mentality, I thought it was the best move.

In addition to Superintelligence, HBO Max will also be offering Let Them All Talk from Steven Soderbergh and starring Meryl Streep, Greg Berlanti's Unpregnant, and Bad Education starring Hugh Jackman and Allison Janney, which HBO paid $17 million to acquire.

Carol Peters' life is turned upside down when she is selected for observation by the world's first superintelligence, a form of artificial intelligence that may or may not take over the world.

Superintelligence also stars Bobby Cannavale, Jean Smart, Michael Beach, Brian Tyree Henry, and the voice of James Corden as the titular Superintelligence. The release will now be delayed, as HBO Max isn't expected to launch until next spring.

Falcone and McCarthy are re-teaming for Thunder Force for Netflix, which also stars Octavia Spencer.


Melissa McCarthy & Director Ben Falcone On Choosing HBO Max Bow Instead Of WB Xmas Release For Superintelligence – Deadline

EXCLUSIVE: In a move that could become more common as major studios lean in heavily toward their streaming launches, the Ben Falcone-directed Melissa McCarthy-starrer Superintelligence has exited its December 20 theatrical release date to instead become the first Warner Bros Pictures Group film to premiere on HBO Max.

This comes before an HBO Max presentation on October 29 where it is expected that other projects might become part of a streamer launch slate that now will have Superintelligence; the Steven Soderbergh-directed Meryl Streep-starrer Let Them All Talk; the Greg Berlanti-produced YA novel adaptation Unpregnant; and, sooner or later, Bad Education, the Hugh Jackman/Allison Janney-starrer bought at Toronto for north of $17 million to bow on HBO. The original programming will be part of a service that launches with WarnerMedia's own library titles including Friends and The Big Bang Theory and third-party acquisitions including Sesame Street.

Amid the high-stakes battle for subscription streaming service launches by WarnerMedia, Disney, Comcast and Apple to go along with Netflix and Amazon, it isn't hard to see how the prospect of being among the first marquee titles on HBO Max is enticing. Especially when mid-budget comedies and dramas are plagued by the optics of eight-figure P&A spends and heavy scrutiny on opening-weekend box office grosses. That doesn't exist if you are launching on an OTT to a wide audience.

McCarthy and Falcone, long married and longtime frequent creative collaborators (they are now making their first Netflix film, Thunder Force), said the decision to move out of theaters and onto HBO Max was theirs, and that it wasn't imposed on them by WarnerMedia, Warner Bros or New Line, which developed the comedy and shepherded the film through production.

It was actually Ben's idea, it came from the filmmaker himself, McCarthy told Deadline. We had a release date, a full marketing plan, and I had all my press lined up. We were really ready to go. When the announcement came that HBO Max was really happening, Ben had this idea. And we thought, is this better? Different doesn't mean worse, and how are we watching films ourselves? To us, each movie is near and dear to our hearts. You just want people to see it and love it and you want them to feel good. Superintelligence at its core is, love wins, and people matter. I want that to get to as many people as it can. We need that today, and this seemed like the best way to do it. So no, [this wasn't imposed on us]. We were ready to go the other way, and we decided to make the detour.

When I brought up the perilous track many theatrical releases face these days, Falcone acknowledged it is something a filmmaker thinks about.

I pride myself on living a fear-based life, and that won't stop, Falcone joked. I don't exactly remember the question, but I wanted to make that clear to you, and to everyone. Honestly, you can release a mid-budget movie, and if we'd stayed in the theaters, we could have done incredibly well. There still are those examples of movies like this one that do. But for this movie, at this time, we felt like it was the best way to go. The PG rating, the fact they are starting this thing. All these streaming services are starting, and here, we are up there with Sesame Street, and Meryl Streep and JJ Abrams and Hugh Jackman and Jordan Peele. There are cool people doing this. So following my fear-based mentality, I thought it was the best move.

McCarthy and Falcone also felt a thematic fit as the film explores relationships in the backdrop of technological evolution. McCarthy's character finds herself getting messages from her TV, phone and microwave, and what she doesn't realize is she has been selected for observation by the world's first superintelligence, a form of artificial intelligence that is contemplating taking over the world. Steve Mallory wrote the script, James Corden voices the A.I., and Bobby Cannavale is playing her love interest.

We made the film for New Line and Warner Bros, and there are different challenges in the way people watch films, how and where they see them on different platforms, McCarthy said. We were all geared up to open theatrically, and Ben was the one who said, this would be better for HBO Max. What a way to reach a massive amount of people, and to be put in pretty amazing company. It seemed like a win-win. We have two young kids, and we thought about how we watch movies. Superintelligence is PG, and we thought about how we watch these movies with our kids. We still go to the theater, and we love going to the theater. I would cry if that ever went away. But we watch a lot of movies at home, and a lot of people do. This just seemed like an exciting new way to get it in front of a lot of people.

The move pushes the release of the film until sometime in the spring, and though a specific date hasn't been decided, the couple is really warming to the platform.

I urge you and all your friends to immediately subscribe to HBO Max, Falcone said.

Added McCarthy: Just give us your credit card, Mike, and we'd be happy to process it for you. And maybe give us your bank account numbers, too.


AMC Is Still In the Theater Business, But VOD Is a Funny Way of Showing It – IndieWire

Was I the only one who found it weird when AMC Theatres announced that it was getting into the streaming business with the launch of AMC Theatres On Demand? When it comes to places to buy and rent movies, we've got Apple, Amazon, Fandango, Vudu, Google Play, YouTube, and a few more that I don't need to remember because it's too many already.

I also thought it suggested some seriously mixed messaging, but maybe that was just me until I got a call from an NBC affiliate who wanted to do an interview about AMCs new streaming service. That seemed like a curious topic for local news; why were they interested? The answer: They wanted to know if it meant AMC was getting out of the theater business.

Of course, AMC is very much dedicated to theatrical business, but this is a funny way of showing it. Launching a platform for VOD transactions (something that runs counter to going out to the movies) is not what I'd expect a theater chain to worry about right now. There are far more pressing issues at hand, starting with the sacred cow of The Theatrical Experience.

It's the theme of every CinemaCon, repeated like a rosary as exhibitors and distributors take the Caesars Palace stage and talk about how worldwide audiences continue to share the primacy of the theatrical experience. However, that audience also has the option to stay home with their couches, pause buttons, and very large TV sets to watch an infinite number of entertainment options. By contrast, choosing to go to the theater means spending a lot of time, money, and effort on a very small selection of premium products. So whether you're going to the AMC to see Avengers, or to the Alamo to see Parasite, the act of going to the movies is now a bespoke experience.

But is that what chain theaters deliver? If you're Alamo with the fun beers on tap and no commercials and weird short films, sure. If you're a chain that inspired the ire of Edward Norton, who encountered low-light projection and crappy sound while preparing for the November 2 nationwide release of Motherless Brooklyn, that would be no. "It's the theater chains that are destroying the theatrical experience," he said. "Period, full-stop. No one else." Meanwhile, he sang the praises of Netflix as it represents "an unprecedented period of ripe opportunity for many more types of stories and voices to be heard." (Netflix is also looking at a long-term lease for the tony, single-screen Paris Theater in Manhattan. Oh, the irony.)

Netflix turned to the Paris, the Belasco, and the Egyptian as showcases for Oscar contenders Marriage Story and The Irishman because major chains won't let them book their theaters, but a much more significant threat to exhibitors is coming from inside the house. This week, Warners chose to move Melissa McCarthy's Christmas title, Superintelligence, out of theaters and on to its upcoming streaming platform, HBO Max, which is scheduled to launch sometime next spring.

Melissa McCarthy and Ben Falcone at the Warner Bros. Cinemacon presentation, April 2019


Speaking to Deadline, McCarthy spun it as all being the idea of her husband, director Ben Falcone:

It was actually Ben's idea, it came from the filmmaker himself. We had a release date, a full marketing plan, and I had all my press lined up. We were really ready to go. When the announcement came that HBO Max was really happening, Ben had this idea. And we thought, is this better? Different doesn't mean worse, and how are we watching films ourselves? To us, each movie is near and dear to our hearts. You just want people to see it and love it and you want them to feel good. Superintelligence at its core is, love wins, and people matter. I want that to get to as many people as it can. We need that today, and this seemed like the best way to do it. So no, [this wasn't imposed on us]. We were ready to go the other way, and we decided to make the detour.

Ultimately, it doesn't matter if the idea came from Falcone, or from the studio (sources told IndieWire that the film didn't test well). What matters is this is likely the first of many films in which a distributor weighs its options: Invest many millions and see what you get back from theatrical, or substantially fewer millions on a global streaming platform and see what you generate in subscribers? Studios may find themselves following in Netflix's footsteps and sorting their slates: These movies demand a theatrical investment, and these will do well on streaming.

Last May, when HBO Max was only a twinkle in the eye of current WarnerMedia CEO John Stankey, Warners released the McCarthy and Falcone comedy Life of the Party; at $53 million domestic, it wasn't a blockbuster. But with box office on track to fall nearly 6% behind 2018, exhibitors need every $53 million they can get. And with almost every major studio now tied to a streaming outlet, they now have a no-friction solution for theatrical releases that might struggle: What's dull on the big screen can look very shiny on the smaller ones. And, as McCarthy said: How are we watching films ourselves?

Increasingly, we're watching them at home. But probably not on AMC Theatres On Demand.

Here's some of the best work from this week on IndieWire:

Disney's Most Valuable Screenwriter Has Had Enough of the Strong Female Trope, by Kate Erbland

Linda Woolverton, the woman who brought Belle, Maleficent, and a billion-dollar animated movie to Disney, speaks her mind.

Large-Format Cameras Are Changing Film Language, From Joker to Midsommar, by Chris O'Falt

With the advent of cameras like the Alexa 65, a new generation of large-format filmmakers is using their immersive qualities in exciting ways.

Peak TV Is Only a Concern in the Gated Community of Hollywood, by Libby Hill

The average Joe doesn't care about The Morning Show. They already have all the TV they need and can afford.

Bombshell and Jojo Rabbit Share an Oscar Superpower: They're Made For the Mainstream, by Anne Thompson

Films like Parasite and Pain and Glory are critical darlings, but the truth is that when it comes to Oscar votes, popularity counts.

Is This Is Us Making You Seasick? You're Not Alone, by Leo Garcia

Digital image stabilization mixed with the show's penchant for shaky camera work makes it seem as if certain scenes were filmed out at sea.

Disney+: 200 Must-Watch TV Shows & Movies Available on Launch, by LaToya Ferguson

From the beloved Star Wars trilogies to the Marvel Cinematic Universe to Pixar's greatest achievements, here's the best of the content that will be available to subscribers for $6.99 a month.

Have a great weekend,

Dana

Sign Up: Stay on top of the latest breaking film and TV news! Sign up for our Email Newsletters here.


Idiot Box: HBO Max joins the flood of streaming services – Weekly Alibi


Viewers of visual media can be forgiven for thinking that today's streaming services have turned into a veritable deluge. Every other week it seems like I'm educating/warning people about another streaming service with a catalogue of original programming, an archive of old TV shows and a random selection of movies available on your mobile devices for a low monthly subscription fee. Since I didn't talk about one last week, I guess I'm obliged to this week. Netflix, Hulu, Amazon Prime, Disney Plus, Apple TV+: Meet HBO Max.

Like a lot of Americans, you may be confused at this point. Isn't HBO already a pay-per-view station full of movies, TV shows and original content? Sure. And can't you already subscribe to HBO Now, a streaming service for portable devices that bypasses the need for cable or satellite? Yup. But HBO Max is a long-brewing corporate mash-up from AT&T-owned multinational mass media conglomerate WarnerMedia. Not only will it consist of HBO's normal slate of movies, miniseries and TV shows; it will also have access to all of WarnerMedia's corporate catalogue. Basically, whatever Disney doesn't own, WarnerMedia does (HBO, CNN, TBS, TNT, TruTV, Cartoon Network, Adult Swim, TCM, Warner Bros, New Line, Crunchy Roll, Looney Tunes, The CW, DC Comics).

HBO Max, for example, will be the new home for the Warner Bros.-produced series Friends, now that the beloved '90s sitcom is free from its $100 million contract with Netflix. Also lined up: The Fresh Prince of Bel Air (which is owned by Warner Bros. Domestic Television Distribution) and any Warner Bros.-produced dramas on The CW Network (like, for example, Riverdale). Throw in some Bugs Bunny cartoons, all the Nightmare On Elm Street films (from New Line Cinema) and stuff like Full Frontal with Samantha Bee (that's TBS), and you've got a solid back catalogue on which to build.

In addition to everything WarnerMedia owns, HBO Max has signed contracts to re-air BBC shows including Doctor Who, The Office, Top Gear and Luther. The network also signed a deal with Japan's Studio Ghibli to secure US streaming rights to all of its animated films (My Neighbor Totoro, Princess Mononoke, Spirited Away, Ponyo, Howl's Moving Castle, Kiki's Delivery Service, to name a few). These deals add some impressive weight to HBO Max's lineup (while, at the same time, stealing these shows away from cable/streaming rivals).

As far as the new programming is concerned, the floodgates have already opened. Dozens of emails have been pouring into my inbox this week, touting HBO Max's new projects. Director Denis Villeneuve (Blade Runner 2049) will adapt Dune: The Sisterhood, a series based on Brian Herbert and Kevin Anderson's sequel to Frank Herbert's sci-fi classic. The classic 1984 horror-comedy Gremlins is being turned into an animated series. The Hos is a multigenerational docu-reality series about a rich Vietnamese-American family in Houston. Monica Lewinsky (yes, that Monica Lewinsky) executive produces 15 Minutes of Shame, a documentary series about the public shaming epidemic in our culture and our collective need to destroy one another. Brad and Gary Go To finds Hollywood power couple Brad Goreski and Gary Janetti traveling around the globe sampling international cuisine. The streaming service has also ordered up Grease: Rydell High, a musical spin-off which brings the 1978 film Grease to today's post-Glee audiences.

There will be original movies on tap as well. Emmy-winning comedian Amy Schumer climbs on board with Expecting Amy, a documentary about the funny lady's struggle to prepare for a stand-up comedy tour while pregnant. Melissa McCarthy (Spy, Bridesmaids) will star in Superintelligence, about an ordinary woman who is befriended by the world's first artificial intelligence with an attitude.

As far as when we can get a look at HBO Max, WarnerMedia has pushed the premiere date several times and is now simply saying spring 2020. What will it cost the consumer? Given that HBO Now costs $15 a month, and HBO Max will include all of HBO's streaming product (plus all that other stuff mentioned above), we can only assume that it will cost more than that. With Hulu starting at $6 a month, Disney+ banking on charging $6.99 a month and Netflix running $13 a month, HBO Max is looking kinda pricey. But what do you say, American consumers? Are you ready to fork out for one more monthly streaming service? It's the last one. I swear. (It's not. Not by a longshot.)


Here's How to Watch Watchmen, HBO's Next Game of Thrones – Cosmopolitan

The DC universe just keeps getting bigger, and the newest addition to the comic world is HBO's Watchmen, a series based on the 1986 graphic novel where the superheroes are the outlaws (don't worry, I'll explain what that even means in a bit).

You've probs already heard about it because it's being dubbed as the new Game of Thrones, which means our hopes are high for the beginning and our expectations for the series' ending are at an all-time low.

The Watchmen graphic novel is about "superheroes." (Yes, that's in quotes for a reason.) These superheroes aren't born with crazy superhuman abilities but instead are really, really good at one specific thing, so they might have, say, extremely high intelligence or insane detective skills.

The comic takes place in a world where these everyday people would dress in superhero costumes and act as vigilantes, until the practice was outlawed in 1977 after a riot involving said vigilante superheroes. A lot of the former superheroes went to work for the government, using their powers for good, but some ignored the law (aka a man named Rorschach) and continued their work in a more anarchic way.

The show is being described as more of a continuation, not an adaptation. It picks up a little over 30 years after the novel ended.

Queen Regina King stars as the main character, a police officer in Tulsa who goes by the name Sister Night and is super protective over her husband and child. Also, she has a BADASS costume that is part Catwoman, part Xena Warrior Princess. Serious Halloween inspo.

Dr. Manhattan *might* be making a return. If you're not familiar, he's a blue guy and the only one in the series with actual superpowers. His godlike capabilities include teleportation, total clairvoyance, and telekinesis. At the end of the graphic novel, he leaves Earth to go to Mars, BUT he's in the HBO previews, so fingers crossed.

You'll definitely see Adrian Veidt (also known as Ozymandias), a retired superhero with superintelligence who is known for faking an alien invasion with a giant squid. (Ya, this show gets weird.)

Of course, Rorschach (who was killed by Dr. Manhattan at the end of the series) will return, but not exactly. His name, mask, and overall evil mission will be carried on by a group of white supremacists.

You can catch the series on HBO or HBO Now every Sunday at 9 p.m. ET. But if you can't be held to a strict TV-watching schedule, it can also be streamed with an HBO Go account! TG for streaming services.


The Best Artificial Intelligence Books you Need to Read Today – Edgy Labs

If you're looking for a selection of the top artificial intelligence books, the offerings could be overwhelming. But we're here to help with that.

Artificial intelligence is slowly and steadily making its way through pretty much every system humans have created.

AI-powered agents are getting increasingly smarter as they hone their problem-solving and decision-making skills.

On the other hand, humans avail themselves of AI as much as possible. But they're called on to adapt and learn to coexist with machines if they are to thrive, or at worst merely survive.

As far as humans are concerned, intelligent agents cut both ways.

Thankfully, the world's leading scientists and thinkers help us understand what's at stake and the best damage control measures to take if need be.

Many books deal with AI theory, modern AI sciences, and the technology's future implications.

The ones listed below are some of the best artificial intelligence books today that dissect all of these areas.

1. Introduction to Artificial Intelligence

As befits the topic, we start our list with a comprehensive introduction to AI technology: Introduction to Artificial Intelligence. Written by Phillip C. Jackson, Jr., the book is one of the classics that's still read by experts in the field and non-specialists alike.

This book provides a summary of the previous two decades of research into the science of computer reasoning, and where it could be heading. Published in 1985, some of the information might be outdated, but if nothing else, the book could serve as a valuable historical document.

2. Artificial Intelligence: A Modern Approach

Another classic is Artificial Intelligence: A Modern Approach, written by Stuart Russell and Peter Norvig.

No list on the best artificial intelligence books can fail to mention this bestseller that has become a standard book for AI students. Used as a textbook in hundreds of universities around the world, the book was first published in 1995. A third edition came out in 2009.

You may want to check this book to know why it's described as the most popular artificial intelligence textbook in the world.

3. Life 3.0

This book is one of my personal favorites, by one of the leading physicists and cosmologists in the world, Max Tegmark, aka Mad Max.

Tegmark's Life 3.0: Being Human in the Age of Artificial Intelligence welcomes you to the most important conversation of our time. The MIT physics professor explores the future of AI and how it would reshape many facets of human life, from jobs to wars. He's one of those who think AI is a double-edged sword, and that it's really up to us whether to give it free rein.

Elon Musk recommends this book as worth reading, recapping that AI could be the best or worst thing.

4. How to Create a Mind

How to Create a Mind: The Secret of Human Thought Revealed is a book by famous futurist and tech visionary Ray Kurzweil.

Kurzweil discusses the notion of mind and how it emerges from the brain, and the attempts of scientists to recreate human intelligence. He predicts that by 2020, computers would be powerful enough to simulate an entire human brain.

Kurzweil offers some interesting thought experiments on thinking in the book. For example, most people can recite the alphabet correctly, but most would fail at reciting it backward as easily. The reason for this, according to the author, has to do with the memory formation process. The brain stores memories as hierarchical sequences only accessible in the order they're remembered in.
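Kurzweil's alphabet observation has a loose analogue in a familiar data structure: a singly linked chain is cheap to walk forward but has no backward links, so reciting it in reverse means rebuilding the chain first. The sketch below is only an analogy for the book's claim, not code from it:

```python
import string

# A singly linked chain of letters: each letter points only to the next one.
# Following "next" links is easy; nothing points backward.
chain = {a: b for a, b in zip(string.ascii_lowercase, string.ascii_lowercase[1:])}

def recite_forward(start="a"):
    """Walk the chain in its stored order -- the cheap direction."""
    out, letter = [], start
    while letter is not None:
        out.append(letter)
        letter = chain.get(letter)
    return "".join(out)

def recite_backward():
    """No backward links exist, so the whole chain must be inverted first --
    a rough analogue of why backward recall feels so much harder."""
    reversed_chain = {b: a for a, b in chain.items()}
    out, letter = [], "z"
    while letter is not None:
        out.append(letter)
        letter = reversed_chain.get(letter)
    return "".join(out)

assert recite_forward() == string.ascii_lowercase
assert recite_backward() == string.ascii_lowercase[::-1]
```

Forward recall follows the stored links directly; backward recall needs an extra pass over everything, which is the asymmetry the example trades on.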

5. Superintelligence: Paths, Dangers, Strategies

Oxford philosopher Nick Bostrom is known for his work on major existential risks. He includes the superintelligence threat among the bunch.


In Superintelligence: Paths, Dangers, Strategies, Bostrom questions whether smart algorithms would spell the end of humanity or be a catalyst for a better future.

In this New York Times bestseller, Bostrom argues that superintelligent machines left unchecked could replace humans as the dominant lifeform on Earth.

6. Weapons of Math Destruction

AI is all about Big Data, and the algorithms that work off of it. And that's the focus of the book titled Weapons of Math Destruction by Cathy O'Neil, a data scientist trained at Harvard University.

In the book, the author explores how math, at the heart of data and by extension AI, could be manipulated and biased. The author discusses the negative social implications of AI and how it could be a threat to democracy.

O'Neil identifies three factors (scale, secrecy, and destructiveness) that could turn an AI algorithm into a Weapon of Math Destruction.

7. Our Final Invention

It's thanks to their brains, not brawn, that humans dominated Earth and reigned supreme over other species. Now, a human invention, AI, is posing a potential threat to this dominance.

Our Final Invention: Artificial Intelligence And The End Of The Human Era is a book by American documentary filmmaker James Barrat.

According to the author, while human intelligence stagnates, machines are getting smarter and would soon surpass humans' cognitive abilities. Superintelligent artificial species could develop survival drives that could eventually lead them to clash with humans.

8. The Sentient Machine

Unlike other books on this list, The Sentient Machine: The Coming Age of Artificial Intelligence provides a more optimistic look at AI.

In the book, inventor and techpreneur Amir Husain, unlike Bostrom, Tegmark, and Musk, thinks humans can thrive with AI, not just survive.

Weighing AI's risks and potential, Husain thinks we should embrace AI and let sentient machines lead us to a bright future. This isn't some empty utopian daydreaming! The author's approach is based on scientific, cultural, and historical arguments. He also provides a wide-ranging discussion on what makes us human and our role as creators in the world.

9. The Fourth Age

We find another optimistic take on AI in The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

In this book, author Byron Reese manages to both engage and entertain the reader with his insights into history and projections for the future. According to Reese, human civilization has gone through three major disruptions in its history: fire and language, agriculture, and finally writing and the wheel.

AI promises a fourth age, which the book discusses in detail.

10. AI Superpowers

The United States and China are at the forefront of AI research. In a context marked by geopolitical and economic rivalry between the two countries, it stands to reason that AI will be weaponized in some way.

AI Superpowers: China, Silicon Valley, and the New World Order is a book by AI pioneer Kai-Fu Lee. China is racing with the U.S. to take the AI lead globally, and Lee thinks it will dominate the industry. "If data is the new oil," says Lee, "then China is the new Saudi Arabia."

Lee points out the factors that he thinks would help China win the AI arms race. He cites a high quantity of data, looser data-protection regulations, and a more aggressive AI startup culture as reasons giving China a potential edge.

These are our picks. Which artificial intelligence books left an impression on you?


Aquinas’ Fifth Way: The Proof from Specification – Discovery Institute

Editor's note: See also Introducing Aquinas' Five Ways, by Michael Egnor. For Dr. Egnor's previous posts in this series on Aquinas' Five Ways, see here, here, here, and here. For more on Thomas Aquinas, intelligent design, and evolution, see the website Aquinas.Design.

Aquinas' Fifth Way is the proof of God's existence that is easiest to grasp in everyday life. The order of nature points to a Mind that gives it order. This obvious order is the substrate for all natural science; after all, without natural order, the scientific study of nature would be an exercise in futility. And the natural order is the framework for everyday life. We could not take a breath unless our lungs and nerves worked consistently, and unless oxygen had the chemical properties that it has. Order in nature is ubiquitous. We have become so accustomed to it that we fail to notice how remarkable it is.

That this natural order points to God is obvious. But what are the characteristics of this order? In living things, ID theorists describe this order as specified complexity. Specified complexity means that a pattern has substantial independently specified information (specification) that has a low probability of occurrence by chance (complexity). Aquinas would agree that such specified complexity points to a designer, but he understands natural order in a way that is rather different from the understanding of many ID theorists.

For Aquinas, it is the specification, rather than the complexity, that is at the heart of the Fifth Way. Aquinas understands specification in an Aristotelian sense: as final cause (teleology). The Fifth Way is often called the proof from Final Cause, or the Teleological proof.

Final cause is fundamental to Aristotelian-Thomistic metaphysics. One may ask: What is the cause of a thing? St. Thomas answers that to completely understand a cause in nature, we really must know four causes:

Material cause: the matter out of which something is made. The material cause of a statue is the block of marble from which it is carved.

Efficient cause: the agent that gets the cause started. The efficient cause of a statue is the sculptor.

Formal cause: the structure of the system that is caused. The formal cause of a statue is the shape of the statue.

Final cause: the end or purpose for the cause. The final cause of a statue is the purpose in the mind of the sculptor to use the statue to decorate a garden, for example.

In nature, final causes and formal causes often overlap. The formal cause of an acorn growing into an oak tree is the form of the oak tree, which is also the final cause of the acorn's growth: the end, or telos, of the growth of the acorn is the form of the oak tree it will become.

The four causes have reciprocal relations. Material cause and formal cause work together, in the sense that form provides structure to matter. Efficient cause and final cause work together, in a push-pull relationship: an efficient cause pushes while a final cause simultaneously pulls. Efficient causes point to ends; regular causes in nature tend to specific outcomes. When you strike a match (efficient cause), it bursts into flame (final cause). Efficient causation is incomprehensible without final cause: regular cause-and-effect in nature is directional, in the sense that cause consistently runs from one specific state to another specific state. It makes no sense to speak of a cause "from" unless we also speak of a cause "to." Causes have beginnings and ends.

For St. Thomas (following Aristotle), final cause is particularly important, because it provides direction to natural causes. Final cause is the essential principle by which causes in nature happen. We moderns tend to ignore final causes; we think of cause as a push (efficient cause) rather than as a pull (final cause). For St. Thomas, it is the pull of final cause that is fundamental to the regularity of nature. Final cause is the cause of causes.

With this in mind, let's look at the proof from the Fifth Way. St. Thomas notes that causes in nature are more or less consistent. Causation is the actualization of potentiality, and causation follows patterns. Things fall down, not up. Cold weather causes water to freeze, not boil. Acorns become oaks, but oaks don't become acorns. Aquinas notes that the final cause of an acorn is in some sense in the acorn itself: that is, in order for an acorn to reliably grow into an oak tree, the form of the oak tree must have some sort of existence while the acorn is still an acorn. A process of change can't point to an end unless the end pre-exists in some sense. But how can an oak tree exist when it is merely an acorn?

What exists is the form of the oak tree. The form of the oak tree can exist in two ways. It can exist in an object as a substantial form; that is, the form can exist in the oak tree itself. This is the way forms ordinarily exist in objects.

A form can also exist in an intentional sense; that is, the form can exist in the mind of a person who thinks about it. When I know an oak tree, the form of that oak tree is in my mind as well as in the oak tree. That is, in fact, how I know it: my mind grasps its form.

For change to occur in nature, the form of the end-state of the change must in some way exist prior to the completion of the change. Otherwise, the change would have no direction; colloquially, the acorn wouldn't know what to grow into.

But of course most things in nature, and all inanimate things, don't know anything. An electron doesn't know quantum mechanics, but it moves in strict accordance with quantum mechanical laws. A rock knows nothing of Newton's law of gravity, but it falls in strict accordance with Newton's law. A plant knows nothing about photosynthesis, but it does it very well every day, with an expertise exceeding that of the best chemist.

Since the form of the final state of a process of change can't be in the thing being changed (the acorn is not yet the oak tree), and change routinely occurs in things that have no mind to look forward to the final state, where is the form of the final state of change in nature?

Aquinas asserts that the form of the final state (the telos, or final cause) must therefore be in the Mind of a Superintelligence that directs natural change. That is what all men call God.

So you can see that in the Thomistic Fifth Way, it is the specification of change, not its complexity, that is at the heart of the matter. It's reminiscent of the quip about a dog that can recite Shakespeare: it's not that the mutt knows Shakespeare that's remarkable; it's remarkable that he can talk at all. What's remarkable in nature is not so much that nature follows complex patterns, but that it follows any pattern at all. Any pattern in nature, even the simplest, cries out for explanation, and it is the fact of natural patterns that is the starting point of the Fifth Way.

From the Thomistic perspective, even the simplest natural process, a leaf falling to the ground, is proof of God's existence. The fall of the leaf is specified prior to the fall: leaves fall to the ground, rather than doing any of the countless other things a natural object might do (like burst into flame or grow a tail). This specification, this telos, requires a Mind in which the fallen state of the leaf is conceived prior to the actual fall of the leaf. Change in nature requires a Mind to look ahead and direct it. The complexity (or simplicity) of the change is irrelevant.

It is the consistent directedness of change in nature that points to God. Atheists, with much handwaving and dubious science, claim to explain biological complexity by Darwinian stories. Yet, even on its own terms, Darwinism fails. Adaptation by natural selection may account on some level for the fixation of a particular phenotype in a population, but it offers no explanation for the fundamental fact of teleology in nature. In fact, Darwinian theory depends on teleology in nature. If natural causes were not consistent and mostly directed, there would be no consistency to evolution at all. There is no evolution in chaos. Without teleology, chance and necessity would be all chance and no necessity, and therefore no evolution.

Actually, atheists can't explain chance either. Chance is the accidental conjunction of teleological processes. A car accident may happen by chance, but it necessarily occurs in a matrix of purpose and teleology: the cars move in accordance with the laws of physics, the road was constructed according to plans, the cars are driven purposefully by drivers, and so on. There can be no chance unless there is a system of regularity in which chance can occur. Chance by itself can't happen; it is, by definition, the accidental conjunction of teleological processes. Both chance and necessity point to God. Pure chance, without a framework of regularity, is unintelligible.

From the perspective of the Fifth Way, necessity permeates nature. But it is specification, rather than complexity, that characterizes necessity and points to God's existence. The specification need not be complex. The simplest motion of an inanimate object, a raindrop falling to the ground, is proof of God's existence.

Teleology is foresight: the ability of a natural process to proceed to an end not yet realized. Yet the end must be realized, in some real sense, for final cause to be a cause. The foresight inherent in teleology is in God's Mind, and it is via His manifest foresight in teleology that we see Him at work all around us.

This rules out the God of deism. The God of the Fifth Way is no watchmaker who winds up the world and walks away. He is at work ceaselessly and everywhere. The evidence for a Designer is as clear in the simplest inanimate process as it is in the most complex living organism. The elegant, intricate complexity of cellular metabolism is certainly a manifestation of God's glory; the beauty of biological processes is breathtaking. But the proof of His existence is in every movement in nature: in every detail of cellular metabolism, of course, but also in every raindrop and in every blown grain of dust.

Photo: An oak tree, by Abrget47j [CC BY-SA 3.0], via Wikimedia Commons.


Elon Musk warns ‘advanced A.I.’ will soon manipulate social media – Big Think

Twitter bots in 2019 can perform some basic functions, like tweeting content, retweeting, following other users, quoting other users, liking tweets and even sending direct messages. But even though bots on Twitter and other social media seem to be getting smarter than previous iterations, these A.I. systems are still relatively unsophisticated in how well they can manipulate social discourse.

But it's only a matter of time before more advanced A.I. begins manipulating the conversation on a large scale, according to Tesla and SpaceX CEO Elon Musk.

"If advanced A.I. (beyond basic bots) hasn't been applied to manipulate social media, it won't be long before it is," Musk tweeted on Thursday morning.

It's unclear exactly what Musk means by "advanced A.I.," but his tweet came just hours after The New York Times published an article outlining a study showing that at least 70 countries have experienced digital disinformation campaigns over the past two years.

"In recent years, governments have used 'cyber troops' to shape public opinion, including networks of bots to amplify a message, groups of 'trolls' to harass political dissidents or journalists, and scores of fake social media accounts to misrepresent how many people engaged with an issue," Davey Alba and Adam Satariano wrote for the Times. "The tactics are no longer limited to large countries. Smaller states can now easily set up internet influence operations as well."

Musk followed up his tweet by saying that "anonymous bot swarms," presumably referring to coordinated activity by large numbers of social media bots, should be investigated.

"If they're evolving rapidly, something's up," he tweeted.

Musk has long predicted a gloomy future with AI. In 2017, he told staff at Neuralink, Musk's company developing an implantable brain-computer interface, that he thinks there's about "a five to 10 percent chance" of making artificial intelligence safe. In the documentary "Do You Trust This Computer?", Musk warned of the dangers of a single organization someday developing superintelligence.

"The least scary future I can think of is one where we have at least democratized AI because if one company or small group of people manages to develop godlike digital superintelligence, they could take over the world," Musk said.

"At least when there's an evil dictator, that human is going to die. But for an AI, there would be no death. It would live forever. And then you'd have an immortal dictator from which we can never escape."


Superintelligence: Paths, Dangers, Strategies – Wikipedia

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the Swedish philosopher Nick Bostrom from the University of Oxford. It argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists,[2] and the outcome could be an existential catastrophe for humans.[3]

Bostrom's book has been translated into many languages and is available as an audiobook.[1][4]

It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.

While the ultimate goals of superintelligences can vary greatly, a functional superintelligence will spontaneously generate, as natural subgoals, "instrumental goals" such as self-preservation and goal-content integrity, cognitive enhancement, and resource acquisition. For example, an agent whose sole final goal is to solve the Riemann hypothesis (a famous unsolved mathematical conjecture) could create and act upon a subgoal of transforming the entire Earth into some form of computronium (hypothetical "programmable matter") to assist in the calculation. The superintelligence would proactively resist any outside attempts to turn the superintelligence off or otherwise prevent its subgoal completion. In order to prevent such an existential catastrophe, it might be necessary to successfully solve the "AI control problem" for the first superintelligence. The solution might involve instilling the superintelligence with goals that are compatible with human survival and well-being. Solving the control problem is surprisingly difficult because most goals, when translated into machine-implementable code, lead to unforeseen and undesirable consequences.

The book ranked #17 on the New York Times list of best-selling science books for August 2014.[5] In the same month, business magnate Elon Musk made headlines by agreeing with the book that artificial intelligence is potentially more dangerous than nuclear weapons.[6][7][8] Bostrom's work on superintelligence has also influenced Bill Gates's concern for the existential risks facing humanity over the coming century.[9][10] In a March 2015 interview with Baidu's CEO, Robin Li, Gates said that he would "highly recommend" Superintelligence.[11]

The science editor of the Financial Times found that Bostrom's writing "sometimes veers into opaque language that betrays his background as a philosophy professor" but convincingly demonstrates that the risk from superintelligence is large enough that society should start thinking now about ways to endow future machine intelligence with positive values.[2] A review in The Guardian pointed out that "even the most sophisticated machines created so far are intelligent in only a limited sense" and that "expectations that AI would soon overtake human intelligence were first dashed in the 1960s", but finds common ground with Bostrom in advising that "one would be ill-advised to dismiss the possibility altogether".[3]

Some of Bostrom's colleagues suggest that nuclear war presents a greater threat to humanity than superintelligence, as does the future prospect of the weaponisation of nanotechnology and biotechnology.[3] The Economist stated that "Bostrom is forced to spend much of the book discussing speculations built upon plausible conjecture... but the book is nonetheless valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote."[12] Ronald Bailey wrote in the libertarian Reason that Bostrom makes a strong case that solving the AI control problem is the "essential task of our age".[13] According to Tom Chivers of The Daily Telegraph, the book is difficult to read, but nonetheless rewarding.[14]


Global Risks Report 2017 – Reports – World Economic Forum

Every step forward in artificial intelligence (AI) challenges assumptions about what machines can do. Myriad opportunities for economic benefit have created a stable flow of investment into AI research and development, but with the opportunities come risks to decision-making, security and governance. Increasingly intelligent systems supplanting both blue- and white-collar employees are exposing the fault lines in our economic and social systems and requiring policy-makers to look for measures that will build resilience to the impact of automation.

Leading entrepreneurs and scientists are also concerned about how to engineer intelligent systems as these systems begin implicitly taking on social obligations and responsibilities, and several of them penned an Open Letter on Research Priorities for Robust and Beneficial Artificial Intelligence in late 2015.1 Whether or not we are comfortable with AI may already be moot: more pertinent questions might be whether we can and ought to build trust in systems that can make decisions beyond human oversight that may have irreversible consequences.

By providing new information and improving decision-making through data-driven strategies, AI could potentially help to solve some of the complex global challenges of the 21st century, from climate change and resource utilization to the impact of population growth and healthcare issues. Start-ups specializing in AI applications received US$2.4 billion in venture capital funding globally in 2015 and more than US$1.5 billion in the first half of 2016.2 Government programmes and existing technology companies add further billions (Figure 3.2.1). Leading players are not just hiring from universities; they are hiring the universities: Amazon, Google and Microsoft have moved to funding professorships and directly acquiring university researchers in the search for competitive advantage.3

Machine learning techniques are now revealing valuable patterns in large data sets and adding value to enterprises by tackling problems at a scale beyond human capability. For example, Stanford's computational pathologist (C-Path) has highlighted unnoticed indicators for breast cancer by analysing thousands of cellular features on hundreds of tumour images,4 while DeepMind increased the power usage efficiency of Alphabet Inc.'s data centres by 15%.5 AI applications can reduce costs and improve diagnostics with staggering speed and surprising creativity.

The generic term AI covers a wide range of capabilities and potential capabilities. Some serious thinkers fear that AI could one day pose an existential threat: a superintelligence might pursue goals that prove not to be aligned with the continued existence of humankind. Such fears relate to strong AI or artificial general intelligence (AGI), which would be the equivalent of human-level awareness, but which does not yet exist.6 Current AI applications are forms of weak or narrow AI or artificial specialized intelligence (ASI); they are directed at solving specific problems or taking actions within a limited set of parameters, some of which may be unknown and must be discovered and learned.

Tasks such as trading stocks, writing sports summaries, flying military planes and keeping a car within its lane on the highway are now all within the domain of ASI. As ASI applications expand, so do the risks of these applications operating in unforeseeable ways or outside the control of humans.7 The 2010 and 2015 stock market flash crashes illustrate how ASI applications can have unanticipated real-world impacts, while AlphaGo shows how ASI can surprise human experts with novel but effective tactics (Box 3.2.1). In combination with robotics, AI applications are already affecting employment and shaping risks related to social inequality.8

AI has great potential to augment human decision-making by countering cognitive biases and making rapid sense of extremely large data sets: at least one venture capital firm has already appointed an AI application to help determine its financial decisions.9 Gradually removing human oversight can increase efficiency and is necessary for some applications, such as automated vehicles. However, there are dangers in coming to depend entirely on the decisions of AI systems when we do not fully understand how the systems are making those decisions.10

by Jean-Marc Rickli, Geneva Centre for Security Policy

One sector that saw the huge disruptive potential of AI from an early stage is the military. The weaponization of AI will represent a paradigm shift in the way wars are fought, with profound consequences for international security and stability. Serious investment in autonomous weapon systems (AWS) began a few years ago; in July 2016 the Pentagon's Defense Science Board published its first study on autonomy, but there is no consensus yet on how to regulate the development of these weapons.

The international community started to debate the emerging technology of lethal autonomous weapons systems (LAWS) in the framework of the United Nations Convention on Certain Conventional Weapons (CCW) in 2014. Yet, so far, states have not agreed on how to proceed. Those calling for a ban on AWS fear that human beings will be removed from the loop, leaving decisions on the use of lethal force to machines, with ramifications we do not yet understand.

There are lessons here from non-military applications of AI. Consider the example of AlphaGo, the AI Go player created by Google's DeepMind division, which in March last year beat the world's second-best human player. Some of AlphaGo's moves puzzled observers, because they did not fit usual human patterns of play. DeepMind CEO Demis Hassabis explained the reason for this difference as follows: unlike humans, the AlphaGo program aims to maximize the probability of winning rather than to optimize margins. If this binary logic, in which the only thing that matters is winning while the margin of victory is irrelevant, were built into an autonomous weapons system, it would lead to violations of the principle of proportionality, because the algorithm would see no difference between victories that required it to kill one adversary or 1,000.

Autonomous weapons systems will also have an impact on strategic stability. Since 1945, the global strategic balance has prioritized defensive systems, a priority that has been conducive to stability because it has deterred attacks. However, the strategy of choice for AWS will be based on swarming, in which an adversary's defence system is overwhelmed with a concentrated barrage of coordinated simultaneous attacks. This risks upsetting the global equilibrium by neutralizing the defence systems on which it is founded. It would lead to a very unstable international configuration, encouraging escalation and arms races, and the replacement of deterrence by pre-emption.

We may already have passed the tipping point for prohibiting the development of these weapons. An arms race in autonomous weapons systems is very likely in the near future. The international community should tackle this issue with the utmost urgency and seriousness because, once the first fully autonomous weapons are deployed, it will be too late to go back.

In any complex and chaotic system, including AI systems, potential dangers include mismanagement, design vulnerabilities, accidents and unforeseen occurrences.11 These pose serious challenges to ensuring the security and safety of individuals, governments and enterprises. It may be tolerable for a bug to cause an AI mobile phone application to freeze or misunderstand a request, for example, but when an AI weapons system or autonomous navigation system encounters a mistake in a line of code, the results could be lethal.

Machine-learning algorithms can also develop their own biases, depending on the data they analyse. For example, an experimental Twitter account run by an AI application ended up being taken down for making socially unacceptable remarks;12 search engine algorithms have also come under fire for undesirable race-related results.13 Decision-making that is either fully or partially dependent on AI systems will need to consider management protocols to avoid or remedy such outcomes.
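As a minimal illustration of how such bias enters through the data rather than the code, here is a hypothetical sketch (the groups, labels, and counts are invented, not taken from the report): a trivial frequency-based model trained on skewed historical decisions simply reproduces the skew.

```python
from collections import Counter

# Hypothetical historical decisions: (group, outcome). Group "x" was approved
# far more often than group "y" -- a bias in the data, not in the algorithm.
training_data = [("x", "approve")] * 90 + [("x", "deny")] * 10 \
              + [("y", "approve")] * 30 + [("y", "deny")] * 70

def train(data):
    """Learn P(approve | group) by simple counting over past decisions."""
    counts = Counter(data)
    model = {}
    for group in {g for g, _ in data}:
        approved = counts[(group, "approve")]
        total = approved + counts[(group, "deny")]
        model[group] = approved / total
    return model

model = train(training_data)
# The learned model rates group "x" at 0.9 and group "y" at 0.3:
# the historical bias becomes the model's "knowledge".
```

Any real system is vastly more complex, but the mechanism is the same: whatever regularities sit in the training data, desirable or not, are what the algorithm learns.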

AI systems in the cloud are of particular concern because of issues of control and governance. Some experts propose that robust AI systems should run in a "sandbox", an experimental space disconnected from external systems, but some cognitive services already depend on their connection to the internet. The AI legal assistant ROSS, for example, must have access to electronically available databases. IBM's Watson accesses electronic journals, delivers its services, and even teaches a university course via the internet.14 The data extraction program TextRunner is successful precisely because it is left to explore the web and draw its own conclusions unsupervised.15

On the other hand, AI can help solve cybersecurity challenges. Currently AI applications are used to spot cyberattacks and potential fraud in internet transactions. Whether AI applications are better at learning to attack or defend will determine whether online systems become more secure or more prone to successful cyberattacks.16 AI systems are already analysing vast amounts of data from phone applications and wearables; as sensors find their way into our appliances and clothing, maintaining security over our data and our accounts will become an even more crucial priority. In the physical world, AI systems are also being used in surveillance and monitoring, analysing video and sound to spot crime, help with anti-terrorism, and report unusual activity.17 How much they will come to reduce overall privacy is a real concern.
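The statistical flavour of such fraud spotting can be sketched in a few lines, under invented numbers (the transaction amounts and threshold are hypothetical, not from the report): flag any transaction whose amount deviates too far from the historical norm.

```python
import statistics

# Hypothetical history of past transaction amounts for one account.
history = [20.0, 35.5, 18.2, 42.0, 27.9, 31.4, 24.8, 38.6]
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(amount, threshold=3.0):
    """Flag a transaction whose z-score against history exceeds the threshold."""
    z = abs(amount - mean) / stdev
    return z > threshold

# A routine amount passes; an extreme one is flagged for human review.
```

Production systems use far richer features and learned models, but the core move, scoring new events against a model of normal behaviour, is the same.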

So far, AI development has occurred in the absence of almost any regulatory environment.18 As AI systems inhabit more technologies in daily life, calls for regulatory guidelines will increase. But can AI systems be sufficiently governed? Such governance would require multiple layers that include ethical standards, normative expectations of AI applications, implementation scenarios, and assessments of responsibility and accountability for actions taken by or on behalf of an autonomous AI system.

AI research and development presents issues that complicate standard approaches to governance: it can take place outside of traditional institutional frameworks, involve both people and machines, and be distributed across various locations. The developments in AI may not be well understood by policy-makers who lack specialized knowledge of the field, and they may involve technologies that are not an issue on their own but that collectively present emergent properties requiring attention.19 It would be difficult to regulate such things before they happen, and any unforeseeable consequences or control issues may be beyond governance once they occur (Box 3.2.2).

One option could be to regulate the technologies through which the systems work. For example, in response to the development of automated transportation that will require AI systems, the U.S. Department of Transportation has issued a 116-page policy guide.20 Although the policy guide does not address AI applications directly, it does put in place guidance frameworks for the developers of automated vehicles in terms of safety, control and testing.

Scholars, philosophers, futurists and tech enthusiasts vary in their predictions for the advent of artificial general intelligence (AGI), with timelines ranging from the 2030s to never. However, given the possibility of an AGI working out how to improve itself into a superintelligence, it may be prudent or even morally obligatory to consider potentially feasible scenarios, and how serious or even existential threats may be avoided.

The creation of AGI may depend on converging technologies and hybrid platforms. Much of human intelligence is developed through the use of a body and the occupation of physical space, and robotics provides such embodiment for experimental and exploratory AI applications. Proof of concept for muscle and brain-computer interfaces has already been established: Massachusetts Institute of Technology (MIT) scientists have shown that memories can be encoded in silicon,21 and Japanese researchers have used electroencephalogram (EEG) patterns to predict the next syllable someone will say with up to 90% accuracy, which may lead to the ability to control machines simply by thinking.22

Superintelligence could potentially also be achieved by augmenting human intelligence through smart systems, biotech, and robotics rather than by being embodied in a computational or robotic form.23 Potential barriers to integrating humans with intelligence-augmenting technology include people's cognitive load, physical acceptance and concepts of personal identity.24 Should these challenges be overcome, keeping watch over the state of converging technologies will become an ever more important task as AI capabilities grow and fuse with other technologies and organisms.

Advances in computing technologies such as quantum computing, parallel systems, and neurosynaptic computing research may create new opportunities for AI applications or unleash new unforeseen behaviours in computing systems.25 New computing technologies are already having an impact: for instance, IBM's TrueNorth chip, with a design inspired by the human brain and built for exascale computing, already has contracts from Lawrence Livermore National Laboratory in California to work on nuclear weapons security.26 While adding great benefit to scenario modelling today, the possibility of a superintelligence could turn this into a risk.

by Stuart Russell, University of California, Berkeley

Few in the field believe that there are intrinsic limits to machine intelligence, and even fewer argue for self-imposed limits. Thus it is prudent to anticipate the possibility that machines will exceed human capabilities, as Alan Turing posited in 1951: "If a machine can think, it might think more intelligently than we do. [T]his new danger is certainly something which can give us anxiety."

So far, the most general approach to creating generally intelligent machines is to provide them with our desired objectives and with algorithms for finding ways to achieve those objectives. Unfortunately, we may not specify our objectives in such a complete and well-calibrated fashion that a machine cannot find an undesirable way to achieve them. This is known as the value alignment problem, or the King Midas problem. Turing suggested turning off the power at strategic moments as a possible solution to discovering that a machine is misaligned with our true objectives, but a superintelligent machine is likely to have taken steps to prevent interruptions to its power supply.

How can we define problems in such a way that any solution the machine finds will be provably beneficial? One idea is to give a machine the objective of maximizing the true human objective, but without initially specifying that true objective: the machine has to gradually resolve its uncertainty by observing human actions, which reveal information about the true objective. This uncertainty should avoid the single-minded and potentially catastrophic pursuit of a partial or erroneous objective. It might even persuade a machine to leave open the possibility of allowing itself to be switched off.
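The deference idea in this paragraph can be illustrated with a toy calculation. The sketch below follows the "off-switch game" framing associated with this line of research: a machine that is uncertain about the true human objective compares acting unilaterally with deferring to a human who can switch it off. The specific payoff values (+1 for a helpful action, -2 for a harmful one) are illustrative assumptions, not figures from the text.

```python
import random

random.seed(0)

# Toy "off-switch game" sketch: the robot does not know the true human
# utility u of its proposed action. Its two policies:
#   act   -- take the action regardless, receiving u
#   defer -- let the human decide; a rational human permits the action
#            only when u > 0, otherwise switches the robot off (payoff 0)
# Payoff values below are illustrative assumptions only.

def expected_value(policy, utility_samples):
    """Monte Carlo estimate of expected utility under a policy."""
    if policy == "act":
        payoffs = utility_samples                         # always receives u
    elif policy == "defer":
        payoffs = [max(u, 0.0) for u in utility_samples]  # human blocks u < 0
    else:
        raise ValueError(policy)
    return sum(payoffs) / len(payoffs)

# Robot's belief: the action might help (u = +1) or badly backfire (u = -2),
# with equal probability.
samples = [random.choice([1.0, -2.0]) for _ in range(100_000)]

ev_act = expected_value("act", samples)      # roughly (1 - 2) / 2 = -0.5
ev_defer = expected_value("defer", samples)  # roughly 1 / 2 = +0.5

# Uncertainty about the objective makes keeping the switch usable rational:
assert ev_defer > ev_act
```

Under these assumptions, deferring weakly dominates acting whenever the human reliably vetoes harmful actions, which is the intuition behind a machine "allowing itself to be switched off".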

There are complications: humans are irrational, inconsistent, weak-willed, computationally limited and heterogeneous, all of which conspire to make learning about human values from human behaviour a difficult (and perhaps not totally desirable) enterprise. However, these ideas provide a glimmer of hope that an engineering discipline can be developed around provably beneficial systems, allowing a safe way forward for AI. Near-term developments such as intelligent personal assistants and domestic robots will provide opportunities to develop incentives for AI systems to learn value alignment: assistants that book employees into US$20,000-a-night suites and robots that cook the cat for the family dinner are unlikely to prove popular.

Both existing ANI systems and the plausibility of AGI demand mature consideration. Major firms such as Microsoft, Google, IBM, Facebook, and Amazon have formed the Partnership on Artificial Intelligence to Benefit People and Society to focus on ethical issues and help the public better understand AI.[27] AI will become ever more integrated into daily life as businesses employ it in applications to provide interactive digital interfaces and services, increase efficiencies, and lower costs.[28] Superintelligent systems remain, for now, only a theoretical threat, but artificial intelligence is here to stay, and it makes sense to see whether it can help us to create a better future. To ensure that AI stays within the boundaries that we set for it, we must continue to grapple with building trust in systems that will transform our social, political, and business environments, make decisions for us, and become an indispensable faculty for interpreting the world around us.

Chapter 3.2 was contributed by Nicholas Davis, World Economic Forum, and Thomas Philbeck, World Economic Forum.

Armstrong, S. 2014. Smarter than Us: The Rise of Machine Intelligence. Berkeley, CA: Machine Intelligence Research Institute.

Bloomberg. 2016. Boston Marathon Security: Can A.I. Predict Crimes? Bloomberg News, Video, 21 April 2016. http://www.bloomberg.com/news/videos/b/d260fb95-751b-43d5-ab8d-26ca87fa8b83

Bostrom, N. 2014. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

CB Insights. 2016. Artificial intelligence explodes: New deal activity record for AI startups. Blog, 20 June 2016. https://www.cbinsights.com/blog/artificial-intelligence-funding-trends/

Chiel, E. 2016. Black teenagers vs. white teenagers: Why Google's algorithm displays racist results. Fusion, 10 June 2016. http://fusion.net/story/312527/google-image-search-algorithm-three-black-teenagers-vs-three-white-teenagers/

Clark, J. 2016. Google cuts its giant electricity bill with deepmind-powered AI. Bloomberg Technology, 19 July 2016. http://www.bloomberg.com/news/articles/2016-07-19/google-cuts-its-giant-electricity-bill-with-deepmind-powered-ai

Cohen, J. 2013. Memory implants: A maverick neuroscientist believes he has deciphered the code by which the brain forms long-term memories. MIT Technology Review. https://www.technologyreview.com/s/513681/memory-implants/

Frey, C. B. and M. A. Osborne. 2015. Technology at work: The future of innovation and employment. Citi GPS: Global Perspectives & Solutions, February 2015. http://www.oxfordmartin.ox.ac.uk/downloads/reports/Citi_GPS_Technology_Work.pdf

Hern, A. 2016. Partnership on AI formed by Google, Facebook, Amazon, IBM and Microsoft. The Guardian Online, 28 September 2016. https://www.theguardian.com/technology/2016/sep/28/google-facebook-amazon-ibm-microsoft-partnership-on-ai-tech-firms

Hunt, E. 2016. Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter. The Guardian, 24 March 2016. https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter

Kelly, A. 2016. Will Artificial Intelligence read your mind? Scientific research analyzes brainwaves to predict words before you speak. iDigital Times, 9 January 2016. http://www.idigitaltimes.com/will-artificial-intelligence-read-your-mind-scientific-research-analyzes-brainwaves-502730

Kime, B. 2016. 3 Chatbots to deploy in your business. VentureBeat, 1 October 2016. http://venturebeat.com/2016/10/01/3-chatbots-to-deploy-in-your-business/

Lawrence Livermore National Laboratory. 2016. Lawrence Livermore and IBM collaborate to build new brain-inspired supercomputer, Press release, 29 March 2016. https://www.llnl.gov/news/lawrence-livermore-and-ibm-collaborate-build-new-brain-inspired-supercomputer

Maderer, J. 2016. Artificial Intelligence course creates AI teaching assistant. Georgia Tech News Center, 9 May 2016. http://www.news.gatech.edu/2016/05/09/artificial-intelligence-course-creates-ai-teaching-assistant

Martin, M. 2012. C-Path: Updating the art of pathology. Journal of the National Cancer Institute 104 (16): 1202–04. http://jnci.oxfordjournals.org/content/104/16/1202.full

Mizroch, A. 2015. Artificial-intelligence experts are in high demand. Wall Street Journal Online, 1 May 2015. http://www.wsj.com/articles/artificial-intelligence-experts-are-in-high-demand-1430472782

Russell, S., D. Dewey, and M. Tegmark. 2015. Research priorities for a robust and beneficial artificial intelligence. AI Magazine Winter 2015: 105–14.

Scherer, M. U. 2016. Regulating Artificial Intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law & Technology 29 (2): 354–98.

Sherpany. 2016. Artificial Intelligence: Bringing machines into the boardroom, 21 April 2016. https://www.sherpany.com/en/blog/2016/04/21/artificial-intelligence-bringing-machines-boardroom/

Talbot, D. 2009. Extracting meaning from millions of pages. MIT Technology Review, 10 June 2009. https://www.technologyreview.com/s/413767/extracting-meaning-from-millions-of-pages/

Turing, A. M. 1951. Can digital machines think? Lecture broadcast on BBC Third Programme; typescript at turingarchive.org

U.S. Department of Transportation. 2016. Federal Automated Vehicles Policy September 2016. Washington, DC: U.S. Department of Transportation. https://www.transportation.gov/AV/federal-automated-vehicles-policy-september-2016

Wallach, W. 2015. A Dangerous Master. New York: Basic Books.

Yirka, B. 2016. Researchers create organic nanowire synaptic transistors that emulate the working principles of biological synapses. TechXplore, 20 June 2016. https://techxplore.com/news/2016-06-nanowire-synaptic-transistors-emulate-principles.html


Global Risks Report 2017 - Reports - World Economic Forum

Superintelligence – Hardcover – Nick Bostrom – Oxford …

Superintelligence Paths, Dangers, Strategies Nick Bostrom

"I highly recommend this book" --Bill Gates

"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." --Stuart Russell, Professor of Computer Science, University of California, Berkley

"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." --Martin Rees, Past President, Royal Society

"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" --Professor Max Tegmark, MIT

"Terribly important ... groundbreaking... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole... If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." --Olle Haggstrom, Professor of Mathematical Statistics

"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" --The Economist

"There is no doubting the force of [Bostrom's] arguments...the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." --Clive Cookson, Financial Times

"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes" --Elon Musk, Founder of SpaceX and Tesla

"Every intelligent person should read it." --Nils Nilsson, Artificial Intelligence Pioneer, Stanford University


Superintelligence: From Chapter Eight of Films from the …

This concern would often come out in conversations around meals. I'd be sitting next to some engaging person, having what seemed like a normal conversation, when they'd ask, "So, do you believe in superintelligence?" As something of an agnostic, I'd either prevaricate or express some doubts as to the plausibility of the idea. In most cases, they'd then proceed to challenge any doubts that I might express, and try to convert me into a superintelligence believer. I sometimes had to remind myself that I was at a scientific meeting, not a religious convention.

Part of my problem with these conversations was that, despite respecting Bostrom's brilliance as a philosopher, I don't fully buy into his notion of superintelligence, and I suspect that many of my overzealous dining companions could spot this a mile off. I certainly agree that the trends in AI-based technologies suggest we are approaching a tipping point in areas like machine learning and natural language processing. And the convergence we're seeing between AI-based algorithms, novel processing architectures, and advances in neurotechnology is likely to lead to some stunning advances over the next few years. But I struggle with what seems to me to be a very human idea: that narrowly defined intelligence and a particular type of power will lead to world domination.

Here, I freely admit that I may be wrong. And to be sure, we're seeing far more sophisticated ideas begin to emerge around what the future of AI might look like; physicist Max Tegmark, for one, outlines a compelling vision in his book Life 3.0. The problem, though, is that we're all looking into a crystal ball as we gaze into the future of AI, trying to make sense of shadows and portents that, to be honest, none of us really understands. When it comes to some of the more extreme imaginings of superintelligence, two things in particular worry me. One is the challenge we face in differentiating between what is imaginable and what is plausible when we think about the future. The other, looking back to chapter five and the movie Limitless, is how we define and understand intelligence in the first place.

***

With a creative imagination, it is certainly possible to envision a future where AI takes over the world and crushes humanity. This is the Skynet scenario of the Terminator movies, or the constraining virtual reality of The Matrix. But our technological capabilities remain light-years away from being able to create such futures, even if we do create machines that can design future generations of smarter machines. And it's not just our inability to write clever-enough algorithms that's holding us back. For human-like intelligence to emerge from machines, we'd first have to come up with radically different computing substrates and architectures. Our quaint, two-dimensional digital circuits are about as useful to superintelligence as the brain cells of a flatworm are to solving the unified theory of everything; it's a good start, but there's a long way to go.

Here, what is plausible, rather than simply imaginable, is vitally important for grounding conversations around what AI will and won't be able to do in the near future. Bostrom's ideas of superintelligence are intellectually fascinating, but they're currently scientifically implausible. On the other hand, Max Tegmark and others are beginning to develop ideas that have more of a ring of plausibility to them, while still painting a picture of a radically different future from the world we live in now (and in Tegmark's case, one where there is a clear pathway to strong AGI leading to a vastly better future). But in all of these cases, future AI scenarios depend on an understanding of intelligence that may end up being deceptive.


Amazon.com: Superintelligence: Paths, Dangers, Strategies …

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation, and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, biological cognitive enhancement, and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.


Superintelligence survey – Future of Life Institute


Max Tegmark's new book on artificial intelligence, Life 3.0: Being Human in the Age of Artificial Intelligence, explores how AI will impact life as it grows increasingly advanced, perhaps even achieving superintelligence far beyond human level in all areas. For the book, Max surveyed experts' forecasts and explored a broad spectrum of views on what will, and what should, happen. But it's time to expand the conversation. If we're going to create a future that benefits as many people as possible, we need to include as many voices as possible. And that includes yours! Below are the answers from the first 14,866 people who have taken the survey that accompanies Max's book. To join the conversation yourself, please take the survey here.

The first big controversy, dividing even leading AI researchers, involves forecasting what will happen. When, if ever, will AI outperform humans at all intellectual tasks, and will it be a good thing?

Everything we love about civilization is arguably the product of intelligence, so we can potentially do even better by amplifying human intelligence with machine intelligence. But some worry that superintelligent machines would end up controlling us and wonder whether their goals would be aligned with ours. Do you want there to be superintelligent AI, i.e., general intelligence far beyond human level?

In his book, Tegmark argues that we shouldn't passively ask "what will happen?" as if the future is predetermined, but instead ask what we want to happen and then try to create that future. What sort of future do you want?

If superintelligence arrives, who should be in control?

If you one day get an AI helper, do you want it to be conscious, i.e., to have subjective experience (as opposed to being like a zombie which can at best pretend to be conscious)?

What should a future civilization strive for?

Do you want life spreading into the cosmos?

In Life 3.0, Max explores 12 possible future scenarios, describing what might happen in the coming millennia if superintelligence is, or isn't, developed. You can find a cheat sheet that quickly describes each here, but for a more detailed look at the positives and negatives of each possibility, check out chapter 5 of the book. Here's a breakdown so far of the options people prefer:

You can learn a lot more about these possible future scenarios, along with fun explanations about what AI is, how it works, how it's impacting us today, and what else the future might bring, when you order Max's new book.

The results above will be updated regularly. Please add your voice by taking the survey here, and share your comments below!
