Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

Superintelligence asks the questions: what happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful--possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

Link:

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

The world’s best virology lab isn’t where you think – Spectator.co.uk

If you ever doubt how clever evolution can be, remember that it may take a year or more for the brightest minds on the planet to find and approve a vaccine for the coronavirus. Yet 99 per cent of otherwise healthy people seem to have an immune system that can crack the problem in under a week.

When I posted this on Twitter, I got a little abuse from a few strange people who thought I was calling scientists dumb. Quite the reverse. 99 per cent may be too high a figure, but it is surely evidence of some bizarre superintelligence within the human body that many of us can do unconsciously something that the combined brains of the world's pharmaceutical industries so far cannot match. In a matter of days, it can spot, target, test and devise an antibody to eliminate a hostile pathogen that it has never encountered before. Each of us is walking around every day without realising that we are home to the world's best virology lab.

True, the immune system does not have to wait for FDA approval. But it does have to do something similar: ensure that the cure does not do more harm than the disease. (Diseases such as lupus, multiple sclerosis and rheumatoid arthritis are examples of what happens when the system goes rogue.) And it's also worth noting that a human vaccine does not, in fact, cure the disease; it simply hacks the immune system to create its own cure.

A few dissident thinkers, including me and the economist Robin Hanson, have wondered aloud whether, in the time before a vaccine is available, there might be a role for an earlier practice called 'variolation'. This was introduced to Britain from the Ottoman Empire by Lady Mary Wortley Montagu in the early eighteenth century as a treatment against smallpox. Montagu controversially infected her own children with a small initial dose of smallpox, the assumption being that the body was better able to cope when presented with a small initial dose of the virus than with a larger one. She gained a PR coup for the procedure when the then Princess of Wales adopted it for her two daughters. Seven prisoners awaiting hanging at Newgate prison had been offered their freedom in exchange for undergoing the procedure; all seven survived. (Horrible to say it, but one small advantage of the death penalty is that it does solve certain problems in medical ethics.) Once Edward Jenner (and, earlier, Benjamin Jesty) came up with a cowpox vaccine, variolation sensibly fell out of favour.

We don't yet know whether the scale of the initial dose affects the course or outcome of the disease, and it would be heinous to act without this information. So far, strangely, most models of the disease assume infection is just a binary question: you are either infected or you are not. Is this a safe assumption, or are there gains to be had from also ensuring that if you are infected, you aren't infected very much?

I'm not taking any chances. While everyone else was stockpiling toilet paper, I invested in one of these.

Link:

The world's best virology lab isn't where you think - Spectator.co.uk

Is Artificial Intelligence (AI) A Threat To Humans? – Forbes

Are artificial intelligence (AI) and superintelligent machines the best or worst thing that could ever happen to humankind? This question has been around since the 1940s, when computer scientist Alan Turing began to believe there would come a time when machines could have an unlimited impact on humanity through a process that mimicked evolution.

When Oxford University Professor Nick Bostrom's New York Times best-seller, Superintelligence: Paths, Dangers, Strategies, was first published in 2014, it struck a nerve at the heart of this debate with its focus on all the things that could go wrong. However, in my recent conversation with Bostrom, he also acknowledged there's an enormous upside to artificial intelligence technology.

You can see the full video of our conversation here:

Since Bostrom's book was written in 2014, progress in artificial intelligence, machine learning, and deep learning has been very rapid. Artificial intelligence is in the public discourse, and most governments have some sort of strategy or road map to address AI. In his book, Bostrom compared humanity's situation with AI to children playing with a bomb that could go off at any time.

Bostrom explained, "There's a mismatch between our level of maturity in terms of our wisdom, our ability to cooperate as a species on the one hand and on the other hand our instrumental ability to use technology to make big changes in the world. It seems like we've grown stronger faster than we've grown wiser."

There are all kinds of exciting AI tools and applications that are beginning to affect the economy in many ways. These shouldn't be overshadowed by the hype around the hypothetical future point where you get AIs with the same general learning and planning abilities that humans have, as well as superintelligent machines. These are two different contexts that require attention.

Today, the more imminent threat isn't from a superintelligence, but from the useful, yet potentially dangerous, applications AI is used for presently.

How is AI dangerous?

If we focus on what's possible today with AI, here are some of the potential negative impacts of artificial intelligence that we should consider and plan for:

Change the jobs humans do/job automation: AI will change the workplace and the jobs that humans do. Some jobs will be lost to AI technology, so humans will need to embrace the change and find new activities that provide them with the social and mental benefits their jobs provided.

Political, legal, and social ramifications: As Bostrom advises, rather than avoid pursuing AI innovation, "Our focus should be on putting ourselves in the best possible position so that when all the pieces fall into place, we've done our homework. We've developed scalable AI control methods, we've thought hard about the ethics and the governments, etc. And then proceed further and then hopefully have an extremely good outcome from that." If our governments and business institutions don't spend time now formulating rules, regulations, and responsibilities, there could be significant negative ramifications as AI continues to mature.

AI-enabled terrorism: Artificial intelligence will change the way conflicts are fought, with autonomous drones, robotic swarms, and remote and nanorobot attacks. In addition to being concerned with a nuclear arms race, we'll need to monitor the global autonomous weapons race.

Social manipulation and AI bias: So far, AI is still at risk of being biased by the humans who build it. If there is bias in the data sets the AI is trained on, that bias will affect AI action. In the wrong hands, AI can be used, as it was in the 2016 U.S. presidential election, for social manipulation and to amplify misinformation.

AI surveillance: AI's face recognition capabilities give us conveniences such as being able to unlock phones and gain access to a building without keys, but they have also launched what many civil liberties groups believe is alarming surveillance of the public. In China and other countries, the police and government are invading public privacy by using face recognition technology. Bostrom explains that AI's ability to monitor global information systems from surveillance data, cameras, and mining social network communication has great potential for good and for bad.

Deepfakes: AI technology makes it very easy to create "fake" videos of real people. These can be used without an individual's permission to spread fake news, create pornography in the likeness of a person who isn't actually in it, and more, damaging not only an individual's reputation but also their livelihood. The technology is getting so good that the possibility of people being duped by it is high.

As Nick Bostrom explained, "The biggest threat is the longer-term problem: introducing something radical that's superintelligent and failing to align it with human values and intentions. This is a big technical problem. We'd succeed at solving the capability problem before we succeed at solving the safety and alignment problem."

Today, Nick describes himself as a "frightful optimist" who is very excited about what AI can do if we get it right. He said, "The near-term effects are just overwhelmingly positive. The longer-term effect is more of an open question and is very hard to predict. If we do our homework, and the more we get our act together as a world and a species in whatever time we have available, the better we are prepared for this, the better the odds for a favorable outcome. In that case, it could be extremely favorable."

For more on AI and other technology trends, see Bernard Marr's new book Tech Trends in Practice: The 25 Technologies That Are Driving The 4th Industrial Revolution, which is available to pre-order now.

Here is the original post:

Is Artificial Intelligence (AI) A Threat To Humans? - Forbes

Elon Musk dings Bill Gates and says their conversations were underwhelming, after the Microsoft billionaire buys an electric Porsche – Pulse Nigeria

No one is safe from Elon Musk's barbs, it seems, not even Bill Gates.

Elon Musk dissed Gates in a tweet sent Tuesday, claiming his conversations with the Microsoft founder had been "underwhelming."

Musk made the remark after an unofficial Tesla news account expressed disappointment with Gates' recent decision to buy a Porsche Taycan instead of a Tesla.

The Porsche Taycan is the German automaker's first all-electric vehicle and represents a direct rival to many of Tesla's models. Its starting price is $103,800.

Gates said he'd ordered the "very, very cool" vehicle during an interview with YouTuber Marques Brownlee, published Friday.

"That's my first electric car, and I'm enjoying it a lot," he said.

During the interview, the 64-year-old tech grandee discussed the state of electric cars in general, noting that their range still falls below that of traditional gasoline vehicles. Consumers may experience "anxiety" about this when buying one, he said.

Still, Gates and Musk have more insights in common than the Tesla CEO might like to admit.

They have both, for example, spoken about the dangers posed by artificial intelligence.

Both men have endorsed a book by Oxford philosophy professor Nick Bostrom, "Superintelligence," which warns of the risks to human life posed by AI.

Musk said the book was "worth reading" in a 2014 tweet, while Gates endorsed the book in a 2015 interview with Baidu CEO Robin Li.


Originally posted here:

Elon Musk dings Bill Gates and says their conversations were underwhelming, after the Microsoft billionaire buys an electric Porsche - Pulse Nigeria

Thinking Beyond Flesh and Bones with AI – Ghana Latest Football News, Live Scores, Results – Ghanasoccernet.com

"The best way to predict the future is to invent it," goes the quote. If you are someone who is interested in discovering and inventing things, then "Artificial Intelligence" is the right domain for you. It will not only make your life interesting, but you will also be able to make other lives simple and easy!

What does thinking beyond bones and flesh mean? Artificial intelligence is not just about inventing robots and replacing humans; it is also about replacing the slog of every hard activity. For example, AI can be used in different areas such as medicine, civil engineering, military services, machine learning, and other fields. To put it simply, artificial intelligence enables computers or software to think wisely about how a person behaves. As a result, the field is vast, and you can have your hands on whichever lane seems alluring to you.

The ultimate goal of AI is to achieve human goals through computer programming! AI is about mimicking human intelligence with a computer program and a little help from data: the way we think, act, and respond to problems.

One of the most significant applications of AI is the new invention of Israeli military soldier robots, which are used as soldiers in place of men and women. This, in turn, is not only effective, but it also reduces the loss of life caused by each war. Its design also minimizes damage to the robot itself. A sensitive, but knowledgeable and useful, invention! Therefore, the future of the world depends on how easily any work can be done, and our future is nothing less than Artificial Intelligence!

Now, let us see how many types of AI there are!

Artificial Narrow Intelligence (ANI)

The concept of ANI generally means the flow of designing a computer or machine to perform a single task with high intelligence. It understands the individual tasks that must be performed efficiently. It is considered the most rudimentary concept of AI.

Artificial Superintelligence (ASI)

Artificial superintelligence is a stage at which intelligence is more powerful and sophisticated than human intelligence. While human intelligence is considered highly capable and adaptive, superintelligence can surpass it.

It will be able to perform abstractions that are impossible for human minds even to conceive. The human brain is constrained to on the order of a hundred billion neurons.

Artificial intelligence has the ability to mimic human thought. ASI goes a step beyond, acquiring cognitive abilities superior to those of humans.

Artificial General Intelligence (AGI)

As the name suggests, it is designed for general purposes. Its smartness can be applied to a variety of tasks, and it can learn and improve itself. It is as intelligent as a human brain. Unlike ANI, it can improve its own performance.

E.g.: AlphaGo. It is currently used only to play the game of Go, but its intelligence could be applied at various levels and in various fields.

Scope of AI

The global demand for experts with relevant AI knowledge has doubled in the past three years and will continue to increase in the future. There are many more options in voice recognition, expert systems, AI-enabled equipment, and more.

Artificial intelligence is the future. So, why not contribute to the future of the planet? In recent years, AI jobs have increased by almost 129%. In the United States alone, the demand for AI-related jobs is as high as 4,000!

Well, to catch the lightning opportunity present in AI, you need a bachelor's degree in computer science, data science, information science, math, etc. If you are an undergraduate, you can easily get a job in the AI domain with a reputed online certification course on AI. Doing this, you can earn anywhere between 600,000 and 1,000,000 rupees in India! In the United States, you can get US$50,000 to US$100,000.

In this smart world, it's easy to find any online certification courses. Some online courses may only focus on the simple foundations of AI, while others offer professional courses, etc. All you have to do is choose the lane you want to follow and start your route.

You would be glad to know that Intellipaat offers an industry-leading AI course program that has been meticulously designed to industry standards and is conducted by SMEs. This will not only enhance your knowledge but also help you apply it in the field.

You need to master certain skills to shine in this field, such as programming and robotics, along with application areas like autonomous cars and space research. You will also be required to gain special skills in mathematics, statistics, analytics, and engineering. Good communication skills are always appreciated if you aspire to be in the business field, in order to explain things and get the right message to the people out there.

Learners fascinated by the profession of artificial intelligence will discover numerous options in the field. Up-and-coming careers in AI can be pursued in a variety of environments, such as finance, government, private agencies, healthcare, the arts, research, agriculture, and more. The range of jobs and opportunities in AI is very wide.

See more here:

Thinking Beyond Flesh and Bones with AI - Ghana Latest Football News, Live Scores, Results - Ghanasoccernet.com

Liquid metal tendons could give robots the ability to heal themselves – Digital Trends

Since fans first clapped eyes on the T-1000, the shape-shifting antagonist from 1991's Terminator 2: Judgment Day, many people have been eagerly anticipating the day when liquid metal robots become a reality. And by eagerly anticipating, we mean had the creeping sense that such a thing is a Skynet eventuality, so we might as well make the best of it.

Jump forward to the closing days of 2019 and, while robots haven't quite advanced to the level of the 2029 future sequences seen in T2, scientists are getting closer. In Japan, roboticists from the University of Tokyo's JSK Lab have created a prototype robot leg with a metal tendon fuse that's able to repair fractures. How does it do this? Simple: by autonomously melting itself and then reforming as a single piece. The work was presented at the recent 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

The self-healing module is composed of two halves that are connected via magnets and springs. Each half of the module is filled with an alloy with a low melting point of just 50 degrees Celsius (122 degrees Fahrenheit). When the fuse breaks, the cartridges heat, melting the alloy and allowing the two halves to fuse together again. While the re-fused joints are not as strong as they were before the break took place, the researchers have observed that gently vibrating the joint during melting and reforming results in a joint with up to 90% of its original strength. This could be further optimized in the future.

It's still very early in the development process. But the ultimate ambition is to develop ways for robots to better heal themselves, rather than having to rely on external tools to do so. Since roboticists regularly borrow from nature for biomimetic solutions to problems, the idea of robots that can heal like biological creatures makes a lot of sense.

Just like breakthroughs in endeavors like artificial muscles and continued research toward creating superintelligence, it takes us one step closer to the world envisioned in Terminator. Where's John "savior of all humanity" Connor when you need him?

Link:

Liquid metal tendons could give robots the ability to heal themselves - Digital Trends

NIU expert: 4 leaps in technology to expect in the 2020s | NIU – NIU Newsroom

DeKalb, Ill. Autopilot automobiles, wearable devices, services such as Uber and Lyft. Technological advances in the 2010s made headlines, and some made their way into our everyday lives.

So what should we expect from the roaring 2020s?

We put that question to NIU Professor David Gunkel, a communication technology expert and author of Robot Rights and How to Survive a Robot Invasion. Gunkel pointed to four areas where technology is poised to make an impact on the coming decade.

Robots: "By the mid-2020s, robots of one kind or another will be everywhere and doing virtually everything," Gunkel says. "This robot invasion will not transpire as we have imagined it in our science fiction, with a marauding army of evil-minded androids either descending from the heavens or rising up in revolt against their human masters. It will look less like Blade Runner, Terminator or Westworld and more like the Fall of Rome, as machines of various configurations and capabilities come to take up influential positions in our world through a slow but steady incursion."

Artificial Intelligence: Innovations in artificial intelligence, especially with deep-learning algorithms, made great strides in the previous decade. The 2020s will see AI in everything, from our handheld mobile devices to self-driving vehicles. These will be very capable but highly specialized AIs. We are creating a world full of idiot savants that will control every aspect of our lives. This might actually be more interesting, and possibly more terrifying, than superintelligence.

Things that Talk: In 2018, Amazon put Alexa in the toilet when it teamed up with Kohler at the Consumer Electronics Show. Manufacturers of these digital voice assistants, which also include the likes of Siri, Google Assistant and Bixby, are currently involved in an arms race to dominate the voice-activated, screenless Internet of the future. By mid-decade, everything will be talking to us, which will dramatically change how we think about social interaction. But these devices will also be listening to what we say and sharing all this personal data with their parent corporations.

The Empires Strike Back: This past year has seen unprecedented investment in AI ethics and governance. The 2020s will see amplification of this effort as stakeholders in Europe, China and North America compete to dominate the AI policy and governance market. Europe might be the odds-on favorite, since it was first to exit the starting block, but China and the U.S. are not far behind. The technology of AI might be global in scope and controlled by borderless multinationals. But tech policy and governance is still a matter of nation states, and the 2020s will see increasing involvement as the empires strike back.

Media Contact: Tom Parisi

About NIU

Northern Illinois University is a student-centered, nationally recognized public research university, with expertise that benefits its region and spans the globe in a wide variety of fields, including the sciences, humanities, arts, business, engineering, education, health and law. Through its main campus in DeKalb, Illinois, and education centers for students and working professionals in Chicago, Hoffman Estates, Naperville, Oregon and Rockford, NIU offers more than 100 areas of study while serving a diverse and international student body.

Continued here:

NIU expert: 4 leaps in technology to expect in the 2020s | NIU - NIU Newsroom

Playing Tetris Shows That True AI Is Impossible – Walter Bradley Center for Natural and Artificial Intelligence

Hi there! I recently put together an electroencephalogram (EEG), or in normal words, a brain wave reader, so you can see what goes on inside my brain!

I received a kit from OpenBCI, a successful Kickstarter project to make inexpensive brain wave readers available to the masses. Here's what it looks like:

Yes, it looks like something Calvin and Hobbes would invent.

Here is how it looks on my head:

A number of electrodes are touching my scalp and a wire is connected to my ear. The layout on my head looks like the following schematic:

The EEG is measuring the voltage between different points on my scalp and my earlobe. The positions on my scalp are receiving a current from my brain while my earlobe acts as the ground. The EEG is essentially a multimeter for my brain.

Brain waves are generated by ions building up inside the neurons. Once the neurons reach capacity, they release the ions in a cascade across the brain. This leads to the wave effect.

So can I see any connection between my brain waves and what I'm consciously experiencing in my mind?

To test that, inspired by the EEG hacker blog, I generated a graphic known as a spectrogram of my brain waves across a set of activities.

The spectrogram shows the range of brainwave frequencies in my brain at a given point in time. In the following plots, the horizontal axis is time, and the vertical axis is frequency. There are some artifacts in the plots, such as a middle band and a big pink blotch, so don't take all patterns as significant. The important thing to note is the overall texture of the plot.

The greens and reds are low-amplitude frequencies, and the blue and magenta are high-amplitude frequencies, meaning those brain waves are stronger. The spectrogram is generated from the readings of the #1 electrode in the schematic above.

I performed three different activities to see how they affect the spectrogram. Results and code are provided at https://github.com/yters/eeg_tetris.
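Conceptually, a spectrogram like the ones above is just a short-time Fourier transform of the raw electrode voltages: slice the recording into windows and estimate the power at each frequency in each window. The author's actual code lives in the linked repo; as a rough, self-contained sketch of the same idea, it might look like the following (note the assumptions: a synthetic signal stands in for real OpenBCI data, and a 250 Hz sample rate is assumed).

```python
import numpy as np
from scipy.signal import spectrogram

fs = 250  # assumed sample rate in Hz (OpenBCI's Cyton board samples at 250 Hz)

# Synthetic stand-in for one minute of single-electrode EEG data:
# a 10 Hz alpha-band rhythm buried in broadband noise.
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# Short-time Fourier analysis: power at each frequency, in each
# one-second window of the recording.
freqs, times, power = spectrogram(signal, fs=fs, nperseg=fs)

# Render it as in the plots above: time on the horizontal axis,
# frequency on the vertical, color encoding amplitude (in dB).
# import matplotlib.pyplot as plt
# plt.pcolormesh(times, freqs, 10 * np.log10(power), shading="auto")
# plt.xlabel("Time (s)"); plt.ylabel("Frequency (Hz)"); plt.show()
```

With real data, the 10 Hz test tone would show up as a horizontal band, and a cognitively harder task would shift the overall texture toward stronger (bluer/magenta) values, as described below.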

First, I just absentmindedly tapped the Enter key on my keyboard. I did not focus on anything in particular, just pressed Enter whenever I felt like it. This is the EEG spectrogram that random tapping generated:

Second, I played a game of Tetris on very slow speed, using a GitHub repo.

Heres a video of the game speed:

This is the corresponding spectrogram:

Finally, I played Tetris much faster, and the spectrogram looked like this:

You can watch a video of the game speed here:

The big difference is that, as my activity became cognitively more difficult, the spectrogram became more blue and magenta, meaning that my brain waves became stronger.

What does this mean? It means that, at least at a high level, I can measure how cognitively difficult a mental task is.

Another interesting thing is the direction of causality. The intensity of my mental processing brought about an observable brain state. The causality did not go in the other direction; the magenta brain state did not increase my conscious process.

So my subjective mental experience brought about a change in my physical brain. In other words, my consciousness has a causal impact on my physical processing unit, the brain.

This type of observation causes a problem for those hoping to duplicate human intelligence in a computer program. This Tetris EEG experiment shows that conscious thought is essential to human intelligence. So, until we make conscious computers, which will most likely never happen, we will not have computers that display human intelligence.

Update: Someone online suggested it might just be my facial muscle tension. So I tested out the idea by recording while I tensed my brow (where the electrode is placed). https://github.com/yters/eeg_tetris

The result looked no different than the tapping EEG, so I consider the "just facial tension" hypothesis falsified.

If you enjoyed this item, here are some of Eric Holloway's other reflections on human consciousness and computer intelligence:

No materialist theory of consciousness is plausible: All such theories either deny the very thing they are trying to explain, result in absurd scenarios, or end up requiring an immaterial intervention

We need a better test for AI intelligence: Better than Turing or Lovelace. The difficulty is that intelligence, like randomness, is mathematically undefinable

and

Will artificial intelligence design artificial superintelligence? And then turn us all into super-geniuses, as some AI researchers hope? No, and here's why not

Read the original:

Playing Tetris Shows That True AI Is Impossible - Walter Bradley Center for Natural and Artificial Intelligence

AI R&D is booming, but general intelligence is still out of reach – The Verge

Trying to get a handle on the progress of artificial intelligence is a daunting task, even for those enmeshed in the AI community. But the latest edition of the AI Index report, an annual rundown of machine learning data points now in its third year, does a good job confirming what you probably already suspected: the AI world is booming across a range of metrics covering research, education, and technical achievements.

The AI Index covers a lot of ground, so much so that its creators, which include institutions like Harvard, Stanford, and OpenAI, have also released two new tools just to sift through the information they sourced. One tool is for searching AI research papers and the other is for investigating country-level data on research and investment.

Most of the 2019 report basically confirms the continuation of trends we've highlighted in previous years. But to save you from having to trudge through its 290 pages, here are some of the more interesting and pertinent points:

All this is impressive, but one big caveat applies: no matter how fast AI improves, it's never going to match the achievements accorded to it by pop culture and hyped headlines. This may seem pedantic or even obvious, but it's worth remembering that, while the world of artificial intelligence is booming, AI itself is still limited in some important ways.

The best demonstration of this comes from a timeline of human-level performance milestones featured in the AI Index report; a history of moments when AI has matched or surpassed human-level expertise.

The timeline starts in the 1990s, when programs first beat humans at checkers and chess, and accelerates with the recent machine learning boom, listing video games and board games where AI came, saw, and conquered (Go in 2016, Dota 2 in 2018, etc.). This is mixed with miscellaneous tasks like human-level classification of skin cancer images in 2017 and Chinese-to-English translation in 2018. (Many experts would take issue with that last achievement being included at all, and note that AI translation is still way behind humans.)

And while this list is impressive, it shouldn't lead you to believe that AI superintelligence is nigh.

For a start, the majority of these milestones come from defeating humans in video games and board games, domains that, because of their clear rules and easy simulation, are particularly amenable to AI training. Such training usually relies on AI agents sinking many lifetimes' worth of work into a single game, training hundreds of years in a solar day: a fact that highlights how quickly humans learn compared to computers.

Similarly, each achievement was set in a single domain. With very few exceptions, AI systems trained at one task can't transfer what they've learned to another. A superhuman StarCraft II bot would lose to a five-year-old playing chess. And while an AI might be able to spot breast cancer tumors as accurately as an oncologist, it can't do the same for lung cancer (let alone write a prescription or deliver a diagnosis). In other words: AI systems are single-use tools, not flexible intelligences that can stand in for humans.

But (and yes, there's another but) that doesn't mean AI isn't incredibly useful. As this report shows, despite the limitations of machine learning, it continues to accelerate in terms of funding, interest, and technical achievements.

When thinking about AI's limitations and promise, it's good to remember the words of machine learning pioneer Andrew Ng: "If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future." We're just beginning to find out what happens when those seconds are added up.

Read the rest here:

AI R&D is booming, but general intelligence is still out of reach - The Verge

Doubting The AI Mystics: Dramatic Predictions About AI Obscure Its Concrete Benefits – Forbes


Artificial intelligence is advancing rapidly. In a few decades machines will achieve superintelligence and become self-improving. Soon after that happens we will launch a thousand ships into space. These probes will land on distant planets, moons, asteroids, and comets. Using AI and terabytes of code, they will then nanoassemble local particles into living organisms. Each probe will, in fact, contain the information needed to create an entire ecosystem. Thanks to AI and advanced biotechnology, the species in each place will be tailored to their particular plot of rock. People will thrive in low temperatures, dim light, high radiation, and weak gravity. Humanity will become an incredibly elastic concept. In time our distant progeny will build megastructures that surround stars and capture most of their energy. Then the power of entire galaxies will be harnessed. Then life and AI, long a common entity by this point, will construct a galaxy-sized computer. It will take a mind that large about a hundred thousand years to have a thought. But those thoughts will pierce the veil of reality. They will grasp things as they really are. All will be one. This is our destiny.

Then again, maybe not.

There are, of course, innumerable reasons to reject this fantastic tale out of hand. Here's a quick and dirty one built around Copernicus's discovery that we are not the center of the universe. Most times, places, people, and things are average. But if sentient beings from Earth are destined to spend eons multiplying and spreading across the heavens, then those of us alive today are special. We are among the very few of our kind to live in our cosmic infancy, confined in our planetary cradle. Because we probably are not special, we probably are not at an extreme tip of the human timeline; we're likely somewhere in the broad middle. Perhaps a hundred billion modern humans have existed, across a span of around 50,000 years. To claim in the teeth of these figures that our species is on the cusp of spending millions of years spreading trillions of individuals across this galaxy and others, you must engage in some wishful thinking. You must embrace the notion that we today are, in a sense, back at the center of the universe.

It is in any case more fashionable to speculate about imminent catastrophes. Technology again looms large. In the gray goo scenario, runaway self-replicating nanobots consume all of the Earth's biomass. Thinking along similar lines, philosopher Nick Bostrom imagines an AI-enhanced paperclip machine that, ruthlessly following its prime directive to make paperclips, liquidates mankind and converts the planet into a giant paperclip mill. Elon Musk, when he discusses this hypothetical, replaces paperclips with strawberries, so that he can worry about strawberry fields forever. What Bostrom and Musk are driving at is the fear that an advanced AI being will not share our values. We might accidentally give it a bad aim (e.g., paperclips at all costs). Or it might start setting its own aims. As Stephen Hawking noted shortly before his death, a machine that sees your intelligence the way you see a snail's might decide it has no need for you. Instead of using AI to colonize distant planets, we will use it to destroy ourselves.

When someone mentions AI these days, she is usually referring to deep neural networks. Such networks are far from the only form of AI, but they have been the source of most of the recent successes in the field. A deep neural network can recognize a complex pattern without relying on a large body of pre-set rules. It does this with algorithms that loosely mimic how a human brain tunes neural pathways.

The neurons, or units, in a deep neural network are layered. The first layer is an input layer that breaks incoming data into pieces. In a network that looks at black-and-white images, for instance, each of the first layer's units might link to a single pixel. Each input unit in this network will translate its pixel's grayscale brightness into a number. It might turn a white pixel into zero, a black pixel into one, and a gray pixel into some fraction in between. These numbers will then pass to the next layer of units. Each of the units there will generate a weighted sum of the values coming in from several of the previous layer's units. The next layer will do the same thing to that second layer, and so on through many layers more. The deeper the layer, the more pixels accounted for in each weighted sum.
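The flow just described (pixels mapped to numbers, then weighted sums stacked layer on layer) can be sketched in a few lines. This is a minimal illustration only: the weights below are random placeholders, not trained values, and the layer sizes are made up.

```python
import numpy as np

def pixel_to_input(gray):
    """Map an 8-bit grayscale value to [0, 1]: white (255) -> 0, black (0) -> 1."""
    return 1.0 - gray / 255.0

def layer(values, weights, bias):
    """One layer: each unit emits a weighted sum of the previous layer's values.
    The max(0, .) keeps only units that 'fire'."""
    return np.maximum(0.0, weights @ values + bias)

# A tiny 4-pixel "image" passed through two layers of placeholder weights.
rng = np.random.default_rng(0)
x = np.array([pixel_to_input(p) for p in [255, 0, 128, 0]])  # white, black, gray, black
h = layer(x, rng.normal(size=(3, 4)), np.zeros(3))  # middle layer: 3 units
y = layer(h, rng.normal(size=(2, 3)), np.zeros(2))  # output layer: 2 units
```

A real image classifier works the same way, just with millions of pixels, many more layers, and weights learned from data rather than drawn at random.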

An early-layer unit will produce a high weighted sum (it will fire, like a neuron does) for a pattern as simple as a black pixel above a white pixel. A middle-layer unit will fire only when given a more complex pattern, like a line or a curve. An end-layer unit will fire only when the pattern (or, rather, the weighted sums of many other weighted sums) presented to it resembles a chair or a bonfire or a giraffe. At the end of the network is an output layer. If one of the units in this layer reliably fires only when the network has been fed an image with a giraffe in it, the network can be said to recognize giraffes.

A deep neural network is not born recognizing objects. The network just described would have to learn from pre-labeled examples. At first the network would produce random outputs. Each time the network did this, however, the correct answers for the labeled image would be run backward through the network. An algorithm would be used, in other words, to move the network's unit weighting functions closer to what they would need to be to recognize a given object. The more samples a network is fed, the more finely tuned and accurate it becomes.
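The feedback loop can be sketched with a toy version: a single unit, a made-up target pattern, and plain gradient descent standing in for the full backward pass. Everything here (the data, the pattern, the learning rate) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy labelled data: 20 "images" of 4 features each. The label is 1 when a
# made-up target pattern (feature 0 greater than feature 1) is present.
X = rng.normal(size=(20, 4))
labels = (X[:, 0] > X[:, 1]).astype(float)

w = np.zeros(4)   # the unit's weighting function, initially uninformative
lr = 0.5
for _ in range(200):
    pred = 1.0 / (1.0 + np.exp(-(X @ w)))  # how strongly the unit "fires", in (0, 1)
    grad = X.T @ (pred - labels) / len(X)  # error run backward through the unit
    w -= lr * grad                         # nudge weights toward the correct answers

pred = 1.0 / (1.0 + np.exp(-(X @ w)))
accuracy = float(((pred > 0.5) == (labels > 0.5)).mean())
```

The more samples and passes, the closer the weights settle on the pattern, which is the "more finely tuned and accurate" behavior described above.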

Some deep neural networks do not need spoon-fed examples. Say you want a program equipped with such networks to play chess. Give it the rules of the game, instruct it to seek points, and tell it that a checkmate is worth a hundred points. Then have it use a Monte Carlo method to randomly simulate games. Through trial and error, the program will stumble on moves that lead to a checkmate, and then on moves that lead to moves that lead to a checkmate, and so on. Over time the program will assign value to moves that simply tend to lead toward a checkmate. It will do this by constantly adjusting its networks' unit weighting functions; it will just use points instead of correctly labeled images. Once the networks are trained, the program can win discrete contests in much the way it learned to play in the first place. At each of its turns, the program will simulate games for each potential move it is considering. It will then choose the move that does best in the simulations. Thanks to constant fine-tuning, even these in-game simulations will get better and better.
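The trial-and-error half of that recipe can be shown with something far simpler than chess. The sketch below uses a hypothetical toy game (take 1 to 3 counters; whoever takes the last one wins) and purely random rollouts, with no neural network at all; real systems like AlphaZero pair this kind of simulation with learned networks that guide it.

```python
import random

def rollout(counters, to_move):
    """Finish the game with purely random moves.
    Returns the winner (0 or 1): the player who takes the last counter."""
    while True:
        counters -= random.randint(1, min(3, counters))
        if counters == 0:
            return to_move
        to_move = 1 - to_move

def best_move(counters, sims=500):
    """Monte Carlo move choice for player 0: simulate each legal move many
    times and pick the one with the highest estimated win rate."""
    scores = {}
    for take in range(1, min(3, counters) + 1):
        if counters - take == 0:
            scores[take] = 1.0  # taking the last counter wins outright
        else:
            wins = sum(rollout(counters - take, to_move=1) == 0 for _ in range(sims))
            scores[take] = wins / sims
    return max(scores, key=scores.get)
```

Moves that merely tend to lead toward a win accumulate higher scores across simulations, which is exactly the value assignment the paragraph describes.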

There is a chess program that operates more or less this way. It is called AlphaZero, and at present it is the best chess player on the planet. Unlike other chess supercomputers, it has never seen a game between humans. It learned to play by spending just a few hours simulating moves with itself. In 2017 it played a hundred games against Stockfish 8, one of the best chess programs to that point. Stockfish 8 examined 70 million moves per second. AlphaZero examined only 80,000. AlphaZero won 28 games, drew 72, and lost zero. It sometimes made baffling moves (to humans) that turned out to be masterstrokes. AlphaZero is not just a chess genius; it is an alien chess genius.

AlphaZero is at the cutting edge of AI, and it is very impressive. But its success is not a sign that AI will take us to the stars (or enslave us) any time soon. In Artificial Intelligence: A Guide For Thinking Humans, computer scientist Melanie Mitchell makes the case for AI sobriety. AI currently excels, she notes, only when there are clear rules, straightforward reward functions (for example, rewards for points gained or for winning), and relatively few possible actions (moves). Take IBM's Watson program. In 2011 it crushed the best human competitors on the quiz show Jeopardy!, leading IBM executives to declare that its successors would soon be making legal arguments and medical diagnoses. It has not worked out that way. "Real-world questions and answers in real-world domains," Mitchell explains, "have neither the simple short structure of Jeopardy! clues nor their well-defined responses."

Even in the narrow domains that most suit it, AI is brittle. A program that is a chess grandmaster cannot compete on a board with a slightly different configuration of squares or pieces. Unlike humans, Mitchell observes, none of these programs can transfer anything it has learned about one game to help it learn a different game. Because the programs cannot generalize or abstract from what they know, they can function only within the exact parameters in which they have been trained.

A related point is that current AI does not understand even basic aspects of how the world works. Consider this sentence: "The city council refused the demonstrators a permit because they feared violence." Who feared violence, the city council or the demonstrators? Using what she knows about bureaucrats, protestors, and riots, a human can spot at once that the fear resides in the city council. When AI-driven language-processing programs are asked this kind of question, however, their responses are little better than random guesses. "When AI can't determine what 'it' refers to in a sentence," Mitchell writes, quoting computer scientist Oren Etzioni, "it's hard to believe that it will take over the world."

And it is not accurate to say, as many journalists do, that a program like AlphaZero "learns by itself." Humans must painstakingly decide how many layers a network should have, how much incoming data should link to each input unit, how fast data should aggregate as it passes through the layers, how much each unit weighting function should change in response to feedback, and much else. "These settings and designs," adds Mitchell, "must typically be decided anew for each task a network is trained on." It is hard to see nefarious unsupervised AI on the horizon.

The doom camp (AI will murder us) and the rapture camp (it will take us into the mind of God) share a common premise. Both groups extrapolate from past trends of exponential progress. Moore's law, which is not really a law but an observation, says that the number of transistors we can fit on a computer chip doubles every two years or so. This enables computer processing speeds to increase at an exponential rate. The futurist Ray Kurzweil asserts that this trend of accelerating improvement stretches back to the emergence of life, the appearance of eukaryotic cells, and the Cambrian Explosion. Looking forward, Kurzweil sees an AI singularity (the rise of self-improving machine superintelligence) on the trendline around 2045.
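The extrapolation both camps lean on is simple exponential arithmetic. A quick sketch of the doubling rule (the two-year period is Moore's rough figure, not an exact constant):

```python
def transistors(n0, years, doubling_period=2.0):
    """Exponential growth: the count doubles every `doubling_period` years."""
    return n0 * 2 ** (years / doubling_period)

# Ten doublings over twenty years multiply the starting count by 1,024.
```

The rhetorical force of singularity timelines comes from compounding that factor over decades; whether the underlying trend actually continues is the contested part.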

The political scientist Philip Tetlock has looked closely at whether experts are any good at predicting the future. The short answer is that they're terrible at it. But they're not hopeless. Borrowing an analogy from Isaiah Berlin, Tetlock divides thinkers into hedgehogs and foxes. A hedgehog knows one big thing, whereas a fox knows many small things. A hedgehog tries to fit what he sees into a sweeping theory. A fox is skeptical of such theories. He looks for facts that will show he is wrong. A hedgehog gives answers and says "moreover" a lot. A fox asks questions and says "however" a lot. Tetlock has found that foxes are better forecasters than hedgehogs. The more distant the subject of the prediction, the more the hedgehog's performance lags.

Using a theory of exponential growth to predict an impending AI singularity is classic hedgehog thinking. It is a bit like basing a prediction about human extinction on nothing more than the Copernican principle. Kurzweil's vision of the future is clever and provocative, but it is also hollow. It is almost as if huge obstacles to general AI will soon be overcome because the theory says so, rather than because the scientists on the ground will perform the necessary miracles. Gordon Moore himself acknowledges that his law will not hold much longer. (Quantum computers might pick up the baton. We'll see.) Regardless, increased processing capacity might be just a small piece of what's needed for the next big leaps in machine thinking.

When at Thanksgiving dinner you see Aunt Jane sigh after Uncle Bob tells a blue joke, you can form an understanding of what Jane thinks about what Bob thinks. For that matter, you get the joke, and you can imagine analogous jokes that would also annoy Jane. You can infer that your cousin Mary, who normally likes such jokes but is not laughing now, is probably still angry at Bob for spilling the gravy earlier. You know that although you can't see Bob's feet, they exist, under the table. No deep neural network can do any of this, and it's not at all clear that more layers or faster chips or larger training sets will close the gap. We probably need further advances that we have only just begun to contemplate. "Enabling machines to form humanlike conceptual abstractions," Mitchell declares, "is still an almost completely unsolved problem."

There has been some concern lately about the demise of the corporate laboratory. Mitchell gives the impression that, at least in the technology sector, the corporate basic-research division is alive and well. Over the course of her narrative, labs at Google, Microsoft, Facebook, and Uber make major breakthroughs in computer image recognition, decision making, and translation. In 2013, for example, researchers at Google trained a network to create vectors among a vast array of words. A vector set of this sort enables a language-processing program to define and use a word based on the other words with which it tends to appear. The researchers put their vector set online for public use. Google is in some ways the protagonist of Mitchell's story. It is now an "applied AI company," in Mitchell's words, that has placed machine thinking at the center of diverse products, services, and blue-sky research.
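The word-vector idea can be illustrated with made-up toy vectors (real sets are learned from huge corpora and use hundreds of dimensions): words that appear in similar contexts end up with nearby vectors, so relatedness can be measured by the angle between them.

```python
import numpy as np

# Hypothetical 4-dimensional vectors, invented for illustration.
vecs = {
    "king":   np.array([0.9, 0.8, 0.1, 0.2]),
    "queen":  np.array([0.9, 0.1, 0.8, 0.2]),
    "banana": np.array([0.0, 0.1, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity of two word vectors: 1.0 means the same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" should sit much closer to "queen" than to "banana".
royal = cosine(vecs["king"], vecs["queen"])
fruit = cosine(vecs["king"], vecs["banana"])
```

A program equipped with such a vector set can "define" a word purely by its neighbors, which is exactly the capability the Google researchers published.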

Google has hired Ray Kurzweil, a move that might be taken as an implicit endorsement of his views. It is pleasing to think that many Google engineers earnestly want to bring on the singularity. The grand theory may be illusory, but the treasures produced in pursuit of it will be real.

More:

Doubting The AI Mystics: Dramatic Predictions About AI Obscure Its Concrete Benefits - Forbes

Melissa McCarthy And Ben Falcone Have Decided To Release ‘Superintelligence’ Via HBO Max Ins – Science Fiction


The new Melissa McCarthy sci-fi comedy Superintelligence will not open theatrically as planned. Instead, the comedian and her director husband, Ben Falcone, have decided to release the movie via the new HBO Max streaming service. Superintelligence had been slated for release during the busy holiday season, on December 20, but the pair chose a different route, at least in part to reach a wider audience.

McCarthy told Deadline:

It was actually Ben's idea, it came from the filmmaker himself. We had a release date, a full marketing plan, and I had all my press lined up. We were really ready to go. When the announcement came that HBO Max was really happening, Ben had this idea. And we thought, is this better? Different doesn't mean worse, and how are we watching films ourselves? To us, each movie is near and dear to our hearts. You just want people to see it and love it and you want them to feel good. Superintelligence at its core is, love wins, and people matter. I want that to get to as many people as it can. We need that today, and this seemed like the best way to do it. So no, [this wasn't imposed on us]. We were ready to go the other way, and we decided to make the detour.

Falcone added:

Honestly, you can release a mid-budget movie, and if we'd stayed in the theaters, we could have done incredibly well. There still are those examples of movies like this one that do. But for this movie, at this time, we felt like it was the best way to go. The PG rating, the fact they are starting this thing. All these streaming services are starting, and here, we are up there with Sesame Street, and Meryl Streep and JJ Abrams and Hugh Jackman and Jordan Peele. There are cool people doing this. So following my fear-based mentality, I thought it was the best move.

In addition to Superintelligence, HBO Max will also be offering Let Them All Talk from Steven Soderbergh and starring Meryl Streep, Greg Berlanti's Unpregnant, and Bad Education starring Hugh Jackman and Allison Janney, which HBO paid $17 million to acquire.

Carol Peters' life is turned upside down when she is selected for observation by the world's first superintelligence, a form of artificial intelligence that may or may not take over the world.

Superintelligence also stars Bobby Cannavale, Jean Smart, Michael Beach, Brian Tyree Henry, and the voice of James Corden as the titular Superintelligence. The release will now be delayed, as HBO Max isn't expected to launch until next spring.

Falcone and McCarthy are re-teaming for Thunder Force for Netflix, which also stars Octavia Spencer.


Read more:

Melissa McCarthy And Ben Falcone Have Decided To Release 'Superintelligence' Via HBO Max Ins - Science Fiction

Melissa McCarthy & Director Ben Falcone On Choosing HBO Max Bow Instead Of WB Xmas Release For Superintelligence – Deadline

EXCLUSIVE: In a move that could become more common as major studios lean in heavily toward their streaming launches, the Ben Falcone-directed Melissa McCarthy-starrer Superintelligence has exited its December 20 theatrical release date to instead become the first Warner Bros Pictures Group film to premiere on HBO Max.

This comes before an HBO Max presentation on October 29 where it is expected that other projects might become part of a streamer launch slate that now will have Superintelligence; the Steven Soderbergh-directed Meryl Streep-starrer Let Them All Talk; the Greg Berlanti-produced YA novel adaptation Unpregnant; and sooner or later Bad Education, the Hugh Jackman/Allison Janney-starrer bought at Toronto for north of $17 million to bow on HBO. The original programming will be part of a service that launches with WarnerMedia's own library titles including Friends and The Big Bang Theory and third-party acquisitions including Sesame Street.

Amid the high-stakes battle for subscription streaming service launches by WarnerMedia, Disney, Comcast and Apple to go along with Netflix and Amazon, it isn't hard to see how the prospect of being among the first marquee titles on HBO Max is enticing. Especially when mid-budget comedies and dramas are plagued by the optics of eight-figure P&A spends and heavy scrutiny on opening-weekend box office grosses. That doesn't exist if you are launching on an OTT to a wide audience.

McCarthy and Falcone, long married and longtime frequent creative collaborators (they are now making their first Netflix film, Thunder Force), said the decision to move out of theaters and onto HBO Max was theirs, and that it wasn't imposed on them by WarnerMedia, Warner Bros or New Line, which developed the comedy and shepherded the film through production.

"It was actually Ben's idea, it came from the filmmaker himself," McCarthy told Deadline. "We had a release date, a full marketing plan, and I had all my press lined up. We were really ready to go. When the announcement came that HBO Max was really happening, Ben had this idea. And we thought, is this better? Different doesn't mean worse, and how are we watching films ourselves? To us, each movie is near and dear to our hearts. You just want people to see it and love it and you want them to feel good. Superintelligence at its core is, love wins, and people matter. I want that to get to as many people as it can. We need that today, and this seemed like the best way to do it. So no, [this wasn't imposed on us]. We were ready to go the other way, and we decided to make the detour."

When I brought up the perilous track many theatrical releases face these days, Falcone acknowledged it is something a filmmaker thinks about.

"I pride myself on living a fear-based life, and that won't stop," Falcone joked. "I don't exactly remember the question, but I wanted to make that clear to you, and to everyone. Honestly, you can release a mid-budget movie, and if we'd stayed in the theaters, we could have done incredibly well. There still are those examples of movies like this one that do. But for this movie, at this time, we felt like it was the best way to go. The PG rating, the fact they are starting this thing. All these streaming services are starting, and here, we are up there with Sesame Street, and Meryl Streep and JJ Abrams and Hugh Jackman and Jordan Peele. There are cool people doing this. So following my fear-based mentality, I thought it was the best move."

McCarthy and Falcone also felt a thematic fit, as the film explores relationships against the backdrop of technological evolution. McCarthy's character finds herself getting messages from her TV, phone and microwave, and what she doesn't realize is she has been selected for observation by the world's first superintelligence, a form of artificial intelligence that is contemplating taking over the world. Steve Mallory wrote the script, James Corden voices the A.I., and Bobby Cannavale is playing her love interest.

"We made the film for New Line and Warner Bros, and there are different challenges in the way people watch films, how and where they see them on different platforms," McCarthy said. "We were all geared up to open theatrically, and Ben was the one who said, this would be better for HBO Max. What a way to reach a massive amount of people, and to be put in pretty amazing company. It seemed like a win-win. We have two young kids, and we thought about how we watch movies. Superintelligence is PG, and we thought about how we watch these movies with our kids. We still go to the theater, and we love going to the theater. I would cry if that ever went away. But we watch a lot of movies at home, and a lot of people do. This just seemed like an exciting new way to get it in front of a lot of people."

The move pushes the release of the film until sometime in the spring, and though a specific date hasn't been decided, the couple is really warming to the platform.

"I urge you and all your friends to immediately subscribe to HBO Max," Falcone said.

Added McCarthy: "Just give us your credit card, Mike, and we'd be happy to process it for you. And maybe give us your bank account numbers, too."

Read more here:

Melissa McCarthy & Director Ben Falcone On Choosing HBO Max Bow Instead Of WB Xmas Release For Superintelligence - Deadline

AMC Is Still In the Theater Business, But VOD Is a Funny Way of Showing It – IndieWire

Was I the only one who found it weird when AMC Theatres announced that it was getting into the streaming business with the launch of AMC Theatres On Demand? When it comes to places to buy and rent movies, we've got Apple, Amazon, Fandango, Vudu, Google Play, YouTube, and a few more that I don't need to remember because it's too many already.

I also thought it suggested some seriously mixed messaging, but maybe that was just me, until I got a call from an NBC affiliate who wanted to do an interview about AMC's new streaming service. That seemed like a curious topic for local news; why were they interested? The answer: They wanted to know if it meant AMC was getting out of the theater business.

Of course, AMC is very much dedicated to the theatrical business, but this is a funny way of showing it. Launching a platform for VOD transactions (something that runs counter to going out to the movies) is not what I'd expect a theater chain to worry about right now. There are far more pressing issues at hand, starting with the sacred cow of The Theatrical Experience.

It's the theme of every CinemaCon, repeated like a rosary as exhibitors and distributors take the Caesars Palace stage and talk about how worldwide audiences continue to share the primacy of the theatrical experience. However, that audience also has the option to stay home with their couches, pause buttons, and very large TV sets to watch an infinite number of entertainment options. By contrast, choosing to go to the theater means spending a lot of time, money, and effort on a very small selection of premium products. So whether you're going to the AMC to see Avengers, or to the Alamo to see Parasite, the act of going to the movies is now a bespoke experience.

But is that what chain theaters deliver? If you're Alamo, with the fun beers on tap and no commercials and weird short films, sure. If you're a chain that inspired the ire of Edward Norton, who encountered low-light projection and crappy sound while preparing for the November 2 nationwide release of Motherless Brooklyn, that would be no. "It's the theater chains that are destroying the theatrical experience," he said. "Period, full-stop. No one else." Meanwhile, he sang the praises of Netflix as it represents an unprecedented period of ripe opportunity for many more types of stories and voices to be heard. (Netflix is also looking at a long-term lease for the tony, single-screen Paris Theater in Manhattan. Oh, the irony.)

Netflix turned to the Paris, the Belasco, and the Egyptian as showcases for Oscar contenders Marriage Story and The Irishman because major chains won't let them book their theaters, but a much more significant threat to exhibitors is coming from inside the house. This week, Warners chose to move Melissa McCarthy's Christmas title, Superintelligence, out of theaters and on to its upcoming streaming platform, HBO Max, which is scheduled to launch sometime next spring.

Melissa McCarthy and Ben Falcone at the Warner Bros. Cinemacon presentation, April 2019


Speaking to Deadline, McCarthy spun it as all being the idea of her husband, director Ben Falcone:

It was actually Ben's idea, it came from the filmmaker himself. We had a release date, a full marketing plan, and I had all my press lined up. We were really ready to go. When the announcement came that HBO Max was really happening, Ben had this idea. And we thought, is this better? Different doesn't mean worse, and how are we watching films ourselves? To us, each movie is near and dear to our hearts. You just want people to see it and love it and you want them to feel good. Superintelligence at its core is, love wins, and people matter. I want that to get to as many people as it can. We need that today, and this seemed like the best way to do it. So no, [this wasn't imposed on us]. We were ready to go the other way, and we decided to make the detour.

Ultimately, it doesn't matter if the idea came from Falcone, or from the studio (sources told IndieWire that the film didn't test well). What matters is this is likely the first of many films in which a distributor weighs its options: Invest many millions and see what you get back from theatrical, or substantially fewer millions on a global streaming platform and see what you generate in subscribers? Studios may find themselves following in Netflix's footsteps and sorting their slates: These movies demand a theatrical investment, and these will do well on streaming.

Last May, when HBO Max was only a twinkle in the eye of current WarnerMedia CEO John Stankey, Warners released the McCarthy and Falcone comedy Life of the Party; at $53 million domestic, it wasn't a blockbuster. But with box office on track to fall nearly 6% behind 2018, exhibitors need every $53 million they can get. And with almost every major studio now tied to a streaming outlet, they now have a no-friction solution for theatrical releases that might struggle: What's dull on the big screen can look very shiny on the smaller ones. And, as McCarthy said: How are we watching films ourselves?

Increasingly, we're watching them at home. But probably not on AMC Theatres On Demand.

Heres some of the best work from this week on IndieWire:

Disney's Most Valuable Screenwriter Has Had Enough of the Strong Female Trope, by Kate Erbland. Linda Woolverton, the woman who brought Belle, Maleficent, and a billion-dollar animated movie to Disney, speaks her mind.

Large-Format Cameras Are Changing Film Language, From Joker to Midsommar, by Chris O'Falt

With the advent of cameras like the Alexa 65, a new generation of large-format filmmakers is using the format's immersive qualities in exciting ways.

Peak TV Is Only a Concern in the Gated Community of Hollywood, by Libby Hill. The average Joe doesn't care about The Morning Show. They already have all the TV they need and can afford.

Bombshell and Jojo Rabbit Share an Oscar Superpower: They're Made For the Mainstream, by Anne Thompson. Films like Parasite and Pain and Glory are critical darlings, but the truth is that when it comes to Oscar votes, popularity counts.

Is This Is Us Making You Seasick? You're Not Alone, by Leo Garcia. Digital image stabilization mixed with the show's penchant for shaky camera work makes it seem as if certain scenes were filmed out at sea.

Disney+: 200 Must-Watch TV Shows & Movies Available on Launch, by LaToya Ferguson

From the beloved Star Wars trilogies to the Marvel Cinematic Universe to Pixar's greatest achievements, here's the best of the content that will be available to subscribers for $6.99 a month.

Have a great weekend,

Dana


See the article here:

AMC Is Still In the Theater Business, But VOD Is a Funny Way of Showing It - IndieWire

Idiot Box: HBO Max joins the flood of streaming services – Weekly Alibi

HBO Max joins the flood of streaming services

Viewers of visual media can be forgiven for thinking that today's streaming services have turned into a veritable deluge. Every other week it seems like I'm educating/warning people about another streaming service with a catalogue of original programming, an archive of old TV shows and a random selection of movies available on your mobile devices for a low monthly subscription fee. Since I didn't talk about one last week, I guess I'm obliged to this week. Netflix, Hulu, Amazon Prime, Disney Plus, Apple TV+: Meet HBO Max.

Like a lot of Americans, you may be confused at this point. Isn't HBO already a pay-per-view station full of movies, TV shows and original content? Sure. And can't you already subscribe to HBO Now, a streaming service for portable devices that bypasses the need for cable or satellite? Yup. But HBO Max is a long-brewing corporate mash-up from AT&T-owned multinational mass media conglomerate WarnerMedia. Not only will it consist of HBO's normal slate of movies, miniseries and TV shows, it will also have access to all of WarnerMedia's corporate catalogue. Basically, whatever Disney doesn't own, WarnerMedia does (HBO, CNN, TBS, TNT, TruTV, Cartoon Network, Adult Swim, TCM, Warner Bros, New Line, Crunchy Roll, Looney Tunes, The CW, DC Comics).

HBO Max, for example, will be the new home for the Warner Bros.-produced series Friends, now that the beloved '90s sitcom is free from its $100 million contract with Netflix. Also lined up: The Fresh Prince of Bel Air (which is owned by Warner Bros. Domestic Television Distribution) and any Warner Bros.-produced dramas on The CW Network (like, for example, Riverdale). Throw in some Bugs Bunny cartoons, all the Nightmare On Elm Street films (from New Line Cinema) and stuff like Full Frontal with Samantha Bee (that's TBS), and you've got a solid back catalogue on which to build.

In addition to everything WarnerMedia owns, HBO Max has signed contracts to re-air BBC shows including Doctor Who, The Office, Top Gear and Luther. The network also signed a deal with Japan's Studio Ghibli to secure US streaming rights to all of its animated films (My Neighbor Totoro, Princess Mononoke, Spirited Away, Ponyo, Howl's Moving Castle, Kiki's Delivery Service, to name a few). These deals add some impressive weight to HBO Max's lineup (while, at the same time, stealing these shows away from cable/streaming rivals).

As far as the new programming is concerned, the floodgates have already opened. Dozens of emails have been pouring into my inbox this week, touting HBO Max's new projects. Director Denis Villeneuve (Blade Runner 2049) will adapt Dune: The Sisterhood, a series based on Brian Herbert and Kevin Anderson's sequel to Frank Herbert's sci-fi classic. The classic 1984 horror-comedy Gremlins is being turned into an animated series. The Hos is a multigenerational docu-reality series about a rich Vietnamese-American family in Houston. Monica Lewinsky (yes, that Monica Lewinsky) executive produces 15 Minutes of Shame, a documentary series about the public shaming epidemic in our culture and our collective need to destroy one another. Brad and Gary Go To finds Hollywood power couple Brad Goreski and Gary Janetti traveling around the globe sampling international cuisine. The streaming service has also ordered up Grease: Rydell High, a musical spin-off which brings the 1978 film Grease to today's post-Glee audiences.

There will be original movies on tap as well. Emmy-winning comedian Amy Schumer climbs on board with Expecting Amy, a documentary about the funny lady's struggle to prepare for a stand-up comedy tour while pregnant. Melissa McCarthy (Spy, Bridesmaids) will star in Superintelligence, about an ordinary woman who is befriended by the world's first artificial intelligence with an attitude.

As far as when we can get a look at HBO Max, WarnerMedia has pushed the premiere date several times and is now simply saying spring 2020. What will it cost the consumer? Given that HBO Now costs $15 a month, and HBO Max will include all of HBO's streaming product (plus all that other stuff mentioned above), we can only assume that it will cost more than that. With Hulu starting at $6 a month, Disney+ banking on charging $6.99 a month and Netflix running $13 a month, HBO Max is looking kinda pricey. But what do you say, American consumers? Are you ready to fork out for one more monthly streaming service? It's the last one. I swear. (It's not. Not by a longshot.)

Read this article:

Idiot Box: HBO Max joins the flood of streaming services - Weekly Alibi

Here’s How to Watch Watchmen, HBOs Next Game of Thrones – Cosmopolitan

The DC universe just keeps getting bigger, and the newest addition to the comic world is HBO's Watchmen, a series based on the 1986 graphic novel where the superheroes are the outlaws (don't worry, I'll explain what that even means in a bit).

You've probs already heard about it because it's being dubbed the new Game of Thrones, which means our hopes are high for the beginning and our expectations for the series' ending are at an all-time low.

The Watchmen graphic novel is about "superheroes." (Yes, that's in quotes for a reason.) These superheroes aren't born with crazy superhuman abilities but instead are really, really good at one specific thing, so they might have, say, extremely high intelligence or insane detective skills.

The comic takes place in a world where these everyday people would dress in superhero costumes and act as vigilantes, until the practice was outlawed in 1977 after a riot involving said vigilante superheroes. A lot of the former superheroes went to work for the government, using their powers for good, but some (aka a man named Rorschach) ignored the law and continued their work in a more anarchic way.

The show is being described as more of a continuation than an adaptation. It picks up a little over 30 years after the novel ended.

Queen Regina King stars as the main character, a police officer in Tulsa who goes by the name Sister Night and is super protective of her husband and child. Also, she has a BADASS costume that is part Catwoman, part Xena: Warrior Princess. Serious Halloween inspo.

Dr. Manhattan *might* be making a return. If you're not familiar, he's a blue guy and the only one in the series with actual superpowers. His godlike capabilities include teleportation, total clairvoyance, and telekinesis. At the end of the graphic novel, he leaves Earth to go to Mars, BUT he's in the HBO previews, so fingers crossed.

You'll definitely see Adrian Veidt (also known as Ozymandias), a retired superhero with superintelligence who is known for faking an alien invasion with a giant squid. (Ya, this show gets weird.)

Of course, Rorschach (who was killed by Dr. Manhattan at the end of the series) will return, but not exactly. His name, mask, and overall evil mission will be carried on by a group of white supremacists.

You can catch the series on HBO or HBO Now every Sunday at 9 p.m. ET. But if you can't be held to a strict TV-watching schedule, it can also be streamed with an HBO Go account! TG for streaming services.

Link:

Here's How to Watch Watchmen, HBOs Next Game of Thrones - Cosmopolitan

The Best Artificial Intelligence Books you Need to Read Today – Edgy Labs

If you're looking for a selection of the top artificial intelligence books, the offerings could be overwhelming. But we're here to help with that.

Artificial intelligence is slowly and steadily making its way through pretty much every system humans have created.

AI-powered agents are getting increasingly smarter as they hone their problem-solving and decision-making skills.

On the other hand, humans avail themselves of AI as much as possible. But they're also called to adapt and learn to coexist with machines if they are to thrive, or at worst survive.

As far as humans are concerned, intelligent agents cut both ways.

Thankfully, the world's leading scientists and thinkers help us understand what's at stake and the best damage-control measures to take if need be.

Many books deal with AI theory, modern AI sciences, and the technologys future implications.

The ones listed below are some of the best artificial intelligence books today that dissect all of these areas.

1. Introduction to Artificial Intelligence

As befits the topic, we start our list with a comprehensive introduction to AI technology: Introduction to Artificial Intelligence. Written by Phillip C. Jackson, Jr., the book is one of the classics that's still read by experts in the field and non-specialists alike.

The book summarizes the preceding two decades of research into the science of computer reasoning, and where it could be heading. Since it was published in 1985, some of the information may be outdated, but if nothing else, the book can serve as a valuable historical document.

2. Artificial Intelligence: A Modern Approach

Another classic is Artificial Intelligence: A Modern Approach, written by Stuart Russell and Peter Norvig.

No list of the best artificial intelligence books can fail to mention this bestseller, which has become a standard text for AI students. Used as a textbook in hundreds of universities around the world, the book was first published in 1995; a third edition came out in 2009.

You may want to check this book to see why it's described as the most popular artificial intelligence textbook in the world.

3. Life 3.0

This book is one of my personal favorites, by one of the leading physicists and cosmologists in the world, Max Tegmark, aka Mad Max.

Tegmark's Life 3.0: Being Human in the Age of Artificial Intelligence welcomes you to the most important conversation of our time. The MIT physics professor explores the future of AI and how it would reshape many facets of human life, from jobs to wars. He's one of those who think AI is a double-edged sword, and that it's really up to us whether to give it free rein.

Elon Musk recommends this book as worth reading, noting that AI could be the best or the worst thing.

4. How to Create a Mind

How to Create a Mind: The Secret of Human Thought Revealed is a book by famous futurist and tech visionary Ray Kurzweil.

Kurzweil discusses the notion of mind, how it emerges from the brain, and scientists' attempts to recreate human intelligence. He predicts that by 2020 computers will be powerful enough to simulate an entire human brain.

Kurzweil offers some interesting thought experiments on thinking in the book. For example, most people can recite the alphabet correctly, but most would fail at reciting it backward as easily. The reason, according to the author, has to do with the memory-formation process: the brain stores memories as hierarchical sequences that are only accessible in the order they're remembered in.

5. Superintelligence: Paths, Dangers, Strategies

Oxford philosopher Nick Bostrom is known for his work on major existential risks, and he counts the superintelligence threat among them.

In Superintelligence: Paths, Dangers, Strategies, Bostrom questions whether smart algorithms would spell the end of humanity or be a catalyst for a better future.

In this New York Times bestseller, Bostrom argues that superintelligent machines, left unchecked, could replace humans as the dominant lifeform on Earth.

6. Weapons of Math Destruction

AI is all about Big Data and the algorithms that work off of it. That's the focus of Weapons of Math Destruction by Cathy O'Neil, a data scientist at Harvard University.

In the book, the author explores how math, at the heart of data and by extension AI, can be manipulated and biased. She discusses the negative social implications of AI and how it could be a threat to democracy.

O'Neil identifies three factors (scale, secrecy, and destructiveness) that could turn an AI algorithm into a Weapon of Math Destruction.

7. Our Final Invention

It's thanks to their brains, not brawn, that humans came to dominate Earth and reign supreme over other species. Now a human invention, AI, is posing a potential threat to this dominance.

Our Final Invention: Artificial Intelligence And The End Of The Human Era is a book by American documentary filmmaker James Barrat.

According to the author, while human intelligence stagnates, machines are getting smarter and would soon surpass humans' cognitive abilities. Superintelligent artificial species could develop survival drives that could eventually lead them to clash with humans.

8. The Sentient Machine

Unlike other books on this list, The Sentient Machine: The Coming Age of Artificial Intelligence provides a more optimistic look at AI.

In the book, inventor and techpreneur Amir Husain, unlike Bostrom, Tegmark, and Musk, thinks humans can thrive with AI, not just survive.

Weighing AI's risks and potential, Husain thinks we should embrace AI and let sentient machines lead us to a bright future. This isn't idle utopian daydreaming: the author's approach is based on scientific, cultural, and historical arguments. He also provides a wide-ranging discussion of what makes us human and our role as creators in the world.

9. The Fourth Age

We find another optimistic take on AI in The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

In this book, author Byron Reese manages to both engage and entertain the reader with his insights into history and projections for the future. According to Reese, human civilization has gone through three major disruptions in its history: fire and language, agriculture, and finally writing and the wheel.

AI promises a fourth age, which the book discusses in detail.

10. AI Superpowers

The United States and China are at the forefront of AI research. In a context marked by geopolitical and economic rivalry between the two countries, it stands to reason that AI will be weaponized in some way.

AI Superpowers: China, Silicon Valley, and the New World Order is a book by AI pioneer Kai-Fu Lee. China is racing with the U.S. to take the global AI lead, and Lee thinks it will dominate the industry. "If data is the new oil," says Lee, "then China is the new Saudi Arabia."

Lee points out the factors that he thinks will help China win the AI arms race. He cites a high quantity of data, fewer data protection regulations, and a more aggressive AI startup culture as reasons giving China a potential edge.

These are our picks. Which artificial intelligence books worth reading have left an impression on you?

Go here to read the rest:

The Best Artificial Intelligence Books you Need to Read Today - Edgy Labs

Aquinas’ Fifth Way: The Proof from Specification – Discovery Institute

Editor's note: See also, Introducing Aquinas' Five Ways, by Michael Egnor. For Dr. Egnor's previous posts in this series on Aquinas' Five Ways, see here, here, here, and here. For more on Thomas Aquinas, intelligent design, and evolution, see the website Aquinas.Design.

Aquinas' Fifth Way is the proof of God's existence that is easiest to grasp in everyday life. The order of nature points to a Mind that gives it order. This obvious order is the substrate for all natural science; after all, without natural order, scientific study of nature would be an exercise in futility. And the natural order is the framework for everyday life. We could not take a breath unless our lungs and nerves worked consistently, and unless oxygen had the chemical properties that it has. Order in nature is ubiquitous. We have become so accustomed to it that we fail to notice how remarkable it is.

That this natural order points to God is obvious. But what are the characteristics of this order? In living things, ID theorists describe this order as specified complexity. Specified complexity means that a pattern has substantial independently specified information (specification) that has a low probability of occurrence by chance (complexity). Aquinas would agree that such specified complexity points to a designer, but he understands natural order in a way that is rather different from the understanding of many ID theorists.

For Aquinas, it is the specification, rather than the complexity, that is at the heart of the Fifth Way. Aquinas understands specification in an Aristotelian sense: as final cause (teleology). The Fifth Way is often called the proof from Final Cause, or the Teleological proof.

Final cause is fundamental to Aristotelian-Thomistic metaphysics. One may ask: What is the cause of a thing? St. Thomas answers that to completely understand a cause in nature, we really must know four causes:

Material cause: the matter out of which something is made. The material cause of a statue is the block of marble from which it is carved.

Efficient cause: the agent that gets the cause started. The efficient cause of a statue is the sculptor.

Formal cause: the structure of the system that is caused. The formal cause of a statue is the shape of the statue.

Final cause: the end or purpose for the cause. The final cause of a statue is the purpose in the mind of the sculptor to use the statue to decorate a garden, for example.

In nature, final causes and formal causes often overlap. The formal cause of an acorn growing into an oak tree is the form of the oak tree, which is also the final cause of the growth of the acorn: the end, or telos, of the acorn's growth is the form of the oak tree it will become.

The four causes have reciprocal relations. Material cause and formal cause work together, in the sense that form provides structure to matter. Efficient cause and final cause work together in a push-pull relationship: an efficient cause pushes while a final cause pulls simultaneously. Efficient causes point to ends; regular causes in nature tend toward specific outcomes. When you strike a match (efficient cause), it bursts into flame (final cause). Efficient causation is incomprehensible without final cause: regular cause-and-effect in nature is directional, in the sense that cause is consistently from one specific state to another specific state. It makes no sense to speak of cause from unless we also speak of cause to. Causes have beginnings and ends.

For St. Thomas (following Aristotle), final cause is particularly important, because it provides direction to natural causes. Final cause is the essential principle by which causes in nature happen. We moderns tend to ignore final causes; we think of cause as a push (efficient cause) rather than as a pull (final cause). For St. Thomas, it is the pull of final cause that is fundamental to the regularity of nature. Final cause is the cause of causes.

With this in mind, let's look at the proof from the Fifth Way. St. Thomas notes that causes in nature are more or less consistent. Causation is the actualization of potentiality, and causation follows patterns. Things fall down, not up. Cold weather causes water to freeze, not boil. Acorns become oaks, but oaks don't become acorns. Aquinas notes that the final cause of an acorn is in some sense in the acorn itself: that is, in order for an acorn to reliably grow into an oak tree, the form of the oak tree must have some sort of existence while the acorn is still an acorn. A process of change can't point to an end unless the end pre-exists in some sense. But how can an oak tree exist when it is merely an acorn?

What exists is the form of the oak tree. The form of the oak tree can exist in two ways. It can exist in an object as a substantial form; that is, the form can exist in the oak tree itself. This is the way forms ordinarily exist in objects.

A form can also exist in an intentional sense; that is, the form can exist in the mind of a person who thinks about it. When I know an oak tree, the form of that oak tree is in my mind as well as in the oak tree. That is, in fact, how I know it: my mind grasps its form.

For change to occur in nature, the form of the end-state of the change must in some way exist prior to the completion of the change. Otherwise, the change would have no direction; colloquially, the acorn wouldn't know what to grow into.

But of course most things in nature, and all inanimate things, don't know anything. An electron doesn't know quantum mechanics, but it moves in strict accordance with quantum mechanical laws. A rock knows nothing of Newton's law of gravity, but it falls in strict accordance with Newton's law. A plant knows nothing about photosynthesis, but it does it very well every day, with an expertise exceeding that of the best chemist.

Since the form of the final state of a process of change can't be in the thing being changed (the acorn is not yet the oak tree), and change routinely occurs in things that have no mind to look forward to the final state, where is the form of the final state of change in nature?

Aquinas asserts that the form of the final state (the telos or final cause) must therefore be in the Mind of a Superintelligence that directs natural change. That is what all men call God.

So you can see that in the Thomistic Fifth Way, it is the specification of change, not its complexity, that is at the heart of the matter. It's reminiscent of the quip about a dog that can recite Shakespeare: it's not that the mutt knows Shakespeare that's remarkable; it's remarkable that he can talk at all. What's remarkable in nature is not so much that nature follows complex patterns, but that it follows any pattern at all. Any pattern in nature, even the simplest, cries out for explanation, and it is the fact of natural patterns that is the starting point of the Fifth Way.

From the Thomistic perspective, even the simplest natural process, a leaf falling to the ground, is proof of God's existence. The fall of the leaf is specified prior to the fall: leaves fall to the ground, rather than doing any of countless other things a natural object might do (like burst into flame or grow a tail). This specification, this telos, requires a Mind in which the fallen state of the leaf is conceived prior to the actual fall of the leaf. Change in nature requires a Mind to look ahead and direct it. Complexity (or simplicity) of the change is irrelevant.

It is the consistent directedness of change in nature that points to God. Atheists, with much handwaving and dubious science, claim to explain biological complexity by Darwinian stories. Yet, even on its own terms, Darwinism fails. Adaptation by natural selection may account on some level for the fixation of a particular phenotype in a population, but it offers no explanation for the fundamental fact of teleology in nature. In fact, Darwinian theory depends on teleology in nature. If natural causes were not consistent and mostly directed, there would be no consistency to evolution at all. There is no evolution in chaos. Without teleology, chance and necessity would be all chance and no necessity, and therefore no evolution.

Actually, atheists can't explain chance either. Chance is the accidental conjunction of teleological processes. A car accident may be by chance, but it necessarily occurs in a matrix of purpose and teleology: the cars move in accordance with laws of physics, the road was constructed according to plans, the cars are driven purposefully by drivers, etc. There can be no chance unless there is a system of regularity in which chance can occur. Chance by itself can't happen; it is, by definition, the accidental conjunction of teleological processes. Both chance and necessity point to God. Pure chance, without a framework of regularity, is unintelligible.

From the perspective of the Fifth Way, necessity permeates nature. But it is specification, rather than complexity, that characterizes necessity and points to God's existence. The specification need not be complex. The simplest motion of an inanimate object, a raindrop falling to the ground, is proof of God's existence.

Teleology is foresight, the ability of a natural process to proceed to an end not yet realized. Yet the end must be realized, in some real sense, for final cause to be a cause. The foresight inherent in teleology is in God's Mind, and it is via His manifest foresight in teleology that we see Him at work all around us.

This rules out the God of deism. The God of the Fifth Way is no watchmaker who winds up the world and walks away. He is at work ceaselessly and everywhere. The evidence for a Designer is as clear in the simplest inanimate process as it is in the most complex living organism. The elegant, intricate complexity of cellular metabolism is certainly a manifestation of God's glory; the beauty of biological processes is breathtaking. But the proof of His existence is in every movement in nature: in every detail of cellular metabolism, of course, but also in every raindrop and in every blown grain of dust.

Photo: An oak tree, by Abrget47j [CC BY-SA 3.0], via Wikimedia Commons.

Read the rest here:

Aquinas' Fifth Way: The Proof from Specification - Discovery Institute

Elon Musk warns ‘advanced A.I.’ will soon manipulate social media – Big Think

Twitter bots in 2019 can perform some basic functions, like tweeting content, retweeting, following other users, quoting other users, liking tweets and even sending direct messages. But even though bots on Twitter and other social media seem to be getting smarter than previous iterations, these A.I. are still relatively unsophisticated in terms of how well they can manipulate social discourse.

But it's only a matter of time before more advanced A.I. begins manipulating the conversation on a large scale, according to Tesla and SpaceX CEO Elon Musk.

"If advanced A.I. (beyond basic bots) hasn't been applied to manipulate social media, it won't be long before it is," Musk tweeted on Thursday morning.

It's unclear exactly what Musk is referring to by "advanced A.I.," but his tweet came just hours after The New York Times published an article outlining a study showing that at least 70 countries have experienced digital disinformation campaigns over the past two years.

"In recent years, governments have used 'cyber troops' to shape public opinion, including networks of bots to amplify a message, groups of "trolls" to harass political dissidents or journalists, and scores of fake social media accounts to misrepresent how many people engaged with an issue," Davey Alba and Adam Satariano wrote for the Times. "The tactics are no longer limited to large countries. Smaller states can now easily set up internet influence operations as well."

Musk followed up his tweet by saying that "anonymous bot swarms" (presumably referring to coordinated activity by a large number of social media bots) should be investigated.

"If they're evolving rapidly, something's up," he tweeted.

Musk has long predicted a gloomy future with AI. In 2017, he told staff at Neuralink, Musk's company that's developing an implantable brain-computer interface, that he thinks there's about "a five to 10 percent chance" of making artificial intelligence safe. In the documentary "Do You Trust Your Computer?", Musk warned of the dangers of a single organization someday developing superintelligence.

"The least scary future I can think of is one where we have at least democratized AI because if one company or small group of people manages to develop godlike digital superintelligence, they could take over the world," Musk said.

"At least when there's an evil dictator, that human is going to die. But for an AI, there would be no death. It would live forever. And then you'd have an immortal dictator from which we can never escape."

Related Articles Around the Web

Read more here:

Elon Musk warns 'advanced A.I.' will soon manipulate social media - Big Think

Superintelligence – Wikipedia

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The program Fritz falls short of superintelligence even though it is much better than humans at chess because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity to, either as a single being or as a new species, become much more powerful than humans, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself, a feature called "recursive self-improvement". It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.
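
The compounding dynamic described above can be illustrated with a toy numerical sketch (our own illustration, not from the article): if each round of self-improvement increases capability in proportion to the system's current capability, growth is exponential.

```python
# Toy model of recursive self-improvement (an illustrative assumption,
# not a claim about how real AI systems behave).
def self_improve(capability: float, rounds: int, rate: float = 0.1) -> float:
    """Each round, the system improves itself in proportion to its
    current capability: capability <- capability * (1 + rate)."""
    for _ in range(rounds):
        capability *= 1 + rate
    return capability

# Starting at a baseline of 1.0, fifty rounds at 10% improvement per
# round compound to 1.1**50, roughly a 117x gain.
print(self_improve(1.0, 50))
```

The point of the sketch is only that small, repeated self-applied gains compound multiplicatively rather than additively, which is what makes the "rapidly increasing cycle" in the scenario above more than linear.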

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)." Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind that's run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[9] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[12] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAOIs, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
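Bostrom's selection figures follow from the order statistics of a normal distribution: the expected gain is the expected maximum of n independent normal draws. A minimal Monte Carlo sketch (the 7.5-point standard deviation for genotypic IQ among sibling embryos is an illustrative assumption, not Bostrom's exact model) roughly reproduces the 4-point and 24-point figures:

```python
import random
import statistics

random.seed(0)

EMBRYO_SD = 7.5  # assumed SD of genotypic IQ among sibling embryos (illustrative)

def expected_gain(n_embryos, trials=5_000):
    """Mean IQ gain from implanting the best of n_embryos i.i.d. normal draws."""
    return statistics.fmean(
        max(random.gauss(0, EMBRYO_SD) for _ in range(n_embryos))
        for _ in range(trials)
    )

print(f"best of 2:    ~{expected_gain(2):.1f} IQ points")     # roughly 4.2
print(f"best of 1000: ~{expected_gain(1000):.1f} IQ points")  # roughly 24.3
```

Because the expected maximum grows only slowly with n (roughly like the square root of log n), selecting 1 embryo in 1000 yields far less than 500 times the gain of selecting 1 in 2, which is why Bostrom emphasizes iterated selection across generations.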

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain–computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[17]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Bostrom expressed concern about what values a superintelligence should be designed to have, and compared several proposals.

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

Learning computers that rapidly become superintelligent may take unforeseen actions or robots might out-compete humanity (one potential technological singularity scenario).[21] Researchers have argued that, by way of an "intelligence explosion" sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.[22]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[23]

Eliezer Yudkowsky explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[24]

This presents the AI control problem: how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not getting the design right "the first time" is that a misprogrammed superintelligence might rationally decide to "take over the world" and refuse to permit its programmers to modify it once it has been activated. Potential design strategies include "capability control" (preventing an AI from being able to pursue harmful plans) and "motivational control" (building an AI that wants to be helpful).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.

Follow this link:

Superintelligence - Wikipedia

Superintelligence: Paths, Dangers, Strategies – Wikipedia

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the Swedish philosopher Nick Bostrom from the University of Oxford. It argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists,[2] and the outcome could be an existential catastrophe for humans.[3]

Bostrom's book has been translated into many languages and is available as an audiobook.[1][4]

It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.

While the ultimate goals of superintelligences can vary greatly, a functional superintelligence will spontaneously generate, as natural subgoals, "instrumental goals" such as self-preservation and goal-content integrity, cognitive enhancement, and resource acquisition. For example, an agent whose sole final goal is to solve the Riemann hypothesis (a famous unsolved mathematical conjecture) could create and act upon a subgoal of transforming the entire Earth into some form of computronium (hypothetical "programmable matter") to assist in the calculation. The superintelligence would proactively resist any outside attempts to turn it off or otherwise prevent its subgoal completion. In order to prevent such an existential catastrophe, it might be necessary to successfully solve the "AI control problem" for the first superintelligence. The solution might involve instilling the superintelligence with goals that are compatible with human survival and well-being. Solving the control problem is surprisingly difficult because most goals, when translated into machine-implementable code, lead to unforeseen and undesirable consequences.

The book ranked #17 on the New York Times list of best selling science books for August 2014.[5] In the same month, business magnate Elon Musk made headlines by agreeing with the book that artificial intelligence is potentially more dangerous than nuclear weapons.[6][7][8] Bostrom's work on superintelligence has also influenced Bill Gates's concern for the existential risks facing humanity over the coming century.[9][10] In a March 2015 interview with Baidu's CEO, Robin Li, Gates said that he would "highly recommend" Superintelligence.[11]

The science editor of the Financial Times found that Bostrom's writing "sometimes veers into opaque language that betrays his background as a philosophy professor" but convincingly demonstrates that the risk from superintelligence is large enough that society should start thinking now about ways to endow future machine intelligence with positive values.[2] A review in The Guardian pointed out that "even the most sophisticated machines created so far are intelligent in only a limited sense" and that "expectations that AI would soon overtake human intelligence were first dashed in the 1960s", but finds common ground with Bostrom in advising that "one would be ill-advised to dismiss the possibility altogether".[3]

Some of Bostrom's colleagues suggest that nuclear war presents a greater threat to humanity than superintelligence, as does the future prospect of the weaponisation of nanotechnology and biotechnology.[3] The Economist stated that "Bostrom is forced to spend much of the book discussing speculations built upon plausible conjecture... but the book is nonetheless valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote."[12] Ronald Bailey wrote in the libertarian Reason that Bostrom makes a strong case that solving the AI control problem is the "essential task of our age".[13] According to Tom Chivers of The Daily Telegraph, the book is difficult to read, but nonetheless rewarding.[14]

Original post:

Superintelligence: Paths, Dangers, Strategies - Wikipedia