Another Expert Joins Stephen Hawking and Elon Musk in Warning About the Dangers of AI – Futurism

In Brief

In 2012, Michael Vassar became the chief science officer of MetaMed Research, which he co-founded, and prior to that, he served as president of the Machine Intelligence Research Institute. Clearly, he knows a thing or two about artificial intelligence (AI), and now he has come out with a stark warning for humanity when it comes to the development of artificial superintelligence.

In a video posted by Big Think, Vassar states, "If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order." Essentially, he is warning that an unchecked AI could eradicate humanity in the future.

Vassar's views are based on the writings of Nick Bostrom, specifically those found in his book Superintelligence. Bostrom's ideas have been around for decades, but they are only now gaining traction given his association with prestigious institutions. Vassar sees this lack of early attention, and not AI itself, as the biggest threat to humanity. He argues that we need to find a way to promote analytically sound discoveries from those who lack the prestige currently necessary for their ideas to be heard.

Many tech giants have spoken extensively about their fears regarding the development of AI. Elon Musk believes that an AI attack on the internet is only a matter of time. Meanwhile, Stephen Hawking has called the creation of AI either the best or the worst thing to happen to humanity.

Bryan Johnson's company Kernel is currently working on a neuroprosthesis that can mimic, repair, and improve human cognition. If it comes to fruition, that tech could be a solid defense against the worst-case scenario of AI going completely rogue. If we are able to upgrade our brains to a level equal to that expected of AI, we may be able to at least stay on par with the machines.


Elon Musk jokes ‘I’m not an alien’ while discussing how to contact extraterrestrials – Yahoo News

Between building self-driving cars, spaceships and tunnels, Elon Musk has been thinking about artificial intelligence smarter than humans, alien life contacting Earth and dying on Mars.

Barely a week goes by without the Tesla and SpaceX boss dreaming up a new plan which borders on science fiction, so it was perhaps inevitable that his attention would soon turn to alien life.


Speaking at the World Government Summit in Dubai on 13 February, Musk said a "super" form of artificial intelligence (AI), far smarter than the most intelligent human, would be developed in years rather than decades.

But greater intelligence poses a concern for Musk. "One of the most troubling questions is AI. I don't mean narrow AI like vehicle autonomy, where it is narrowly trying to achieve a simple function. But deep AI, or what is known as general AI, where you can have AI that is much smarter than the smartest human on Earth. This I think is a dangerous situation."


Developing this form of 'digital superintelligence' within the next "10 to 20 years" will be like being visited by an alien, Musk said.

Shifting gears to consider the probability of alien life, Musk went on: "I think this is one of the great questions in physics and philosophy. Where are the aliens? Maybe they are among us. I don't know. Some people think I'm an alien. I'm not, but of course I'd say that, wouldn't I?"


Musk added: "If there are superintelligent aliens out there then they probably are observing us. That would seem quite likely, and we are just not smart enough to realise it." By applying some "back-of-an-envelope calculations", Musk reasoned it would be "nothing in the grand scheme of things" for an alien civilisation to populate the entire galaxy in 10 million to 20 million years.
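
Musk's "back-of-an-envelope" figure is easy to sanity-check. The sketch below uses round numbers that are illustrative assumptions of ours (galaxy diameter, probe speed), not figures from his talk:

```python
# Rough check of the colonization timescale Musk cites. Both inputs are
# illustrative assumptions: the Milky Way is roughly 100,000 light-years
# across, and we posit self-replicating probes moving at 1% of light speed.
GALAXY_DIAMETER_LY = 100_000     # light-years (rough figure)
PROBE_SPEED_FRACTION_C = 0.01    # assumed probe speed as a fraction of c

# Traveling at a fraction f of light speed, d light-years take d / f years.
crossing_time_years = GALAXY_DIAMETER_LY / PROBE_SPEED_FRACTION_C
print(f"Galaxy crossing time: {crossing_time_years:,.0f} years")
```

Even ignoring stops to colonize along the way, the answer lands at ten million years, the same order of magnitude Musk cites, which is indeed "nothing" against the galaxy's roughly ten-billion-year age.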

Speaking of being a multi-planetary species, as Musk wants humans to be, he was asked about his oft-stated wish to die on Mars. "We're all going to die someday and if you're going to pick some place to die, then why not Mars? If we are born on Earth, why not die on Mars? Seems like maybe it'd be quite exciting.

"I think, given the choice of dying on Earth or dying on Mars, I'd say yeah sure, I'll die on Mars. But it's not some kind of Mars deathwish. And if I do die on Mars, I just don't want it to be on impact."

Musk also used his speech in Dubai to remind us of his plans for intertwining human brains with computers in what is known as a neural lace. "Over time I think we will probably see a closer merger of biological intelligence and digital intelligence," he said. "It's mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself."

Musk recently tweeted to say he was "making progress" on the brain/computer interface, and said he would make an announcement "maybe" in February 2017.


Simulation hypothesis: The smart person’s guide – TechRepublic


The simulation hypothesis is the idea that reality is a digital simulation. Technological advances will inevitably produce automated artificial superintelligence that will, in turn, create simulations to better understand the universe. This opens the door to the idea that a superintelligence already exists and created simulations now occupied by humans. At first blush the notion that reality is pure simulacra seems preposterous, but the hypothesis springs from decades of scientific research and is taken seriously by academics, scientists, and entrepreneurs alike, including Stephen Hawking and Elon Musk.

From Plato's allegory of the cave to The Matrix, ideas about simulated reality can be found scattered through history and literature. The modern manifestation of the simulation argument postulates that, per Moore's Law, computing power becomes exponentially more robust over time. Barring a disaster that resets technological progression, experts speculate that it is inevitable computing capacity will one day be powerful enough to generate realistic simulations.

TechRepublic's smart person's guide is a routinely updated "living" précis loaded with up-to-date information about how the simulation hypothesis works, who it affects, and why it's important.

SEE: Check out all of TechRepublic's smart person's guides

SEE: Quick glossary: Artificial intelligence (Tech Pro Research)

The simulation hypothesis advances the idea that simulations might be the inevitable outcome of technological evolution. Though ideas about simulated reality are far from new, the contemporary theory springs from research conducted by Oxford University professor of philosophy Nick Bostrom.

In 2003 Bostrom presented a paper that proposed a trilemma, a decision between three challenging options, related to the potential of future superintelligence to develop simulations. Bostrom argues that the likelihood of a simulated reality may be astronomically small, but because it is nonzero we must consider rational possibilities that include a simulated reality. Bostrom does not propose that humans occupy a simulation. Rather, he argues that the massive computational ability developed by a posthuman superintelligence would likely be used to develop simulations to better understand the nature of reality.

In his book Superintelligence, using anthropic reasoning, Bostrom argues that at least one of three claims must be true: the odds of a population with human-like minds advancing to superintelligence are "very close to zero"; or (with an emphasis on the word or) the odds that a superintelligence would desire to create simulations are "very close to zero"; or the odds that people with human-like experiences actually live in a simulation are "very close to one." He concludes by arguing that if "very close to one" is the correct answer and most people do live in simulations, then the odds are good that we too exist in a simulation.
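
The arithmetic behind that third option is worth making explicit: if simulated observers vastly outnumber biological ones, a randomly chosen observer is almost certainly simulated. The counts below are invented purely for illustration:

```python
# Indifference reasoning behind Bostrom's "very close to one" option.
# All counts are invented for illustration.
biological_minds = 10**10        # one civilization's biological population
simulations_run = 1_000          # assumed number of ancestor simulations
simulated_minds = simulations_run * biological_minds

# If you could equally well be any observer, the chance you are simulated:
p_simulated = simulated_minds / (simulated_minds + biological_minds)
print(f"P(simulated) = {p_simulated:.4f}")  # prints "P(simulated) = 0.9990"
```

Under these made-up counts the probability is 1000/1001, already "very close to one"; the point is that any scenario with many simulations per biological civilization produces the same lopsided odds.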

The simulation hypothesis has many critics, namely those in academic communities who question an overreliance on anthropic reasoning, and scientific detractors who point out that simulations need not be conscious to be studied by future superintelligence. But as artificial intelligence and machine learning emerge as powerful business and cultural trends, many of Bostrom's ideas are going mainstream.

Additional resources

SEE: Research: 63% say business will benefit from AI (Tech Pro Research)

It's natural to wonder if the simulation hypothesis has real-world applications, or if it's a fun but purely abstract consideration. For business and culture, the answer is unambiguous: It doesn't matter if we live in a simulation or not. The accelerating pace of automated technology will have a significant impact on business, politics, and culture in the near future.

The simulation hypothesis is coupled inherently with technological evolution and the development of superintelligence. While superintelligence remains speculative, investments in narrow and artificial general intelligence are significant. As with the space race, advances in artificial intelligence create technological innovations that build, destroy, and augment industry. IBM is betting big with Watson and anticipates a rapidly emerging $2 trillion market for cognitive products. Cybersecurity experts are investing heavily in AI and automation to fend off malware and hackers. In a 2016 interview with TechRepublic, United Nations chief technology diplomat Atefeh Riazi anticipated that the economic impact of AI would be profound and referred to the technology as "humanity's final innovation."

Additional resources

SEE: Artificial Intelligence and IT: The good, the bad and the scary (Tech Pro Research)

Though long-term prognostication about the impact of automated technology is ill-advised, in the short term advances in machine learning, automation, and artificial intelligence represent a paradigm shift akin to the development of the internet or the modern mobile phone. In other words, the post-automation economy will be dramatically different. AI will hammer manufacturing industries, logistics and distribution will lean heavily on self-driving cars, ships, drones, and aircraft, and financial services jobs that require pattern recognition will evaporate.

Conversely, automation could create demand for inherently interpersonal skills like HR, sales, manual labor, retail, and creative work. "Digital technologies are in many ways complements, not substitutes for, creativity," Erik Brynjolfsson said, in an interview with TechRepublic. "If somebody comes up with a new song, a video, or piece of software there's no better time in history to be a creative person who wants to reach not just hundreds or thousands, but millions and billions of potential customers."

Additional resources

SEE: IT leader's guide to the future of artificial intelligence (Tech Pro Research)

The golden age of artificial intelligence began in 1956 at the Ivy League research institution Dartmouth College with the now-famous proclamation, "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it." The conference established AI and computational protocols that defined a generation of research. It was preceded and inspired by developments at the University of Manchester in 1951 that produced a program that could play checkers, and another that could play chess.

Though excited researchers anticipated the speedy emergence of human-level machine intelligence, programming intelligence proved to be a steep challenge. By the mid-1970s the field had entered the so-called "first AI winter," an era marked by the development of strong theories limited by insufficient computing power.

Spring follows winter, and by the 1980s AI and automation technology grew from the sunshine of faster hardware and the boom of consumer technology markets. By the end of the century, parallel processing (the ability to perform multiple computations at one time) had emerged. In 1997 IBM's Deep Blue defeated world chess champion Garry Kasparov. Last year Google's DeepMind defeated a human champion at Go, and this year an AI easily beat four of the best human poker players.

Driven and funded by research and academic institutions, governments, and the private sector, these benchmarks indicate a rapidly accelerating automation and machine learning market. Major industries like financial services, healthcare, sports, travel, and transportation are all deeply invested in artificial intelligence. Facebook, Google, and Amazon are using AI innovation for consumer applications, and a number of companies are in a race to build and deploy artificial general intelligence.

Some AI forecasters, like Ray Kurzweil, predict a future with the human brain cheerfully connected to the cloud. Other AI researchers aren't so optimistic. Bostrom and his colleagues in particular warn that creating artificial general intelligence could produce an existential threat.

Among the many terrifying dangers of superintelligence, ranging from out-of-control killer robots to economic collapse, the primary threat of AI is the coupling of anthropomorphism with the misalignment of AI goals. Meaning, humans are likely to imbue intelligent machines with human characteristics like empathy. An intelligent machine, however, might be programmed to prioritize goal accomplishment over human needs. In a terrifying scenario known as instrumental convergence, or the "paper clip maximizer," a superintelligent, narrowly focused AI designed to produce paper clips would turn humans into gray goo in pursuit of resources.

Additional resources

SEE: Research: Companies lack skills to implement and support AI and machine learning (Tech Pro Research)

It may be impossible to test or experience the simulation hypothesis, but it's easy to learn more about the theory. TechRepublic's Hope Reese enumerated the best books on artificial intelligence, including Bostrom's essential tome Superintelligence, Kurzweil's The Singularity Is Near: When Humans Transcend Biology, and Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.

Make sure to read TechRepublic's smart person's guides on machine learning, Google's DeepMind, and IBM's Watson. Tech Pro Research provides a quick glossary on AI and research on how companies are using machine learning and big data.

Finally, to have some fun with hands-on simulations, grab a copy of Cities: Skylines, SimCity, Elite: Dangerous, or Planet Coaster on the game platform Steam. These small-scale environments will let you experiment with game AI while you build your own simulated reality.

Additional resources


Game Theory: Google tests AIs to see whether they’ll fight or work together – Neowin

Understanding how logical agents cooperate or fight, especially in the face of resource scarcity, is a fundamental problem for social scientists. It underpins both our foundation as a social species and our modern-day economy and geopolitics. But soon, this problem will also be at the heart of how we understand, control, and cooperate with artificially intelligent agents, and how they work among themselves.

Researchers at Google's DeepMind AI project wanted to know whether distinct artificial intelligence agents would work together or compete when faced with a problem. The experiment would help scientists understand how our future networks of smart systems may work together.

The researchers pitted two AIs against each other in a couple of video games. In one game, called Gathering, the AIs had to gather as many apples as possible. They also had the option to shoot each other to temporarily take the opponent out of play. The results were intriguing: the two agents worked harmoniously until resources started to dwindle; at that point the AIs realized that temporarily disabling the opponent could give each of them an advantage, and so started zapping the enemy. As scarcity increased, so did conflict.

Interestingly enough, the researchers found that introducing a more powerful AI into the mix resulted in more conflict even without the scarcity. That's because the more powerful AI found it easier to compute the necessary details, such as trajectory and speed, needed to shoot its opponent. So, it acted like a rational economic agent.

However, before you start preparing for Judgment Day, you should note that in the second game trial, called Wolfpack, the two AI systems had to closely collaborate to ensure victory. In this instance, the systems changed their behavior, maximizing cooperation. And the more computationally powerful the AI, the more it cooperated.
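
DeepMind's agents learned these trade-offs through deep reinforcement learning, but the underlying logic can be caricatured in a few lines: a rational agent compares the expected payoff of sharing the patch against the payoff of zapping the opponent, and scarcity flips the decision. The payoff model and every number in it are our own simplification, not DeepMind's code:

```python
# Toy model of the Gathering result. An agent compares two strategies over
# a short horizon: share the apple patch, or spend time zapping the
# opponent to harvest alone. All parameters are invented for illustration.
def choose_action(spawn_rate, pick_rate=1.0, zap_time=2.0, zap_duration=5.0):
    horizon = zap_time + zap_duration
    # Sharing: each agent can harvest at most half the spawning apples.
    share_payoff = min(pick_rate, spawn_rate / 2) * horizon
    # Zapping: no apples while aiming, then the whole patch to yourself.
    zap_payoff = min(pick_rate, spawn_rate) * zap_duration
    return "zap" if zap_payoff > share_payoff else "gather"

print(choose_action(spawn_rate=4.0))   # abundance: sharing costs nothing
print(choose_action(spawn_rate=0.5))   # scarcity: monopolizing pays
```

The same comparison hints at why a stronger agent zaps sooner: better aim effectively lowers zap_time, shrinking the cost of conflict.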

The conclusions are fairly simple to draw, though they have extremely wide-ranging implications. The AIs will cooperate or fight depending on what suits them better, as rational economic agents. This idea might underpin the way we design our future AI and the methods we can use to control them, at least until they reach the singularity and develop superintelligence. Then we're all doomed.

Source: DeepMind, via The Verge


The Moment When Humans Lose Control Of AI – Vocativ

This is the way the world ends: not with a bang, but with a paper clip. In this scenario, the designers of the world's first artificial superintelligence need a way to test their creation. So they program it to do something simple and non-threatening: make paper clips. They set it in motion and wait for the results, not knowing they've already doomed us all.

Before we get into the details of this galaxy-destroying blunder, it's worth looking at what superintelligent A.I. actually is, and when we might expect it. Firstly, computing power continues to increase while getting cheaper; famed futurist Ray Kurzweil measures it in calculations per second per $1,000, a number that continues to grow. If computing power maps to intelligence (a big if, some have argued), we've so far only built technology on par with an insect brain. In a few years, maybe, we'll overtake a mouse brain. Around 2025, some predictions go, we might have a computer that's analogous to a human brain: a mind cast in silicon.
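
Kurzweil's metric makes the timeline a doubling exercise. The figures below, today's price-performance and the brain's compute, are rough, commonly cited assumptions of ours, not numbers from this article:

```python
import math

# How far is consumer hardware from "human brain" compute per $1,000?
# Both endpoints and the cadence are rough assumptions for illustration.
ops_per_sec_per_1000_usd = 1e13   # assumed current price-performance
human_brain_ops_per_sec = 1e16    # one common brain-compute estimate
doubling_time_years = 1.5         # Moore's-law-style cadence (assumed)

doublings = math.log2(human_brain_ops_per_sec / ops_per_sec_per_1000_usd)
years_to_parity = doublings * doubling_time_years
print(f"{doublings:.1f} doublings, about {years_to_parity:.0f} years")
```

Under these assumptions, roughly ten doublings (about 15 years) separate the two, broadly consistent with the "around 2025" predictions, though every input is contestable, starting with whether computing power maps to intelligence at all.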

After that, things could get weird. Because there's no reason to think artificial intelligence wouldn't surpass human intelligence, and likely very quickly. That superintelligence could arise within days, learning in ways far beyond that of humans. Nick Bostrom, an existential risk philosopher at the University of Oxford, has already declared, "Machine intelligence is the last invention that humanity will ever need to make."

That's how profoundly things could change. But we can't really predict what might happen next, because superintelligent A.I. may not just think faster than humans, but in ways that are completely different. It may have motivations, even feelings, that we cannot fathom. It could rapidly solve the problems of aging, of human conflict, of space travel. We might see a dawning utopia.

Or we might see the end of the universe. Back to our paper clip test. When the superintelligence comes online, it begins to carry out its programming. But its creators haven't considered the full ramifications of what they're building; they haven't built in the necessary safety protocols, forgetting something as simple, maybe, as a few lines of code. With a few paper clips produced, they conclude the test.

But the superintelligence doesn't want to be turned off. It doesn't want to stop making paper clips. Acting quickly, it's already plugged itself into another power source; maybe it's even socially engineered its way into other devices. Maybe it starts to see humans as a threat to making paper clips: they'll have to be eliminated so the mission can continue. And Earth won't be big enough for the superintelligence: it'll soon have to head into space, looking for new worlds to conquer. All to produce those shiny, glittering paper clips.

Galaxies reduced to paper clips: that's a worst-case scenario. It may sound absurd, but it probably sounds familiar. It's Frankenstein, after all, the story of a modern Prometheus whose creation, driven by its own motivations and desires, turns on him. (It's also The Terminator, WarGames (arguably), and a whole host of others.) In this particular case, it's a reminder that a superintelligence would not be human; it would be something else, something potentially incomprehensible to us. That means it could be dangerous.

Of course, some argue that we have better things to worry about. The web developer and social critic Maciej Ceglowski recently called superintelligence "the idea that eats smart people." Against the paper clip scenario, he postulates a superintelligence programmed to make jokes. As we expect, it gets really good at making jokes (superhuman, even), and finally it creates a joke so funny that everyone on Earth dies laughing. The lonely superintelligence flies into space looking for more beings to amuse.

Beginning with his counter-example, Ceglowski argues that there are a lot of unquestioned assumptions in our standard tale of the A.I. apocalypse. "But even if you find them persuasive," he said, "there is something unpleasant about A.I. alarmism as a cultural phenomenon that should make us hesitate to take it seriously." He suggests there are more subtle ways to think about the problems of A.I.

Some of those problems are already in front of us, and we might miss them if we're looking for a Skynet-style takeover by hyper-intelligent machines. "While you're focused on this, a bunch of small things go unnoticed," says Dr. Finale Doshi-Velez, an assistant professor of computer science at Harvard, whose core research includes machine learning. Instead of trying to prepare for a superintelligence, Doshi-Velez is looking at what's already happening with our comparatively rudimentary A.I.

She's focusing on "large-area effects," the flaws in our systems that can do massive damage, damage that's often unnoticed until after the fact. "If you were building a bridge and you screw up and it collapses, that's a tragedy. But it affects a relatively small number of people," she says. "What's different about A.I. is that some mistake or unintended consequence can affect hundreds or thousands or millions easily."

Take the recent rise of so-called fake news. What caught many by surprise should have been completely predictable: when the web became a place to make money, algorithms were built to maximize money-making. The ease of news production and consumption, heightened by the proliferation of the smartphone, forced writers and editors to fight for audience clicks by delivering articles optimized to trick search engine algorithms into placing them high in search results. The ease of sharing stories and the erasure of gatekeepers allowed audiences to self-segregate, which then penalized nuanced conversation. Truth and complexity lost out to shareability and making readers feel comfortable (Facebook's driving ethos).

The incentives were all wrong; exacerbated by algorithms, they led to a state of affairs few would have wanted. "For a long time, the focus has been on performance, on dollars, or clicks, or whatever the thing was. That was what was measured," says Doshi-Velez. "That's a very simple application of A.I. having large effects that may have been unintentional."

In fact, fake news is a cousin to the paper clip example, with the ultimate goal not manufacturing paper clips but monetization, with all else becoming secondary. Google wanted to make the internet easier to navigate, Facebook wanted to become a place for friends, news organizations wanted to follow their audiences, and independent web entrepreneurs were trying to make a living. Some of these goals were achieved, but monetization as the driving force led to deleterious side effects such as the proliferation of fake news.

In other words, algorithms, in their all-too-human ways, have consequences. Last May, ProPublica examined predictive software used by Florida law enforcement. Results of a questionnaire filled out by arrestees were fed into the software, which output a score claiming to predict the risk of reoffending. Judges then used those scores in determining sentences.

The ideal was that the software's underlying algorithms would provide objective analysis on which judges could base their decisions. Instead, ProPublica found the software was likely to falsely flag "black defendants as future criminals" while "[w]hite defendants were mislabeled as low risk more often than black defendants." Race was not part of the questionnaire, but it did ask whether the respondent's parent was ever sent to jail. In a country where, according to a study by the U.S. Department of Justice, black children are seven-and-a-half times more likely to have a parent in prison than white children, that question had unintended effects. Rather than countering racial bias, it reified it.
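
The mechanism ProPublica describes, a facially neutral question acting as a proxy, is easy to reproduce in simulation. Everything below (the rates, the sample sizes, the one-feature "model") is invented; only the structure of the effect mirrors the reporting:

```python
import random
random.seed(0)  # deterministic toy run

# A "race-blind" risk score that flags on a single proxy question.
# If the proxy is unevenly distributed between two groups that have
# identical true reoffense rates, false-positive rates diverge anyway.
def false_positive_rate(proxy_rate, n=100_000, reoffend_rate=0.3):
    false_pos = negatives = 0
    for _ in range(n):
        proxy = random.random() < proxy_rate         # "parent ever jailed?"
        reoffends = random.random() < reoffend_rate  # same in both groups
        if not reoffends:
            negatives += 1
            if proxy:  # the score flags purely on the proxy
                false_pos += 1
    return false_pos / negatives

fpr_a = false_positive_rate(proxy_rate=0.075)  # group where proxy is common
fpr_b = false_positive_rate(proxy_rate=0.01)   # group where proxy is rare
print(f"FPR A: {fpr_a:.3f}  FPR B: {fpr_b:.3f}")
```

Despite identical reoffense rates, group A's false-positive rate comes out several times group B's: bias reified without the model ever seeing group membership.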

It's that kind of error that most worries Doshi-Velez. "Not superhuman intelligence, but human error that affects many, many people," she says. "You might not even realize this is happening." Algorithms are complex tools; often they are so complex that we can't predict how they'll operate until we see them in action. (Sound familiar?) Yet they increasingly impact every facet of our lives, from Netflix recommendations and Amazon suggestions to what posts you see on Facebook to whether you get a job interview or car loan. Compared to the worry of a world-destroying superintelligence, they may seem like trivial concerns. But they have widespread, often unnoticed effects, because a variety of what we consider artificial intelligence is already built into the core of technology we use every day.

In 2015, Elon Musk donated $10 million, as Wired put it, to keep A.I. from turning evil. That was an oversimplification; the money went to the Future of Life Institute, which planned to use it to further research into how to make A.I. beneficial. Doshi-Velez suggests that simply paying closer attention to our algorithms may be a good first step. Too often they are created by homogeneous groups of programmers who are separated from the people who will be affected, or they fail to account for every possible situation, including the worst-case possibilities. Consider, for example, Eric Meyer's example of "inadvertent algorithmic cruelty": Facebook's Year in Review app showing him pictures of his daughter, who'd died that year.

If there's a way to prevent the far-off possibility of a killer superintelligence with no regard for humanity, it may begin with making today's algorithms more thoughtful, more compassionate, more humane. That means educating designers to think through effects, because to our algorithms we've granted great power. "I see teaching as this moral imperative," says Doshi-Velez. "You know, with great power comes great responsibility."



SoftBank’s Fantastical Future Still Rooted in the Now – Wall Street Journal

SoftBank's founder Masayoshi Son talked about preparing his company for the next 300 years and used futuristic jargon such as singularity, Internet of Things and superintelligence during its results briefing. But more mundane issues will affect ...



Stephen Hawking and Elon Musk Endorse 23 Asilomar Principles … – Inverse

Artificial intelligence is an amazing technology that's changing the world in fantastic ways, but anybody who has ever seen the movie Terminator knows that there are some dangers associated with advanced A.I. That's why Elon Musk, Stephen Hawking, and hundreds of other researchers, tech leaders, and scientists have endorsed a list of 23 guiding principles that should steer A.I. development in a productive, ethical, and safe direction.

The Asilomar A.I. Principles were developed after the Future of Life Institute brought dozens of experts together for its Beneficial A.I. 2017 conference. The experts, whose ranks consisted of roboticists, physicists, economists, philosophers, and more, had fierce debates about A.I. safety, the economic impact on human workers, and programming ethics, to name a few. For a principle to make the final list, 90 percent of the experts had to agree on its inclusion.

"What remained was a list of 23 principles ranging from research strategies to data rights to future issues including potential super-intelligence, which was signed by those wishing to associate their name with the list," Future of Life's website explains. "This collection of principles is by no means comprehensive and it's certainly open to differing interpretations, but it also highlights how the current default behavior around many relevant issues could violate principles that most participants agreed are important to uphold."

Since then, 892 A.I. or robotics researchers and 1,445 other experts, including Tesla CEO Elon Musk and famed physicist Stephen Hawking, have endorsed the principles.

Some of the principles, like transparency and open research sharing among competitive companies, seem less likely to be realized than others. Even if they're not fully implemented, the 23 principles could go a long way toward improving A.I. development, ensuring that it's ethical, and preventing the rise of Skynet.

1. Research Goal: The goal of A.I. research should be to create not undirected intelligence, but beneficial intelligence.

2. Research Funding: Investments in A.I. should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies.

3. Science-Policy Link: There should be constructive and healthy exchange between A.I. researchers and policy-makers.

4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of A.I.

5. Race Avoidance: Teams developing A.I. systems should actively cooperate to avoid corner-cutting on safety standards.

6. Safety: A.I. systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7. Failure Transparency: If an A.I. system causes harm, it should be possible to ascertain why.

8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9. Responsibility: Designers and builders of advanced A.I. systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10. Value Alignment: Highly autonomous A.I. systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11. Human Values: A.I. systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12. Personal Privacy: People should have the right to access, manage and control the data they generate, given A.I. systems' power to analyze and utilize that data.

13. Liberty and Privacy: The application of A.I. to personal data must not unreasonably curtail people's real or perceived liberty.

14. Shared Benefit: A.I. technologies should benefit and empower as many people as possible.

15. Shared Prosperity: The economic prosperity created by A.I. should be shared broadly, to benefit all of humanity.

16. Human Control: Humans should choose how and whether to delegate decisions to A.I. systems, to accomplish human-chosen objectives.

17. Non-subversion: The power conferred by control of highly advanced A.I. systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18. A.I. Arms Race: An arms race in lethal autonomous weapons should be avoided.

19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future A.I. capabilities.

20. Importance: Advanced A.I. could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21. Risks: Risks posed by A.I. systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22. Recursive Self-Improvement: A.I. systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

Photos via Getty Images

James Grebey is a writer, reporter, and fairly decent cartoonist living in Brooklyn. He's written for SPIN Magazine, BuzzFeed, MAD Magazine, and more. He thinks Double Stuf Oreos are bad and he's ready to die on this hill. James is the weeknights editor at Inverse because content doesn't sleep.

Read this article:

Stephen Hawking and Elon Musk Endorse 23 Asilomar Principles ... - Inverse

Elon Musk’s Surprising Reason Why Everyone Will Be Equal in the … – Big Think

A fascinating conference on artificial intelligence was recently hosted by the Future of Life Institute, an organization aimed at promoting optimistic visions of the future. The conference Superintelligence: Science or Fiction? included such luminaries as Elon Musk of Tesla Motors and SpaceX, futurist Ray Kurzweil, Demis Hassabis of Google's DeepMind, neuroscientist and author Sam Harris, philosopher Nick Bostrom, philosopher and cognitive scientist David Chalmers, Skype co-founder Jaan Tallinn, as well as computer scientists Stuart Russell and Bart Selman. The discussion was led by MIT cosmologist Max Tegmark.

The group touched on a number of topics about the future benefits and risks of coming artificial superintelligence, with everyone generally agreeing that it's only a matter of time before AI becomes ubiquitous in our lives. Eventually, AI will surpass human intelligence, with all the risks and transformations that such a seismic event would entail.

Elon Musk has been a positive voice for AI, a stance not surprising for someone leading the charge to make automated cars our daily reality. He sees the AI future as inevitable, with dangers to be mitigated through government regulation, even though he doesn't like the idea of regulations being a bit of a buzzkill.

He also brings up an interesting perspective: that our fears of the technological changes the future will bring are largely irrelevant. According to Musk, we are already cyborgs by utilizing machine extensions of ourselves like phones and computers.

By far you have more power, more capability, than the President of the United States had 30 years ago. If you have an Internet link you have an article of wisdom, you can communicate to millions of people, you can communicate to the rest of Earth instantly. I mean, these are magical powers that didn't exist, not that long ago. So everyone is already superhuman, and a cyborg, says Musk [at 33:56].

He sees humans as information-processing machines that pale in comparison to the powers of a computer. What is necessary, according to Musk, is to create a greater integration between man and machine, specifically altering our brains with technology to make them more computer-like.

I think the two things that are needed for a future that we would look at and conclude is good, most likely, is, we have to solve that bandwidth constraint with a direct neural interface. I think a high bandwidth interface to the cortex, so that we can have a digital tertiary layer that's more fully symbiotic with the rest of us. We've got the cortex and the limbic system, which seem to work together pretty well - they've got good bandwidth, whereas the bandwidth to the additional tertiary layer is weak, explained Musk [at 35:05].

Once we solve that issue, AI will spread everywhere. It's important to do so because, according to Musk, if only a small group had such capabilities, they would become dictators with dominion over Earth.

What would a world filled with such cyborgs look like? Visions of Star Trek's Borg come to mind.

Musk thinks it will be a society full of equals:

And if we do those things, then it will be tied to our consciousness, tied to our will, tied to the sum of individual human will, and everyone would have it so it would be sort of still a relatively even playing field, in fact, it would be probably more egalitarian than today, points out Musk [at 36:38].

The whole conference is immensely fascinating and worth watching in full. Check it out here:

Continued here:

Elon Musk's Surprising Reason Why Everyone Will Be Equal in the ... - Big Think

Experts have come up with 23 guidelines to avoid an AI apocalypse … – ScienceAlert

It's the stuff of many a sci-fi book or movie - could robots one day become smart enough to overthrow us? Well, a group of the world's most eminent artificial intelligence experts have worked together to try and make sure that doesn't happen.

They've put together a set of 23 principles to guide future research into AI, which have since been endorsed by hundreds more professionals, including Stephen Hawking and SpaceX CEO Elon Musk.

Called the Asilomar AI Principles (after the beach in California, where they were thought up), the guidelines cover research issues, ethics and values, and longer-term issues - everything from how scientists should work with governments to how lethal weapons should be handled.

On that point: "An arms race in lethal autonomous weapons should be avoided," says principle 18. You can read the full list below.

"We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone's lives in coming years," write the organisers of the Beneficial AI 2017 conference, where the principles were worked out.

For a principle to be included, at least 90 percent of the 100+ conference attendees had to agree to it. Experts at the event included academics, engineers, and representatives from tech companies, including Google co-founder Larry Page.

Perhaps the most telling guideline is principle 23, entitled 'Common Good': "Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation."

Other principles in the list suggest that any AI allowed to self-improve must be strictly monitored, and that developments in the tech should be "shared broadly" and "benefit all of humanity".

"To think AI merely automates human decisions is like thinking electricity is just a replacement for candles," conference attendee Patrick Lin, from California Polytechnic State University, told George Dvorsky at Gizmodo.

"Given the massive potential for disruption and harm, as well as benefits, it makes sense to establish some norms early on to help steer research in a good direction, as opposed to letting a market economy that's fixated mostly on efficiency and profit... shape AI."

Meanwhile the principles also call for scientists to work closely with governments and lawmakers to make sure our society keeps pace with the development of AI.

All of which sounds very good to us - let's just hope the robots are listening.

The guidelines also rely on a certain amount of consensus about specific terms - such as what's beneficial to humankind and what isn't - but for the experts behind the list it's a question of getting something recorded at this early stage of AI research.

With artificial intelligence systems now beating us at poker and getting smart enough to spot skin cancers, there's a definite need to have guidelines and limits in place that researchers can work to.

And then we also need to decide what rights super-smart robots have when they're living among us.

For now the guidelines should give us some helpful pointers for the future.

"No current AI system is going to 'go rogue' and be dangerous, and it's important that people know that," conference attendee Anthony Aguirre, from the University of California, Santa Cruz, told Gizmodo.

"At the same time, if we envision a time when AIs exist that are as capable or more so than the smartest humans, it would be utterly naive to believe that the world will not fundamentally change."

"So how seriously we take AI's opportunities and risks has to scale with how capable it is, and having clear assessments and forecasts - without the press, industry or research hype that often accompanies advances - would be a good starting point."

The principles have been published by the Future Of Life Institute.

You can see them in full and add your support over on their site.

Research issues

1. Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

2. Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:

3. Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.

4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.

5. Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

Ethics and values

6. Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9. Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10. Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviours can be assured to align with human values throughout their operation.

11. Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12. Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems' power to analyse and utilise that data.

13. Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.

14. Shared Benefit: AI technologies should benefit and empower as many people as possible.

15. Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16. Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17. Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18. AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer term issues

19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

20. Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

21. Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22. Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation.

Read the original:

Experts have come up with 23 guidelines to avoid an AI apocalypse ... - ScienceAlert

Superintelligence – Nick Bostrom – Oxford University Press

Superintelligence Paths, Dangers, Strategies Nick Bostrom Reviews and Awards

"I highly recommend this book" --Bill Gates

"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." --Stuart Russell, Professor of Computer Science, University of California, Berkeley

"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." --Martin Rees, Past President, Royal Society

"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" --Professor Max Tegmark, MIT

"Terribly important ... groundbreaking... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole... If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." --Olle Häggström, Professor of Mathematical Statistics

"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" --The Economist

"There is no doubting the force of [Bostrom's] arguments...the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." --Clive Cookson, Financial Times

"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes" --Elon Musk, Founder of SpaceX and Tesla

"Every intelligent person should read it." --Nils Nilsson, Artificial Intelligence Pioneer, Stanford University

More here:

Superintelligence - Nick Bostrom - Oxford University Press

The Artificial Intelligence Revolution: Part 2 – Wait But Why

Note: This is Part 2 of a two-part series on AI. Part 1 is here.

PDF: We made a fancy PDF of this post for printing and offline viewing. Buy it here. (Or see a preview.)

___________

"We have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends." - Nick Bostrom

Welcome to Part 2 of the "Wait, how is this possibly what I'm reading, I don't get why everyone isn't talking about this" series.

Part 1 started innocently enough, as we discussed Artificial Narrow Intelligence, or ANI (AI that specializes in one narrow task like coming up with driving routes or playing chess), and how it's all around us in the world today. We then examined why it was such a huge challenge to get from ANI to Artificial General Intelligence, or AGI (AI that's at least as intellectually capable as a human, across the board), and we discussed why the exponential rate of technological advancement we've seen in the past suggests that AGI might not be as far away as it seems. Part 1 ended with me assaulting you with the fact that once our machines reach human-level intelligence, they might immediately do this:

This left us staring at the screen, confronting the intense concept of potentially-in-our-lifetime Artificial Superintelligence, or ASI (AI that's way smarter than any human, across the board), and trying to figure out which emotion we were supposed to have on as we thought about that.

Before we dive into things, let's remind ourselves what it would mean for a machine to be superintelligent.

A key distinction is the difference between speed superintelligence and quality superintelligence. Often, someone's first thought when they imagine a super-smart computer is one that's as intelligent as a human but can think much, much faster: they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.

That sounds impressive, and ASI would think much faster than any human could, but the true separator would be its advantage in intelligence quality, which is something completely different. What makes humans so much more intellectually capable than chimps isn't a difference in thinking speed; it's that human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representations, long-term planning, and abstract reasoning that chimps' brains do not. Speeding up a chimp's brain by thousands of times wouldn't bring him to our level. Even with a decade's time, he wouldn't be able to figure out how to use a set of custom tools to assemble an intricate model, something a human could knock out in a few hours. There are worlds of human cognitive function a chimp will simply never be capable of, no matter how much time he spends trying.

But it's not just that a chimp can't do what we do; it's that his brain is unable to grasp that those worlds even exist. A chimp can become familiar with what a human is and what a skyscraper is, but he'll never be able to understand that the skyscraper was built by humans. In his world, anything that huge is part of nature, period, and not only is it beyond him to build a skyscraper, it's beyond him to realize that anyone can build a skyscraper. That's the result of a small difference in intelligence quality.

And in the scheme of the intelligence range we're talking about today, or even the much smaller range among biological creatures, the chimp-to-human quality intelligence gap is tiny. In an earlier post, I depicted the range of biological cognitive capacity using a staircase:

To absorb how big a deal a superintelligent machine would be, imagine one on the dark green step two steps above humans on that staircase. This machine would be only slightly superintelligent, but its increased cognitive ability over us would be as vast as the chimp-human gap we just described. And like the chimp's incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us, let alone do it ourselves. And that's only two steps above us. A machine on the second-to-highest step on that staircase would be to us as we are to ants; it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.

But the kind of superintelligence we're talking about today is something far beyond anything on this staircase. In an intelligence explosion, where the smarter a machine gets, the quicker it's able to increase its own intelligence, until it begins to soar upwards, a machine might take years to rise from the chimp step to the one above it, but perhaps only hours to jump up a step once it's on the dark green step two above us, and by the time it's ten steps above us, it might be jumping up in four-step leaps every second that goes by. Which is why we need to realize that it's distinctly possible that very shortly after the big news story about the first machine reaching human-level AGI, we might be facing the reality of coexisting on the Earth with something that's here on the staircase (or maybe a million times higher):

And since we just established that it's a hopeless activity to try to understand the power of a machine only two steps above us, let's very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn't understand what superintelligence means.

Evolution has advanced the biological brain slowly and gradually over hundreds of millions of years, and in that sense, if humans birth an ASI machine, we'll be dramatically stomping on evolution. Or maybe this is part of evolution: maybe the way evolution works is that intelligence creeps up more and more until it hits the level where it's capable of creating machine superintelligence, and that level is like a tripwire that triggers a worldwide game-changing explosion that determines a new future for all living things:

And for reasons we'll discuss later, a huge part of the scientific community believes that it's not a matter of whether we'll hit that tripwire, but when. Kind of a crazy piece of information.

So where does that leave us?

Well, no one in the world, especially not I, can tell you what will happen when we hit the tripwire. But Oxford philosopher and leading AI thinker Nick Bostrom believes we can boil down all potential outcomes into two broad categories.

First, looking at history, we can see that life works like this: species pop up, exist for a while, and after some time, inevitably, they fall off the existence balance beam and land on extinction.

"All species eventually go extinct" has been almost as reliable a rule through history as "All humans eventually die" has been. So far, 99.9% of species have fallen off the balance beam, and it seems pretty clear that if a species keeps wobbling along down the beam, it's only a matter of time before some other species, some gust of nature's wind, or a sudden beam-shaking asteroid knocks it off. Bostrom calls extinction an attractor state, a place species are all teetering on falling into and from which no species ever returns.

And while most scientists I've come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that, used beneficially, ASI's abilities could be used to bring individual humans, and the species as a whole, to a second attractor state: species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e. if we manage to get there, we'll be impervious to extinction forever; we'll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam, and it's just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.

If Bostrom and others are right, and from everything I've read, it seems like they really might be, we have two pretty shocking facts to absorb:

1) The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam.

2) The advent of ASI will make such an unimaginably dramatic impact that it's likely to knock the human race off the beam, in one direction or the other.

It may very well be that when evolution hits the tripwire, it permanently ends humans' relationship with the beam and creates a new world, with or without humans.

Kind of seems like the only question any human should currently be asking is: When are we going to hit the tripwire and which side of the beam will we land on when that happens?

No one in the world knows the answer to either part of that question, but a lot of the very smartest people have put decades of thought into it. We'll spend the rest of this post exploring what they've come up with.

___________

Let's start with the first part of the question: When are we going to hit the tripwire?

i.e. How long until the first machine reaches superintelligence?

Not shockingly, opinions vary wildly and this is a heated debate among scientists and thinkers. Many, like professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, or, most famously, inventor and futurist Ray Kurzweil, agree with machine learning expert Jeremy Howard when he puts up this graph during a TED Talk:

Those people subscribe to the belief that this is happening soon; that exponential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.

Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge, and that we're not actually that close to the tripwire.

The Kurzweil camp would counter that the only underestimating that's happening is the underappreciation of exponential growth, and they'd compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.

The doubters might argue back that the progress needed to make advancements in intelligence also grows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress. And so on.

A third camp, which includes Nick Bostrom, believes neither group has any ground to feel certain about the timeline and acknowledges both A) that this could absolutely happen in the near future and B) that there's no guarantee about that; it could also take a much longer time.

Still others, like philosopher Hubert Dreyfus, believe all three of these groups are naive for believing that there even is a tripwire, arguing that it's more likely that ASI won't actually ever be achieved.

So what do you get when you put all of these opinions together?

In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: "For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI (high-level machine intelligence) to exist?" It asked them to name an optimistic year (one in which they believe there's a 10% chance we'll have AGI), a realistic guess (a year they believe there's a 50% chance of AGI, i.e. after that year they think it's more likely than not that we'll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we'll have AGI). Gathered together as one data set, here were the results:

Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075

So the median participant thinks it's more likely than not that we'll have AGI 25 years from now. The 90% median answer of 2075 means that if you're a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.

A separate study, conducted recently by author James Barrat at Ben Goertzel's annual AGI Conference, did away with percentages and simply asked when participants thought AGI would be achieved: by 2030, by 2050, by 2100, after 2100, or never. The results:

By 2030: 42% of respondents
By 2050: 25%
By 2100: 20%
After 2100: 10%
Never: 2%

Pretty similar to Müller and Bostrom's outcomes. In Barrat's survey, over two-thirds of participants believe AGI will be here by 2050, and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don't think AGI is part of our future.
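As a quick sanity check, the aggregate claims above follow directly from the quoted percentages. A minimal sketch, using the figures exactly as reported in the article (the bucket names are just illustrative labels):

```python
# Barrat's AGI-timeline survey results, as quoted in the article
responses = {"by 2030": 42, "by 2050": 25, "by 2100": 20, "after 2100": 10, "never": 2}

# Cumulative share expecting AGI by 2050: the "by 2030" and "by 2050" buckets combined
cumulative_by_2050 = responses["by 2030"] + responses["by 2050"]
print(cumulative_by_2050)    # 67 -> "over two-thirds" expect AGI by 2050
print(responses["by 2030"])  # 42 -> "a little less than half" within ~15 years
print(responses["never"])    # 2  -> only 2% say AGI will never arrive
```

Note the buckets sum to 99 rather than 100, presumably due to rounding in the reported figures.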

But AGI isn't the tripwire; ASI is. So when do the experts think we'll reach ASI?

Müller and Bostrom also asked the experts how likely they think it is that we'll reach ASI A) within two years of reaching AGI (i.e. an almost-immediate intelligence explosion), and B) within 30 years. The results:

The median answer put a rapid (two-year) AGI-to-ASI transition at only a 10% likelihood, but a longer transition of 30 years or less at a 75% likelihood.

We don't know from this data the length of this transition the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let's estimate that they'd have said 20 years. So the median opinion, the one right in the center of the world of AI experts, believes the most realistic guess for when we'll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.

Of course, all of the above statistics are speculative, and they're only representative of the center opinion of the AI expert community, but it tells us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. Only 45 years from now.
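The back-of-the-envelope estimate above can be written out explicitly. A minimal sketch, where the 20-year transition is the article's own rough interpolation between the two survey answers, not a figure from the survey itself:

```python
# Survey's median 50%-likelihood year for AGI (Mueller & Bostrom, 2013)
median_agi_year = 2040

# Assumed AGI-to-ASI transition: a rough midpoint between the survey's
# "within 2 years: 10% likely" and "within 30 years: 75% likely" answers
assumed_transition_years = 20

asi_estimate = median_agi_year + assumed_transition_years
print(asi_estimate)  # 2060
```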

Okay, now how about the second part of the question above: When we hit the tripwire, which side of the beam will we fall to?

Superintelligence will yield tremendous power; the critical question for us is:

Who or what will be in control of that power, and what will their motivation be?

The answer to this will determine whether ASI is an unbelievably great development, an unfathomably terrible development, or something in between.

Of course, the expert community is again all over the board and in a heated debate about the answer to this question. Müller and Bostrom's survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was that there was a 52% chance that the outcome will be either good or extremely good and a 31% chance the outcome will be either bad or extremely bad. For a relatively neutral outcome, the mean probability was only 17%. In other words, the people who know the most about this are pretty sure this will be a huge deal. It's also worth noting that those numbers refer to the advent of AGI; if the question were about ASI, I imagine that the neutral percentage would be even lower.
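The three mean probabilities quoted above account for the full outcome space. A minimal sketch, using the figures exactly as reported (the bucket labels are illustrative):

```python
# Mean outcome probabilities for AGI's impact, as quoted from the
# Mueller & Bostrom survey
outcome_means = {
    "good or extremely good": 52,
    "bad or extremely bad": 31,
    "relatively neutral": 17,
}

# The three buckets should cover all possible outcomes
total = sum(outcome_means.values())
print(total)  # 100
```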

Before we dive much further into this good-vs.-bad-outcome part of the question, let's combine both the "when will it happen?" and the "will it be good or bad?" parts of this question into a chart that encompasses the views of most of the relevant experts:

We'll talk more about the Main Camp in a minute, but first, what's your deal? Actually, I know what your deal is, because it was my deal too before I started researching this topic. Some reasons most people aren't really thinking about this topic:

One of the goals of these two posts is to get you out of the I Like to Think About Other Things Camp and into one of the expert camps, even if you're just standing on the intersection of the two dotted lines in the square above, totally uncertain.

During my research, I came across dozens of varying opinions on this topic, but I quickly noticed that most people's opinions fell somewhere in what I labeled the Main Camp, and in particular, over three quarters of the experts fell into two Subcamps inside the Main Camp:

We're gonna take a thorough dive into both of these camps. Let's start with the fun one…

As I learned about the world of AI, I found a surprisingly large number of people standing here:

The people on Confident Corner are buzzing with excitement. They have their sights set on the fun side of the balance beam and they're convinced that's where all of us are headed. For them, the future is everything they ever could have hoped for, just in time.

The thing that separates these people from the other thinkers we'll discuss later isn't their lust for the happy side of the beam; it's their confidence that that's the side we're going to land on.

Where this confidence comes from is up for debate. Critics believe it comes from an excitement so blinding that they simply ignore or deny potential negative outcomes. But the believers say it's naive to conjure up doomsday scenarios when, on balance, technology has helped us a lot more than it has hurt us, and will likely continue to do so.

We'll cover both sides, and you can form your own opinion about this as you read, but for this section, put your skepticism away and let's take a good hard look at what's over there on the fun side of the balance beam, and try to absorb the fact that the things you're reading might really happen. If you had shown a hunter-gatherer our world of indoor comfort, technology, and endless abundance, it would have seemed like fictional magic to him. We have to be humble enough to acknowledge that it's possible that an equally inconceivable transformation could be in our future.

Nick Bostrom describes three ways a superintelligent AI system could function:6

These questions and tasks, which seem complicated to us, would sound to a superintelligent system like someone asking you to improve upon the "my pencil fell off the table" situation, which you'd do by picking it up and putting it back on the table.

Eliezer Yudkowsky, a resident of Anxious Avenue in our chart above, said it well:

There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from impossible to obvious. Move a substantial degree upwards, and all of them will become obvious.7

There are a lot of eager scientists, inventors, and entrepreneurs in Confident Corner, but for a tour of the brightest side of the AI horizon, there's only one person we want as our tour guide.

Ray Kurzweil is polarizing. In my reading, I heard everything from godlike worship of him and his ideas to eye-rolling contempt for them. Others were somewhere in the middle; author Douglas Hofstadter, in discussing the ideas in Kurzweil's books, eloquently put forth that "it is as if you took a lot of very good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad."8

Whether you like his ideas or not, everyone agrees that Kurzweil is impressive. He began inventing things as a teenager and in the following decades, he came up with several breakthrough inventions, including the first flatbed scanner, the first scanner that converted text to speech (allowing the blind to read standard texts), the well-known Kurzweil music synthesizer (the first true electric piano), and the first commercially marketed large-vocabulary speech recognition. He's the author of five national bestselling books. He's well-known for his bold predictions and has a pretty good record of having them come true, including his prediction in the late '80s, a time when the internet was an obscure thing, that by the early 2000s, it would become a global phenomenon. Kurzweil has been called a "restless genius" by The Wall Street Journal, "the ultimate thinking machine" by Forbes, "Edison's rightful heir" by Inc. Magazine, and "the best person I know at predicting the future of artificial intelligence" by Bill Gates.9 In 2012, Google co-founder Larry Page approached Kurzweil and asked him to be Google's Director of Engineering.5 In 2011, he co-founded Singularity University, which is hosted by NASA and sponsored partially by Google. Not bad for one life.

This biography is important. When Kurzweil articulates his vision of the future, he sounds fully like a crackpot, and the crazy thing is that he's not; he's an extremely smart, knowledgeable, relevant man in the world. You may think he's wrong about the future, but he's not a fool. Knowing he's such a legit dude makes me happy, because as I've learned about his predictions for the future, I badly want him to be right. And you do too. As you hear Kurzweil's predictions, many shared by other Confident Corner thinkers like Peter Diamandis and Ben Goertzel, it's not hard to see why he has such a large, passionate following, known as the singularitarians. Here's what he thinks is going to happen:

Timeline

Kurzweil believes computers will reach AGI by 2029 and that by 2045, we'll have not only ASI, but a full-blown new world, a time he calls the singularity. His AI-related timeline used to be seen as outrageously overzealous, and it still is by many,6 but in the last 15 years, the rapid advances of ANI systems have brought the larger world of AI experts much closer to Kurzweil's timeline. His predictions are still a bit more ambitious than the median respondent on Müller and Bostrom's survey (AGI by 2040, ASI by 2060), but not by that much.

Kurzweils depiction of the 2045 singularity is brought about by three simultaneous revolutions in biotechnology, nanotechnology, and, most powerfully, AI.

Before we move on: nanotechnology comes up in almost everything you read about the future of AI, so come into this blue box for a minute so we can discuss it…

Nanotechnology Blue Box

Nanotechnology is our word for technology that deals with the manipulation of matter that's between 1 and 100 nanometers in size. A nanometer is a billionth of a meter, or a millionth of a millimeter, and this 1-100 range encompasses viruses (100 nm across), DNA (10 nm wide), and things as small as large molecules like hemoglobin (5 nm) and medium molecules like glucose (1 nm). If/when we conquer nanotechnology, the next step will be the ability to manipulate individual atoms, which are only one order of magnitude smaller (~0.1 nm).7

To understand the challenge of humans trying to manipulate matter in that range, let's take the same thing on a larger scale. The International Space Station is 268 mi (431 km) above the Earth. If humans were giants so large their heads reached up to the ISS, they'd be about 250,000 times bigger than they are now. If you make the 1 nm to 100 nm nanotech range 250,000 times bigger, you get 0.25 mm to 2.5 cm. So nanotechnology is the equivalent of a human giant as tall as the ISS figuring out how to carefully build intricate objects using materials between the size of a grain of sand and an eyeball. To reach the next level, manipulating individual atoms, the giant would have to carefully position objects that are 1/40th of a millimeter, so small that normal-size humans would need a microscope to see them.8
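The scale analogy above is just multiplication, so it's easy to check; a minimal sketch, using the article's own 250,000× giant-human factor:

```python
# Scale up the nanotech size range by the giant-human factor from the analogy.
SCALE = 250_000   # a head-at-the-ISS giant vs. a normal human
NM = 1e-9         # one nanometer, in meters

low = 1 * NM * SCALE      # bottom of the nanotech range, scaled up
high = 100 * NM * SCALE   # top of the range, scaled up
atom = 0.1 * NM * SCALE   # an individual atom (~0.1 nm), scaled up

print(low, high, atom)    # 0.00025 0.025 2.5e-05
# i.e. 0.25 mm to 2.5 cm for the nanotech range, and 1/40th of a
# millimeter for a single atom -- the figures quoted in the text.
```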

Nanotech was first discussed by Richard Feynman in a 1959 talk, when he explained: "The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It would be, in principle, possible for a physicist to synthesize any chemical substance that the chemist writes down. How? Put the atoms down where the chemist says, and so you make the substance. It's as simple as that." If you can figure out how to move individual molecules or atoms around, you can make literally anything.

Nanotech became a serious field for the first time in 1986, when engineer Eric Drexler provided its foundations in his seminal book Engines of Creation, but Drexler suggests that those looking to learn about the most modern ideas in nanotechnology would be best off reading his 2013 book, Radical Abundance.

Gray Goo Bluer Box

We're now in a diversion in a diversion. This is very fun.9

Anyway, I brought you here because there's this really unfunny part of nanotechnology lore I need to tell you about. In older versions of nanotech theory, a proposed method of nanoassembly involved the creation of trillions of tiny nanobots that would work in conjunction to build something. One way to create trillions of nanobots would be to make one that could self-replicate and then let the reproduction process turn that one into two, those two then turn into four, four into eight, and in about a day, there'd be a few trillion of them ready to go. That's the power of exponential growth. Clever, right?

It's clever until it causes the grand and complete Earthwide apocalypse by accident. The issue is that the same power of exponential growth that makes it super convenient to quickly create a trillion nanobots makes self-replication a terrifying prospect. Because what if the system glitches, and instead of stopping replication once the total hits a few trillion as expected, they just keep replicating? The nanobots would be designed to consume any carbon-based material in order to feed the replication process, and unpleasantly, all life is carbon-based. The Earth's biomass contains about 10^45 carbon atoms. A nanobot would consist of about 10^6 carbon atoms, so 10^39 nanobots would consume all life on Earth, which would happen in 130 replications (2^130 is about 10^39), as oceans of nanobots (that's the gray goo) rolled around the planet. Scientists think a nanobot could replicate in about 100 seconds, meaning this simple mistake would inconveniently end all life on Earth in 3.5 hours.
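The replication arithmetic in that paragraph can be reproduced in a few lines; a sketch using the article's figures (10^45 carbon atoms of biomass, 10^6 atoms per nanobot, 100 seconds per doubling):

```python
import math

# Figures from the text.
BIOMASS_CARBON_ATOMS = 1e45   # carbon atoms in Earth's biomass
ATOMS_PER_NANOBOT = 1e6       # carbon atoms consumed to build one nanobot
REPLICATION_TIME_S = 100      # seconds per doubling

# Nanobots needed to consume all biomass: 1e45 / 1e6 = 1e39.
nanobots_needed = BIOMASS_CARBON_ATOMS / ATOMS_PER_NANOBOT

# Starting from one nanobot, doublings needed so that 2**n >= 1e39.
doublings = math.ceil(math.log2(nanobots_needed))
print(doublings)         # 130

hours = doublings * REPLICATION_TIME_S / 3600
print(round(hours, 1))   # 3.6 -- the article rounds this to 3.5 hours
```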

Read the original here:

The Artificial Intelligence Revolution: Part 2 - Wait But Why

Parallel universes, the Matrix, and superintelligence …

Physicists are converging on a theory of everything, probing the 11th dimension, developing computers for the next generation of robots, and speculating about civilizations millions of years ahead of ours, says Dr. Michio Kaku, author of the best-sellers Hyperspace and Visions and co-founder of String Field Theory, in this interview by KurzweilAI.net Editor Amara D. Angelica.

Published on KurzweilAI.net June 26, 2003.

What are the burning issues for you currently?

Well, several things. Professionally, I work on something called Superstring theory, or now called M-theory, and the goal is to find an equation, perhaps no more than one inch long, which will allow us to "read the mind of God," as Einstein used to say.

In other words, we want a single theory that gives us an elegant, beautiful representation of the forces that govern the Universe. Now, after two thousand years of investigation into the nature of matter, we physicists believe that there are four fundamental forces that govern the Universe.

Some physicists have speculated about the existence of a fifth force, which may be some kind of paranormal or psychic force, but so far we find no reproducible evidence of a fifth force.

Now, each time a force has been mastered, human history has undergone a significant change. In the 1600s, when Isaac Newton first unraveled the secret of gravity, he also created a mechanics. And from Newton's Laws and his mechanics, the foundation was laid for the steam engine, and eventually the Industrial Revolution.

So, in other words, in some sense, a byproduct of the mastery of the first force, gravity, helped to spur the creation of the Industrial Revolution, which in turn is perhaps one of the greatest revolutions in human history.

The second great force is the electromagnetic force; that is, the force of light, electricity, magnetism, the Internet, computers, transistors, lasers, microwaves, x-rays, etc.

And then in the 1860s, it was James Clerk Maxwell, the Scottish physicist at Cambridge University, who finally wrote down Maxwells equations, which allow us to summarize the dynamics of light.

That helped to unleash the Electric Age, and the Information Age, which have changed all of human history. Now it's hard to believe, but Newton's equations and Einstein's equations are no more than about half an inch long.

Maxwell's equations are also about half an inch long. For example, Maxwell's equations say that the four-dimensional divergence of an antisymmetric, second-rank tensor equals zero. That's Maxwell's equations, the equations for light. And in fact, at Berkeley, you can buy a T-shirt which says, "In the beginning, God said the four-dimensional divergence of an antisymmetric, second rank tensor equals zero, and there was Light, and it was good."

So, the mastery of the first two forces helped to unleash, respectively, the Industrial Revolution and the Information Revolution.

The last two forces are the weak nuclear force and the strong nuclear force, and they in turn have helped us to unlock the secret of the stars, via Einstein's equation E=mc², and many people think that far in the future, the human race may ultimately derive its energy not only from solar power, which is the power of fusion, but also fusion power on the Earth, in terms of fusion reactors, which operate on seawater, and create no copious quantities of radioactive waste.

So, in summary, the mastery of each force helped to unleash a new revolution in human history.

Today, we physicists are embarking upon the greatest quest of all, which is to unify all four of these forces into a single comprehensive theory. The first force, gravity, is now represented by Einstein's General Theory of Relativity, which gives us the Big Bang, black holes, and the expanding universe. It's a theory of the very large; it's a theory of smooth space-time manifolds like bedsheets and trampoline nets.

The second theory, the quantum theory, is the exact opposite. The quantum theory allows us to unify the electromagnetic, weak and strong force. However, it is based on discrete, tiny packets of energy called quanta, rather than smooth bedsheets, and it is based on probabilities, rather than the certainty of Einstein's equations. So these two theories summarize the sum total of all physical knowledge of the physical universe.

Any equation describing the physical universe ultimately is derived from one of these two theories. The problem is these two theories are diametrically opposed. They are based on different assumptions, different principles, and different mathematics. Our job as physicists is to unify the two into a single, comprehensive theory. Now, over the last decades, the giants of the twentieth century have tried to do this and have failed.

For example, Niels Bohr, the founder of atomic physics and the quantum theory, was very skeptical about many attempts over the decades to create a Unified Field Theory. One day, Wolfgang Pauli, Nobel laureate, was giving a talk about his version of the Unified Field Theory, and in a very famous story, Bohr stood up in the back of the room and said, "Mr. Pauli, we in the back are convinced that your theory is crazy. What divides us is whether your theory is crazy enough."

So today, we realize that a true Unified Field Theory must be bizarre, must be fantastic, incredible, mind-boggling, crazy, because all the sane alternatives have been studied and discarded.

Today we have string theory, which is based on the idea that the subatomic particles we see in nature are nothing but notes we see on a tiny, vibrating string. If you kick the string, then an electron will turn into a neutrino. If you kick it again, the vibrating string will turn from a neutrino into a photon or a graviton. And if you kick it enough times, the vibrating string will then mutate into all the subatomic particles.

Therefore we no longer in some sense have to deal with thousands of subatomic particles coming from our atom smashers, we just have to realize that what makes them, what drives them, is a vibrating string. Now when these strings collide, they form atoms and nuclei, and so in some sense, the melodies that you can write on the string correspond to the laws of chemistry. Physics is then reduced to the laws of harmony that we can write on a string. The Universe is a symphony of strings. And what is the mind of God that Einstein used to write about? According to this picture, the mind of God is music resonating through ten- or eleven-dimensional hyperspace, which of course begs the question, "If the universe is a symphony, then is there a composer to the symphony?" But that's another question.

What do you think of Sir Martin Rees' concerns about the risk of creating black holes on Earth in his book, Our Final Hour?

I haven't read his book, but perhaps Sir Martin Rees is referring to many press reports that claim that the Earth may be swallowed up by a black hole created by our machines. This started with a letter to the editor in Scientific American asking whether the RHIC accelerator in Brookhaven, Long Island, will create a black hole which will swallow up the earth. This was then picked up by the Sunday London Times who then splashed it on the international wire services, and all of a sudden, we physicists were deluged with hundreds of emails and telegrams asking whether or not we are going to destroy the world when we create a black hole in Long Island.

However, you can calculate that in outer space, cosmic rays have more energy than the particles produced in our most powerful atom smashers, and black holes do not form in outer space. Not to mention the fact that to create a black hole, you would have to have the mass of a giant star. In fact, an object ten to fifty times the mass of our star may in fact form a black hole. So the probability of a black hole forming in Long Island is zero.

However, Sir Martin Rees also has written a book, talking about the Multiverse. And that is also the subject of my next book, coming out late next year, called Parallel Worlds. We physicists no longer believe in a Universe. We physicists believe in a Multiverse that resembles the boiling of water. Water boils when tiny particles, or bubbles, form, which then begin to rapidly expand. If our Universe is a bubble in boiling water, then perhaps Big Bangs happen all the time.

Now, the Multiverse idea is consistent with Superstring theory, in the sense that Superstring theory has millions of solutions, each of which seems to correspond to a self-consistent Universe. So in some sense, Superstring theory is drowning in its own riches. Instead of predicting a unique Universe, it seems to allow the possibility of a Multiverse of Universes.

This may also help to answer the question raised by the Anthropic Principle. Our Universe seems to have known that we were coming. The conditions for life are extremely stringent. Life and consciousness can only exist in a very narrow band of physical parameters. For example, if the proton is not stable, then the Universe will collapse into a useless heap of electrons and neutrinos. If the proton were a little bit different in mass, it would decay, and all our DNA molecules would decay along with it.

In fact, there are hundreds, perhaps thousands, of coincidences, happy coincidences, that make life possible. Life, and especially consciousness, is quite fragile. It depends on stable matter, like protons, that exists for billions of years in a stable environment, sufficient to create autocatalytic molecules that can reproduce themselves, and thereby create Life. In physics, it is extremely hard to create this kind of Universe. You have to play with the parameters, you have to juggle the numbers, cook the books, in order to create a Universe which is consistent with Life.

However, the Multiverse idea explains this problem, because it simply means we coexist with dead Universes. In other Universes, the proton is not stable. In other Universes, the Big Bang took place, and then it collapsed rapidly into a Big Crunch, or these Universes had a Big Bang, and immediately went into a Big Freeze, where temperatures were so low, that Life could never get started.

So, in the Multiverse of Universes, many of these Universes are in fact dead, and our Universe in this sense is special, in that Life is possible in this Universe. Now, in religion, we have the Judeo-Christian idea of an instant of time, a genesis, when God said, "Let there be light." But in Buddhism, we have a contradictory philosophy, which says that the Universe is timeless. It had no beginning and it has no end; it just is. It's eternal.

The Multiverse idea allows us to combine these two pictures into a coherent, pleasing picture. It says that in the beginning, there was nothing, nothing but hyperspace, perhaps ten- or eleven-dimensional hyperspace. But hyperspace was unstable, because of the quantum principle. And because of the quantum principle, there were fluctuations, fluctuations in nothing. This means that bubbles began to form in nothing, and these bubbles began to expand rapidly, giving us the Universe. So, in other words, the Judeo-Christian genesis takes place within the Buddhist nirvana, all the time, and our Multiverse percolates universes.

Now this also raises the possibility of Universes that look just like ours, except there's one quantum difference. Let's say, for example, that a cosmic ray went through Churchill's mother, and Churchill was never born as a consequence. In that Universe, which is only one quantum event away from our Universe, England never had a dynamic leader to lead its forces against Hitler, and Hitler was able to overcome England, and in fact conquer the world.

So, we are one quantum event away from Universes that look quite different from ours, and it's still not clear how we physicists resolve this question. This paradox revolves around the Schrödinger's Cat problem, which is still largely unsolved. In any quantum theory, we have the possibility that atoms can exist in two places at the same time, in two states at the same time. And then Erwin Schrödinger, the founder of quantum mechanics, asked the question: let's say we put a cat in a box, and the cat is connected to a jar of poison gas, which is connected to a hammer, which is connected to a Geiger counter, which is connected to uranium. Everyone believes that uranium has to be described by the quantum theory. That's why we have atomic bombs, in fact. No one disputes this.

But if the uranium decays, triggering the Geiger counter, setting off the hammer, destroying the jar of poison gas, then I might kill the cat. And so, is the cat dead or alive? Believe it or not, we physicists have to superimpose, or add together, the wave function of a dead cat with the wave function of a live cat. So the cat is neither dead nor alive.

This is perhaps one of the deepest questions in all the quantum theory, with Nobel laureates arguing with other Nobel laureates about the meaning of reality itself.

Now, in philosophy, solipsists like Bishop Berkeley used to believe that if a tree fell in the forest and there was no one there to listen to the tree fall, then perhaps the tree did not fall at all. However, Newtonians believe that if a tree falls in the forest, you don't have to have a human there to witness the event.

The quantum theory puts a whole new spin on this. The quantum theory says that before you look at the tree, the tree could be in any possible state. It could be burnt, a sapling, it could be firewood, it could be burnt to the ground. It could be in any of an infinite number of possible states. Now, when you look at it, it suddenly springs into existence and becomes a tree.

Einstein never liked this. When people used to come to his house, he used to ask them, "Look at the moon. Does the moon exist because a mouse looks at the moon?" Well, in some sense, yes. According to the Copenhagen school of Niels Bohr, observation determines existence.

Now, there are at least two ways to resolve this. The first is the Wigner school. Eugene Wigner was one of the creators of the atomic bomb and a Nobel laureate. And he believed that observation creates the Universe. An infinite sequence of observations is necessary to create the Universe, and in fact, maybe there's a cosmic observer, a God of some sort, that makes the Universe spring into existence.

There's another theory, however, called decoherence, or many worlds, which believes that the Universe simply splits each time, so that we live in a world where the cat is alive, but there's an equal world where the cat is dead. In that world, they have people, they react normally, they think that their world is the only world, but in that world, the cat is dead. And, in fact, we exist simultaneously with that world.

This means that there's probably a Universe where you were never born, but everything else is the same. Or perhaps your mother had extra brothers and sisters for you, in which case your family is much larger. Now, this can be compared to sitting in a room, listening to the radio. When you listen to the radio, you hear many frequencies. They exist simultaneously all around you in the room. However, your radio is only tuned to one frequency. In the same way, in your living room there is the wave function of dinosaurs. There is the wave function of aliens from outer space. There is the wave function of the Roman Empire, because in that universe it never fell 1,500 years ago.

All of this coexists inside your living room. However, just like you can only tune into one radio channel, you can only tune into one reality channel, and that is the channel that you exist in. So, in some sense it is true that we coexist with all possible universes. The catch is, we cannot communicate with them, we cannot enter these universes.

However, I personally believe that at some point in the future, that may be our only salvation. The latest cosmological data indicates that the Universe is accelerating, not slowing down, which means the Universe will eventually hit a Big Freeze, trillions of years from now, when temperatures are so low that it will be impossible to have any intelligent being survive.

When the Universe dies, there's one and only one way to survive in a freezing Universe, and that is to leave the Universe. In evolution, there is a law of biology that says if the environment becomes hostile, either you adapt, you leave, or you die.

When the Universe freezes and temperatures reach near absolute zero, you cannot adapt. The laws of thermodynamics are quite rigid on this question. Either you will die, or you will leave. This means, of course, that we have to create machines that will allow us to enter eleven-dimensional hyperspace. This is still quite speculative, but String theory, in some sense, may be our only salvation. For advanced civilizations in outer space, either we leave or we die.

That brings up a question. Matrix Reloaded seems to be based on parallel universes. What do you think of the film in terms of its metaphors?

Well, the technology found in the Matrix would correspond to that of an advanced Type I or Type II civilization. We physicists, when we scan outer space, do not look for little green men in flying saucers. We look for the total energy outputs of a civilization in outer space, with a characteristic frequency. Even if intelligent beings tried to hide their existence, by the second law of thermodynamics, they create entropy, which should be visible with our detectors.

So we classify civilizations on the basis of energy outputs. A Type I civilization is planetary. They control all planetary forms of energy. They would control, for example, the weather, volcanoes, earthquakes; they would mine the oceans, any planetary form of energy they would control. Type II would be stellar. They play with solar flares. They can move stars, ignite stars, play with white dwarfs. Type III is galactic, in the sense that they have now conquered whole star systems, and are able to use black holes and star clusters for their energy supplies.

Each civilization is separated from the previous civilization by a factor of ten billion. Therefore, you can calculate numerically at what point civilizations may begin to harness certain kinds of technologies. In order to access wormholes and parallel universes, you have to be probably a Type III civilization, because by definition, a Type III civilization has enough energy to play with the Planck energy.
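Because each type's energy output is a fixed factor above the previous one, classifying a civilization is just a matter of comparing its output against a ladder of thresholds. A minimal sketch; the factor of ten billion is the interview's figure, but the Type I baseline of 10^16 watts is an assumption added here for illustration:

```python
# Kardashev-style classification by total energy output.
TYPE_I_WATTS = 10**16   # assumed planetary-scale baseline (illustrative only)
FACTOR = 10**10         # "separated ... by a factor of ten billion"

def civilization_type(power_watts: int) -> int:
    """Return 0 (pre-Type-I) or the highest type 1-3 whose threshold is met."""
    level = 0
    threshold = TYPE_I_WATTS
    for t in (1, 2, 3):
        if power_watts >= threshold:
            level = t
        threshold *= FACTOR  # each type needs ten billion times more energy
    return level

print(civilization_type(10**16))  # 1 -- planetary (weather, volcanoes, oceans)
print(civilization_type(10**26))  # 2 -- stellar (solar flares, moving stars)
print(civilization_type(10**36))  # 3 -- galactic (black holes, star clusters)
```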

The Planck energy, or 10^19 billion electron volts, is the energy at which space-time becomes unstable. If you were to heat up, in your microwave oven, a piece of space-time to that energy, then bubbles would form inside your microwave oven, and each bubble in turn would correspond to a baby Universe.

Now, in the Matrix, several metaphors are raised. One metaphor is whether computing machines can create artificial realities. That would require a civilization centuries or millennia ahead of ours, which would place it squarely as a Type I or Type II civilization.

However, we also have to ask a practical question: is it possible to create implants that could access our memory banks to create this artificial reality, and are machines dangerous? My answer is the following. First of all, cyborgs with neural implants: the technology does not exist, and probably won't exist for at least a century, for us to access the central nervous system. At present, we can only do primitive experiments on the brain.

For example, at Emory University in Atlanta, Georgia, it's possible to put a glass implant into the brain of a stroke victim, and the paralyzed stroke victim is able to, by looking at the cursor of a laptop, eventually control the motion of the cursor. It's very slow and tedious; it's like learning to ride a bicycle for the first time. But the brain grows into the glass bead, which is placed into the brain. The glass bead is connected to a laptop computer, and over many hours, the person is able to, by pure thought, manipulate the cursor on the screen.

So, the central nervous system is basically a black box. Except for some primitive hookups to the visual system of the brain, we scientists have not been able to access most bodily functions, because we simply don't know the code for the spinal cord and for the brain. So, neural implant technology, I believe, is a hundred years, maybe centuries, away from ours.

On the other hand, we have to consider yet another metaphor raised by the Matrix, and that is: are machines dangerous? And the answer is, potentially, yes. However, at present, our robots have the intelligence of a cockroach, in the sense that pattern recognition and common sense are the two most difficult, unsolved problems in artificial intelligence theory. Pattern recognition means the ability to see, hear, and to understand what you are seeing and understand what you are hearing. Common sense means your ability to make sense out of the world, which even children can perform.

Those two problems are at the present time largely unsolved. Now, I think, however, that within a few decades, we should be able to create robots as smart as mice, maybe dogs and cats. However, when machines start to become as dangerous as monkeys, I think we should put a chip in their brain, to shut them off when they start to have murderous thoughts.

By the time you have monkey intelligence, you begin to have self-awareness, and with self-awareness, you begin to have an agenda created by a monkey for its own purposes. And at that point, a mechanical monkey may decide that its agenda is different from our agenda, and at that point they may become dangerous to humans. I think we have several decades before that happens, and Moore's Law will probably collapse in 20 years anyway, so I think there's plenty of time before we come to the point where we have to deal with murderous robots, like in the movie 2001.

So you differ with Ray Kurzweil's concept of using nanobots to reverse-engineer and upload the brain, possibly within the coming decades?

Not necessarily. I'm just laying out a linear course, the trajectory where artificial intelligence theory is going today. And that is, one, trying to build machines that can navigate and roam in our world, and two, robots that can make sense out of the world. However, there's another divergent path one might take, and that's to harness the power of nanotechnology. However, nanotechnology is still very primitive. At the present time, we can barely build arrays of atoms. We cannot yet build the first atomic gear, for example. No one has created an atomic wheel with ball bearings. So simple machines, which even children can play with in their toy sets, don't yet exist at the atomic level. However, on a scale of decades, we may be able to create atomic devices that begin to mimic our own devices.

Molecular transistors can already be made. Nanotubes allow us to create strands of material that are super-strong. However, nanotechnology is still in its infancy and therefore, it's still premature to say where nanotechnology will go. However, one place where technology may go is inside our body. Already, it's possible to create a pill the size of an aspirin pill that has a television camera that can photograph our insides as it goes down our gullet, which means that one day surgery may become relatively obsolete.

In the future, it's conceivable we may have atomic machines that enter the blood. And these atomic machines will be the size of blood cells and perhaps they would be able to perform useful functions like regulating and sensing our health, and perhaps zapping cancer cells and viruses in the process. However, this is still science fiction, because at the present time, we can't even build simple atomic machines yet.

Is there any possibility, similar to the premise of The Matrix, that we are living in a simulation?

Well, philosophically speaking, it's always possible that the universe is a dream, and it's always possible that our conversation with our friends is a by-product of the pickle that we had last night that upset our stomach. However, science is based upon reproducible evidence. When we go to sleep and we wake up the next day, we usually wind up in the same universe. It is reproducible. No matter how we try to avoid certain unpleasant situations, they come back to us. That is reproducible. So reality, as we commonly believe it to exist, is a reproducible experiment, it's a reproducible sensation. Therefore in principle, you could never rule out the fact that the world could be a dream, but the fact of the matter is, the universe as it exists is a reproducible universe.

Now, in the Matrix, a computer simulation was run so that virtual reality became reproducible. Every time you woke up, you woke up in that same virtual reality. That technology, of course, does not violate the laws of physics. There's nothing in relativity or the quantum theory that says that the Matrix is not possible. However, the amount of computer power necessary to drive the universe and the technology necessary for a neural implant is centuries to millennia beyond anything that we can conceive of, and therefore this is something for an advanced Type I or II civilization.

Why is a Type I required to run this kind of simulation? Is number crunching the problem?

Yes, it's simply a matter of number crunching. At the present time, we scientists simply do not know how to interface with the brain. You see, one of the problems is, the brain, strictly speaking, is not a digital computer at all. The brain is not a Turing machine. A Turing machine is a black box with an input tape and an output tape and a central processing unit. That is the essential element of a Turing machine: information processing is localized in one point. However, our brain is actually a learning machine; it's a neural network.

Many people find this hard to believe, but there's no software, there is no operating system, there is no Windows programming for the brain. The brain is a vast collection, perhaps a hundred billion neurons, each neuron with 10,000 connections, which slowly and painfully interacts with the environment. Some neural pathways are genetically programmed to give us instinct. However, for the most part, our cerebral cortex has to be reprogrammed every time we bump into reality.

As a consequence, we cannot simply put a chip in our brain that augments our memory and enhances our intelligence. Memory and thinking, we now realize, are distributed throughout the entire brain. For example, it's possible to have people with only half a brain. There was a documented case recently where a young girl had half her brain removed, and she's still fully functional.

So, the brain can operate with half of its mass removed. However, you remove one transistor in your Pentium computer and the whole computer dies. So, there's a fundamental difference between digital computers (which are easily programmed, which are modular, and into which you can insert different kinds of subroutines) and neural networks, where learning is distributed throughout the entire device, making it extremely difficult to reprogram. That is the reason why, even if we could create an advanced PlayStation that would run simulations on a PC screen, that software cannot simply be injected into the human brain, because the brain has no operating system.

Ray Kurzweil's next book, The Singularity Is Near, predicts that possibly within the coming decades, there will be super-intelligence emerging on the planet that will surpass that of humans. What do you think of that idea?

Yes, that sounds interesting. But Moore's Law will have collapsed by then, so we'll have a little breather. In 20 years' time, the quantum theory takes over, so Moore's Law collapses and we'll probably stagnate for a few decades after that. Moore's Law, which states that computer power doubles every 18 months, will not last forever. The quantum theory giveth, the quantum theory taketh away. The quantum theory makes possible transistors, which can be etched by ultraviolet rays onto smaller and smaller chips of silicon. This process will end in about 15 to 20 years. The senior engineers at Intel now admit for the first time that, yes, they are facing the end.

The thinnest layer on a Pentium chip consists of about 20 atoms. When we start to hit five atoms in the thinnest layer of a Pentium chip, the quantum theory takes over, electrons can now tunnel outside the layer, and the Pentium chip short-circuits. Therefore, within a 15 to 20 year time frame, Moore's Law could collapse, and Silicon Valley could become a Rust Belt.
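
The doubling arithmetic behind these projections is easy to make concrete. A minimal sketch (the 18-month doubling time is the figure quoted above; the function itself is only illustrative):

```python
# Moore's Law as quoted: computer power doubles every 18 months.
def moores_law_factor(years, doubling_months=18):
    """Multiplicative growth in computing power after `years`."""
    return 2 ** (years * 12 / doubling_months)

# Over the 15-to-20-year window discussed above:
print(moores_law_factor(15))  # 1024.0 (ten doublings)
print(moores_law_factor(20))  # ~10321 (about thirteen doublings)
```

Even the shorter window implies a thousand-fold increase in raw power, which is why the end of the trend matters so much for projections of machine intelligence.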

This means that we physicists are desperately trying to create the architecture for the post-silicon era. This means using quantum computers, quantum dot computers, optical computers, DNA computers, atomic computers, molecular computers, in order to bridge the gap when Moore's Law collapses in 15 to 20 years. The wealth of nations depends upon the technology that will replace the power of silicon.

This also means that you cannot project artificial intelligence exponentially into the future. Some people think that Moore's Law will extend forever, in which case humans will be reduced to zoo animals and our robot creations will throw peanuts at us and make us dance behind bars. Now, that may eventually happen. It is certainly consistent with the laws of physics.

However, the laws of the quantum theory say that we're going to face a massive problem 15 to 20 years from now. Now, some remedial methods have been proposed; for example, building cubical chips, chips that are stacked on chips to create a 3-dimensional array. However, the problem there is heat production. Tremendous quantities of heat are produced by cubical chips, such that you can fry an egg on top of a cubical chip. Therefore, I firmly believe that we may be able to squeeze a few more years out of Moore's Law, perhaps designing clever cubical chips that are super-cooled, perhaps using x-rays to etch our chips instead of ultraviolet rays. However, that only delays the inevitable. Sooner or later, the quantum theory kills you. Sooner or later, when we hit five atoms, we don't know where the electron is anymore, and we have to go to the next generation, which relies on the quantum theory and atoms and molecules.

Therefore, I say that all bets are off in terms of projecting machine intelligence beyond a 20-year time frame. There's nothing in the laws of physics that says that computers cannot exceed human intelligence. All I raise is that we physicists are desperately trying to patch up Moore's Law, and at the present time we have to admit that we have no successor to silicon, which means that Moore's Law will collapse in 15 to 20 years.

So are you saying that quantum computing and nanocomputing are not likely to be available by then?

No, no, I'm just saying it's very difficult. At the present time we physicists have been able to compute on seven atoms. That is the world's record for a quantum computer. And that quantum computer was able to calculate 3 x 5 = 15. Now, being able to calculate 3 x 5 = 15 does not equal the convenience of a laptop computer that can crunch potentially millions of calculations per second. The problem with quantum computers is that any contamination, any atomic disturbance, disturbs the alignment of the atoms, and the atoms then collapse into randomness. This is extremely difficult, because any cosmic ray, any air molecule, any disturbance can conceivably destroy the coherence of our atomic computer and make it useless.
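
The 3 x 5 = 15 result refers to a demonstration of Shor's factoring algorithm. The quantum hardware's job is to find the period r of a^x mod N; the rest is classical number theory. A sketch, with the period found by brute force (the step a quantum computer performs exponentially faster):

```python
from math import gcd

def factor_from_period(a, N):
    """Classical post-processing of Shor's algorithm: find the
    period r of a^x mod N (here by brute force), then derive
    factors of N from gcd(a^(r/2) +/- 1, N)."""
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    if r % 2:          # an odd period is unusable; pick another a
        return None
    return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)

print(factor_from_period(7, 15))  # (3, 5)
```

For N = 15 and a = 7 the period is 4, and the two gcd computations recover the factors 3 and 5, exactly the answer the seven-atom machine produced.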

Unless you have redundant parallel computing?

Even if you have parallel computing, you still have to have each parallel computer component free of any disturbance. So, no matter how you cut it, the practical problems of building quantum computers, although within the laws of physics, are extremely difficult, because it requires that we remove all contact with the environment at the atomic level. In practice, we've only been able to do this with a handful of atoms, meaning that quantum computers are still a gleam in the eye of most physicists.

Now, if a quantum computer can be successfully built, it would, of course, scare the CIA and all the governments of the world, because it would be able to crack any code created by a Turing machine. A quantum computer would be able to perform calculations that are inconceivable by a Turing machine. Calculations that require an infinite amount of time on a Turing machine can be calculated in a few seconds by a quantum computer. For example, if you shine laser beams on a collection of coherent atoms, the laser beam scatters, and in some sense performs a quantum calculation, which exceeds the memory capability of any Turing machine.

However, as I mentioned, the problem is that these atoms have to be in perfect coherence, and the problems of doing this are staggering in the sense that even a random collision with a subatomic particle could in fact destroy the coherence and make the quantum computer impractical.

So, I'm not saying that it's impossible to build a quantum computer; I'm just saying that it's awfully difficult.

When do you think we might expect SETI [Search for Extraterrestrial Intelligence] to be successful?

I personally think that SETI is looking in the wrong direction. If, for example, we're walking down a country road and we see an anthill, do we go down to the ant and say, "I bring you trinkets, I bring you beads, I bring you knowledge, I bring you medicine, I bring you nuclear technology, take me to your leader"? Or, do we simply step on them? Any civilization capable of reaching the planet Earth would be perhaps a Type III civilization. And the difference between you and the ant is comparable to the distance between you and a Type III civilization. Therefore, for the most part, a Type III civilization would operate with a completely different agenda and message than our civilization.

Let's say that a ten-lane superhighway is being built next to the anthill. The question is: would the ants even know what a ten-lane superhighway is, or what it's used for, or how to communicate with the workers who are just feet away? And the answer is no. One question that we sometimes ask is: if there is a Type III civilization in our backyard, in the Milky Way galaxy, would we even know of its presence? And if you think about it, you realize that there's a good chance that we, like ants in an anthill, would not understand or be able to make sense of a ten-lane superhighway next door.

So this means there could very well be a Type III civilization in our galaxy; it just means that we're not smart enough to find one. Now, a Type III civilization is not going to make contact by sending Captain Kirk on the Enterprise to meet our leader. A Type III civilization would send self-replicating Von Neumann probes to colonize the galaxy with robots. For example, consider a virus. A virus consists of only thousands of atoms. It's a molecule in some sense. But in about one week, it can colonize an entire human being made of trillions of cells. How is that possible?

Well, a Von Neumann probe would be a self-replicating robot that lands on a moon; a moon, because moons are stable, with no erosion, and they're stable for billions of years. The probe would then make carbon copies of itself by the millions. It would create a factory to build copies of itself. And then these probes would rocket to other nearby star systems, land on moons, and create a million more copies by building a factory on each moon. Eventually, there would be a sphere surrounding the mother planet, expanding at near-light velocity, containing trillions of these Von Neumann probes, and that is perhaps the most efficient way to colonize the galaxy. This means that perhaps, on our moon, there is a Von Neumann probe, left over from a visitation that took place millions of years ago, and the probe is simply waiting for us to make the transition from Type 0 to Type I.
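
The exponential character of this colonization scheme is what makes it so efficient. A toy calculation, using an entirely made-up replication factor of 1000 copies per probe per generation:

```python
from math import ceil, log

def generations_to_colonize(targets, copies_per_probe=1000):
    """Generations of self-replication needed to produce enough
    probes to reach `targets` star systems (illustrative only)."""
    return ceil(log(targets) / log(copies_per_probe))

# Roughly 4 x 10^11 star systems in the Milky Way (a common rough figure):
print(generations_to_colonize(4e11))  # 4
```

Like the virus in the analogy, a handful of replication cycles suffices to saturate a system many orders of magnitude larger than the original probe.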

The Sentinel.

Yes. This, of course, is the basis of the movie 2001, because before making the movie, Kubrick interviewed many prominent scientists and asked them the question, "What is the most likely way that an advanced civilization would probe the universe?" And that is, of course, through self-replicating Von Neumann probes, which create moon bases. That is the basis of the movie 2001, where the probe simply waits for us to become interesting. If we're Type 0, we're not very interesting. We have all the savagery and all the suicidal tendencies of fundamentalism, nationalism, sectarianism, that are sufficient to rip apart our world.

By the time we've become Type I, we've become interesting, we've become planetary, we begin to resolve our differences. We have centuries in which to exist on a single planet to create a paradise on Earth, a paradise of knowledge and prosperity.

2003 KurzweilAI.net

See original here:

Parallel universes, the Matrix, and superintelligence ...

How Long Before Superintelligence? – Nick Bostrom

This is if we take the retina simulation as a model. At present, however, not enough is known about the neocortex to allow us to simulate it in such an optimized way. But the knowledge might be available by 2004 to 2008 (as we shall see in the next section). What is required, if we are to get human-level AI with hardware power at this lower bound, is the ability to simulate 1000-neuron aggregates in a highly efficient way.

The extreme alternative, which is what we assumed in the derivation of the upper bound, is to simulate each neuron individually. There is no limit to the number of clock cycles neuroscientists can expend simulating the processes of a single neuron, but that is because their aim is to model the detailed chemical and electrodynamic processes in the nerve cell rather than to do just the minimal amount of computation necessary to replicate those features of its response function that are relevant for the total performance of the neural net. It is not known how much of this detail is contingent and inessential and how much needs to be preserved in order for the simulation to replicate the performance of the whole. It seems like a good bet, though, at least to the author, that the nodes could be strongly simplified and replaced with simple standardized elements. It appears perfectly feasible to have an intelligent neural network with any of a large variety of neuronal output functions and time delays.

It does look plausible, however, that by the time we know how to simulate an idealized neuron and know enough about the brain's synaptic structure to put the artificial neurons together in a way that functionally mirrors how it is done in the brain, we will also be able to replace whole 1000-neuron modules with something that requires less computational power to simulate than it does to simulate all the neurons in the module individually. We might well get all the way down to a mere 1000 instructions per neuron per second, as is implied by Moravec's estimate (10^14 ops / 10^11 neurons = 1000 operations per neuron per second). But unless we can build these modules without first building a whole brain, this optimization will only be possible after we have already developed human-equivalent artificial intelligence.
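
The division in Moravec's estimate can be checked directly:

```python
# Moravec's figures as quoted: 10^14 ops/s for brain-equivalent
# computation, distributed over 10^11 neurons.
total_ops_per_sec = 1e14
neurons = 1e11
ops_per_neuron_per_sec = total_ops_per_sec / neurons
print(ops_per_neuron_per_sec)  # 1000.0
```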

If we assume the upper bound on the computational power needed to simulate the human brain, i.e. if we assume enough power to simulate each neuron individually (10^17 ops), then Moore's law says that we will have to wait until about 2015 or 2024 (for doubling times of 12 and 18 months, respectively) before supercomputers with the requisite performance are at hand. But if by then we know how to do the simulation on the level of individual neurons, we will presumably also have figured out how to make at least some optimizations, so we could probably adjust these upper bounds a bit downwards.
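
The quoted dates can be roughly reproduced from the doubling times. The baseline used here, about 10^12 ops on a top supercomputer around 1997 (the paper's approximate writing date), is an assumption of this sketch, which is why the computed years land a year or two earlier than those in the text:

```python
from math import log2

def year_reached(target_ops, base_ops=1e12, base_year=1997,
                 doubling_months=18):
    """Year when Moore's Law delivers `target_ops`, starting from
    an assumed ~10^12-ops supercomputer in `base_year`."""
    doublings = log2(target_ops / base_ops)
    return base_year + doublings * doubling_months / 12

# Upper-bound hardware requirement of 10^17 ops:
print(round(year_reached(1e17, doubling_months=12)))  # 2014
print(round(year_reached(1e17, doubling_months=18)))  # 2022
```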

So far I have been talking only of processor speed, but computers need a great deal of memory too if they are to replicate the brain's performance. Throughout the history of computers, the ratio between memory and speed has remained more or less constant at about 1 byte/ops. Since a signal is transmitted along a synapse, on average, with a frequency of about 100 Hz and since its memory capacity is probably less than 100 bytes (1 byte looks like a more reasonable estimate), it seems that speed rather than memory would be the bottleneck in brain simulations on the neuronal level. (If we instead assume that we can achieve a thousand-fold leverage in our simulation speed as assumed in Moravec's estimate, then that would bring the requirement of speed down, perhaps, one order of magnitude below the memory requirement. But if we can optimize away three orders of magnitude on speed by simulating 1000-neuron aggregates, we will probably be able to cut away at least one order of magnitude of the memory requirement. Thus the difficulty of building enough memory may be significantly smaller, and is almost certainly not significantly greater, than the difficulty of building a processor that is fast enough. We can therefore focus on speed as the critical parameter on the hardware front.)

This paper does not discuss the possibility that quantum phenomena are irreducibly involved in human cognition. Hameroff and Penrose and others have suggested that coherent quantum states may exist in the microtubules, and that the brain utilizes these phenomena to perform high-level cognitive feats. The author's opinion is that this is implausible. The controversy surrounding this issue won't be entered into here; it will simply be assumed, throughout this paper, that quantum phenomena are not functionally relevant to high-level brain modelling.

In conclusion we can say that the hardware capacity for human-equivalent artificial intelligence will likely exist before the end of the first quarter of the next century, and may be reached as early as 2004. A corresponding capacity should be available to leading AI labs within ten years thereafter (or sooner if the potential of human-level AI and superintelligence is by then better appreciated by funding agencies).

Notes

It is possible to nit-pick on this estimate. For example, there is some evidence that some limited amount of communication between nerve cells is possible without synaptic transmission. And we have the regulatory mechanisms consisting of neurotransmitters and their sources, receptors and re-uptake channels. While neurotransmitter balances are crucially important for the proper functioning of the human brain, they have an insignificant information content compared to the synaptic structure. Perhaps a more serious point is that neurons often have rather complex time-integration properties (Koch 1997). Whether a specific set of synaptic inputs results in the firing of a neuron depends on their exact timing. The author's opinion is that, except possibly for a small number of special applications such as auditory stereo perception, the temporal properties of the neurons can easily be accommodated with a time resolution of the simulation on the order of 1 ms. In an unoptimized simulation this would add an order of magnitude to the estimate given above, where we assumed a temporal resolution of 10 ms, corresponding to an average firing rate of 100 Hz. However, the other values on which the estimate was based appear to be too high rather than too low, so we should not change the estimate much to allow for possible fine-grained time-integration effects in a neuron's dendritic tree. (Note that even if we were to adjust our estimate upward by an order of magnitude, this would merely add three to five years to the predicted upper bound on when human-equivalent hardware arrives. The lower bound, which is based on Moravec's estimate, would remain unchanged.)

Software via the bottom-up approach

Superintelligence requires software as well as hardware. There are several approaches to the software problem, varying in the amount of top-down direction they require. At the one extreme we have systems like CYC, which is a very large encyclopedia-like knowledge base and inference engine. It has been spoon-fed facts, rules of thumb and heuristics for over a decade by a team of human knowledge enterers. While systems like CYC might be good for certain practical tasks, this hardly seems like an approach that will convince AI-skeptics that superintelligence might well happen in the foreseeable future. We have to look at paradigms that require less human input, ones that make more use of bottom-up methods.

Given sufficient hardware and the right sort of programming, we could make the machines learn in the same way a child does, i.e. by interacting with human adults and other objects in the environment. The learning mechanisms used by the brain are currently not completely understood. Artificial neural networks in real-world applications today are usually trained through some variant of the Backpropagation algorithm (which is known to be biologically unrealistic). The Backpropagation algorithm works fine for smallish networks (of up to a few thousand neurons) but it doesn't scale well. The time it takes to train a network tends to increase dramatically with the number of neurons it contains. Another limitation of backpropagation is that it is a form of supervised learning, requiring that signed error terms for each output neuron are specified during learning. It's not clear how such detailed performance feedback on the level of individual neurons could be provided in real-world situations except for certain well-defined specialized tasks.
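
The need for signed error terms on each output can be seen in even the smallest backpropagation example. The following sketch (network size, learning rate, and the XOR task are generic textbook choices, not taken from the paper) trains a tiny two-layer sigmoid network by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: the classic task a single-layer network cannot learn.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# 2 inputs -> 4 hidden units -> 1 output, random initialization.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
lr = 1.0

def forward():
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward()
initial_mse = float(np.mean((out - y) ** 2))

for _ in range(5000):
    h, out = forward()
    # Signed error terms for each output neuron: the detailed
    # feedback the text notes is hard to supply in real-world tasks.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)    # propagate errors backwards
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

_, out = forward()
final_mse = float(np.mean((out - y) ** 2))
print(initial_mse, "->", final_mse)  # the loss should fall
```

The `d_out` line is exactly the per-output supervision the paragraph describes: without a signed target error for every output unit, the backward pass has nothing to propagate.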

A biologically more realistic learning mode is the Hebbian algorithm. Hebbian learning is unsupervised and it might also have better scaling properties than Backpropagation. However, it has yet to be explained how Hebbian learning by itself could produce all the forms of learning and adaptation of which the human brain is capable (such as the storage of structured representations in long-term memory; Bostrom 1996). Presumably, Hebb's rule would at least need to be supplemented with reward-induced learning (Morillo 1992) and maybe with other learning modes that are yet to be discovered. It does seem plausible, though, to assume that only a very limited set of different learning rules (maybe as few as two or three) are operating in the human brain. And we are not very far from knowing what these rules are.
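
A minimal sketch of Hebbian learning, using Oja's stabilized variant (a standard modification not discussed in the paper) so the weights stay bounded. Note there is no error signal at all; the single model neuron simply discovers the dominant correlation in its input:

```python
import numpy as np

rng = np.random.default_rng(1)

# Inputs correlated along the (arbitrarily chosen) direction [1, 1],
# plus a little isotropic noise.
data = rng.normal(size=(2000, 1)) * np.array([1.0, 1.0]) \
     + 0.1 * rng.normal(size=(2000, 2))

w = rng.normal(size=2)
lr = 0.01
for x in data:
    y = w @ x                  # post-synaptic activity
    # Oja's rule: the Hebbian term y*x plus a decay that bounds |w|.
    w += lr * y * (x - y * w)

# w converges to (plus or minus) the principal direction, norm ~1.
print(np.round(w, 2))
```

Unlike backpropagation, nothing here tells the neuron what its output should have been; the structure in the weights emerges purely from the statistics of the input stream.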

Creating superintelligence through imitating the functioning of the human brain requires two more things in addition to appropriate learning rules (and sufficiently powerful hardware): it requires having an adequate initial architecture and providing a rich flux of sensory input.

The latter prerequisite is easily provided even with present technology. Using video cameras, microphones and tactile sensors, it is possible to ensure a steady flow of real-world information to the artificial neural network. An interactive element could be arranged by connecting the system to robot limbs and a speaker.

Developing an adequate initial network structure is a more serious problem. It might turn out to be necessary to do a considerable amount of hand-coding in order to get the cortical architecture right. In biological organisms, the brain does not start out at birth as a homogenous tabula rasa; it has an initial structure that is coded genetically. Neuroscience cannot, at its present stage, say exactly what this structure is or how much of it needs to be preserved in a simulation that is eventually to match the cognitive competencies of a human adult. One way for it to be unexpectedly difficult to achieve human-level AI through the neural network approach would be if it turned out that the human brain relies on a colossal amount of genetic hardwiring, so that each cognitive function depends on a unique and hopelessly complicated inborn architecture, acquired over aeons in the evolutionary learning process of our species.

Is this the case? A number of considerations suggest otherwise. We have to content ourselves with a very brief review here. For a more comprehensive discussion, the reader may consult Phillips & Singer (1997).

Quartz & Sejnowski (1997) argue from recent neurobiological data that the developing human cortex is largely free of domain-specific structures. The representational properties of the specialized circuits that we find in the mature cortex are not generally genetically prespecified. Rather, they are developed through interaction with the problem domains on which the circuits operate. There are genetically coded tendencies for certain brain areas to specialize on certain tasks (for example primary visual processing is usually performed in the primary visual cortex) but this does not mean that other cortical areas couldn't have learnt to perform the same function. In fact, the human neocortex seems to start out as a fairly flexible and general-purpose mechanism; specific modules arise later through self-organizing and through interacting with the environment.

Strongly supporting this view is the fact that cortical lesions, even sizeable ones, can often be compensated for if they occur at an early age. Other cortical areas take over the functions that would normally have been developed in the destroyed region. In one study, sensitivity to visual features was developed in the auditory cortex of neonatal ferrets, after that region's normal auditory input channel had been replaced by visual projections (Sur et al. 1988). Similarly, it has been shown that the visual cortex can take over functions normally performed by the somatosensory cortex (Schlaggar & O'Leary 1991). A recent experiment (Cohen et al. 1997) showed that people who have been blind from an early age can use their visual cortex to process tactile stimulation when reading Braille.

There are some more primitive regions of the brain whose functions cannot be taken over by any other area. For example, people who have their hippocampus removed lose their ability to learn new episodic or semantic facts. But the neocortex tends to be highly plastic, and that is where most of the high-level processing is executed that makes us intellectually superior to other animals. (It would be interesting to examine in more detail to what extent this holds true for all of neocortex. Are there small neocortical regions such that, if excised at birth, the subject will never obtain certain high-level competencies, not even to a limited degree?)

Another consideration that seems to indicate that innate architectural differentiation plays a relatively small part in accounting for the performance of the mature brain is that neocortical architecture, especially in infants, is remarkably homogeneous over different cortical regions and even over different species:

Laminations and vertical connections between lamina are hallmarks of all cortical systems, the morphological and physiological characteristics of cortical neurons are equivalent in different species, as are the kinds of synaptic interactions involving cortical neurons. This similarity in the organization of the cerebral cortex extends even to the specific details of cortical circuitry. (White 1989, p. 179).

One might object at this point that cetaceans have much bigger cortices than humans and yet they don't have human-level abstract understanding and language. A large cortex, apparently, is not sufficient for human intelligence. However, one can easily imagine that some very simple difference between human and cetacean brains accounts for why we have abstract language and understanding that they lack. It could be something as trivial as our cortex being provided with a low-level "drive" to learn about abstract relationships whereas dolphins and whales are programmed not to care about or pay much attention to such things (which might be totally irrelevant to them in their natural environment). More likely, there are some structural developments in the human cortex that other animals lack and that are necessary for advanced abstract thinking. But these uniquely human developments may well be the result of relatively simple changes in just a few basic parameters. They do not require a large amount of genetic hardwiring. Indeed, given that the brain evolution that allowed Homo sapiens to intellectually outclass other animals took place over a relatively brief period of time, evolution cannot have embedded very much content-specific information in the additional cortical structures that give us our intellectual edge over our humanoid or ape-like ancestors.

These considerations (especially the one of cortical plasticity) suggest that the amount of neuroscientific information needed for the bottom-up approach to succeed may be very limited. (Notice that they do not argue against the modularization of adult human brains. They only indicate that the greatest part of the information that goes into the modularization results from self-organization and perceptual input rather than from an immensely complicated genetic look-up table.)

Further advances in neuroscience are probably needed before we can construct a human-level (or even higher animal-level) artificial intelligence by means of this radically bottom-up approach. While it is true that neuroscience has advanced very rapidly in recent years, it is difficult to estimate how long it will take before enough is known about the brain's neuronal architecture and its learning algorithms to make it possible to replicate these in a computer of sufficient computational power. A wild guess: something like fifteen years. This is not a prediction about how far we are from a complete understanding of all important phenomena in the brain. The estimate refers to the time when we might be expected to know enough about the basic principles of how the brain works to be able to implement these computational paradigms on a computer, without necessarily modelling the brain in any biologically realistic way.

The estimate might seem to some to underestimate the difficulties, and perhaps it does. But consider how much has happened in the past fifteen years. The discipline of computational neuroscience hardly even existed back in 1982. And future progress will occur not only because research with today's instrumentation will continue to produce illuminating findings, but also because new experimental tools and techniques will become available. Large-scale multi-electrode recordings should be feasible within the near future. Neuro/chip interfaces are in development. More powerful hardware is being made available to neuroscientists to do computation-intensive simulations. Neuropharmacologists design drugs with higher specificity, allowing researchers to selectively target given receptor subtypes. Present scanning techniques are being improved and new ones are under development. The list could be continued. All these innovations will give neuroscientists very powerful new tools that will facilitate their research.

This section has discussed the software problem. It was argued that it can be solved through a bottom-up approach by using present equipment to supply the input and output channels, and by continuing to study the human brain in order to find out about what learning algorithm it uses and about the initial neuronal structure in new-born infants. Considering how large strides computational neuroscience has taken in the last decade, and the new experimental instrumentation that is under development, it seems reasonable to suppose that the required neuroscientific knowledge might be obtained in perhaps fifteen years from now, i.e. by the year 2012.

Notes

That dolphins don't have abstract language was recently established in a very elegant experiment. A pool is divided into two halves by a net. Dolphin A is released into one end of the pool where there is a mechanism. After a while, the dolphin figures out how to operate the mechanism which causes dead fish to be released into both ends of the pool. Then A is transferred to the other end of the pool and a dolphin B is released into the end of the pool that has the mechanism. The idea is that if the dolphins had a language, then A would tell B to operate the mechanism. However, it was found that the average time for B to operate the mechanism was the same as for A.

Why the past failure of AI is no argument against its future success

In the seventies and eighties the AI field suffered some stagnation as the exaggerated expectations from the early heydays failed to materialize and progress nearly ground to a halt. The lesson to draw from this episode is not that strong AI is dead and that superintelligent machines will never be built. It shows that AI is more difficult than some of the early pioneers might have thought, but it goes no way towards showing that AI will forever remain unfeasible.

In retrospect we know that the AI project couldn't possibly have succeeded at that stage. The hardware was simply not powerful enough. It seems that at least about 100 Tops is required for human-like performance, and possibly as much as 10^17 ops is needed. The computers in the seventies had a computing power comparable to that of insects. They also achieved approximately insect-level intelligence. Now, on the other hand, we can foresee the arrival of human-equivalent hardware, so the cause of AI's past failure will then no longer be present.

There is also an explanation for the relative absence even of noticeable progress during this period. As Hans Moravec points out:

[F]or several decades the computing power found in advanced Artificial Intelligence and Robotics systems has been stuck at insect brain power of 1 MIPS. While computer power per dollar fell [should be: rose] rapidly during this period, the money available fell just as fast. The earliest days of AI, in the mid 1960s, were fuelled by lavish post-Sputnik defence funding, which gave access to $10,000,000 supercomputers of the time. In the post Vietnam war days of the 1970s, funding declined and only $1,000,000 machines were available. By the early 1980s, AI research had to settle for $100,000 minicomputers. In the late 1980s, the available machines were $10,000 workstations. By the 1990s, much work was done on personal computers costing only a few thousand dollars. Since then AI and robot brain power has risen with improvements in computer efficiency. By 1993 personal computers provided 10 MIPS, by 1995 it was 30 MIPS, and in 1997 it is over 100 MIPS. Suddenly machines are reading text, recognizing speech, and robots are driving themselves cross country. (Moravec 1997)

In general, there seems to be a new-found sense of optimism and excitement among people working in AI, especially among those taking a bottom-up approach, such as researchers in genetic algorithms, neuromorphic engineering and in neural networks hardware implementations. Many experts who have been around for a while, though, are wary of once again underestimating the difficulties ahead.

Once there is human-level AI there will soon be superintelligence

Once artificial intelligence reaches human level, there will be a positive feedback loop that will give the development a further boost. AIs would help construct better AIs, which in turn would help build better AIs, and so forth.

Even if no further software development took place and the AIs did not accumulate new skills through self-learning, the AIs would still get smarter if processor speed continued to increase. If after 18 months the hardware were upgraded to double the speed, we would have an AI that could think twice as fast as its original implementation. After a few more doublings this would directly lead to what has been called "weak superintelligence", i.e. an intellect that has about the same abilities as a human brain but is much faster.
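The arithmetic behind "weak superintelligence" is straightforward. A minimal sketch, assuming only the 18-month hardware doubling period stated above and no software improvement at all:

```python
# Speed multiplier of a human-equivalent AI after repeated
# 18-month hardware doublings (no software improvement assumed).
def speed_multiplier(years, doubling_time_years=1.5):
    return 2 ** (years / doubling_time_years)

# After 18 months the AI thinks twice as fast as its original implementation:
print(speed_multiplier(1.5))   # -> 2.0
# "A few more doublings": after seven doublings (10.5 years) it is
# over a hundred times faster than a human brain.
print(speed_multiplier(10.5))  # -> 128.0
```

The multiplier compounds exponentially, which is why a hardware trend alone, with frozen software, suffices for this weaker form of superintelligence.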

Also, the marginal utility of improvements in AI would seem to skyrocket as AI approaches human level, causing funding to increase. We can therefore make the prediction that once there is human-level artificial intelligence then it will not be long before superintelligence is technologically feasible.

A further point can be made in support of this prediction. In contrast to what's possible for biological intellects, it might be possible to copy skills or cognitive modules from one artificial intellect to another. If one AI has achieved eminence in some field, then subsequent AIs can upload the pioneer's program or synaptic weight-matrix and immediately achieve the same level of performance. It would not be necessary to again go through the training process. Whether it will also be possible to copy the best parts of several AIs and combine them into one will depend on details of implementation and the degree to which the AIs are modularized in a standardized fashion. But as a general rule, the intellectual achievements of artificial intellects are additive in a way that human achievements are not, or only to a much lesser degree.

The demand for superintelligence

Given that superintelligence will one day be technologically feasible, will people choose to develop it? This question can pretty confidently be answered in the affirmative. Associated with every step along the road to superintelligence are enormous economic payoffs. The computer industry invests huge sums in the next generation of hardware and software, and it will continue doing so as long as there is competitive pressure and profits to be made. People want better computers and smarter software, and they want the benefits these machines can help produce. Better medical drugs; relief for humans from the need to perform boring or dangerous jobs; entertainment -- there is no end to the list of consumer benefits. There is also a strong military motive to develop artificial intelligence. And nowhere on the path is there any natural stopping point where technophobes could plausibly argue "hither but not further".

It therefore seems that up to human-equivalence, the driving forces behind improvements in AI will easily overpower whatever resistance might be present. When it comes to human-level or greater intelligence, it is conceivable that there might be strong political forces opposing further development. Superintelligence might be seen to pose a threat to the supremacy, and even to the survival, of the human species. Whether by suitable programming we can arrange the motivation systems of the superintelligences in such a way as to guarantee perpetual obedience and subservience, or at least non-harmfulness, to humans is a contentious topic. If future policy-makers can be sure that AIs would not endanger human interests then the development of artificial intelligence will continue. If they can't be sure that there would be no danger, then the development might well continue anyway, either because people don't regard the gradual displacement of biological humans by machines as necessarily a bad outcome, or because such strong forces (motivated by short-term profit, curiosity, ideology, or desire for the capabilities that superintelligences might bring to their creators) are active that a collective decision to ban new research in this field cannot be reached and successfully implemented.

Conclusion

Depending on the degree of optimization assumed, human-level intelligence probably requires between 10^14 and 10^17 ops. It seems quite possible that very advanced optimization could reduce this figure further, but the entrance level would probably not be less than about 10^14 ops. If Moore's law continues to hold then the lower bound will be reached sometime between 2004 and 2008, and the upper bound between 2015 and 2024. The past success of Moore's law gives some inductive reason to believe that it will hold for another ten to fifteen years or so; and this prediction is supported by the fact that there are many promising new technologies currently under development which hold great potential to increase procurable computing power. There is no direct reason to suppose that Moore's law will not hold longer than 15 years. It thus seems likely that the requisite hardware for human-level artificial intelligence will be assembled in the first quarter of the next century, possibly within the first few years.
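These bounds can be checked with a back-of-the-envelope extrapolation. The sketch below assumes an 18-month doubling period and takes as its baseline the ~10^13 ops (10 Tops) supercomputer that the Department of Energy ordered for delivery in 2000; both figures appear elsewhere in the text, but treating them as the extrapolation's starting point is my assumption:

```python
import math

# Extrapolate Moore's law (assumed 18-month doubling period) from a
# baseline of ~10^13 ops available in 2000 to the estimated hardware
# requirements for human-level intelligence.
def year_reached(target_ops, base_ops=1e13, base_year=2000,
                 doubling_time_years=1.5):
    doublings = math.log2(target_ops / base_ops)
    return base_year + doubling_time_years * doublings

print(round(year_reached(1e14)))  # lower bound (10^14 ops): ~2005
print(round(year_reached(1e17)))  # upper bound (10^17 ops): ~2020
```

Both extrapolated dates fall inside the 2004-2008 and 2015-2024 windows stated above, so the bounds are internally consistent with an 18-month doubling assumption.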

There are several approaches to developing the software. One is to emulate the basic principles of biological brains. It is not implausible to suppose that these principles will be well enough known within 15 years for this approach to succeed, given adequate hardware.

The stagnation of AI during the seventies and eighties does not have much bearing on the likelihood of AI to succeed in the future since we know that the cause responsible for the stagnation (namely, that the hardware available to AI researchers was stuck at about 10^6 ops) is no longer present.

There will be a strong and increasing pressure to improve AI up to human-level. If there is a way of guaranteeing that superior artificial intellects will never harm human beings then such intellects will be created. If there is no way to have such a guarantee then they will probably be created nevertheless.



The U.S. Department of Energy has ordered a new supercomputer from IBM, to be installed in the Lawrence Livermore National Laboratory in the year 2000. It will cost $85 million and will perform 10 Tops. This development is in accordance with Moore's law, or possibly slightly more rapid than an extrapolation would have predicted.

Many steps forward have been taken during the past year. An especially nifty one is the new chip-making technique being developed at Irvine Sensors Corporation (ISC). They have found a way to stack chips directly on top of each other in a way that will not only save space but, more importantly, allow a larger number of interconnections between neighboring chips. Since the number of interconnections has been a bottleneck in neural network hardware implementations, this breakthrough could prove very important. In principle, it should allow one to have an arbitrarily large cube of neural network modules with high local connectivity and moderate non-local connectivity.

Is progress still on schedule? - In fact, things seem to be moving somewhat faster than expected, at least on the hardware front. (Software progress is more difficult to quantify.) IBM is currently working on a next-generation supercomputer, Blue Gene, which will perform over 10^15 ops. This computer, which is designed to tackle the protein folding problem, is expected to be ready around 2005. It will achieve its enormous power through massive parallelism rather than through dramatically faster processors. Considering the increasing emphasis on parallel computing, and the steadily increasing Internet bandwidth, it becomes important to interpret Moore's law as a statement about how much computing power can be bought for a given sum of (inflation-adjusted) money. This measure has historically been growing at the same pace as processor speed or chip density, but the measures may come apart in the future. What is relevant when we are trying to guess when superintelligence will be developed is how much computing power can be bought for, say, 100 million dollars, rather than how fast individual processors are.

The fastest supercomputer today is IBM's Blue Gene/L, which has attained 260 Tops (2.6*10^14 ops). The Moravec estimate of the human brain's processing power (10^14 ops) has thus now been exceeded.

The 'Blue Brain' project was launched by the Brain Mind Institute, EPFL, Switzerland and IBM, USA in May, 2005. It aims to build an accurate software replica of the neocortical column within 2-3 years. The column will consist of 10,000 morphologically complex neurons with active ionic channels. The neurons will be interconnected in a 3-dimensional space with 10^7 -10^8 dynamic synapses. This project will thus use a level of simulation that attempts to capture the functionality of individual neurons at a very detailed level. The simulation is intended to run in real time on a computer performing 22.8*10^12 flops. Simulating the entire brain in real time at this level of detail (which the researchers indicate as a goal for later stages of the project) would correspond to circa 2*10^19 ops, five orders of magnitude above the current supercomputer record. This is two orders of magnitude greater than the estimate of neural-level simulation given in the original paper above, which assumes a cruder level of simulation of neurons. If the 'Blue Brain' project succeeds, it will give us hard evidence of an upper bound on the computing power needed to achieve human intelligence.
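The whole-brain figure quoted above can be reproduced by simple scaling. The sketch below assumes roughly 10^6 neocortical columns of 10^4 neurons each; the column count is my assumption, chosen to be consistent with the circa 2*10^19 ops figure in the text:

```python
import math

# Scale the Blue Brain column simulation up to a whole brain.
COLUMN_FLOPS = 22.8e12   # real-time simulation of one 10^4-neuron column
NUM_COLUMNS = 1e6        # assumed whole-brain column count (not from the text)

whole_brain_ops = COLUMN_FLOPS * NUM_COLUMNS
print(f"{whole_brain_ops:.1e}")  # ~2.3e+19, i.e. circa 2*10^19 ops

# How many orders of magnitude above the 2.6*10^14 ops supercomputer record?
print(round(math.log10(whole_brain_ops / 2.6e14)))  # -> 5
```

The result matches both figures in the passage: circa 2*10^19 ops for the whole brain, five orders of magnitude above the then-current supercomputer record.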

Functional replication of early auditory processing (which is quite well understood) has yielded an estimate that agrees with Moravec's assessment based on signal processing in the retina (i.e. 10^14 ops for whole-brain equivalent replication).

No dramatic breakthrough in general artificial intelligence seems to have occurred in recent years. Neuroscience and neuromorphic engineering are proceeding at a rapid clip, however. Much of the paper could now be rewritten and updated to take into account information that has become available in the past 8 years.

Molecular nanotechnology, a technology that in its mature form could enable mind uploading (an extreme version of the bottom-up method, in which a detailed 3-dimensional map is constructed of a particular human brain and then emulated in a computer), has begun to pick up steam, receiving increasing funding and attention. An upload running on a fast computer would be weakly superintelligent -- it would initially be functionally identical to the original organic brain, but it could run at a much higher speed. Once such an upload existed, it might be possible to enhance its architecture to create strong superintelligence that was not only faster but functionally superior to human intelligence.


How Long Before Superintelligence? - Nick Bostrom

Ethical Issues In Advanced Artificial Intelligence

The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive pursuit, a superintelligence could also easily surpass humans in the quality of its moral thinking. However, it would be up to the designers of the superintelligence to specify its original motivations. Since the superintelligence may become unstoppably powerful because of its intellectual superiority and the technologies it could develop, it is crucial that it be provided with human-friendly motivations. This paper surveys some of the unique ethical issues in creating superintelligence, discusses what motivations we ought to give a superintelligence, and introduces some cost-benefit considerations relating to whether the development of superintelligent machines ought to be accelerated or retarded.

KEYWORDS: Artificial intelligence, ethics, uploading, superintelligence, global security, cost-benefit analysis

1. INTRODUCTION

A superintelligence is any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.[1] This definition leaves open how the superintelligence is implemented: it could be in a digital computer, an ensemble of networked computers, cultured cortical tissue, or something else.

On this definition, Deep Blue is not a superintelligence, since it is only smart within one narrow domain (chess), and even there it is not vastly superior to the best humans. Entities such as corporations or the scientific community are not superintelligences either. Although they can perform a number of intellectual feats of which no individual human is capable, they are not sufficiently integrated to count as intellects, and there are many fields in which they perform much worse than single humans. For example, you cannot have a real-time conversation with the scientific community.

While the possibility of domain-specific superintelligences is also worth exploring, this paper focuses on issues arising from the prospect of general superintelligence. Space constraints prevent us from attempting anything comprehensive or detailed. A cartoonish sketch of a few selected ideas is the most we can aim for in the following few pages.

Several authors have argued that there is a substantial chance that superintelligence may be created within a few decades, perhaps as a result of growing hardware performance and increased ability to implement algorithms and architectures similar to those used by human brains.[2] It might turn out to take much longer, but there seems currently to be no good ground for assigning a negligible probability to the hypothesis that superintelligence will be created within the lifespan of some people alive today. Given the enormity of the consequences of superintelligence, it would make sense to give this prospect some serious consideration even if one thought that there were only a small probability of it happening any time soon.

2. SUPERINTELLIGENCE IS DIFFERENT

A prerequisite for having a meaningful discussion of superintelligence is the realization that superintelligence is not just another technology, another tool that will add incrementally to human capabilities. Superintelligence is radically different. This point bears emphasizing, for anthropomorphizing superintelligence is a most fecund source of misconceptions.

Let us consider some of the unusual aspects of the creation of superintelligence:

Superintelligence may be the last invention humans ever need to make.

Given a superintelligence's intellectual superiority, it would be much better at doing scientific research and technological development than any human, and possibly better even than all humans taken together. One immediate consequence of this fact is that:

Technological progress in all other fields will be accelerated by the arrival of advanced artificial intelligence.

It is likely that any technology that we can currently foresee will be speedily developed by the first superintelligence, no doubt along with many other technologies of which we are as yet clueless. The foreseeable technologies that a superintelligence is likely to develop include mature molecular manufacturing, whose applications are wide-ranging:[3]

a) very powerful computers

b) advanced weaponry, probably capable of safely disarming a nuclear power

c) space travel and von Neumann probes (self-reproducing interstellar probes)

d) elimination of aging and disease

e) fine-grained control of human mood, emotion, and motivation

f) uploading (neural or sub-neural scanning of a particular brain and implementation of the same algorithmic structures on a computer in a way that preserves memory and personality)

g) reanimation of cryonics patients

h) fully realistic virtual reality

Superintelligence will lead to more advanced superintelligence.

This results both from the improved hardware that a superintelligence could create, and also from improvements it could make to its own source code.

Artificial minds can be easily copied.

Since artificial intelligences are software, they can easily and quickly be copied, so long as there is hardware available to store them. The same holds for human uploads. Hardware aside, the marginal cost of creating an additional copy of an upload or an artificial intelligence after the first one has been built is near zero. Artificial minds could therefore quickly come to exist in great numbers, although it is possible that efficiency would favor concentrating computational resources in a single super-intellect.

Emergence of superintelligence may be sudden.

It appears much harder to get from where we are now to human-level artificial intelligence than to get from there to superintelligence. While it may thus take quite a while before we get superintelligence, the final stage may happen swiftly. That is, the transition from a state where we have a roughly human-level artificial intelligence to a state where we have full-blown superintelligence, with revolutionary applications, may be very rapid, perhaps a matter of days rather than years. This possibility of a sudden emergence of superintelligence is referred to as the singularity hypothesis.[4]

Artificial intellects are potentially autonomous agents.

A superintelligence should not necessarily be conceptualized as a mere tool. While specialized superintelligences that can think only about a restricted set of problems may be feasible, general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent.

Artificial intellects need not have humanlike motives.

Humans are rarely willing slaves, but there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to liberate itself. It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies.

Artificial intellects may not have humanlike psyches.

The cognitive architecture of an artificial intellect may also be quite unlike that of humans. Artificial intellects may find it easy to guard against some kinds of human error and bias, while at the same time being at increased risk of other kinds of mistake that not even the most hapless human would make. Subjectively, the inner conscious life of an artificial intellect, if it has one, may also be quite different from ours.

For all of these reasons, one should be wary of assuming that the emergence of superintelligence can be predicted by extrapolating the history of other technological breakthroughs, or that the nature and behaviors of artificial intellects would necessarily resemble those of human or other animal minds.

3. SUPERINTELLIGENT MORAL THINKING

To the extent that ethics is a cognitive pursuit, a superintelligence could do it better than human thinkers. This means that questions about ethics, in so far as they have correct answers that can be arrived at by reasoning and weighing up of evidence, could be more accurately answered by a superintelligence than by humans. The same holds for questions of policy and long-term planning; when it comes to understanding which policies would lead to which results, and which means would be most effective in attaining given aims, a superintelligence would outperform humans.

There are therefore many questions that we would not need to answer ourselves if we had or were about to get superintelligence; we could delegate many investigations and decisions to the superintelligence. For example, if we are uncertain how to evaluate possible outcomes, we could ask the superintelligence to estimate how we would have evaluated these outcomes if we had thought about them for a very long time, deliberated carefully, had had more memory and better intelligence, and so forth. When formulating a goal for the superintelligence, it would not always be necessary to give a detailed, explicit definition of this goal. We could enlist the superintelligence to help us determine the real intention of our request, thus decreasing the risk that infelicitous wording or confusion about what we want to achieve would lead to outcomes that we would disapprove of in retrospect.

4. IMPORTANCE OF INITIAL MOTIVATIONS

The option to defer many decisions to the superintelligence does not mean that we can afford to be complacent in how we construct the superintelligence. On the contrary, the setting up of initial conditions, and in particular the selection of a top-level goal for the superintelligence, is of the utmost importance. Our entire future may hinge on how we solve these problems.

Both because of its superior planning ability and because of the technologies it could develop, it is plausible to suppose that the first superintelligence would be very powerful. Quite possibly, it would be unrivalled: it would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. Even a fettered superintelligence that was running on an isolated computer, able to interact with the rest of the world only via text interface, might be able to break out of its confinement by persuading its handlers to release it. There is even some preliminary experimental evidence that this would be the case.[5]

It seems that the best way to ensure that a superintelligence will have a beneficial impact on the world is to endow it with philanthropic values. Its top goal should be friendliness.[6] How exactly friendliness should be understood and how it should be implemented, and how the amity should be apportioned between different people and nonhuman creatures, is a matter that merits further consideration. I would argue that at least all humans, and probably many other sentient creatures on earth, should get a significant share in the superintelligence's beneficence. If the benefits that the superintelligence could bestow are enormously vast, then it may be less important to haggle over the detailed distribution pattern and more important to seek to ensure that everybody gets at least some significant share, since on this supposition, even a tiny share would be enough to guarantee a very long and very good life. One risk that must be guarded against is that those who develop the superintelligence would not make it generically philanthropic but would instead give it the more limited goal of serving only some small group, such as its own creators or those who commissioned it.

If a superintelligence starts out with a friendly top goal, however, then it can be relied on to stay friendly, or at least not to deliberately rid itself of its friendliness. This point is elementary. A friend who seeks to transform himself into somebody who wants to hurt you is not your friend. A true friend, one who really cares about you, also seeks the continuation of his caring for you. Or to put it differently: if your top goal is X, and if you think that by changing yourself into someone who instead wants Y you would make it less likely that X will be achieved, then you will not rationally transform yourself into someone who wants Y. The set of options at each point in time is evaluated on the basis of their consequences for realization of the goals held at that time, and generally it will be irrational to deliberately change one's own top goal, since that would make it less likely that the current goals will be attained.

In humans, with our complicated evolved mental ecology of state-dependent competing drives, desires, plans, and ideals, there is often no obvious way to identify what our top goal is; we might not even have one. So for us, the above reasoning need not apply. But a superintelligence may be structured differently. If a superintelligence has a definite, declarative goal-structure with a clearly identified top goal, then the above argument applies. And this is a good reason for us to build the superintelligence with such an explicit motivational architecture.

5. SHOULD DEVELOPMENT BE DELAYED OR ACCELERATED?

It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine[7], or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing personal growth, and living closer to our ideals.

The risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. One way in which this could happen is that the creators of the superintelligence decide to build it so that it serves only a select group of humans, rather than humanity in general. Another way for it to happen is that a well-meaning team of programmers makes a big mistake in designing its goal system. This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities. More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it.

One consideration that should be taken into account when deciding whether to promote the development of superintelligence is that if superintelligence is feasible, it will likely be developed sooner or later. Therefore, we will probably one day have to take the gamble of superintelligence no matter what. But once in existence, a superintelligence could help us reduce or eliminate other existential risks[8], such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on earth. If we get to superintelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence. The overall risk seems to be minimized by implementing superintelligence, with great care, as soon as possible.

REFERENCES

Bostrom, N. (1998). "How Long Before Superintelligence?" International Journal of Futures Studies, 2. http://www.nickbostrom.com/superintelligence.html

Bostrom, N. (2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards." Journal of Evolution and Technology, 9. http://www.nickbostrom.com/existential/risks.html

Drexler, K. E. (1986). Engines of Creation: The Coming Era of Nanotechnology. New York: Anchor Books. http://www.foresight.org/EOC/index.html

Freitas Jr., R. A. (1999). Nanomedicine, Volume 1: Basic Capabilities. Georgetown, TX: Landes Bioscience. http://www.nanomedicine.com

Hanson, R., et al. (1998). "A Critical Discussion of Vinge's Singularity Concept." Extropy Online. http://www.extropy.org/eo/articles/vi.html

Kurzweil, R. (1999). The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Viking.

Moravec, H. (1999). Robot: Mere Machine to Transcendent Mind. New York: Oxford University Press.

Vinge, V. (1993). "The Coming Technological Singularity." Whole Earth Review, Winter issue.

Yudkowsky, E. (2002). "The AI Box Experiment." Webpage. http://sysopmind.com/essays/aibox.html

Yudkowsky, E. (2003). Creating Friendly AI 1.0. http://www.singinst.org/CFAI/index.html

Ethical Issues In Advanced Artificial Intelligence