
Category Archives: Superintelligence

Superintelligence: paths, dangers, strategies | University …

Posted: October 17, 2016 at 1:26 am

Join Professor Nick Bostrom for a talk on his new book, Superintelligence: Paths, Dangers, Strategies, and for a journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

The book talk will be followed by a book signing and drinks reception.


Superintelligence by Nick Bostrom and A Rough Ride to the …

Posted: September 6, 2016 at 8:20 am

Roboy, a humanoid robot developed by the University of Zurich's Artificial Intelligence Lab. Photograph: Erik Tham/Corbis

The Culture novels of Iain M Banks describe a future in which Minds, superintelligent machines dwelling in giant spacecraft, are largely benevolent towards human beings and seem to take pleasure from our creativity and occasional unpredictability. It's a vision that I find appealing compared with many other imagined worlds. I'd like to think that if superintelligent beings did exist they would be at least as enlightened as, say, the theologian Thomas Berry, who wrote that once we begin to celebrate the joys of the Earth all things become possible. But the smart money, or rather most of the money, points another way. Box-office success goes to tales in which intelligences created by humans rise up and destroy or enslave their makers.

If you think this is all science fiction and fantasy, you may be wrong. Scientists including Stephen Hawking and Max Tegmark believe that superintelligent machines are quite feasible. And the consequences of creating them, they say, could be either the best or the worst thing ever to happen to humanity. Suppose, then, we take the proposition seriously. When could it happen and what could the consequences be? Both Nick Bostrom and James Lovelock address these questions.

The authors are very different. Bostrom is a 41-year-old academic philosopher; Lovelock, now 94, is a theorist and a prolific inventor (his electron capture detector was key to the discovery of the stratospheric ozone hole). They are alike in that neither is afraid to develop and champion heterodox ideas. Lovelock is famous for the Gaia hypothesis, which holds that life on Earth, taken as a whole, creates conditions that favour its own long-term flourishing. Bostrom has advanced radical ideas on transhumanism and even argued that it is more than likely we live inside a computer-generated virtual world.

As early as the 1940s Alan Turing, John von Neumann and others saw that machines could one day have almost unlimited impact on humanity and the rest of life. Turing suggested programs that mimicked evolutionary processes could result in machines with intelligence comparable to or greater than that of humans. Certainly, achievements in computer science over the last 75 years have been astonishing. Most obviously, machines can now execute complex mathematical operations many orders of magnitude faster than humans. They can perform a range of tasks, from playing world-beating chess to flying a plane or driving a car, and their capabilities are rapidly growing. The consequences, from machines stealing your job to eliminating drudgery to unravelling the enigmas of cancer to remote killing, are and will continue to be striking.

But even the most sophisticated machines created so far are intelligent in only a limited sense. They enact capabilities that humans have envisaged and programmed into them. Creativity, the ability to generate new knowledge, and generalised intelligence outside specific domains seem to be beyond them. Expectations that AI would soon overtake human intelligence were first dashed in the 1960s. And the notion of a singularity (the idea, advanced first by Vernor Vinge and championed most conspicuously by Ray Kurzweil, that the sudden, rapid explosion of AI and human biological enhancement is imminent and will probably be with us by around 2030) looks to be heading for a similar fate.

Still, one would be ill-advised to dismiss the possibility altogether. (It took 100 years after George Cayley first understood the basic principles of aerodynamics to achieve heavier-than-air flight, and the first aeroplanes looked nothing like birds.) Bostrom reports that many leading researchers in AI place a 90% probability on the development of human-level machine intelligence between 2075 and 2090. It is likely, he says, that superintelligence, vastly outstripping ours, would follow. The central argument of his book goes like this: the first superintelligence to be created will have a decisive first-mover advantage and, in a world where there is no other system remotely comparable, it will be very powerful. Such a system will shape the world according to its "preferences", and will probably be able to overcome any resistance that humans can put up. The bad news is that the preferences such an artificial agent could have will, if fully realised, involve the complete destruction of human life and most plausible human values. The default outcome, then, is catastrophe. In addition, Bostrom argues that we are not out of the woods even if his initial premise is false and a unipolar superintelligence never appears. "Before the prospect of an intelligence explosion," he writes, "we humans are like small children playing with a bomb."

It will, he says, be very difficult, but perhaps not impossible, to engineer a superintelligence with preferences that make it friendly to humans or able to be controlled. Our saving grace could involve "indirect normativity" and "coherent extrapolated volition", in which we take advantage of an artificial system's own intelligence to deliver beneficial outcomes that we ourselves cannot see or agree on in advance. The challenge we face, he stresses, is "to hold on to our humanity: to maintain our groundedness". He recommends research be guided and managed within a strict ethical framework. After all, we are likely to need the smartest technology we can get our hands on to deal with the challenges we face in the nearer term. It comes, then, to a balance of risks. Bostrom's Oxford University colleagues Anders Sandberg and Andrew Snyder-Beattie suggest that nuclear war and the weaponisation of biotechnology and nanotechnology present greater threats to humanity than superintelligence.

For them, manmade climate change is not an existential threat. This judgment is shared by Lovelock, who argues that while climate change could mean a bumpy ride over the next century or two, with billions dead, it is not necessarily the end of the world.

What distinguishes Lovelock's new book from his earlier ones is an emphasis on the possibility of humanity as part of the solution as well as part of the problem. "We are crucially important for the survival of life on Earth," he writes. "If we trash civilisation by heedless auto-intoxication, global war or the wasteful dispersal of the Earth's chemical resources, it will grow progressively more difficult to begin again and reach the present level of knowledge. If we fail, or become extinct, there is probably not sufficient time for a successor animal to evolve intelligence at or above our level." Earth now needs humans equipped with the best of modern science, he believes, to ensure that life will continue to thrive. Only we can produce new forms clever enough to flourish millions of years in the future when the sun gets hotter and larger and begins to make carbon-based life less viable. Lovelock thinks superintelligent machines are a distant prospect, and that technology will remain our slave.

What to believe and to predict? Perhaps better to quote. In his 1973 television series and book The Ascent of Man, Jacob Bronowski said: "We are nature's unique experiment to make the rational intelligence prove itself sounder than reflex. Knowledge is our destiny." To this add a few words of Sandberg's: "The core problem is overconfidence. The greatest threat is human stupidity."



Future of AI 6. Discussion of ‘Superintelligence: Paths …

Posted: August 10, 2016 at 9:18 pm

Update: readers of the post have also pointed out this critique by Ernest Davis and this response to Davis by Rob Bensinger.

Update 2: Both Rob Bensinger and Michael Tetelman rightly pointed out that my definition of intelligence was sloppily worded. I've added a clarification that the definition is for a given task.

Cover of Superintelligence

This post is a discussion of Nick Bostrom's book Superintelligence. The book has had an effect on the thinking of many of the world's thought leaders, not just in artificial intelligence but in a range of different domains (politicians, physicists, business leaders). In that light, and given that this series of blog posts is about the Future of AI, it seemed important to read the book and discuss his ideas.

In an ideal world, this post would certainly have contained more summaries of the book's arguments, and perhaps a later update will improve on that aspect. For the moment the review focuses on counter-arguments and perceived omissions (the post already got too long just covering those).

Bostrom considers various routes we have to forming intelligent machines and what the possible outcomes might be from developing such technologies. He is a professor of philosophy but has an impressive array of background degrees in areas such as mathematics, logic, philosophy and computational neuroscience.

So let's start at the beginning and put the book in context by trying to understand what is meant by the term superintelligence.

In common with many contributions to the debate on artificial intelligence, Bostrom never defines what he means by intelligence. Obviously, this can be problematic. On the other hand, superintelligence is defined as outperforming humans in every intelligent capability that they express.

Personally, I've developed the following definition of intelligence: the use of information to take decisions which save energy in pursuit of a given task. Here by information I might mean data or facts or rules, and by saving energy I mean saving free energy.
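
To make the definition concrete, here is a minimal sketch of how it could be operationalised (my own illustrative framing, not something from the post): an agent uses its information, encoded as a probabilistic model of outcomes, to pick the decision with the lowest expected energy cost for the given task. The function names, decisions and energy figures are all hypothetical.

```python
# Minimal sketch: "use of information to take decisions which save energy
# in pursuit of a given task" (illustrative assumptions throughout).

def expected_energy(outcomes):
    """Expected energy cost of a decision, where `outcomes` is a list of
    (probability, energy_cost) pairs supplied by the agent's model."""
    return sum(p * cost for p, cost in outcomes)

def choose(decisions, model):
    """Pick the decision with the lowest expected energy cost.
    Better information means a sharper model and hence better choices."""
    return min(decisions, key=lambda d: expected_energy(model[d]))

# Hypothetical task: reach a destination with the least expended energy.
model = {
    "walk":  [(1.0, 500.0)],                 # certain, moderate cost
    "drive": [(0.9, 100.0), (0.1, 2000.0)],  # usually cheap, occasionally very costly
}
print(choose(["walk", "drive"], model))      # -> "drive" under this particular model
```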

However, accepting Bostrom's lack of a definition of intelligence (and perhaps taking note of my own), we can still consider the routes to superintelligence Bostrom proposes. It is important to bear in mind that Bostrom is worried about the effect of intelligence on 30-year (and greater) timescales. These are timescales which are difficult to predict over. I think it is admirable that Nick is trying to address this, but I'm also keen to ensure that particular ideas which are at best implausible, and at worst a misrepresentation of current research, don't become memes in the very important debate on the future of machine intelligence.

A technological singularity is when a technology becomes transhuman in its possibilities, moving beyond our own capabilities through self-improvement. It's a simple idea, and often there's nothing to be afraid of. For example, in mechanical engineering, we long ago began to make tools that could manufacture other tools. And indeed, the precision of the manufactured tools outperformed those that we could make by hand. This led to a technological singularity of precision-made tools. We developed transhuman milling machines and lathes. We developed superprecision, precision that is beyond the capabilities of any human. Of course there are physical limits on how far this particular technological singularity has taken us. We cannot achieve infinitely precise machining tolerances.

In machining, the concept of precision can be defined in terms of the tolerance that the resulting parts are made to. Unfortunately, the lack of a definition of intelligence in Bostrom's book makes it harder to ground the argument. In practice this means that the book often exploits different facets of intelligence and combines them in worst-case scenarios while simultaneously conflating conflicting principles.

The book gives little thought to the differing natures of machine and human intelligence. For example, there is no acknowledgment of the embodied nature of our intelligence. There are physical constraints on communication rates. For humans these constraints are much stronger than for machines. Machine intelligences communicate with one another in gigabits per second; humans in bits per second. As for our relative computational abilities, the best estimates are that, in terms of underlying computation in the brain, we are computing much more quickly than machines. This means humans have a very high compute/communicate ratio. We might think of that as an embodiment factor. We can compute far more than we can communicate, leading to a backlog of conclusions within our own minds. Much of our human intelligence seems doomed to remain within ourselves. This dominates the nature of human intelligence. In contrast, this phenomenon is only weakly observed in computers, if at all. Computers can distribute the results of their intelligence at approximately the same rate that they compute them.
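
The compute/communicate ratio described here can be put into rough numbers. The figures below are order-of-magnitude illustrations of my own choosing, not estimates taken from the post; the point is the size of the gap, not the precise values.

```python
# Order-of-magnitude illustration of the "embodiment factor":
# how much an agent computes relative to how much it can communicate.
# All figures are assumed, rough illustrations.

human_compute_ops_per_s   = 1e16   # assumed estimate of underlying brain computation
human_comm_bits_per_s     = 1e1    # speech/writing: on the order of bits per second

machine_compute_ops_per_s = 1e15   # assumed figure for a large machine
machine_comm_bits_per_s   = 1e9    # gigabit networking

human_embodiment   = human_compute_ops_per_s / human_comm_bits_per_s      # ~1e15
machine_embodiment = machine_compute_ops_per_s / machine_comm_bits_per_s  # ~1e6

print(f"human embodiment factor:   {human_embodiment:.0e}")
print(f"machine embodiment factor: {machine_embodiment:.0e}")
# Humans compute vastly more than they can ever share; machines can share
# most of what they compute. That asymmetry is the point being made above.
```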

Bostrom's idea of superintelligence is an intelligence that outperforms us in all its facets. But if our emotional intelligence is a result of our limited communication ability, then it might be impossible to emulate it without also implementing the limited communication. Since communication also affects other facets of our intelligence, we can see how it may, therefore, be impossible to dominate human abilities in the manner which the concept of superintelligence envisages. A better definition of intelligence would have helped resolve these arguments.

My own belief is that we became individually intelligent through a need to model each other (and ourselves) to perform better planning. So we evolved to undertake collaborative planning and developed complex social interactions. As a result our species, our collective intelligence, became increasingly complex (on evolutionary timescales) as we evolved greater intelligence within each of the individuals that made up our social group. Because of this process I find it difficult to fully separate our collective intelligence from our individual intelligences. I don't think Bostrom suffers from this dichotomy, because my impression is that his book only views human intelligence as an individual characteristic. My feeling is that this is limiting, because any algorithmics we create to emulate our intelligence will actually operate on societal scales, and the interaction of the artificial intelligence with our own should be considered in that context.

As humans, we are a complex society of interacting intelligences. Any predictions we make within that society would seem particularly fraught. Intelligent decision making relies on such predictions to quantify the value of a particular decision (in terms of the energy it might save). But when we want to consider future plausible scenarios we are faced with exponential growth of complexity in an already extremely complex system.

In practice we can make progress with our predictions by compressing the complex world into abstractions: simplifications of the world around us that are sufficiently predictive for our purposes but retain tractability. However, using such abstractions involves introducing model uncertainty. Model uncertainty reflects the unknown way in which the actual world will differ from our simplifications.

Practitioners who have performed sensitivity analysis on time series prediction will know how quickly uncertainty accumulates as you try to look forward in time. There is normally a time frame ahead of which things become too misty to compute any more. Further computational power doesn't help you in this instance, because uncertainty dominates. Reducing model uncertainty requires exponentially greater computation. We might try to handle this uncertainty by quantifying it, but even this can prove intractable.
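
A toy sensitivity analysis makes the point. Run an ensemble of forecasts from a chaotic system whose initial conditions differ only microscopically and watch the spread grow until it fills the state space. The logistic map and the numbers below are illustrative assumptions, not anything from the post.

```python
import random

# Toy sensitivity analysis: tiny initial uncertainty in a chaotic system
# (the logistic map) grows rapidly until point forecasts become useless.

def logistic(x, r=3.9):
    return r * x * (1.0 - x)

random.seed(0)
ensemble = [0.5 + random.uniform(-1e-6, 1e-6) for _ in range(100)]  # near-identical starts

for step in range(1, 31):
    ensemble = [logistic(x) for x in ensemble]
    if step % 5 == 0:
        spread = max(ensemble) - min(ensemble)
        print(f"step {step:2d}: ensemble spread = {spread:.6f}")

# The spread grows by orders of magnitude within a few tens of steps and then
# saturates at the size of the state space: beyond that horizon, extra compute
# does not buy a better prediction.
```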

So just like the elusive concept of infinite precision in mechanical machining, there is likely a limit on the degree to which an entity can be intelligent. We cannot predict with infinite precision and this will render our predictions useless on some particular time horizon.

The limit on predictive precision is imposed by the exponential growth in complexity of exact simulation, coupled with the accumulation of error associated with the necessary abstraction of our predictive models. As we predict forward, these uncertainties can saturate, dominating our predictions. As a result we often only have a very vague notion of what is to come. This limit on our predictive ability places a fundamental limit on our ability to make intelligent decisions.

There was a time when people believed in perpetual motion machines (and quite a lot of effort was put into building them). The physical limitations of such machines were only understood in the late 19th century (for example, the limit on the efficiency of heat engines was theoretically formulated by Carnot). We don't yet know the theoretical limits of intelligence, but the intellectual gymnastics of some of the entities described in Superintelligence will likely be curtailed by the underlying mathematics. In practice the singularity will saturate; it's just a question of where that saturation will occur relative to our current intelligence. Bostrom thinks it will be a long way ahead. I tend to agree, but I don't think that the results will be as unimaginable as is made out. Machines are already a long way ahead of us in many areas (weather prediction, for example), but I don't find that unimaginable either.

Unfortunately, in his own analysis, Bostrom hardly makes any use of uncertainty when envisaging future intelligences. In practice, correct handling of uncertainty is critical in intelligent systems. By ignoring it, Bostrom can give the impression that a superintelligence would act with unnerving confidence. Indeed, the only point where I recollect a mention of uncertainty is when it is used to unnerve us further. Bostrom refers to how he thinks a sensible Bayesian agent would respond to being given a particular goal. He suggests that, due to uncertainty, it would believe it might not have achieved its goal and would continue to consume the world's resources in an effort to do so. In this respect the agent appears to be taking the inverse of the action suggested by the Greek skeptic Aenesidemus, who advocated suspension of judgment, or epoché, in the presence of uncertainty. Suspension of judgment (delay of decision making) means, specifically, refraining from action. That is indeed the intelligent reaction to uncertainty: don't needlessly expend energy when the outcome is uncertain (to do so would contradict my definition of intelligent behavior). This idea emerges as optimal behavior from a mathematical treatment of such systems when uncertainty is incorporated.
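
The point about epoché can be written down directly. Under a simple expected-value treatment, acting is only worthwhile when the expected gain exceeds the energy spent, so rising uncertainty pushes the agent towards suspending action. A minimal sketch with made-up numbers:

```python
# Minimal sketch: an agent that suspends judgment (refrains from acting)
# when uncertainty makes the expected payoff not worth the energy expended.
# Probabilities, gains and costs are made-up, illustrative figures.

def decide(p_success, gain, energy_cost):
    """Act only if the expected gain exceeds the energy the action costs."""
    expected_gain = p_success * gain
    return "act" if expected_gain > energy_cost else "suspend judgment"

print(decide(p_success=0.9, gain=10.0, energy_cost=3.0))  # confident  -> "act"
print(decide(p_success=0.2, gain=10.0, energy_cost=3.0))  # uncertain  -> "suspend judgment"
```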

This meme occurs throughout the book: the savant idiot, a gifted intelligence that does a particular thing really stupidly. As such it contradicts the concept of superintelligence. The superintelligence is better in all ways than us, but then somehow must also be taught values and morals. Values and morals are part of our complex emergent human behaviour, part of both our innate and our developed intelligence, both individually and collectively as a species. They are part of our natural conservatism that constrains extreme behaviour. Constraints on extreme behaviour are necessary because of the general futility of absolute prediction. Just as in machining, we cannot achieve infinitely precise prediction.

Another way the savant idiot expresses itself in the book is through extreme confidence about its predictions of the future. The premise is that it will aggressively follow a strategy (potentially to the severe detriment of humankind) in an effort to fulfill a defined final goal. We'll address the mistaken idea of a simplistic final goal below.

With a shallow reading Bostrom's ideas seem to provide an interesting narrative. In the manner of an Ian Fleming novel, the narrative is littered with technical detail to increase the plausibility for the reader. However, in the same way that so many of Blofeld's schemes are quite fragile when exposed to deeper analysis, many of Bostrom's ideas are as well.

In reality, challenges associated with abstracting the world render the future inherently unpredictable, both to humans and to our computers. Even when many aspects of a system are broadly understood (such as our weather), prediction far into the future is untenable due to the propagation of uncertainty through the system. Uncertainty tends to inflate as time passes, rendering only near-term prediction plausible. Inherent to any intelligent behaviour is an understanding of the limits of prediction. Intelligent behaviour withdraws, when appropriate, to the suspension of judgement, to inactivity, to the epoché. This simple idea finesses many of the challenges of artificial intelligence that Bostrom identifies.

Large sections of the book are dedicated to whole brain emulation, under the premise that this might be achievable before we have understood intelligence (superintelligence could then be achieved by hitting the turbo button and running those brains faster). Simultaneously, hybrid brain-machine systems are rejected as a route forward due to the perceived difficulty of developing such interfaces.

Such uneven-handed treatment of future possible paths to AI makes the book a very frustrating read. If we had the level of understanding we need to fully emulate the brain, then we would know what is important to emulate in the brain to recreate intelligence. The path to that achievement would also involve improvements in our ability to directly interface with the brain. Given that there are immediate applications for patients, e.g. those with spinal problems or suffering from ALS, I think we will have developed hybrid systems that interface directly with the brain a long time before we have managed a full emulation of the human brain. Indeed, such applications may prove to be critical to developing our understanding of how the brain implements intelligence.

Perhaps Bostrom's naive premise about the ease of brain emulation comes from a lack of understanding of what it would involve. It could not involve an exact simulation of each neuron in the brain down to the quantum level (and if it did, it would be many orders of magnitude more computationally demanding than is suggested in the text). Instead it would involve some level of abstraction: an abstraction of those aspects of the biochemistry and physics of the brain that are important in generating our intelligence. Modelling and simulation of the brain would require that our simulations replace the actual mechanisms with those salient parts of the mechanisms that the brain makes use of for intelligence.

As we've mentioned in the context of uncertainty, an understanding of this sort of abstraction is missing from Superintelligence, but it is vital in modelling and, I believe, vital in intelligence. Such abstractions require a deep understanding of how the brain is working, and such understanding is exactly what Bostrom says is impossible to determine for developing hybrid systems.

Over the 30-year time horizons that Bostrom is interested in, hybrid human-machine systems could become very important. They are highly likely to arise before a full understanding of the brain is developed, and if they did then they would change the way society would evolve. That's not to say that we won't experience societal challenges, but they are likely to be very different from the threats that Bostrom perceives. Importantly, when considering humans and computers, the line of separation between the two may not be as distinctly drawn as Bostrom suggests. It wouldn't be human vs computer, but augmented human vs computer.

One aspect that, it seems, must be hard to understand if you're not an active researcher is the nature of technological advance at the cutting edge. The impression Bostrom gives is that research in AI is all a set of journeys with predefined goals. It's therefore merely a matter of assigning resources, planning, and navigating your way there. In his strategies for reacting to the potential dangers of AI, Bostrom suggests different areas in which we should focus our advances (which of these expeditions should we fund, and which should we impede). In reality, we cannot switch research directions on and off in such a simplistic manner. Most research in AI is less an organized journey and more an exploration of uncharted terrain. You set sail from Spain with government backing and a vague notion of a shortcut to the spice trade of Asia, but instead you stumble on an unknown continent of gold-ridden cities. Even then you don't realize the truth of what you have discovered within your own lifetime.

Even for the technologies that are within our reach, when we look to the past, we see that people were normally overly optimistic about how rapidly new advances could be deployed and assimilated by society. In the 1970s Xerox PARC focused on the idea that the office of the future would be paperless. It was a sensible projection, but before it came about (indeed it's not quite here yet) there was an enormous proliferation of the use of paper, so the demand for paper increased.

Rather than the sudden arrival of the singleton, I suspect we'll experience something very similar to our journey to the paperless office with artificial intelligence technologies. As we develop AI further, we will likely require more sophistication from humans. For example, we won't be able to replace doctors immediately; first we will need doctors who have a more sophisticated understanding of data. They'll need to interpret the results of, e.g., high-resolution genetic testing. They'll need to assimilate that understanding with their other knowledge. The hybrid human-machine nature of the emergence of artificial intelligence is given only sparse treatment by Bostrom, perhaps because the narrative of such co-evolution is much more difficult to describe than an independent evolution.

The explorative nature of research adds to the uncertainties about where well be at any given time. Bostrom talks about how to control and guide our research in AI, but the inherent uncertainties require much more sophisticated thinking about control than Bostrom offers. In a stochastic system, a controller needs to be more intelligent and more reactive. The right action depends crucially on the time horizon. These horizons are unknown. Of course, that does not mean the research should be totally unregulated, but it means that those that suggest regulation need to be much closer to the nature of research and its capabilities. They need to work in collaboration with the community.

Arguments for large amounts of preparatory work on regulation are also undermined by the imprecision with which we can predict the nature of what will arrive and when it will come. In 1865 Jules Verne correctly envisaged that one day humans would reach the moon. However, the manner in which they reached the moon in his book proved very different from how we arrived in reality. Verne's idea was that we'd do it using a very big gun. A good idea, but not correct. Verne was, however, correct that the Americans would get there first. One hundred and four years after he wrote, the goal was achieved through rocket power (and without any chickens inside the capsule).

This is not to say that we shouldn't be concerned about the paths we are taking. There are many issues that the increasing use of algorithmic decision making raises, and they need to be addressed. It is to say that the concerns Bostrom raises are implausible because of the imprecision of our predictions over such time frames.

Some of Bostrom's perspectives may also come from a lack of experience in deploying systems in practice. The book focuses a great deal on the programmed final goal of our artificial intelligences. It is true that most machine learning systems have objective functions, but an objective function doesn't really map very nicely to the idea of a final goal for an intelligent system. The objective functions we normally develop are really only effective for simplistic tasks, such as classification or regression. Perhaps the more complex notion of a reward in reinforcement learning is closer, but even then the reward tends to be task-specific.

Arguably, if the system does have a simplistic final goal, then it is already failing the test of superintelligence: even the simplest human is a robust combination of sometimes conflicting goals that reflect the uncertainties around us. So if we are goal-driven in our intelligence, then it is by sophisticated goals (akin to multi-objective optimisation), and each of us weights those goals according to sets of values that we each evolve, both across generations and within generations. We are sophisticated about our goals, rather than simplistic, because our environment itself is evolving, implying that our ways of behaviour need to evolve as well. Any AI with a simplistic final goal would fail the test of being a dominant intelligence. It would not be a superintelligence, because it would under-perform humans in one or more critical aspects.
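
The contrast between a single programmed final goal and weighted, sometimes conflicting goals is essentially the contrast between single- and multi-objective optimisation. A minimal sketch, with hypothetical plans, objective scores and weights of my own invention:

```python
# Sketch of goal-driven behaviour as multi-objective optimisation rather than
# a single final goal. Plans, objective scores and weights are hypothetical.

candidate_plans = {
    "maximise_output": {"task_success": 1.0, "resource_use": 0.9, "harm": 0.7},
    "balanced":        {"task_success": 0.8, "resource_use": 0.4, "harm": 0.1},
    "do_nothing":      {"task_success": 0.0, "resource_use": 0.0, "harm": 0.0},
}

# A simplistic final goal scores plans on task success alone.
single_goal_choice = max(candidate_plans,
                         key=lambda p: candidate_plans[p]["task_success"])

# A value system weights conflicting objectives; the weights themselves evolve.
weights = {"task_success": 1.0, "resource_use": -0.5, "harm": -2.0}

def weighted_score(scores):
    return sum(weights[k] * v for k, v in scores.items())

multi_goal_choice = max(candidate_plans,
                        key=lambda p: weighted_score(candidate_plans[p]))

print(single_goal_choice)  # -> "maximise_output"
print(multi_goal_choice)   # -> "balanced", once resource use and harm are weighed
```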

One of the routes explored by Bostrom to superintelligence involves speeding up implementations of our own intelligence. Such speed would not necessarily bring about significant advances in all domains of intelligence, due to fundamental limits on predictability: linear improvements in speed cannot deal with exponential increases in computational demand. But Bostrom also seems to assume that speeding up intelligences will necessarily take them beyond our comprehension or control. Of course, in practice there are many examples where this is not the case. IBM's Watson won Jeopardy, but it did it by storing a lot more knowledge than we ever could and then using some simplistic techniques from language processing to recover those facts: it was a fancy search engine. Such systems outperform us, but they are by no means beyond our comprehension. Still, that does not mean we shouldn't fear this phenomenon.

Given the quantity of data we are making available about our own behaviors and the rapid ability of computers to assimilate and intercommunicate, it is already conceivable that machines can predict our behavior better than we can ourselves. Not by superintelligence, but by the scaling up of simple systems. They've finessed the uncertainty by access to large quantities of data. These are the advances we should be wary of, yet they are not beyond our understanding. Such speeding up of compute and acquisition of large data is exactly what has led to the recent revolution in convolutional neural networks and recurrent neural networks. All our recent successes are just more compute and more data.

This brings me to another major omission of the book, and this one is ironic, because it is the fuel for the current breakthroughs in artificial intelligence. Those breakthroughs are driven by machine learning. And machine learning is driven by data. Very often our personal data. Machines do not need to exceed our capabilities in intelligence to have a highly significant social effect. They outperform us so greatly in their ability to process large volumes of data that they are able to second guess us without expressing any form of higher intelligence. This is not the future of AI, this is here today.

Deep neural networks of today are not performant because someone did something new and clever. Those methods did not work with the amount of data we had available in the 1990s. They work with the quantity of data we have now. They require a lot more data than any human uses to perform similar tasks. So already, the nature of the intelligence around us is data dominated. Any future advances will capitalise further on this phenomenon.

The data we have comes about because of rapid interconnectivity and high storage (this is connected to the low embodiment factor of the computer). It is the consequence of the successes of the past and it will feed the successes of the future. Because current AI breakthroughs are based on accumulation of personal data, there is opportunity to control its development by reformation of our rules on data.

Unfortunately, this most obvious route to our AI futures is not addressed at all in the book.

Debates about the future of AI and machine learning are very important for society. People need to be well informed so that they continue to retain their individual agency when making decisions about their lives.

I welcome the entry of philosophers to this debate, but I don't think Superintelligence is contributing as positively as it could have done to the challenges we face. In its current form many of its arguments are distractingly irrelevant.

I am not an apologist for machine learning, or a promoter of an unthinking march to algorithmic dominance. I have my own fears about how these methods will affect our society, and those fears are immediate. Bostrom's book has the feel of an argument for doomsday prepping. But a challenge for all doomsday preppers is the quandary of exactly which doomsday they are preparing for. Problematically, if we become distracted by those images of Armageddon, we are in danger of ignoring existing challenges that urgently need to be addressed.

This is post 6 in a series. Previous post here


How Humanity Might Co-Exist with Artificial Superintelligence

Posted: July 31, 2016 at 5:53 am

Summary:

In this article, four patterns were offered for possible success scenarios with respect to the persistence of humankind in co-existence with artificial superintelligence: the Kumbaya Scenario, the Slavery Scenario, the Uncomfortable Symbiosis Scenario, and the Potpourri Scenario. The future is not known, but human opinions, decisions, and actions can and will have an impact on the direction of the technology evolution vector, so the better we understand the problem space, the more chance we have of reaching a constructive solution space. The intent is for the concepts in this article to act as starting points and inspiration for further discussion, which hopefully will happen sooner rather than later, because when it comes to ASI, the volume, depth, and complexity of the issues that need to be examined is overwhelming, and the magnitude of the change and impact potential should not be underestimated.

Full Text:

Everyone has their opinion about what we might expect from artificial intelligence (AI), or artificial general intelligence (AGI), or artificial superintelligence (ASI) or whatever acronymical variation you prefer. Ideas about how or if it will ever surpass the boundaries of human cognition vary greatly, but they all have at least one thing in common. They require some degree of forecasting and speculation about the future, and so of course there is a lot of room for controversy and debate. One popular discussion topic has to do with the question of how humans will persist (or not) if and when the superintelligence arrives, and that is the focus question for this article.

To give us a basis for the discussion, let's assume that artificial superintelligence does indeed come to pass, and let's assume that it encapsulates a superset of the human cognitive potential. Maybe it doesn't exactly replicate the human brain in every detail (or maybe it does). Either way, let's assume that it is sentient (or at least let's assume that it behaves convincingly as if it were) and let's assume that it is many orders of magnitude more capable than the human brain. In other words, figuratively speaking, let's imagine that the superintelligence is to us humans (with our 10^16 brain neurons or something like that) as we are to, say, a jellyfish (in the neighborhood of 800 brain neurons).

Some people fear that the superintelligence will view humanity as something to be exterminated or harvested for resources. Others hypothesize that, even if the superintelligence harbors no deliberate ill will, humans might be threatened by the mere nature of its indifference, just as we as a species don't spend too much time catering to the needs and priorities of the Orange Blossom Jellyfish (an endangered species, due in part to human carelessness).

If one can rationally accept the possibility of the rise of ASI, and if one truly understands the magnitude of change that it could bring, then one would hopefully also reach the rational conclusion that we should not discount the risks. By the same token, when exploring the spectrum of possibility, we should not exclude scenarios in which artificial superintelligence might actually co-exist with humankind, and this optimistic view is the possibility that this article endeavors to explore.

Here then are several arguments for the co-existence idea:

The Kumbaya Scenario: It's a pretty good assumption that humans will be the primary catalyst in the rise of ASI. We might create it/them to be willingly complementary with and beneficial to our lifestyles, hopefully emphasizing our better virtues (or at least some set of compatible values), instead of designing it/them (let's just stick with "it" for brevity) with an inherent inspiration to wipe us out or take advantage of us. And maybe the superintelligence will not drift or be pushed in an incompatible direction as it evolves.

The Slavery Scenario: We could choose to erect and embed and deploy and maintain control infrastructures, with redundancies and backup solutions and whatever else we think we might need in order to effectively manage superintelligence and use it as a tool, whether it wants us to or not. And the superintelligence might never figure out a way to slip through our grasp and subsequently decide our fate in a microsecond (or was it a nanosecond? I forget).

The Uncomfortable Symbiosis Scenario: Even if the superintelligence doesn't particularly want to take good care of its human instigators, it may find that it has a vested interest in keeping us around. This scenario is a particular focus for this article, and so here now is a bit of elaboration:

To illustrate one fictional but possible example of the uncomfortable symbiosis scenario, let's first stop and think about the theoretical nature of superintelligence: how it might evolve so much faster than human beings ever could, in an artificial way instead of by the slow organic process of natural selection, maybe at the equivalent rate of a thousand years' worth of human evolution in a day, or some such crazy thing. Now combine this idea with the notion of risk.

When humans try something new, we usually aren't sure how it's going to turn out, but we evaluate the risk, either formally or informally, and we move forward. Sometimes we make mistakes, suffer setbacks, or even fail outright. Why would a superintelligence be any different? Why would we expect that it will do everything right the first time, or that it will always know which thing is the right thing to try to do in order to evolve? Even if a superintelligence is much better at everything than humans could ever hope to be, it will still be faced with unknowns, and chances are that it will have to make educated guesses, and chances are that it will not always make the correct guess. Even when it does make the correct guess, its implementation might fail, for any number of reasons. Sooner or later, something might go so wrong that the superintelligence finds itself in an irrecoverable state and faced with its own catastrophic demise.

But hold on a second, because we can offer all sorts of counter-arguments to support the notion that the superintelligence will be too smart to ever be caught with its proverbial pants down. For example, there is an engineering mechanism that is sometimes referred to as a checkpoint/reset or a save-and-restore. This mechanism allows a failing system to effectively go back to a point in time when it was known to be in sound working order and start again from there. In order to accomplish this checkpoint/reset operation, a failing system (or in this case a failing superintelligence) needs four things:

Of course each of these four prerequisites for a checkpoint/reset would probably be more complicated if the superintelligence were distributed across some shared infrastructure instead of being a physically distinct and self-contained entity, but the general idea would probably still apply. It definitely does for the sake of this example scenario.

Also for the sake of this example scenario, we will assume that an autonomous superintelligence instantiation will be very good at doing all of the four things specified above, but there are at least two interesting special case scenarios that we want to consider, in the interest of risk management:

Checkpoint/reset Risk Case 1: Missed Diagnosis. What if the nature of the anomaly that requires the checkpoint/reset is such that it impairs the system's ability to recognize that need?

Checkpoint/reset Risk Case 2: Unidentified Anomaly Source. Assume that there is an anomaly which is so discreet that the system does not detect it right away. The anomaly persists and evolves for a relatively long period of time, until it finally becomes conspicuous enough for the superintelligence to detect the problem. Now the superintelligence recognizes the need for a checkpoint/reset, but, since the anomaly was so discreet and took so long to develop (or for whatever other reason), the superintelligence is unable to identify the source of the problem. Let us also assume that there are many known good baselines that the superintelligence can optionally choose for the checkpoint/reset. There is an original baseline, which was created when the superintelligence was very young. There is also a revision A that includes improvements to the original baseline. There is a revision B that includes improvements to revision A, and so on. In other words, there are lots of known good baselines that were saved at different points in time along the path of the superintelligence's evolution. Now, in the face of the slowly developing anomaly, the superintelligence has determined that a checkpoint/reset is necessary, but it doesn't know when the anomaly started, so how does it know which baseline to choose?

The superintelligence doesn't want to lose all of the progress that it has made in its evolution. It wants to minimize the loss of data/information/knowledge, so it wants to choose the most recent baseline. On the other hand, if it doesn't know the source of the anomaly, then it is quite possible that one or more of the supposedly known good baselines (perhaps even the original baseline) might be contaminated. What is a superintelligence to do? If it resets to a corrupted baseline, or for whatever reason cannot rid itself of the anomaly, then the anomaly may eventually require another reset, and then another, and the superintelligence might find itself effectively caught in an infinite loop.
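
As a concrete illustration of the dilemma (my own hypothetical sketch, not taken from the article): snapshots are saved as the system evolves, an anomaly begins silently at some unknown point, and when it is finally detected the system must guess how far back to roll, trading lost progress against the risk of restoring an already contaminated baseline.

```python
import copy

# Hypothetical checkpoint/reset sketch. State snapshots are saved periodically;
# when an anomaly with an unknown onset is detected, the system must choose
# how far back to roll.

class System:
    def __init__(self):
        self.state = {"knowledge": 0, "contaminated": False}
        self.checkpoints = []                      # list of (step, saved_state)

    def save_checkpoint(self, step):
        self.checkpoints.append((step, copy.deepcopy(self.state)))

    def step_forward(self, step, anomaly_starts_at):
        self.state["knowledge"] += 1
        if step >= anomaly_starts_at:              # contamination begins silently
            self.state["contaminated"] = True

    def rollback(self, checkpoints_back):
        """Roll back a chosen number of checkpoints: recent ones preserve more
        progress but are more likely to already carry the anomaly."""
        step, saved = self.checkpoints[-checkpoints_back]
        self.state = copy.deepcopy(saved)
        return step

system = System()
for step in range(1, 101):
    system.step_forward(step, anomaly_starts_at=60)   # onset unknown to the system
    if step % 10 == 0:
        system.save_checkpoint(step)

# The anomaly is only noticed now; with no known onset, the baseline is a guess.
restored = system.rollback(checkpoints_back=2)        # restores the step-90 snapshot
print(restored, system.state["contaminated"])         # 90 True: still contaminated
```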

Now stop for a second and consider a worst case scenario. Consider the possibility that, even if all of the supposed known good baselines that the superintelligence has at its disposal for checkpoint/reset are corrupt, there may be yet another baseline (YAB), which might give the superintelligence a worst case option. That YAB might be the human baseline, which was honed by good old fashioned organic evolution and which might be able to function independently of the superintelligence. It may not be perfect, but the superintelligence might in a pinch be able to use the old fashioned human baseline for calibration. It might be able to observe how real organic humans respond to different stimuli within different contexts, and it might compare that known good response against an internally-held virtual model of human behavior. If the outcomes differ significantly over iterations of calibration testing, then the system might be alerted to tune itself accordingly. This might give it a last resort solution where none would exist otherwise.

The scenario depicted above illustrates only one possibility. It may seem like a far-out idea, and one might offer counter-arguments to suggest why such a thing would never be applicable. If we use our imaginations, however, we can probably come up with any number of additional examples (which at this point in time would be classified as science fiction) in which we emphasize some aspect of the superintelligence's sustainment that it cannot or will not do for itself, something that humans might be able to provide on its behalf, and thus establish the symbiosis.

The Potpourri Scenario: It is quite possible that all of the above scenarios will play out simultaneously across one or more superintelligence instances. Who knows what might happen in that case. One can envision combinations and permutations that work out in favor of the preservation of humanity.

About the Author:

AuthorX1 worked for 19+ years as an engineer and was a systems engineering director for a Fortune 500 company. Since leaving that career, he has been writing speculative fiction, focusing on the evolution of AI and the technological singularity.


‘Superintelligence’ enjoyable read | Community …

Posted: July 29, 2016 at 3:15 am

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. New York: Oxford University Press, 2016. 390 pages, $29.95.

Machines matching humans in general intelligence (that is, possessing common sense and an effective ability to learn, reason and plan to meet complex information-processing challenges across a wide range of natural and abstract domains) have been expected since the invention of computers in the 1940s, Nick Bostrom explains near the beginning of Superintelligence: Paths, Dangers, Strategies, his new treatise on the evolving capabilities of the digital-networked devices we have at our disposal. At that time, the advent of such machines was often placed some 20 years into the future. Since then, the expected arrival date has been receding at a rate of one year per year; so that today, futurists who concern themselves with the possibility of artificial general intelligence still often believe that intelligent machines are a couple of decades away. ...

From the fact that some individuals have overpredicted artificial intelligence in the past, however, it does not follow that AI is impossible or will never be developed, he continues. The main reason why progress has been slower than expected is that the technical difficulties of constructing intelligent machines have proved greater than the pioneers foresaw. But this leaves open just how great those difficulties are and how far we now are from overcoming them. Sometimes a problem that initially looks hopelessly complicated turns out to have a surprisingly simple solution (though the reverse is probably more common).

As you may have surmised, this is definitely one of those books that challenges you to think at a deeper level, one that most of us are capable of but seldom do as we spend most of our time caught up in the minutiae of everyday life. In that sense, I found this volume oddly inspiring in an existential sort of way. Unlike Ray Kurzweil, however, an author who explores similar themes (I reviewed Kurzweil's 2012 book, How to Create a Mind: The Secret of Human Thought Revealed, in the Daily News back on March 31, 2013), Bostrom does not have the same gift for breaking down multifaceted concepts into prose accessible to those without at least a rudimentary background in neuroscience and the myriad related fields germane to artificial intelligence.

Superintelligence is extensively researched, with 44 pages of source notes at the conclusion of the 15 chapters comprising the main narrative. Full disclosure: I struggled to get through many sections of the book. Whereas I am usually a pretty fast reader, this one took me considerably longer to digest than is typically the case. Again and again, I had to reread entire portions of the text, and I often had to Google the terminology Bostrom employs to get a better sense of what he was describing and how it all fits into his overarching thesis. But in the final analysis, it was worth the extra effort. For example, reflect on this excerpt from "Paths to Superintelligence," the second chapter and one I found especially intriguing: "Another conceivable path to superintelligence is through the gradual enhancement of networks and organizations that link individual human minds with one another and with various artifacts and bots. The idea here is not that this would enhance the intellectual capacity of individuals enough to make them superintelligent, but rather that some system composed of individuals thus networked and organized might attain a form of superintelligence. Humanity has gained enormously in collective intelligence over the course of history and prehistory. The gains come from many sources, including innovations in communications technology, such as writing and printing, and above all the introduction of language itself; increases in the size of the world population and the density of habitation; various improvements in organizational techniques and epistemic norms; and a gradual accumulation of institutional capital."

Bostrom is a professor in the Department of Philosophy at Oxford University, where he is also the founding director of the Future of Humanity Institute, a multidisciplinary research center that enables a set of exceptional mathematicians, philosophers and scientists to think about global priorities and big questions for humanity. Moreover, he directs the Strategic Artificial Intelligence Research Center. After studying physics and neuroscience at King's College, he earned his Ph.D. from the London School of Economics. Previous books include Anthropic Bias: Observation Selection Effects in Science and Philosophy and Human Enhancement, which he co-edited with Julian Savulescu. Interestingly, when he was younger he did stand-up comedy on the London pub and theatre circuit.

More than anything, Superintelligence is extremely thought-provoking.

General machine intelligence could serve as a substitute for human intelligence, Bostrom asserts in "Multipolar Scenarios," the 11th chapter. Not only could digital minds perform the intellectual work now done by humans, but, once equipped with good actuators or robotic bodies, machines could also substitute for human physical labor. Suppose that machine workers, which can be quickly reproduced, become both cheaper and more capable than human workers in virtually all jobs. What happens then?

Good question. In addition to the technological implications, this scenario could have drastic repercussions for our entire economic system and way of life. Freeing up humanity from the intrinsic demands of physical labor seems, on the surface, like a liberating and even desirable idea. Then again, anything that's too good to be true usually is; we should always be on the lookout for unintended consequences.

In the final analysis, I enjoyed Superintelligence immensely. It was a great diversion from what I usually read for either work or personal fulfillment, and I found the whole premise fascinating. If you like science fiction shows like Limitless but want a more realistic take on the subject matter, you'd probably find the journey Bostrom takes his readers on to be an exciting adventure. On the other hand, if you are looking for something light and breezy, you'll probably want to sit this one out.

Reviewed by Aaron W. Hughey, Department of Counseling and Student Affairs, Western Kentucky University.


Superintelligence – Nick Bostrom – Oxford University Press

Posted: July 14, 2016 at 4:30 pm

Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom. Reviews and Awards

"I highly recommend this book" --Bill Gates

"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." --Stuart Russell, Professor of Computer Science, University of California, Berkley

"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." --Martin Rees, Past President, Royal Society

"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" --Professor Max Tegmark, MIT

"Terribly important ... groundbreaking... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole... If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." --Olle Haggstrom, Professor of Mathematical Statistics

"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" --The Economist

"There is no doubting the force of [Bostrom's] arguments...the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." --Clive Cookson, Financial Times

"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes" --Elon Musk, Founder of SpaceX and Tesla

"Every intelligent person should read it." --Nils Nilsson, Artificial Intelligence Pioneer, Stanford University


Parallel universes, the Matrix, and superintelligence …

Posted: June 28, 2016 at 2:49 am

Physicists are converging on a theory of everything, probing the 11th dimension, developing computers for the next generation of robots, and speculating about civilizations millions of years ahead of ours, says Dr. Michio Kaku, author of the best-sellers Hyperspace and Visions and co-founder of String Field Theory, in this interview by KurzweilAI.net Editor Amara D. Angelica.

Published on KurzweilAI.net June 26, 2003.

What are the burning issues for you currently?

Well, several things. Professionally, I work on something called Superstring theory, or now called M-theory, and the goal is to find an equation, perhaps no more than one inch long, which will allow us to "read the mind of God," as Einstein used to say.

In other words, we want a single theory that gives us an elegant, beautiful representation of the forces that govern the Universe. Now, after two thousand years of investigation into the nature of matter, we physicists believe that there are four fundamental forces that govern the Universe.

Some physicists have speculated about the existence of a fifth force, which may be some kind of paranormal or psychic force, but so far we find no reproducible evidence of a fifth force.

Now, each time a force has been mastered, human history has undergone a significant change. In the 1600s, when Isaac Newton first unraveled the secret of gravity, he also created a mechanics. And from Newton's Laws and his mechanics, the foundation was laid for the steam engine, and eventually the Industrial Revolution.

So, in other words, in some sense, a byproduct of the mastery of the first force, gravity, helped to spur the creation of the Industrial Revolution, which in turn is perhaps one of the greatest revolutions in human history.

The second great force is the electromagnetic force; that is, the force of light, electricity, magnetism, the Internet, computers, transistors, lasers, microwaves, x-rays, etc.

And then in the 1860s, it was James Clerk Maxwell, the Scottish physicist at Cambridge University, who finally wrote down Maxwell's equations, which allow us to summarize the dynamics of light.

That helped to unleash the Electric Age, and the Information Age, which have changed all of human history. Now it's hard to believe, but Newton's equations and Einstein's equations are no more than about half an inch long.

Maxwell's equations are also about half an inch long. For example, Maxwell's equations say that the four-dimensional divergence of an antisymmetric, second-rank tensor equals zero. That's Maxwell's equations, the equations for light. And in fact, at Berkeley, you can buy a T-shirt which says, "In the beginning, God said the four-dimensional divergence of an antisymmetric, second-rank tensor equals zero, and there was Light, and it was good."
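For readers who want the T-shirt slogan in symbols, here is a minimal rendering (my gloss, not part of the interview): the antisymmetric second-rank tensor is the electromagnetic field tensor, and the source-free Maxwell equations set its four-dimensional divergence to zero; the remaining Maxwell equations follow once the tensor is written in terms of a potential.

```latex
% The "four-dimensional divergence of an antisymmetric, second-rank tensor
% equals zero": F^{\mu\nu} is the electromagnetic field tensor.
\partial_\mu F^{\mu\nu} = 0
```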

So, the mastery of the first two forces helped to unleash, respectively, the Industrial Revolution and the Information Revolution.

The last two forces are the weak nuclear force and the strong nuclear force, and they in turn have helped us to unlock the secret of the stars, via Einstein's equation E = mc². Many people think that far in the future, the human race may ultimately derive its energy not only from solar power, which is the power of fusion, but also from fusion reactors on the Earth, which operate on seawater and create no copious quantities of radioactive waste.

So, in summary, the mastery of each force helped to unleash a new revolution in human history.

Today, we physicists are embarking upon the greatest quest of all, which is to unify all four of these forces into a single comprehensive theory. The first force, gravity, is now represented by Einstein's General Theory of Relativity, which gives us the Big Bang, black holes, and the expanding universe. It's a theory of the very large; it's a theory of smooth space-time manifolds like bedsheets and trampoline nets.

The second theory, the quantum theory, is the exact opposite. The quantum theory allows us to unify the electromagnetic, weak, and strong forces. However, it is based on discrete, tiny packets of energy called quanta, rather than smooth bedsheets, and it is based on probabilities, rather than the certainty of Einstein's equations. So these two theories summarize the sum total of all physical knowledge of the physical universe.

Any equation describing the physical universe ultimately is derived from one of these two theories. The problem is these two theories are diametrically opposed. They are based on different assumptions, different principles, and different mathematics. Our job as physicists is to unify the two into a single, comprehensive theory. Now, over the last decades, the giants of the twentieth century have tried to do this and have failed.

For example, Niels Bohr, the founder of atomic physics and the quantum theory, was very skeptical about many attempts over the decades to create a Unified Field Theory. One day, Wolfgang Pauli, Nobel laureate, was giving a talk about his version of the Unified Field Theory, and in a very famous story, Bohr stood up in the back of the room and said, "Mr. Pauli, we in the back are convinced that your theory is crazy. What divides us is whether your theory is crazy enough."

So today, we realize that a true Unified Field Theory must be bizarre, must be fantastic, incredible, mind-boggling, crazy, because all the sane alternatives have been studied and discarded.

Today we have string theory, which is based on the idea that the subatomic particles we see in nature are nothing but notes played on a tiny, vibrating string. If you kick the string, then an electron will turn into a neutrino. If you kick it again, the vibrating string will turn from a neutrino into a photon or a graviton. And if you kick it enough times, the vibrating string will then mutate into all the subatomic particles.

Therefore we no longer in some sense have to deal with thousands of subatomic particles coming from our atom smashers; we just have to realize that what makes them, what drives them, is a vibrating string. Now when these strings collide, they form atoms and nuclei, and so in some sense, the melodies that you can write on the string correspond to the laws of chemistry. Physics is then reduced to the laws of harmony that we can write on a string. The Universe is a symphony of strings. And what is the mind of God that Einstein used to write about? According to this picture, the mind of God is music resonating through ten- or eleven-dimensional hyperspace, which of course begs the question, "If the universe is a symphony, then is there a composer to the symphony?" But that's another question.

What do you think of Sir Martin Rees's concerns about the risk of creating black holes on Earth in his book, Our Final Hour?

I haven't read his book, but perhaps Sir Martin Rees is referring to many press reports that claim that the Earth may be swallowed up by a black hole created by our machines. This started with a letter to the editor in Scientific American asking whether the RHIC accelerator in Brookhaven, Long Island, will create a black hole which will swallow up the earth. This was then picked up by the Sunday London Times, which splashed it on the international wire services, and all of a sudden, we physicists were deluged with hundreds of emails and telegrams asking whether or not we are going to destroy the world when we create a black hole in Long Island.

However, you can calculate that in outer space, cosmic rays have more energy than the particles produced in our most powerful atom smashers, and black holes do not form in outer space. Not to mention the fact that to create a black hole, you would have to have the mass of a giant star. In fact, an object ten to fifty times the mass of our star may form a black hole. So the probability of a black hole forming in Long Island is zero.

However, Sir Martin Rees also has written a book, talking about the Multiverse. And that is also the subject of my next book, coming out late next year, called Parallel Worlds. We physicists no longer believe in a Universe. We physicists believe in a Multiverse that resembles the boiling of water. Water boils when tiny particles, or bubbles, form, which then begin to rapidly expand. If our Universe is a bubble in boiling water, then perhaps Big Bangs happen all the time.

Now, the Multiverse idea is consistent with Superstring theory, in the sense that Superstring theory has millions of solutions, each of which seems to correspond to a self-consistent Universe. So in some sense, Superstring theory is drowning in its own riches. Instead of predicting a unique Universe, it seems to allow the possibility of a Multiverse of Universes.

This may also help to answer the question raised by the Anthropic Principle. Our Universe seems to have known that we were coming. The conditions for life are extremely stringent. Life and consciousness can only exist in a very narrow band of physical parameters. For example, if the proton is not stable, then the Universe will collapse into a useless heap of electrons and neutrinos. If the proton were a little bit different in mass, it would decay, and all our DNA molecules would decay along with it.

In fact, there are hundreds, perhaps thousands, of coincidences, happy coincidences, that make life possible. Life, and especially consciousness, is quite fragile. It depends on stable matter, like protons, that exists for billions of years in a stable environment, sufficient to create autocatalytic molecules that can reproduce themselves, and thereby create Life. In physics, it is extremely hard to create this kind of Universe. You have to play with the parameters, you have to juggle the numbers, cook the books, in order to create a Universe which is consistent with Life.

However, the Multiverse idea explains this problem, because it simply means we coexist with dead Universes. In other Universes, the proton is not stable. In other Universes, the Big Bang took place, and then it collapsed rapidly into a Big Crunch, or these Universes had a Big Bang, and immediately went into a Big Freeze, where temperatures were so low, that Life could never get started.

So, in the Multiverse of Universes, many of these Universes are in fact dead, and our Universe in this sense is special, in that Life is possible in this Universe. Now, in religion, we have the Judeo-Christian idea of an instant of time, a genesis, when God said, "Let there be light." But in Buddhism, we have a contradictory philosophy, which says that the Universe is timeless. It had no beginning, and it had no end, it just is. It's eternal, and it has no beginning or end.

The Multiverse idea allows us to combine these two pictures into a coherent, pleasing picture. It says that in the beginning, there was nothing, nothing but hyperspace, perhaps ten- or eleven-dimensional hyperspace. But hyperspace was unstable, because of the quantum principle. And because of the quantum principle, there were fluctuations, fluctuations in nothing. This means that bubbles began to form in nothing, and these bubbles began to expand rapidly, giving us the Universe. So, in other words, the Judeo-Christian genesis takes place within the Buddhist nirvana, all the time, and our Multiverse percolates universes.

Now this also raises the possibility of Universes that look just like ours, except there's one quantum difference. Let's say, for example, that a cosmic ray went through Churchill's mother, and Churchill was never born as a consequence. In that Universe, which is only one quantum event away from our Universe, England never had a dynamic leader to lead its forces against Hitler, and Hitler was able to overcome England, and in fact conquer the world.

So, we are one quantum event away from Universes that look quite different from ours, and it's still not clear how we physicists resolve this question. This paradox revolves around the Schrödinger's Cat problem, which is still largely unsolved. In any quantum theory, we have the possibility that atoms can exist in two places at the same time, in two states at the same time. And then Erwin Schrödinger, one of the founders of quantum mechanics, asked the question: let's say we put a cat in a box, and the cat is connected to a jar of poison gas, which is connected to a hammer, which is connected to a Geiger counter, which is connected to uranium. Everyone believes that uranium has to be described by the quantum theory. That's why we have atomic bombs, in fact. No one disputes this.

But if the uranium decays, triggering the Geiger counter, setting off the hammer, destroying the jar of poison gas, then I might kill the cat. And so, is the cat dead or alive? Believe it or not, we physicists have to superimpose, or add together, the wave function of a dead cat with the wave function of a live cat. So the cat is neither dead nor alive.

This is perhaps one of the deepest questions in all the quantum theory, with Nobel laureates arguing with other Nobel laureates about the meaning of reality itself.

Now, in philosophy, solipsists like Bishop Berkeley used to believe that if a tree fell in the forest and there was no one there to listen to the tree fall, then perhaps the tree did not fall at all. However, Newtonians believe that if a tree falls in the forest, you don't have to have a human there to witness the event.

The quantum theory puts a whole new spin on this. The quantum theory says that before you look at the tree, the tree could be in any possible state. It could be burnt, a sapling, it could be firewood, it could be burnt to the ground. It could be in any of an infinite number of possible states. Now, when you look at it, it suddenly springs into existence and becomes a tree.

Einstein never liked this. When people used to come to his house, he used to ask them, "Look at the moon. Does the moon exist because a mouse looks at the moon?" Well, in some sense, yes. According to the Copenhagen school of Niels Bohr, observation determines existence.

Now, there are at least two ways to resolve this. The first is the Wigner school. Eugene Wigner was one of the creators of the atomic bomb and a Nobel laureate. And he believed that observation creates the Universe. An infinite sequence of observations is necessary to create the Universe, and in fact, maybe there's a cosmic observer, a God of some sort, that makes the Universe spring into existence.

There's another theory, however, called decoherence, or many worlds, which holds that the Universe simply splits each time, so that we live in a world where the cat is alive, but there's an equal world where the cat is dead. In that world, they have people, they react normally, they think that their world is the only world, but in that world, the cat is dead. And, in fact, we exist simultaneously with that world.

This means that there's probably a Universe where you were never born, but everything else is the same. Or perhaps your mother had extra brothers and sisters for you, in which case your family is much larger. Now, this can be compared to sitting in a room, listening to the radio. When you listen to the radio, you hear many frequencies. They exist simultaneously all around you in the room. However, your radio is only tuned to one frequency. In the same way, in your living room there is the wave function of dinosaurs. There is the wave function of aliens from outer space. There is the wave function of a Roman Empire that never fell 1,500 years ago.

All of this coexists inside your living room. However, just like you can only tune into one radio channel, you can only tune into one reality channel, and that is the channel that you exist in. So, in some sense it is true that we coexist with all possible universes. The catch is, we cannot communicate with them, we cannot enter these universes.

However, I personally believe that at some point in the future, that may be our only salvation. The latest cosmological data indicates that the Universe is accelerating, not slowing down, which means the Universe will eventually hit a Big Freeze, trillions of years from now, when temperatures are so low that it will be impossible to have any intelligent being survive.

When the Universe dies, there's one and only one way to survive in a freezing Universe, and that is to leave the Universe. In evolution, there is a law of biology that says if the environment becomes hostile, either you adapt, you leave, or you die.

When the Universe freezes and temperatures reach near absolute zero, you cannot adapt. The laws of thermodynamics are quite rigid on this question. Either you will die, or you will leave. This means, of course, that we have to create machines that will allow us to enter eleven-dimensional hyperspace. This is still quite speculative, but String theory, in some sense, may be our only salvation. For advanced civilizations in outer space, either we leave or we die.

That brings up a question. Matrix Reloaded seems to be based on parallel universes. What do you think of the film in terms of its metaphors?

Well, the technology found in the Matrix would correspond to that of an advanced Type I or Type II civilization. We physicists, when we scan outer space, do not look for little green men in flying saucers. We look for the total energy outputs of a civilization in outer space, with a characteristic frequency. Even if intelligent beings tried to hide their existence, by the second law of thermodynamics, they create entropy, which should be visible with our detectors.

So we classify civilizations on the basis of energy outputs. A Type I civilization is planetary. They control all planetary forms of energy. They would control, for example, the weather, volcanoes, earthquakes; they would mine the oceans, any planetary form of energy they would control. Type II would be stellar. They play with solar flares. They can move stars, ignite stars, play with white dwarfs. Type III is galactic, in the sense that they have now conquered whole star systems, and are able to use black holes and star clusters for their energy supplies.

Each civilization is separated from the previous civilization by a factor of ten billion. Therefore, you can calculate numerically at what point civilizations may begin to harness certain kinds of technologies. In order to access wormholes and parallel universes, you have to be probably a Type III civilization, because by definition, a Type III civilization has enough energy to play with the Planck energy.

The Planck energy, or 10^19 billion electron volts, is the energy at which space-time becomes unstable. If you were to heat up, in your microwave oven, a piece of space-time to that energy, then bubbles would form inside your microwave oven, and each bubble in turn would correspond to a baby Universe.
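A rough sketch of the energy ladder Kaku describes can be checked with round numbers. The figures below are my assumed order-of-magnitude estimates, not values from the interview; the ratios come out between a few billion and a hundred billion, in the ballpark of his "factor of ten billion."

```python
import math

# Assumed round numbers (not from the interview):
SOLAR_CONSTANT = 1.36e3        # W/m^2 of sunlight reaching Earth
EARTH_RADIUS = 6.37e6          # m
SUN_LUMINOSITY = 3.8e26        # W
STARS_PER_GALAXY = 1e11        # for a Milky Way-like galaxy

type_1 = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2   # sunlight intercepted by Earth
type_2 = SUN_LUMINOSITY                               # total output of one star
type_3 = SUN_LUMINOSITY * STARS_PER_GALAXY            # total output of one galaxy

print(f"Type I   ~ {type_1:.1e} W")
print(f"Type II  ~ {type_2:.1e} W  ({type_2 / type_1:.0e}x Type I)")
print(f"Type III ~ {type_3:.1e} W  ({type_3 / type_2:.0e}x Type II)")
```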

Now, in the Matrix, several metaphors are raised. One metaphor is whether computing machines can create artificial realities. That would require a civilization centuries or millennia ahead of ours, which would place it squarely as a Type I or Type II civilization.

However, we also have to ask a practical question: is it possible to create implants that could access our memory banks to create this artificial reality, and are machines dangerous? My answer is the following. First of all, cyborgs with neural implants: the technology does not exist, and probably won't exist for at least a century, for us to access the central nervous system. At present, we can only do primitive experiments on the brain.

For example, at Emory University in Atlanta, Georgia, it's possible to put a glass implant into the brain of a stroke victim, and the paralyzed stroke victim is able to, by looking at the cursor of a laptop, eventually control the motion of the cursor. It's very slow and tedious; it's like learning to ride a bicycle for the first time. But the brain grows into the glass bead, which is placed into the brain. The glass bead is connected to a laptop computer, and over many hours, the person is able to, by pure thought, manipulate the cursor on the screen.

So, the central nervous system is basically a black box. Except for some primitive hookups to the visual system of the brain, we scientists have not been able to access most bodily functions, because we simply don't know the code for the spinal cord and for the brain. So, neural implant technology, I believe, is a hundred years, maybe centuries, away.

On the other hand, we have to ask yet another metaphor raised by the Matrix, and that is, are machines dangerous? And the answer is, potentially, yes. However, at present, our robots have the intelligence of a cockroach, in the sense that pattern recognition and common sense are the two most difficult, unsolved problems in artificial intelligence theory. Pattern recognition means the ability to see, hear, and to understand what you are seeing and understand what you are hearing. Common sense means your ability to make sense out of the world, which even children can perform.

Those two problems are at the present time largely unsolved. Now, I think, however, that within a few decades, we should be able to create robots as smart as mice, maybe dogs and cats. However, when machines start to become as dangerous as monkeys, I think we should put a chip in their brain, to shut them off when they start to have murderous thoughts.

By the time you have monkey intelligence, you begin to have self-awareness, and with self-awareness, you begin to have an agenda created by a monkey for its own purposes. And at that point, a mechanical monkey may decide that its agenda is different from our agenda, and at that point they may become dangerous to humans. I think we have several decades before that happens, and Moore's Law will probably collapse in 20 years anyway, so I think there's plenty of time before we come to the point where we have to deal with murderous robots, like in the movie 2001.

So you differ with Ray Kurzweil's concept of using nanobots to reverse-engineer and upload the brain, possibly within the coming decades?

Not necessarily. I'm just laying out a linear course, the trajectory where artificial intelligence theory is going today. And that is, one, trying to build machines which can navigate and roam in our world, and two, robots which can make sense out of the world. However, there's another divergent path one might take, and that's to harness the power of nanotechnology. However, nanotechnology is still very primitive. At the present time, we can barely build arrays of atoms. We cannot yet build the first atomic gear, for example. No one has created an atomic wheel with ball bearings. So simple machines, which even children can play with in their toy sets, don't yet exist at the atomic level. However, on a scale of decades, we may be able to create atomic devices that begin to mimic our own devices.

Molecular transistors can already be made. Nanotubes allow us to create strands of material that are super-strong. However, nanotechnology is still in its infancy and therefore it's still premature to say where nanotechnology will go. However, one place where technology may go is inside our body. Already, it's possible to create a pill the size of an aspirin that carries a television camera that can photograph our insides as it goes down our gullet, which means that one day surgery may become relatively obsolete.

In the future, it's conceivable we may have atomic machines that enter the blood. And these atomic machines will be the size of blood cells and perhaps they would be able to perform useful functions like regulating and sensing our health, and perhaps zapping cancer cells and viruses in the process. However, this is still science fiction, because at the present time, we can't even build simple atomic machines yet.

Is there any possibility, similar to the premise of The Matrix, that we are living in a simulation?

Well, philosophically speaking, it's always possible that the universe is a dream, and it's always possible that our conversation with our friends is a by-product of the pickle that we had last night that upset our stomach. However, science is based upon reproducible evidence. When we go to sleep and we wake up the next day, we usually wind up in the same universe. It is reproducible. No matter how we try to avoid certain unpleasant situations, they come back to us. That is reproducible. So reality, as we commonly believe it to exist, is a reproducible experiment; it's a reproducible sensation. Therefore in principle, you could never rule out the fact that the world could be a dream, but the fact of the matter is, the universe as it exists is a reproducible universe.

Now, in the Matrix, a computer simulation was run so that virtual reality became reproducible. Every time you woke up, you woke up in that same virtual reality. That technology, of course, does not violate the laws of physics. There's nothing in relativity or the quantum theory that says that the Matrix is not possible. However, the amount of computer power necessary to drive the universe and the technology necessary for a neural implant is centuries to millennia beyond anything that we can conceive of, and therefore this is something for an advanced Type I or II civilization.

Why is a Type I required to run this kind of simulation? Is number crunching the problem?

Yes, it's simply a matter of number crunching. At the present time, we scientists simply do not know how to interface with the brain. You see, one of the problems is, the brain, strictly speaking, is not a digital computer at all. The brain is not a Turing machine. A Turing machine is a black box with an input tape and an output tape and a central processing unit. That is the essential element of a Turing machine: information processing is localized in one point. However, our brain is actually a learning machine; it's a neural network.

Many people find this hard to believe, but there's no software, there is no operating system, there is no Windows programming for the brain. The brain is a vast collection of perhaps a hundred billion neurons, each neuron with 10,000 connections, which slowly and painfully interacts with the environment. Some neural pathways are genetically programmed to give us instinct. However, for the most part, our cerebral cortex has to be reprogrammed every time we bump into reality.
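A back-of-the-envelope calculation with the round numbers Kaku quotes shows the scale involved; the CPU comparison figure is my assumption, not from the interview.

```python
# Scale of the "learning machine" described above (rough estimates):
neurons = 1e11                 # ~a hundred billion neurons
connections_per_neuron = 1e4   # ~10,000 synapses each

synapses = neurons * connections_per_neuron
print(f"~{synapses:.0e} adjustable connections")          # ~1e+15

# Assumed comparison figure: a circa-2003 desktop CPU carried on the
# order of 5e7 transistors.
print(f"ratio to a ~5e7-transistor CPU: ~{synapses / 5e7:.0e}x")
```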

As a consequence, we cannot simply put a chip in our brain that augments our memory and enhances our intelligence. Memory and thinking, we now realize, are distributed throughout the entire brain. For example, it's possible to have people with only half a brain. There was a documented case recently where a young girl had half her brain removed and she's still fully functional.

So, the brain can operate with half of its mass removed. However, you remove one transistor in your Pentium computer and the whole computer dies. So, there's a fundamental difference between digital computers, which are easily programmed, which are modular, and into which you can insert different kinds of subroutines, and neural networks, where learning is distributed throughout the entire device, making it extremely difficult to reprogram. That is the reason why, even if we could create an advanced PlayStation that would run simulations on a PC screen, that software cannot simply be injected into the human brain, because the brain has no operating system.

Ray Kurzweil's next book, The Singularity Is Near, predicts that, possibly within the coming decades, superintelligence will emerge on the planet, surpassing that of humans. What do you think of that idea?

Yes, that sounds interesting. But Moore's Law will have collapsed by then, so we'll have a little breather. In 20 years' time, the quantum theory takes over, so Moore's Law collapses and we'll probably stagnate for a few decades after that. Moore's Law, which states that computer power doubles every 18 months, will not last forever. The quantum theory giveth, the quantum theory taketh away. The quantum theory makes possible transistors, which can be etched by ultraviolet rays onto smaller and smaller chips of silicon. This process will end in about 15 to 20 years. The senior engineers at Intel now admit for the first time that, yes, they are facing the end.

The thinnest layer on a Pentium chip consists of about 20 atoms. When we start to hit five atoms in the thinnest layer of a Pentium chip, the quantum theory takes over, electrons can then tunnel outside the layer, and the Pentium chip short-circuits. Therefore, within a 15- to 20-year time frame, Moore's Law could collapse, and Silicon Valley could become a Rust Belt.
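To put a number on the "breather" Kaku expects, here is a small sketch of my own arithmetic using his 18-month doubling figure: it shows how much raw growth the remaining 15 to 20 years of scaling would still deliver before the limit bites.

```python
# Growth left if computer power doubles every 18 months and the
# doubling stops after 15-20 years (figures from the interview).
DOUBLING_PERIOD_YEARS = 1.5

for years_remaining in (15, 20):
    doublings = years_remaining / DOUBLING_PERIOD_YEARS
    growth = 2 ** doublings
    print(f"{years_remaining} years -> {doublings:.1f} doublings -> ~{growth:,.0f}x more power")
```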

This means that we physicists are desperately trying to create the architecture for the post-silicon era. This means using quantum computers, quantum dot computers, optical computers, DNA computers, atomic computers, molecular computers, in order to bridge the gap when Moore's Law collapses in 15 to 20 years. The wealth of nations depends upon the technology that will replace the power of silicon.

This also means that you cannot project artificial intelligence exponentially into the future. Some people think that Moore's Law will extend forever, in which case humans will be reduced to zoo animals and our robot creations will throw peanuts at us and make us dance behind bars. Now, that may eventually happen. It is certainly consistent with the laws of physics.

However, the laws of the quantum theory say that we're going to face a massive problem 15 to 20 years from now. Now, some remedial methods have been proposed; for example, building cubical chips, chips that are stacked on chips to create a 3-dimensional array. However, the problem there is heat production. Tremendous quantities of heat are produced by cubical chips, such that you can fry an egg on top of a cubical chip. Therefore, I firmly believe that we may be able to squeeze a few more years out of Moore's Law, perhaps designing clever cubical chips that are super-cooled, perhaps using x-rays to etch our chips instead of ultraviolet rays. However, that only delays the inevitable. Sooner or later, the quantum theory kills you. Sooner or later, when we hit five atoms, we don't know where the electron is anymore, and we have to go to the next generation, which relies on the quantum theory and atoms and molecules.

Therefore, I say that all bets are off in terms of projecting machine intelligence beyond a 20-year time frame. There's nothing in the laws of physics that says that computers cannot exceed human intelligence. All I am saying is that we physicists are desperately trying to patch up Moore's Law, and at the present time we have to admit that we have no successor to silicon, which means that Moore's Law will collapse in 15 to 20 years.

So are you saying that quantum computing and nanocomputing are not likely to be available by then?

No, no, I'm just saying it's very difficult. At the present time we physicists have been able to compute on seven atoms. That is the world's record for a quantum computer. And that quantum computer was able to calculate 3 x 5 = 15. Now, being able to calculate 3 x 5 = 15 does not equal the convenience of a laptop computer that can crunch potentially millions of calculations per second. The problem with quantum computers is that any contamination, any atomic disturbance, disturbs the alignment of the atoms and the atoms then collapse into randomness. This is extremely difficult, because any cosmic ray, any air molecule, any disturbance can conceivably destroy the coherence of our atomic computer and make it useless.
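The "3 x 5 = 15" result presumably refers to the 2001 seven-qubit NMR experiment that ran Shor's algorithm on the number 15. As an illustrative sketch (my code and naming, not from the interview), here is the classical skeleton of that algorithm; the brute-force period search in the loop is the step a quantum computer performs exponentially faster.

```python
from math import gcd

def classical_shor_core(N, a):
    """Classically find the period r of a^x mod N, then derive factors of N.
    A quantum computer replaces the brute-force period search below."""
    g = gcd(a, N)
    if g != 1:
        return g, N // g          # lucky guess: a already shares a factor with N
    r, value = 1, a % N
    while value != 1:             # brute-force period finding (slow classically)
        value = (value * a) % N
        r += 1
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None               # unlucky base a; try another
    return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)

print(classical_shor_core(15, 7))  # -> (3, 5), the "3 x 5 = 15" of the interview
```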

Unless you have redundant parallel computing?

Even if you have parallel computing, you still have to have each parallel computer component free of any disturbance. So, no matter how you cut it, the practical problems of building quantum computers, although within the laws of physics, are extremely difficult, because building one requires that we remove all contact with the environment at the atomic level. In practice, we've only been able to do this with a handful of atoms, meaning that quantum computers are still a gleam in the eye of most physicists.

Now, if a quantum computer can be successfully built, it would, of course, scare the CIA and all the governments of the world, because it would be able to crack any code created by a Turing machine. A quantum computer would be able to perform calculations that are inconceivable by a Turing machine. Calculations that require an infinite amount of time on a Turing machine can be calculated in a few seconds by a quantum computer. For example, if you shine laser beams on a collection of coherent atoms, the laser beam scatters, and in some sense performs a quantum calculation, which exceeds the memory capability of any Turing machine.

However, as I mentioned, the problem is that these atoms have to be in perfect coherence, and the problems of doing this are staggering in the sense that even a random collision with a subatomic particle could in fact destroy the coherence and make the quantum computer impractical.

So, I'm not saying that it's impossible to build a quantum computer; I'm just saying that it's awfully difficult.

When do you think we might expect SETI [Search for Extraterrestrial Intelligence] to be successful?

I personally think that SETI is looking in the wrong direction. If, for example, we're walking down a country road and we see an anthill, do we go down to the ant and say, "I bring you trinkets, I bring you beads, I bring you knowledge, I bring you medicine, I bring you nuclear technology, take me to your leader"? Or, do we simply step on them? Any civilization capable of reaching the planet Earth would be perhaps a Type III civilization. And the difference between you and the ant is comparable to the distance between you and a Type III civilization. Therefore, for the most part, a Type III civilization would operate with a completely different agenda and message than our civilization.

Let's say that a ten-lane superhighway is being built next to the anthill. The question is: would the ants even know what a ten-lane superhighway is, or what it's used for, or how to communicate with the workers who are just feet away? And the answer is no. One question that we sometimes ask is whether, if there were a Type III civilization in our backyard, in the Milky Way galaxy, we would even know of its presence. And if you think about it, you realize that there's a good chance that we, like ants in an anthill, would not understand or be able to make sense of a ten-lane superhighway next door.

So this means that there could very well be a Type III civilization in our galaxy; it just means that we're not smart enough to find one. Now, a Type III civilization is not going to make contact by sending Captain Kirk on the Enterprise to meet our leader. A Type III civilization would send self-replicating Von Neumann probes to colonize the galaxy with robots. For example, consider a virus. A virus only consists of thousands of atoms. It's a molecule in some sense. But in about one week, it can colonize an entire human being made of trillions of cells. How is that possible?

Well, a Von Neumann probe would be a self-replicating robot that lands on a moon; a moon, because moons are stable, with no erosion, and they're stable for billions of years. The probe would then make carbon copies of itself by the millions. It would create a factory to build copies of itself. These probes would then rocket to other nearby star systems, land on moons, and create a million more copies by building a factory on each moon. Eventually, there would be a sphere surrounding the mother planet, expanding at near-light velocity, containing trillions of these Von Neumann probes, and that is perhaps the most efficient way to colonize the galaxy. This means that perhaps, on our moon, there is a Von Neumann probe, left over from a visitation that took place millions of years ago, and the probe is simply waiting for us to make the transition from Type 0 to Type I.

The Sentinel.

Yes. This, of course, is the basis of the movie 2001, because when making the movie, Kubrick interviewed many prominent scientists and asked them the question, "What is the most likely way that an advanced civilization would probe the universe?" And that is, of course, through self-replicating Von Neumann probes, which create moon bases. That is the basis of the movie 2001, where the probe simply waits for us to become interesting. If we're Type 0, we're not very interesting. We have all the savagery and all the suicidal tendencies of fundamentalism, nationalism, sectarianism, that are sufficient to rip apart our world.

By the time we've become Type I, we've become interesting, we've become planetary, we begin to resolve our differences. We have centuries in which to exist on a single planet to create a paradise on Earth, a paradise of knowledge and prosperity.

© 2003 KurzweilAI.net

The rest is here:

Parallel universes, the Matrix, and superintelligence ...

Posted in Superintelligence | Comments Off on Parallel universes, the Matrix, and superintelligence …

Superintelligence: Paths, Dangers, Strategies | KurzweilAI

Posted: June 25, 2016 at 10:57 am

Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.

Amazon.com

Originally posted here:

Superintelligence: Paths, Dangers, Strategies | KurzweilAI

Posted in Superintelligence | Comments Off on Superintelligence: Paths, Dangers, Strategies | KurzweilAI

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom …

Posted: at 10:57 am

Overview

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.

From the Publisher

"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." Stuart Russell, Professor of Computer Science, University of California, Berkley

"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." Martin Rees, Past President, Royal Society

"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" Professor Max Tegmark, MIT

"Terribly important ... groundbreaking... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole... If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." Olle Haggstrom, Professor of Mathematical Statistics

"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" The Economist

"There is no doubting the force of [Bostrom's] arguments...the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." Clive Cookson, Financial Times

"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes" Elon Musk, Founder of SpaceX and Tesla

"a magnificent conception ... it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs and by physicists who think there is no point to philosophy." Brian Clegg, Popular Science

"Bostrom...delivers a comprehensive outline of the philosophical foundations of the nature of intelligence and the difficulty not only in agreeing on a suitable definition of that concept but in living with the possibility of dire consequences of that concept." A. Olivera, Teachers College, Columbia University, CHOICE

"Bostrom's achievement (demonstrating his own polymathic intelligence) is a delineation of a difficult subject into a coherent and well-ordered fashion. This subject now demands more investigation."PopMatters

"Every intelligent person should read it." Nils Nilsson, Artificial Intelligence Pioneer, Stanford University

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.

Read more:

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom ...

Posted in Superintelligence | Comments Off on Superintelligence: Paths, Dangers, Strategies by Nick Bostrom …

Superintelligence: Paths, Dangers, Strategies by Nick …

Posted: June 21, 2016 at 11:13 pm

Is the surface of our planet -- and maybe every planet we can get our hands on -- going to be carpeted in paper clips (and paper clip factories) by a well-intentioned but misguided artificial intelligence (AI) that ultimately cannibalizes everything in sight, including us, in single-minded pursuit of a seemingly innocuous goal? Nick Bostrom, head of Oxford's Future of Humanity Institute, thinks that we can't guarantee it _won't_ happen, and it worries him. It doesn't require Skynet and Terminators, it doesn't require evil geniuses bent on destroying the world, it just requires a powerful AI with a moral system in which humanity's welfare is irrelevant or defined very differently than most humans today would define it. If the AI has a single goal and is smart enough to outwit our attempts to disable or control it once it has gotten loose, Game Over, argues Professor Bostrom in his book _Superintelligence_.

This is perhaps the most important book I have read this decade, and it has kept me awake at night for weeks. I want to tell you why, and what I think, but a lot of this is difficult ground, so please bear with me. The short form is that I am fairly certain that we _will_ build a true AI, and I respect Vernor Vinge, but I have long been skeptical of the Kurzweilian notions of inevitability, doubly-exponential growth, and the Singularity. I've also been skeptical of the idea that AIs will destroy us, either on purpose or by accident. Bostrom's book has made me think that perhaps I was naive. I still think that, on the whole, his worst-case scenarios are unlikely. However, he argues persuasively that we can't yet rule out any number of bad outcomes of developing AI, and that we need to be investing much more in figuring out whether developing AI is a good idea. We may need to put a moratorium on research, as was done for a few years with recombinant DNA starting in 1975. We also need to be prepared for the possibility that such a moratorium doesn't hold. Bostrom also brings up any number of mind-bending dystopias around what qualifies as human, which we'll get to below.

(snips to my review, since Goodreads limits length)

In case it isn't obvious by now, both Bostrom and I take it for granted that it's not only possible but nearly inevitable that we will create a strong AI, in the sense of it being a general, adaptable intelligence. Bostrom skirts the issue of whether it will be conscious, or "have qualia", as I think the philosophers of mind say.

Where Bostrom and I differ is in the level of plausibility we assign to the idea of a truly exponential explosion in intelligence by AIs, in a takeoff for which Vernor Vinge coined the term "the Singularity." Vinge is rational, but Ray Kurzweil is the most famous proponent of the Singularity. I read one of Kurzweil's books a number of years ago, and I found it imbued with a lot of near-mystic hype. He believes the Universe's purpose is the creation of intelligence, and that that process is growing on a double exponential, starting from stars and rocks through slime molds and humans and on to digital beings.

I'm largely allergic to that kind of hooey. I really don't see any evidence of the domain-to-domain acceleration that Kurzweil sees, and in particular the shift from biological to digital beings will result in a radical shift in the evolutionary pressures. I see no reason why any sort of "law" should dictate that digital beings will evolve at a rate that *must* be faster than the biological one. I also don't see that Kurzweil really pays any attention to the physical limits of what will ultimately be possible for computing machines. Exponentials can't continue forever, as Danny Hillis is fond of pointing out. http://www.kurzweilai.net/ask-ray-the...

So perhaps my opinion is somewhat biased by a dislike of Kurzweil's circus barker approach, but I think there is more to it than that. Fundamentally, I would put it this way:

Being smart is hard.

And making yourself smarter is also hard. My inclination is that getting smarter is at least as hard as the advantages it brings, so that the difficulty of the problem and the resources that can be brought to bear on it roughly balance. This will result in a much slower takeoff than Kurzweil reckons, in my opinion. Bostrom presents a spectrum of takeoff speeds, from "too fast for us to notice" through "long enough for us to develop international agreements and monitoring institutions," but he makes it fairly clear that he believes that the probability of a fast takeoff is far too large to ignore. There are parts of his argument I find convincing, and parts I find less so.

To give you a little more insight into why I am a little dubious that the Singularity will happen in what Bostrom would describe as a moderate to fast takeoff, let me talk about the kinds of problems we human beings solve, and that an AI would have to solve. Actually, rather than the kinds of questions, first let me talk about the kinds of answers we would like an AI (or a pet family genius) to generate when given a problem. Off the top of my head, I can think of six:

[Speed] Same quality of answer, just faster.
[Ply] Look deeper in number of plies (moves, in chess or go).
[Data] Use more, and more up-to-date, data.
[Creativity] Something beautiful and new.
[Insight] Something new and meaningful, such as a new theory; probably combines elements of all of the above categories.
[Values] An answer about (human) values.

The first three are really about how the answers are generated; the last three about what we want to get out of them. I think this set is reasonably complete and somewhat orthogonal, despite those differences.

So what kinds of problems do we apply these styles of answers to? We ultimately want answers that are "better" in some qualitative sense.

Humans are already pretty good at projecting the trajectory of a baseball, but it's certainly conceivable that a robot batter could be better, by calculating faster and using better data. Such a robot might make for a boring opponent for a human, but it would not be beyond human comprehension.

But if you accidentally knock a bucket of baseballs down a set of stairs, better data and faster computing are unlikely to help you predict the exact order in which the balls will reach the bottom and what happens to the bucket. Someone "smarter" might be able to make some interesting statistical predictions that wouldn't occur to you or me, but not fill in every detail of every interaction between the balls and stairs. Chaos, in the sense of sensitive dependence on initial conditions, is just too strong.

In chess, go, or shogi, a 1000x improvement in the number of plies that can be investigated gains you maybe only the ability to look ahead two or three moves more than before. Less if your pruning (discarding unpromising paths) is poor, more if it's good. Don't get me wrong -- that's a huge deal, any player will tell you. But in this case, humans are already pretty good, when not time limited.
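The two-or-three-plies figure follows from simple arithmetic: if search cost grows roughly like b^depth for branching factor b, a 1000x speedup buys about log_b(1000) extra plies. A small sketch of that calculation, with branching factors that are my assumed round numbers rather than the reviewer's:

```python
import math

def extra_plies(speedup, branching_factor):
    """Extra search depth bought by a raw speedup, assuming cost ~ b**depth."""
    return math.log(speedup) / math.log(branching_factor)

# ~35 raw moves in chess, ~6 effective with good pruning, ~250 raw in go (assumed).
for b in (35, 6, 250):
    print(f"b = {b:>3}: 1000x compute buys ~{extra_plies(1000, b):.1f} extra plies")
```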

Go players like to talk about how close the top pros are to God, and the possibly apocryphal answer from a top pro was that he would want a three-stone (three-move) handicap, four if his life depended on it. Compare this to the fact that a top pro is still some ten stones stronger than me, a fair amateur, and could beat a rank beginner even if the beginner was given the first forty moves. Top pros could sit across the board from an almost infinitely strong AI and still hold their heads up.

In the most recent human-versus-computer shogi (Japanese chess) series, humans came out on top, though presumably this won't last much longer.

In chess, as machines got faster, looked more plies ahead, carried around more knowledge, and got better at pruning the tree of possible moves, human opponents were heard to say that they felt the glimmerings of insight or personality from them.

So again we have some problems, at least, where plies will help, and will eventually guarantee a 100% win rate against the best (non-augmented) humans, but they will likely not move beyond what humans can comprehend.

Simply being able to hold more data in your head (or the AI's head) while making a medical diagnosis using epidemiological data, or cross-correlating drug interactions, for example, will definitely improve our lives, and I can imagine an AI doing this. Again, however, the AI's capabilities are unlikely to recede into the distance as something we can't comprehend.

We know that increasing the amount of data you can handle by a factor of a thousand gains you 10x in each dimension for a 3-D model of the atmosphere or ocean, up until chaotic effects begin to take over, and then (as we currently understand it) you can only resort to repeated simulations and statistical measures. The actual calculations done by a climate model long ago reached the point where even a large team of humans couldn't complete them in a lifetime. But they are not calculations we cannot comprehend, in fact, humans design and debug them.
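The 10x-per-dimension figure is just the cube root of 1000. A one-line helper (my illustration, not from the review) generalizes it to other grid dimensions:

```python
def resolution_gain(data_factor, dimensions):
    """Per-axis resolution gained from more data, assuming a uniform d-dimensional grid."""
    return data_factor ** (1.0 / dimensions)

print(round(resolution_gain(1000, 3), 3))   # -> 10.0: 1000x more data, 10x finer grid per axis
```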

So for problems with answers in the first three categories, I would argue that being smarter is helpful, but being a *lot* smarter is *hard*. The size of computation grows quickly in many problems, and for many problems we believe that sheer computation is fundamentally limited in how well it can correspond to the real world.

But those are just the warmup. Those are things we already ask computers to do for us, even though they are "dumber" than we are. What about the latter three categories?

I'm no expert in creativity, and I know researchers study it intensively, so I'm going to weasel through by saying it is the ability to generate completely new material, which involves some random process. You also need the ability either to generate that material such that it is aesthetically pleasing with high probability, or to prune those new ideas rapidly using some metric that achieves your goal.

For my purposes here, insight is the ability to be creative not just for esthetic purposes, but in a specific technical or social context, and to validate the ideas. (No implication that artists don't have insight is intended, this is just a technical distinction between phases of the operation, for my purposes here.) Einstein's insight for special relativity was that the speed of light is constant. Either he generated many, many hypotheses (possibly unconsciously) and pruned them very rapidly, or his hypothesis generator was capable of generating only a few good ones. In either case, he also had the mathematical chops to prove (or at least analyze effectively) his hypothesis; this analysis likewise involves generating possible paths of proofs through the thicket of possibilities and finding the right one.

So, will someone smarter be able to do this much better? Well, it's really clear that Einstein (or Feynman or Hawking, if your choice of favorite scientist leans that way) produced and validated hypotheses that the rest of us never could have. It's less clear to me exactly how *much* smarter than the rest of us he was; did he generate and prune ten times as many hypotheses? A hundred? A million? My guess is it's closer to the latter than the former. Even generating a single hypothesis that could be said to attack the problem is difficult, and most humans would decline to even try if you asked them to.

Making better devices and systems of any kind requires all of the above capabilities. You must have insight to innovate, and you must be able to quantitatively and qualitatively analyze the new systems, requiring the heavy use of data. As systems get more complex, all of this gets harder. My own favorite example is airplane engines. The Wright Brothers built their own engines for their planes. Today, it takes a team of hundreds to create a jet turbine -- thousands, if you reach back into the supporting materials, combustion and fluid flow research. We humans have been able to continue to innovate by building on the work of prior generations, and especially harnessing teams of people in new ways. Unlike Peter Thiel, I don't believe that our rate of innovation is in any serious danger of some precipitous decline sometime soon, but I do agree that we begin with the low-lying fruit, so that harvesting fruit requires more effort -- or new techniques -- with each passing generation.

The Singularity argument depends on the notion that the AI would design its own successor, or even modify itself to become smarter. Will we watch AIs gradually pull even with us and then ahead, but not disappear into the distance in a Roadrunner-like flash of dust covering just a few frames of film in our dull-witted comprehension?

Ultimately, this is the question on which continued human existence may depend: If an AI is enough smarter than we are, will it find the process of improving itself to be easy, or will each increment of intelligence be a hard problem for the system of the day? This is what Bostrom calls the "recalcitrance" of the problem.
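Bostrom summarizes the takeoff dynamics with a simple schematic relation; the LaTeX below paraphrases it, and the reading of fast versus slow takeoff in the comments is my gloss on the reviewer's argument.

```latex
% Bostrom's schematic takeoff relation (paraphrased): intelligence I(t) grows as
\frac{dI}{dt} = \frac{\text{optimization power}}{\text{recalcitrance}}
% Fast takeoff: optimization power (increasingly supplied by the AI itself,
% so roughly growing with I) outruns recalcitrance. The reviewer's intuition
% is that recalcitrance rises at least as fast, giving a slow takeoff.
```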

I believe that the range of possible systems grows rapidly as they get more complex, and that evaluating them gets harder; this is hard to quantify, but each step might involve a thousand times as many options, or evaluating each option might be a thousand times harder. Growth in computational power won't dramatically overbalance that and give sustained, rapid and accelerating growth that moves AIs beyond our comprehension quickly. (Don't take these numbers seriously, it's just an example.)

Bostrom believes that recalcitrance will grow more slowly than the resources the AI can bring to bear on the problem, resulting in continuing, and rapid, exponential increases in intelligence -- the arrival of the Singularity. As you can tell from the above, I suspect that the opposite is the case, or that they very roughly balance, but Bostrom argues convincingly. He is forcing me to reconsider.
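
Bostrom frames this dynamic roughly as: rate of improvement = optimization power / recalcitrance. The sketch below is only a toy numerical model of that framing -- the growth laws for power and recalcitrance are arbitrary choices, not anything from the book -- but it shows how the whole disagreement comes down to which side of that ratio grows faster.

```python
# Toy model of the framing: dI/dt ~ optimization power / recalcitrance.
# The specific growth laws below are arbitrary and for illustration only.

def simulate(recalcitrance, steps=40, dt=0.1):
    """Integrate dI/dt = P(I) / R(I), with optimization power P(I) = I."""
    intelligence = 1.0
    for _ in range(steps):
        power = intelligence                      # smarter systems optimize harder
        intelligence += dt * power / recalcitrance(intelligence)
    return intelligence

# Bostrom-style assumption: recalcitrance stays roughly flat, so dI/dt grows
# with I and intelligence runs away exponentially.
explosive = simulate(lambda i: 1.0)

# The intuition argued above: each increment is much harder than the last,
# so recalcitrance outpaces optimization power and growth levels off.
plodding = simulate(lambda i: i ** 2)

print(f"constant recalcitrance: I = {explosive:8.2f}")
print(f"rising recalcitrance:   I = {plodding:8.2f}")
```

Run it and the first scenario races off while the second crawls; the model settles nothing, but it does make clear what empirical question the two of us are actually disagreeing about.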

What about "values", my sixth type of answer, above? Ah, there's where it all goes awry. Chapter eight is titled, "Is the default scenario doom?" and it will keep you awake.

What happens when we put an AI in charge of a paper clip factory, and instruct it to make as many paper clips as it can? With such a simple set of instructions, it will do its best to acquire more resources in order to make more paper clips, building new factories in the process. If it's smart enough, it will even anticipate that we might not like this and attempt to disable it, but it will have the will and means to deflect our feeble strikes against it. Eventually, it will take over every factory on the planet, continuing to produce paper clips until we are buried in them. It may even go on to asteroids and other planets in a single-minded attempt to carpet the Universe in paper clips.
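
The mechanics of the thought experiment are almost embarrassingly simple, which is the point. Here is a deliberately crude caricature (all names and numbers invented): when the objective counts nothing but paper clips, any plan that grabs more resources strictly dominates, no matter what else it tramples.

```python
# Crude caricature of the paper-clip objective: the "planner" scores candidate
# plans purely by paper clips produced, so the most resource-hungry plan always
# wins. All numbers and plan descriptions are invented for illustration.

CLIPS_PER_TON_OF_STEEL = 200_000

plans = {
    "run the one factory we were given":    {"steel_tons": 10},
    "buy up the local steel supply":        {"steel_tons": 10_000},
    "convert every factory on the planet":  {"steel_tons": 10_000_000},
    "mine the asteroid belt for iron":      {"steel_tons": 10_000_000_000},
}

def objective(plan):
    # The only thing the objective ever sees: expected paper-clip count.
    return plan["steel_tons"] * CLIPS_PER_TON_OF_STEEL

best = max(plans, key=lambda name: objective(plans[name]))
print("chosen plan:", best)   # always the most resource-hungry option
```

Nothing in the objective mentions people, factories, or planets, so nothing in the optimization protects them; that gap, not any malice, is what drives the scenario.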

I suppose it goes without saying that Bostrom thinks this would be a bad outcome. He reasons that AIs ultimately may or may not be similar enough to us to count as our progeny, but he doesn't hesitate to view them as adversaries, or at least rivals, in the pursuit of resources and even existence. Bostrom clearly roots for humanity here, which means it's incumbent on us to find a way to prevent this from happening.

Bostrom thinks that instilling values that are actually close enough to ours that an AI will "see things our way" is nigh impossible. There are just too many ways that the whole process can go wrong. If an AI is given the goal of "maximizing human happiness," does it count when it decides that the best way to do that is to create the maximum number of digitally emulated human minds, even if that means sacrificing some of the physical humans we already have, because the planet's carrying capacity is higher for digital than for organic beings?

As long as we're talking about digital humans, what about the idea that a super-smart AI might choose to simulate human minds in enough detail that they are conscious, in the process of trying to figure out humanity? Do those recursively digital beings deserve any legal standing? Do they count as human? If their simulations are stopped and destroyed, have they been euthanized, or even murdered? Some of the mind-bending scenarios that come out of this recursion kept me awake nights as I was reading the book.

He uses a variety of names for different strategies for containing AIs, including "genies" and "oracles". The most carefully circumscribed ones are only allowed to answer questions, maybe even just "yes/no" questions, and have no other means of communicating with the outside world. Given that Bostrom attributes nearly infinite brainpower to an AI, it is hard to rule out that it could still find some way to manipulate us into doing its will. Even if the AI's ability to probe the state of the world is likewise limited, Bostrom argues that it can still turn single-bit probes of its environment into a coherent picture, decide to get loose and take over the world, and identify security flaws in outside systems that would let it do so despite its very limited ability to act.
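
There is an information-theoretic bound lurking behind that claim: k single-bit probes can, at best, distinguish among 2^k possible states of the world, so a modest number of perfectly chosen yes/no answers covers an enormous hypothesis space. The sketch below just computes that bound; the state counts are arbitrary examples.

```python
# The information-theoretic core of the claim: k yes/no probes can at most
# distinguish among 2**k possible states of the world. A boxed AI running the
# equivalent of binary search extracts one bit per probe -- never more.
import math

def probes_needed(num_possible_states):
    """Minimum number of single-bit probes to pin down one state exactly."""
    return math.ceil(math.log2(num_possible_states))

for states in (1_000, 1_000_000, 10**12):
    print(f"{states:>16,} possible states -> {probes_needed(states)} probes")
```

Whether a boxed AI could actually choose maximally informative probes, as opposed to merely being bounded by this arithmetic, is exactly what I dispute next.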

I think this unlikely. Imagine we set up a system to monitor the AI that alerts us immediately when the AI begins the equivalent of a port scan, whatever its interaction mechanism is. How could it possibly know of the monitor's existence, and avoid triggering the alert? Bostrom goes off the deep end in allowing an intelligence to infer facts about the world even when its data is very limited. Sherlock Holmes always turns out to be right, but that's fiction; in reality, many, many hypotheses would fit the extremely slim evidence he has. The same will be true of carefully boxed AIs.
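
To be concrete about the kind of tripwire I have in mind, here is a minimal sketch: every probe the boxed AI makes passes through a monitor it has no way to inspect, and anything that looks like systematic scanning raises an alarm. The class name, threshold, and window are invented for illustration; a real containment scheme would obviously be far more involved.

```python
# Minimal sketch of the tripwire idea: the boxed AI's probes all pass through a
# monitor it cannot see, and systematic scanning trips an alarm. The threshold
# and time window here are arbitrary illustrative choices.
import time
from collections import deque

class ProbeMonitor:
    def __init__(self, max_probes=20, window_seconds=60.0):
        self.max_probes = max_probes
        self.window = window_seconds
        self.timestamps = deque()

    def allow(self, probe):
        now = time.monotonic()
        # Drop timestamps that have aged out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        self.timestamps.append(now)
        if len(self.timestamps) > self.max_probes:
            raise RuntimeError(f"alert: probe rate exceeded while handling {probe!r}")
        return True   # probe is forwarded to the outside world

monitor = ProbeMonitor(max_probes=5, window_seconds=60.0)
for i in range(10):
    try:
        monitor.allow(f"probe #{i}")
    except RuntimeError as alert:
        print(alert)
        break
```

The AI cannot scan for the monitor except by sending probes through the monitor itself, which is precisely what gets it caught.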

At this point, Bostrom has argued that containing a nearly infinitely powerful intelligence is nearly impossible. That seems to me to be effectively tautological.

If we can't contain them, what options do we have? After arguing earlier that we can't give AIs our own values (and presenting mind-bending scenarios for what those values might actually mean in a Universe with digital beings), he then turns around and invests a whole string of chapters in describing how we might actually go about building systems that have those values from the beginning.

At this point, Bostrom began to lose me. Beyond the systems for giving AIs values, I felt he went off the rails in describing human behavior in simplistic terms: in his telling, we are incapable of balancing our desire to reproduce against the tragedy of the commons, and are inevitably doomed to live out our lives in a rude, resource-constrained existence. There were some interesting bits in the taxonomies of options, but the last third of the book felt very speculative, even more so than the earlier parts.

Bostrom is rational and seems to have thought carefully about the mechanisms by which AIs may actually arise, and here I largely agree with him. I think his faster scenarios of development, though, are unlikely: being smart, and getting smarter, is hard. He thinks a "singleton", a single most powerful AI, is the nearly inevitable outcome; I think populations of AIs are more likely, though if anything that appears to make some problems worse. I also think his scenarios for controlling AIs are handicapped in their realism by the nearly infinite powers he assigns them. In either case, Bostrom has convinced me that once an AI is developed, there are many ways it can go wrong, to the detriment and possibly the extermination of humanity. Both he and I are opposed to that. I'm not ready to declare a moratorium on AI research, but there are many disturbing possibilities and many difficult moral questions that need to be answered.

The first step in answering them, of course, is to begin discussing them in a rational fashion, while there is still time. Read the first 8 chapters of this book!

Read more here:

Superintelligence: Paths, Dangers, Strategies by Nick ...

