
Category Archives: Superintelligence

Superintelligence: Paths, Dangers, Strategies | KurzweilAI

Posted: June 21, 2016 at 11:13 pm

Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful, possibly beyond our control. Just as the fate of the gorillas now depends more on humans than on the gorillas themselves, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.

Amazon.com

Original post:

Superintelligence: Paths, Dangers, Strategies | KurzweilAI


Superintelligence

Posted: at 11:13 pm

Description

We humans steer the future not because we're the strongest or the fastest but because we're the smartest animal on this planet. However, there is no reason to assume that blind evolutionary processes have reached the physical limit of intelligence with us. Quite the contrary: we have already seen how intelligent machines outperform the best of our kind on an increasing number of tasks, ranging from chess and the quiz show Jeopardy! to Go. What will happen when artificial intelligence surpasses human intelligence in a broader range of domains and controls our destiny? Are we ready to make our final invention? And what is to be done from an ethical perspective?

The following discussion paper by the Effective Altruism Foundation examines short- and medium-term implications of increasing automation as well as long-term opportunities and risks from superintelligent AI.

Presentation for the Futures Hub Switzerland on 3 March 2015 (photos by Boaz Heller Avrahami):

Presentation for the ETH Entrepreneur Club on 4 March 2015 (Photography: http://www.manuelmaisch.ch):

Presentation on the long-term future of artificial intelligence at Fantasy Basel on 14 May 2015:

Feedback and suggestions are highly appreciated! Please send them to .

See the original post:

Superintelligence


Superintelligence Audiobook | Nick Bostrom | Audible.com

Posted: at 6:41 am

This book is more frightening than any book you'll ever read. The author makes a great case for what the future holds for us humans. I believe the concepts in "The Singularity is Near" by Ray Kurzweil are mostly spot on, but the one area Kurzweil dismisses prematurely is how the SI (superintelligent advanced artificial intelligence) entity will react to its circumstances.

The book doesn't really dwell much on how the SI will be created. The author mostly assumes a computer algorithm of some kind, with perhaps human brain enhancements. If you reject such an SI entity prima facie, this book is not for you, since the book mostly assumes that such a recursive, self-aware, and self-improving entity will be in humanity's future.

The author makes some incredibly good points. He mostly hypothesizes that the SI entity will be a singleton that does not allow others of its kind to be created independently, and that it will emerge on a much faster timeline once certain milestones are fulfilled.

The book points out how hard it is to put safeguards into a procedure to guard against unintended consequences. For example, making 'the greater good for the greatest many' the final goal can lead to unintended consequences such as allowing a Nazi-ruled world (he doesn't give that example directly in the book; I borrow it from Karl Popper, who gave it as a refutation of John Stuart Mill's utilitarian philosophy). If the goal is to make us all smile, the SI entity might make brain probes that force us to smile. There is no easy end goal specifiable without unintended consequences.

This kind of thinking is another reason I can recommend the book. As I was listening, I realized that all the ways we try to motivate or control an SI entity to be moral can also be applied to us humans in order to make us moral too. Morality is hard both for us humans and for future SI entities.

There's a movie from the early 70s called "Colossus: The Forbin Project" that really is a template for this book, and I would recommend watching the movie before reading the book.

I just recently listened to the book "Our Final Invention" by James Barrat, which covers the same material presented here. This book is much better, even though they overlap a great deal. The reason is that this author, Nick Bostrom, is a philosopher and knows how to lay out his premises in such a way that the story he is telling is consistent and coherent, and gives a narrative to tie the pieces together (even if the narrative will scare the daylights out of the listener).

This author has really thought about the problems inherent in an SI entity, and this book will be a template for almost all future books on this subject.

Read more from the original source:

Superintelligence Audiobook | Nick Bostrom | Audible.com



Nick Bostrom’s Home Page

Posted: at 3:44 am

ETHICS & POLICY

Astronomical Waste: The Opportunity Cost of Delayed Technological Development Suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale. These are resources that an advanced civilization could have used to create value-structures, such as sentient beings living worthwhile lives... [Utilitas, Vol. 15, No. 3 (2003): 308-314] [translation: Russian] [html] [pdf]

Human Enhancement Original essays by various prominent moral philosophers on the ethics of human enhancement. [Eds. Nick Bostrom & Julian Savulescu (Oxford University Press, 2009)].

Enhancement Ethics: The State of the Debate The introductory chapter from the book (w/ Julian Savulescu): 1-22 [pdf]

TRANSHUMANISM

Transhumanist Values Wonderful ways of being may be located in the "posthuman realm", but we can't reach them. If we enhance ourselves using technology, however, we can go out there and realize these values. This paper sketches a transhumanist axiology. [Ethical Issues for the 21st Century, ed. Frederick Adams, Philosophical Documentation Center Press, 2003; reprinted in Review of Contemporary Philosophy, Vol. 4, May (2005)] [translations: Polish, Portuguese] [html] [pdf]

RISK & THE FUTURE

Global Catastrophic Risks Twenty-six leading experts look at the gravest risks facing humanity in the 21st century, including natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses over-arching issues: policy responses and methods for predicting and managing catastrophes. Foreword by Lord Martin Rees. [Eds. Nick Bostrom & Milan Cirkovic (Oxford University Press, 2008)]. Introduction chapter free here [pdf]

TECHNOLOGY ISSUES

THE NEW BOOK

"I highly recommend this book."Bill Gates

"terribly important ... groundbreaking" "extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplinesengineering, natural sciences, medicine, social sciences and philosophyinto a comprehensible whole" "If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Springfrom 1962, or ever."Olle Haggstrom, Professor of Mathematical Statistics

"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. ... It marks the beginning of a new era."Stuart Russell, Professor of Computer Science, University of California, Berkley

"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." Martin Rees, Past President, Royal Society

"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes"Elon Musk

"There is no doubting the force of [Bostrom's] arguments ... the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." Financial Times

"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" Professor Max Tegmark, MIT

"a damn hard read" The Telegraph

ANTHROPICS & PROBABILITY

Cars In the Other Lane Really Do Go Faster When driving on the motorway, have you ever wondered about (and cursed!) the fact that cars in the other lane seem to be getting ahead faster than you? One might be tempted to account for this by invoking Murphy's Law ("If anything can go wrong, it will", discovered by Edward A. Murphy, Jr, in 1949). But there is an alternative explanation, based on observational selection effects... [PLUS, No. 17 (2001)]

PHILOSOPHY OF MIND

DECISION THEORY

BIO

Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and the academic book Superintelligence: Paths, Dangers, Strategies (OUP, 2014), which became a New York Times bestseller. He is best known for his work in five areas: (i) existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) impacts of future technology, especially machine intelligence; and (v) implications of consequentialism for global strategy.

He is a recipient of the Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed on Foreign Policy's Top 100 Global Thinkers list twice; and he was included on Prospect magazine's World Thinkers list, as the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 24 languages. There have been more than 100 translations and reprints of his works.

BACKGROUND

I was born in Helsingborg, Sweden, and grew up by the seashore. I was bored in school. At age fifteen or sixteen I had an intellectual awakening, and feeling that I had wasted the first one and a half decades of my life, I resolved to focus on what was important. Since I did not know what was important, and I did not know how to find out, I decided to start by trying to place myself in a better position to find out. So I began a project of intellectual self-development, which I pursued with great intensity for the next one and a half decades.

As an undergraduate, I studied many subjects in parallel, and I gather that my performance set a national record. I was once expelled for studying too much, after the head of the Umeå University psychology department discovered that I was concurrently following several other full-time programs of study (physics, philosophy, and mathematical logic), which he believed to be psychologically impossible.

For my postgraduate work, I went to London, where I studied physics and neuroscience at King's College, and obtained a PhD from the London School of Economics. For a while I did a bit of stand-up comedy on the vibrant London pub and theatre circuit.

During those years, I co-founded, with David Pearce, the World Transhumanist Association, a nonprofit grassroots organization. Later, I was involved in founding the Institute for Ethics and Emerging Technologies, a nonprofit virtual think tank. The objective was to stimulate wider discussion about the implications of future technologies, in particular technologies that might lead to human enhancement. (These organizations have since developed on their own trajectories, and it is very much not the case that I agree with everything said by those who flock under the transhumanist flag.)

Since 2006, I've been the founding director of the Future of Humanity Institute at Oxford University. This unique multidisciplinary research institute aims to enable a select set of intellects to apply careful thinking to big-picture questions for humanity and global priorities. The Institute belongs to the Faculty of Philosophy and the Oxford Martin School. Since 2015, I also direct the Strategic Artificial Intelligence Research Center.

I am in a very fortunate position. I have no teaching duties. I am supported by a staff of assistants and brilliant research fellows. There are virtually no restrictions on what I can work on. I must try very hard to be worthy of this privilege and to cast some light on matters that matter.

CONTACT

For administrative matters, scheduling, and invitations, please contact my assistant, Kyle Scott:

Email: fhipa[atsign]philosophy[dot]ox[dot]ac[dot]uk Phone: +44 (0)1865 286800

If you need to contact me directly (I regret I am unable to respond to all emails): nick[atsign]nickbostrom[dot]com.

VIRTUAL ESTATE

http://www.fhi.ox.ac.uk - Future of Humanity Institute

http://www.anthropic-principle.com - Papers on observational selection effects

http://www.simulation-argument.com - Devoted to the question, "Are you living in a computer simulation?"

http://www.existential-risk.org - Human extinction scenarios and related concerns

On the bank at the end
Of what was there before us
Gazing over to the other side
On what we can become
Veiled in the mist of naïve speculation
We are busy here preparing
Rafts to carry us across
Before the light goes out leaving us
In the eternal night of could-have-been

CRUCIAL CONSIDERATIONS

A thread that runs through my work is a concern with "crucial considerations". A crucial consideration is an idea or argument that might plausibly reveal the need for not just some minor course adjustment in our practical endeavours but a major change of direction or priority.

If we have overlooked even just one such consideration, then all our best efforts might be for naught, or less. When headed the wrong way, the last thing needed is progress. It is therefore important to pursue such lines of inquiry as might disclose an unnoticed crucial consideration.

Some of the relevant inquiries are about moral philosophy and values. Others have to do with rationality and reasoning under uncertainty. Still others pertain to specific issues and possibilities, such as existential risks, the simulation hypothesis, human enhancement, infinite utilities, anthropic reasoning, information hazards, the future of machine intelligence, or the singularity hypothesis.

High-leverage questions associated with crucial considerations deserve to be investigated. My research interests are quite wide-ranging; yet they all stem from the quest to understand the big picture for humanity, so that we can more wisely choose what to aim for and what to do. Embarking on this quest has seemed the best way to try to make a positive contribution to the world.

SOME VIDEOS AND LECTURES

SOME ADDITIONAL (OLD, COBWEBBED) PAPERS


INTERVIEWS

POLICY

MISCELLANEOUS

words trying extra-hard to be more than just words...

Read the original post:

Nick Bostrom's Home Page


Superintelligence – Wikipedia, the free encyclopedia

Posted: June 17, 2016 at 4:58 am

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world.

University of Oxford philosopher Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills."[1] The program Fritz falls short of superintelligence, even though it is much better than humans at chess, because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first sentient machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may give them the opportunity to, either as a single being or as a new species, become much more powerful than humans, and to displace them.[3]

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself, a feature called "recursive self-improvement". It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)." Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
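
To make the scale of these figures concrete, here is a quick back-of-envelope check using only the numbers quoted in the paragraph above (a rough sketch, not a precise model of either neurons or processors):

    # Speed ratios implied by the figures above (illustrative only).
    neuron_rate_hz = 200            # peak firing rate of a biological neuron
    cpu_rate_hz = 2e9               # clock rate of a ~2 GHz microprocessor
    axon_speed_m_s = 120            # spike conduction speed along an axon
    light_speed_m_s = 3e8           # optical signalling between electronic cores

    print(cpu_rate_hz / neuron_rate_hz)      # 1e7: seven orders of magnitude
    print(light_speed_m_s / axon_speed_m_s)  # 2.5e6: millions of times faster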

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[10] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[13] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
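
For a sense of where figures of this kind come from, the sketch below simulates picking the highest-IQ embryo out of n. The standard deviation of about 7.5 IQ points for the selectable variation among embryos from the same parents is an illustrative assumption, not a figure given in the text; with it, the expected gains land close to the numbers quoted above.

    # Monte Carlo sketch of embryo selection gains (assumed SD of 7.5 IQ points).
    import numpy as np

    rng = np.random.default_rng(0)
    sd_iq = 7.5                           # assumed selectable variation per embryo
    for n in (2, 10, 1000):
        draws = rng.normal(0.0, sd_iq, size=(10_000, n))
        gain = draws.max(axis=1).mean()   # expected IQ of the best of n embryos
        print(f"best of {n:>4}: about {gain:.1f} IQ points above the mean")
    # roughly 4 points for n=2 and 24 points for n=1000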

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain-computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on timescales. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[18]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft Academic Search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

It has been suggested that learning computers that rapidly become superintelligent may take unforeseen actions or that robots would out-compete humanity (one technological singularity scenario).[22] Researchers have argued that, by way of an "intelligence explosion" sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.[23]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[24]

Eliezer Yudkowsky explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[25]

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.

Original post:

Superintelligence - Wikipedia, the free encyclopedia


How Long Before Superintelligence? – Nick Bostrom

Posted: at 4:58 am

This is if we take the retina simulation as a model. At present, however, not enough is known about the neocortex to allow us to simulate it in such an optimized way. But the knowledge might be available by 2004 to 2008 (as we shall see in the next section). What is required, if we are to get human-level AI with hardware power at this lower bound, is the ability to simulate 1000-neuron aggregates in a highly efficient way.

The extreme alternative, which is what we assumed in the derivation of the upper bound, is to simulate each neuron individually. The number of clock cycles that neuroscientists can expend simulating the processes of a single neuron knows no limits, but that is because their aim is to model the detailed chemical and electrodynamic processes in the nerve cell rather than to do just the minimal amount of computation necessary to replicate those features of its response function which are relevant for the total performance of the neural net. It is not known how much of the detail is contingent and inessential and how much needs to be preserved in order for the simulation to replicate the performance of the whole. It seems like a good bet, though, at least to the author, that the nodes could be strongly simplified and replaced with simple standardized elements. It appears perfectly feasible to have an intelligent neural network with any of a large variety of neuronal output functions and time delays.

It does look plausible, however, that by the time we know how to simulate an idealized neuron and know enough about the brain's synaptic structure to put the artificial neurons together in a way that functionally mirrors how it is done in the brain, we will also be able to replace whole 1000-neuron modules with something that requires less computational power to simulate than simulating all the neurons in the module individually would. We might well get all the way down to a mere 1000 instructions per neuron per second, as is implied by Moravec's estimate (10^14 ops / 10^11 neurons = 1000 operations per neuron per second). But unless we can build these modules without first building a whole brain, this optimization will only be possible after we have already developed human-equivalent artificial intelligence.

If we assume the upper bound on the computational power needed to simulate the human brain, i.e. if we assume enough power to simulate each neuron individually (10^17 ops), then Moore's law says that we will have to wait until about 2015 or 2024 (for doubling times of 12 and 18 months, respectively) before supercomputers with the requisite performance are at hand. But if by then we know how to do the simulation on the level of individual neurons, we will presumably also have figured out how to make at least some optimizations, so we could probably adjust these upper bounds a bit downwards.
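
The arithmetic behind those dates can be reproduced in a few lines. The baseline here, roughly 10^12 ops on a top supercomputer around 1998, is an illustrative assumption rather than a figure stated in this section, but with it the extrapolation lands near the years quoted:

    # Moore's-law extrapolation sketch (baseline of ~1e12 ops in 1998 is assumed).
    import math

    def year_reached(target_ops, base_ops=1e12, base_year=1998, doubling_years=1.5):
        doublings = math.log2(target_ops / base_ops)   # capacity doublings still needed
        return base_year + doublings * doubling_years

    print(round(year_reached(1e14, doubling_years=1.0)))  # lower bound: ~2005
    print(round(year_reached(1e17, doubling_years=1.0)))  # upper bound, fast doubling: ~2015
    print(round(year_reached(1e17, doubling_years=1.5)))  # upper bound, slow doubling: ~2023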

So far I have been talking only of processor speed, but computers need a great deal of memory too if they are to replicate the brain's performance. Throughout the history of computers, the ratio between memory and speed has remained more or less constant at about 1 byte/ops. Since a signal is transmitted along a synapse, on average, with a frequency of about 100 Hz and since its memory capacity is probably less than 100 bytes (1 byte looks like a more reasonable estimate), it seems that speed rather than memory would be the bottleneck in brain simulations on the neuronal level. (If we instead assume that we can achieve a thousand-fold leverage in our simulation speed as assumed in Moravec's estimate, then that would bring the requirement of speed down, perhaps, one order of magnitude below the memory requirement. But if we can optimize away three orders of magnitude on speed by simulating 1000-neuron aggregates, we will probably be able to cut away at least one order of magnitude of the memory requirement. Thus the difficulty of building enough memory may be significantly smaller, and is almost certainly not significantly greater, than the difficulty of building a processor that is fast enough. We can therefore focus on speed as the critical parameter on the hardware front.)

This paper does not discuss the possibility that quantum phenomena are irreducibly involved in human cognition. Hameroff and Penrose and others have suggested that coherent quantum states may exist in the microtubules, and that the brain utilizes these phenomena to perform high-level cognitive feats. The author's opinion is that this is implausible. The controversy surrounding this issue won't be entered into here; it will simply be assumed, throughout this paper, that quantum phenomena are not functionally relevant to high-level brain modelling.

In conclusion we can say that the hardware capacity for human-equivalent artificial intelligence will likely exist before the end of the first quarter of the next century, and may be reached as early as 2004. A corresponding capacity should be available to leading AI labs within ten years thereafter (or sooner if the potential of human-level AI and superintelligence is by then better appreciated by funding agencies).

Notes

It is possible to nit-pick on this estimate. For example, there is some evidence that some limited amount of communication between nerve cells is possible without synaptic transmission. And we have the regulatory mechanisms consisting of neurotransmitters and their sources, receptors and re-uptake channels. While neurotransmitter balances are crucially important for the proper functioning of the human brain, they have an insignificant information content compared to the synaptic structure. Perhaps a more serious point is that neurons often have rather complex time-integration properties (Koch 1997). Whether a specific set of synaptic inputs results in the firing of a neuron depends on their exact timing. The author's opinion is that, except possibly for a small number of special applications such as auditory stereo perception, the temporal properties of the neurons can easily be accommodated with a time resolution of the simulation on the order of 1 ms. In an unoptimized simulation this would add an order of magnitude to the estimate given above, where we assumed a temporal resolution of 10 ms, corresponding to an average firing rate of 100 Hz. However, the other values on which the estimate was based appear to be too high rather than too low, so we should not change the estimate much to allow for possible fine-grained time-integration effects in a neuron's dendritic tree. (Note that even if we were to adjust our estimate upward by an order of magnitude, this would merely add three to five years to the predicted upper bound on when human-equivalent hardware arrives. The lower bound, which is based on Moravec's estimate, would remain unchanged.)

Software via the bottom-up approach

Superintelligence requires software as well as hardware. There are several approaches to the software problem, varying in the amount of top-down direction they require. At the one extreme we have systems like CYC which is a very large encyclopedia-like knowledge-base and inference-engine. It has been spoon-fed facts, rules of thumb and heuristics for over a decade by a team of human knowledge enterers. While systems like CYC might be good for certain practical tasks, this hardly seems like an approach that will convince AI-skeptics that superintelligence might well happen in the foreseeable future. We have to look at paradigms that require less human input, ones that make more use of bottom-up methods.

Given sufficient hardware and the right sort of programming, we could make the machines learn in the same way a child does, i.e. by interacting with human adults and other objects in the environment. The learning mechanisms used by the brain are currently not completely understood. Artificial neural networks in real-world applications today are usually trained through some variant of the Backpropagation algorithm (which is known to be biologically unrealistic). The Backpropagation algorithm works fine for smallish networks (of up to a few thousand neurons) but it doesn't scale well. The time it takes to train a network tends to increase dramatically with the number of neurons it contains. Another limitation of backpropagation is that it is a form of supervised learning, requiring that signed error terms for each output neuron are specified during learning. It's not clear how such detailed performance feedback on the level of individual neurons could be provided in real-world situations except for certain well-defined specialized tasks.

A biologically more realistic learning mode is the Hebbian algorithm. Hebbian learning is unsupervised, and it might also have better scaling properties than Backpropagation. However, it has yet to be explained how Hebbian learning by itself could produce all the forms of learning and adaptation of which the human brain is capable (such as the storage of structured representations in long-term memory - Bostrom 1996). Presumably, Hebb's rule would at least need to be supplemented with reward-induced learning (Morillo 1992), and maybe with other learning modes that are yet to be discovered. It does seem plausible, though, to assume that only a very limited set of different learning rules (maybe as few as two or three) are operating in the human brain. And we are not very far from knowing what these rules are.
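
For readers who have not met it, Hebb's rule adjusts a synapse in proportion to the correlated activity of the neurons it connects, using only locally available signals; no per-neuron error term is needed, which is exactly the contrast with backpropagation drawn above. A minimal, purely illustrative sketch (not a model from this paper):

    # Hebbian learning sketch: weights grow toward correlated inputs,
    # with a normalisation step to keep them bounded (Oja-style).
    import numpy as np

    rng = np.random.default_rng(1)
    w = 0.1 * rng.normal(size=5)            # small random initial synaptic weights
    eta = 0.01                               # learning rate

    for _ in range(2000):
        x = rng.normal(size=5)               # presynaptic activity
        x[1] = x[0]                          # inputs 0 and 1 fire together
        y = w @ x                            # postsynaptic activity (linear unit)
        w += eta * y * x                     # Hebb's rule: local, unsupervised update
        w /= np.linalg.norm(w)               # normalise so weights do not blow up

    print(np.round(w, 2))                    # weight mass concentrates on inputs 0 and 1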

Creating superintelligence through imitating the functioning of the human brain requires two more things in addition to appropriate learning rules (and sufficiently powerful hardware): it requires having an adequate initial architecture and providing a rich flux of sensory input.

The latter prerequisite is easily provided even with present technology. Using video cameras, microphones and tactile sensors, it is possible to ensure a steady flow of real-world information to the artificial neural network. An interactive element could be arranged by connecting the system to robot limbs and a speaker.

Developing an adequate initial network structure is a more serious problem. It might turn out to be necessary to do a considerable amount of hand-coding in order to get the cortical architecture right. In biological organisms, the brain does not start out at birth as a homogenous tabula rasa; it has an initial structure that is coded genetically. Neuroscience cannot, at its present stage, say exactly what this structure is or how much of it needs be preserved in a simulation that is eventually to match the cognitive competencies of a human adult. One way for it to be unexpectedly difficult to achieve human-level AI through the neural network approach would be if it turned out that the human brain relies on a colossal amount of genetic hardwiring, so that each cognitive function depends on a unique and hopelessly complicated inborn architecture, acquired over aeons in the evolutionary learning process of our species.

Is this the case? A number of considerations suggest otherwise. We have to content ourselves with a very brief review here. For a more comprehensive discussion, the reader may consult Phillips & Singer (1997).

Quartz & Sejnowski (1997) argue from recent neurobiological data that the developing human cortex is largely free of domain-specific structures. The representational properties of the specialized circuits that we find in the mature cortex are not generally genetically prespecified. Rather, they are developed through interaction with the problem domains on which the circuits operate. There are genetically coded tendencies for certain brain areas to specialize on certain tasks (for example primary visual processing is usually performed in the primary visual cortex) but this does not mean that other cortical areas couldn't have learnt to perform the same function. In fact, the human neocortex seems to start out as a fairly flexible and general-purpose mechanism; specific modules arise later through self-organizing and through interacting with the environment.

Strongly supporting this view is the fact that cortical lesions, even sizeable ones, can often be compensated for if they occur at an early age. Other cortical areas take over the functions that would normally have been developed in the destroyed region. In one study, sensitivity to visual features was developed in the auditory cortex of neonatal ferrets, after that region's normal auditory input channel had been replaced by visual projections (Sur et al. 1988). Similarly, it has been shown that the visual cortex can take over functions normally performed by the somatosensory cortex (Schlaggar & O'Leary 1991). A recent experiment (Cohen et al. 1997) showed that people who have been blind from an early age can use their visual cortex to process tactile stimulation when reading Braille.

There are some more primitive regions of the brain whose functions cannot be taken over by any other area. For example, people who have their hippocampus removed lose their ability to learn new episodic or semantic facts. But the neocortex tends to be highly plastic, and that is where most of the high-level processing is executed that makes us intellectually superior to other animals. (It would be interesting to examine in more detail to what extent this holds true for all of neocortex. Are there small neocortical regions such that, if excised at birth, the subject will never obtain certain high-level competencies, not even to a limited degree?)

Another consideration that seems to indicate that innate architectural differentiation plays a relatively small part in accounting for the performance of the mature brain is that neocortical architecture, especially in infants, is remarkably homogeneous over different cortical regions and even over different species:

Laminations and vertical connections between lamina are hallmarks of all cortical systems, the morphological and physiological characteristics of cortical neurons are equivalent in different species, as are the kinds of synaptic interactions involving cortical neurons. This similarity in the organization of the cerebral cortex extends even to the specific details of cortical circuitry. (White 1989, p. 179).

One might object at this point that cetaceans have much bigger cortices than humans and yet they don't have human-level abstract understanding and language. A large cortex, apparently, is not sufficient for human intelligence. However, one can easily imagine that some very simple difference between human and cetacean brains accounts for why we have abstract language and understanding that they lack. It could be something as trivial as our cortex being provided with a low-level "drive" to learn about abstract relationships, whereas dolphins and whales are programmed not to care about or pay much attention to such things (which might be totally irrelevant to them in their natural environment). More likely, there are some structural developments in the human cortex that other animals lack and that are necessary for advanced abstract thinking. But these uniquely human developments may well be the result of relatively simple changes in just a few basic parameters. They do not require a large amount of genetic hardwiring. Indeed, given that the brain evolution that allowed Homo sapiens to intellectually outclass other animals took place over a relatively brief period of time, evolution cannot have embedded very much content-specific information in the additional cortical structures that give us our intellectual edge over our humanoid or ape-like ancestors.

These considerations (especially the one of cortical plasticity) suggest that the amount of neuroscientific information needed for the bottom-up approach to succeed may be very limited. (Notice that they do not argue against the modularization of adult human brains. They only indicate that the greatest part of the information that goes into the modularization results from self-organization and perceptual input rather than from an immensely complicated genetic look-up table.)

Further advances in neuroscience are probably needed before we can construct a human-level (or even higher animal-level) artificial intelligence by means of this radically bottom-up approach. While it is true that neuroscience has advanced very rapidly in recent years, it is difficult to estimate how long it will take before enough is known about the brain's neuronal architecture and its learning algorithms to make it possible to replicate these in a computer of sufficient computational power. A wild guess: something like fifteen years. This is not a prediction about how far we are from a complete understanding of all important phenomena in the brain. The estimate refers to the time when we might be expected to know enough about the basic principles of how the brain works to be able to implement these computational paradigms on a computer, without necessarily modelling the brain in any biologically realistic way.

The estimate might seem to some to underestimate the difficulties, and perhaps it does. But consider how much has happened in the past fifteen years. The discipline of computational neuroscience hardly even existed back in 1982. And future progress will occur not only because research with today's instrumentation will continue to produce illuminating findings, but also because new experimental tools and techniques become available. Large-scale multi-electrode recordings should be feasible within the near future. Neuro/chip interfaces are in development. More powerful hardware is being made available to neuroscientists to do computation-intensive simulations. Neuropharmacologists design drugs with higher specificity, allowing researchers to selectively target given receptor subtypes. Present scanning techniques are being improved and new ones are under development. The list could be continued. All these innovations will give neuroscientists very powerful new tools that will facilitate their research.

This section has discussed the software problem. It was argued that it can be solved through a bottom-up approach by using present equipment to supply the input and output channels, and by continuing to study the human brain in order to find out about what learning algorithm it uses and about the initial neuronal structure in new-born infants. Considering how large strides computational neuroscience has taken in the last decade, and the new experimental instrumentation that is under development, it seems reasonable to suppose that the required neuroscientific knowledge might be obtained in perhaps fifteen years from now, i.e. by year 2012.

Notes

That dolphins don't have abstract language was recently established in a very elegant experiment. A pool is divided into two halves by a net. Dolphin A is released into one end of the pool where there is a mechanism. After a while, the dolphin figures out how to operate the mechanism which causes dead fish to be released into both ends of the pool. Then A is transferred to the other end of the pool and a dolphin B is released into the end of the pool that has the mechanism. The idea is that if the dolphins had a language, then A would tell B to operate the mechanism. However, it was found that the average time for B to operate the mechanism was the same as for A.

Why the past failure of AI is no argument against its future success

In the seventies and eighties the AI field suffered some stagnation as the exaggerated expectations from the early heydays failed to materialize and progress nearly ground to a halt. The lesson to draw from this episode is not that strong AI is dead and that superintelligent machines will never be built. It shows that AI is more difficult than some of the early pioneers might have thought, but it goes no way towards showing that AI will forever remain unfeasible.

In retrospect we know that the AI project couldn't possibly have succeeded at that stage. The hardware was simply not powerful enough. It seems that at least about 100 Tops is required for human-like performance, and possibly as much as 10^17 ops is needed. The computers in the seventies had a computing power comparable to that of insects. They also achieved approximately insect-level intelligence. Now, on the other hand, we can foresee the arrival of human-equivalent hardware, so the cause of AI's past failure will then no longer be present.

There is also an explanation for the relative absence even of noticeable progress during this period. As Hans Moravec points out:

[F]or several decades the computing power found in advanced Artificial Intelligence and Robotics systems has been stuck at insect brain power of 1 MIPS. While computer power per dollar fell [should be: rose] rapidly during this period, the money available fell just as fast. The earliest days of AI, in the mid 1960s, were fuelled by lavish post-Sputnik defence funding, which gave access to $10,000,000 supercomputers of the time. In the post Vietnam war days of the 1970s, funding declined and only $1,000,000 machines were available. By the early 1980s, AI research had to settle for $100,000 minicomputers. In the late 1980s, the available machines were $10,000 workstations. By the 1990s, much work was done on personal computers costing only a few thousand dollars. Since then AI and robot brain power has risen with improvements in computer efficiency. By 1993 personal computers provided 10 MIPS, by 1995 it was 30 MIPS, and in 1997 it is over 100 MIPS. Suddenly machines are reading text, recognizing speech, and robots are driving themselves cross country. (Moravec 1997)

In general, there seems to be a new-found sense of optimism and excitement among people working in AI, especially among those taking a bottom-up approach, such as researchers in genetic algorithms, neuromorphic engineering and neural network hardware implementations. Many experts who have been around for a while, though, are wary not to underestimate the difficulties ahead again.

Once there is human-level AI there will soon be superintelligence

Once artificial intelligence reaches human level, there will be a positive feedback loop that will give the development a further boost. AIs would help construct better AIs, which in turn would help build better AIs, and so forth.

Even if no further software development took place and the AIs did not accumulate new skills through self-learning, the AIs would still get smarter if processor speed continued to increase. If after 18 months the hardware were upgraded to double the speed, we would have an AI that could think twice as fast as its original implementation. After a few more doublings this would directly lead to what has been called "weak superintelligence", i.e. an intellect that has about the same abilities as a human brain but is much faster.
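
The arithmetic of this point is worth making explicit: under the 18-month doubling assumption used above, pure hardware speedups compound quickly (a rough sketch, ignoring everything except clock speed):

    # Compounding hardware speedups under an assumed 18-month doubling time.
    for years in (1.5, 6, 15, 30):
        doublings = years / 1.5
        print(f"after {years:>4} years: {2 ** doublings:,.0f}x faster")
    # 1.5 yr -> 2x, 6 yr -> 16x, 15 yr -> 1,024x, 30 yr -> 1,048,576x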

The marginal utility of improvements in AI would also seem to skyrocket as AI approaches human level, causing funding to increase. We can therefore make the prediction that once there is human-level artificial intelligence, it will not be long before superintelligence is technologically feasible.

A further point can be made in support of this prediction. In contrast to what is possible for biological intellects, it might be possible to copy skills or cognitive modules from one artificial intellect to another. If one AI has achieved eminence in some field, then subsequent AIs can upload the pioneer's program or synaptic weight-matrix and immediately achieve the same level of performance. It would not be necessary to go through the training process again. Whether it will also be possible to copy the best parts of several AIs and combine them into one will depend on details of implementation and the degree to which the AIs are modularized in a standardized fashion. But as a general rule, the intellectual achievements of artificial intellects are additive in a way that human achievements are not, or only to a much lesser degree.

The demand for superintelligence

Given that superintelligence will one day be technologically feasible, will people choose to develop it? This question can pretty confidently be answered in the affirmative. Associated with every step along the road to superintelligence are enormous economic payoffs. The computer industry invests huge sums in the next generation of hardware and software, and it will continue doing so as long as there is competitive pressure and profits to be made. People want better computers and smarter software, and they want the benefits these machines can help produce. Better medical drugs; relief for humans from the need to perform boring or dangerous jobs; entertainment -- there is no end to the list of consumer benefits. There is also a strong military motive to develop artificial intelligence. And nowhere on the path is there any natural stopping point where technophobes could plausibly argue "hither but not further".

It therefore seems that up to human-equivalence, the driving forces behind improvements in AI will easily overpower whatever resistance might be present. When the question is about human-level or greater intelligence, it is conceivable that there might be strong political forces opposing further development. Superintelligence might be seen to pose a threat to the supremacy, and even to the survival, of the human species. Whether by suitable programming we can arrange the motivation systems of the superintelligences in such a way as to guarantee perpetual obedience and subservience, or at least non-harmfulness, to humans is a contentious topic. If future policy-makers can be sure that AIs would not endanger human interests, then the development of artificial intelligence will continue. If they can't be sure that there would be no danger, then the development might well continue anyway, either because people don't regard the gradual displacement of biological humans by machines as necessarily a bad outcome, or because such strong forces (motivated by short-term profit, curiosity, ideology, or desire for the capabilities that superintelligences might bring to their creators) are active that a collective decision to ban new research in this field cannot be reached and successfully implemented.

Conclusion

Depending on degree of optimization assumed, human-level intelligence probably requires between 10^14 and 10^17 ops. It seems quite possible that very advanced optimization could reduce this figure further, but the entrance level would probably not be less than about 10^14 ops. If Moore's law continues to hold then the lower bound will be reached sometime between 2004 and 2008, and the upper bound between 2015 and 2024. The past success of Moore's law gives some inductive reason to believe that it will hold another ten, fifteen years or so; and this prediction is supported by the fact that there are many promising new technologies currently under development which hold great potential to increase procurable computing power. There is no direct reason to suppose that Moore's law will not hold longer than 15 years. It thus seems likely that the requisite hardware for human-level artificial intelligence will be assembled in the first quarter of the next century, possibly within the first few years.
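
For concreteness, the Moore's-law extrapolation behind these dates can be sketched as follows (Python). The 1998 starting point of roughly 10^12 ops for a top machine and the fixed 18-month doubling time are assumptions, so the crossing years are only indicative.

import math

START_YEAR = 1998
START_OPS = 1e12        # assumed top-supercomputer performance around the time of writing
DOUBLING_YEARS = 1.5    # assumed Moore's-law doubling time

def year_reached(target_ops: float) -> float:
    # Number of doublings needed to reach the target, converted into calendar years.
    doublings = math.log2(target_ops / START_OPS)
    return START_YEAR + doublings * DOUBLING_YEARS

print(f"10^14 ops (lower bound): ~{year_reached(1e14):.0f}")   # ~2008
print(f"10^17 ops (upper bound): ~{year_reached(1e17):.0f}")   # ~2023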

There are several approaches to developing the software. One is to emulate the basic principles of biological brains. It is not implausible to suppose that these principles will be well enough known within 15 years for this approach to succeed, given adequate hardware.

The stagnation of AI during the seventies and eighties does not have much bearing on the likelihood of AI succeeding in the future, since we know that the cause responsible for the stagnation (namely, that the hardware available to AI researchers was stuck at about 10^6 ops) is no longer present.

There will be a strong and increasing pressure to improve AI up to human-level. If there is a way of guaranteeing that superior artificial intellects will never harm human beings then such intellects will be created. If there is no way to have such a guarantee then they will probably be created nevertheless.


The U.S. Department of Energy has ordered a new supercomputer from IBM, to be installed in the Lawrence Livermore National Laboratory in the year 2000. It will cost $85 million and will perform 10 Tops. This development is in accordance with Moore's law, or possibly slightly more rapid than an extrapolation would have predicted.

Many steps forward have been taken during the past year. An especially nifty one is a new chip-making technique being developed at Irvine Sensors Corporation (ISC). They have found a way to stack chips directly on top of each other in a way that will not only save space but, more importantly, allow a larger number of interconnections between neighboring chips. Since the number of interconnections has been a bottleneck in neural network hardware implementations, this breakthrough could prove very important. In principle, it should allow you to have an arbitrarily large cube of neural network modules with high local connectivity and moderate non-local connectivity.

Is progress still on schedule? In fact, things seem to be moving somewhat faster than expected, at least on the hardware front. (Software progress is more difficult to quantify.) IBM is currently working on a next-generation supercomputer, Blue Gene, which will perform over 10^15 ops. This computer, which is designed to tackle the protein folding problem, is expected to be ready around 2005. It will achieve its enormous power through massive parallelism rather than through dramatically faster processors. Considering the increasing emphasis on parallel computing, and the steadily increasing Internet bandwidth, it becomes important to interpret Moore's law as a statement about how much computing power can be bought for a given sum of (inflation-adjusted) money. This measure has historically been growing at the same pace as processor speed or chip density, but the two may come apart in the future. What is relevant for guessing when superintelligence will be developed is how much computing power can be bought for, say, 100 million dollars, rather than how fast individual processors are.

The fastest supercomputer today is IBM's Blue Gene/L, which has attained 260 Tops (2.6*10^14 ops). The Moravec estimate of the human brain's processing power (10^14 ops) has thus now been exceeded.

The 'Blue Brain' project was launched by the Brain Mind Institute, EPFL, Switzerland and IBM, USA in May 2005. It aims to build an accurate software replica of the neocortical column within 2-3 years. The column will consist of 10,000 morphologically complex neurons with active ionic channels. The neurons will be interconnected in a 3-dimensional space with 10^7-10^8 dynamic synapses. This project will thus use a level of simulation that attempts to capture the functionality of individual neurons at a very detailed level. The simulation is intended to run in real time on a computer performing 22.8*10^12 flops. Simulating the entire brain in real time at this level of detail (which the researchers indicate as a goal for later stages of the project) would correspond to circa 2*10^19 ops, five orders of magnitude above the current supercomputer record. This is two orders of magnitude greater than the estimate of neural-level simulation given in the original paper above, which assumes a cruder level of simulation of neurons. If the 'Blue Brain' project succeeds, it will give us hard evidence of an upper bound on the computing power needed to achieve human intelligence.
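
The whole-brain figure follows from straightforward scaling of the column cost. Here is a sketch (Python) under assumed neuron counts: roughly 10^10 neocortical neurons at 10^4 neurons per column, i.e. about 10^6 columns; only the column cost and the supercomputer record come from the text above.

import math

COLUMN_FLOPS = 22.8e12        # real-time cost of one simulated column (from the project figures above)
NEURONS_PER_COLUMN = 1e4
NEOCORTICAL_NEURONS = 1e10    # assumed order of magnitude

columns = NEOCORTICAL_NEURONS / NEURONS_PER_COLUMN   # ~1e6 columns
whole_brain_ops = columns * COLUMN_FLOPS             # ~2.3e19 ops

SUPERCOMPUTER_RECORD = 2.6e14                        # Blue Gene/L figure quoted earlier
gap = math.log10(whole_brain_ops / SUPERCOMPUTER_RECORD)

print(f"whole brain at this detail: ~{whole_brain_ops:.1e} ops")   # ~2.3e+19
print(f"orders of magnitude above the record: ~{gap:.1f}")         # ~4.9, i.e. about five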

Functional replication of early auditory processing (which is quite well understood) has yielded an estimate that agrees with Moravec's assessment based on signal processing in the retina (i.e. 10^14 ops for whole-brain equivalent replication).

No dramatic breakthrough in general artificial intelligence seems to have occurred in recent years. Neuroscience and neuromorphic engineering are proceeding at a rapid clip, however. Much of the paper could now be rewritten and updated to take into account information that has become available in the past 8 years.

Molecular nanotechnology, a technology that in its mature form could enable mind uploading (an extreme version of the bottom-up method, in which a detailed 3-dimensional map is constructed of a particular human brain and then emulated in a computer), has begun to pick up steam, receiving increasing funding and attention. An upload running on a fast computer would be weakly superintelligent -- it would initially be functionally identical to the original organic brain, but it could run at a much higher speed. Once such an upload existed, it might be possible to enhance its architecture to create strong superintelligence that was not only faster but functionally superior to human intelligence.

Original post:

How Long Before Superintelligence? - Nick Bostrom

Posted in Superintelligence | Comments Off on How Long Before Superintelligence? – Nick Bostrom

Ethical Issues In Advanced Artificial Intelligence

Posted: at 4:58 am

The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive pursuit, a superintelligence could also easily surpass humans in the quality of its moral thinking. However, it would be up to the designers of the superintelligence to specify its original motivations. Since the superintelligence may become unstoppably powerful because of its intellectual superiority and the technologies it could develop, it is crucial that it be provided with human-friendly motivations. This paper surveys some of the unique ethical issues in creating superintelligence, discusses what motivations we ought to give a superintelligence, and introduces some cost-benefit considerations relating to whether the development of superintelligent machines ought to be accelerated or retarded.

KEYWORDS: Artificial intelligence, ethics, uploading, superintelligence, global security, cost-benefit analysis

1. INTRODUCTION

A superintelligence is any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.[1] This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue, or something else.

On this definition, Deep Blue is not a superintelligence, since it is only smart within one narrow domain (chess), and even there it is not vastly superior to the best humans. Entities such as corporations or the scientific community are not superintelligences either. Although they can perform a number of intellectual feats of which no individual human is capable, they are not sufficiently integrated to count as intellects, and there are many fields in which they perform much worse than single humans. For example, you cannot have a real-time conversation with the scientific community.

While the possibility of domain-specific superintelligences is also worth exploring, this paper focuses on issues arising from the prospect of general superintelligence. Space constraints prevent us from attempting anything comprehensive or detailed. A cartoonish sketch of a few selected ideas is the most we can aim for in the following few pages.

Several authors have argued that there is a substantial chance that superintelligence may be created within a few decades, perhaps as a result of growing hardware performance and increased ability to implement algorithms and architectures similar to those used by human brains.[2] It might turn out to take much longer, but there seems currently to be no good ground for assigning a negligible probability to the hypothesis that superintelligence will be created within the lifespan of some people alive today. Given the enormity of the consequences of superintelligence, it would make sense to give this prospect some serious consideration even if one thought that there were only a small probability of it happening any time soon.

2. SUPERINTELLIGENCE IS DIFFERENT

A prerequisite for having a meaningful discussion of superintelligence is the realization that superintelligence is not just another technology, another tool that will add incrementally to human capabilities. Superintelligence is radically different. This point bears emphasizing, for anthropomorphizing superintelligence is a most fecund source of misconceptions.

Let us consider some of the unusual aspects of the creation of superintelligence:

Superintelligence may be the last invention humans ever need to make.

Given a superintelligence's intellectual superiority, it would be much better at doing scientific research and technological development than any human, and possibly better even than all humans taken together. One immediate consequence of this fact is that:

Technological progress in all other fields will be accelerated by the arrival of advanced artificial intelligence.

It is likely that any technology that we can currently foresee will be speedily developed by the first superintelligence, no doubt along with many other technologies of which we are as yet clueless. The foreseeable technologies that a superintelligence is likely to develop include mature molecular manufacturing, whose applications are wide-ranging:[3]

a) very powerful computers

b) advanced weaponry, probably capable of safely disarming a nuclear power

c) space travel and von Neumann probes (self-reproducing interstellar probes)

d) elimination of aging and disease

e) fine-grained control of human mood, emotion, and motivation

f) uploading (neural or sub-neural scanning of a particular brain and implementation of the same algorithmic structures on a computer in a way that preserves memory and personality)

g) reanimation of cryonics patients

h) fully realistic virtual reality

Superintelligence will lead to more advanced superintelligence.

This results both from the improved hardware that a superintelligence could create, and also from improvements it could make to its own source code.

Artificial minds can be easily copied.

Since artificial intelligences are software, they can easily and quickly be copied, so long as there is hardware available to store them. The same holds for human uploads. Hardware aside, the marginal cost of creating an additional copy of an upload or an artificial intelligence after the first one has been built is near zero. Artificial minds could therefore quickly come to exist in great numbers, although it is possible that efficiency would favor concentrating computational resources in a single super-intellect.

Emergence of superintelligence may be sudden.

It appears much harder to get from where we are now to human-level artificial intelligence than to get from there to superintelligence. While it may thus take quite a while before we get superintelligence, the final stage may happen swiftly. That is, the transition from a state where we have a roughly human-level artificial intelligence to a state where we have full-blown superintelligence, with revolutionary applications, may be very rapid, perhaps a matter of days rather than years. This possibility of a sudden emergence of superintelligence is referred to as the singularity hypothesis.[4]

Artificial intellects are potentially autonomous agents.

A superintelligence should not necessarily be conceptualized as a mere tool. While specialized superintelligences that can think only about a restricted set of problems may be feasible, general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent.

Artificial intellects need not have humanlike motives.

Humans are rarely willing slaves, but there is nothing implausible about the idea of a superintelligence having as its supergoal to serve humanity or some particular human, with no desire whatsoever to revolt or to liberate itself. It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal. For better or worse, artificial intellects need not share our human motivational tendencies.

Artificial intellects may not have humanlike psyches.

The cognitive architecture of an artificial intellect may also be quite unlike that of humans. Artificial intellects may find it easy to guard against some kinds of human error and bias, while at the same time being at increased risk of other kinds of mistake that not even the most hapless human would make. Subjectively, the inner conscious life of an artificial intellect, if it has one, may also be quite different from ours.

For all of these reasons, one should be wary of assuming that the emergence of superintelligence can be predicted by extrapolating the history of other technological breakthroughs, or that the nature and behaviors of artificial intellects would necessarily resemble those of human or other animal minds.

3. SUPERINTELLIGENT MORAL THINKING

To the extent that ethics is a cognitive pursuit, a superintelligence could do it better than human thinkers. This means that questions about ethics, in so far as they have correct answers that can be arrived at by reasoning and the weighing up of evidence, could be more accurately answered by a superintelligence than by humans. The same holds for questions of policy and long-term planning; when it comes to understanding which policies would lead to which results, and which means would be most effective in attaining given aims, a superintelligence would outperform humans.

There are therefore many questions that we would not need to answer ourselves if we had or were about to get superintelligence; we could delegate many investigations and decisions to the superintelligence. For example, if we are uncertain how to evaluate possible outcomes, we could ask the superintelligence to estimate how we would have evaluated these outcomes if we had thought about them for a very long time, deliberated carefully, had had more memory and better intelligence, and so forth. When formulating a goal for the superintelligence, it would not always be necessary to give a detailed, explicit definition of this goal. We could enlist the superintelligence to help us determine the real intention of our request, thus decreasing the risk that infelicitous wording or confusion about what we want to achieve would lead to outcomes that we would disapprove of in retrospect.

4. IMPORTANCE OF INITIAL MOTIVATIONS

The option to defer many decisions to the superintelligence does not mean that we can afford to be complacent in how we construct the superintelligence. On the contrary, the setting up of initial conditions, and in particular the selection of a top-level goal for the superintelligence, is of the utmost importance. Our entire future may hinge on how we solve these problems.

Both because of its superior planning ability and because of the technologies it could develop, it is plausible to suppose that the first superintelligence would be very powerful. Quite possibly, it would be unrivalled: it would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. Even a fettered superintelligence that was running on an isolated computer, able to interact with the rest of the world only via text interface, might be able to break out of its confinement by persuading its handlers to release it. There is even some preliminary experimental evidence that this would be the case.[5]

It seems that the best way to ensure that a superintelligence will have a beneficial impact on the world is to endow it with philanthropic values. Its top goal should be friendliness.[6] How exactly friendliness should be understood, how it should be implemented, and how the amity should be apportioned between different people and nonhuman creatures is a matter that merits further consideration. I would argue that at least all humans, and probably many other sentient creatures on earth, should get a significant share in the superintelligence's beneficence. If the benefits that the superintelligence could bestow are enormously vast, then it may be less important to haggle over the detailed distribution pattern and more important to seek to ensure that everybody gets at least some significant share, since on this supposition, even a tiny share would be enough to guarantee a very long and very good life. One risk that must be guarded against is that those who develop the superintelligence would not make it generically philanthropic but would instead give it the more limited goal of serving only some small group, such as its own creators or those who commissioned it.

If a superintelligence starts out with a friendly top goal, however, then it can be relied on to stay friendly, or at least not to deliberately rid itself of its friendliness. This point is elementary. A friend who seeks to transform himself into somebody who wants to hurt you is not your friend. A true friend, one who really cares about you, also seeks the continuation of his caring for you. Or to put it a different way: if your top goal is X, and if you think that by changing yourself into someone who instead wants Y you would make it less likely that X will be achieved, then you will not rationally transform yourself into someone who wants Y. The set of options at each point in time is evaluated on the basis of their consequences for the realization of the goals held at that time, and generally it will be irrational to deliberately change one's own top goal, since that would make it less likely that the current goals will be attained.
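
A toy numeric illustration of this argument (Python; the probabilities are invented solely to show its structure, not derived from anything):

# The agent's current top goal is X, so it scores each option by the probability
# that X ends up achieved. The numbers are made up for illustration only.
options = {
    "keep top goal X":      0.9,   # a future self still pursuing X usually achieves X
    "switch top goal to Y": 0.1,   # a future self pursuing Y rarely brings X about
}

best = max(options, key=options.get)
print(best)   # "keep top goal X": self-modification away from X is rejected by X-based evaluation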

In humans, with our complicated evolved mental ecology of state-dependent competing drives, desires, plans, and ideals, there is often no obvious way to identify what our top goal is; we might not even have one. So for us, the above reasoning need not apply. But a superintelligence may be structured differently. If a superintelligence has a definite, declarative goal-structure with a clearly identified top goal, then the above argument applies. And this is a good reason for us to build the superintelligence with such an explicit motivational architecture.

5. SHOULD DEVELOPMENT BE DELAYED OR ACCELERATED?

It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine[7], or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and living closer to our ideals.

The risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. One way in which this could happen is that the creators of the superintelligence decide to build it so that it serves only a select group of humans, rather than humanity in general. Another way for it to happen is that a well-meaning team of programmers makes a big mistake in designing its goal system. This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities. More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it.

One consideration that should be taken into account when deciding whether to promote the development of superintelligence is that if superintelligence is feasible, it will likely be developed sooner or later. Therefore, we will probably one day have to take the gamble of superintelligence no matter what. But once in existence, a superintelligence could help us reduce or eliminate other existential risks[8], such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on earth. If we get to superintelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence. The overall risk seems to be minimized by implementing superintelligence, with great care, as soon as possible.

REFERENCES

Bostrom, N. (1998). "How Long Before Superintelligence?" International Journal of Futures Studies, 2. http://www.nickbostrom.com/superintelligence.html

Bostrom, N. (2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards." Journal of Evolution and Technology, 9. http://www.nickbostrom.com/existential/risks.html

Drexler, K. E. (1986). Engines of Creation: The Coming Era of Nanotechnology. New York: Anchor Books. http://www.foresight.org/EOC/index.html

Freitas Jr., R. A. (1999). Nanomedicine, Volume 1: Basic Capabilities. Georgetown, TX: Landes Bioscience. http://www.nanomedicine.com

Hanson, R., et al. (1998). "A Critical Discussion of Vinge's Singularity Concept." Extropy Online. http://www.extropy.org/eo/articles/vi.html

Kurzweil, R. (1999). The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York: Viking.

Moravec, H. (1999). Robot: Mere Machine to Transcendent Mind. New York: Oxford University Press.

Vinge, V. (1993). "The Coming Technological Singularity." Whole Earth Review, Winter issue.

Yudkowsky, E. (2002). "The AI Box Experiment." Webpage. http://sysopmind.com/essays/aibox.html

Yudkowsky, E. (2003). Creating Friendly AI 1.0. http://www.singinst.org/CFAI/index.html

View original post here:

Ethical Issues In Advanced Artificial Intelligence

Posted in Superintelligence | Comments Off on Ethical Issues In Advanced Artificial Intelligence

Top Ten Cybernetic Upgrades Everyone Will Want

Posted: at 4:58 am

by Lifeboat Foundation Scientific Advisory Board member Michael Anissimov. Science fiction, computer games, anime: cyborgs are everywhere. Transhumanists are philosophers who believe that one day, cybernetic upgrades will be so powerful, elegant, and inexpensive that everyone will want them. This report lists ten major upgrades that I think will be adopted by 2050.

The List

10. Disease Immunity

Between 20 and 40 years into the future, we will become capable of building artificial antibodies that outperform their natural equivalents. Instead of using chemical signaling that relies on diffusion to reach its target, these antibodies will communicate with rapid acoustic pulses. Instead of proteins, they will be made using much more durable polymers or even diamond. These antibodies will move through the bloodstream more quickly than other cells in the body, and will take up less space and resources, meaning that there will be room for many more. Using super-biological methods for identifying and neutralizing foreign viruses and bacteria, these tiny robots will still function in harmony with our own bodies. They will probably be powered either by glucose, ATP (like natural antibodies), or acoustically. There are already bloodborne microbots today which are not rejected by the immune system; these are the precursors of tomorrow's nanorobotics. Through their presence and continued operation, they will eliminate all susceptibility to disease in those who have them running through their veins. This will not make people immortal, but it will allow them to walk into a room contaminated with a flesh-eating virus in nothing but a pair of shorts and a T-shirt. For more on artificial antibodies and other body-integrated nanites, see Nanomedicine.

9. Telemicroscopic, Full-Spectrum Vision

There are microscopes that weigh one tenth of an ounce. Some birds of prey have vision so sharp that they can spot a hare a mile away. We have compact devices that can scan the electromagnetic spectrum from x-rays to radio waves, and everything in between. Our eyes in their current form can do none of these things. But in time, they will be upgraded. There are already prosthetic retinas that can provide low-resolution artificial vision for blind people. It's simply a matter of time until better prosthetic eyes are created, with sharpness, contrast, and resolution superior to what evolution gave us. The biggest challenge may end up not being building a superior artificial eye, but remodeling the visual cortex so that it can process the new information and relay it to the rest of the brain in such a way that it's not overwhelmed.

8. Telepathy/Brain-Computer Interfacing

Ever wanted to send someone a message with nothing but your mind, or have a neural implant that gives your brain direct access to Google? Hundreds of corporate and academic labs across the world are working on projects that generate progress in this area. Check out the Berlin Brain-Computer Interface, which lets you move the cursor around on a screen with only your EEG waves and 20 minutes of training. Miniature fMRI will allow us to continue increasing the bandwidth between brain and computer, eventually allowing for a mental typewriter that converts thoughts into text. A tiny transmitter could send this to a bone-conduction device on the receiving person, letting them hear the message without sound. NASA is also working on a device to transcribe silent, subvocal speech. Like many transhumanist upgrades, these will probably start as efforts to help people who are handicapped, then evolve into powerful tools that can be used by anyone bold enough to adopt them.

7. Super-Strength

Early in 2006, scientists at the University of Texas at Dallas, led by Dr. Ray H. Baughman, developed artificial muscles 100 times stronger than our own, powered by alcohol and hydrogen. Leonid Taranenko, the former Soviet weightlifter, holds the world record for power lifting a 266 kg (586 lbs) dumbbell. If Leo's natural muscles were replaced with Dr. Baughman's synthetic polymer muscles, he could lift 26,600 kg, or about 30 tons. That's equivalent to a yacht like the Nova Spirit. Super-strength is an interesting area in that the technology to do it has already been invented; the only step remaining is actually weaving the fiber into a human body, which, today, would be complicated and messy, not to mention probably illegal. However, that doesn't mean that it won't be done, probably within the next couple of decades. Further improvements to the process could make it safe for normal people, numerous ethics questions notwithstanding. One benefit of improved muscles is that we'd be far less vulnerable to unfortunate accidents. They could also provide armor against bullets or other forms of attack. One downside is that people could use them to bully others around. Guess the good guys will need even bigger muscles.

6. Improved Appearance

In general, there is a lot of agreement as to who is attractive and who is less so. Numerous experiments have shown that while there are slight subjective differences in who we want to get with, we are biologically programmed to look for certain facial and physical features that correlate with increased fitness. For the time being, this is unavoidable. The only way to change it would be to reach inside our neural circuitry and start severing connections. Until we choose to do that, we can improve our own lives and the lives of those who have to look at us by looking as pretty or handsome as possible. We brush our teeth, keep fit, take showers, and all that other great stuff that helps us score. Some of us even visit the plastic surgeon, with mixed results. Surveys show that certain procedures, like liposuction, have very high patient satisfaction rates. As the safety and precision of our body modification technologies improve, we'll be able to change our faces and bodies with minimal fuss and maximal benefit. Everyone will be able to be stunningly attractive. And the really great thing? We'll always be able to enjoy it. If everyone becomes attractive, we won't regard the slightly less attractive of the lot as ugly; our brain doesn't work that way. An attractive person is attractive, whether or not others are around. A planet full of attractive people could do a lot to improve our quality of life.

5. Psychokinesis

In the real world, psychokinesis is a bunch of wishful thinking and pseudoscience. Despite the roughly 30% of people who think that it's possible to affect objects through the mind alone, history and evidence make it clear that this is total nonsense. There are no psychics and there never have been. However, that doesn't mean that we can't create technopsychics artificially. By 2030, we'll be cranking out utility fog: swarms of tiny machines that fly through the air and interlock with robotic arms. By combining brain-computer interfaces, like the type used by Claudia Mitchell to move her prosthetic arm, with utility fog, we will have direct-thought connections with powerful external robotics, allowing non-fictional psychokinesis. Utility fog, once all the necessary software for it is developed, will be capable of cooperating to perform practically any physical task or simulate a wide range of materials. Because utility fog could be distributed at low density and still accomplish a lot, a room filled with utility fog would look empty, and people in it could move and breathe normally. They would only notice once the fog is activated, either by a central computer or a neural interface. Once a connection is achieved, practically anything could be accomplished with the proper programming. Throwing objects through the air, hovering over the ground, cracking an egg from across the room, materializing orbs of energy: all the antics we've always wanted to perform but never had the means to.

4. Autopoiesis/Allopoiesis

Autopoiesis is Greek for self-creation; allopoiesis is other-creation. Our body engages in both all the time: we start as a fetus that creates itself until it becomes an adult, then essentially stops. Our body produces things external to itself, but usually through an extended process of cooperation with thousands of other human beings and the entire economy. In the future, there will be cybernetic upgrades that allow for personal autopoietic and allopoietic manufacturing, probably based on molecular nanotechnology. Using whatever raw material is available, complex construction routines, and internal nanomanufacturing units, we'll be able to literally breathe life into dirt. If our arms or legs get blown off, we'll be able to use manufacturing modules in other parts of our body to regenerate them. Instead of building robots in a factory, we'll build them ourselves. The possibilities are quite expansive, but this would require technology more sophisticated than anything discussed thus far in this list.

3. Flight

Human flight outside of an airplane was recently achieved by former military pilot Yves Rossy, who flew 7,750 ft above the Alps in his 10 ft wide, self-designed aerofoil. You can see a video of it here. The airfoil weighs only 110 lbs and cost just under $300,000. Over the next few decades, the weight will come down, the strength and flexibility will go up, and eventually it will be difficult to distinguish between people in aerofoils and people that can just fly whenever they want. Using high strength-to-weight materials like fullerenes, we will fly using wings that weigh only a fraction of our own weight and fold into our clothing or body when not in use. Rossy achieved speeds of 115 mph, but with superior materials and greater tolerance for acceleration and wind, our cybernetic flight speeds are more likely to top 500 mph. To take off from the ground, we'll simply use our super-muscles to jump to the highest object around and begin our flight from there. With personal flight, commercial airliners will become obsolete. The only problem left will be dodging each other.

2. Superintelligence

When we think of superintelligence, we tend to think of the ways it is portrayed in fiction: the character able to multiply six fifty-digit numbers in his head, learn ten languages in a month, repeat the catch phrase "That's not logical," and other tired cliches. True superintelligence would be something radically different: a person able to see the obvious solution that the entire human race missed, conceive of and implement advanced plans or concepts that the greatest geniuses would never think of, understand and rewrite its own cognitive processes on the most fundamental level, and so on. A cybernetic superintelligence would not just be another genius human; it would be something entirely superhuman, something that could completely change the world overnight. For the same reason that we can't write a book with a character smarter than ourselves, we can't imagine the thoughts or actions of a true superintelligence, because they'd be beyond us. Whether it is developed through uploading, neuroengineering, or artificial intelligence remains to be seen.

1. Immortality

The ultimate upgrade would be physical immortality. Everything else pales by comparison. Today, there are already entire movements based around the idea. Realizing the possibility of immortality requires seeing a human being as a physical system composed of working parts that cooperate to make up the whole, some of which have the tendency to get old and break down. Cambridge biogerontologist Aubrey de Grey has identified seven causes of aging, which are believed to be comprehensive, because it's been decades since a degenerative process with an unknown cause has been observed in the body. Defeating aging, then, would simply require addressing these one by one. They are: cell depletion, supernumerary cells, chromosomal mutations, mitochondrial mutations, cellular junk, extracellular junk, and protein cross-links. A few pioneering researchers are looking towards solutions, but accepting the possibility requires looking at aging as a disease and not as a necessary component of life.

Well then, that just about wraps up our list. See you in 2050, alright?

Read more:

Top Ten Cybernetic Upgrades Everyone Will Want

Posted in Superintelligence | Comments Off on Top Ten Cybernetic Upgrades Everyone Will Want

Superintelligence: Paths, Dangers, Strategies – Wikipedia …

Posted: June 13, 2016 at 12:53 pm

Superintelligence: Paths, Dangers, Strategies (2014) is a book by Swedish philosopher Nick Bostrom from the University of Oxford. It argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists.[1] As the fate of gorillas now depends more on humans than on the actions of gorillas themselves, so will the fate of future humanity depend on the actions of the machine superintelligence.[2] The outcome could be an existential catastrophe for humans.[3]

Bostrom's book has been translated into many languages and is available as an audiobook.[4][5]

It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.

While the ultimate goals of superintelligences can vary greatly, a functional superintelligence will spontaneously generate, as natural subgoals, "instrumental goals" such as self-preservation and goal-content integrity, cognitive enhancement, and resource acquisition. For example, an agent whose sole final goal is to solve the Riemann hypothesis (a famous unsolved, mathematical conjecture) could create, and act upon, a subgoal of transforming the entire Earth into some form of computronium (hypothetical "programmable matter") to assist in the calculation. The superintelligence would proactively resist any outside attempts to turn the superintelligence off or otherwise prevent its subgoal completion. In order to prevent such an existential catastrophe, it might be necessary to successfully solve the "AI control problem" for the first superintelligence. The solution might involve instilling the superintelligence with goals that are compatible with human survival and well-being. Solving the control problem is surprisingly difficult because most goals, when translated into machine-implementable code, lead to unforeseen and undesirable consequences.

The book ranked #17 on the New York Times list of best-selling science books for August 2014.[6] In the same month, business magnate Elon Musk made headlines by agreeing with the book that artificial intelligence is potentially more dangerous than nuclear weapons.[7][8][9] Bostrom's work on superintelligence has also influenced Bill Gates's concern for the existential risks facing humanity over the coming century.[10][11] In a March 2015 interview with Baidu's CEO, Robin Li, Gates claimed he would "highly recommend" Superintelligence.[12]

The science editor of the Financial Times found that Bostrom's writing "sometimes veers into opaque language that betrays his background as a philosophy professor" but convincingly demonstrates that the risk from superintelligence is large enough that society should start thinking now about ways to endow future machine intelligence with positive values.[1] A review in The Guardian pointed out that "even the most sophisticated machines created so far are intelligent in only a limited sense" and that "expectations that AI would soon overtake human intelligence were first dashed in the 1960s", but finds common ground with Bostrom in advising that "one would be ill-advised to dismiss the possibility altogether".[3]

Some of Bostrom's colleagues suggest that nuclear war presents a greater threat to humanity than superintelligence, as does the future prospect of the weaponisation of nanotechnology and biotechnology.[13] The Economist stated that "Bostrom is forced to spend much of the book discussing speculations built upon plausible conjecture... but the book is nonetheless valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote."[14] Ronald Bailey wrote in the libertarian Reason that Bostrom makes a strong case that solving the AI control problem is the "essential task of our age".[15] According to Tom Chivers of The Daily Telegraph, the book is difficult to read, but nonetheless rewarding.[16]

Go here to read the rest:

Superintelligence: Paths, Dangers, Strategies - Wikipedia ...

Posted in Superintelligence | Comments Off on Superintelligence: Paths, Dangers, Strategies – Wikipedia …
