Nick Bostrom’s Home Page

Posted: June 19, 2016 at 3:44 am

ETHICS & POLICY

Astronomical Waste: The Opportunity Cost of Delayed Technological Development Suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale. These are resources that an advanced civilization could have used to create value-structures, such as sentient beings living worthwhile lives... [Utilitas, Vol. 15, No. 3 (2003): 308-314] [translation: Russian] [html] [pdf]

Human Enhancement Original essays by various prominent moral philosophers on the ethics of human enhancement. [Eds. Nick Bostrom & Julian Savulescu (Oxford University Press, 2009)].

Enhancement Ethics: The State of the Debate The introductory chapter from the book (w/ Julian Savulescu): 1-22 [pdf]

TRANSHUMANISM

Transhumanist Values Wonderful ways of being may be located in the "posthuman realm", but we can't reach them. If we enhance ourselves using technology, however, we can go out there and realize these values. This paper sketches a transhumanist axiology. [Ethical Issues for the 21st Century, ed. Frederick Adams, Philosophical Documentation Center Press, 2003; reprinted in Review of Contemporary Philosophy, Vol. 4, May (2005)] [translations: Polish, Portuguese] [html] [pdf]

RISK & THE FUTURE

Global Catastrophic Risks Twenty-six leading experts look at the gravest risks facing humanity in the 21st century, including natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses over-arching issues: policy responses and methods for predicting and managing catastrophes. Foreword by Lord Martin Rees. [Eds. Nick Bostrom & Milan Cirkovic (Oxford University Press, 2008)]. Introduction chapter free here [pdf]

TECHNOLOGY ISSUES

THE NEW BOOK

"I highly recommend this book."Bill Gates

"terribly important ... groundbreaking" "extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplinesengineering, natural sciences, medicine, social sciences and philosophyinto a comprehensible whole" "If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Springfrom 1962, or ever."Olle Haggstrom, Professor of Mathematical Statistics

"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. ... It marks the beginning of a new era."Stuart Russell, Professor of Computer Science, University of California, Berkley

"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." Martin Rees, Past President, Royal Society

"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes"Elon Musk

"There is no doubting the force of [Bostrom's] arguments ... the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." Financial Times

"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" Professor Max Tegmark, MIT

"a damn hard read" The Telegraph

ANTHROPICS & PROBABILITY

Cars In the Other Lane Really Do Go Faster When driving on the motorway, have you ever wondered about (and cursed!) the fact that cars in the other lane seem to be getting ahead faster than you? One might be tempted to account for this by invoking Murphy's Law ("If anything can go wrong, it will", discovered by Edward A. Murphy, Jr, in 1949). But there is an alternative explanation, based on observational selection effects... [PLUS, No. 17 (2001)]

PHILOSOPHY OF MIND

DECISION THEORY

BIO

Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and the academic book Superintelligence: Paths, Dangers, Strategies (OUP, 2014), which became a New York Times bestseller. He is best known for his work in five areas: (i) existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) impacts of future technology, especially machine intelligence; and (v) implications of consequentialism for global strategy.

He is the recipient of a Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed on Foreign Policy's Top 100 Global Thinkers list twice; and he was included on Prospect magazine's World Thinkers list, the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 24 languages. There have been more than 100 translations and reprints of his works.

BACKGROUND

I was born in Helsingborg, Sweden, and grew up by the seashore. I was bored in school. At age fifteen or sixteen I had an intellectual awakening, and feeling that I had wasted the first one and a half decades of my life, I resolved to focus on what was important. Since I did not know what was important, and I did not know how to find out, I decided to start by trying to place myself in a better position to find out. So I began a project of intellectual self-development, which I pursued with great intensity for the next one and a half decades.

As an undergraduate, I studied many subjects in parallel, and I gather that my performance set a national record. I was once expelled for studying too much, after the head of the Umeå University psychology department discovered that I was concurrently following several other full-time programs of study (physics, philosophy, and mathematical logic), which he believed to be psychologically impossible.

For my postgraduate work, I went to London, where I studied physics and neuroscience at King's College, and obtained a PhD from the London School of Economics. For a while I did a bit of stand-up comedy on the vibrant London pub and theatre circuit.

During those years, I co-founded, with David Pearce, the World Transhumanist Association, a nonprofit grassroots organization. Later, I was involved in founding the Institute for Ethics and Emerging Technologies, a nonprofit virtual think tank. The objective was to stimulate wider discussion about the implications of future technologies, in particular technologies that might lead to human enhancement. (These organizations have since developed on their own trajectories, and it is very much not the case that I agree with everything said by those who flock under the transhumanist flag.)

Since 2006, I've been the founding director of the Future of Humanity Institute at Oxford University. This unique multidisciplinary research institute aims to enable a select set of intellects to apply careful thinking to big-picture questions for humanity and global priorities. The Institute belongs to the Faculty of Philosophy and the Oxford Martin School. Since 2015, I also direct the Strategic Artificial Intelligence Research Center.

I am in a very fortunate position. I have no teaching duties. I am supported by a staff of assistants and brilliant research fellows. There are virtually no restrictions on what I can work on. I must try very hard to be worthy of this privilege and to cast some light on matters that matter.

CONTACT

For administrative matters, scheduling, and invitations, please contact my assistant, Kyle Scott:

Email: fhipa[atsign]philosophy[dot]ox[dot]ac[dot]uk Phone: +44 (0)1865 286800

If you need to contact me directly (I regret I am unable to respond to all emails): nick[atsign]nickbostrom[dot]com.

VIRTUAL ESTATE

http://www.fhi.ox.ac.uk - Future of Humanity Institute

http://www.anthropic-principle.com - Papers on observational selection effects

http://www.simulation-argument.com - Devoted to the question, "Are you living in a computer simulation?"

http://www.existential-risk.org - Human extinction scenarios and related concerns

On the bank at the end
Of what was there before us
Gazing over to the other side
On what we can become
Veiled in the mist of naive speculation
We are busy here preparing
Rafts to carry us across
Before the light goes out leaving us
In the eternal night of could-have-been

CRUCIAL CONSIDERATIONS

A thread that runs through my work is a concern with "crucial considerations". A crucial consideration is an idea or argument that might plausibly reveal the need for not just some minor course adjustment in our practical endeavours but a major change of direction or priority.

If we have overlooked even just one such consideration, then all our best efforts might be for naught - or less. When headed the wrong way, the last thing needed is progress. It is therefore important to pursue such lines of inquiry as might disclose an unnoticed crucial consideration.

Some of the relevant inquiries are about moral philosophy and values. Others have to do with rationality and reasoning under uncertainty. Still others pertain to specific issues and possibilities, such as existential risks, the simulation hypothesis, human enhancement, infinite utilities, anthropic reasoning, information hazards, the future of machine intelligence, or the singularity hypothesis.

High-leverage questions associated with crucial considerations deserve to be investigated. My research interests are quite wide-ranging; yet they all stem from the quest to understand the big picture for humanity, so that we can more wisely choose what to aim for and what to do. Embarking on this quest has seemed the best way to try to make a positive contribution to the world.

SOME VIDEOS AND LECTURES

SOME ADDITIONAL (OLD, COBWEBBED) PAPERS


INTERVIEWS

POLICY

MISCELLANEOUS

words trying extra-hard to be more than just words...
