{"id":68641,"date":"2016-06-19T03:44:55","date_gmt":"2016-06-19T07:44:55","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/nick-bostroms-home-page\/"},"modified":"2016-06-19T03:44:55","modified_gmt":"2016-06-19T07:44:55","slug":"nick-bostroms-home-page","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/nick-bostroms-home-page\/","title":{"rendered":"Nick Bostrom&#8217;s Home Page"},"content":{"rendered":"<p><p>ETHICS & POLICY<\/p>\n<p>Astronomical Waste: The Opportunity Cost of Delayed Technological Development Suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale. These are resources that an advanced civilization could have used to create value-structures, such as sentient beings living worthwhile lives... [Utilitas, Vol. 15, No. 3 (2003): 308-314] [translation: Russian] [html] [pdf]<\/p>\n<p>Human Enhancement Original essays by various prominent moral philosophers on the ethics of human enhancement. [Eds. Nick Bostrom & Julian Savulescu (Oxford University Press, 2009)].<\/p>\n<p>Enhancement Ethics: The State of the Debate The introductory chapter from the book (w\/ Julian Savulescu): 1-22 [pdf]<\/p>\n<\/p>\n<p>TRANSHUMANISM<\/p>\n<p>Transhumanist Values Wonderful ways of being may be located in the \"posthuman realm\", but we can't reach them. If we enhance ourselves using technology, however, we can go out there and realize these values. 
This paper sketches a transhumanist axiology. [Ethical Issues for the 21st Century, ed. Frederick Adams, Philosophical Documentation Center Press, 2003; reprinted in Review of Contemporary Philosophy, Vol. 4, May (2005)] [translations: Polish, Portuguese] [html] [pdf]<\/p>\n<p>RISK & THE FUTURE<\/p>\n<p>Global Catastrophic Risks Twenty-six leading experts look at the gravest risks facing humanity in the 21st century, including natural catastrophes, nuclear war, terrorism, global warming, biological weapons, totalitarianism, advanced nanotechnology, general artificial intelligence, and social collapse. The book also addresses over-arching issues: policy responses and methods for predicting and managing catastrophes. Foreword by Lord Martin Rees. [Eds. Nick Bostrom & Milan Ćirković (Oxford University Press, 2008)]. Introduction chapter free here [pdf]<\/p>\n<\/p>\n<p>TECHNOLOGY ISSUES<\/p>\n<p>THE NEW BOOK<\/p>\n<\/p>\n<p>\"I highly recommend this book.\" Bill Gates<\/p>\n<p>\"terribly important ... 
groundbreaking\" \"extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines (engineering, natural sciences, medicine, social sciences and philosophy) into a comprehensible whole\" \"If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever.\" Olle Häggström, Professor of Mathematical Statistics<\/p>\n<p>\"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. ... It marks the beginning of a new era.\" Stuart Russell, Professor of Computer Science, University of California, Berkeley<\/p>\n<p>\"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book.\" Martin Rees, Past President, Royal Society<\/p>\n<p>\"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes\" Elon Musk<\/p>\n<p>\"There is no doubting the force of [Bostrom's] arguments ... the problem is a research challenge worthy of the next generation's best mathematical talent. 
Human civilisation is at stake.\" Financial Times<\/p>\n<p>\"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?\" Professor Max Tegmark, MIT<\/p>\n<p>\"a damn hard read\" The Telegraph<\/p>\n<\/p>\n<p>ANTHROPICS & PROBABILITY<\/p>\n<p>Cars In the Other Lane Really Do Go Faster When driving on the motorway, have you ever wondered about (and cursed!) the fact that cars in the other lane seem to be getting ahead faster than you? One might be tempted to account for this by invoking Murphy's Law (\"If anything can go wrong, it will\", discovered by Edward A. Murphy, Jr, in 1949). But there is an alternative explanation, based on observational selection effects... [PLUS, No. 17 (2001)]<\/p>\n<\/p>\n<p>PHILOSOPHY OF MIND<\/p>\n<\/p>\n<p>DECISION THEORY<\/p>\n<p>BIO<\/p>\n<p>Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy. 
He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and the academic book Superintelligence: Paths, Dangers, Strategies (OUP, 2014), which became a New York Times bestseller. He is best known for his work in five areas: (i) existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) impacts of future technology, especially machine intelligence; and (v) implications of consequentialism for global strategy.<\/p>\n<p>He is a recipient of a Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed on Foreign Policy's Top 100 Global Thinkers list twice; and he was included on Prospect magazine's World Thinkers list, the youngest person in the top 15 from all fields and the highest-ranked analytic philosopher. His writings have been translated into 24 languages. There have been more than 100 translations and reprints of his works.<\/p>\n<p>BACKGROUND<\/p>\n<p>I was born in Helsingborg, Sweden, and grew up by the seashore. I was bored in school. 
At age fifteen or sixteen I had an intellectual awakening, and feeling that I had wasted the first one and a half decades of my life, I resolved to focus on what was important. Since I did not know what was important, and I did not know how to find out, I decided to start by trying to place myself in a better position to find out. So I began a project of intellectual self-development, which I pursued with great intensity for the next one and a half decades.<\/p>\n<p>As an undergraduate, I studied many subjects in parallel, and I gather that my performance set a national record. I was once expelled for studying too much, after the head of the Umeå University psychology department discovered that I was concurrently following several other full-time programs of study (physics, philosophy, and mathematical logic), which he believed to be psychologically impossible.<\/p>\n<p>For my postgraduate work, I went to London, where I studied physics and neuroscience at King's College, and obtained a PhD from the London School of Economics. For a while I did a little bit of stand-up comedy on the vibrant London pub and theatre circuit.<\/p>\n<p>During those years, I co-founded, with David Pearce, the World Transhumanist Association, a nonprofit grassroots organization. 
Later, I was involved in founding the Institute for Ethics and Emerging Technologies, a nonprofit virtual think tank. The objective was to stimulate wider discussion about the implications of future technologies, in particular technologies that might lead to human enhancement. (These organizations have since developed on their own trajectories, and it is very much not the case that I agree with everything said by those who flock under the transhumanist flag.)<\/p>\n<p>Since 2006, I've been the founding director of the Future of Humanity Institute at Oxford University. This unique multidisciplinary research institute aims to enable a select set of intellects to apply careful thinking to big-picture questions for humanity and global priorities. The Institute belongs to the Faculty of Philosophy and the Oxford Martin School. Since 2015, I also direct the Strategic Artificial Intelligence Research Center.<\/p>\n<p>I am in a very fortunate position. I have no teaching duties. I am supported by a staff of assistants and brilliant research fellows. There are virtually no restrictions on what I can work on. I must try very hard to be worthy of this privilege and to cast some light on matters that matter. 
<\/p>\n<p>CONTACT<\/p>\n<p>For administrative matters, scheduling, and invitations, please contact my assistant, Kyle Scott:<\/p>\n<p>Email: fhipa[atsign]philosophy[dot]ox[dot]ac[dot]uk Phone: +44 (0)1865 286800<\/p>\n<p>If you need to contact me directly (I regret I am unable to respond to all emails): nick[atsign]nickbostrom[dot]com.<\/p>\n<\/p>\n<p>VIRTUAL ESTATE<\/p>\n<p><a href=\"http:\/\/www.fhi.ox.ac.uk\" rel=\"nofollow\">http:\/\/www.fhi.ox.ac.uk<\/a> Future of Humanity Institute<\/p>\n<p><a href=\"http:\/\/www.anthropic-principle.com\" rel=\"nofollow\">http:\/\/www.anthropic-principle.com<\/a> Papers on observational selection effects<\/p>\n<p><a href=\"http:\/\/www.simulation-argument.com\" rel=\"nofollow\">http:\/\/www.simulation-argument.com<\/a> Devoted to the question, \"Are you living in a computer simulation?\"<\/p>\n<p><a href=\"http:\/\/www.existential-risk.org\" rel=\"nofollow\">http:\/\/www.existential-risk.org<\/a> Human extinction scenarios and related concerns<\/p>\n<\/p>\n<\/p>\n<p>On the bank at the end<br \/>Of what was there before us<br \/>Gazing over to the other side<br \/>On what we can become<br \/>Veiled in the mist of naïve speculation<br \/>We are busy here preparing<br \/>Rafts to carry us across<br \/>Before the light goes out leaving us<br \/>In 
the eternal night of could-have-been<\/p>\n<p>CRUCIAL CONSIDERATIONS<\/p>\n<p>A thread that runs through my work is a concern with \"crucial considerations\". A crucial consideration is an idea or argument that might plausibly reveal the need for not just some minor course adjustment in our practical endeavours but a major change of direction or priority.<\/p>\n<p>If we have overlooked even just one such consideration, then all our best efforts might be for naught, or less. When headed the wrong way, the last thing needed is progress. It is therefore important to pursue such lines of inquiry as might disclose an unnoticed crucial consideration.<\/p>\n<p>Some of the relevant inquiries are about moral philosophy and values. Others have to do with rationality and reasoning under uncertainty. Still others pertain to specific issues and possibilities, such as existential risks, the simulation hypothesis, human enhancement, infinite utilities, anthropic reasoning, information hazards, the future of machine intelligence, or the singularity hypothesis.<\/p>\n<p>High-leverage questions associated with crucial considerations deserve to be investigated. 
My research interests are quite wide-ranging; yet they all stem from the quest to understand the big picture for humanity, so that we can more wisely choose what to aim for and what to do. Embarking on this quest has seemed the best way to try to make a positive contribution to the world.<\/p>\n<\/p>\n<p>SOME VIDEOS AND LECTURES<\/p>\n<\/p>\n<p>SOME ADDITIONAL (OLD, COBWEBBED) PAPERS<\/p>\n<p>On this page.<\/p>\n<p>INTERVIEWS<\/p>\n<\/p>\n<\/p>\n<p>POLICY<\/p>\n<p>MISCELLANEOUS<\/p>\n<\/p>\n<p>words trying extra-hard to be more than just words...<\/p>\n<p>Read the original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/nickbostrom.com\/\" title=\"Nick Bostrom's Home Page\">Nick Bostrom's Home Page<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> ETHICS &#038; POLICY Astronomical Waste: The Opportunity Cost of Delayed Technological Development Suns are illuminating and heating empty rooms, unused energy is being flushed down black holes, and our great common endowment of negentropy is being irreversibly degraded into entropy on a cosmic scale.  
<a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/nick-bostroms-home-page\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187765],"tags":[],"class_list":["post-68641","post","type-post","status-publish","format-standard","hentry","category-superintelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/68641"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=68641"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/68641\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=68641"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=68641"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=68641"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}