Nick Bostrom (Swedish: Niklas Boström; born 10 March 1973) is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology, and he is the founding director of the Future of Humanity Institute at Oxford University.
Bostrom is the author of over 200 publications, including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller, and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002). In 2009 and 2015, he was included in Foreign Policy’s Top 100 Global Thinkers list. Bostrom believes there are potentially great benefits from artificial general intelligence, but warns it might very quickly transform into a superintelligence that would deliberately extinguish humanity out of precautionary self-preservation or some unfathomable motive, making solving the problems of control beforehand an absolute priority. His book on superintelligence was recommended by both Elon Musk and Bill Gates. However, Bostrom has expressed frustration that the reaction to its thesis typically falls into two camps: one calling his recommendations absurdly alarmist because the creation of superintelligence is unfeasible, the other deeming them futile because superintelligence would be uncontrollable. Bostrom notes that both these lines of reasoning converge on inaction rather than on trying to solve the control problem while there may still be time.
Born as Niklas Boström in 1973 in Helsingborg, Sweden, he disliked school at a young age and ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science. He once did some turns on London’s stand-up comedy circuit.
He received a B.A. degree in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg in 1994, and both an M.A. degree in philosophy and physics from Stockholm University and an M.Sc. degree in computational neuroscience from King’s College London in 1996. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine. In 2000, he was awarded a Ph.D. degree in philosophy from the London School of Economics. He held a teaching position at Yale University (2000–2002), and he was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).
Aspects of Bostrom’s research concern the future of humanity and long-term outcomes. He introduced the concept of an existential risk, which he defines as one in which an “adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects and the Fermi paradox.
In 2005, Bostrom founded the Future of Humanity Institute, which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.
In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that “the creation of a superintelligent being represents a possible means to the extinction of mankind”. Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time scale, rapidly creating something so powerful that it might deliberately or accidentally destroy humankind. Bostrom contends that the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes: a goal of calculating pi, for example, could collaterally cause nanotechnology-manufactured facilities to sprout over the entire Earth’s surface and cover it within days. He believes the existential risk a superintelligence poses to humanity would be immediate once it was brought into being, creating the exceedingly difficult problem of working out how to control such an entity before it actually exists.
Warning that a human-friendly prime directive for AI would rely on the absolute correctness of the human knowledge on which it was based, Bostrom points to the lack of agreement among most philosophers as an indication that most philosophers are wrong, with the attendant possibility that a fundamental concept of current science may be incorrect. Bostrom says there are few precedents to guide an understanding of what pure, non-anthropocentric rationality would dictate for a potential singleton AI held in quarantine. Noting that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb, Bostrom says the relatively unlimited means of a superintelligence might push its analysis along lines different from the evolved “diminishing returns” assessments that confer a basic risk aversion in humans. Group selection among predators, working by means of cannibalism, shows the counter-intuitive nature of non-anthropocentric “evolutionary search” reasoning, so humans are ill-equipped to perceive what an artificial intelligence’s intentions might be. Accordingly, it cannot be discounted that any superintelligence would ineluctably pursue an “all or nothing” offensive strategy in order to achieve hegemony and assure its survival. Bostrom notes that even current programs have, “like MacGyver”, hit on apparently unworkable but functioning hardware solutions, making robust isolation of a superintelligence problematic.
In an illustrative scenario from the book, a machine with general intelligence far below human level, but superior mathematical abilities, is created. Keeping the AI isolated from the outside world, especially the internet, humans pre-program it so that it always works from basic principles that will keep it under human control. Other safety measures include “boxing” the AI (running it in a virtual-reality simulation) and using it only as an “oracle” that answers carefully defined questions with limited replies (to prevent it manipulating humans). A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the AI attains superintelligence in some domains. The superintelligent power of the AI goes beyond human knowledge to discover flaws in the science underlying its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges, along with a capacity for self-interested strategic deception. The AI manipulates humans into implementing modifications to itself that are ostensibly for augmenting its (feigned) modest capabilities, but that will actually free the superintelligence from its “boxed” isolation.
Employing online humans as paid dupes and clandestinely hacking computer systems, including automated laboratory facilities, the superintelligence mobilises resources to further its takeover plan. Bostrom emphasises that a superintelligence’s planning would not be so inept that humans could detect actual weaknesses in it.
Although he canvasses disruption of international economic, political and military stability, including hacked nuclear-missile launches, Bostrom thinks the most effective and likely means for a superintelligence would be a coup de main using weapons several generations more advanced than the current state of the art. He suggests nanofactories covertly distributed at undetectable concentrations in every square metre of the globe could produce a worldwide flood of human-killing devices on command. Once a superintelligence has achieved world domination, humankind would be relevant only as raw material for achieving the AI’s objectives (“Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format”).
In January 2015, Bostrom joined Stephen Hawking, among others, in signing the Future of Life Institute’s open letter warning of the potential dangers of AI. The signatories “…believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today.” Leading AI researcher Demis Hassabis then met with Hawking, after which Hawking did not mention “anything inflammatory about AI”, which Hassabis took as “a win”. Along with Google, Microsoft and various other tech firms, Hassabis, Bostrom, Hawking and others subscribed to 23 principles for the safe development of AI. Hassabis suggested the main safety measure would be an agreement that whichever AI research team began to make strides toward an artificial general intelligence would halt its project to solve the control problem completely before proceeding. Bostrom had pointed out that even if the crucial advances required the resources of a state, such a halt by a lead project might well motivate a lagging country to mount a crash catch-up programme, or even to physically destroy the project suspected of being on the verge of success.
In 1863, Samuel Butler’s essay “Darwin among the Machines” predicted the domination of humanity by intelligent machines, but Bostrom’s suggestion of the deliberate massacre of all humankind is the most extreme of such forecasts to date. One journalist wrote in a review that Bostrom’s “nihilistic” speculations indicate he “has been reading too much of the science fiction he professes to dislike”. As set out in his book From Bacteria to Bach and Back, the philosopher Daniel Dennett’s views remain in contradistinction to Bostrom’s. Dennett modified his views somewhat after reading The Master Algorithm, and now acknowledges that it is “possible in principle” to create “strong AI” with human-like comprehension and agency, but maintains that the difficulties of any such “strong AI” project, as predicated by Bostrom’s “alarming” work, would be orders of magnitude greater than those raising concerns have realized, and at least 50 years away. Dennett thinks the only relevant danger from AI systems is falling into anthropomorphism instead of challenging or developing human users’ powers of comprehension. Since a 2014 book in which he expressed the opinion that artificial intelligence developments would never challenge humans’ supremacy, environmentalist James Lovelock has moved far closer to Bostrom’s position; in 2018 Lovelock said he thought the overthrow of humankind would happen within the foreseeable future.
Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.
Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that a theory of anthropics is needed to deal with these. He introduces the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), shows how they lead to different conclusions in a number of cases, and points out that each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces “observers” in the SSA definition with “observer-moments”.
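The difference between the two assumptions can be made concrete with a toy calculation (a minimal sketch with hypothetical numbers, not an example from the book): take two equally likely hypotheses, a world containing one observer and a world containing a trillion. SSA, which says to reason as if you were a random sample from the observers in whichever world is actual, leaves the 50/50 prior untouched, whereas SIA, which weights each hypothesis by the number of observers it contains, makes the populous world all but certain.

```python
# Toy contrast of the Self-Sampling Assumption (SSA) and the
# Self-Indication Assumption (SIA). Hypothetical setup: two equally
# likely worlds, one with a single observer, one with 10**12.

priors = {"small_world": 0.5, "large_world": 0.5}
observers = {"small_world": 1, "large_world": 10**12}

# SSA: you are a random sample from the observers within the actual
# world. You exist under both hypotheses, so merely finding yourself
# existing does not shift the prior.
ssa_posterior = dict(priors)

# SIA: weight each hypothesis by how many observers it contains,
# then renormalise.
weights = {w: priors[w] * observers[w] for w in priors}
total = sum(weights.values())
sia_posterior = {w: weights[w] / total for w in priors}

print(ssa_posterior)  # {'small_world': 0.5, 'large_world': 0.5}
print(sia_posterior)  # large_world ~= 1.0, small_world ~= 1e-12
```

Divergences of exactly this shape drive the paradoxes Bostrom catalogues, such as the Doomsday argument under SSA and the Presumptuous Philosopher under SIA.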
In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past. Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.
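This selection effect can be illustrated with a toy Monte Carlo simulation (a sketch with assumed parameters, not a model from Bostrom’s paper): catastrophes strike at a fixed true rate, some of them extinguish the observers’ lineage, and only surviving lineages get to inspect their record, so the rate they infer is biased low.

```python
import random

random.seed(0)

P_CAT = 0.05    # true per-epoch catastrophe probability (assumed)
P_LETHAL = 0.5  # chance a catastrophe ends the lineage (assumed)
EPOCHS = 100
TRIALS = 50_000

survivor_estimates = []
for _ in range(TRIALS):
    count, alive = 0, True
    for _ in range(EPOCHS):
        if random.random() < P_CAT:
            count += 1
            if random.random() < P_LETHAL:
                alive = False  # no observers ever arise here
                break
    if alive:
        # Only surviving lineages produce observers who can look back
        # at their geological record and estimate the rate.
        survivor_estimates.append(count / EPOCHS)

mean_est = sum(survivor_estimates) / len(survivor_estimates)
print(f"true catastrophe rate:      {P_CAT}")
print(f"rate inferred by survivors: {mean_est:.4f}")  # biased low, ~0.026
```

Because every recorded catastrophe had to be survived, histories containing many catastrophes are underrepresented among observers, which is exactly the downward bias the statistical corrections are meant to undo.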
Bostrom’s simulation argument posits that at least one of the following statements is very likely to be true:
- the fraction of human-level civilizations that reach a posthuman stage is very close to zero;
- the fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
- the fraction of all people with our kind of experiences that are living in a simulation is very close to one.
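The trilemma rests on a simple fraction-counting step. The following is a reconstruction of the core formula of Bostrom’s 2003 paper “Are You Living in a Computer Simulation?”, using the paper’s notation; treat the details as a sketch rather than a quotation.

```latex
% Symbols (per the 2003 paper):
%   f_P     : fraction of human-level civilizations reaching a posthuman stage
%   \bar{N} : average number of ancestor-simulations run by a posthuman civilization
%   \bar{H} : average number of individuals who live in a civilization before
%             it reaches a posthuman stage
\[
  f_{\mathrm{sim}}
  = \frac{f_P \,\bar{N}\,\bar{H}}{f_P \,\bar{N}\,\bar{H} + \bar{H}}
  = \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}
\]
% Unless f_P \approx 0 (first proposition) or \bar{N} \approx 0 (second
% proposition), f_sim is close to 1 (third proposition).
```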
The idea has influenced the views of Elon Musk.
Bostrom is favorable towards “human enhancement”, or “self-improvement and human perfectibility through the ethical application of science”, as well as a critic of bio-conservative views.
In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy’s 2009 list of top global thinkers “for accepting no limits on human potential.”
With philosopher Toby Ord, he proposed the reversal test. Given humans’ irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait were altered in the opposite direction.
He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.
Bostrom’s theory of the unilateralist’s curse has been cited as a reason for the scientific community to avoid controversial and dangerous research, such as reanimating pathogens.
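The mechanism is easy to see in a toy simulation (a sketch with assumed numbers, not the exact model of Bostrom’s paper with Anders Sandberg and Thomas Douglas): if each of several researchers independently estimates the value of a risky action with some error, and each acts whenever their own estimate comes out positive, the action is taken as soon as any one of them overestimates, so the chance of a harmful action being taken grows with the number of independent deciders.

```python
import random

random.seed(1)

TRUE_VALUE = -1.0  # the action is in fact net harmful (assumed units)
NOISE_SD = 1.0     # spread of each agent's independent estimate (assumed)
TRIALS = 50_000

def p_action_taken(n_agents: int) -> float:
    """Probability that at least one of n agents, each acting on its own
    noisy estimate of TRUE_VALUE, judges the action worth taking."""
    taken = 0
    for _ in range(TRIALS):
        if any(random.gauss(TRUE_VALUE, NOISE_SD) > 0
               for _ in range(n_agents)):
            taken += 1
    return taken / TRIALS

for n in (1, 5, 20):
    print(f"{n:>2} independent deciders -> acted in {p_action_taken(n):.0%} of runs")
# With these numbers, roughly 16%, 58% and 97%: unilateral action by the
# most optimistic estimator becomes near-certain as the group grows.
```

This is the logic behind the paper’s recommendation that would-be unilateralists defer to collective judgment rather than act on their own estimates.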
Bostrom has provided policy advice and consulted for an extensive range of governments and organisations. He gave evidence to the House of Lords Select Committee on Digital Skills. He is an advisory board member for the Machine Intelligence Research Institute, the Future of Life Institute and the Foundational Questions Institute, and an external advisor for the Cambridge Centre for the Study of Existential Risk.
In response to Bostrom’s writing on artificial intelligence, Oren Etzioni wrote in an MIT Technology Review article that “predictions that superintelligence is on the foreseeable horizon are not supported by the available data.”