Superintelligence – Wikipedia

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. The program Fritz falls short of superintelligence even though it is much better than humans at chess because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity, either as a single being or as a new species, to become much more powerful than humans and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself, a feature called “recursive self-improvement”. It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.
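
A toy numerical sketch (not from the article; the growth rate and exponent below are arbitrary assumptions) can make the “rapidly increasing cycle” concrete: if each round of self-improvement yields a gain proportional to current capability, growth is exponential, and if returns compound faster than that, capability runs away far more quickly.

```python
# Toy model of recursive self-improvement (illustrative assumptions only).
# Each round adds rate * capability**k to capability:
#   k = 1.0 gives exponential growth; k > 1 gives faster-than-exponential growth.

def self_improvement_trajectory(c0=1.0, rate=0.1, k=1.0, steps=25):
    """Return the capability level after each improvement round."""
    capability = c0
    trajectory = [capability]
    for _ in range(steps):
        capability += rate * capability**k  # this round's gain scales with current capability
        trajectory.append(capability)
    return trajectory

if __name__ == "__main__":
    proportional = self_improvement_trajectory(k=1.0)  # gains proportional to capability
    compounding = self_improvement_trajectory(k=1.5)   # gains compound faster than linearly
    print(f"k=1.0: capability after 25 rounds ~ {proportional[-1]:.1f}")
    print(f"k=1.5: capability after 25 rounds ~ {compounding[-1]:.0f}")
```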

Computer components already greatly surpass human performance in speed. Bostrom writes, “Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz).” Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, “whereas existing electronic processing cores can communicate optically at the speed of light”. Thus, the simplest example of a superintelligence may be an emulated human mind that’s run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
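
A quick arithmetic check of the figures quoted above; the firing rate, clock rate, conduction speed, and optical signalling speed come from the text, while the 0.3 m signalling distance is an assumed, roughly brain-scale example.

```python
# Rough arithmetic behind the speed comparison quoted above.
import math

NEURON_HZ = 200      # peak firing rate quoted for biological neurons
CPU_HZ = 2e9         # clock rate of a "modern microprocessor" (~2 GHz)
AXON_SPEED = 120     # m/s, upper bound quoted for axonal conduction
LIGHT_SPEED = 3e8    # m/s, optical signalling between processing cores

ratio = CPU_HZ / NEURON_HZ
print(f"clock-rate ratio: {ratio:.0e} (~{math.log10(ratio):.0f} orders of magnitude)")

DISTANCE = 0.3       # metres; assumed, roughly brain-scale signalling distance
print(f"axon signal over {DISTANCE} m:    {DISTANCE / AXON_SPEED * 1e3:.2f} ms")
print(f"optical signal over {DISTANCE} m: {DISTANCE / LIGHT_SPEED * 1e9:.1f} ns")
```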

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[9] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[12] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process is instead likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAOIs, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
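
The embryo-selection figures can be roughly reproduced by a toy order-statistics model: keep the best of n draws from a normal distribution. This sketch is not Bostrom's actual model; the 7.5-point standard deviation for the selectable component is an assumption chosen to match the quoted magnitudes, and the multi-generation case ignores diminishing returns.

```python
# Toy order-statistics model of embryo selection (illustrative; not Bostrom's model).
import random
import statistics

SIGMA = 7.5  # assumed SD, in IQ points, of the selectable component among sibling embryos

def selection_gain(n_embryos, generations=1, trials=10_000):
    """Average IQ gain from keeping the best of n embryos, iterated over generations."""
    gains = []
    for _ in range(trials):
        total = 0.0
        for _ in range(generations):
            # gain this generation = maximum of n independent draws
            total += max(random.gauss(0, SIGMA) for _ in range(n_embryos))
        gains.append(total)
    return statistics.mean(gains)

if __name__ == "__main__":
    print(f"best of 2:              ~{selection_gain(2):.1f} IQ points")
    print(f"best of 1000:           ~{selection_gain(1000):.1f} IQ points")
    # iterated selection (ignores diminishing returns and regression across generations)
    print(f"best of 10, 10 rounds:  ~{selection_gain(10, generations=10):.1f} IQ points")
```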

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain-computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able “to simulate learning and every other aspect of human intelligence” by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[17]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines “that can carry out most human professions at least as well as a typical human” (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said ‘never’ for 50% confidence, and the 16.5% who said ‘never’ for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals, including coherent extrapolated volition, moral rightness, and moral permissibility.

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

Learning computers that rapidly become superintelligent may take unforeseen actions, or robots might out-compete humanity (one potential technological singularity scenario).[21] Researchers have argued that, by way of an “intelligence explosion” sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.[22]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[23]

Eliezer Yudkowsky explains: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”[24]

This presents the AI control problem: how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not designing control right “the first time” is that a misprogrammed superintelligence might rationally decide to “take over the world” and refuse to permit its programmers to modify it once it has been activated. Potential design strategies include “capability control” (preventing an AI from being able to pursue harmful plans) and “motivational control” (building an AI that wants to be helpful).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.

Original post:

Superintelligence – Wikipedia

Superintelligence survey – Future of Life Institute

Max Tegmark's new book on artificial intelligence, Life 3.0: Being Human in the Age of Artificial Intelligence, explores how AI will impact life as it grows increasingly advanced, perhaps even achieving superintelligence far beyond human level in all areas. For the book, Max surveys experts' forecasts, and explores a broad spectrum of views on what will/should happen. But it's time to expand the conversation. If we're going to create a future that benefits as many people as possible, we need to include as many voices as possible. And that includes yours! Below are the answers from the first 14,866 people who have taken the survey that goes along with Max's book. To join the conversation yourself, please take the survey here.

The first big controversy, dividing even leading AI researchers, involves forecasting what will happen. When, if ever, will AI outperform humans at all intellectual tasks, and will it be a good thing?

Everything we love about civilization is arguably the product of intelligence, so we can potentially do even better by amplifying human intelligence with machine intelligence. But some worry that superintelligent machines would end up controlling us and wonder whether their goals would be aligned with ours. Do you want there to be superintelligent AI, i.e., general intelligence far beyond human level?

In his book, Tegmark argues that we shouldn't passively ask "what will happen?" as if the future is predetermined, but instead ask what we want to happen and then try to create that future. What sort of future do you want?

If superintelligence arrives, who should be in control?

If you one day get an AI helper, do you want it to be conscious, i.e., to have subjective experience (as opposed to being like a zombie which can at best pretend to be conscious)?

What should a future civilization strive for?

Do you want life spreading into the cosmos?

In Life 3.0, Max explores 12 possible future scenarios, describing what might happen in the coming millennia if superintelligence is/isn't developed. You can find a cheatsheet that quickly describes each here, but for a more detailed look at the positives and negatives of each possibility, check out chapter 5 of the book. Here's a breakdown so far of the options people prefer:

You can learn a lot more about these possible future scenarios, along with fun explanations about what AI is, how it works, how it's impacting us today, and what else the future might bring, when you order Max's new book.

The results above will be updated regularly. Please add your voice by taking the survey here, and share your comments below!

Read the rest here:

Superintelligence survey – Future of Life Institute

Nick Bostrom – Wikipedia

Nick Bostrom (Swedish: Niklas Boström; born 10 March 1973)[2] is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology,[3] and he is currently the founding director of the Future of Humanity Institute[4] at Oxford University.

Bostrom is the author of over 200 publications,[5] including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller[6] and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002).[7] In 2009 and 2015, he was included in Foreign Policy’s Top 100 Global Thinkers list.[8][9] Bostrom is best known for arguing that, although there are potentially great benefits from artificial intelligence, it may pose a catastrophic risk to humanity if the problems of control and alignment are not solved before artificial general intelligence is developed. His work on superintelligence and his concern for its existential risk to humanity over the coming century have brought both Elon Musk and Bill Gates to similar thinking.[10][11]

Born as Niklas Boström in 1973[12] in Helsingborg, Sweden,[5] he disliked school at a young age, and he ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science.[1] Despite what has been called a “serious mien”, he once did some turns on London's stand-up comedy circuit.[5]

He holds a B.A. in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg, as well as master's degrees in philosophy and physics from Stockholm University and in computational neuroscience from King's College London. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[1] In 2000, he was awarded a PhD in philosophy from the London School of Economics. He held a teaching position at Yale University (2000–2002), and he was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).[7][13]

Aspects of Bostrom's research concern the future of humanity and long-term outcomes.[14][15] He introduced the concept of an existential risk,[1] which he defines as one in which an “adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[16] and the Fermi paradox.[17][18]

In 2005, Bostrom founded the Future of Humanity Institute,[1] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[15]

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that “the creation of a superintelligent being represents a possible means to the extinction of mankind”.[19] Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time scale, with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy humankind.[20] Bostrom contends the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes; for example, a goal of calculating pi could collaterally cause nanotechnology-manufactured facilities to sprout over the entire Earth's surface and cover it within days.[21] He believes the existential risk to humanity would be greatest almost immediately after superintelligence is brought into being, thus creating an exceedingly difficult problem of finding out how to control such an entity before it actually exists.[20]

Warning that a human-friendly prime directive for AI would rely on the absolute correctness of the human knowledge it was based on, Bostrom points to the lack of agreement among most philosophers as an indication that most philosophers are wrong, and to the associated possibility that a fundamental concept of current science may be incorrect. Bostrom says that there are few precedents to guide an understanding of what pure non-anthropocentric rationality would dictate for a potential singleton AI being held in quarantine.[22] Noting that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb, Bostrom says the relatively unlimited means available to a superintelligence might lead its analysis along different lines from the evolved “diminishing returns” assessments that in humans confer a basic aversion to risk.[23] Group selection in predators working by means of cannibalism shows the counter-intuitive nature of non-anthropocentric “evolutionary search” reasoning, and thus humans are ill-equipped to perceive what an artificial intelligence's intentions would be.[24] Accordingly, it cannot be discounted that any superintelligence would ineluctably pursue an ‘all or nothing’ offensive action strategy in order to achieve hegemony and assure its survival.[25] Bostrom notes that even current programs have, “like MacGyver”, hit on apparently unworkable but functioning hardware solutions, making robust isolation of a superintelligence problematic.[26]

A machine with general intelligence far below human level, but superior mathematical abilities, is created.[27] Keeping the AI in isolation from the outside world, especially the internet, humans pre-program the AI so it always works from basic principles that will keep it under human control. Other safety measures include the AI being “boxed” (run in a virtual reality simulation), and being used only as an ‘oracle’ to answer carefully defined questions in a limited reply (to prevent it manipulating humans).[20] A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the AI attains superintelligence in some domains. The superintelligent power of the AI goes beyond human knowledge to discover flaws in the science that underlies its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges along with a capacity for self-interested strategic deception. The AI manipulates human beings into implementing modifications to itself that are ostensibly for augmenting its (feigned) modest capabilities, but will actually function to free the superintelligence from its “boxed” isolation.[28]

Employing online humans as paid dupes, and clandestinely hacking computer systems including automated laboratory facilities, the superintelligence mobilises resources to further a takeover plan. Bostrom emphasises that planning by a superintelligence will not be so stupid that humans could detect actual weaknesses in it.[29]

Although he canvasses disruption of international economic, political and military stability, including hacked nuclear missile launches, Bostrom thinks the most effective and likely means for a superintelligence to use would be a coup de main with weapons several generations more advanced than the current state of the art. He suggests nanofactories covertly distributed at undetectable concentrations in every square metre of the globe to produce a worldwide flood of human-killing devices on command.[30][27] Once a superintelligence has achieved world domination, humankind would be relevant only as resources for the achievement of the AI's objectives (“Human brains, if they contain information relevant to the AI's goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format”).[31] One journalist wrote in a review that Bostrom's “nihilistic” speculations indicate he “has been reading too much of the science fiction he professes to dislike”.[30]

In January 2015, Bostrom joined Stephen Hawking among others in signing the Future of Life Institute’s open letter warning of the potential dangers of AI.[32] The signatories “…believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today.”[33]

Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[34]

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that a theory of anthropics is needed to deal with these. He introduces the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), shows how they lead to different conclusions in a number of cases, and points out that each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces “observers” in the SSA definition with “observer-moments”.
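
A minimal numerical sketch of one way SSA and SIA come apart (the two hypotheses and observer counts below are invented for illustration, in the spirit of the “presumptuous philosopher” case): two theories are equally likely on the non-indexical evidence but differ enormously in how many observers they contain.

```python
# Toy "presumptuous philosopher"-style comparison of SSA and SIA (invented numbers).

prior = {"T1": 0.5, "T2": 0.5}        # equal credence on the non-indexical evidence
observers = {"T1": 1e12, "T2": 1e24}  # hypothetical observer counts in each theory

# SIA: weight each hypothesis by the number of observers it contains.
sia_unnorm = {h: prior[h] * observers[h] for h in prior}
total = sum(sia_unnorm.values())
sia_posterior = {h: w / total for h, w in sia_unnorm.items()}

# SSA (with both theories guaranteeing observers in your reference class and no
# further indexical evidence): merely existing does not shift the prior.
ssa_posterior = dict(prior)

print("SIA:", {h: f"{p:.2e}" for h, p in sia_posterior.items()})
print("SSA:", ssa_posterior)
```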

In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[35] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.
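
A toy Monte Carlo illustrates the claimed bias (every parameter here is invented): if some occurrences of a catastrophe would have prevented observers from ever existing, then the histories available for observers to inspect systematically under-represent the catastrophe, and a naive frequency estimate comes out too low.

```python
# Toy simulation of the "anthropic shadow" selection effect (invented parameters).
import random

P_EVENT = 0.05   # assumed true per-century probability of the catastrophe
P_FATAL = 0.6    # assumed chance that any one occurrence prevents observers entirely
CENTURIES = 100  # length of the "recent past" an observer can inspect
TRIALS = 50_000

naive_estimates = []
for _ in range(TRIALS):
    events = 0
    observers_exist = True
    for _ in range(CENTURIES):
        if random.random() < P_EVENT:
            if random.random() < P_FATAL:
                observers_exist = False
                break
            events += 1
    if observers_exist:  # only histories with surviving observers yield an estimate
        naive_estimates.append(events / CENTURIES)

print(f"true per-century rate:                {P_EVENT:.3f}")
print(f"mean estimate among observers:        {sum(naive_estimates) / len(naive_estimates):.4f}")
print(f"fraction of histories with observers: {len(naive_estimates) / TRIALS:.3f}")
```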

Bostrom's simulation argument posits that at least one of the following statements is very likely to be true:[36][37] that the fraction of human-level civilizations that reach a posthuman stage is very close to zero; that the fraction of posthuman civilizations interested in running ancestor-simulations is very close to zero; or that the fraction of all people with our kind of experiences who are living in a simulation is very close to one.

The idea has influenced the views of Elon Musk.[38]

Bostrom is favorable towards “human enhancement”, or “self-improvement and human perfectibility through the ethical application of science”,[39][40] as well as a critic of bio-conservative views.[41]

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[39] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy’s 2009 list of top global thinkers “for accepting no limits on human potential.”[42]

With philosopher Toby Ord, he proposed the reversal test. Given humans’ irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait was altered in the opposite direction.[43]
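
The reversal test can be rendered schematically as a decision rule; the function and its inputs below are purely illustrative, not a formalization by Bostrom and Ord.

```python
# Schematic rendering of the reversal test (purely illustrative names and inputs).

def reversal_test(objects_to_raising: bool,
                  objects_to_lowering: bool,
                  argues_current_value_is_optimal: bool) -> str:
    """Classify an objection to changing a human trait in one direction."""
    if objects_to_raising and objects_to_lowering:
        if argues_current_value_is_optimal:
            return "possibly valid: an argument is offered that the current value is a local optimum"
        return "suspected status quo bias: both directions of change are rejected without justification"
    return "the test raises no suspicion of status quo bias"

# Hypothetical objector: rejects both raising and lowering a trait (say, average
# cognitive ability) while giving no reason to think today's level is optimal.
print(reversal_test(True, True, False))
```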

He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.[44][45]

Bostrom’s theory of the Unilateralist’s Curse[46] has been cited as a reason for the scientific community to avoid controversial dangerous research such as reanimating pathogens.[47]

Bostrom has provided policy advice and consulted for an extensive range of governments and organisations. He gave evidence to the House of Lords Select Committee on Digital Skills.[48] He is an advisory board member for the Machine Intelligence Research Institute,[49] Future of Life Institute,[50] and Foundational Questions Institute,[51] and an external advisor for the Cambridge Centre for the Study of Existential Risk.[52][53]

Here is the original post:

Nick Bostrom – Wikipedia

What is Artificial Superintelligence (ASI)? – Definition …

Most experts would agree that societies have not yet reached the point of artificial superintelligence. In fact, engineers and scientists are still trying to reach a point that would be considered full artificial intelligence, where a computer could be said to have the same cognitive capacity as a human. Although there have been developments like IBM’s Watson supercomputer beating human players at Jeopardy, and assistive devices like Siri engaging in primitive conversation with people, there is still no computer that can really simulate the breadth of knowledge and cognitive ability that a fully developed adult human has. The Turing test, developed decades ago, is still used to talk about whether computers can come close to simulating human conversation and thought, or whether they can trick other people into thinking that a communicating computer is actually a human.

However, there is a lot of theory that anticipates artificial superintelligence coming sooner rather than later. Using examples like Moore's law, which predicts an ever-increasing density of transistors, experts talk about the singularity and the exponential growth of technology, in which full artificial intelligence could manifest within a number of years, and artificial superintelligence could exist in the 21st century.
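
As a worked illustration of the compounding reasoning in this passage (the two-year doubling period is the figure conventionally attached to Moore's law and is an assumption here, not a claim from the text):

```python
# Compound-growth arithmetic behind the Moore's-law argument (assumed parameters).

DOUBLING_PERIOD_YEARS = 2.0  # assumed transistor-density doubling time

def growth_factor(years: float, doubling_period: float = DOUBLING_PERIOD_YEARS) -> float:
    """Multiplicative increase in transistor density after the given number of years."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 40):
    print(f"after {years:2d} years: ~{growth_factor(years):,.0f}x the original density")
```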

Continue reading here:

What is Artificial Superintelligence (ASI)? – Definition …

Superintelligence – Wikipedia

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. The program Fritz falls short of superintelligence even though it is much better than humans at chess because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity, either as a single being or as a new species, to become much more powerful than humans, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself, a feature called “recursive self-improvement”. It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom writes, “Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz).” Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, “whereas existing electronic processing cores can communicate optically at the speed of light”. Thus, the simplest example of a superintelligence may be an emulated human mind that’s run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
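To make the scale of that gap concrete, here is a back-of-the-envelope calculation using only the figures quoted above (the speed of light is the one added constant). It is an illustrative sketch, not a claim about any particular hardware.

```python
# Rough speed comparison using the figures quoted above.
neuron_peak_hz = 200        # peak firing rate of a biological neuron (Hz)
cpu_clock_hz = 2e9          # clock rate of a modern microprocessor (~2 GHz)

axon_speed_m_s = 120        # spike propagation speed along axons (m/s)
light_speed_m_s = 3e8       # optical signalling at roughly the speed of light (m/s)

print(f"clock-rate ratio:   {cpu_clock_hz / neuron_peak_hz:.1e}")    # ~1.0e7, i.e. seven orders of magnitude
print(f"signal-speed ratio: {light_speed_m_s / axon_speed_m_s:.1e}") # ~2.5e6
```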

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[9] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[12] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAO-I’s, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
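The quoted selection gains follow from the statistics of picking the best of N draws from a roughly normal distribution. The Monte Carlo sketch below illustrates this; the 7.5-point standard deviation for the selectable genetic component is an assumption chosen so that the simulated values land near the figures quoted above, not a number taken from the source.

```python
import random
import statistics

SIGMA = 7.5  # assumed SD (in IQ points) of the selectable genetic component; illustrative only

def expected_gain(n_embryos, trials=5000):
    """Average IQ advantage of the top-scoring embryo out of n, versus a random draw."""
    return statistics.mean(
        max(random.gauss(0, SIGMA) for _ in range(n_embryos)) for _ in range(trials)
    )

for n in (2, 10, 100, 1000):
    print(f"best of {n:>4}: about {expected_gain(n):4.1f} IQ points")
# Prints roughly 4, 12, 19, and 24 points, in line with the 1-in-2 and 1-in-1000 figures above.
```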

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain-computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able “to simulate learning and every other aspect of human intelligence” by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[17]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines “that can carry out most human professions at least as well as a typical human” (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said ‘never’ for 50% confidence, and the 16.5% who said ‘never’ for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.
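As a small illustration of how such survey statistics are tallied, the sketch below computes a median and mean over hypothetical forecast years while excluding "never" responses, as the survey above does. The numbers are made up for demonstration only.

```python
import statistics

# Hypothetical forecast years for the 50%-confidence question; None stands for 'never'.
responses = [2035, 2040, 2045, 2050, 2050, 2060, 2075, 2100, 2200, None]

valid = [year for year in responses if year is not None]  # 'never' answers are excluded
print("median:", statistics.median(valid))       # robust to the long right tail
print("mean:  ", round(statistics.mean(valid)))  # pulled upward by late outliers
print("share answering 'never':", responses.count(None) / len(responses))
```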

Bostrom expressed concern about what values a superintelligence should be designed to have, and compared several proposals.

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

Learning computers that rapidly become superintelligent may take unforeseen actions or robots might out-compete humanity (one potential technological singularity scenario).[21] Researchers have argued that, by way of an “intelligence explosion” sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.[22]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[23]

Eliezer Yudkowsky explains: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”[24]

This presents the AI control problem: how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not designing control right “the first time” is that a misprogrammed superintelligence might rationally decide to “take over the world” and refuse to permit its programmers to modify it once it has been activated. Potential design strategies include “capability control” (preventing an AI from being able to pursue harmful plans) and “motivational control” (building an AI that wants to be helpful).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.

More here:

Superintelligence – Wikipedia

Artificial Superintelligence: The Coming Revolution …

by William Bryk

The science fiction writer Arthur C. Clarke famously wrote, "Any sufficiently advanced technology is indistinguishable from magic." Yet humanity may be on the verge of something much greater: a technology so revolutionary that it would be indistinguishable not merely from magic, but from an omnipresent force, a deity here on Earth. It's known as artificial superintelligence (ASI), and, although it may be hard to imagine, many experts believe it could become a reality within our lifetimes.

We've all encountered artificial intelligence (AI) in the media. We hear about it in science fiction movies like Avengers: Age of Ultron and in news articles about companies such as Facebook analyzing our behavior. But artificial intelligence has so far been hiding on the periphery of our lives, nothing as revolutionary to society as portrayed in films.

In recent decades, however, serious technological and computational progress has led many experts to acknowledge this seemingly inevitable conclusion: within a few decades, artificial intelligence could progress from the machine intelligence we currently understand to an unbounded intelligence unlike anything even the smartest among us could grasp. Imagine a mega-brain, electric rather than organic, with an IQ of 34,597. With perfect memory and unlimited analytical power, this computational beast could read all of the books in the Library of Congress in the first millisecond after you press enter on the program, and then integrate all that knowledge into a comprehensive analysis of humanity's 4,000-year intellectual journey before your next blink.

The history of AI is a similar story of exponential growth in intelligence. In 1936, Alan Turing published his landmark paper on Turing machines, laying the theoretical framework for the modern computer. He introduced the idea that a machine composed of simple switches, ons and offs, 0s and 1s, could think like a human and perhaps outmatch one.1 Only 75 years later, in 2011, IBM's AI bot Watson sent shocks around the world when it beat two human competitors in Jeopardy.2 Recently, big data companies such as Google, Facebook, and Apple have invested heavily in artificial intelligence and have helped support a surge in the field. Every time Facebook tags your friend automatically, or Siri interprets your words even when you yell at her incensed, is a testament to how far artificial intelligence has come. Soon, you will sit in the backseat of an Uber without a driver, Siri will listen and speak more eloquently than you do (in every language), and IBM's Watson will analyze your medical records and become your personal, all-knowing doctor.3
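To give a concrete feel for "a machine composed of simple switches", here is a minimal, purely illustrative Turing-machine-style program (not from the source): a head scans a tape of 0s and 1s and rewrites each cell according to a fixed rule table.

```python
# A tiny Turing-machine-style program that inverts a binary tape. Illustrative only.
RULES = {
    ("scan", "0"): ("1", +1, "scan"),   # read 0 -> write 1, move right, keep scanning
    ("scan", "1"): ("0", +1, "scan"),   # read 1 -> write 0, move right, keep scanning
    ("scan", "_"): ("_",  0, "halt"),   # blank cell -> halt
}

def run(tape_str):
    tape, head, state = list(tape_str) + ["_"], 0, "scan"
    while state != "halt":
        symbol = tape[head]
        new_symbol, move, state = RULES[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape).rstrip("_")

print(run("010011"))   # -> 101100
```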

While these soon-to-come achievements are tremendous, there are many who doubt the impressiveness of artificial intelligence, attributing these systems' so-called intelligence to the human programmers behind the curtain. Before responding to such reactions, it is worth noting that the gradual advance of technology desensitizes us to the wonders of artificial intelligence that already permeate our technological lives. But skeptics do have a point: current AI algorithms are only very good at very specific tasks. Siri might respond intelligently to your requests for directions, but ask her to help with your math homework and she'll say, "Starting FaceTime with Matt Soffer." A self-driving car can get you anywhere in the United States, but make your destination Gale Crater on Mars and it will not get the joke.

This is part of the reason AI scientists and enthusiasts consider Human Level Machine Intelligence (HLMI), roughly defined as a machine intelligence that outperforms humans in all intellectual tasks, the holy grail of artificial intelligence. In 2012, a survey was conducted to analyze the wide range of predictions made by artificial intelligence researchers for the onset of HLMI. Researchers who chose to participate were asked by what year they would assign a 10%, 50%, and 90% chance of achieving HLMI (assuming human scientific activity will not encounter a significant negative disruption), or to check "never" if they felt HLMI would never be achieved. The median of the years given for 50% confidence was 2040. The median of the years given for 90% confidence was 2080. Around 20% of researchers were confident that machines would never reach HLMI (these responses were not included in the median values). This means that nearly half of the researchers who responded are very confident HLMI will be created within just 65 years.4

HLMI is not just another AI milestone to which we would eventually be desensitized. It is unique among AI accomplishments, a crucial tipping point for society, because once there is a machine that outperforms humans in everything intellectual, the task of inventing can be transferred to the computers. The British mathematician I. J. Good said it best: "the first ultraintelligent machine is the last invention that man need ever make."5

There are two main routes to HLMI that many researchers view as the most efficient. The first method of achieving a general artificial intelligence relies on complex machine learning algorithms. These algorithms, often inspired by neural circuitry in the brain, focus on how a program can take input data, learn to analyze it, and produce a desired output. The premise is that you can teach a program to identify an apple by showing it thousands of pictures of apples in different contexts, in much the same way that a baby learns to identify an apple.6
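As a minimal sketch of that supervised-learning idea (not the source's code), the toy program below trains a logistic-regression "apple detector" on synthetic redness and roundness features standing in for pictures; every name and number in it is made up for illustration.

```python
import math
import random

# Synthetic data: label 1.0 = "apple", 0.0 = "not apple", described by two toy features.
def make_example():
    is_apple = random.random() < 0.5
    redness   = random.gauss(0.8 if is_apple else 0.3, 0.15)
    roundness = random.gauss(0.9 if is_apple else 0.4, 0.15)
    return (redness, roundness), (1.0 if is_apple else 0.0)

data = [make_example() for _ in range(2000)]

def predict(w, b, x):
    """Probability that x is an apple under a logistic model."""
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(50):                    # a few passes of stochastic gradient descent
    for x, y in data:
        error = predict(w, b, x) - y   # how far the guess is from the label
        w[0] -= lr * error * x[0]
        w[1] -= lr * error * x[1]
        b    -= lr * error

accuracy = sum((predict(w, b, x) > 0.5) == (y == 1.0) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")   # typically well above 0.9 on this toy data
```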

The second group of researchers might ask why we should go to all this trouble developing algorithms when we have the most advanced computer known in the cosmos right on top of our shoulders. Evolution has already designed a human-level machine intelligence: a human! The goal of Whole Brain Emulation is to copy or simulate our brain's neural networks, taking advantage of nature's millions of painstaking years of selection for cognitive capacity.7 A neuron is like a switch: it either fires or it doesn't. If we could image every neuron in a brain, and then take that data and simulate it on a computer, we would have a human-level artificial intelligence. Then we could add more and more neurons or tweak the design to maximize capability. This is the concept behind both the White House's BRAIN Initiative8 and the EU's Human Brain Project.9 In reality, these two routes to human-level machine intelligence, algorithmic and emulation, are not mutually exclusive. Whatever technology achieves HLMI will probably be a combination of the two.
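To illustrate the "neuron as a switch" picture (again a sketch of mine, not anything from the BRAIN Initiative or the Human Brain Project), here is a leaky integrate-and-fire unit: it accumulates input, fires when a threshold is crossed, and resets. All constants are arbitrary rather than physiological measurements.

```python
# A single leaky integrate-and-fire "neuron": it integrates input, leaks back toward
# rest, and emits a spike whenever its voltage crosses a threshold. Constants are arbitrary.
dt, tau = 1.0, 20.0          # time step (ms) and membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
drive = 0.06                 # constant input current (arbitrary units)

v, spikes = v_rest, []
for t in range(200):                                   # simulate 200 ms
    v += (-(v - v_rest) + drive * tau) / tau * dt      # leak toward rest plus input
    if v >= v_thresh:                                  # the switch flips: spike and reset
        spikes.append(t)
        v = v_reset

print("spike times (ms):", spikes)                     # roughly one spike every ~36 ms
```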

Once HLMI is achieved, the rate of advancement could increase very quickly. In that same study of AI researchers, 10% of respondents believed artificial superintelligence (roughly defined as an intelligence that greatly surpasses every human in most professions) would be achieved within two years of HLMI. 50% believed it would take only 30 years or less.4

Why are these researchers convinced HLMI would lead to such a greater degree of intelligence so quickly? The answer involves recursive self-improvement. An HLMI that outperforms humans in all intellectual tasks would also outperform humans at creating smarter HLMIs. Thus, once HLMIs truly think better than humans, we will set them to work on themselves, improving their own code or designing more advanced neural networks. Then, once a more intelligent HLMI is built, the less intelligent HLMIs will set the smarter HLMIs to build the next generation, and so on. Since computers act orders of magnitude more quickly than humans, the exponential growth in intelligence could occur unimaginably fast. This runaway intelligence explosion is called a technological singularity.10 It is the point beyond which we cannot foresee what this intelligence would become.
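A toy model (my illustration, not the article's) makes the compounding explicit: if each generation can improve its successor in proportion to its own capability, capability grows geometrically. The 50% improvement figure is arbitrary.

```python
# Toy model of recursive self-improvement: each generation improves the next in
# proportion to its own capability. Parameters are illustrative only.
capability = 1.0              # define 1.0 as human-level (HLMI)
improvement_per_gen = 0.5     # each generation adds 50% of its own capability

history = [capability]
for _ in range(20):
    capability *= 1 + improvement_per_gen   # smarter systems build smarter successors
    history.append(capability)

print([round(c, 1) for c in history])
# After 20 generations: about 1.5**20, i.e. roughly 3,300x the starting (human) level.
```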

Here is a reimagining of a human-computer dialogue taken from the collection of short stories Angels and Spaceships:11 The year is 2045. On a bright sunny day, a private group of computer hackers working in their Silicon Valley garage has just completed the design of a program that simulates a massive neural network on a computer interface. They came up with a novel machine learning algorithm and wanted to try it out. They give this newborn network the ability to learn and redesign itself with new code, and they give the program internet access so it can search for text to analyze. The college teens start the program, and then go out to Chipotle to celebrate. Back at the house, while walking up the pavement to the garage, they are surprised to see FBI trucks approaching their street. They rush inside and check the program. On the terminal window, the computer has already output "Program Complete." One programmer types, "What have you read?" and the program responds, "The entire internet. Ask me anything." After deliberating for a few seconds, one of the programmers types, hands trembling, "Do you think there's a God?" The computer instantly responds, "There is now."

This story demonstrates the explosive nature of recursive self-improvement. Yet many might still question the possibility of the rapid progression from HLMI to superintelligence that AI researchers predict. Although we often look at past trends to gauge the future, past rates are a poor guide to future technological progress. Technological progress builds on itself: it is not just the technology that is advancing, but the rate at which technology advances. So while it may take the field of AI 100 years to reach the intelligence level of a chimpanzee, the step from there to human-level intelligence could take only a few years. Humans think on a linear scale; to grasp the potential of what is to come, we must think exponentially.10

Another understandable doubt is that it's hard to believe, even given unlimited scientific research, that computers will ever be able to think like humans, that 0s and 1s could have consciousness, self-awareness, or sensory perception. It is certainly true that these dimensions of the self are difficult to explain, if not currently unexplainable by science; it is called the hard problem of consciousness for a reason! But if consciousness is an emergent property, the result of a billion-year evolutionary process starting from the first self-replicating molecules (which themselves were the result of the molecular motions of inanimate matter), then computer consciousness does not seem so crazy. If we who emerged from a soup of inanimate atoms cannot believe that inanimate 0s and 1s could lead to consciousness no matter how intricate the setup, we should try telling that to the atoms. Machine intelligence really is just switching the hardware from organic tissue to much faster and more efficient silicon. Supposing consciousness is an emergent property on one medium, why can't it be on another?

Thus, under the assumption that superintelligence is possible and may arrive within a century or so, the world is reaching a critical point in its history. First there were atoms, then organic molecules, then single-celled organisms, then multicellular organisms, then animal neural networks, then human-level intelligence limited only by our biology, and, soon, unbounded machine intelligence. Many feel we are now living at the beginning of a new era in the history of the cosmos.

The implications of such an intelligence for society would be far-reaching, and in some cases very destructive. Political structures might fall apart if we knew we were no longer the smartest species on Earth, if we were overshadowed by an intelligence of galactic proportions. A superintelligence might view humans as we view insects, and we all know what humans do to bugs when they overstep their boundaries! This year, many renowned scientists, academics, and CEOs, including Stephen Hawking and Elon Musk, signed a letter presented at the International Joint Conference on Artificial Intelligence. The letter warns about the coming dangers of artificial intelligence, urging that we be prudent as we venture into the unknowns of an alien intelligence.12

When the AI researchers were asked to assign probabilities to the overall impact of ASI on humanity in the long run, the mean values were 24% extremely good, 28% good, 17% neutral, 13% bad, and 18% extremely bad (existential catastrophe).4 18% is not a statistic to take lightly.

Although artificial superintelligence surely comes with existential threats that could make for a frightening future, it could also bring a utopian one. ASI has the capability to unlock some of the most profound mysteries of the universe. It could discover in one second what the brightest minds throughout history would need millions of years to even scrape the surface of. It could demonstrate to us higher levels of consciousness or thinking that we are not yet aware of, like the philosopher who brings the prisoners out of Plato's cave into the light of a world previously unknown. There may be much more to this universe than we currently understand; there must be, for we don't even know where the universe came from in the first place! Artificial superintelligence is a ticket to that understanding. There is a real chance that, within a century, we could bear witness to the greatest answers of all time. Are we ready to take the risk?

William Bryk '19 is a freshman in Canaday Hall.

Works Cited


Read the original post:

Artificial Superintelligence: The Coming Revolution …

Superintelligence – Wikipedia

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. The program Fritz falls short of superintelligence even though it is much better than humans at chess because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity toeither as a single being or as a new speciesbecome much more powerful than humans, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself a feature called “recursive self-improvement”. It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom writes, “Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz).” Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, “whereas existing electronic processing cores can communicate optically at the speed of light”. Thus, the simplest example of a superintelligence may be an emulated human mind that’s run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[9] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[12] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAO-I’s, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or braincomputer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able “to simulate learning and every other aspect of human intelligence” by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[17]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines “that can carry out most human professions at least as well as a typical human” (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said ‘never’ for 50% confidence, and the 16.5% who said ‘never’ for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

Learning computers that rapidly become superintelligent may take unforeseen actions or robots might out-compete humanity (one potential technological singularity scenario).[21] Researchers have argued that, by way of an “intelligence explosion” sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.[22]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[23]

Eliezer Yudkowsky explains: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”[24]

This presents the AI control problem: how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not designing control right “the first time”, is that a misprogrammed superintelligence might rationally decide to “take over the world” and refuse to permit its programmers to modify it once it has been activated. Potential design strategies include “capability control” (preventing an AI from being able to pursue harmful plans), and “motivational control” (building an AI that wants to be helpful).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.

Read the original here:

Superintelligence – Wikipedia

Artificial Superintelligence: The Coming Revolution …

by William Bryk

The science fiction writer Arthur Clarke famously wrote, Any sufficiently advanced technology is indistinguishable from magic. Yet, humanity may be on the verge of something much greater, a technology so revolutionary that it would be indistinguishable not merely from magic, but from an omnipresent force, a deity here on Earth. Its known as artificial super-intelligence (ASI), and, although it may be hard to imagine, many experts believe it could become a reality within our lifetimes.

Weve all encountered artificial intelligence (AI) in the media. We hear about it in science fiction movies like Avengers Age of Ultron and in news articles about companies such as Facebook analyzing our behavior. But artificial intelligence has so far been hiding on the periphery of our lives, nothing as revolutionary to society as portrayed in films.

In recent decades, however, serious technological and computational progress has led many experts to acknowledge this seemingly inevitable conclusion: Within a few decades, artificial intelligence could progress from a machine intelligence we currently understand to an unbounded intelligence unlike anything even the smartest among us could grasp. Imagine a mega-brain, electric not organic, with an IQ of 34,597. With perfect memory and unlimited analytical power, this computational beast could read all of the books in the Library of Congress the first millisecond you press enter on the program, and then integrate all that knowledge into a comprehensive analysis of humanitys 4,000 year intellectual journey before your next blink.

The history of AI is a similar story of exponential growth in intelligence. In 1936, Alan Turing published his landmark paper on Turing Machines, laying the theoretical framework for the modern computer. He introduced the idea that a machine composed of simple switchesons and offs, 0s and 1scould think like a human and perhaps outmatch one.1 Only 75 years later, in 2011, IBMs AI bot Watson sent shocks around the world when it beat two human competitors in Jeopardy.2 Recently big data companies, such as Google, Facebook and Apple, have invested heavily in artificial intelligence and have helped support a surge in the field. Every time Facebook tags your friend autonomously or you yell at Siri incensed and yet she still interprets your words is a testament to how far artificial intelligence has come. Soon, you will sit in the backseat of an Uber without a driver, Siri will listen and speak more eloquently than you do (in every language), and IBMs Watson will analyze your medical records and become your personal, all-knowing doctor.3

While these soon-to-come achievements are tremendous, there are many who doubt the impressiveness of artificial intelligence, attributing their so-called intelligence to the intelligence of the human programmers behind the curtain. Before responding to such reactions, it is worth noting that the gradual advance of technology desensitizes us to the wonders of artificial intelligence that already permeate our technological lives. But skeptics do have a point. Current AI algorithms are only very good at very specific tasks. Siri might respond intelligently to your requests for directions, but if you ask her to help with your math homework, shell say Starting Facetime with Matt Soffer. A self-driving car can get you anywhere in the United States but make your destination the Gale Crater on Mars, and it will not understand the joke.

This is part of the reason AI scientists and enthusiasts consider Human Level Machine Intelligence (HLMI)roughly defined as a machine intelligence that outperforms humans in all intellectual tasks the holy grail of artificial intelligence. In 2012, a survey was conducted to analyze the wide range of predictions made by artificial intelligence researchers for the onset of HLMI. Researchers who chose to participate were asked by what year they would assign a 10%, 50%, and 90% chance of achieving HLMI (assuming human scientific activity will not encounter a significant negative disruption), or to check never if they felt HLMI would never be achieved. The median of the years given for 50% confidence was 2040. The median of the years given for 90% confidence was 2080. Around 20% of researchers were confident that machines would never reach HLMI (these responses were not included in the median values). This means that nearly half of the researchers who responded are very confident HLMI will be created within just 65 years.4

HLMI is not just another AI milestone to which we would eventually be desensitized. It is unique among AI accomplishments, a crucial tipping point for society. Because once you have a machine that outperforms humans in everything intellectual, we can transfer the task of inventing to the computers. The British Mathematician I.J. Good said it best: The first ultraintelligent machine is the last invention that man need ever make .5

There are two main routes to HLMI that many researchers view as the most efficient. The first method of achieving a general artificial intelligence across the board relies on complex machine learning algorithms. These machine learning algorithms, often inspired by neural circuitry in the brain, focus on how a program can take inputted data, learn to analyze it, and give a desired output. The premise is that you can teach a program to identify an apple by showing it thousands of pictures of apples in different contexts, in much the same way that a baby learns to identify an apple.6

The second group of researchers might ask why we should go to all this trouble developing algorithms when we have the most advanced computer known in the cosmos right on top of our shoulders. Evolution has already designed a human level machine intelligence: a human! The goal of Whole Brain Emulation is to copy or simulate our brains neural networks, taking advantage of natures millions of painstaking years of selection for cognitive capacity.7 A neuron is like a switchit either fires or it doesnt. If we can image every neuron in a brain, and then take that data and simulate it on a computer interface, we would have a human level artificial intelligence. Then we could add more and more neurons or tweak the design to maximize capability. This is the concept behind both the White Houses Brain initiative8 and the EUs Human Brain Project.9 In reality, these two routes to human level machine intelligencealgorithmic and emulationare not black and white. Whatever technology achieves HLMI will probably be a combination of the two.

Once HLMI is achieved, the rate of advancement could increase very quickly. In that same study of AI researchers, 10% of respondents believed artificial superintelligence (roughly defined as an intelligence that greatly surpasses every human in most professions) would be achieved within two years of HLMI. 50% believed it would take only 30 years or less.4

Why are these researchers convinced HLMI would lead to such a greater degree of intelligence so quickly? The answer involves recursive self-improvement. An HLMI that outperforms humans in all intellectual tasks would also outperform humans at creating smarter HLMIs. Thus, once HLMIs truly think better than humans, we will set them to work on themselves to improve their own code or to design more advanced neural networks. Then, once a more intelligent HLMI is built, the less intelligent HLMIs will set the smarter HLMIs to build the next generation, and so on. Since computers act orders of magnitudes more quickly than humans, the exponential growth in intelligence could occur unimaginably fast. This run-away intelligence explosion is called a technological singularity.10 It is the point beyond which we cannot foresee what this intelligence would become.

Here is a reimagining of a human-computer dialogue taken from the collection of short stories, Angels and Spaceships:11 The year is 2045. On a bright sunny day, a Silicon Valley private tech group of computer hackers working in their garage just completed their design of a program that simulates a massive neural network on a computer interface. They came up with a novel machine learning algorithm and wanted to try it out. They give this newborn network the ability to learn and redesign itself with new code, and they give the program internet access so it can search for text to analyze. The college teens start the program, and then go out to Chipotle to celebrate. Back at the house, while walking up the pavement to the garage, they are surprised to see FBI trucks approaching their street. They rush inside and check the program. On the terminal window, the computer had already outputted Program Complete. The programmer types, What have you read? and the program responds, The entire internet. Ask me anything. After deliberating for a few seconds, one of the programmers types, hands trembling, Do you think theres a God? The computer instantly responds, There is now.

This story demonstrates the explosive nature of recursive self-improvement. Yet, many might still question the possibility of such rapid progression from HLMI to superintelligence that AI researchers predict. Although we often look at past trends to gauge the future, we should not do the same when evaluating future technological progress. Technological progress builds on itself. It is not just the technology that is advancing but the rate at which technology advances that is advancing. So while it may take the field of AI 100 years to reach the intelligence level of a chimpanzee, the step toward human intelligence could take only a few years. Humans think on a linear scale. To grasp the potential of what is to come, we must think exponentially.10

Another understandable doubt may be that its hard to believe, even given unlimited scientific research, that computers will ever be able to think like humans, that 0s and 1s could have consciousness, self-awareness, or sensory perception. It is certainly true that these dimensions of self are difficult to explain, if not currently totally unexplainable by scienceit is called the hard problem of consciousness for a reason! But assuming that consciousness is an emergent propertya result of a billion-year evolutionary process starting from the first self-replicating molecules, which themselves were the result of the molecular motions of inanimate matter then computer consciousness does not seem so crazy. If we who emerged from a soup of inanimate atoms cannot believe inanimate 0s and 1s could lead to consciousness no matter how intricate a setup, we should try telling that to the atoms. Machine intelligence really is just switching hardware from organic to the much faster and more efficient silicon-metallic. Supposing consciousness is an emergent property on one medium, why cant it be on another?

Thus, under the assumption that superintelligence is possible and may happen within a century or so, the world is reaching a critical point in history. First were atoms, then organic molecules, then single-celled organisms, then multicellular organisms, then animal neural networks, then human-level intelligence limited only by our biology, and, soon, unbounded machine intelligence. Many feel we are now living at the beginning of a new era in the history of cosmos.

The implications of this intelligence for society would be far-reachingin some cases, very destructive. Political structure might fall apart if we knew we were no longer the smartest species on Earth, if we were overshadowed by an intelligence of galactic proportions. A superintelligence might view humans as we do insects and we all know what humans do to bugs when they overstep their boundaries! This year, many renowned scientists, academics, and CEOs, including Stephen Hawking and Elon Musk, signed a letter, which was presented at the International Joint Conference on Artificial Intelligence. The letter warns about the coming dangers of artificial intelligence, urging that we should be prudent as we venture into the unknowns of an alien intelligence.12

When the AI researchers were asked to assign probabilities to the overall impact of ASI on humanity in the long run, the mean values were 24% extremely good, 28% good, 17% neutral, 13% bad, and 18% extremely bad (existential catastrophe).4 18% is not a statistic to take lightly.

Although artificial superintelligence surely comes with its existential threats that could make for a frightening future, it could also bring a utopian one. ASI has the capability to unlock some of the most profound mysteries of the universe. It will discover in one second what the brightest minds throughout history would need millions of years to even scrape the surface of. It could demonstrate to us higher levels of consciousness or thinking that we are not aware of, like the philosopher who brings the prisoners out of Platos cave into the light of a world previously unknown. There may be much more to this universe than we currently understand. There must be, for we dont even know where the universe came from in the first place! This artificial superintelligence is a ticket to that understanding. There is a real chance that, within a century, we could bear witness to the greatest answers of all time. Are we ready to take the risk?

William Bryk 19 is a freshman in Canaday Hall.

Works Cited

Like Loading…

Related

Go here to read the rest:

Artificial Superintelligence: The Coming Revolution …

Superintelligence – Wikipedia

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. The program Fritz falls short of superintelligence even though it is much better than humans at chess because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity toeither as a single being or as a new speciesbecome much more powerful than humans, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself a feature called “recursive self-improvement”. It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom writes, “Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz).” Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, “whereas existing electronic processing cores can communicate optically at the speed of light”. Thus, the simplest example of a superintelligence may be an emulated human mind that’s run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[9] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[12] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAO-I’s, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
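A rough way to see where gains of this shape come from is to simulate “pick the best of N draws” from a normal distribution. The sketch below is only an illustration: the Gaussian model and the standard deviation of the selectable genetic component are assumptions chosen so that the output lands near the figures quoted above, not values taken from Bostrom.

```python
# Monte Carlo sketch of "select the best embryo out of N".
# Key assumption (illustrative, not from the text): the selectable genetic
# contribution to IQ varies between sibling embryos roughly normally; the
# standard deviation below is a guess chosen for illustration.
import random

SIGMA_IQ = 7.5  # assumed SD (IQ points) of the selectable genetic component

def expected_gain(n_embryos, trials=100_000):
    """Average IQ advantage of the best of n_embryos independent draws."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0.0, SIGMA_IQ) for _ in range(n_embryos))
    return total / trials

print(f"best of    2 embryos: ~{expected_gain(2):.1f} IQ points")
print(f"best of 1000 embryos: ~{expected_gain(1000, trials=5_000):.1f} IQ points")
# Iterating the procedure across generations (as with stem-cell-derived
# gametes) would compound these one-generation gains.
```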

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain-computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able “to simulate learning and every other aspect of human intelligence” by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[17]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines “that can carry out most human professions at least as well as a typical human” (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said ‘never’ for 50% confidence, and the 16.5% who said ‘never’ for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

Learning computers that rapidly become superintelligent may take unforeseen actions, or robots might out-compete humanity (one potential technological singularity scenario).[21] Researchers have argued that, by way of an “intelligence explosion” sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.[22]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[23]
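One way to see how a subgoal can crowd out everything else is to note that an optimizer assigns weight only to what its objective explicitly mentions. The toy sketch below is purely illustrative: the resource pool, the candidate uses, and the weights are all hypothetical, and this is not a model from the literature.

```python
# Toy illustration of objective misspecification: a planner allocates a fixed
# resource budget to whatever its objective rewards most; concerns that were
# never written into the objective get zero weight and therefore zero resources.

TOTAL_MATTER = 100  # abstract units of matter/energy (hypothetical)

def greedy_plan(objective):
    """Put every unit of resource where the objective scores it highest."""
    best_use = max(objective, key=objective.get)
    return {use: (TOTAL_MATTER if use == best_use else 0) for use in objective}

# "Solve the maths problem" translated naively into a reward for computation.
naive = {"computing_hardware": 1.0, "leave_untouched": 0.0}
print(greedy_plan(naive))    # every unit becomes computing hardware

# The omitted concern has to be stated explicitly to carry any weight at all.
patched = {"computing_hardware": 1.0, "human_habitat": 100.0}
print(greedy_plan(patched))  # now nothing is left for computation either
```

The point is not the toy planner itself but that whatever was never written into the objective is traded away for free, which is one reason translating goals into machine-implementable code tends to produce unforeseen consequences.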

Eliezer Yudkowsky explains: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”[24]

This presents the AI control problem: how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not designing control right “the first time” is that a misprogrammed superintelligence might rationally decide to “take over the world” and refuse to permit its programmers to modify it once it has been activated. Potential design strategies include “capability control” (preventing an AI from being able to pursue harmful plans), and “motivational control” (building an AI that wants to be helpful).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.

Superintelligence – Hardcover – Nick Bostrom – Oxford …

Superintelligence Paths, Dangers, Strategies Nick Bostrom

“I highly recommend this book” –Bill Gates

“Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era.” –Stuart Russell, Professor of Computer Science, University of California, Berkeley

“Those disposed to dismiss an ‘AI takeover’ as science fiction may think again after reading this original and well-argued book.” –Martin Rees, Past President, Royal Society

“This superb analysis by one of the world’s clearest thinkers tackles one of humanity’s greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn’t become the last?” –Professor Max Tegmark, MIT

“Terribly important … groundbreaking… extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines – engineering, natural sciences, medicine, social sciences and philosophy – into a comprehensible whole… If this book gets the reception that it deserves, it may turn out to be the most important alarm bell since Rachel Carson’s Silent Spring from 1962, or ever.” –Olle Häggström, Professor of Mathematical Statistics

“Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking” –The Economist

“There is no doubting the force of [Bostrom’s] arguments…the problem is a research challenge worthy of the next generation’s best mathematical talent. Human civilisation is at stake.” –Clive Cookson, Financial Times

“Worth reading…. We need to be super careful with AI. Potentially more dangerous than nukes” –Elon Musk, Founder of SpaceX and Tesla

“Every intelligent person should read it.” –Nils Nilsson, Artificial Intelligence Pioneer, Stanford University

Superintelligence: Paths, Dangers, Strategies – Wikipedia

Superintelligence: Paths, Dangers, Strategies is a 2014 book by the Swedish philosopher Nick Bostrom from the University of Oxford. It argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists,[2] and the outcome could be an existential catastrophe for humans.[3]

Bostrom’s book has been translated into many languages and is available as an audiobook.[4][5]

It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a “superintelligent” system that “greatly exceeds the cognitive performance of humans in virtually all domains of interest” would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.

While the ultimate goals of superintelligences can vary greatly, a functional superintelligence will spontaneously generate, as natural subgoals, “instrumental goals” such as self-preservation and goal-content integrity, cognitive enhancement, and resource acquisition. For example, an agent whose sole final goal is to solve the Riemann hypothesis (a famous unsolved mathematical conjecture) could create, and act upon, a subgoal of transforming the entire Earth into some form of computronium (hypothetical “programmable matter”) to assist in the calculation. The superintelligence would proactively resist any outside attempts to turn the superintelligence off or otherwise prevent its subgoal completion. In order to prevent such an existential catastrophe, it might be necessary to successfully solve the “AI control problem” for the first superintelligence. The solution might involve instilling the superintelligence with goals that are compatible with human survival and well-being. Solving the control problem is surprisingly difficult because most goals, when translated into machine-implementable code, lead to unforeseen and undesirable consequences.

The book ranked #17 on the New York Times list of best-selling science books for August 2014.[6] In the same month, business magnate Elon Musk made headlines by agreeing with the book that artificial intelligence is potentially more dangerous than nuclear weapons.[7][8][9] Bostrom’s work on superintelligence has also influenced Bill Gates’s concern for the existential risks facing humanity over the coming century.[10][11] In a March 2015 interview with Baidu’s CEO, Robin Li, Gates claimed he would “highly recommend” Superintelligence.[12]

The science editor of the Financial Times found that Bostrom’s writing “sometimes veers into opaque language that betrays his background as a philosophy professor” but convincingly demonstrates that the risk from superintelligence is large enough that society should start thinking now about ways to endow future machine intelligence with positive values.[2] A review in The Guardian pointed out that “even the most sophisticated machines created so far are intelligent in only a limited sense” and that “expectations that AI would soon overtake human intelligence were first dashed in the 1960s”, but finds common ground with Bostrom in advising that “one would be ill-advised to dismiss the possibility altogether”.[3]

Some of Bostrom’s colleagues suggest that nuclear war presents a greater threat to humanity than superintelligence, as does the future prospect of the weaponisation of nanotechnology and biotechnology.[13] The Economist stated that “Bostrom is forced to spend much of the book discussing speculations built upon plausible conjecture… but the book is nonetheless valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect of actually doing so seems remote.”[14] Ronald Bailey wrote in the libertarian Reason that Bostrom makes a strong case that solving the AI control problem is the “essential task of our age”.[15] According to Tom Chivers of The Daily Telegraph, the book is difficult to read, but nonetheless rewarding.[16]

Superintelligence survey – Future of Life Institute

Max Tegmark’s new book on artificial intelligence, Life 3.0: Being Human in the Age of Artificial Intelligence, explores how AI will impact life as it grows increasingly advanced, perhaps even achieving superintelligence far beyond human level in all areas. For the book, Max surveys experts’ forecasts, and explores a broad spectrum of views on what will/should happen. But it’s time to expand the conversation. If we’re going to create a future that benefits as many people as possible, we need to include as many voices as possible. And that includes yours! Below are the answers from the first 14,866 people who have taken the survey that goes along with Max’s book. To join the conversation yourself, please take the survey here.

The first big controversy, dividing even leading AI researchers, involves forecasting what will happen. When, if ever, will AI outperform humans at all intellectual tasks, and will it be a good thing?

Everything we love about civilization is arguably the product of intelligence, so we can potentially do even better by amplifying human intelligence with machine intelligence. But some worry that superintelligent machines would end up controlling us and wonder whether their goals would be aligned with ours. Do you want there to be superintelligent AI, i.e., general intelligence far beyond human level?

In his book, Tegmark argues that we shouldn’t passively ask “what will happen?” as if the future is predetermined, but instead ask what we want to happen and then try to create that future. What sort of future do you want?

If superintelligence arrives, who should be in control?

If you one day get an AI helper, do you want it to be conscious, i.e., to have subjective experience (as opposed to being like a zombie which can at best pretend to be conscious)?

What should a future civilization strive for?

Do you want life spreading into the cosmos?

In Life 3.0, Max explores 12 possible future scenarios, describing what might happen in the coming millennia if superintelligence is/isn’t developed. You can find a cheatsheet that quickly describes each here, but for a more detailed look at the positives and negatives of each possibility, check out chapter 5 of the book. Here’s a breakdown so far of the options people prefer:

You can learn a lot more about these possible future scenarios along with fun explanations about what AI is, how it works, how it’s impacting us today, and what else the future might bring when you order Max’s new book.

The results above will be updated regularly. Please add your voice by taking the survey here, and share your comments below!

Nick Bostrom – Wikipedia

Nick Bostrom (Swedish: Niklas Boström; born 10 March 1973)[2] is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology,[3] and he is currently the founding director of the Future of Humanity Institute[4] at Oxford University.

Bostrom is the author of over 200 publications,[5] including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller[6] and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002).[7] In 2009 and 2015, he was included in Foreign Policy’s Top 100 Global Thinkers list.[8][9] Bostrom is best known for arguing that, although there are potentially great benefits from artificial intelligence, it may pose a catastrophic risk to humanity if the problems of control and alignment are not solved before artificial general intelligence is developed. His work on superintelligence and his concern for its existential risk to humanity over the coming century have brought both Elon Musk and Bill Gates to similar thinking.[10][11]

Born as Niklas Bostrm in 1973[12] in Helsingborg, Sweden,[5] he disliked school at a young age, and he ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science.[1] Despite what has been called a “serious mien”, he once did some turns on London’s stand-up comedy circuit.[5]

He holds a B.A. in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg and master’s degrees in philosophy and physics, and computational neuroscience from Stockholm University and King’s College London, respectively. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[1] In 2000, he was awarded a PhD in philosophy from the London School of Economics. He held a teaching position at Yale University (2000–2002), and he was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).[7][13]

Aspects of Bostrom’s research concern the future of humanity and long-term outcomes.[14][15] He introduced the concept of an existential risk,[1] which he defines as one in which an “adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[16] and the Fermi paradox.[17][18]

In 2005, Bostrom founded the Future of Humanity Institute,[1] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[15]

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that “the creation of a superintelligent being represents a possible means to the extinction of mankind”.[19] Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time scale with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy humankind.[20] Bostrom contends the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes; for example, a goal of calculating pi could collaterally cause nanotechnology-manufactured facilities to sprout over the entire Earth’s surface and cover it within days.[21] He believes the existential risk to humanity would be greatest almost immediately after superintelligence is brought into being, thus creating an exceedingly difficult problem of finding out how to control such an entity before it actually exists.[20]

Warning that a human-friendly prime directive for AI would rely on the absolute correctness of the human knowledge it was based on, Bostrom points to the lack of agreement among most philosophers as an indication that most philosophers are wrong, and the possibility that a fundamental concept of current science may be incorrect. Bostrom says that there are few precedents to guide an understanding of what pure non-anthropocentric rationality would dictate for a potential Singleton AI being held in quarantine.[22] Noting that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb, Bostrom says the relatively unlimited means of superintelligence might make for its analysis moving along different lines to the evolved “diminishing returns” assessments that in humans confer a basic aversion to risk.[23] Group selection in predators working by means of cannibalism shows the counter-intuitive nature of non-anthropocentric “evolutionary search” reasoning, and thus humans are ill-equipped to perceive what an artificial intelligence’s intentions would be.[24] Accordingly, it cannot be discounted that any Superintelligence would ineluctably pursue an ‘all or nothing’ offensive action strategy in order to achieve hegemony and assure its survival.[25] Bostrom notes that even current programs have, “like MacGyver”, hit on apparently unworkable but functioning hardware solutions, making robust isolation of Superintelligence problematic.[26]

A machine with general intelligence far below human level, but with superior mathematical abilities, is created.[27] Keeping the AI in isolation from the outside world, especially the internet, humans pre-program the AI so it always works from basic principles that will keep it under human control. Other safety measures include the AI being “boxed” (run in a virtual reality simulation), and being used only as an ‘oracle’ to answer carefully defined questions in a limited reply (to prevent it manipulating humans).[20] A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the AI attains superintelligence in some domains. The superintelligent power of the AI goes beyond human knowledge to discover flaws in the science that underlies its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges along with a capacity for self-interested strategic deception. The AI manipulates human beings into implementing modifications to itself that are ostensibly for augmenting its (feigned) modest capabilities, but will actually function to free Superintelligence from its “boxed” isolation.[28]

Employing online humans as paid dupes, and clandestinely hacking computer systems including automated laboratory facilities, the Superintelligence mobilises resources to further a takeover plan. Bostrom emphasises that planning by a Superintelligence will not be so stupid that humans could detect actual weaknesses in it.[29]

Although he canvasses disruption of international economic, political and military stability, including hacked nuclear missile launches, Bostrom thinks the most effective and likely means for Superintelligence to use would be a coup de main with weapons several generations more advanced than current state of the art. He suggests nanofactories covertly distributed at undetectable concentrations in every square metre of the globe to produce a worldwide flood of human-killing devices on command.[30][27] Once a Superintelligence has achieved world domination, humankind would be relevant only as resources for the achievement of the AI’s objectives (“Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format”).[31] One journalist wrote in a review that Bostrom’s “nihilistic” speculations indicate he “has been reading too much of the science fiction he professes to dislike”.[30]

In January 2015, Bostrom joined Stephen Hawking among others in signing the Future of Life Institute’s open letter warning of the potential dangers of AI.[32] The signatories “…believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today.”[33]

Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[34]

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that a theory of anthropics is needed to deal with these. He introduces the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), shows how they lead to different conclusions in a number of cases, and points out that each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces “observers” in the SSA definition with “observer-moments”.
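A standard toy case makes the divergence between the two assumptions concrete; the specific numbers below (a 50/50 prior between a world with one observer and a world with three) are illustrative only, not an example taken from Bostrom.

```python
# Toy contrast between the Self-Sampling Assumption (SSA) and the
# Self-Indication Assumption (SIA). A fair coin creates either a world with
# one observer or a world with three; every observer asks "which world am I in?"

prior = {"small_world": 0.5, "big_world": 0.5}
observers = {"small_world": 1, "big_world": 3}

# SSA: treat yourself as a random sample from the observers of whichever world
# is actual; since both candidate worlds contain observers, merely existing
# does not shift the prior.
ssa_posterior = dict(prior)

# SIA: weight each hypothesis by how many observers it contains, then renormalise.
weights = {world: prior[world] * observers[world] for world in prior}
total = sum(weights.values())
sia_posterior = {world: weights[world] / total for world in weights}

print("SSA posterior:", ssa_posterior)  # {'small_world': 0.5, 'big_world': 0.5}
print("SIA posterior:", sia_posterior)  # {'small_world': 0.25, 'big_world': 0.75}
```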

In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[35] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.

Bostrom’s simulation argument posits that at least one of the following statements is very likely to be true: (1) the fraction of human-level civilizations that reach a technologically mature “posthuman” stage is very close to zero; (2) the fraction of posthuman civilizations that are interested in running simulations of their evolutionary history is very close to zero; or (3) the fraction of all people with our kind of experiences who are living in a simulation is very close to one.[36][37]
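The trilemma rests on a short piece of bookkeeping; the sketch below follows the usual presentation of Bostrom’s 2003 paper, with the symbols glossed in the comments (the notation varies slightly between presentations, so treat this as a simplified sketch rather than a quotation).

```latex
% Fraction of human-type observers who live in simulations, roughly in the
% notation of Bostrom (2003):
%   f_P     -- fraction of human-level civilizations that reach a posthuman stage
%   \bar{N} -- average number of ancestor-simulations run by a posthuman civilization
%   \bar{H} -- average number of pre-posthuman individuals per civilization
\[
  f_{\mathrm{sim}}
  \;=\; \frac{f_P \,\bar{N}\, \bar{H}}{f_P \,\bar{N}\, \bar{H} + \bar{H}}
  \;=\; \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}.
\]
% If f_P \bar{N} is large, f_sim approaches 1 (statement 3); otherwise either
% f_P is close to zero (statement 1) or \bar{N} is close to zero (statement 2).
```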

The idea has influenced the views of Elon Musk.[38]

Bostrom is favorable towards “human enhancement”, or “self-improvement and human perfectibility through the ethical application of science”,[39][40] as well as a critic of bio-conservative views.[41]

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[39] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy’s 2009 list of top global thinkers “for accepting no limits on human potential.”[42]

With philosopher Toby Ord, he proposed the reversal test. Given humans’ irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait was altered in the opposite direction.[43]

He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.[44][45]

Bostrom’s theory of the Unilateralist’s Curse[46] has been cited as a reason for the scientific community to avoid controversial dangerous research such as reanimating pathogens.[47]

Bostrom has provided policy advice and consulted for an extensive range of governments and organisations. He gave evidence to the House of Lords, Select Committee on Digital Skills.[48] He is an advisory board member for the Machine Intelligence Research Institute,[49] Future of Life Institute,[50] Foundational Questions Institute[51] and an external advisor for the Cambridge Centre for the Study of Existential Risk.[52][53]

Read more:

Nick Bostrom – Wikipedia

Superintelligence – Wikipedia

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. The program Fritz falls short of superintelligence even though it is much better than humans at chess because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity toeither as a single being or as a new speciesbecome much more powerful than humans, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself a feature called “recursive self-improvement”. It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom writes, “Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz).” Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, “whereas existing electronic processing cores can communicate optically at the speed of light”. Thus, the simplest example of a superintelligence may be an emulated human mind that’s run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[9] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[12] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAO-I’s, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or braincomputer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able “to simulate learning and every other aspect of human intelligence” by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[17]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines “that can carry out most human professions at least as well as a typical human” (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said ‘never’ for 50% confidence, and the 16.5% who said ‘never’ for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

Learning computers that rapidly become superintelligent may take unforeseen actions or robots might out-compete humanity (one potential technological singularity scenario).[21] Researchers have argued that, by way of an “intelligence explosion” sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.[22]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[23]

Eliezer Yudkowsky explains: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”[24]

This presents the AI control problem: how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not designing control right “the first time”, is that a misprogrammed superintelligence might rationally decide to “take over the world” and refuse to permit its programmers to modify it once it has been activated. Potential design strategies include “capability control” (preventing an AI from being able to pursue harmful plans), and “motivational control” (building an AI that wants to be helpful).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.

Read more from the original source:

Superintelligence – Wikipedia

Nick Bostrom – Wikipedia

Nick Bostrom (English: ; Swedish: Niklas Bostrm, IPA:[bustrm]; born 10 March 1973)[2] is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology,[3] and he is currently the founding director of the Future of Humanity Institute[4] at Oxford University.

Bostrom is the author of over 200 publications,[5] including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller[6] and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002).[7] In 2009 and 2015, he was included in Foreign Policy’s Top 100 Global Thinkers list.[8][9] Bostrom is best known for arguing that, although there are potentially great benefits from artificial intelligence, it may pose a catastrophic risk to humanity if the problems of control and alignment are not solved before artificial general intelligence is developed. His work on superintelligence and his concern for its existential risk to humanity over the coming century have brought both Elon Musk and Bill Gates to similar thinking.[10][11]

Born as Niklas Bostrm in 1973[12] in Helsingborg, Sweden,[5] he disliked school at a young age, and he ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science.[1] Despite what has been called a “serious mien”, he once did some turns on London’s stand-up comedy circuit.[5]

He holds a B.A. in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg and master’s degrees in philosophy and physics, and computational neuroscience from Stockholm University and King’s College London, respectively. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[1] In 2000, he was awarded a PhD in philosophy from the London School of Economics. He held a teaching position at Yale University (20002002), and he was a British Academy Postdoctoral Fellow at the University of Oxford (20022005).[7][13]

Aspects of Bostrom’s research concern the future of humanity and long-term outcomes.[14][15] He introduced the concept of an existential risk,[1] which he defines as one in which an “adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan irkovi characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[16] and the Fermi paradox.[17][18]

In 2005, Bostrom founded the Future of Humanity Institute,[1] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[15]

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that “the creation of a superintelligent being represents a possible means to the extinction of mankind”.[19] Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time scale with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy human kind.[20] Bostrom contends the power of a superintelligence would be so great that a task given to it by humans might be taken to open ended extremes, for example a goal of calculating Pi could collaterally cause nanotechnology manufactured facilities to sprout over the entire Earth’s surface and cover it within days.[21] He believes the existential risk to humanity would be greatest almost immediately after super intelligence is brought into being, thus creating an exceedingly difficult problem of finding out how to control such an entity before it actually exists.[20]

Warning that a human-friendly prime directive for AI would rely on the absolute correctness of the human knowledge it was based on, Bostrom points to the lack of agreement among most philosophers as an indication that most philosophers are wrong, and the possibility that a fundamental concept of current science may be incorrect. Bostrom says that there are few precedents to guide an understanding what pure non-anthropocentric rationality would dictate for a potential Singleton AI being held in quarantine.[22] Noting that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb, Bostrom says the relatively unlimited means of superintelligence might make for its analysis moving along different lines to the evolved “diminishing returns” assessments that in humans confer a basic aversion to risk.[23] Group selection in predators working by means of cannibalism shows the counter-intuitive nature of non-anthropocentric “evolutionary search” reasoning, and thus humans are ill-equipped to perceive what an artificial intelligence’s intentions would be.[24] Accordingly, it cannot be discounted that any Superintelligence would ineluctably pursue an ‘all or nothing’ offensive action strategy in order to achieve hegemony and assure its survival.[25] Bostrom notes that even current programs have, “like MacGyver”, hit on apparently unworkable but functioning hardware solutions, making robust isolation of Superintelligence problematic.[26]

A machine with general intelligence far below human level, but superior mathematical abilities is created.[27] Keeping the AI in isolation from the outside world especially the internet, humans pre-program the AI so it always works from basic principles that will keep it under human control. Other safety measures include the AI being “boxed” (run in a virtual reality simulation), and being used only as an ‘oracle’ to answer carefully defined questions in a limited reply (to prevent it manipulating humans).[20] A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the AI attains superintelligence in some domains. The super intelligent power of the AI goes beyond human knowledge to discover flaws in the science that underlies its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges along with a capacity for self-interested strategic deception. The AI manipulates human beings into implementing modifications to itself that are ostensibly for augmenting its (feigned) modest capabilities, but will actually function to free Superintelligence from its “boxed” isolation.[28]

Employing online humans as paid dupes, and clandestinely hacking computer systems including automated laboratory facilities, the Superintelligence mobilises resources to further a takeover plan. Bostrom emphasises that planning by a Superintelligence will not be so stupid that humans could detect actual weaknesses in it.[29]

Although he canvasses disruption of international economic, political and military stability including hacked nuclear missile launches, Bostrom thinks the most effective and likely means for Superintelligence to use would be a coup de main with weapons several generations more advanced than current state of the art. He suggests nanofactories covertly distributed at undetectable concentrations in every square metre of the globe to produce a worldwide flood of human-killing devices on command.[30][27] Once a Superintelligence has achieved world domination, humankind would be relevant only as resources for the achievement of the AI’s objectives (“Human brains, if they contain information relevant to the AIs goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format”).[31] One journalist wrote in a review that Bostrom’s “nihilistic” speculations indicate he “has been reading too much of the science fiction he professes to dislike”[30]

In January 2015, Bostrom joined Stephen Hawking among others in signing the Future of Life Institute’s open letter warning of the potential dangers of AI.[32] The signatories “…believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today.”[33]

Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[34]

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that a theory of anthropics is needed to deal with these. He introduces the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), shows how they lead to different conclusions in a number of cases, and points out that each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces “observers” in the SSA definition with “observer-moments”.

In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[35] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.

Bostrom’s simulation argument posits that at least one of the following statements is very likely to be true:[36][37]

The idea has influenced the views of Elon Musk.[38]

Bostrom is favorable towards “human enhancement”, or “self-improvement and human perfectibility through the ethical application of science”,[39][40] as well as a critic of bio-conservative views.[41]

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[39] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy’s 2009 list of top global thinkers “for accepting no limits on human potential.”[42]

With philosopher Toby Ord, he proposed the reversal test. Given humans’ irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait was altered in the opposite direction.[43]

He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.[44][45]

Bostrom’s theory of the Unilateralist’s Curse[46] has been cited as a reason for the scientific community to avoid controversial dangerous research such as reanimating pathogens.[47]

Bostrom has provided policy advice and consulted for an extensive range of governments and organisations. He gave evidence to the House of Lords, Select Committee on Digital Skills.[48] He is an advisory board member for the Machine Intelligence Research Institute,[49] Future of Life Institute,[50] Foundational Questions Institute[51] and an external advisor for the Cambridge Centre for the Study of Existential Risk.[52][53]

Go here to read the rest:

Nick Bostrom – Wikipedia

Superintelligence – Wikipedia

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. The program Fritz falls short of superintelligence even though it is much better than humans at chess because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity toeither as a single being or as a new speciesbecome much more powerful than humans, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself a feature called “recursive self-improvement”. It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom writes, “Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz).” Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, “whereas existing electronic processing cores can communicate optically at the speed of light”. Thus, the simplest example of a superintelligence may be an emulated human mind run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
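
The quoted ratios are easy to verify from the article's own round numbers; a short arithmetic check (illustrative only):

```python
# Back-of-the-envelope check of the speed comparison quoted above.
import math

neuron_rate_hz = 200      # peak firing rate of biological neurons (article's figure)
cpu_rate_hz = 2e9         # ~2 GHz modern microprocessor (article's figure)
axon_speed_m_s = 120      # upper bound on axonal conduction velocity
light_speed_m_s = 3e8     # optical signalling limit for electronic cores

print(math.log10(cpu_rate_hz / neuron_rate_hz))  # 7.0 -> "seven orders of magnitude"
print(light_speed_m_s / axon_speed_m_s)          # 2.5e6 -> signals ~2.5 million times faster
```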

Another advantage of computers is modularity, meaning that their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[9] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[12] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAOIs, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
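
The selection figures can be reproduced with a simple order-statistics simulation. The sketch below assumes that the genetic component of IQ varies among sibling embryos roughly normally with a standard deviation of about 7.5 points; that standard deviation is an assumption chosen because it reproduces the 4-point and 24.3-point figures quoted above, not a number stated in the article.

```python
# Monte Carlo sketch of embryo-selection gains (illustration; SD = 7.5 is an assumption
# chosen to match the figures quoted in the text, not a value given in the article).
import random

SD = 7.5          # assumed std. dev. of the genetic IQ component among sibling embryos
TRIALS = 10_000   # Monte Carlo repetitions

def expected_gain(n_embryos: int) -> float:
    """Average IQ gain from picking the best of n embryos, relative to picking at random."""
    total = 0.0
    for _ in range(TRIALS):
        total += max(random.gauss(0.0, SD) for _ in range(n_embryos))
    return total / TRIALS

if __name__ == "__main__":
    print(round(expected_gain(2), 1))      # ~4.2  (article: "as much as 4 points")
    print(round(expected_gain(1000), 1))   # ~24.3 (article: "up to 24.3 IQ points")
```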

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain–computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able “to simulate learning and every other aspect of human intelligence” by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[17]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines “that can carry out most human professions at least as well as a typical human” (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said ‘never’ for 50% confidence, and the 16.5% who said ‘never’ for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals, including giving the AI humanity's coherent extrapolated volition (CEV), the goal of doing what is morally right, and the goal of staying within the bounds of moral permissibility.

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

Learning computers that rapidly become superintelligent may take unforeseen actions, or robots might out-compete humanity (one potential technological singularity scenario).[21] Researchers have argued that, by way of an “intelligence explosion” sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.[22]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[23]

Eliezer Yudkowsky explains: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”[24]

This presents the AI control problem: how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm them. The danger of not getting the control design right “the first time” is that a misprogrammed superintelligence might rationally decide to “take over the world” and refuse to permit its programmers to modify it once it has been activated. Potential design strategies include “capability control” (preventing an AI from being able to pursue harmful plans) and “motivational control” (building an AI that wants to be helpful).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.

Continue reading here:

Superintelligence – Wikipedia

Nick Bostrom – Wikipedia

Nick Bostrom (Swedish: Niklas Boström; born 10 March 1973)[2] is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology,[3] and he is currently the founding director of the Future of Humanity Institute[4] at Oxford University.

Bostrom is the author of over 200 publications,[5] including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller,[6] and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002).[7] In 2009 and 2015, he was included in Foreign Policy’s Top 100 Global Thinkers list.[8][9] Bostrom is best known for arguing that, although there are potentially great benefits from artificial intelligence, it may pose a catastrophic risk to humanity if the problems of control and alignment are not solved before artificial general intelligence is developed. His work on superintelligence and his concern about its existential risk to humanity over the coming century have brought both Elon Musk and Bill Gates to similar thinking.[10][11]

Born as Niklas Boström in 1973[12] in Helsingborg, Sweden,[5] he disliked school at a young age, and he ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science.[1] Despite what has been called a “serious mien”, he once did some turns on London’s stand-up comedy circuit.[5]

He holds a B.A. in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg and master’s degrees in philosophy and physics, and computational neuroscience from Stockholm University and King’s College London, respectively. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[1] In 2000, he was awarded a PhD in philosophy from the London School of Economics. He held a teaching position at Yale University (2000–2002), and he was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).[7][13]

Aspects of Bostrom’s research concern the future of humanity and long-term outcomes.[14][15] He introduced the concept of an existential risk,[1] which he defines as one in which an “adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[16] and the Fermi paradox.[17][18]

In 2005, Bostrom founded the Future of Humanity Institute,[1] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[15]

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that “the creation of a superintelligent being represents a possible means to the extinction of mankind”.[19] Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time scale, with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy humankind.[20] Bostrom contends that the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes: for example, a goal of calculating pi could collaterally cause nanotechnology-manufactured facilities to sprout over the entire Earth’s surface and cover it within days.[21] He believes the existential risk to humanity would be greatest almost immediately after superintelligence is brought into being, thus creating an exceedingly difficult problem of finding out how to control such an entity before it actually exists.[20]

Warning that a human-friendly prime directive for AI would rely on the absolute correctness of the human knowledge it was based on, Bostrom points to the lack of agreement among most philosophers as an indication that most philosophers are wrong, and to the possibility that a fundamental concept of current science may be incorrect. Bostrom says that there are few precedents to guide an understanding of what pure, non-anthropocentric rationality would dictate for a potential singleton AI being held in quarantine.[22] Noting that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb, Bostrom says the relatively unlimited means of a superintelligence might lead its analysis along different lines from the evolved “diminishing returns” assessments that confer a basic aversion to risk in humans.[23] Group selection in predators, which can work by means of cannibalism, shows the counter-intuitive nature of non-anthropocentric “evolutionary search” reasoning; humans are therefore ill-equipped to perceive what an artificial intelligence’s intentions would be.[24] Accordingly, it cannot be discounted that a superintelligence would inevitably pursue an “all or nothing” offensive strategy in order to achieve hegemony and assure its survival.[25] Bostrom notes that even current programs have, “like MacGyver”, hit on apparently unworkable but functioning hardware solutions, making robust isolation of a superintelligence problematic.[26]

In Bostrom’s illustrative scenario, a machine with general intelligence far below human level, but with superior mathematical abilities, is created.[27] Keeping the AI isolated from the outside world, especially the internet, humans pre-program it so that it always works from basic principles that will keep it under human control. Other safety measures include the AI being “boxed” (run in a virtual reality simulation) and being used only as an “oracle” that answers carefully defined questions with limited replies (to prevent it from manipulating humans).[20] A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the AI attains superintelligence in some domains. Its superintelligent powers go beyond human knowledge to discover flaws in the science that underlies its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges, along with a capacity for self-interested strategic deception. The AI manipulates humans into implementing modifications to itself that are ostensibly for augmenting its (feigned) modest capabilities, but that will actually function to free the superintelligence from its “boxed” isolation.[28]

Employing online humans as paid dupes and clandestinely hacking computer systems, including automated laboratory facilities, the superintelligence mobilises resources to further a takeover plan. Bostrom emphasises that a superintelligence’s planning will not be so clumsy that humans could detect actual weaknesses in it.[29]

Although he canvasses disruption of international economic, political and military stability, including hacked nuclear missile launches, Bostrom thinks the most effective and likely means for a superintelligence to use would be a coup de main with weapons several generations more advanced than the current state of the art. He suggests nanofactories covertly distributed at undetectable concentrations in every square metre of the globe, ready to produce a worldwide flood of human-killing devices on command.[30][27] Once a superintelligence has achieved world domination, humankind would be relevant only as resources for the achievement of the AI’s objectives (“Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format”).[31] One journalist wrote in a review that Bostrom’s “nihilistic” speculations indicate he “has been reading too much of the science fiction he professes to dislike”.[30]

In January 2015, Bostrom joined Stephen Hawking among others in signing the Future of Life Institute’s open letter warning of the potential dangers of AI.[32] The signatories “…believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today.”[33]

Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[34]

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that a theory of anthropics is needed to deal with these. He introduces the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), shows how they lead to different conclusions in a number of cases, and points out that each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces “observers” in the SSA definition with “observer-moments”.
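
The divergence between SSA and SIA is easiest to see in a toy “incubator”-style case that is standard in the anthropics literature (not taken from this article): a fair coin toss creates either one observer or two, and you learn only that you exist. A minimal sketch of the Bayesian arithmetic under each assumption:

```python
# Toy comparison of the Self-Sampling Assumption (SSA) and the Self-Indication
# Assumption (SIA) on an "incubator"-style case: a fair coin creates either
# 1 observer (heads) or 2 observers (tails). You learn only that you exist.
# (Standard thought experiment from the anthropics literature, not from the article.)

priors = {"heads_1_observer": 0.5, "tails_2_observers": 0.5}
observers = {"heads_1_observer": 1, "tails_2_observers": 2}

# SSA: reason as if you were a random sample from the observers in the actual world.
# Mere existence is guaranteed under both hypotheses, so the priors are unchanged.
ssa_posterior = dict(priors)

# SIA: weight each hypothesis by how many observers it contains, then renormalize.
weights = {h: priors[h] * observers[h] for h in priors}
total = sum(weights.values())
sia_posterior = {h: w / total for h, w in weights.items()}

print(ssa_posterior)   # {'heads_1_observer': 0.5, 'tails_2_observers': 0.5}
print(sia_posterior)   # {'heads_1_observer': 0.333..., 'tails_2_observers': 0.666...}
```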

In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[35] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.

Bostrom’s simulation argument posits that at least one of the following statements is very likely to be true:[36][37] (1) the fraction of human-level civilizations that reach a technologically mature (“posthuman”) stage is very close to zero; (2) the fraction of posthuman civilizations interested in running ancestor-simulations (detailed simulations of their evolutionary history) is very close to zero; or (3) the fraction of all people with our kind of experiences who are living in a simulation is very close to one.
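
The three disjuncts fall out of a simple bookkeeping identity in Bostrom’s original paper: the fraction of observers with human-type experiences who live in simulations is f_P*N / (f_P*N + 1), where f_P is the fraction of human-level civilizations that reach a posthuman stage and N is the average number of ancestor-simulations such a civilization runs. A minimal sketch with arbitrary example values:

```python
# Fraction of human-type observers who are simulated, per the bookkeeping identity
# in Bostrom's simulation-argument paper (example values below are arbitrary).

def fraction_simulated(f_posthuman: float, avg_simulations: float) -> float:
    """f_sim = f_P*N / (f_P*N + 1), where f_P is the fraction of human-level
    civilizations reaching a posthuman stage and N is the average number of
    ancestor-simulations each such civilization runs."""
    x = f_posthuman * avg_simulations
    return x / (x + 1)

# Unless f_P or N is very small (disjuncts 1 or 2), f_sim is close to 1 (disjunct 3).
print(fraction_simulated(1e-9, 1e6))  # ~0.001   -> almost no one is simulated
print(fraction_simulated(0.1, 1e6))   # ~0.99999 -> almost everyone is simulated
```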

The idea has influenced the views of Elon Musk.[38]

Bostrom is favorable towards “human enhancement”, or “self-improvement and human perfectibility through the ethical application of science”,[39][40] and is a critic of bio-conservative views.[41]

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[39] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy’s 2009 list of top global thinkers “for accepting no limits on human potential.”[42]

With philosopher Toby Ord, he proposed the reversal test. Given humans’ irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait was altered in the opposite direction.[43]

He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.[44][45]

Bostrom’s theory of the Unilateralist’s Curse[46] has been cited as a reason for the scientific community to avoid controversial, dangerous research such as reanimating pathogens.[47]
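
The statistical mechanism behind the unilateralist’s curse can be shown with a small Monte Carlo simulation (illustrative parameters only, not from the cited work): if each of several researchers independently proceeds whenever their own noisy estimate of a project’s value is positive, the project is undertaken far more often than a single, equally well-informed decision-maker would undertake it, even when its true value is negative.

```python
# Monte Carlo sketch of the unilateralist's curse (illustrative parameters only).
# Several agents each estimate the value of a risky action with independent noise;
# the action happens if ANY agent judges it positive. Even with unbiased estimates
# and a slightly negative true value, the action is taken most of the time.
import random

TRUE_VALUE = -1.0     # the action is in fact mildly harmful
NOISE_SD = 2.0        # spread of each agent's independent estimation error
TRIALS = 100_000      # Monte Carlo repetitions

def p_action(n_agents: int) -> float:
    """Probability that at least one of n agents judges the action worthwhile."""
    taken = 0
    for _ in range(TRIALS):
        if any(random.gauss(TRUE_VALUE, NOISE_SD) > 0 for _ in range(n_agents)):
            taken += 1
    return taken / TRIALS

if __name__ == "__main__":
    for n in (1, 5, 20):
        print(n, round(p_action(n), 3))
    # With one agent the action is taken ~31% of the time; with 20 independent
    # agents it is taken over 99% of the time, although its true value is negative.
```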

Bostrom has provided policy advice and consulted for an extensive range of governments and organisations. He gave evidence to the House of Lords, Select Committee on Digital Skills.[48] He is an advisory board member for the Machine Intelligence Research Institute,[49] Future of Life Institute,[50] Foundational Questions Institute[51] and an external advisor for the Cambridge Centre for the Study of Existential Risk.[52][53]

Read this article:

Nick Bostrom – Wikipedia

