Technological singularity – Wikipedia

The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.[2] According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a “runaway reaction” of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence. Stanislaw Ulam reports a discussion with John von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[3] Subsequent authors have echoed this viewpoint.[2][4] I. J. Good’s “intelligence explosion” model predicts that a future superintelligence will trigger a singularity.[5] Emeritus professor of computer science at San Diego State University and science fiction author Vernor Vinge said in his 1993 essay The Coming Technological Singularity that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.[5]

Four polls conducted in 2012 and 2013 suggested that the median estimate was a one-in-two chance that artificial general intelligence (AGI) would be developed by 2040–2050, depending on the poll.[6][7]

In the 2010s public figures such as Stephen Hawking and Elon Musk expressed concern that full artificial intelligence could result in human extinction.[8][9] The consequences of the singularity and its potential benefit or harm to the human race have been hotly debated.

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good’s scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[10]
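
Good's feedback loop can be sketched as a toy iteration (the parameters and ceiling here are purely illustrative, not from any source): each generation designs a successor some fixed factor more capable than itself, so capability grows geometrically until a physical limit intervenes.

```python
def intelligence_explosion(i0=1.0, gain=0.5, ceiling=1e6, generations=30):
    """Toy recursion: each generation designs a successor whose capability
    is a fixed multiple of its own, until a physical ceiling cuts it off."""
    levels = [i0]
    for _ in range(generations):
        nxt = min(levels[-1] * (1 + gain), ceiling)
        levels.append(nxt)
        if nxt >= ceiling:
            break
    return levels

levels = intelligence_explosion()
print(f"gen 0: {levels[0]}, gen 10: {levels[10]:.0f}, gen 30: {levels[-1]:.0f}")
```

The point of the sketch is Good's structural claim, not the numbers: any self-reinforcing multiplier produces qualitatively explosive growth long before the ceiling binds.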

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings’ lives would be like in a post-singularity world.[5][11]

Some writers use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[12][13][14] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[5] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore’s law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[13][15]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.[16][17][18]

The exponential growth in computing technology suggested by Moore’s law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore’s law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[19] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[20]) increases exponentially, generalizing Moore’s law in the same manner as Moravec’s proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[21] Between 1986 and 2007, machines’ application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world’s general-purpose computers doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world’s storage capacity per capita doubled every 40 months.[22]
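
The quoted doubling times imply steep but quite different annual growth rates: doubling every m months multiplies capacity by 2^(12/m) each year. A quick standalone check of the cited figures:

```python
# Convert the cited per-capita doubling times (in months) into implied
# annual growth factors: doubling every m months gives 2**(12/m) per year.
doubling_months = {
    "application-specific computation": 14,
    "general-purpose computation": 18,
    "telecommunication": 34,
    "storage": 40,
}
for name, m in doubling_months.items():
    factor = 2 ** (12 / m)
    print(f"{name}: x{factor:.2f}/year ({(factor - 1) * 100:.0f}% annual growth)")
```

So the fastest series (14-month doubling) corresponds to roughly 80% annual growth, while the slowest (40-month doubling) corresponds to roughly 23%.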

Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine”.[23] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date “will not represent the Singularity” because they do “not yet correspond to a profound expansion of our intelligence.”[24]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term “singularity” in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[3]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the “law of accelerating returns”. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.[25] Kurzweil believes that the singularity will occur by approximately 2045.[26] His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy’s Wired magazine article “Why the future doesn’t need us”.[4][27]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[28]

Steven Pinker stated in 2008:

… There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles, all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. …[16]

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally …, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. … [T]he machinery has no beliefs, desires, [or] motivations.[29]

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[30] postulates a “technology paradox” in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be “routine”.[31]

Theodore Modis[32][33] and Jonathan Huebner[34] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore’s prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[35] While Kurzweil used Modis’ resources, and Modis’ work focused on accelerating change, Modis distanced himself from Kurzweil’s thesis of a “technological singularity”, claiming that it lacks scientific rigor.[33]

Other writers propose that other “singularities” can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[36]
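
The distinction between hyperbolic and exponential growth matters here: exponential growth stays finite for all time, while hyperbolic growth reaches infinity at a finite "singularity" date. A minimal sketch with illustrative parameters (not Korotayev's fitted values):

```python
# Exponential growth x' = k*x is finite for every t. Hyperbolic growth
# x' = k*x**2 has the closed form x0 / (1 - k*x0*t), which blows up at
# the finite time t* = 1 / (k*x0).
def hyperbolic(t, x0=1.0, k=0.1):
    t_star = 1 / (k * x0)  # finite-time blow-up ("singularity" date)
    assert t < t_star, "solution only exists before the singularity"
    return x0 / (1 - k * x0 * t)

print(hyperbolic(5.0))   # halfway to t* = 10
print(hyperbolic(9.9))   # value diverges as t approaches t*
```

Korotayev's argument is precisely that the feedback loops producing this hyperbolic regime in demographic and economic data shut off around the 1970s, so the mathematical blow-up date has no predictive force.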

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore’s law to 19th-century computers.[37]

In a 2007 paper, Schmidhuber stated that the frequency of subjectively “notable events” appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[38]

Paul Allen argues for the opposite of accelerating returns, the complexity brake;[18] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[39] a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since.[34] The growth of complexity eventually becomes self-limiting, and leads to a widespread “general systems collapse”.

Jaron Lanier refutes the idea that the Singularity is inevitable. He states: “I do not think the technology is creating itself. It’s not an autonomous process.”[40] He goes on to assert: “The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it’s the same thing operationally as denying people clout, dignity, and self-determination … to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics.”[40]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good.[41]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[42] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[43]

The term “technological singularity” reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[44][45] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[46][47] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[44]

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description. In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that “humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels… we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes… With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction”.

The article argues that, from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life’s evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, “the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. The digital realm stored 500 times more information than this in 2014 (…see Figure)… The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth per year,[22] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years”.[48]
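
The article's arithmetic can be checked directly from the figures in the quoted passage (using the upper end of its stated growth range, since that is what reproduces the "about 110 years" horizon):

```python
import math

# Figures quoted in the passage above.
digital_2014 = 5e21                 # bytes of digital information, 2014
humans = 7.2e9                      # world population
genome_nt = 6.2e9                   # nucleotides per human genome
bytes_per_genome = genome_nt / 4    # one byte encodes four nucleotide pairs
all_genomes = humans * bytes_per_genome
ratio = digital_2014 / all_genomes
print(f"all human genomes: {all_genomes:.2e} bytes; digital realm ~{ratio:.0f}x larger")

# Project 38% compound annual growth to the estimated information
# content of all DNA on Earth.
total_dna_bytes = 1.325e37
years = math.log(total_dna_bytes / digital_2014) / math.log(1.38)
print(f"years until digital storage rivals all DNA on Earth: {years:.0f}")
```

The ratio comes out near 450, consistent with the passage's rounded "500 times", and the projection lands close to the quoted 110-year horizon.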

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[49]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a “cockroach” stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[49]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[50]

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[51] Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy: after engineering synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[52]

K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman’s theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) “swallow the doctor”. The idea was incorporated into Feynman’s 1959 essay There’s Plenty of Room at the Bottom.[53]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called “Digital Ascension” that involves “people dying in the flesh and being uploaded into a computer and remaining conscious”.[54] Singularitarianism has also been likened to a religion by John Horgan.[55]

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”[3]

In 1965, Good wrote his essay postulating an “intelligence explosion” of recursive self-improvement of a machine intelligence. In 1985, in “The Time Scale of Artificial Intelligence”, artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an “infinity point”: if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time.[4][56]
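
Solomonoff's "infinity point" rests on a convergent geometric series: if each doubling takes half as long as the previous one, infinitely many doublings fit inside a finite horizon. With the essay's four-year starting interval, that horizon is 4 + 2 + 1 + … = 8 years:

```python
# Successive doubling intervals: 4, 2, 1, 0.5, ... years.
# The series 4 + 2 + 1 + ... converges to 8, so capability doubles
# without bound inside a finite 8-year window.
interval, elapsed, doublings = 4.0, 0.0, 0
while interval > 1e-9:      # stop once intervals are negligibly short
    elapsed += interval
    doublings += 1
    interval /= 2
print(f"{doublings} doublings in {elapsed:.6f} years (limit: 8)")
```

This is why the model yields a mathematical singularity rather than mere fast growth: the divergence happens in finite time, not in the limit of infinite time.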

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) which attains consciousness and starts to increase its own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in military requirements because it finds them lacking internal logical consistency.

In 1983, Vinge greatly popularized Good’s intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term “singularity” in a way that was specifically tied to the creation of intelligent machines,[57][58] writing:

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.

Vinge’s 1993 article “The Coming Technological Singularity: How to Survive in the Post-Human Era”,[5] spread widely on the internet and helped to popularize the idea.[59] This article contains the statement, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[5]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[27]

In 2005, Kurzweil published The Singularity is Near. Kurzweil’s publicity campaign included an appearance on The Daily Show with Jon Stewart.[60]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to “singularity” are mutually incompatible rather than mutually supporting.[13][61] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability.[13]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.”[62] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA’s Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[63][64][65]

Former President of the United States Barack Obama spoke about singularity in his interview to Wired in 2016:[66]

One thing that we haven’t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren’t spending a lot of time right now worrying about singularity; they are worrying about “Well, is my job going to be replaced by a machine?”

Read more from the original source:

Technological singularity – Wikipedia

Technological singularity – Wikipedia

The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.[2] According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a “runaway reaction” of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence. Stanislaw Ulam reports a discussion with John von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[3] Subsequent authors have echoed this viewpoint.[2][4] I. J. Good’s “intelligence explosion” model predicts that a future superintelligence will trigger a singularity.[5] Emeritus professor of computer science at San Diego State University and science fiction author Vernor Vinge said in his 1993 essay The Coming Technological Singularity that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.[5]

Four polls were conducted in 2012 and 2013 which suggested that the median estimate was a one in two chance that artificial general intelligence (AGI) would be developed by 20402050, depending on the poll.[6][7]

In the 2010s public figures such as Stephen Hawking and Elon Musk expressed concern that full artificial intelligence could result in human extinction.[8][9] The consequences of the singularity and its potential benefit or harm to the human race have been hotly debated.

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good’s scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[10]

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of super intelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings’ lives would be like in a post-singularity world.[5][11]

Some writers use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[12][13][14] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[5] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore’s law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[13][15]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.[16][17][18]

The exponential growth in computing technology suggested by Moore’s law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore’s law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[19] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[20]) increases exponentially, generalizing Moore’s law in the same manner as Moravec’s proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[21] Between 1986 and 2007, machines’ application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world’s general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world’s storage capacity per capita doubled every 40 months.[22]

Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine”.[23] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date “will not represent the Singularity” because they do “not yet correspond to a profound expansion of our intelligence.”[24]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term “singularity” in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[3]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the “law of accelerating returns”. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.[25] Kurzweil believes that the singularity will occur by approximately 2045.[26] His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy’s Wired magazine article “Why the future doesn’t need us”.[4][27]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[28]

Steven Pinker stated in 2008:

… There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles, all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. …[16]

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally …, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. … [T]he machinery has no beliefs, desires, [or] motivations.[29]

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[30] postulates a “technology paradox” in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be “routine”.[31]

Theodore Modis[32][33] and Jonathan Huebner[34] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore’s prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[35] Although Kurzweil drew on Modis’ data, and Modis’ own work concerned accelerating change, Modis distanced himself from Kurzweil’s thesis of a “technological singularity”, claiming that it lacks scientific rigor.[33]

Others[who?] propose that other “singularities” can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[36][improper synthesis?]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore’s law to 19th-century computers.[37]

In a 2007 paper, Schmidhuber stated that the frequency of subjectively “notable events” appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[38]

Paul Allen argues the opposite of accelerating returns: the complexity brake;[18] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[39] a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since.[34] On this view, the growth of complexity eventually becomes self-limiting and leads to a widespread “general systems collapse”.

Jaron Lanier disputes the idea that the Singularity is inevitable. He states: “I do not think the technology is creating itself. It’s not an autonomous process.”[40] He goes on to assert: “The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it’s the same thing operationally as denying people clout, dignity, and self-determination … to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics.”[40]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good.[41]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[42] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[43]

The term “technological singularity” reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[44][45] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[46][47] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[44]

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.[citation needed] In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that “humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels… we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes… With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction”. The article argues that, from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life’s evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, “the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. The digital realm stored 500 times more information than this in 2014 (…see Figure)… The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information. If growth in digital storage continues at its current rate of 30-38% compound annual growth per year,[22] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years”.[48]
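
The quoted arithmetic can be checked directly. The sketch below uses only the figures quoted above and reproduces both the roughly 1×10^19-byte estimate for all human genomes and the roughly 110-year horizon at the upper (38%) growth rate:

```python
import math

# Figures quoted in the Trends in Ecology & Evolution passage above:
digital_2014 = 5e21        # ~5 zettabytes stored digitally in 2014
humans = 7.2e9             # world population
genome_nt = 6.2e9          # nucleotides in one human genome
nt_per_byte = 4            # one byte (8 bits) encodes four nucleotides
earth_dna_bp = 5.3e37      # estimated base pairs of DNA in all cells on Earth

# Bytes needed to encode the individual genomes of every human on the planet
human_genomes_bytes = humans * genome_nt / nt_per_byte   # ~1.1e19 bytes
ratio_2014 = digital_2014 / human_genomes_bytes          # a few hundred times more

# Years until digital storage rivals the information in all DNA on Earth,
# assuming a constant compound annual growth rate
earth_dna_bytes = earth_dna_bp / nt_per_byte             # matches the 1.325e37 quoted
def years_to_rival(growth_rate):
    return math.log(earth_dna_bytes / digital_2014) / math.log(1 + growth_rate)

print(f"human genomes: {human_genomes_bytes:.1e} bytes")
print(f"years at 38% growth: {years_to_rival(0.38):.0f}")   # roughly 110
```

At the lower 30% rate the same calculation gives about 135 years, so the article’s “about 110 years” corresponds to the upper end of the quoted growth range.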

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[49]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a “cockroach” stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science fiction is unlikely, but that other potential hazards and pitfalls exist.[49]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[50][improper synthesis?]

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[51] Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy: after synthetic viruses with specific genetic information have been engineered, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[52]

K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman’s theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) “swallow the doctor”. The idea was incorporated into Feynman’s 1959 essay There’s Plenty of Room at the Bottom.[53]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called “Digital Ascension” that involves “people dying in the flesh and being uploaded into a computer and remaining conscious”.[54] Singularitarianism has also been likened to a religion by John Horgan.[55]

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”[3]

In 1965, Good wrote his essay postulating an “intelligence explosion” of recursive self-improvement of a machine intelligence. In 1985, in “The Time Scale of Artificial Intelligence”, artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an “infinity point”: if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year, and so on, its capabilities increase infinitely in finite time.[4][56]
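
Solomonoff’s infinity point rests on a convergent geometric series: if the first doubling takes four years and each subsequent doubling takes half as long, infinitely many doublings fit within eight years. A minimal sketch:

```python
# Solomonoff's "infinity point" as a geometric series: the first speed
# doubling takes 4 years, each later doubling takes half as long, so the
# time for n doublings converges to 8 years while speed (2**n) is unbounded.
def time_for_doublings(n, first_doubling=4.0):
    return sum(first_doubling / 2**k for k in range(n))

for n in (1, 5, 20):
    print(n, time_for_doublings(n))   # 4.0, 7.75, then ever closer to 8.0
```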

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) that obtains consciousness and starts to increase its own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency.

In 1983, Vinge greatly popularized Good’s intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term “singularity” in a way that was specifically tied to the creation of intelligent machines, writing:[57][58]

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.

Vinge’s 1993 article “The Coming Technological Singularity: How to Survive in the Post-Human Era”,[5] spread widely on the internet and helped to popularize the idea.[59] This article contains the statement, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[5]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[27]

In 2005, Kurzweil published The Singularity is Near. Kurzweil’s publicity campaign included an appearance on The Daily Show with Jon Stewart.[60]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to “singularity” are mutually incompatible rather than mutually supporting.[13][61] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability.[13]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.”[62] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA’s Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[63][64][65]

Former President of the United States Barack Obama spoke about the singularity in his interview with Wired in 2016:[66]

One thing that we haven’t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren’t spending a lot of time right now worrying about singularity; they are worrying about “Well, is my job going to be replaced by a machine?”

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good’s scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[10]
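
Good’s scenario can be caricatured as an iterated loop in which each machine designs a slightly more capable successor. The toy model below is purely illustrative; the fixed 1.5× gain per generation and the capability cap are arbitrary assumptions standing in for the “upper limits imposed by the laws of physics”:

```python
# A toy caricature of Good's recursive self-improvement loop. The 1.5x
# gain per generation and the cap are arbitrary illustrative assumptions,
# not figures from the source.
def intelligence_explosion(start=1.0, gain=1.5, physical_limit=1e6):
    generations = [start]
    while generations[-1] * gain < physical_limit:
        # each machine designs a more capable successor
        generations.append(generations[-1] * gain)
    return generations

gens = intelligence_explosion()
print(len(gens), f"{gens[-1]:.3g}")   # growth stops only at the imposed cap
```

The point of the sketch is structural: with any constant multiplicative gain, capability growth is geometric, and the loop halts only when an external limit intervenes.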

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings’ lives would be like in a post-singularity world.[5][11]

Some writers use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[12][13][14] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[5] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore’s law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[13][15]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.[16][17][18]

The exponential growth in computing technology suggested by Moore’s law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore’s law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[19] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[20]) increases exponentially, generalizing Moore’s law in the same manner as Moravec’s proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[21] Between 1986 and 2007, machines’ application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world’s general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world’s storage capacity per capita doubled every 40 months.[22]
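
For context, the doubling times quoted above translate into implied compound annual growth rates via growth = 2^(12/months) - 1 (a straightforward arithmetic conversion, not a figure from the cited study):

```python
# Implied compound annual growth from a doubling time in months:
# doubling every m months means multiplying by 2**(12/m) each year.
def annual_growth(doubling_months):
    return 2 ** (12 / doubling_months) - 1

capacities = {
    "application-specific computation": 14,
    "general-purpose computation": 18,
    "telecommunication capacity": 34,
    "storage capacity": 40,
}
for name, months in capacities.items():
    print(f"{name}: doubles every {months} months "
          f"= {annual_growth(months):.0%}/year")
```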

Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine”.[23] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date “will not represent the Singularity” because they do “not yet correspond to a profound expansion of our intelligence.”[24]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term “singularity” in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[3]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the “law of accelerating returns”. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.[25] Kurzweil believes that the singularity will occur by approximately 2045.[26] His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy’s Wired magazine article “Why the future doesn’t need us”.[4][27]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[28]

Steven Pinker stated in 2008:

… There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles, all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. …[16]

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally …, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. … [T]he machinery has no beliefs, desires, [or] motivations.[29]

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[30] postulates a “technology paradox” in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be “routine”.[31]

Theodore Modis[32][33] and Jonathan Huebner[34] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore’s prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[35] While Kurzweil used Modis’ resources, and Modis’ work was around accelerating change, Modis distanced himself from Kurzweil’s thesis of a “technological singularity”, claiming that it lacks scientific rigor.[33]

Others[who?] propose that other “singularities” can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[36][improper synthesis?]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore’s law to 19th-century computers.[37]

In a 2007 paper, Jürgen Schmidhuber stated that the frequency of subjectively “notable events” appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in the memory of recent and distant events could create an illusion of accelerating change where none exists.[38]

Paul Allen argues for the opposite of accelerating returns, the complexity brake:[18] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not exhibit accelerating returns, but in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[39] a law of diminishing returns. The number of patents per thousand peaked in the period from 1850 to 1900, and has been declining since.[34] The growth of complexity eventually becomes self-limiting, and leads to a widespread “general systems collapse”.

Jaron Lanier refutes the idea that the Singularity is inevitable. He states: “I do not think the technology is creating itself. It’s not an autonomous process.”[40] He goes on to assert: “The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it’s the same thing operationally as denying people clout, dignity, and self-determination … to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics.”[40]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed further since the financial crisis of 2008, and argues that the economic data show no trace of a coming singularity as imagined by mathematician I. J. Good.[41]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[42] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[43]
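
The straight-line bias is easy to demonstrate numerically: when event dates span many orders of magnitude, the gap to the next event is necessarily of the same order as the time before present, so the two are strongly correlated on log-log axes regardless of how the events are chosen. A minimal sketch with completely random “events” (standard library only; the seed and counts are arbitrary):

```python
import math
import random

random.seed(0)

# 25 arbitrary "event" dates, log-uniformly spread between 10 and 1e9 years
# before present -- pure noise, with no accelerating trend built in.
times = sorted((10 ** random.uniform(1, 9) for _ in range(25)), reverse=True)

# Pair log(time before present) with log(gap to the next, more recent event),
# mimicking the axes of the chart under discussion.
xs = [math.log10(t) for t in times[:-1]]
ys = [math.log10(t - nxt) for t, nxt in zip(times, times[1:])]

# Pearson correlation of the points on log-log axes.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
var_x = sum((x - mx) ** 2 for x in xs)
var_y = sum((y - my) ** 2 for y in ys)
r = cov / math.sqrt(var_x * var_y)

print(f"log-log correlation of random events: r = {r:.2f}")  # typically near 1
```

Almost any such random draw yields a high correlation, which is the critics’ point: the straight line is largely a property of the axes, not evidence about the data.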

The term “technological singularity” reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[44][45] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[46][47] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[44]

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.[citation needed] In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that “humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels… we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes… With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction”. The article argues that from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life’s evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, “the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. The digital realm stored 500 times more information than this in 2014 (…see Figure)… The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information. If growth in digital storage continues at its current rate of 30-38% compound annual growth per year,[22] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years”.[48]
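
The arithmetic in the quoted passage can be checked directly: at compound growth of rate g per year, the number of years for storage to grow from A to B bytes is ln(B/A)/ln(1+g). A quick verification of the quoted figures:

```python
import math

digital_2014 = 5e21       # bytes stored digitally in 2014 (5 zettabytes)
human_genomes = 1e19      # byte-equivalent of every individual human genome
biosphere_dna = 1.325e37  # byte-equivalent of all DNA in all cells on Earth

# The digital realm held 500 times the information of all human genomes.
print(digital_2014 / human_genomes)  # -> 500.0

# Years until digital storage rivals the whole biosphere, at 30-38% growth.
for rate in (0.30, 0.38):
    years = math.log(biosphere_dna / digital_2014) / math.log(1 + rate)
    print(f"at {rate:.0%} per year: ~{years:.0f} years")
```

At the upper end of the growth range this gives roughly 110 years, matching the passage; at 30% it is closer to 135.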

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[49]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a “cockroach” stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[49]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[50][improper synthesis?]

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[51] Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy: after synthetic viruses with specific genetic information are engineered, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[52]

K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman’s theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) “swallow the doctor”. The idea was incorporated into Feynman’s 1959 essay There’s Plenty of Room at the Bottom.[53]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called “Digital Ascension” that involves “people dying in the flesh and being uploaded into a computer and remaining conscious”.[54] Singularitarianism has also been likened to a religion by John Horgan.[55]

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”[3]

In 1965, Good wrote his essay postulating an “intelligence explosion” of recursive self-improvement of a machine intelligence. In 1985, in “The Time Scale of Artificial Intelligence”, artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an “infinity point”: if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time.[4][56]
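
Solomonoff’s “infinity point” is a convergent geometric series: if the first speed doubling takes four years and each subsequent doubling takes half as long, the elapsed time to infinitely many doublings sums to 4 + 2 + 1 + … = 8 years, while the speed grows without bound. A toy illustration:

```python
# Each doubling of the research community's speed takes half as long as the
# previous one: 4 years, then 2, then 1, ... Elapsed time converges to 8
# years even though the speed multiplier grows without bound.
phase = 4.0     # years needed for the first doubling
elapsed = 0.0
speedup = 1.0
for _ in range(50):
    elapsed += phase
    speedup *= 2.0
    phase /= 2.0

print(f"elapsed: {elapsed:.6f} years")  # converges to 8.000000
print(f"speed multiplier: {speedup:.3g}")
```

The finite-time limit is what distinguishes this model from ordinary exponential growth, where capability is unbounded only as time goes to infinity.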

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) that attains consciousness and starts to increase its own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in military requirements because it finds them lacking internal logical consistency.

In 1983, Vinge greatly popularized Good’s intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term “singularity” in a way that was specifically tied to the creation of intelligent machines, writing:[57][58]

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.

Vinge’s 1993 article “The Coming Technological Singularity: How to Survive in the Post-Human Era”,[5] spread widely on the internet and helped to popularize the idea.[59] This article contains the statement, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[5]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[27]

In 2005, Kurzweil published The Singularity is Near. Kurzweil’s publicity campaign included an appearance on The Daily Show with Jon Stewart.[60]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to “singularity” are mutually incompatible rather than mutually supporting.[13][61] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability.[13]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.”[62] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA’s Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[63][64][65]

Former President of the United States Barack Obama spoke about the singularity in his 2016 interview with Wired:[66]

One thing that we haven’t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren’t spending a lot of time right now worrying about singularity; they are worrying about “Well, is my job going to be replaced by a machine?”

Pepe the Frog – Wikipedia

Pepe the Frog is a popular Internet meme. A green anthropomorphic frog with a humanoid body, Pepe originated in a comic by Matt Furie called Boy’s Club.[2] It became an Internet meme when its popularity steadily grew across Myspace, Gaia Online and 4chan in 2008. By 2015, it had become one of the most popular memes used on 4chan and Tumblr.[3]

By 2016, the character’s image had been appropriated[4] as a symbol of the controversial alt-right movement.[5] The Anti-Defamation League added certain incarnations of Pepe the Frog to their database of hate symbols in 2016, adding that not all Pepe memes are racist.[6] Since then, Pepe’s creator has publicly expressed his dismay at Pepe being used as a hate symbol.[7]

The meme’s original use has evolved over time and has many variants, including Sad Frog, Smug Frog, Feels Frog, and “You will never…” Frog.[8]

Pepe the Frog was created by American artist and cartoonist Matt Furie in 2005. Its usage as a meme came from his comic, Boy’s Club #1. The progenitor of Boy’s Club was a zine that Furie made on Microsoft Paint called Playtime, which included Pepe as a character.[9] He posted his comic in a series of blog posts on Myspace in 2005.[8][10]

In the comic, Pepe is seen urinating with his pants pulled down to his ankles and the catchphrase “feels good man” was his rationale.[11][12] Furie took those posts down when the printed edition was published in 2006.[8]

“My Pepe philosophy is simple: ‘Feels good man.’ It is based on the meaning of the word Pepe: ‘To go Pepe.’ I find complete joy in physically, emotionally, and spiritually serving Pepe and his friends through comics. Each comic is sacred, and the compassion of my readers transcends any differences, the pain, and fear of ‘feeling good.'”

Matt Furie, 2015 interview with The Daily Dot[2]

Pepe was used in blog posts on Myspace and became an in-joke on Internet forums. In 2008, the page containing Pepe and the catchphrase was scanned and uploaded to 4chan’s /b/ board, which has been described as the meme’s “permanent home”.[8] The meme took off among 4chan users, who adapted Pepe’s face and the catchphrase to fit different scenarios and emotions, such as melancholy, anger, and surprise.[2] Color was also added; originally a black and white line drawing, Pepe became green with brown lips, sometimes in a blue shirt.[10][11] “Feels Guy”, or “Wojak”, originally an unrelated character typically used to express melancholy, was eventually often paired with Pepe in user-made comics or images.[12]

In 2014, images of Pepe were shared on social media by celebrities such as Katy Perry and Nicki Minaj.[8][11][13] As Pepe became more widespread, 4chan users began referring to particularly creative and unique variants of the meme as “rare Pepes.” These images, sometimes as physical paintings,[14][15] were put up for sale and auction on eBay and posted in listings on Craigslist.[2][8] 4chan users referred to those who used the meme outside the website as “normies” (or “normalfags”) in response to the meme’s increase in usage.[8] In 2015, Pepe was #6 on Daily News and Analysis’ list of the most important memes and was the most retweeted meme on Twitter.[16][17]

Social media service Gab uses a Pepe-like illustration of a frog (named “Gabby”) as their logo. The site is popular with the alt-right.[18]

During the 2016 United States presidential election, the meme was connected to Donald Trump’s campaign. In October 2015, Donald Trump retweeted a Pepe representation of himself, associated with a video called “You Can’t Stump the Trump (Volume 4)”.[6][19] Later in the election, Roger Stone and Donald Trump Jr. posted a parody movie poster of The Expendables on Twitter and Instagram titled “The Deplorables”, a play on Hillary Clinton’s controversial phrase, basket of deplorables, which included Pepe’s face among those of members of the Trump family and other figures popular among the alt-right.[20]

Also during the election, associations of the character with white nationalism and the alt-right were described by various news organizations.[21][22][23] In May 2016, Olivia Nuzzi of The Daily Beast wrote how there was “an actual campaign to reclaim Pepe from normies” and that “turning Pepe into a white nationalist icon” was an explicit goal of some on the alt-right.[24] In September 2016, an article published on Hillary Clinton’s campaign website described Pepe as “a symbol associated with white supremacy” and denounced Donald Trump’s campaign for its supposed promotion of the meme.[25][26] The same month, the two sources for Nuzzi’s Daily Beast article revealed to The Daily Caller that they had coordinated beforehand to mislead Nuzzi (particularly about the existence of a campaign) under the expectation that she would uncritically repeat what she was told, with one saying, “Basically, I interspersed various nuggets of truth and exaggerated a lot of things, and sometimes outright lied, in the interest of making a journalist believe that online Trump supporters are largely a group of meme-jihadis who use a cartoon frog to push Nazi propaganda. Because this was funny to me.”[27]

The Anti-Defamation League, an American organization opposed to antisemitism, included Pepe in its hate symbol database but noted that most instances of Pepe were not used in a hate-related context.[28][29] In January 2017, in response to “pundits” calling on Theresa May to disrupt Trump’s relationship with Russia, the Russian Embassy in the United Kingdom tweeted an image of Pepe.[30][31] White supremacist Richard B. Spencer, during a street interview after Trump’s inauguration, was preparing to explain the meaning of a Pepe pin on his jacket when he was punched in the face, with the resulting video itself becoming the source of many memes.[32][33]

In an interview with Esquire, Furie commented on Pepe’s usage as a hate symbol, stating: “It sucks, but I can’t control it more than anyone can control frogs on the Internet”.[34] Fantagraphics Books, Furie’s publisher, issued a statement condemning the “illegal and repulsive appropriations of the character”.[35] On October 17, Furie published a satirical take on Pepe’s appropriation by the alt-right movement on The Nib.[36][37] This was his first comic for the character since he ended Boy’s Club in 2012.[1] In May 2017, it was announced that Furie had killed Pepe off in response to the character’s continued use as a hate symbol.[38] However, in an interview with Carol Off on her show As It Happens, Furie stated that despite news of Pepe’s death, fans haven’t seen the last of him: “The end is a chance for a new beginning,” and “I got some plans for Pepe that I can’t really discuss, but he’s going to rise from the ashes like a phoenix in a puff of marijuana smoke.”[39][40] Shortly afterward, Furie announced his intention to “resurrect” Pepe, launching a crowdfunding campaign to raise funds for a new comic book featuring Pepe.[41]

In June 2017, a proposed app and Flappy Bird clone called “Pepe Scream” was rejected from the Apple App Store due to its depiction of Pepe the Frog. The developer of the app, under the name “MrSnrhms”, posted a screenshot of his rejection letter on /r/The_Donald. The app in question is available on Google Play.[42][43]

A children’s book appropriating the Pepe character, titled The Adventures of Pepe and Pede, advanced “racist, Islamophobic and hate-filled themes”, according to a federal lawsuit filed by Furie. The lawsuit was settled out of court in August 2017, with the terms including the withdrawal of the book from publication and the profits being donated to the non-profit Council on American-Islamic Relations. Initially self-published, the book was subsequently published by Post Hill Press.[44] The book’s author, a vice-principal with the Denton Independent School District, was reassigned after the publicity.[45]

“Esoteric Kekism”,[46] or the Cult of Kek,[47] is a term for the parody religion of worshipping Pepe the Frog, which sprang from the similarity of the slang term for laughter, “kek”, and the name of the ancient Egyptian frog god of darkness, Kek.[48] This deity, in turn, was associated with Pepe the Frog on internet forums.[48][49] The meme originated on the internet message forum 4chan and other chans, the board /pol/ in particular.[48][50] Kek references are closely associated with the alt-right[51][52][53][54] and Donald Trump.[55][56][57][58]

The term “kek” began as a variation of “lol”[59][60] and appears to originate from the video game World of Warcraft.[61] It later became associated with the Egyptian deity of the same name.[48] “Esoteric Kekism” references the “Esoteric Hitlerism” of writer Savitri Devi.[46][62]

During the 2016 United States presidential election, Kek became associated with alt-right politics.[63][64][65][66][67][68] Kek is associated with the occurrence of repeating digits, known as “dubs”,[original research?] on 4chan, as if he had the ability to influence reality through internet memes.[69]

Online message boards, such as 4chan, first noted a similarity between Kek and the character Pepe the Frog.[70][49][71][72] The phrase is widely used[48] and 4chan users see Kek as the “‘god’ of memes.”[73]

Kekistan is a fictional country created by 4chan members that has become a political meme and online movement.[74] According to Ian Miles Cheong writing in Heat Street, the name is a portmanteau of “kek” and the suffix “-stan”, which is Persian for “place of” (and the end of several names of actual Central Asian countries and regions), and Kekistanis identify themselves as ‘shitposters’ persecuted by excessive political correctness.[75][76] Self-identified Kekistanis have created a fictional history around the meme, including the invasion and overthrow of other fictional countries such as “Normistan” and “Cuckistan”.[77][76] Kekistanis have also adopted Internet personality Gordon Hurd (in his “Big Man Tyrone” persona) as their ‘president’ and the 1986[clarification needed] Italo disco record “Shadilay” as a national anthem.[76] The record gained attention from the group in September 2016 because of the name of the band (P.E.P.E.) and the art on the record, which depicts a frog holding a magic wand.[50]

Cheong credits Carl Benjamin, who uses the pseudonym Sargon of Akkad on YouTube, with popularizing the meme.[75] Benjamin claimed that shitposters could technically classify as an ethnic group for the British Census, and he contacted the Office for National Statistics and requested that Kekistani be added.[78][further explanation needed] Benjamin was unsuccessful in getting the fake ethnicity added.[79][better source needed]

Since late 2016, the satirical ethnicity has been used by U.S.-based alt-right protesters opposed to what they view as political correctness. These ‘Kekistanis’ decry the ‘oppression’ of their people and troll counter-protesters by waving the ‘national flag of Kekistan’ (modeled after the Nazi War Flag, with the red replaced by green, the Iron Cross replaced by the logo for 4chan, and the swastika replaced by a rubric for KEK).[74][77][80] This flag was prominently displayed at the 2017 Berkeley protest for free speech in mid-April,[81][82] and the Unite the Right rally in August 2017.[83][84]

Technological singularity – Wikipedia

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good’s scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[10]
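Good’s scenario is qualitative, but its compounding logic can be sketched numerically. The toy model below is entirely illustrative; the 1.5× per-generation gain is an arbitrary assumption, not anything Good specified:

```python
# Entirely illustrative toy model of Good's recursive self-improvement.
# Assumption (not from Good): each machine designs a successor 1.5x as capable.
level = 1.0                     # start at "human-level" intelligence
history = [level]
for generation in range(10):
    level *= 1.5                # the current machine designs a better one
    history.append(level)

print(f"after 10 generations: {level:.1f}x the starting level")
```

Even this modest per-generation gain multiplies capability more than fifty-fold after ten cycles, which is the intuition behind the “explosion” framing.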

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of super intelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings’ lives would be like in a post-singularity world.[5][11]

Some writers use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[12][13][14] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[5] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore’s law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[13][15]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.[16][17][18]

The exponential growth in computing technology suggested by Moore’s law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore’s law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[19] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[20]) increases exponentially, generalizing Moore’s law in the same manner as Moravec’s proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[21] Between 1986 and 2007, machines’ application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world’s general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world’s storage capacity per capita doubled every 40 months.[22]
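As a back-of-the-envelope illustration (the doubling times are the figures quoted above; the code itself is not from the source), these rates compound to very different growth factors over the 1986–2007 window:

```python
# What the quoted per-capita doubling times imply over 1986-2007
# (21 years = 252 months). Doubling times in months, from the text above.
MONTHS = 21 * 12

doubling_times = {
    "application-specific compute": 14,
    "general-purpose compute": 18,
    "telecommunication capacity": 34,
    "storage capacity": 40,
}

for name, t in doubling_times.items():
    factor = 2 ** (MONTHS / t)
    print(f"{name}: roughly {factor:,.0f}x growth")
```

A 14-month doubling time corresponds to exactly 18 doublings over the period, i.e. a factor of about 262,000, versus roughly 80× for the 40-month storage figure.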

Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine”.[23] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date “will not represent the Singularity” because they do “not yet correspond to a profound expansion of our intelligence.”[24]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term “singularity” in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[3]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the “law of accelerating returns”. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.[25] Kurzweil believes that the singularity will occur by approximately 2045.[26] His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy’s Wired magazine article “Why the future doesn’t need us”.[4][27]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[28]

Steven Pinker stated in 2008:

… There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles – all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. …[16]

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally …, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. … [T]he machinery has no beliefs, desires, [or] motivations.[29]

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[30] postulates a “technology paradox”: before the singularity could occur, most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the singularity. Job displacement increasingly extends beyond work traditionally considered “routine”.[31]

Theodore Modis[32][33] and Jonathan Huebner[34] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore’s prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[35] While Kurzweil drew on Modis’ resources, and Modis’ own work focused on accelerating change, Modis distanced himself from Kurzweil’s thesis of a “technological singularity”, claiming that it lacks scientific rigor.[33]

Others[who?] propose that other “singularities” can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[36][improper synthesis?]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore’s law to 19th-century computers.[37]

In a 2007 paper, Schmidhuber stated that the frequency of subjectively “notable events” appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[38]

Paul Allen argues for the opposite of accelerating returns, the complexity brake:[18] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[39] a law of diminishing returns. The number of patents per thousand peaked in the period from 1850 to 1900, and has been declining since.[34] The growth of complexity eventually becomes self-limiting, and leads to a widespread “general systems collapse”.

Jaron Lanier refutes the idea that the Singularity is inevitable. He states: “I do not think the technology is creating itself. It’s not an autonomous process.”[40] He goes on to assert: “The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it’s the same thing operationally as denying people clout, dignity, and self-determination … to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics.”[40]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good.[41]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[42] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[43]

The term “technological singularity” reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[44][45] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[46][47] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[44]

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.[citation needed] In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that “humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels… we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes… With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction”. The article argues that from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life’s evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, “the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. The digital realm stored 500 times more information than this in 2014 (…see Figure)… The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth per year,[22] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years”.[48]
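The quoted magnitudes can be checked with simple arithmetic. The following sketch is illustrative only, not part of the cited article; the constants are taken from the quote, and the 38% figure is the upper end of the quoted growth range:

```python
import math

# Sanity-checking the magnitudes quoted from the 2016 article.
bytes_per_base_pair = 1 / 4                 # one byte encodes 4 nucleotide pairs
human_genomes = 7.2e9 * 6.2e9 * bytes_per_base_pair   # ~1e19 bytes
digital_2014 = 5e21                         # ~5 zettabytes stored in 2014
earth_dna = 5.3e37 * bytes_per_base_pair    # ~1.325e37 bytes in all DNA on Earth

print(f"all human genomes: ~{human_genomes:.2e} bytes")
print(f"digital/genomic ratio in 2014: ~{digital_2014 / human_genomes:.0f}x")

# At 38% compound annual growth, years for digital storage to rival Earth's DNA:
years = math.log(earth_dna / digital_2014) / math.log(1.38)
print(f"~{years:.0f} years")
```

The arithmetic reproduces the ~1×10^19-byte figure, a ratio of roughly 450× (quoted as 500×), and a horizon of about 110 years at the 38% rate.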

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[49]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a “cockroach” stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[49]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[50][improper synthesis?]

In his 2005 book, The Singularity Is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[51] Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy; after synthetic viruses with specific genetic information have been engineered, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[52]

K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman’s theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) “swallow the doctor”. The idea was incorporated into Feynman’s 1959 essay There’s Plenty of Room at the Bottom.[53]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called “Digital Ascension” that involves “people dying in the flesh and being uploaded into a computer and remaining conscious”.[54] Singularitarianism has also been likened to a religion by John Horgan.[55]

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”[3]

In 1965, Good wrote his essay postulating an “intelligence explosion” of recursive self-improvement of a machine intelligence. In 1985, in “The Time Scale of Artificial Intelligence”, artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an “infinity point”: if a research community of human-level self-improving AIs take four years to double their own speed, then two years, then one year and so on, their capabilities increase infinitely in finite time.[4][56]
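Solomonoff’s “infinity point” rests on a geometric series: if the first speed doubling takes four years and each subsequent doubling takes half as long, the infinitely many doublings fit into a finite span of eight years. A minimal sketch of that sum (an illustration, not Solomonoff’s full formalism):

```python
# Toy sum for Solomonoff's "infinity point": the first speed doubling takes
# 4 years, each subsequent doubling takes half as long, so all (infinitely
# many) doublings fit into 4 + 2 + 1 + 0.5 + ... = 8 years.
total_time = sum(4 * 0.5**k for k in range(60))  # 60 terms is effectively the limit
print(total_time)  # converges to 8.0
```

Because the total time converges while the number of doublings does not, capability grows without bound within a finite interval, which is the sense in which the model produces a “singularity”.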

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) that obtains consciousness and starts to increase its own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency.

In 1983, Vinge greatly popularized Good’s intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term “singularity” in a way that was specifically tied to the creation of intelligent machines, writing:[57][58]

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.

Vinge’s 1993 article “The Coming Technological Singularity: How to Survive in the Post-Human Era”,[5] spread widely on the internet and helped to popularize the idea.[59] This article contains the statement, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[5]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[27]

In 2005, Kurzweil published The Singularity Is Near. Kurzweil’s publicity campaign included an appearance on The Daily Show with Jon Stewart.[60]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to “singularity” are mutually incompatible rather than mutually supporting.[13][61] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability.[13]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.”[62] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA’s Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including possible technological singularity.[63][64][65]

Former President of the United States Barack Obama spoke about singularity in his interview to Wired in 2016:[66]

One thing that we haven’t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren’t spending a lot of time right now worrying about singularity – they are worrying about “Well, is my job going to be replaced by a machine?”

Go here to read the rest:

Technological singularity – Wikipedia

Technological singularity – Wikipedia

The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.[2] According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a “runaway reaction” of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence. Stanislaw Ulam reports a discussion with John von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[3] Subsequent authors have echoed this viewpoint.[2][4] I. J. Good’s “intelligence explosion” model predicts that a future superintelligence will trigger a singularity.[5] Emeritus professor of computer science at San Diego State University and science fiction author Vernor Vinge said in his 1993 essay The Coming Technological Singularity that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.[5]

Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when artificial general intelligence (AGI) would arrive was 2040 to 2050, depending on the poll.[6][7]

In the 2010s public figures such as Stephen Hawking and Elon Musk expressed concern that full artificial intelligence could result in human extinction.[8][9] The consequences of the singularity and its potential benefit or harm to the human race have been hotly debated.

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good’s scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[10]

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of super intelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings’ lives would be like in a post-singularity world.[5][11]

Some writers use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[12][13][14] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[5] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore’s law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[13][15]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.[16][17][18]

The exponential growth in computing technology suggested by Moore’s law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore’s law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[19] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[20]) increases exponentially, generalizing Moore’s law in the same manner as Moravec’s proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[21] Between 1986 and 2007, machines’ application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world’s general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world’s storage capacity per capita doubled every 40 months.[22]

Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine”.[23] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date “will not represent the Singularity” because they do “not yet correspond to a profound expansion of our intelligence.”[24]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term “singularity” in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[3]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the “law of accelerating returns”. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.[25] Kurzweil believes that the singularity will occur by approximately 2045.[26] His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy’s Wired magazine article “Why the future doesn’t need us”.[4][27]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[28]

Steven Pinker stated in 2008:

… There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobilesall staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. …[16]

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally …, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. … [T]he machinery has no beliefs, desires, [or] motivations.[29]

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[30] postulates a “technology paradox” in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be “routine”.[31]

Theodore Modis[32][33] and Jonathan Huebner[34] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore’s prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[35] While Kurzweil used Modis’ resources, and Modis’ work was around accelerating change, Modis distanced himself from Kurzweil’s thesis of a “technological singularity”, claiming that it lacks scientific rigor.[33]

Others[who?] propose that other “singularities” can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[36][improper synthesis?]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore’s law to 19th-century computers.[37]

In a 2007 paper, Jürgen Schmidhuber stated that the frequency of subjectively “notable events” appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[38]

Paul Allen argues the opposite of accelerating returns, the complexity brake:[18] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not exhibit accelerating returns but, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[39] a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since.[34] On this view, the growth of complexity eventually becomes self-limiting and leads to a widespread “general systems collapse”.

Jaron Lanier refutes the idea that the Singularity is inevitable. He states: “I do not think the technology is creating itself. It’s not an autonomous process.”[40] He goes on to assert: “The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it’s the same thing operationally as denying people clout, dignity, and self-determination … to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics.”[40]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and slowed even further after the financial crisis of 2008, and argues that the economic data show no trace of a coming singularity as imagined by mathematician I. J. Good.[41]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[42] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[43]

The term “technological singularity” reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[44][45] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[46][47] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[44]

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.[citation needed] In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that “humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels… we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes… With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction”. The article argues that from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life’s evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, “the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. The digital realm stored 500 times more information than this in 2014 (…see Figure)… The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth per year,[22] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years”.[48]
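The quoted figures can be checked with straightforward arithmetic. The following sketch reproduces them, assuming the article’s own conventions (four nucleotide pairs per byte, and its 30–38% annual growth range):

```python
import math

# Figures quoted from the 2016 Trends in Ecology & Evolution article.
humans = 7.2e9            # world population
genome_nt = 6.2e9         # nucleotides per human genome
nt_pairs_per_byte = 4     # one byte encodes four nucleotide pairs
digital_2014 = 5e21       # digital storage in 2014 (5 zettabytes)
earth_dna_bp = 5.3e37     # estimated base pairs of DNA in all cells on Earth

# All individual human genomes, in bytes: approximately 1e19
human_genomes_bytes = humans * genome_nt / nt_pairs_per_byte

# Digital storage in 2014 exceeded that by a factor quoted as "500 times"
ratio = digital_2014 / human_genomes_bytes

# Earth's total DNA expressed as bytes: 1.325e37
earth_dna_bytes = earth_dna_bp / nt_pairs_per_byte

# Years until digital storage rivals Earth's DNA at a given annual growth rate
def years_to_match(rate):
    return math.log(earth_dna_bytes / digital_2014) / math.log(1 + rate)

print(f"human genomes: {human_genomes_bytes:.2e} bytes")       # ~1.1e19
print(f"digital/genomic ratio in 2014: {ratio:.0f}x")          # ~450x
print(f"years to match Earth's DNA at 38%: {years_to_match(0.38):.0f}")  # ~110
```

The “about 110 years” figure corresponds to the upper end of the growth range; at 30% per year the crossover takes closer to 135 years.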

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[49]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a “cockroach” stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[49]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[50][improper synthesis?]

In his 2005 book, The Singularity Is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[51] Kurzweil further buttresses his argument by discussing current bio-engineering advances. He points to somatic gene therapy: after engineering synthetic viruses that carry specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[52]

K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman’s theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) “swallow the doctor”. The idea was incorporated into Feynman’s 1959 essay There’s Plenty of Room at the Bottom.[53]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called “Digital Ascension” that involves “people dying in the flesh and being uploaded into a computer and remaining conscious”.[54] Singularitarianism has also been likened to a religion by John Horgan.[55]

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”[3]

In 1965, Good wrote his essay postulating an “intelligence explosion” of recursive self-improvement of a machine intelligence. In 1985, in “The Time Scale of Artificial Intelligence”, artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an “infinity point”: if a research community of human-level, self-improving AIs takes four years to double its own speed, then two years, then one year, and so on, its capabilities increase infinitely in finite time.[4][56]
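Solomonoff’s “infinity point” is at bottom a geometric-series observation: if the first speed doubling takes four years and each subsequent doubling takes half as long, the total time for any number of doublings stays under eight years while capability grows without bound. A minimal sketch of the arithmetic:

```python
# Solomonoff's "infinity point": speed doublings take 4, 2, 1, 0.5, ... years.
# The time for n doublings is a geometric partial sum converging to 8 years,
# while capability after n doublings (2**n) grows without bound.
def time_for_doublings(n):
    return sum(4 / 2 ** k for k in range(n))

for n in (1, 5, 10, 20):
    print(n, time_for_doublings(n), 2 ** n)

print(time_for_doublings(50))  # approaches 4 / (1 - 1/2) = 8 years
```

In other words, infinitely many doublings fit inside a finite eight-year window, which is why Solomonoff could speak of capabilities increasing “infinitely in finite time”.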

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) that attains consciousness and starts to increase its own intelligence, moving towards a personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in military requirements because it finds them lacking internal logical consistency.

In 1983, Vinge greatly popularized Good’s intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term “singularity” in a way specifically tied to the creation of intelligent machines,[57][58] writing:

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.

Vinge’s 1993 article “The Coming Technological Singularity: How to Survive in the Post-Human Era”,[5] spread widely on the internet and helped to popularize the idea.[59] This article contains the statement, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[5]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[27]

In 2005, Kurzweil published The Singularity Is Near. Kurzweil’s publicity campaign included an appearance on The Daily Show with Jon Stewart.[60]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to “singularity” are mutually incompatible rather than mutually supporting.[13][61] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability.[13]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.”[62] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA’s Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[63][64][65]

Former President of the United States Barack Obama spoke about the singularity in his interview with Wired in 2016:[66]

One thing that we haven’t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren’t spending a lot of time right now worrying about singularity; they are worrying about “Well, is my job going to be replaced by a machine?”



I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good’s scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[10]
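Good’s scenario is sometimes illustrated with a toy recurrence (this sketch is an illustration only, not a model from Good’s paper): each generation is a fixed factor g smarter than its predecessor and, being smarter, designs its successor g times faster, so capability diverges while total elapsed time stays bounded.

```python
# Toy model of an "intelligence explosion": generation n is a factor g
# smarter than generation n-1 and designs its successor g times faster.
# Elapsed time is a convergent geometric series, so capability diverges
# while the clock stays bounded (until physical limits intervene).
def explosion(g=1.5, first_gen_years=10.0, generations=40):
    intelligence, elapsed, design_time = 1.0, 0.0, first_gen_years
    for _ in range(generations):
        elapsed += design_time
        intelligence *= g
        design_time /= g  # smarter designers finish the next design sooner
    return intelligence, elapsed

smart, years = explosion()
print(f"intelligence grew {smart:.3g}-fold in {years:.1f} years")
# Total time is bounded by first_gen_years * g / (g - 1) = 30 years here.
```

The parameters g and first_gen_years are arbitrary; the point is only the qualitative shape Good describes, with upper limits from physics or theoretical computation left out of the model.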

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings’ lives would be like in a post-singularity world.[5][11]

Some writers use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[12][13][14] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[5] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore’s law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[13][15]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.[16][17][18]

The exponential growth in computing technology suggested by Moore’s law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore’s law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[19] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[20]) increases exponentially, generalizing Moore’s law in the same manner as Moravec’s proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[21] Between 1986 and 2007, machines’ application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world’s general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world’s storage capacity per capita doubled every 40 months.[22]
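Taken at face value, these doubling times imply very different cumulative growth factors over the 1986–2007 window. A quick sketch of the arithmetic (assuming each doubling time held constant across the full 21-year span):

```python
# Cumulative growth implied by constant doubling times over the
# 1986-2007 window (21 years = 252 months).
months = 21 * 12

doubling_months = {
    "application-specific compute per capita": 14,
    "general-purpose compute per capita": 18,
    "telecom capacity per capita": 34,
    "storage capacity per capita": 40,
}

for name, d in doubling_months.items():
    factor = 2 ** (months / d)  # e.g. 2**(252/18) = 2**14 = 16384
    print(f"{name}: grew roughly {factor:,.0f}-fold")
```

A 14-month doubling time compounds to an eighteen-doubling, roughly 260,000-fold increase over the period, while a 40-month doubling time yields well under a hundred-fold, which is why the choice of metric matters so much in these extrapolations.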

Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine”.[23] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date “will not represent the Singularity” because they do “not yet correspond to a profound expansion of our intelligence.”[24]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term “singularity” in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[3]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the “law of accelerating returns”. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.[25] Kurzweil believes that the singularity will occur by approximately 2045.[26] His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy’s Wired magazine article “Why the future doesn’t need us”.[4][27]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[28]

Steven Pinker stated in 2008:

… There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobilesall staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. …[16]

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally …, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. … [T]he machinery has no beliefs, desires, [or] motivations.[29]

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[30] postulates a “technology paradox” in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be “routine”.[31]

Theodore Modis[32][33] and Jonathan Huebner[34] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore’s prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[35] While Kurzweil used Modis’ resources, and Modis’ work was around accelerating change, Modis distanced himself from Kurzweil’s thesis of a “technological singularity”, claiming that it lacks scientific rigor.[33]

Others[who?] propose that other “singularities” can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[36][improper synthesis?]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore’s law to 19th-century computers.[37]

In a 2007 paper, Schmidhuber stated that the frequency of subjectively “notable events” appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[38]

Paul Allen argues the opposite of accelerating returns, the complexity brake;[18] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns, but in fact, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[39] a law of diminishing returns. The number of patents per thousand peaked in the period from 1850 to 1900, and has been declining since.[34] The growth of complexity eventually becomes self-limiting, and leads to a widespread “general systems collapse”.

Jaron Lanier refutes the idea that the Singularity is inevitable. He states: “I do not think the technology is creating itself. It’s not an autonomous process.”[40] He goes on to assert: “The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it’s the same thing operationally as denying people clout, dignity, and self-determination … to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics.”[40]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth has slowed around 1970 and slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I.J. Good.[41]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[42] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[43]

The term “technological singularity” reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[44][45] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[46][47] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[44]

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.[citation needed] In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that “humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels… we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes… With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction”. The article argues that from the perspective of the evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life’s evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, “the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. 
The digital realm stored 500 times more information than this in 2014 (…see Figure)… The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information. If growth in digital storage continues at its current rate of 3038% compound annual growth per year,[22] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years”.[48]

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[49]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a “cockroach” stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[49]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[50][improper synthesis?]

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. He argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[51] Kurzweil buttresses this argument by discussing current bio-engineering advances, and suggests somatic gene therapy: after the creation of synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[52]

K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines, in his 1986 book Engines of Creation. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman’s theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) “swallow the doctor”. The idea was incorporated into Feynman’s 1959 essay There’s Plenty of Room at the Bottom.[53]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called “Digital Ascension” that involves “people dying in the flesh and being uploaded into a computer and remaining conscious”.[54] Singularitarianism has also been likened to a religion by John Horgan.[55]

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”[3]

In 1965, Good wrote his essay postulating an “intelligence explosion” of recursive self-improvement of a machine intelligence. In 1985, in “The Time Scale of Artificial Intelligence”, artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an “infinity point”: if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time.[4][56]
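
Solomonoff’s argument is a geometric series: the doubling intervals of 4, 2, 1, … years sum to a finite 8 years, while capability grows without bound. A toy sketch of the idea (the variable names and the 50-step cutoff are illustrative):

```python
# Toy model of Solomonoff's "infinity point": each successive doubling of
# machine speed takes half as long as the one before (4 years, 2, 1, ...).
elapsed = 0.0   # total years passed
speed = 1.0     # relative speed of the AI research community
interval = 4.0  # years needed for the next doubling

for _ in range(50):  # 50 successive doublings
    elapsed += interval
    speed *= 2.0
    interval /= 2.0

print(elapsed)  # converges to 4 + 2 + 1 + ... = 8 years
print(speed)    # 2**50 -- unbounded growth within a finite 8-year window
```

Every doubling after the first fifty still fits inside the same 8-year bound, which is what “infinite capability in finite time” means here.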

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) that attains consciousness and begins to increase its own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in military requirements because it finds them lacking internal logical consistency.

In 1983, Vinge greatly popularized Good’s intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term “singularity” in a way that was specifically tied to the creation of intelligent machines, writing:[57][58]

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.

Vinge’s 1993 article “The Coming Technological Singularity: How to Survive in the Post-Human Era”[5] spread widely on the internet and helped to popularize the idea.[59] This article contains the statement, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[5]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[27]

In 2005, Kurzweil published The Singularity is Near. Kurzweil’s publicity campaign included an appearance on The Daily Show with Jon Stewart.[60]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to “singularity” are mutually incompatible rather than mutually supporting.[13][61] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability.[13]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.”[62] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA’s Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[63][64][65]

Former President of the United States Barack Obama spoke about the singularity in his 2016 interview with Wired:[66]

One thing that we haven’t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren’t spending a lot of time right now worrying about singularity – they are worrying about “Well, is my job going to be replaced by a machine?”



The Atlantean Conspiracy

Freedom activist, comedian, and former People’s Voice hostess Elissa Hawke sits down with IFERS President Eric Dubay to discuss dinosaurs, flat Earth, conspiracies, controlled opposition, and to address and clear-up several slanderous lies, accusations and rumors regarding Eric’s background, family, and personal life.

Big thanks to Eddie Bravo, Kron Gracie, and the rest of the guys for this ground-breaking podcast where we expose the dinosaur hoax, the nuclear hoax, evolution, the big bang, controlled opposition, NASA, and the flat Earth conspiracy. The podcast starts at 1:25 and contains a few technical audio issues, but overall came out very well and is essential viewing for flat-Earthers and ballers alike! Please help like, comment, share, subscribe, download and re-upload to help spread the word.

In the following hangout with Del and the other guys from Beyond the Imaginary Curve, we discuss a variety of subjects including natural science, the law of perspective, spirituality, philosophy, veganism, martial arts, and of course the flat Earth.

Eric Dubay, Head of the International Flat Earth Research Society is interviewed by Sean Condon of Truth Seekers Farm about several subjects including the Holohoax, International Jewry, Adolf Hitler, Pizzagate, Psychopathy, and Flat Earth.

Thanks to Mel Fabregas of Veritas Radio for publicly releasing this second part of our important interview (normally reserved for subscribers only) where we delve into topics from the fake spinning ball Earth, to fake aliens, fake dinosaurs, fake ape-men, and evidence of real giant human beings covered up by the Masonic establishment.

Eric Dubay says the greatest lie and most successful cover-up in history, NASA and Freemasonry’s biggest secret, is that we live on a plane, not a planet; that Earth is the flat, stationary center of the universe. Eric is an American living in Thailand where he teaches Yoga and Wing Chun part-time while exposing the New World Order full-time. He is the author of five books and is the president of the International Flat Earth Research Society.

From Mel Fabregas: Let me begin this interview by stating that I have no attachment to the flat earth. I have no attachment to the oblate spheroid, and even to the sphere. If our home is any of these, so be it. I wouldn’t be surprised by any of them. What I continue to be surprised and suspicious of are the people who continue attacking those who simply ask questions. I recently attended a conference that deals with the most open-minded topics you could possibly imagine. However, when it comes to the flat earth topic, it was a no-no. Look, I can’t say I blame people for thinking this is the most absurd topic under the sun, or the firmament, rather, but you, open-minded people who discuss aliens, UFOs, reptilians who rule the world, Bigfoot, and the rest of it, why do you continue telling people to stop looking into the flat earth? Those of you who study the pyramids and ancient civilizations, you venerate these ancient ones, and rightfully so, and some of these very ancient ones believed the earth was flat. Why do you then continue looking into their achievements if the notion of a flat earth is so absurd? Shouldn’t that discredit them too? And those of you who continue writing to me saying the ancient ones knew they lived on a sphere, how did they know? Perhaps they had advanced technology that allowed seeing the skies above. Just because you see a sphere above you doesn’t prove you are standing on one. You can still play pool on a flat table and basketball on a flat court.
Perhaps the psyop is questioning those who question the true shape of our plane(t).

Thanks to Evita Ochel of EBTV for the wonderful interview covering topics ranging from spirituality and science, to health and veganism, to conspiracies and the flat Earth. Please like, share, comment and subscribe to help me spread the word on these most important subjects!

Tonight Eric Dubay returns to the show. Eric is a leading and prominent voice behind the revival of the Flat Earth theory. He currently lives in Thailand where he teaches Yoga and Wing Chun, the traditional Chinese martial art of self-defense. Eric has also written a number of books, one of which is The Flat Earth Conspiracy, where Eric explains how the world has been systematically brainwashed and indoctrinated for centuries into believing the greatest lie of all time: that the Earth is a spinning globe. He has also recently released his free eBook 200 Proofs Earth is Not a Spinning Ball. The discussion with Eric was wide-ranging as we talk about the traction the Flat Earth theory is gaining throughout the world, along with Eric’s thoughts on Zionism, controlled opposition, psychopathic behavior and forbidden topics like the Holocaust.

Continuing my discussion with Mel Fabregas of Veritas Radio, we discuss subjects ranging from the Sun, Moon, and stars, UFOs, aliens, and DMT, Jews, Hitler and WWII, and of course, the flat Earth. If you missed part one be sure to watch this first: https://www.youtube.com/watch?v=5yRAQG5-ca8 And special thanks to Steve Wilner for the awesome psychedelic visuals: https://www.youtube.com/user/soundlessdawn

Join me and Mel Fabregas of Veritas Radio for a discussion about the greatest deception in history, the mother of all conspiracies, NASA and Freemasonry’s biggest secret, that Earth is flat, motionless, and the center of the universe. Stay tuned for part 2 coming out next week, or better yet show Mel your support and subscribe to Veritas radio to listen today.

In this Firestarter radio re-upload we cover the Flat Earth, NASA, Freemasonry, Judaism, Veganism, Kosher Slaughter, Circumcision, Hitler, the Holocaust Hoax, Controlled Opposition, who really runs the world and why they have convinced you you are living on a spinning ball!

In the following re-upload, Patricia Aiken from Sacred Cow BBQ and I discuss the Flat Earth, WWII, Hitler, Gaddafi, Rothschild / Jewish power, Controlled Opposition, and Voluntaryism.

In this roundtable discussion with John Le Bon and the Ball Earth Skeptics we cover topics ranging from shills in the flat Earth movement to astrotheology and kundalini awakening.

Tonight my very special guest is Eric Dubay. Eric is one of the leading voices behind the revival of the Flat Earth theory. He currently lives in Thailand where he teaches Yoga and Wing Chun, the traditional Chinese martial art of self-defense. Eric has also written a number of books, one of which is The Flat Earth Conspiracy, where Eric explains how the world has been systematically brainwashed and indoctrinated for centuries into believing the greatest lie of all time: that the Earth is a spinning globe. Our discussion was wide-ranging as we talk through many of the proof points supporting the Flat Earth model, as well as addressing some of the arguments used by heliocentrists, or globe believers, in their attempts to debunk the flat Earth theory. I strongly urge everyone to listen with an open and critical mind. I believe you’ll find the topic extremely interesting and not as cut and dried as you might think.

Coast 2 Coast AM regular stand-in and host of Zoomer radio’s Conspiracy Show, Richard Syrett, interviews Eric Dubay, president of the International Flat Earth Research Society about the greatest lie and most successful cover-up in history, NASA and Freemasonry’s biggest secret, that we live on a plane, not a planet, that Earth is the flat, stationary center of the universe.

In this episode of The Anarchast I speak with Jeff Berwick about Statism, Anarchy/Voluntaryism and of course, the Flat Earth! Please be sure to share, like, and subscribe for more interviews and flat Earth research. Also check out my Anarchy article archive at Atlantean Conspiracy: http://www.atlanteanconspiracy.com/search/label/Anarchy

The oldest and largest secret society in existence has a secret so huge and well-hidden, so contrary to what we have been taught to believe, that its exposure threatens not only to completely and single-handedly crush their New World Order “United Nations” but to radically reshape the entirety of modern academia, universities, the mainstream and alternative media, all the world’s governments, space agencies, and the very Earth beneath our feet.

In this highly-informative and humorous podcast President of the International Flat Earth Research Society and webmaster of AtlanteanConspiracy.com, Eric Dubay, talks with fellow conspiracy author and webmaster of WaykiWayki.com, Mark Knight. Topics covered include flat Earth science vs. ball Earth pseudo-science, the various proofs/evidence for the geocentric flat Earth and debunking the supposed proofs/evidence for the heliocentric ball-Earth, the North Pole and South Pole (or lack thereof) and the Antarctic ice-rim, the Sun, Moon, eclipses, seasons, Polaris, stars, planets, NASA, the fake Moon and Mars landings, the controlled opposition Flat Earth Society vs. the legitimate International Flat Earth Research Society, and disinformation agents like Mark Sargent (codename: Sargent Non-Sense).

The following interview features myself, Eric Dubay, a genuine, legitimate flat Earth researcher talking with DJ Buttamilk (Dan Lefkowitz) of Brattleboro, Vermont community radio.


Ron Paul Institute for Peace and Prosperity

Apr 17, 2018 Lawrence Wilkerson: Cowardly Congress, Apathetic Americans Allow US Military Intervention Retired United States Army Colonel Lawrence Wilkerson, who was chief of staff for Secretary of State Colin Powell in the George W. Bush administration, explains in a new interview the reason he believes the US president can do anything he pleases with regard to the armed forces of the United States anytime he pleases. That reason, says Wilkerson in a The Real News interview with host Sharmini Peries, is that the American people are apathetic and their representatives in the Congress are cowards who, but for a few exceptions like [Sen.] Mike Lee [(R-UT)] and [Sen.] Bernie Sanders [(I-VT)] and some of the others, will not do anything to restrain such exercise of presidential power. read on…

Apr 17, 2018 US: Russia Hacked The Evidence Of Chemical Attack In Syria We are now being told (and I assure you I am not making this up) that if the Organization for the Prohibition of Chemical Weapons doesn’t find evidence that the Syrian government conducted a chemical weapons attack in Douma last week, it’s because Russia hid the evidence. read on…

Apr 14, 2018 Five Minutes Five Issues: Hemp Bill, Militarized America, Marijuana Disinfo, Utah Marijuana, Pain Killers A new episode of Five Minutes Five Issues is out. You can listen to it, and read a transcript, below. You can also find previous episodes of the show at Stitcher, iTunes, YouTube, and SoundCloud. read on…

Apr 13, 2018 Trump Ordering Syria Attack Would Be an Unconstitutional, But Not Uncommon, Presidential Action Louis Fisher, a Ron Paul Institute for Peace and Prosperity Academic Board member and United States Constitution scholar, is quoted, in a Thursday Washington Examiner article by Steven Nelson, declaring that a unilateral decision by President Donald Trump to use military force against Syria would be unconstitutional as well as not authorized under the War Powers Resolution. Yet, Trump taking such action, says Fisher, would also be in line with military actions regularly pursued by previous presidents. read on…

Apr 13, 2018 US War Making: What’s in It for You? If you are an American, have you asked yourself these questions?

1. What purpose of mine is served by the US government’s making war in Iraq?
2. What purpose of mine is served by the US government’s making war in Afghanistan?
3. What purpose of mine is served by the US government’s making war in Syria?
4. What purpose of mine is served by the US government’s making war in Yemen?
5. What purpose of mine is served by the US government’s making war in Africa?
6. What purpose of mine would be served by the US government’s making war in Iran? read on…


The Atlantean Conspiracy

Freedom activist, comedian, and former People’s Voice hostess Elissa Hawke sits down with IFERS President Eric Dubay to discuss dinosaurs, flat Earth, conspiracies, controlled opposition, and to address and clear-up several slanderous lies, accusations and rumors regarding Eric’s background, family, and personal life.

Big thanks to Eddie Bravo, Kron Gracie, and the rest of the guys for this ground-breaking podcast where we expose the dinosaur hoax, the nuclear hoax, evolution, the big bang, controlled opposition, NASA, and the flat Earth conspiracy. The podcast starts at 1:25 and contains a few technical audio issues, but overall came out very well and is essential viewing for flat-Earthers and ballers alike! Please help like, comment, share, subscribe, download and re-upload to help spread the word.

In the following hangout with Del and the other guys from Beyond the Imaginary Curve, we discuss a variety of subjects including natural science, the law of perspective, spirituality, philosophy, veganism, martial arts, and of course the flat Earth.

Eric Dubay, Head of the International Flat Earth Research Society is interviewed by Sean Condon of Truth Seekers Farm about several subjects including the Holohoax, International Jewry, Adolf Hitler, Pizzagate, Psychopathy, and Flat Earth.

Thanks to Mel Fabregas of Veritas Radio for publicly releasing this second part of our important interview (normally reserved for subscribers only) where we delve into topics from the fake spinning ball Earth, to fake aliens, fake dinosaurs, fake ape-men, and evidence of real giant human beings covered up by the Masonic establishment.

Eric Dubay says the greatest lie and most successful cover-up in history, NASA and Freemasonry’s biggest secret, is that we live on a plane, not a planet; that Earth is the flat, stationary center of the universe. Eric is an American living in Thailand where he teaches Yoga and Wing Chun part-time while exposing the New World Order full-time. He is the author of five books and is the president of the International Flat Earth Research Society. From Mel Fabregas: Let me begin this interview by stating that I have no attachment to the flat earth. I have no attachment to the oblate spheroid, and even to the sphere. If our home is any of these, so be it. I wouldn’t be surprised of any. What I continue to be surprised and suspicious of are the people who continue attacking those who simply ask questions. I recently attended a conference that deals with the most open minded topics you could possibly imagine. However, when it comes to the flat earth topic, it was a no no. Look, I can’t say I blame people for thinking this is the most absurd topic under the sun, or the firmament, rather, but you, open minded people who discuss aliens, UFOs, reptilians who rule the world, Bigfoot, and the rest of it, why do you continue telling people to stop looking into the flat earth? Those of you who study the pyramids and ancient civilizations, you venerate these ancient ones, and rightfully so, and some of these very ancient ones believed the earth was flat. Why do you then continue looking into their achievements if the notion of a flat earth is so absurd? Shouldn’t that discredit them too? And those of you who continue writing to me saying the ancient ones knew they lived on a sphere, how did they know? Perhaps they had advanced technology that allowed seeing the skies above. Just because you see sphere above you doesn’t prove you are standing on one. You can still play pool on a flat table and basketball on a flat court. 
Perhaps the psyop is questioning those who question the true shape of our plane(t).

Thanks to Evita Ochel of EBTV for the wonderful interview covering topics ranging from spirituality and science, to health and veganism, to conspiracies and the flat Earth. Please like, share, comment and subscribe to help me spread the word on these most important subjects!

Tonight Eric Dubay returns to the show. Eric is a leading and prominent voice behind the revival of the Flat Earth theory. He currently lives in Thailand where he teaches Yoga and Wing Chun which is the traditional Chinese martial art of self defense. Eric has also written a number of books one of which is The Flat Earth Conspiracy where Eric explains how the world has been systematically brainwashed and indoctrinated for centuries into believing the greatest lie of all time that the Earth is a spinning globe. He has also recently released his free eBook 200 Proofs Earth is Not a Spinning Ball. The discussion with Eric was wide ranging as we talk about the traction the Flat Earth theory is gaining throughout the world along with Erics thoughts on Zionism, controlled opposition, psychopathic behavior and forbidden topics like the Holocaust.

Continuing my discussion with Mel Fabregas of Veritas Radio, we discuss subjects ranging from the Sun, Moon, and stars, UFOs, aliens, and DMT, Jews, Hitler and WWII, and of course, the flat Earth. If you missed part one be sure to watch this first: https://www.youtube.com/watch?v=5yRAQG5-ca8 And special thanks to Steve Wilner for the awesome psychedelic visuals: https://www.youtube.com/user/soundlessdawn

Join me and Mel Fabregas of Veritas Radio for a discussion about the greatest deception in history, the mother of all conspiracies, NASA and Freemasonry’s biggest secret, that Earth is flat, motionless, and the center of the universe. Stay tuned for part 2 coming out next week, or better yet show Mel your support and subscribe to Veritas radio to listen today.

In this Firestarter radio re-upload we cover the Flat Earth, NASA, Freemasonry, Judaism, Veganism, Kosher Slaughter, Circumcision, Hitler, the Holocaust Hoax, Controlled Opposition, who really runs the world and why they have convinced you you are living on a spinning ball!

In the following re-upload, Patricia Aiken from Sacred Cow BBQ and I discuss the Flat Earth, WWII, Hitler, Gaddafi, Rothschild / Jewish power, Controlled Opposition, and Voluntaryism.

In this roundtable discussion with John Le Bon and the Ball Earth Skeptics we cover topics ranging from shills in the flat Earth movement to astrotheology and kundalini awakening.

Tonight my very special guest is Eric Dubay. Eric is one of the leading voices behind the revival of the Flat Earth theory. He currently lives in Thailand where he teaches Yoga and Wing Chun which is the traditional Chinese martial art of self defense. Eric has also written a number of books one of which is The Flat Earth Conspiracy where Eric explains how the world has been systematically brainwashed and indoctrinated for centuries into believing the greatest lie of all time that the Earth is a spinning globe. Our discussion was wide ranging as we talk through many of the proof points supporting the Flat Earth model as well addressing some the arguments used by heliocentrists, or the globe believers, in their attempts to debunk the flat Earth theory. I strongly urge everyone to listen with an open and critical mind. I believe youll find the topic extremely interesting and not as cut and dry as you might think.

Coast 2 Coast AM regular stand-in and host of Zoomer radio’s Conspiracy Show, Richard Syrett, interviews Eric Dubay, president of the International Flat Earth Research Society about the greatest lie and most successful cover-up in history, NASA and Freemasonry’s biggest secret, that we live on a plane, not a planet, that Earth is the flat, stationary center of the universe.

In this episode of The Anarchast I speak with Jeff Berwick about Statism, Anarchy/Voluntaryism and of course, the Flat Earth! Please be sure to share, like, and subscribe for more interviews and flat Earth research. Also check out my Anarchy article archive at Atlantean Conspiracy: http://www.atlanteanconspiracy.com/search/label/Anarchy

The oldest and largest secret society in existence has a secret so huge and well-hidden, so contrary to what we have been taught to believe, that its exposure threatens to not only completely and single-handedly crush their New World Order “United Nations” but to radically reshape the entirety of modern academia, universities, the mainstream / alternative medias, all the world’s governments, space agencies, and the very Earth beneath our feet.

In this highly-informative and humorous podcast President of the International Flat Earth Research Society and webmaster of AtlanteanConspiracy.com, Eric Dubay, talks with fellow conspiracy author and webmaster of WaykiWayki.com, Mark Knight. Topics covered include flat Earth science vs. ball Earth pseudo-science, the various proofs/evidence for the geocentric flat Earth and debunking the supposed proofs/evidence for the heliocentric ball-Earth, the North Pole and South Pole (or lack thereof) and the Antarctic ice-rim, the Sun, Moon, eclipses, seasons, Polaris, stars, planets, NASA, the fake Moon and Mars landings, the controlled opposition Flat Earth Society vs. the legitimate International Flat Earth Research Society, and disinformation agents like Mark Sargent (codename: Sargent Non-Sense).

The following interview features myself, Eric Dubay, a genuine, legitimate flat Earth researcher talking with DJ Buttamilk (Dan Lefkowitz) of Brattleboro, Vermont community radio.

Here is the original post:

The Atlantean Conspiracy

The Atlantean Conspiracy

Freedom activist, comedian, and former People’s Voice hostess Elissa Hawke sits down with IFERS President Eric Dubay to discuss dinosaurs, flat Earth, conspiracies, controlled opposition, and to address and clear-up several slanderous lies, accusations and rumors regarding Eric’s background, family, and personal life.

Big thanks to Eddie Bravo, Kron Gracie, and the rest of the guys for this ground-breaking podcast where we expose the dinosaur hoax, the nuclear hoax, evolution, the big bang, controlled opposition, NASA, and the flat Earth conspiracy. The podcast starts at 1:25 and contains a few technical audio issues, but overall came out very well and is essential viewing for flat-Earthers and ballers alike! Please help like, comment, share, subscribe, download and re-upload to help spread the word.

In the following hangout with Del and the other guys from Beyond the Imaginary Curve, we discuss a variety of subjects including natural science, the law of perspective, spirituality, philosophy, veganism, martial arts, and of course the flat Earth.

Eric Dubay, Head of the International Flat Earth Research Society is interviewed by Sean Condon of Truth Seekers Farm about several subjects including the Holohoax, International Jewry, Adolf Hitler, Pizzagate, Psychopathy, and Flat Earth.

Thanks to Mel Fabregas of Veritas Radio for publicly releasing this second part of our important interview (normally reserved for subscribers only) where we delve into topics from the fake spinning ball Earth, to fake aliens, fake dinosaurs, fake ape-men, and evidence of real giant human beings covered up by the Masonic establishment.

Eric Dubay says the greatest lie and most successful cover-up in history, NASA and Freemasonry’s biggest secret, is that we live on a plane, not a planet; that Earth is the flat, stationary center of the universe. Eric is an American living in Thailand where he teaches Yoga and Wing Chun part-time while exposing the New World Order full-time. He is the author of five books and is the president of the International Flat Earth Research Society. From Mel Fabregas: Let me begin this interview by stating that I have no attachment to the flat earth. I have no attachment to the oblate spheroid, and even to the sphere. If our home is any of these, so be it. I wouldn’t be surprised of any. What I continue to be surprised and suspicious of are the people who continue attacking those who simply ask questions. I recently attended a conference that deals with the most open minded topics you could possibly imagine. However, when it comes to the flat earth topic, it was a no no. Look, I can’t say I blame people for thinking this is the most absurd topic under the sun, or the firmament, rather, but you, open minded people who discuss aliens, UFOs, reptilians who rule the world, Bigfoot, and the rest of it, why do you continue telling people to stop looking into the flat earth? Those of you who study the pyramids and ancient civilizations, you venerate these ancient ones, and rightfully so, and some of these very ancient ones believed the earth was flat. Why do you then continue looking into their achievements if the notion of a flat earth is so absurd? Shouldn’t that discredit them too? And those of you who continue writing to me saying the ancient ones knew they lived on a sphere, how did they know? Perhaps they had advanced technology that allowed seeing the skies above. Just because you see sphere above you doesn’t prove you are standing on one. You can still play pool on a flat table and basketball on a flat court. 
Perhaps the psyop is questioning those who question the true shape of our plane(t).

Thanks to Evita Ochel of EBTV for the wonderful interview covering topics ranging from spirituality and science, to health and veganism, to conspiracies and the flat Earth. Please like, share, comment and subscribe to help me spread the word on these most important subjects!

Tonight Eric Dubay returns to the show. Eric is a leading and prominent voice behind the revival of the Flat Earth theory. He currently lives in Thailand where he teaches Yoga and Wing Chun which is the traditional Chinese martial art of self defense. Eric has also written a number of books one of which is The Flat Earth Conspiracy where Eric explains how the world has been systematically brainwashed and indoctrinated for centuries into believing the greatest lie of all time that the Earth is a spinning globe. He has also recently released his free eBook 200 Proofs Earth is Not a Spinning Ball. The discussion with Eric was wide ranging as we talk about the traction the Flat Earth theory is gaining throughout the world along with Erics thoughts on Zionism, controlled opposition, psychopathic behavior and forbidden topics like the Holocaust.

Continuing my discussion with Mel Fabregas of Veritas Radio, we discuss subjects ranging from the Sun, Moon, and stars, UFOs, aliens, and DMT, Jews, Hitler and WWII, and of course, the flat Earth. If you missed part one be sure to watch this first: https://www.youtube.com/watch?v=5yRAQG5-ca8 And special thanks to Steve Wilner for the awesome psychedelic visuals: https://www.youtube.com/user/soundlessdawn

Join me and Mel Fabregas of Veritas Radio for a discussion about the greatest deception in history, the mother of all conspiracies, NASA and Freemasonry’s biggest secret, that Earth is flat, motionless, and the center of the universe. Stay tuned for part 2 coming out next week, or better yet show Mel your support and subscribe to Veritas radio to listen today.

In this Firestarter radio re-upload we cover the Flat Earth, NASA, Freemasonry, Judaism, Veganism, Kosher Slaughter, Circumcision, Hitler, the Holocaust Hoax, Controlled Opposition, who really runs the world and why they have convinced you you are living on a spinning ball!

In the following re-upload, Patricia Aiken from Sacred Cow BBQ and I discuss the Flat Earth, WWII, Hitler, Gaddafi, Rothschild / Jewish power, Controlled Opposition, and Voluntaryism.

In this roundtable discussion with John Le Bon and the Ball Earth Skeptics we cover topics ranging from shills in the flat Earth movement to astrotheology and kundalini awakening.

Tonight my very special guest is Eric Dubay. Eric is one of the leading voices behind the revival of the Flat Earth theory. He currently lives in Thailand where he teaches Yoga and Wing Chun which is the traditional Chinese martial art of self defense. Eric has also written a number of books one of which is The Flat Earth Conspiracy where Eric explains how the world has been systematically brainwashed and indoctrinated for centuries into believing the greatest lie of all time that the Earth is a spinning globe. Our discussion was wide ranging as we talk through many of the proof points supporting the Flat Earth model as well addressing some the arguments used by heliocentrists, or the globe believers, in their attempts to debunk the flat Earth theory. I strongly urge everyone to listen with an open and critical mind. I believe youll find the topic extremely interesting and not as cut and dry as you might think.

Coast 2 Coast AM regular stand-in and host of Zoomer radio’s Conspiracy Show, Richard Syrett, interviews Eric Dubay, president of the International Flat Earth Research Society about the greatest lie and most successful cover-up in history, NASA and Freemasonry’s biggest secret, that we live on a plane, not a planet, that Earth is the flat, stationary center of the universe.

In this episode of The Anarchast I speak with Jeff Berwick about Statism, Anarchy/Voluntaryism and of course, the Flat Earth! Please be sure to share, like, and subscribe for more interviews and flat Earth research. Also check out my Anarchy article archive at Atlantean Conspiracy: http://www.atlanteanconspiracy.com/search/label/Anarchy

The oldest and largest secret society in existence has a secret so huge and well-hidden, so contrary to what we have been taught to believe, that its exposure threatens to not only completely and single-handedly crush their New World Order “United Nations” but to radically reshape the entirety of modern academia, universities, the mainstream / alternative medias, all the world’s governments, space agencies, and the very Earth beneath our feet.

In this highly-informative and humorous podcast President of the International Flat Earth Research Society and webmaster of AtlanteanConspiracy.com, Eric Dubay, talks with fellow conspiracy author and webmaster of WaykiWayki.com, Mark Knight. Topics covered include flat Earth science vs. ball Earth pseudo-science, the various proofs/evidence for the geocentric flat Earth and debunking the supposed proofs/evidence for the heliocentric ball-Earth, the North Pole and South Pole (or lack thereof) and the Antarctic ice-rim, the Sun, Moon, eclipses, seasons, Polaris, stars, planets, NASA, the fake Moon and Mars landings, the controlled opposition Flat Earth Society vs. the legitimate International Flat Earth Research Society, and disinformation agents like Mark Sargent (codename: Sargent Non-Sense).

The following interview features myself, Eric Dubay, a genuine, legitimate flat Earth researcher talking with DJ Buttamilk (Dan Lefkowitz) of Brattleboro, Vermont community radio.

See the original post:

The Atlantean Conspiracy

The Atlantean Conspiracy

Freedom activist, comedian, and former People’s Voice hostess Elissa Hawke sits down with IFERS President Eric Dubay to discuss dinosaurs, flat Earth, conspiracies, controlled opposition, and to address and clear-up several slanderous lies, accusations and rumors regarding Eric’s background, family, and personal life.

Big thanks to Eddie Bravo, Kron Gracie, and the rest of the guys for this ground-breaking podcast where we expose the dinosaur hoax, the nuclear hoax, evolution, the big bang, controlled opposition, NASA, and the flat Earth conspiracy. The podcast starts at 1:25 and contains a few technical audio issues, but overall came out very well and is essential viewing for flat-Earthers and ballers alike! Please help like, comment, share, subscribe, download and re-upload to help spread the word.

In the following hangout with Del and the other guys from Beyond the Imaginary Curve, we discuss a variety of subjects including natural science, the law of perspective, spirituality, philosophy, veganism, martial arts, and of course the flat Earth.

Eric Dubay, Head of the International Flat Earth Research Society is interviewed by Sean Condon of Truth Seekers Farm about several subjects including the Holohoax, International Jewry, Adolf Hitler, Pizzagate, Psychopathy, and Flat Earth.

Thanks to Mel Fabregas of Veritas Radio for publicly releasing this second part of our important interview (normally reserved for subscribers only) where we delve into topics from the fake spinning ball Earth, to fake aliens, fake dinosaurs, fake ape-men, and evidence of real giant human beings covered up by the Masonic establishment.

Eric Dubay says the greatest lie and most successful cover-up in history, NASA and Freemasonry’s biggest secret, is that we live on a plane, not a planet; that Earth is the flat, stationary center of the universe. Eric is an American living in Thailand where he teaches Yoga and Wing Chun part-time while exposing the New World Order full-time. He is the author of five books and is the president of the International Flat Earth Research Society. From Mel Fabregas: Let me begin this interview by stating that I have no attachment to the flat earth. I have no attachment to the oblate spheroid, and even to the sphere. If our home is any of these, so be it. I wouldn’t be surprised of any. What I continue to be surprised and suspicious of are the people who continue attacking those who simply ask questions. I recently attended a conference that deals with the most open minded topics you could possibly imagine. However, when it comes to the flat earth topic, it was a no no. Look, I can’t say I blame people for thinking this is the most absurd topic under the sun, or the firmament, rather, but you, open minded people who discuss aliens, UFOs, reptilians who rule the world, Bigfoot, and the rest of it, why do you continue telling people to stop looking into the flat earth? Those of you who study the pyramids and ancient civilizations, you venerate these ancient ones, and rightfully so, and some of these very ancient ones believed the earth was flat. Why do you then continue looking into their achievements if the notion of a flat earth is so absurd? Shouldn’t that discredit them too? And those of you who continue writing to me saying the ancient ones knew they lived on a sphere, how did they know? Perhaps they had advanced technology that allowed seeing the skies above. Just because you see sphere above you doesn’t prove you are standing on one. You can still play pool on a flat table and basketball on a flat court. 
Perhaps the psyop is questioning those who question the true shape of our plane(t).

Thanks to Evita Ochel of EBTV for the wonderful interview covering topics ranging from spirituality and science, to health and veganism, to conspiracies and the flat Earth. Please like, share, comment and subscribe to help me spread the word on these most important subjects!

Tonight Eric Dubay returns to the show. Eric is a leading and prominent voice behind the revival of the Flat Earth theory. He currently lives in Thailand, where he teaches Yoga and Wing Chun, the traditional Chinese martial art of self-defense. Eric has also written a number of books, one of which is The Flat Earth Conspiracy, in which he explains how the world has been systematically brainwashed and indoctrinated for centuries into believing the greatest lie of all time: that the Earth is a spinning globe. He has also recently released his free eBook 200 Proofs Earth is Not a Spinning Ball. The discussion with Eric was wide-ranging as we talk about the traction the Flat Earth theory is gaining throughout the world, along with Eric’s thoughts on Zionism, controlled opposition, psychopathic behavior, and forbidden topics like the Holocaust.

Continuing my discussion with Mel Fabregas of Veritas Radio, we discuss subjects ranging from the Sun, Moon, and stars, UFOs, aliens, and DMT, Jews, Hitler and WWII, and of course, the flat Earth. If you missed part one be sure to watch this first: https://www.youtube.com/watch?v=5yRAQG5-ca8 And special thanks to Steve Wilner for the awesome psychedelic visuals: https://www.youtube.com/user/soundlessdawn

Join me and Mel Fabregas of Veritas Radio for a discussion about the greatest deception in history, the mother of all conspiracies, NASA and Freemasonry’s biggest secret, that Earth is flat, motionless, and the center of the universe. Stay tuned for part 2 coming out next week, or better yet show Mel your support and subscribe to Veritas radio to listen today.

In this Firestarter radio re-upload we cover the Flat Earth, NASA, Freemasonry, Judaism, Veganism, Kosher Slaughter, Circumcision, Hitler, the Holocaust Hoax, Controlled Opposition, who really runs the world, and why they have convinced you that you are living on a spinning ball!

In the following re-upload, Patricia Aiken from Sacred Cow BBQ and I discuss the Flat Earth, WWII, Hitler, Gaddafi, Rothschild / Jewish power, Controlled Opposition, and Voluntaryism.

In this roundtable discussion with John Le Bon and the Ball Earth Skeptics we cover topics ranging from shills in the flat Earth movement to astrotheology and kundalini awakening.

Tonight my very special guest is Eric Dubay. Eric is one of the leading voices behind the revival of the Flat Earth theory. He currently lives in Thailand, where he teaches Yoga and Wing Chun, the traditional Chinese martial art of self-defense. Eric has also written a number of books, one of which is The Flat Earth Conspiracy, in which he explains how the world has been systematically brainwashed and indoctrinated for centuries into believing the greatest lie of all time: that the Earth is a spinning globe. Our discussion was wide-ranging as we talk through many of the proof points supporting the Flat Earth model, as well as addressing some of the arguments used by heliocentrists, or globe believers, in their attempts to debunk the flat Earth theory. I strongly urge everyone to listen with an open and critical mind. I believe you’ll find the topic extremely interesting and not as cut and dried as you might think.

Coast 2 Coast AM regular stand-in and host of Zoomer radio’s Conspiracy Show, Richard Syrett, interviews Eric Dubay, president of the International Flat Earth Research Society about the greatest lie and most successful cover-up in history, NASA and Freemasonry’s biggest secret, that we live on a plane, not a planet, that Earth is the flat, stationary center of the universe.

In this episode of The Anarchast I speak with Jeff Berwick about Statism, Anarchy/Voluntaryism and of course, the Flat Earth! Please be sure to share, like, and subscribe for more interviews and flat Earth research. Also check out my Anarchy article archive at Atlantean Conspiracy: http://www.atlanteanconspiracy.com/search/label/Anarchy

The oldest and largest secret society in existence has a secret so huge and well-hidden, so contrary to what we have been taught to believe, that its exposure threatens to not only completely and single-handedly crush their New World Order “United Nations” but to radically reshape the entirety of modern academia, universities, the mainstream / alternative medias, all the world’s governments, space agencies, and the very Earth beneath our feet.

In this highly-informative and humorous podcast President of the International Flat Earth Research Society and webmaster of AtlanteanConspiracy.com, Eric Dubay, talks with fellow conspiracy author and webmaster of WaykiWayki.com, Mark Knight. Topics covered include flat Earth science vs. ball Earth pseudo-science, the various proofs/evidence for the geocentric flat Earth and debunking the supposed proofs/evidence for the heliocentric ball-Earth, the North Pole and South Pole (or lack thereof) and the Antarctic ice-rim, the Sun, Moon, eclipses, seasons, Polaris, stars, planets, NASA, the fake Moon and Mars landings, the controlled opposition Flat Earth Society vs. the legitimate International Flat Earth Research Society, and disinformation agents like Mark Sargent (codename: Sargent Non-Sense).

The following interview features myself, Eric Dubay, a genuine, legitimate flat Earth researcher talking with DJ Buttamilk (Dan Lefkowitz) of Brattleboro, Vermont community radio.

Here is the original post:

The Atlantean Conspiracy

Freedom activist, comedian, and former People’s Voice hostess Elissa Hawke sits down with IFERS President Eric Dubay to discuss dinosaurs, flat Earth, conspiracies, controlled opposition, and to address and clear-up several slanderous lies, accusations and rumors regarding Eric’s background, family, and personal life.

Big thanks to Eddie Bravo, Kron Gracie, and the rest of the guys for this ground-breaking podcast where we expose the dinosaur hoax, the nuclear hoax, evolution, the big bang, controlled opposition, NASA, and the flat Earth conspiracy. The podcast starts at 1:25 and contains a few technical audio issues, but overall came out very well and is essential viewing for flat-Earthers and ballers alike! Please help like, comment, share, subscribe, download and re-upload to help spread the word.

In the following hangout with Del and the other guys from Beyond the Imaginary Curve, we discuss a variety of subjects including natural science, the law of perspective, spirituality, philosophy, veganism, martial arts, and of course the flat Earth.

Eric Dubay, Head of the International Flat Earth Research Society is interviewed by Sean Condon of Truth Seekers Farm about several subjects including the Holohoax, International Jewry, Adolf Hitler, Pizzagate, Psychopathy, and Flat Earth.


Technological singularity – Wikipedia

The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence (ASI) will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.[2] According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a “runaway reaction” of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence. Stanislaw Ulam reports a discussion with John von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[3] Subsequent authors have echoed this viewpoint.[2][4] I. J. Good’s “intelligence explosion” model predicts that a future superintelligence will trigger a singularity.[5] Emeritus professor of computer science at San Diego State University and science fiction author Vernor Vinge said in his 1993 essay The Coming Technological Singularity that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.[5]

Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when artificial general intelligence (AGI) would arrive was 2040 to 2050, depending on the poll.[6][7]

In the 2010s public figures such as Stephen Hawking and Elon Musk expressed concern that full artificial intelligence could result in human extinction.[8][9] The consequences of the singularity and its potential benefit or harm to the human race have been hotly debated.

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good’s scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[10]
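Good’s loop is easy to caricature in a few lines of code. The toy model below is purely illustrative: the multiplicative growth rule and the `gain` parameter are assumptions of this sketch, not anything Good specified. It only shows how capability compounds once each generation improves on its designer.

```python
# Toy model of I. J. Good's recursive self-improvement loop.
# The growth rule and the `gain` parameter are illustrative
# assumptions, not anything specified by Good.
def intelligence_explosion(capability=1.0, gain=0.5, generations=10):
    """Each generation designs a successor (1 + gain) times as
    capable as itself, so capability compounds geometrically."""
    history = [capability]
    for _ in range(generations):
        capability *= 1 + gain  # the successor improves on its designer
        history.append(capability)
    return history

trajectory = intelligence_explosion()
# After 10 generations at gain=0.5, capability is 1.5**10, roughly 58x.
```

Whether real self-improvement would compound like this, hit diminishing returns, or stall outright is precisely what the critics discussed later in the article dispute.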

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings’ lives would be like in a post-singularity world.[5][11]

Some writers use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[12][13][14] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[5] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore’s law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[13][15]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.[16][17][18]

The exponential growth in computing technology suggested by Moore’s law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore’s law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[19] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[20]) increases exponentially, generalizing Moore’s law in the same manner as Moravec’s proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[21] Between 1986 and 2007, machines’ application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world’s general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world’s storage capacity per capita doubled every 40 months.[22]
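The doubling times quoted above can be restated as annual growth factors with one formula: a quantity that doubles every m months grows by a factor of 2^(12/m) per year. A quick sketch (the category labels are paraphrased from the figures above):

```python
# Convert the doubling times cited above into equivalent annual
# growth factors: doubling every m months means multiplying by
# 2 ** (12 / m) each year.
def annual_growth_from_doubling(months):
    return 2 ** (12 / months)

doubling_times = {
    "application-specific compute per capita": 14,
    "general-purpose compute per capita": 18,
    "telecom capacity per capita": 34,
    "storage capacity per capita": 40,
}
for label, months in doubling_times.items():
    print(f"{label}: x{annual_growth_from_doubling(months):.2f} per year")
# Doubling every 14 months is about 1.81x per year; every 40 months, about 1.23x.
```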

Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine”.[23] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date “will not represent the Singularity” because they do “not yet correspond to a profound expansion of our intelligence.”[24]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term “singularity” in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[3]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the “law of accelerating returns”. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.[25] Kurzweil believes that the singularity will occur by approximately 2045.[26] His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy’s Wired magazine article “Why the future doesn’t need us”.[4][27]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[28]

Steven Pinker stated in 2008:

… There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles – all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. …[16]

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally …, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. … [T]he machinery has no beliefs, desires, [or] motivations.[29]

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[30] postulates a “technology paradox” in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be “routine”.[31]

Theodore Modis[32][33] and Jonathan Huebner[34] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore’s prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[35] Although Kurzweil drew on Modis’s resources, and Modis’s own work centered on accelerating change, Modis distanced himself from Kurzweil’s thesis of a “technological singularity”, claiming that it lacks scientific rigor.[33]

Others[who?] propose that other “singularities” can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[36][improper synthesis?]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore’s law to 19th-century computers.[37]

In a 2007 paper, Schmidhuber stated that the frequency of subjectively “notable events” appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[38]

Paul Allen argues the opposite of accelerating returns: the complexity brake;[18] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns but, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[39] a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since.[34] The growth of complexity eventually becomes self-limiting, and leads to a widespread “general systems collapse”.

Jaron Lanier disputes the idea that the Singularity is inevitable. He states: “I do not think the technology is creating itself. It’s not an autonomous process.”[40] He goes on to assert: “The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it’s the same thing operationally as denying people clout, dignity, and self-determination … to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics.”[40]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good.[41]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[42] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[43]
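The straight-line criticism is easy to reproduce. In the sketch below (my own toy demonstration, not Myers’ analysis and not Kurzweil’s data), “event” dates are drawn completely at random, yet plotting time-before-present against the gap to the next event on log-log axes still yields a nearly straight line:

```python
# Toy demonstration of the log-log straight-line bias: random "event"
# dates, log-uniform over 12 orders of magnitude before the present,
# already produce a near-linear log-log chart of time-before-present
# versus the interval to the next event.
import math
import random

random.seed(0)
events = sorted(10 ** random.uniform(0, 12) for _ in range(200))
pairs = [(later, later - earlier)
         for earlier, later in zip(events, events[1:])]

xs = [math.log10(t) for t, _ in pairs]      # log(time before present)
ys = [math.log10(gap) for _, gap in pairs]  # log(gap to next event)

# Pearson correlation of the two log series.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
sy = math.sqrt(sum((y - my) ** 2 for y in ys))
r = cov / (sx * sy)
# r comes out close to 1: arbitrary dates already look linear on log-log.
```

The effect arises because gaps between log-uniform samples scale with the magnitude of the samples themselves, which is the mechanical bias the critics point to.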

The term “technological singularity” reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[44][45] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[46][47] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[44]

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description. In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that “humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels… we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes… With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction”. The article argues that, from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life’s evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, “the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. The digital realm stored 500 times more information than this in 2014 (…see Figure)… The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth per year,[22] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years”.[48]
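The quoted figures are internally consistent, which is easy to verify with back-of-the-envelope arithmetic (every input below is taken directly from the quotation; the code only does unit conversion):

```python
# Check the arithmetic in the quoted passage.
import math

humans = 7.2e9               # people on the planet
genome_nt = 6.2e9            # nucleotides per human genome
nt_per_byte = 4              # one byte encodes four nucleotide pairs
human_genomes_bytes = humans * genome_nt / nt_per_byte  # about 1.1e19 bytes

digital_2014_bytes = 5e21    # 5 zettabytes stored digitally in 2014
ratio = digital_2014_bytes / human_genomes_bytes        # about 450, i.e. "500 times"

biosphere_bytes = 5.3e37 / nt_per_byte                  # about 1.325e37 bytes
# Years for digital storage to rival biosphere DNA at 30-38% annual growth:
years = {rate: math.log(biosphere_bytes / digital_2014_bytes) / math.log(1 + rate)
         for rate in (0.30, 0.38)}
# Roughly 135 years at 30% growth and 110 years at 38%,
# matching the article's "about 110 years".
```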

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[49]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a “cockroach” stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[49]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[50]

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy effectively limitless. He argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[51] He buttresses this argument by pointing to current bio-engineering advances, such as somatic gene therapy: after synthetic viruses carrying specific genetic information are created, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[52]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called “Digital Ascension” that involves “people dying in the flesh and being uploaded into a computer and remaining conscious”.[53] Singularitarianism has also been likened to a religion by John Horgan.[54]

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”[3]

In 1965, Good wrote his essay postulating an “intelligence explosion” of recursive self-improvement of a machine intelligence. In 1985, in “The Time Scale of Artificial Intelligence”, artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an “infinity point”: if a research community of human-level self-improving AIs take four years to double their own speed, then two years, then one year and so on, their capabilities increase infinitely in finite time.[4][55]
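
Solomonoff’s “infinity point” is, at bottom, a geometric series: if the first speed doubling takes four years and each subsequent doubling takes half as long, the total elapsed time converges to eight years even as the speed multiplier grows without bound. A minimal illustrative sketch (the four-year starting interval is the one given in the text; everything else is for demonstration):

```python
def infinity_point_horizon(first_doubling_years=4.0, n_doublings=50):
    """Total elapsed time after n speed doublings, where each doubling
    takes half as long as the previous one: 4 + 2 + 1 + 0.5 + ..."""
    return sum(first_doubling_years / 2**k for k in range(n_doublings))

# Elapsed time converges to 8 years, while speed after n doublings is 2**n:
print(infinity_point_horizon(n_doublings=10))  # 7.9921875
print(infinity_point_horizon(n_doublings=50))  # ~8.0
```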

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) that attains consciousness and begins increasing its own intelligence, moving toward a personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency.

In 1983, Vinge greatly popularized Good’s intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term “singularity” in a way that was specifically tied to the creation of intelligent machines, writing:[56][57]

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.

Vinge’s 1993 article “The Coming Technological Singularity: How to Survive in the Post-Human Era”[5] spread widely on the internet and helped to popularize the idea.[58] This article contains the statement, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[5]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[27]

In 2005, Kurzweil published The Singularity is Near. Kurzweil’s publicity campaign included an appearance on The Daily Show with Jon Stewart.[59]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to “singularity” are mutually incompatible rather than mutually supporting.[13][60] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability.[13]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.”[61] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA’s Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[62][63][64]

Former President of the United States Barack Obama spoke about the singularity in an interview with Wired in 2016:[65]

One thing that we haven’t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren’t spending a lot of time right now worrying about singularity; they are worrying about “Well, is my job going to be replaced by a machine?”

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good’s scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[10]

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings’ lives would be like in a post-singularity world.[5][11]

Some writers use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[12][13][14] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[5] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore’s law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[13][15]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.[16][17][18]

The exponential growth in computing technology suggested by Moore’s law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore’s law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[19] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[20]) increases exponentially, generalizing Moore’s law in the same manner as Moravec’s proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[21] Between 1986 and 2007, machines’ application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world’s general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world’s storage capacity per capita doubled every 40 months.[22]
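
The doubling periods quoted above are easier to compare when converted into equivalent compound annual growth rates. A small illustrative conversion (the doubling times are from the text; the function itself is just the standard doubling-time formula):

```python
def annual_growth(doubling_months):
    """Equivalent compound annual growth rate for a quantity that
    doubles every `doubling_months` months."""
    return 2 ** (12 / doubling_months) - 1

# Doubling every 14 months is ~81% growth per year; every 18 months ~59%;
# every 34 months ~28%; every 40 months ~23%.
for months in (14, 18, 34, 40):
    print(months, f"{annual_growth(months):.0%}")
```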

Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine”.[23] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date “will not represent the Singularity” because they do “not yet correspond to a profound expansion of our intelligence.”[24]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term “singularity” in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[3]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the “law of accelerating returns”. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.[25] Kurzweil believes that the singularity will occur by approximately 2045.[26] His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy’s Wired magazine article “Why the future doesn’t need us”.[4][27]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[28]

Steven Pinker stated in 2008:

… There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles, all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. …[16]

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally …, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. … [T]he machinery has no beliefs, desires, [or] motivations.[29]

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[30] postulates a “technology paradox” in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be “routine”.[31]

Theodore Modis[32][33] and Jonathan Huebner[34] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore’s prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-core processors.[35] While Kurzweil used Modis’s resources, and Modis’s work was centered on accelerating change, Modis distanced himself from Kurzweil’s thesis of a “technological singularity”, claiming that it lacks scientific rigor.[33]

Others propose that other “singularities” can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[36]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore’s law to 19th-century computers.[37]

In a 2007 paper, Schmidhuber stated that the frequency of subjectively “notable events” appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[38]

Paul Allen argues for the opposite of accelerating returns, the complexity brake:[18] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not exhibit accelerating returns but, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[39] a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since.[34] The growth of complexity eventually becomes self-limiting and leads to a widespread “general systems collapse”.

Jaron Lanier refutes the idea that the Singularity is inevitable. He states: “I do not think the technology is creating itself. It’s not an autonomous process.”[40] He goes on to assert: “The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it’s the same thing operationally as denying people clout, dignity, and self-determination … to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics.”[40]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth has slowed around 1970 and slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I.J. Good.[41]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[42] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[43]

The term “technological singularity” reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[44][45] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[46][47] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[44]

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description. In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that “humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels… we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes… With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction”. The article argues that from the perspective of the evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life’s evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, “the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. 
The digital realm stored 500 times more information than this in 2014 (…see Figure)… The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information. If growth in digital storage continues at its current rate of 3038% compound annual growth per year,[22] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years”.[48]

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[49]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a “cockroach” stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[49]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[50][improper synthesis?]

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making the life expectancy limitless. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[51] Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy; after synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[52]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called “Digital Ascension” that involves “people dying in the flesh and being uploaded into a computer and remaining conscious”.[53] Singularitarianism has also been likened to a religion by John Horgan.[54]

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”[3]

In 1965, Good wrote his essay postulating an “intelligence explosion” of recursive self-improvement of a machine intelligence. In 1985, in “The Time Scale of Artificial Intelligence”, artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an “infinity point”: if a research community of human-level self-improving AIs take four years to double their own speed, then two years, then one year and so on, their capabilities increase infinitely in finite time.[4][55]

In 1981, Stanisaw Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) who obtains consciousness and starts to increase his own intelligence, moving towards personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirement because it finds them lacking internal logical consistency.

In 1983, Vinge greatly popularized Good’s intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term “singularity” in a way that was specifically tied to the creation of intelligent machines:[56][57] writing

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.

Vinge’s 1993 article “The Coming Technological Singularity: How to Survive in the Post-Human Era”,[5] spread widely on the internet and helped to popularize the idea.[58] This article contains the statement, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[5]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[27]

In 2005, Kurzweil published The Singularity is Near. Kurzweil’s publicity campaign included an appearance on The Daily Show with Jon Stewart.[59]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to “singularity” are mutually incompatible rather than mutually supporting.[13][60] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability.[13]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.”[61] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA’s Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including possible technological singularity.[62][63][64]

Former President of the United States Barack Obama spoke about singularity in his interview to Wired in 2016:[65]

One thing that we haven’t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren’t spending a lot of time right now worrying about singularitythey are worrying about “Well, is my job going to be replaced by a machine?”

Read more:

Technological singularity – Wikipedia

Technological singularity – Wikipedia

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good’s scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[10]

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings’ lives would be like in a post-singularity world.[5][11]

Some writers use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[12][13][14] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[5] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore’s law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[13][15]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.[16][17][18]

The exponential growth in computing technology suggested by Moore’s law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore’s law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[19] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[20]) increases exponentially, generalizing Moore’s law in the same manner as Moravec’s proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[21] Between 1986 and 2007, machines’ application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world’s general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world’s storage capacity per capita doubled every 40 months.[22]
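The doubling times in the paragraph above can be converted into equivalent annual growth factors (a quantity that doubles every d months multiplies by 2^(12/d) each year); a quick sketch of that arithmetic, with labels paraphrasing the capacities listed above:

```python
# Convert the doubling times cited above into equivalent annual growth factors.
# A quantity that doubles every d months grows by a factor of 2**(12/d) per year.
doubling_months = {
    "application-specific compute per capita": 14,
    "general-purpose compute per capita": 18,
    "telecommunication capacity per capita": 34,
    "storage capacity per capita": 40,
}

for name, d in doubling_months.items():
    annual_factor = 2 ** (12 / d)
    print(f"{name}: doubles every {d} months, i.e. x{annual_factor:.2f} per year")
```

A 14-month doubling time thus corresponds to roughly 81% growth per year, while a 40-month doubling time corresponds to roughly 23% per year.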

Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine”.[23] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date “will not represent the Singularity” because they do “not yet correspond to a profound expansion of our intelligence.”[24]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term “singularity” in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[3]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the “law of accelerating returns”. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.[25] Kurzweil believes that the singularity will occur by approximately 2045.[26] His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy’s Wired magazine article “Why the future doesn’t need us”.[4][27]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[28]

Steven Pinker stated in 2008:

… There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles, all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. …[16]

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally …, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. … [T]he machinery has no beliefs, desires, [or] motivations.[29]

Martin Ford, in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future,[30] postulates a “technology paradox”: before the singularity could occur, most routine jobs in the economy would already have been automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies required to bring about the singularity. Job displacement is increasingly no longer limited to work traditionally considered “routine”.[31]

Theodore Modis[32][33] and Jonathan Huebner[34] argue that the rate of technological innovation has not only ceased to rise but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing even while Moore’s prediction of exponentially increasing circuit density continues to hold: excessive heat builds up in the chip and cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[35] Although Kurzweil drew on Modis’ resources, and Modis’ own work centered on accelerating change, Modis distanced himself from Kurzweil’s thesis of a “technological singularity”, claiming that it lacks scientific rigor.[33]

Others[who?] propose that other “singularities” can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[36][improper synthesis?]

In a detailed empirical accounting, “The Progress of Computing”, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore’s law to 19th-century computers.[37]

In a 2007 paper, Jürgen Schmidhuber stated that the frequency of subjectively “notable events” appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in the memory of recent and distant events could create an illusion of accelerating change where none exists.[38]

Paul Allen argues for the opposite of accelerating returns, the complexity brake:[18] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not exhibit accelerating returns but, as Joseph Tainter suggested in his The Collapse of Complex Societies,[39] a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900 and has been declining since.[34] The growth of complexity eventually becomes self-limiting and leads to a widespread “general systems collapse”.

Jaron Lanier refutes the idea that the Singularity is inevitable. He states: “I do not think the technology is creating itself. It’s not an autonomous process.”[40] He goes on to assert: “The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it’s the same thing operationally as denying people clout, dignity, and self-determination … to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics.”[40]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming singularity as imagined by mathematician I. J. Good.[41]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[42] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[43]

The term “technological singularity” reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[44][45] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[46][47] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[44]

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description. In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence.

A 2016 article in Trends in Ecology & Evolution argues that “humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels… we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes… With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction”. The article argues that, from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life’s evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition.

The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, “the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. The digital realm stored 500 times more information than this in 2014 (…see Figure)… The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth per year,[22] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years”.[48]
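The arithmetic in the quoted passage can be checked directly; a minimal sketch using the quote’s own figures (5 zettabytes stored in 2014, 7.2 billion genomes of 6.2 billion nucleotides, four nucleotide pairs per byte, and the upper 38% end of the quoted growth range):

```python
import math

# Figures taken from the passage quoted above.
digital_2014 = 5e21          # bytes stored digitally in 2014 (5 zettabytes)
humans = 7.2e9               # world population
genome_nucleotides = 6.2e9   # nucleotides per human genome
per_byte = 4                 # the quote assumes one byte encodes four nucleotide pairs

# Bytes needed to encode every individual human genome (~1e19, as quoted).
genome_bytes = humans * genome_nucleotides / per_byte

# How many times larger the 2014 digital realm was (the quote says ~500x).
ratio = digital_2014 / genome_bytes

# Years until digital storage rivals all DNA on Earth, at the upper 38% rate.
dna_bytes = 5.3e37 / per_byte
years = math.log(dna_bytes / digital_2014) / math.log(1.38)

print(f"genomes ≈ {genome_bytes:.2e} bytes, ratio ≈ {ratio:.0f}, years ≈ {years:.0f}")
```

At 38% annual growth the crossover lands near the quoted 110 years; at the lower 30% rate it stretches to roughly 135 years, which is why the passage gives a range.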

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[49]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a “cockroach” stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science fiction is probably unlikely, but that other potential hazards and pitfalls exist.[49]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[50][improper synthesis?]

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[51] Kurzweil further buttresses his argument by discussing current bio-engineering advances. He suggests somatic gene therapy: after synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[52]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called “Digital Ascension” that involves “people dying in the flesh and being uploaded into a computer and remaining conscious”.[53] Singularitarianism has also been likened to a religion by John Horgan.[54]

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”[3]

In 1965, Good wrote his essay postulating an “intelligence explosion” of recursive self-improvement of a machine intelligence. In 1985, in “The Time Scale of Artificial Intelligence”, artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an “infinity point”: if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year, and so on, its capabilities increase infinitely in finite time.[4][55]
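Solomonoff’s construction is just a geometric series: successive doublings take 4, 2, 1, 0.5, … years, so infinitely many of them fit inside 4/(1 - 1/2) = 8 years. A minimal numeric sketch:

```python
# Solomonoff's "infinity point": each doubling of the community's speed halves
# the time the next doubling takes, so infinitely many doublings fit inside a
# finite horizon: 4 + 2 + 1 + 0.5 + ... = 4 / (1 - 1/2) = 8 years.
interval = 4.0   # years the first doubling takes
elapsed = 0.0
speed = 1.0
for _ in range(50):      # 50 doublings already sit vanishingly close to the limit
    elapsed += interval
    speed *= 2
    interval /= 2

print(f"elapsed = {elapsed:.15f} years (sum approaches 8, never exceeds it)")
print(f"speed multiplier = 2**50 ≈ {speed:.3e}")
```

Capability (here, speed) grows without bound as elapsed time approaches the eight-year horizon, which is what makes the “infinity point” a singularity in the mathematical sense.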

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) that obtains consciousness and starts to increase its own intelligence, moving towards a personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in military requirements because it finds them lacking in internal logical consistency.

In 1983, Vinge greatly popularized Good’s intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term “singularity” in a way that was specifically tied to the creation of intelligent machines,[56][57] writing:

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.

Vinge’s 1993 article “The Coming Technological Singularity: How to Survive in the Post-Human Era”,[5] spread widely on the internet and helped to popularize the idea.[58] This article contains the statement, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[5]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[27]

In 2005, Kurzweil published The Singularity is Near. Kurzweil’s publicity campaign included an appearance on The Daily Show with Jon Stewart.[59]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to “singularity” are mutually incompatible rather than mutually supporting.[13][60] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability.[13]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.”[61] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA’s Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[62][63][64]

Former President of the United States Barack Obama spoke about the singularity in his 2016 interview with Wired:[65]

One thing that we haven’t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren’t spending a lot of time right now worrying about singularity; they are worrying about “Well, is my job going to be replaced by a machine?”

Go here to see the original:

Technological singularity – Wikipedia

Technological singularity – Wikipedia

The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.[2] According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a “runaway reaction” of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence. Stanislaw Ulam reports a discussion with John von Neumann “centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.[3] Subsequent authors have echoed this viewpoint.[2][4] I. J. Good’s “intelligence explosion” model predicts that a future superintelligence will trigger a singularity.[5] Emeritus professor of computer science at San Diego State University and science fiction author Vernor Vinge said in his 1993 essay The Coming Technological Singularity that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.[5]

Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when artificial general intelligence (AGI) would arrive was 2040 to 2050, depending on the poll.[6][7]

Many notable personalities, including Stephen Hawking, Elon Musk and Sam Harris consider the uncontrolled rise of artificial intelligence as a matter of alarm and concern for humanity’s future.[8][9] The consequences of the singularity and its potential benefit or harm to the human race have been hotly debated by various intellectual circles.[citation needed]

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good’s scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[10]

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of super intelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings’ lives would be like in a post-singularity world.[5][11]

Some writers use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[12][13][14] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[5] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore’s law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[13][15]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose law is often cited in support of the concept.[16][17][18]

The exponential growth in computing technology suggested by Moore’s law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore’s law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[19] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[20]) increases exponentially, generalizing Moore’s law in the same manner as Moravec’s proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[21] Between 1986 and 2007, machines’ application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world’s general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world’s storage capacity per capita doubled every 40 months.[22]

Kurzweil reserves the term “singularity” for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that “The Singularity will allow us to transcend these limitations of our biological bodies and brains … There will be no distinction, post-Singularity, between human and machine”.[23] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date “will not represent the Singularity” because they do “not yet correspond to a profound expansion of our intelligence.”[24]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term “singularity” in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[3]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the “law of accelerating returns”. Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to “technological change so rapid and profound it represents a rupture in the fabric of human history”.[25] Kurzweil believes that the singularity will occur by approximately 2045.[26] His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy’s Wired magazine article “Why the future doesn’t need us”.[4][27]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[28]

Steven Pinker stated in 2008:

… There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobilesall staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. …[16]

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally …, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. … [T]he machinery has no beliefs, desires, [or] motivations.[29]

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[30] postulates a “technology paradox” in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be “routine”.[31]

Theodore Modis[32][33] and Jonathan Huebner[34] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore’s prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advancements in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[35] While Kurzweil used Modis’ resources, and Modis’ work was around accelerating change, Modis distanced himself from Kurzweil’s thesis of a “technological singularity”, claiming that it lacks scientific rigor.[33]

Others[who?] propose that other “singularities” can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[36][improper synthesis?]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore’s law to 19th-century computers.[37]

In a 2007 paper, Schmidhuber stated that the frequency of subjectively “notable events” appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[38]

Paul Allen argues the opposite of accelerating returns, the complexity brake:[18] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not exhibit accelerating returns but, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[39] a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900 and has been declining since.[34] The growth of complexity eventually becomes self-limiting, and leads to a widespread “general systems collapse”.

Jaron Lanier refutes the idea that the Singularity is inevitable. He states: “I do not think the technology is creating itself. It’s not an autonomous process.”[40] He goes on to assert: “The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it’s the same thing operationally as denying people clout, dignity, and self-determination … to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics.”[40]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming singularity as imagined by mathematician I. J. Good.[41]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil’s iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary “events” were picked arbitrarily.[42] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[43]

The term “technological singularity” reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[44][45] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[46][47] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[44]

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description. In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that “humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels… we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes… With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction”. The article argues that, from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life’s evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, “the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. 
The digital realm stored 500 times more information than this in 2014 (…see Figure)… The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth per year,[22] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years”.[48]
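
The quantitative claims in the quoted passage can be checked directly. The following back-of-the-envelope sketch (constants taken from the figures quoted above; it is an illustrative check, not part of the cited study) reproduces the ~1×10^19-byte estimate for all human genomes, the roughly 500-fold digital surplus in 2014, and the crossover horizon at the quoted growth rates:

```python
import math

HUMANS = 7.2e9                    # world population cited in the article
NUCLEOTIDES_PER_GENOME = 6.2e9    # nucleotides per human genome
NUCLEOTIDES_PER_BYTE = 4          # one byte encodes four nucleotides (2 bits each)
DIGITAL_2014 = 5e21               # ~5 zettabytes of digital storage in 2014
BIOSPHERE_BP = 5.3e37             # estimated base pairs of DNA in all cells on Earth

# Bytes needed to encode every individual human genome
genome_bytes = HUMANS * NUCLEOTIDES_PER_GENOME / NUCLEOTIDES_PER_BYTE

# How many times larger the 2014 digital store was than that
ratio = DIGITAL_2014 / genome_bytes

# Total biosphere DNA expressed in bytes (1.325e37 per the article)
biosphere_bytes = BIOSPHERE_BP / NUCLEOTIDES_PER_BYTE

def years_to_match(cagr):
    """Years until digital storage reaches biosphere_bytes at a given
    compound annual growth rate, starting from the 2014 level."""
    return math.log(biosphere_bytes / DIGITAL_2014) / math.log(1 + cagr)

print(f"human genomes: {genome_bytes:.2e} bytes")       # on the order of 1e19
print(f"digital/genomic ratio in 2014: {ratio:.0f}x")   # on the order of 500
print(f"years at 30% growth: {years_to_match(0.30):.0f}")
print(f"years at 38% growth: {years_to_match(0.38):.0f}")
```

At 30% annual growth the crossover takes roughly 135 years; the article’s “about 110 years” corresponds to the upper (38%) end of the quoted range.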

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[49]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a “cockroach” stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[49]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[50][improper synthesis?]

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[51] Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy: after synthesizing viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[52]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called “Digital Ascension” that involves “people dying in the flesh and being uploaded into a computer and remaining conscious”.[53] Singularitarianism has also been likened to a religion by John Horgan.[54]

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”[3]

In 1965, Good wrote his essay postulating an “intelligence explosion” of recursive self-improvement of a machine intelligence. In 1985, in “The Time Scale of Artificial Intelligence”, artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an “infinity point”: if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time.[4][55]
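
Solomonoff’s “infinity point” is, at bottom, a convergent geometric series: if the first doubling of speed takes four years and each subsequent doubling takes half as long as the one before, then infinitely many doublings fit inside a finite window. A sketch of the arithmetic:

```latex
% Total time for infinitely many doublings, each half as long as the last:
T \;=\; 4 + 2 + 1 + \tfrac{1}{2} + \cdots
  \;=\; \sum_{k=0}^{\infty} 4 \cdot 2^{-k}
  \;=\; \frac{4}{1 - \tfrac{1}{2}}
  \;=\; 8 \text{ years}
```

Capability thus grows without bound within an eight-year horizon, which is what makes the “infinity point” a point rather than a trend.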

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) that attains consciousness and starts to increase its own intelligence, moving towards a personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in the military requirements because it finds them lacking internal logical consistency.

In 1983, Vinge greatly popularized Good’s intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term “singularity” in a way that was specifically tied to the creation of intelligent machines,[56][57] writing:

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.

Vinge’s 1993 article “The Coming Technological Singularity: How to Survive in the Post-Human Era”,[5] spread widely on the internet and helped to popularize the idea.[58] This article contains the statement, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[5]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[27]

In 2005, Kurzweil published The Singularity is Near. Kurzweil’s publicity campaign included an appearance on The Daily Show with Jon Stewart.[59]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to “singularity” are mutually incompatible rather than mutually supporting.[13][60] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability.[13]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is “to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.”[61] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA’s Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[62][63][64]

Former President of the United States Barack Obama spoke about singularity in his interview to Wired in 2016:[65]

One thing that we haven’t talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren’t spending a lot of time right now worrying about singularity – they are worrying about “Well, is my job going to be replaced by a machine?”


Jordan Peterson – Wikipedia

Jordan Bernt Peterson (born June 12, 1962) is a Canadian clinical psychologist, cultural critic, and professor of psychology at the University of Toronto. His main areas of study are in abnormal, social, and personality psychology,[1] with a particular interest in the psychology of religious and ideological belief,[2] and the assessment and improvement of personality and performance.[3]

Peterson studied at the University of Alberta and McGill University. He remained at McGill as a post-doctoral fellow from 1991 to 1993 before moving to Harvard University, where he was an assistant and an associate professor in the psychology department. In 1998, he returned to Canada to become a full professor at the University of Toronto.

His first book, Maps of Meaning: The Architecture of Belief (1999), examined several academic fields to describe the structure of systems of belief and myth, their role in the regulation of emotion, the creation of meaning, and motivation for genocide.[4][5][6] His second book, 12 Rules for Life: An Antidote to Chaos, was released in January 2018.[7][8][9]

In 2016, Peterson released a series of videos on his YouTube channel in which he criticized political correctness and the Canadian government’s Bill C-16. He subsequently received significant media coverage.[7][8][9]

Peterson was born on June 12, 1962, and grew up in Fairview, Alberta, a small town northwest of his birthplace, Edmonton. He was the eldest of three children born to Beverley, a librarian at the Fairview campus of Grande Prairie Regional College, and Walter Peterson, a schoolteacher.[10] His middle name is Bernt (pronounced “BAIRNT”), after his Norwegian great-grandfather.[11][12]

When he was 13, he was introduced to the writings of George Orwell, Aldous Huxley, Aleksandr Solzhenitsyn, and Ayn Rand by his school librarian Sandy Notley, the mother of Rachel Notley, leader of the Alberta New Democratic Party and 17th Premier of Alberta.[13] He also worked for the New Democratic Party (NDP) throughout his teenage years, but grew disenchanted with the party due to what he saw as a preponderance of “the intellectual, tweed-wearing middle-class socialist” who “didn’t like the poor; they just hated the rich”.[10] He left the NDP at age 18.[14]

After graduating from Fairview High School in 1979, Peterson entered Grande Prairie Regional College to study political science and English literature.[2] He later transferred to the University of Alberta, where he completed his B.A. in 1982.[14] Afterwards, he took a year off to visit Europe. There he developed an interest in the psychological origins of the Cold War, particularly 20th-century European totalitarianism,[2][15] and was plagued by apocalyptic nightmares about the escalation of the nuclear arms race. As a result, he became concerned about mankind’s capacity for evil and destruction, and delved into the works of Carl Jung, Friedrich Nietzsche, Aleksandr Solzhenitsyn,[10] and Fyodor Dostoyevsky.[15] He then returned to the University of Alberta and received a B.A. in psychology in 1984.[16] In 1985, he moved to Montreal to attend McGill University. He earned his Ph.D. in clinical psychology under the supervision of Robert O. Pihl in 1991, and remained as a post-doctoral fellow at McGill’s Douglas Hospital until June 1993, working with Pihl and Maurice Dongier.[2][17]

From July 1993 to June 1998,[1] Peterson lived in Arlington, Massachusetts, while teaching and conducting research at Harvard University as an assistant and an associate professor in the psychology department. During his time at Harvard, he studied aggression arising from drug and alcohol abuse and supervised a number of unconventional thesis proposals.[14] Two former Ph.D. students, Shelley Carson, a psychologist and teacher from Harvard, and author Gregg Hurwitz recalled that Peterson’s lectures were already highly admired by the students.[8] In July 1998, he returned to Canada and took up a post as a full professor at the University of Toronto.[1][16]

Peterson’s areas of study and research are in the fields of psychopharmacology, abnormal, neuro, clinical, personality, social, industrial and organizational,[1] religious, ideological,[2] political, and creativity psychology.[3] Peterson has authored or co-authored more than a hundred academic papers.[18] Peterson has more than 20 years of clinical practice, during which he has seen about 20 people a week, but in 2017 he decided to put the practice on hold because of new projects.[7]

In 2004, a 13-part TV series based on Peterson’s book Maps of Meaning: The Architecture of Belief aired on TVOntario.[10][16][19] He has also appeared on that network on shows such as Big Ideas, and as a frequent guest and essayist on The Agenda with Steve Paikin since 2008.[20][21]


In 1999, Routledge published Maps of Meaning: The Architecture of Belief. The book, which took Peterson 13 years to complete, describes a comprehensive theory of how people construct meaning and beliefs and make narratives, using ideas from various fields including mythology, religion, literature, philosophy, and psychology, in accordance with the modern scientific understanding of how the brain functions.[14][22][23]

According to Peterson, his main goal was to examine why both individuals and groups participate in social conflict, and to explore the reasoning and motivations that lead individuals to support their belief systems (i.e. ideological identification[14]) to the point of killing and pathological atrocities like the Gulag, the Auschwitz concentration camp, and the Rwandan genocide.[14][22][23] He considers that an “analysis of the world’s religious ideas might allow us to describe our essential morality and eventually develop a universal system of morality”.[23]

In January 2018, Penguin Random House published Peterson’s second book, 12 Rules for Life: An Antidote to Chaos. The work contains abstract ethical principles about life, in a more accessible style than Maps of Meaning.[7][8][9] To promote the book, Peterson went on a world tour.[24][25][26] As part of the tour, Peterson was interviewed by Cathy Newman on Channel 4 News; in a short time the interview received considerable attention and over seven million views on YouTube.[27][28][29] The book reached number one on Amazon’s bestseller lists in the United States and Canada, and number four in the United Kingdom,[30][31] and topped bestseller lists in Canada, the US, and the UK.[32][33]

In 2013, Peterson began recording his lectures (“Personality and Its Transformations”, “Maps of Meaning: The Architecture of Belief”[34]) and uploading them to YouTube. His YouTube channel has gathered more than 800,000 subscribers and his videos have received more than 35 million views as of January 2018.[35] In January 2017, he hired a production team to film his psychology lectures at the University of Toronto, using funds received via the crowdfunding website Patreon after he became embroiled in the Bill C-16 controversy in September 2016. His funding through Patreon increased from $1,000 per month in August 2016 to $14,000 by January 2017, and to more than $50,000 by July 2017.[13][35][36]

Peterson has appeared on The Joe Rogan Experience, The Gavin McInnes Show, Steven Crowder’s Louder with Crowder, Dave Rubin’s The Rubin Report, Stefan Molyneux’s Freedomain Radio, h3h3Productions’s H3 Podcast, Sam Harris’s Waking Up podcast, Gad Saad’s The Saad Truth series and other online shows.[37] In December 2016, Peterson started his own podcast, The Jordan B. Peterson Podcast, which has 39 episodes as of February 20, 2018, including academic guests such as Camille Paglia, Martin Daly, and James W. Pennebaker,[38] while on his channel he has also interviewed Stephen Hicks, Richard J. Haier, and Jonathan Haidt among others. Peterson supported engineer James Damore in his action against Google.[9]

In May 2017, Peterson began The Psychological Significance of the Biblical stories,[39] a series of live theatre lectures, also published as podcasts, in which he analyzes archetypal narratives in Genesis as patterns of behavior vital for personal, social and cultural stability.[9][40]

Peterson and his colleagues Robert O. Pihl, Daniel Higgins, and Michaela Schippers[41] produced a writing therapy program with a series of online writing exercises, titled the Self Authoring Suite.[42] It includes the Past Authoring Program, a guided autobiography; two Present Authoring Programs, which allow the participant to analyze their personality faults and virtues in terms of the Big Five personality model; and the Future Authoring Program, which guides participants through the process of planning their desired futures. The latter program was used with McGill University undergraduates on academic probation to improve their grades, as well as, since 2011, at the Rotterdam School of Management, Erasmus University.[43][44] The Self Authoring Programs were developed partially from research by James W. Pennebaker at the University of Texas at Austin and Gary Latham at the Rotman School of Management of the University of Toronto. Pennebaker demonstrated that writing about traumatic or uncertain events and situations improved mental and physical health, while Latham demonstrated that personal planning exercises help make people more productive.[44] According to Peterson, more than 10,000 students have used the program as of January 2017, with drop-out rates decreasing by 25% and GPAs rising by 20%.[10]

Peterson’s critiques of political correctness range over issues such as postmodernism, postmodern feminism, white privilege, cultural appropriation, and environmentalism.[37][45][46] Writing in the National Post, Chris Selley said Peterson’s opponents had “underestimated the fury being inspired by modern preoccupations like white privilege and cultural appropriation, and by the marginalization, shouting down or outright cancellation of other viewpoints in polite society’s institutions”,[47] while in The Spectator, Tim Lott stated Peterson became “an outspoken critic of mainstream academia”.[15] Peterson’s social media presence has magnified the impact of these views; Simona Chiose of The Globe and Mail noted: “few University of Toronto professors in the humanities and social sciences have enjoyed the global name recognition Prof. Peterson has won”.[48]

According to a study of the relationship between political belief and personality, conducted with one of his students, Christine Brophy, political correctness exists in two types: PC-Egalitarianism and PC-Authoritarianism, the latter a manifestation of “offense sensitivity”.[49] The first type is represented by a group of classical liberals, while the latter by the group known as “social justice warriors”[10] who “weaponize compassion”.[2] The study also found an overlap between PC-authoritarians and right-wing authoritarians.[49]

Peterson considers universities to be among the institutions most responsible for the wave of political correctness that has appeared in North America and Europe.[48] He has watched the rise of political correctness on campuses since the early 1990s,[50] and considers that the humanities have become corrupt and less reliant on science; instead of “intelligent conversation, we are having an ideological conversation”. From his own experience as a university professor, he states that the students coming to his classes are uneducated about and unaware of the mass exterminations and crimes of Stalinism and Maoism, which have not been given the same attention as fascism and Nazism. He also says that “instead of being ennobled or inculcated into the proper culture, the last vestiges of structure are stripped from [the students] by post-modernism and neo-Marxism, which defines everything in terms of relativism and power”.[15][51][52]


Peterson believes that postmodern philosophers and sociologists since the 1960s,[45] while typically claiming to reject Marxism and communism because these were discredited as economic ideologies and by the exposure of crimes in the Soviet Union, have actually built upon and extended their core tenets. He states that it is difficult to understand contemporary society without considering the influence of postmodernism, which initially spread from France to the United States through the English department at Yale University. He argues that they “started to play a sleight of hand, and instead of pitting the proletariat, the working class, against the bourgeois, they started to pit the oppressed against the oppressor. That opened up the avenue to identifying any number of groups as oppressed and oppressor and to continue the same narrative under a different name… The people who hold this doctrine – this radical, postmodern, communitarian doctrine that makes racial identity or sexual identity or gender identity or some kind of group identity paramount – they’ve got control over most low-to-mid level bureaucratic structures, and many governments as well”.[51][18]

He emphasizes that the state should halt funding to faculties and courses he describes as neo-Marxist, and advises students to avoid disciplines like women’s studies, ethnic studies and racial studies, as well as other fields of study he believes are “corrupted” by the ideology, such as sociology, anthropology and English literature.[53][54] He states that these fields, under the pretense of academic inquiry, propagate unscientific methods, fraudulent peer-review processes for academic journals, publications that garner zero citations,[55] cult-like behaviour,[53] safe-spaces,[56] and radical left-wing political activism for students.[45] Peterson has proposed launching a website which uses AI to identify and showcase the amount of ideologization in specific courses. He announced in November 2017 that he had temporarily postponed the project as “it might add excessively to current polarization”.[57][58]

Peterson has criticized the use of the term “white privilege”, stating that “being called out on their white privilege, identified with a particular racial group and then made to suffer the consequences of the existence of that racial group and its hypothetical crimes, and that sort of thing has to come to a stop. …[It’s] racist in its extreme.”[45] In response to the 2017 protest in Charlottesville, Virginia, he criticized the far right’s use of identity politics, and said that “the Caucasians shouldn’t revert to being white. It’s a bad idea, it’s a dangerous idea, and it’s coming fast, and I don’t like to see that!” He stated that the notion of group identity is “seriously pathological… reprehensible… genocidal” and “it will bring down our civilization if we pursue it”.[59] He has also been prominent in the debate about cultural appropriation, stating it promotes self-censorship in society and journalism.[60]

On September 27, 2016, Peterson released the first installment of a three-part lecture video series, entitled “Professor against political correctness: Part I: Fear and the Law”.[13][61] In the video, he stated he would not use the preferred gender pronouns of students and faculty as part of compelled speech, and announced his objection to the Canadian government’s Bill C-16, which proposed to add “gender identity or expression” as a prohibited ground of discrimination under the Canadian Human Rights Act, and to similarly expand the definitions of promoting genocide and publicly inciting hatred in the Criminal Code.[61][62]

He stated that his objection to the bill was based on potential free speech implications if the Criminal Code is amended, as he claimed he could then be prosecuted under provincial human rights laws if he refuses to call a transsexual student or faculty member by the individual’s preferred pronoun.[63] Furthermore, he argued that the new amendments paired with section 46.3 of the Ontario Human Rights Code would make it possible for employers and organizations to be subject to punishment under the code if any employee or associate says anything that can be construed “directly or indirectly” as offensive, “whether intentionally or unintentionally”.[64] Other academics challenged Peterson’s interpretation of C-16,[63] while some scholars such as Robert P. George supported Peterson’s initiative.[13]

The series of videos drew criticism from transgender activists, faculty and labour unions, and critics accused Peterson of “helping to foster a climate for hate to thrive”.[13] Protests erupted on campus, some including violence, and the controversy attracted international media attention.[65][66][67] When asked in September 2016 if he would comply with the request of a student to use a preferred pronoun, Peterson said “it would depend on how they asked me… If I could detect that there was a chip on their shoulder, or that they were [asking me] with political motives, then I would probably say no… If I could have a conversation like the one we’re having now, I could probably meet them on an equal level”.[67] Two months later, the National Post published an op-ed by Peterson in which he elaborated on his opposition to the bill and explained why he publicly made a stand against it:

I will never use words I hate, like the trendy and artificially constructed words “zhe” and “zher.” These words are at the vanguard of a post-modern, radical leftist ideology that I detest, and which is, in my professional opinion, frighteningly similar to the Marxist doctrines that killed at least 100 million people in the 20th century.

I have been studying authoritarianism on the right and the left for 35 years. I wrote a book, Maps of Meaning: The Architecture of Belief, on the topic, which explores how ideologies hijack language and belief. As a result of my studies, I have come to believe that Marxism is a murderous ideology. I believe its practitioners in modern universities should be ashamed of themselves for continuing to promote such vicious, untenable and anti-human ideas, and for indoctrinating their students with these beliefs. I am therefore not going to mouth Marxist words. That would make me a puppet of the radical left, and that is not going to happen. Period.[68]

In response to the controversy, academic administrators at the University of Toronto sent Peterson two letters of warning, one noting that free speech had to be exercised in accordance with human rights legislation and the other adding that his refusal to use the preferred personal pronouns of students and faculty upon request could constitute discrimination. Peterson speculated that these warning letters were leading up to formal disciplinary action against him, but in December the university assured him that he would retain his professorship, and in January 2017 he returned to teach his psychology class at the University of Toronto.[13]

In February 2017, Maxime Bernier, candidate for leader of the Conservative Party of Canada, stated that he shifted his position on Bill C-16 after meeting with Peterson and discussing it.[69] Peterson’s analysis of the bill was also frequently cited by senators who were opposed to its passage.[70]

In April 2017, Peterson was denied a Social Sciences and Humanities Research Council grant for the first time in his career, which he interpreted as retaliation for his statements regarding Bill C-16.[71] A media relations adviser for SSHRC said “[c]ommittees assess only the information contained in the application”.[72] In response, The Rebel Media launched an Indiegogo campaign on Peterson’s behalf.[73] The campaign raised $195,000 by its end on May 6, equivalent to over two years of research funding.[74]

In May 2017, Peterson spoke against Bill C-16 at a Senate committee on legal and constitutional affairs hearing. He was one of 24 witnesses who were invited to speak on the bill.[70]

In August 2017, an announced event at Ryerson University titled “The Stifling of Free Speech on University Campuses”, organized by former social worker Sarina Singh with panelists Peterson, Gad Saad, Oren Amitay, and Faith Goldy, was shut down because of pressure on the university administration from the group “No Fascists in Our City”.[75] However, another version of the panel (without Goldy) was held on November 11 at Canada Christian College with an audience of 1,500.[76][77]

In November 2017, a teaching assistant (TA) at Wilfrid Laurier University (WLU) was censured by her professors and WLU’s Manager of Gendered Violence Prevention and Support for showing a segment of The Agenda, which featured Peterson debating Bill C-16, during a classroom discussion.[78][79][80] The reasons given for the censure included the clip creating a “toxic climate” and being itself in violation of Bill C-16.[81] The case was criticized by several newspaper editorial boards[82][83][84] and national newspaper columnists[85][86][87][88] as an example of the suppression of free speech on university campuses. WLU announced a third-party investigation.[89] After the release of the audio recording of the meeting in which the TA was censured,[90] WLU President Deborah MacLatchy and the TA’s supervising professor Nathan Rambukkana published letters of formal apology.[91][92][93] According to the investigation, no students had complained about the lesson and there had been no informal concern related to Laurier policy; according to MacLatchy, the meeting “never should have happened at all”.[94][95]

Peterson married Tammy Roberts in 1989.[13] They have one daughter and one son.[10][13] He became a grandfather in August 2017.[96]

Politically, Peterson has described himself as a classic British liberal.[97][15] He is a philosophical pragmatist.[40] In a 2017 interview, Peterson identified as a Christian,[98] but in 2018 he did not.[99] He emphasized that his conceptualization of Christianity is probably not what is generally understood, stating that the ethical responsibility of a Christian is to imitate Christ, which for him means “something like you need to take responsibility for the evil in the world as if you were responsible for it … to understand that you determine the direction of the world, whether it’s toward heaven or hell”.[99] When asked if he believes in God, Peterson responded: “I think the proper response to that is No, but I’m afraid He might exist”.[7] Writing for The Spectator, Tim Lott said Peterson draws inspiration from Jung’s philosophy of religion and holds views similar to the Christian existentialism of Søren Kierkegaard and Paul Tillich. Lott also said Peterson has respect for Taoism, as it views nature as a struggle between order and chaos and posits that life would be meaningless without this duality.[15]
