
Category Archives: Singularity

Technological singularity – Wikipedia

Posted: December 15, 2016 at 7:02 pm

The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.[2] According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a 'runaway reaction' of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence.

John von Neumann first used the term "singularity" in the context of technological progress causing accelerating change: "The accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, cannot continue."[6] Subsequent authors have echoed this viewpoint.[2][3] I. J. Good predicted that a future superintelligence would trigger an "intelligence explosion".[4] Science fiction author Vernor Vinge said in his essay "The Coming Technological Singularity" that this would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate.[4]

At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial general intelligence (AGI) predictions by experts and found a wide range of predicted dates, with a median value of 2040.[5]

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good's scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.[6]

John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world.[4][7]

Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[8][9][10] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[4] Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore's Law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.[9][11]

Many prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, and Gordon Moore, whose Moore's Law is often cited in support of the concept.[12][13][14]

The exponential growth in computing technology suggested by Moore's Law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's Law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[15] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[16]) increases exponentially, generalizing Moore's Law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[17] Between 1986 and 2007, machines' application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world's general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world's storage capacity per capita doubled every 40 months.[18]
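The cited doubling periods imply enormous cumulative growth over the 1986–2007 window. A minimal sketch of the arithmetic (the doubling periods are the ones quoted above; the fold-increase computations are illustrative, not source figures):

```python
# Fold-increase implied by a fixed doubling period under clean exponential growth.
# Doubling periods below are the ones quoted in the text; the computed multipliers
# are illustrative arithmetic only.

def fold_increase(months_elapsed: float, doubling_period_months: float) -> float:
    """Capacity multiplier after months_elapsed, doubling every doubling_period_months."""
    return 2 ** (months_elapsed / doubling_period_months)

months_1986_to_2007 = 21 * 12  # 252 months

for label, period in [
    ("application-specific compute per capita", 14),
    ("general-purpose compute per capita", 18),
    ("telecom capacity per capita", 34),
    ("storage capacity per capita", 40),
]:
    print(f"{label}: ~{fold_increase(months_1986_to_2007, period):,.0f}x")
```

Even the slowest of these rates (doubling every 40 months) compounds to a better-than-70-fold increase over those 21 years, which is why such figures are often cited in support of accelerating-change arguments.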

Kurzweil reserves the term "singularity" for a rapid increase in intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine".[19] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence."[20]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Ulam tells of a conversation with the late John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[21]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "Law of Accelerating Returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".[22] Kurzweil believes that the singularity will occur by approximately 2045.[23] His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's Wired magazine article "Why the future doesn't need us".[3][24]

Some critics assert that no computer or machine will ever achieve human intelligence, while others hold that the definition of intelligence is irrelevant if the net result is the same.[25]

Steven Pinker stated in 2008:

(...) There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles – all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. (...)[12]

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally [...], no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. [...] [T]he machinery has no beliefs, desires, [or] motivations.[26]

Martin Ford in The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future[27] postulates a "technology paradox" in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to work traditionally considered to be "routine".[28]

Jared Diamond, in Collapse: How Societies Choose to Fail or Succeed, argues that cultures self-limit when they exceed the sustainable carrying capacity of their environment, and the consumption of strategic resources (frequently timber, soils or water) creates a deleterious positive feedback loop that leads eventually to social collapse and technological retrogression.

Theodore Modis[29][30] and Jonathan Huebner[31] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[32] Although Kurzweil drew on Modis's resources, and Modis's own work concerned accelerating change, Modis distanced himself from Kurzweil's thesis of a "technological singularity", claiming that it lacks scientific rigor.[30]

Others propose that other "singularities" can be found through analysis of trends in world population, world gross domestic product, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.[33]

In a detailed empirical accounting, The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's law to 19th-century computers.[34]

In a 2007 paper, Schmidhuber stated that the frequency of subjectively "notable events" appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[35]

Paul Allen argues the opposite of accelerating returns: a "complexity brake";[14] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not exhibit accelerating returns but, as Joseph Tainter suggests in The Collapse of Complex Societies,[36] a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since.[31] The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse".

Jaron Lanier disputes the idea that the Singularity is inevitable. He states: "I do not think the technology is creating itself. It's not an autonomous process."[37] He goes on to assert: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, and self-determination ... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics."[37]

Economist Robert J. Gordon, in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), points out that measured economic growth slowed around 1970 and has slowed even further since the financial crisis of 2008, and argues that the economic data show no trace of a coming singularity as imagined by mathematician I. J. Good.[38]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily.[39] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. The Economist mocked the concept with a graph extrapolating that the number of blades on a razor, which has increased over the years from one to as many as five, will increase ever-faster to infinity.[40]

The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[41][42] It is unclear whether an intelligence explosion of this kind would be beneficial or harmful, or even an existential threat,[43][44] as the issue has not been dealt with by most artificial general intelligence researchers, although the topic of friendly artificial intelligence is investigated by the Future of Humanity Institute and the Machine Intelligence Research Institute.[41]

While the technological singularity is usually seen as a sudden event, some scholars argue that the current speed of change already fits this description. In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence. A 2016 article in Trends in Ecology & Evolution argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes... With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction". The article argues that, from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition. The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, "the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5x10^21 bytes). In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1x10^19 bytes. The digital realm stored 500 times more information than this in 2014 (...see Figure)... The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3x10^37 base pairs, equivalent to 1.325x10^37 bytes of information. If growth in digital storage continues at its current rate of 30–38% compound annual growth per year,[18] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years".[45]
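The quoted byte arithmetic can be re-derived in a few lines. All input figures below come from the quoted passage; only the arithmetic is ours:

```python
# Re-doing the quoted back-of-envelope comparison of digital vs. genomic information.
# All input figures come from the quoted passage; only the arithmetic is ours.

humans = 7.2e9                     # world population
nucleotides_per_genome = 6.2e9     # per individual human genome
pairs_per_byte = 4                 # one byte encodes four nucleotide pairs (2 bits each)

all_human_genomes_bytes = humans * nucleotides_per_genome / pairs_per_byte
digital_storage_2014_bytes = 5e21  # ~5 zettabytes stored in 2014

print(f"all human genomes: ~{all_human_genomes_bytes:.1e} bytes")
print(f"digital vs. genomes: ~{digital_storage_2014_bytes / all_human_genomes_bytes:.0f}x")

# The quoted 2.5-year doubling time corresponds to roughly 32% compound annual
# growth, consistent with the cited 30-38% range:
print(f"implied CAGR: {2 ** (1 / 2.5) - 1:.0%}")
```

The computed ratio comes out near 450, which the article rounds to "500 times more information".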

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[46]

Some machines have acquired various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[46]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[47][improper synthesis?]

In his 2005 book, The Singularity is Near, Kurzweil suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[48] Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy: after the creation of synthetic viruses with specific genetic information, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[49]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious".[50] Singularitarianism has also been likened to a religion by John Horgan.[51]

In his obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."[21]

In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence. In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time.[3][52]
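Solomonoff's "infinity point" is just a convergent geometric series: the halving cycle times sum to a finite horizon. A sketch (the 4-then-2-then-1-year schedule is from the text above; the function itself is illustrative):

```python
# Solomonoff's "infinity point": if each self-improvement cycle takes half as long
# as the previous one (4 years, then 2, then 1, ...), infinitely many cycles fit
# inside a finite horizon, since 4 + 2 + 1 + 1/2 + ... = 8 years.

def elapsed_after_cycles(first_cycle_years: float, n_cycles: int) -> float:
    """Total time consumed by the first n_cycles speed-doubling cycles."""
    return sum(first_cycle_years / 2 ** k for k in range(n_cycles))

print(elapsed_after_cycles(4, 3))    # 4 + 2 + 1 = 7.0
print(elapsed_after_cycles(4, 60))   # approaches the 8-year "infinity point"
```

However many cycles occur, the total elapsed time never exceeds eight years, which is what makes the capability growth "infinite in finite time".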

In 1983, Vinge greatly popularized Good's intelligence explosion in a number of writings, first addressing the topic in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term "singularity" in a way specifically tied to the creation of intelligent machines,[53][54] writing:

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.

Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era",[4] spread widely on the internet and helped to popularize the idea.[55] This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[4]

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of the singularity.[24]

In 2005, Kurzweil published The Singularity is Near. Kurzweil's publicity campaign included an appearance on The Daily Show with Jon Stewart.[56]

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting.[9][57] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.[9]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges."[58] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during the northern-hemisphere summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[59][60][61]

President of the United States Barack Obama spoke about the singularity in his 2016 interview with Wired:[62]

One thing that we haven't talked about too much, and I just want to go back to, is we really have to think through the economic implications. Because most people aren't spending a lot of time right now worrying about singularity; they are worrying about "Well, is my job going to be replaced by a machine?"

The singularity is referenced in innumerable science-fiction works. In Greg Bear's sci-fi novel Blood Music (1983), a singularity occurs in a matter of hours.[4] David Brin's Lungfish (1987) proposes that AI be given humanoid bodies and raised as our children and taught the same way we were.[63] In William Gibson's 1984 novel Neuromancer, artificial intelligences capable of improving their own programs are strictly regulated by special "Turing police" to ensure they never exceed a certain level of intelligence, and the plot centers on the efforts of one such AI to circumvent their control.[63][64] In Greg Benford's 1998 Me/Days, it is legally required that an AI's memory be erased after every job.[63]

The entire plot of Wally Pfister's Transcendence centers on an unfolding singularity scenario.[65] The 2013 science fiction film Her follows a man's romantic relationship with a highly intelligent AI, who eventually learns how to improve herself and creates an intelligence explosion. The 1982 film Blade Runner, and the 2015 film Ex Machina, are two mildly dystopian visions about the impact of artificial general intelligence. Unlike Blade Runner, Her and Ex Machina both attempt to present "plausible" near-future scenarios that are intended to strike the audience as "not just possible, but highly probable".[66]


Posted in Singularity | Comments Off on Technological singularity – Wikipedia

Singularity University – Wikipedia

Posted: November 29, 2016 at 1:30 am

Singularity University (abbreviated SU) is a Silicon Valley think tank that offers educational programs and a business incubator.[2][3] According to its website, it focuses on scientific progress and "exponential" technologies.[4] It was founded in 2008 by Peter Diamandis and Ray Kurzweil at the NASA Research Park in California, United States.[5]

Singularity University initially offered an annual 10-week summer program and has since added conference series, classes, and a business incubator for startups and corporate teams.[6]

Instruction is offered in eleven areas.[7][8] Singularity University was created based on Ray Kurzweil's theory of the "technological singularity". Kurzweil believes that emerging technologies like nanotechnology and biotechnology will massively increase human intelligence over the next two decades, and fundamentally reshape the economy and society.[9] In 2012, the non-profit Singularity University began the process of converting to a benefit corporation, combining non-profit and for-profit aspects.[10] In 2013, the new for-profit corporation incorporated as "Singularity Education Group" and acquired the descriptive "Singularity University" as its trade name.[11]

In 2015, Singularity University and Yunus Social Business (YSB) announced a partnership at the World Economic Forum to use "accelerating technologies" and social entrepreneurship for global development in developing areas of the world where YSB is active.[12][13]

Singularity University also partners with organizations to sponsor annual "Global Impact Competitions", based on a theme and geography.[14][15]

Singularity University is overseen by a Board of Trustees.[16] Rob Nail, one of the organization's Associate Founders, was named CEO of Singularity University in October 2011.[17] Nicholas Haan became Director of "Global Grand Challenges" in 2013.

Corporate founding partners and sponsors include Google,[18] Nokia,[19] Autodesk,[20] IDEO, LinkedIn, ePlanet Capital,[21] the X Prize Foundation, the Kauffman Foundation and Genentech.[22]

Students at Singularity University's "Global Solutions Program" (GSP, formerly the "Graduate Studies Program") learn about new technologies, and work together over the summer to start companies.[23] In 2012, the Global Solutions Program class had 80 students, with an average age of 30.[24] In 2015, Google agreed to provide $1.5 million annually for two years to make the program free to participants.[25] The 80 students are selected from over 3,000 applicants each year.[23] A substantial portion of the GSP class comes from the winners of SU's sponsored "Global Impact Competitions".[25]

The Executive Program is targeted to corporate leaders, and focuses on how rapid changes in technology will impact businesses.[23]

In 2013, Singularity University announced a three-year partnership with Deloitte and XPRIZE called the "Innovation Partnership Program" (IPP). The program consists of a multi-year series of events where Fortune 500 executives partner with startups.[26] The program consists of an array of workshops on crowdsourcing, the advancement of "exponential" technologies, and how to innovate through incentivized competitions. Executives from 30 large companies, including Google, Shell, Qualcomm, The Hershey Company and Sprint, met for the first four-day executive summit.[26]

Singularity University has an "Exponential Regional Partnership" with SingularityU The Netherlands. This partnership program serves to help prepare European society and European companies for exponential technologies and give them the tools to use these technologies to meet Global Grand Challenges. The Netherlands was chosen as a starting point for international expansion because of the social, creative and innovative environment with rapid adoption rates for new technologies.[27] Water, food, healthcare and mobility, traditional strengths of the Dutch economy, are the main focal points.

SingularityU The Netherlands has its own local faculty. This faculty consists of European scientists and domain experts who have been selected because SingularityU considers them to be at the top of their respective fields.

In 2016, SingularityU The Netherlands organized a Global Impact Competition to find the most innovative Dutch entrepreneurs with ideas that leverage exponential technologies to enhance the lives of refugees.[28] Danny Wagemans, a 21-year-old nanophysics student, won first prize: participation in the 10-week Global Solutions Program. He demonstrated how clean water and energy can be derived from urine by combining a microbial fuel cell and a graphene filter in a water bottle.[29]

An Innovation Hub that allows people to experience exponential technologies has been started in Eindhoven as part of the Exponential Regional Partnership. This Innovation Hub was officially opened by Queen Máxima of the Netherlands, in the presence of numerous representatives of the corporate community, government and innovators. Eindhoven was chosen for this hub as it is the heart of the Brainport region, one of Europe's most important tech clusters.[30]

Singularity University hosts annual conferences focused on "exponentially accelerating technologies", and their impact on fields such as finance, medicine and manufacturing.[31] The conferences are produced with Deloitte,[31] as well as CNBC for the "Exponential Finance" conference.[32]

Singularity Hub is a science and tech media website published by Singularity University.[33] Singularity Hub was founded in 2008[33] with the mission of "providing news coverage of sci/tech breakthroughs that are rapidly changing human abilities, health, and society".[34] It was acquired by Singularity University in 2012, to make content produced by Singularity University more accessible.[34]

SU Labs is a seed accelerator run by Singularity University, targeting startups which aim to "change the lives of a billion people".[35]

The company "Made In Space", which has developed a 3D printer adapted to the constraints of space travel, was founded at Singularity University. The first prototype of Made in Space, the "Zero-G Printer", was developed with NASA and sent into space in September, 2014.[36]

In 2011, a Singularity University group launched Matternet, a startup that aims to harness drone technology to ship goods in developing countries that lack highway infrastructure. Other startups from SU are the peer-to-peer car-sharing service Getaround, and BioMine, which uses mining technologies to extract value from electronic waste.[7]

In 2013, Singularity University and the U.S. Fund for UNICEF announced a partnership to create technologies to improve the lives of vulnerable people in developing countries.[37][38]

Coordinates: 37°24′55″N 122°03′46″W / 37.415229°N 122.062650°W / 37.415229; -122.062650


Posted in Singularity | Comments Off on Singularity University – Wikipedia

Singularity | Singularity

Posted: October 31, 2016 at 2:54 am

Singularity enables users to have full control of their environment. This means that a non-privileged user can swap out the operating system on the host for one they control. So if the host system is running RHEL6 but your application runs in Ubuntu, you can create an Ubuntu image, install your applications into that image, copy the image to another host, and run your application on that host in its native Ubuntu environment!
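The workflow described above can be sketched with a Singularity definition file. Everything below (image name, base image, package choices) is an illustrative assumption rather than content from the original post, and follows the Singularity 2.x tooling current at the time:

```
# ubuntu.def -- a minimal Singularity definition file (illustrative sketch)
Bootstrap: docker
From: ubuntu:16.04

%post
    # runs inside the image at build time: install your application's dependencies
    apt-get update && apt-get install -y python

%runscript
    # executed when the image is run on any host, e.g. a RHEL6 machine
    exec python "$@"
```

With the 2.x tools, one would build the image on a machine where root is available (`singularity create ubuntu.img` followed by `sudo singularity bootstrap ubuntu.img ubuntu.def`), copy `ubuntu.img` to the target host, and run it there as an unprivileged user with `singularity run ubuntu.img`.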


Singularity also allows you to leverage the resources of whatever host you are on, including HPC interconnects, resource managers, file systems, GPUs, and other accelerators.



What is Singularity (the)? – Definition from WhatIs.com

Posted: at 2:54 am

The Singularity is the hypothetical future creation of superintelligent machines. Superintelligence is defined as a technologically-created cognitive capacity far beyond that possible for humans. Should the Singularity occur, technology will advance beyond our ability to foresee or control its outcomes and the world will be transformed beyond recognition by the application of superintelligence to humans and/or human problems, including poverty, disease and mortality.

Revolutions in genetics, nanotechnology and robotics (GNR) in the first half of the 21st century are expected to lay the foundation for the Singularity. According to Singularity theory, superintelligence will be developed by self-directed computers and will increase exponentially rather than incrementally.

Lev Grossman explains the prospective exponential gains in capacity enabled by superintelligent machines in an article in Time:

Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks...

Proposed mechanisms for adding superintelligence to humans include brain-computer interfaces, biological alteration of the brain, artificial intelligence (AI) brain implants and genetic engineering. Post-singularity, humanity and the world would be quite different. A human could potentially scan his consciousness into a computer and live eternally in virtual reality or as a sentient robot. Futurists such as Ray Kurzweil (author of The Singularity is Near) have predicted that in a post-Singularity world, humans would typically live much of the time in virtual reality -- which would be virtually indistinguishable from normal reality. Kurzweil predicts, based on mathematical calculations of exponential technological development, that the Singularity will come to pass by 2045.

Most arguments against the possibility of the Singularity involve doubts that computers can ever become intelligent in the human sense. The human brain and cognitive processes may simply be more complex than a computer could be. Furthermore, because the human brain is analog, with theoretically infinite values for any process, some believe that it cannot ever be replicated in a digital format. Some theorists also point out that the Singularity may not even be desirable from a human perspective because there is no reason to assume that a superintelligence would see value in, for example, the continued existence or well-being of humans.

Science-fiction writer Vernor Vinge first used the term "the Singularity" in this context in the 1980s, when he used it in reference to the British mathematician I. J. Good's concept of an "intelligence explosion" brought about by the advent of superintelligent machines. The term is borrowed from physics; in that context a singularity is a point where the known physical laws cease to apply.

See also: Asimov's Three Laws of Robotics, supercomputer, cyborg, gray goo, IBM's Watson supercomputer, neural networks, smart robot

Neil deGrasse Tyson vs. Ray Kurzweil on the Singularity:

This was last updated in February 2016


Downloads – Singularity Viewer

Posted: August 25, 2016 at 4:31 pm

Please pay attention to the following vital information before using Singularity Viewer.

Singularity Viewer only supports SSE2-compliant CPUs; virtually all computers manufactured in 2004 or later have one.

Warning: RLVa is enabled by default, which permits your attachments to take more extensive control of the avatar than default behavior of other viewers. Foreign, rezzed in-world, non-worn objects can only take control of your avatar if actively permitted by corresponding scripted attachments you wear. Please refer to documentation of your RLV-enabled attachments for details, if you have any.

Singularity Viewer 1.8.7(6861) Setup

Compatible with 64-bit versions of Windows Vista, Windows 7, Windows 8 and newer. A known limitation is the lack of support for the QuickTime plugin, which means that certain types of parcel media will not play. Streaming music and shared media (MoaP) are not affected and are fully functional.

Compatible with OS X 10.6 and newer, Intel CPU.

Make sure you have 32-bit versions of gstreamer-plugins-base, gstreamer-plugins-ugly and libuuid1 installed. The package has been built on Debian Squeeze and should work on a variety of distributions.

For voice to work, minimal support for running 32-bit binaries is necessary. libasound_module_pcm_pulse.so may be needed. Possible package names: lib32asound2-plugins (squeeze), alsa-plugins-pulseaudio.i686 (fedora), libasound2-plugins:i386 (debian/ubuntu).

If you receive "The following media plugin has failed: media_plugin_webkit" you may need to install the package containing libpangox-1.0.so.0 for your distribution (it could be pangox-compat).

To add all the skins, extract this package into the viewer install directory: usually C:\Program Files\Singularity on Windows, /Applications/Singularity.app/Contents/Resources/ on Mac, and wherever you extracted the tarball on Linux. Just merge the extracted skins directory with the existing skins directory; there should be no conflicts.


Amazon.com: Singularity [Online Game Code]: Video Games

Posted: at 4:31 pm

This review might read a little strange because I am going to list a lot of things wrong with the game, then tell you to buy it anyway. The long and short of it is that Raven Software, a group of developers who have been making FPS games for almost two decades, really liked Bioshock. A lot.

While Raven's previous titles from recent years were very old-school, Quake 4 even still having floating weapon pick-ups, Singularity plays much more like a modern console FPS game. Singularity is slow-paced, has various sections that concentrate more on light puzzle-solving than shooting and even long stretches just added for atmosphere. Speaking of atmosphere, the game has it in spades... great environments, cool effects, audio and written logs, films, music, the works. The game uses the Unreal Engine 3 as well, though it has brighter colors and more vibrant environments than many games that use the same engine. Does it sound like Bioshock yet? Okay, how about this: you get special powers that you upgrade and add new abilities to by spending an in-game currency. Warm yet? How about: the game is mostly linear, but has some side paths you can travel for more pick-ups.

What I am driving home here is that Raven might as well have called this Bioshock 3. The UE3 engine, the vibrant colors, the pick-ups, the powers, the slower pace, the light puzzles... everything is influenced by Irrational Games' underwater masterpiece. This makes it a very easy game to review though, because basically if you want another Bioshock then get this game. It is polished and does the Bioshock style well, and the storyline and location... a Russian experimental weapons base you visit long after a disaster of epic proportions, ruined, dark and wet just like Bioshock's Rapture... are well presented.


Singularity – Mass Effect Wiki – Wikia

Posted: at 4:31 pm

Mass Effect: This gravitational power sucks multiple enemies within a radius to a single area, leaving them floating helplessly and vulnerable to attack. It can also attract objects from the environment, such as crates or pieces of furniture; enemies will take damage if they collide with other solid objects in the Singularity field.

Several classes have access to the Singularity talent.

Note: This power travels in the direction of the cross-hair, arcing towards the target. Upon impact, it will create the Singularity. Liara's Singularity travels in a straight line, instantly creating a singularity at the targeted location.

Rank 4

Choose to evolve the power into one of the following:

Create a sphere of dark energy that traps and dangles enemies caught in its field.
Increase recharge speed by 25%.
Increase Singularity's hold duration by 20%. Increase impact radius by 20%.

Duration: Increase Singularity's hold duration by 30%. Additional enemies can be lifted before Singularity fades.
Radius: Increase impact radius by 25%.
Lift Damage: Inflict 20 damage per second to lifted targets.
Recharge Speed: Increase recharge speed by 30%.
Expand: Expand the Singularity field by 35% for 10 seconds.
Detonate: Detonate Singularity when the field dies to inflict 300 damage across 5 meters.

Create a sphere of dark energy that traps and dangles enemies caught in its field.
Increase recharge speed by 25%.
Increase damage by 20%.

Duration: Increase Singularity's hold duration by 150%.
Radius: Increase impact radius by 35%.
Lift Damage: Inflict 50 damage per second to lifted targets.
Recharge Speed: Increase recharge speed by 35%.
Damage: Increase damage by 50%.
Detonate: Detonate Singularity when the field dies to inflict 500 damage across 7 meters.


Singularity – RationalWiki

Posted: July 18, 2016 at 3:37 pm

There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.

A singularity is a sign that your model doesn't apply past a certain point, not infinity arriving in real life.

A singularity, as most commonly used, is a point at which expected rules break down. The term comes from mathematics, where a point on a curve that has a sudden break in slope is considered to have a slope of undefined or infinite value; such a point is known as a singularity.

The term has extended into other fields; the most notable use is in astrophysics, where a singularity is a point (usually, but perhaps not exclusively, at the center of a black hole) where the curvature of spacetime approaches infinity.

This article, however, is not about the mathematical or physics uses of the term, but rather the borrowing of it by various futurists. They define a technological singularity as the point beyond which we can know nothing about the world. So, of course, they then write at length on the world after that time.

It's intelligent design for the IQ 140 people. This proposition that we're heading to this point at which everything is going to be just unimaginably different - it's fundamentally, in my view, driven by a religious impulse. And all of the frantic arm-waving can't obscure that fact for me, no matter what numbers he marshals in favor of it. He's very good at having a lot of curves that point up to the right.

In transhumanist belief, the "technological singularity" refers to a hypothetical point beyond which human technology and civilization is no longer comprehensible to the current human mind. The theory of technological singularity states that at some point in time humans will invent a machine that through the use of artificial intelligence will be smarter than any human could ever be. This machine in turn will be capable of inventing new technologies that are even smarter. This event will trigger an exponential explosion of technological advances of which the outcome and effect on humankind is heavily debated by transhumanists and singularists.

Many proponents of the theory believe that the machines eventually will see no use for humans on Earth and simply wipe us out; their intelligence being far superior to ours, there would probably be nothing we could do about it. They also fear that the use of extremely intelligent machines to solve complex mathematical problems may lead to our extinction. The machine may theoretically respond to our question by turning all matter in our solar system or our galaxy into a giant calculator, thus destroying all of humankind.

Critics, however, believe that humans will never be able to invent a machine that will match human intelligence, let alone exceed it. They also attack the methodology that is used to "prove" the theory by suggesting that Moore's Law may be subject to the law of diminishing returns, or that other metrics used by proponents to measure progress are totally subjective and meaningless. Theorists like Theodore Modis argue that progress measured in metrics such as CPU clock speeds is decreasing, refuting Moore's Law.[3] (As of 2015, not only is Moore's Law beginning to stall, but Dennard scaling is also long dead; returns in raw compute power from additional transistors are diminishing; Amdahl's Law and Wirth's law must also be taken into account; and raw computing power simply doesn't scale linearly into real marginal utility. Even then, we still haven't taken into account the fundamental limitations of conventional computing architecture. Moore's law suddenly doesn't look to be the panacea to our problems now, does it?)
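The Amdahl's Law point above can be made concrete: the law bounds the overall speedup from adding processors by the fraction of a workload that must run serially. A minimal sketch (the 5% serial fraction is an illustrative assumption, not a figure from the text):

```python
def amdahl_speedup(serial_fraction: float, n_processors: int) -> float:
    """Upper bound on speedup when only the parallel part of a workload scales."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# Even with effectively unlimited processors, a 5% serial fraction
# caps the achievable speedup at 20x.
for n in (10, 100, 1000, 1_000_000):
    print(n, round(amdahl_speedup(0.05, n), 2))
```

This is why "just add more transistors" runs into diminishing returns: the serial fraction, not the hardware budget, sets the ceiling.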

Transhumanist thinkers see a chance of the technological singularity arriving on Earth within the twenty-first century, a concept that most[Who?] rationalists either consider a little too messianic in nature or ignore outright. Some of the wishful thinking may simply be the expression of a desire to avoid death, since the singularity is supposed to bring the technology to reverse human aging, or to upload human minds into computers. However, recent research, supported by singularitarian organizations including MIRI and the Future of Humanity Institute, does not support the hypothesis that near-term predictions of the singularity are motivated by a desire to avoid death, but instead provides some evidence that many optimistic predictions about the timing of a singularity are motivated by a desire to "gain credit for working on something that will be of relevance, but without any possibility that their prediction could be shown to be false within their current career".[4][5]

Don't bother quoting Ray Kurzweil to anyone who knows a damn thing about human cognition or, indeed, biology. He's a computer science genius who has difficulty in perceiving when he's well out of his area of expertise.[6]

Eliezer Yudkowsky identifies three major schools of thinking when it comes to the singularity.[7] While all share common ground in advancing intelligence and rapidly developing technology, they differ in how the singularity will occur and the evidence to support the position.

Under this school of thought, it is assumed that change and development of technology and human (or AI assisted) intelligence will accelerate at an exponential rate. So change a decade ago was much faster than change a century ago, which was faster than a millennium ago. While thinking in exponential terms can lead to predictions about the future and the developments that will occur, it does mean that past events are an unreliable source of evidence for making these predictions.

The "event horizon" school posits that the post-singularity world would be unpredictable. Here, the creation of a super-human artificial intelligence will change the world so dramatically that it would bear no resemblance to the current world, or even the wildest science fiction. This school of thought sees the singularity most like a single point event rather than a process indeed, it is this thesis that spawned the term "singularity." However, this view of the singularity does treat transhuman intelligence as some kind of magic.

This posits that the singularity is driven by a feedback cycle between intelligence-enhancing technology and intelligence itself. As Yudkowsky (who endorses this view) puts it: "What would humans with brain-computer interfaces do with their augmented intelligence? One good bet is that they'd design the next generation of brain-computer interfaces." When this feedback loop of technology and intelligence begins to increase rapidly, the singularity is upon us.

There is also a fourth singularity school which is much more popular than the other three: It's all a load of baloney![8] This position is not popular with high-tech billionaires.[9]

This is largely dependent on your definition of "singularity".

The intelligence explosion singularity is by far the most unlikely. According to present calculations, a hypothetical future supercomputer may well not be able to replicate a human brain in real time. We presently don't even understand how intelligence works, and there is no evidence that intelligence is self-iterative in this manner - indeed, it is not unlikely that improvements on intelligence are actually more difficult the smarter you become, meaning that each improvement on intelligence is increasingly difficult to execute. Indeed, how much smarter it is possible for something to even be than a human being is an open question. Energy requirements are another issue; humans can run off of Doritos and Mountain Dew, while supercomputers require vast amounts of energy to function. Unless such an intelligence can solve problems better than groups of humans, its greater intelligence may well not matter, as it may not be as efficient as groups of humans working together to solve problems.

Another major issue arises from the nature of intellectual development; if an artificial intelligence needs to be raised and trained, it may well take twenty years or more between generations of artificial intelligences to get further improvements. More intelligent animals seem to generally require longer to mature, which may put another limitation on any such "explosion".

Accelerating change is questionable; in real life, the rate of patents per capita actually peaked in the 20th century, with a minor decline since then, despite the fact that human beings have gotten more intelligent and gotten superior tools. As noted above, Moore's Law has been in decline, and outside of the realm of computers, the rate of increase in other things has not been exponential - airplanes and cars continue to improve, but they do not improve at the ridiculous rate of computers. It is likely that once computers hit physical limits of transistor density, their rate of improvement will fall off dramatically, and already even today, computers which are "good enough" continue to operate for many years, something which was unheard of in the 1990s, when old computers were rapidly and obviously obsoleted by new ones.

According to this point of view, the Singularity is a past event, and we live in a post-Singularity world.

The rate of advancement has actually been in decline in recent times, as patents per capita have gone down, and the rate of increase of technology has declined rather than risen, though the basal rate is higher than it was in centuries past. According to this point of view, the intelligence explosion and increasing rate of change already happened with computers, and now that everyone has handheld computing devices, the rate of increase is going to decline as we hit natural barriers in how much additional benefit we gain out of additional computing power. The densification of transistors on microchips has slowed by about a third, and the absolute limit to transistors is approaching - a true, physical barrier which cannot be bypassed or broken, and which would require an entirely different means of computing to create a still denser microchip.

From the point of view of travel, humans have gone from walking to sailing to railroads to highways to airplanes, but communication has now reached the point where a lot of travel is obsolete - the Internet is omnipresent and allows us to effectively communicate with people on any corner of the planet without travelling at all. From this point of view, there is no further point of advancement, because we're already at the point where we can be anywhere on the planet instantly for many purposes, and with improvements in automation, the amount of physical travel necessary for the average human being has declined over recent years. Instant global communication and the ability to communicate and do calculations from anywhere are a natural physical barrier, beyond which further advancement is less meaningful, as it is mostly just making things more convenient - the cost is already extremely low.

The prevalence of computers and communications devices has completely changed the world, as has the presence of cheap, high-speed transportation technology. The world of the 21st century is almost unrecognizable to people from the founding of the United States in the latter half of the 18th century, or even to people from the height of the industrial era at the turn of the 20th century.

Extraterrestrial technological singularities might become evident from acts of stellar/cosmic engineering. One such possibility, for example, would be the construction of Dyson Spheres, which would alter a star's electromagnetic spectrum in a way detectable from Earth. Both SETI and Fermilab have incorporated that possibility into their searches for alien life.[10][11]

A different view of the concept of singularity is explored in the science fiction book Dragon's Egg by Robert Lull Forward, in which an alien civilization on the surface of a neutron star, being observed by human space explorers, goes from Stone Age to technological singularity in the space of about an hour in human time, leaving behind a large quantity of encrypted data for the human explorers that are expected to take over a million years (for humanity) to even develop the technology to decrypt.

No signs of extraterrestrial civilizations have been found as of 2016.


Singularity Q&A | KurzweilAI

Posted: June 27, 2016 at 6:30 am

Originally published in 2005 with the launch of The Singularity Is Near.

Questions and Answers

So what is the Singularity?

Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge. Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity, full-immersion virtual reality incorporating all of the senses (like The Matrix), experience beaming (like Being John Malkovich), and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned.

And that's the Singularity?

No, that's just the precursor. Nonbiological intelligence will have access to its own design and will be able to improve itself in an increasingly rapid redesign cycle. We'll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it. That will mark the Singularity.

When will that occur?

I set the date for the Singularity (representing a profound and disruptive transformation in human capability) as 2045. The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.

Why is this called the Singularity?

The term "Singularity" in my book is comparable to the use of this term by the physics community. Just as we find it hard to see beyond the event horizon of a black hole, we also find it difficult to see beyond the event horizon of the historical Singularity. How can we, with our limited biological brains, imagine what our future civilization, with its intelligence multiplied trillions-fold, will be capable of thinking and doing? Nevertheless, just as we can draw conclusions about the nature of black holes through our conceptual thinking, despite never having actually been inside one, our thinking today is powerful enough to have meaningful insights into the implications of the Singularity. That's what I've tried to do in this book.

Okay, let's break this down. It seems a key part of your thesis is that we will be able to capture the intelligence of our brains in a machine.

Indeed.

So how are we going to achieve that?

We can break this down further into hardware and software requirements. In the book, I show how we need about 10 quadrillion (10^16) calculations per second (cps) to provide a functional equivalent to all the regions of the brain. Some estimates are lower than this by a factor of 100. Supercomputers are already at 100 trillion (10^14) cps, and will hit 10^16 cps around the end of this decade. Several supercomputers with 1 quadrillion cps are already on the drawing board, with two Japanese efforts targeting 10 quadrillion cps around the end of the decade. By 2020, 10 quadrillion cps will be available for around $1,000. Achieving the hardware requirement was controversial when my last book on this topic, The Age of Spiritual Machines, came out in 1999, but is now pretty much a mainstream view among informed observers. Now the controversy is focused on the algorithms.
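The hardware timetable here is a straightforward doubling-time extrapolation. As a rough check of the arithmetic (the one-year doubling time is Kurzweil's own price-performance assumption, not a measured constant), the sketch below computes how long an exponentially doubling quantity takes to grow a hundredfold, from the 10^14 cps supercomputers of the mid-2000s to the 10^16 cps brain estimate:

```python
import math

def years_to_reach(current: float, target: float, doubling_time_years: float = 1.0) -> float:
    """Years for an exponentially doubling quantity to grow from current to target."""
    return doubling_time_years * math.log2(target / current)

# A hundredfold gain at one doubling per year takes log2(100) doublings:
print(round(years_to_reach(1e14, 1e16), 1))  # about 6.6 years, i.e. "end of the decade"
```

Note that the conclusion is only as good as the assumed doubling time; stretch it to two years and the same gap takes about 13 years.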

And how will we recreate the algorithms of human intelligence?

To understand the principles of human intelligence we need to reverse-engineer the human brain. Here, progress is far greater than most people realize. The spatial and temporal (time) resolution of brain scanning is also progressing at an exponential rate, roughly doubling each year, like most everything else having to do with information. Just recently, scanning tools can see individual interneuronal connections, and watch them fire in real time. Already, we have mathematical models and simulations of a couple dozen regions of the brain, including the cerebellum, which comprises more than half the neurons in the brain. IBM is now creating a simulation of about 10,000 cortical neurons, including tens of millions of connections. The first version will simulate the electrical activity, and a future version will also simulate the relevant chemical activity. By the mid-2020s, it's conservative to conclude that we will have effective models for all of the brain.

So at that point we'll just copy a human brain into a supercomputer?

I would rather put it this way: At that point, we'll have a full understanding of the methods of the human brain. One benefit will be a deep understanding of ourselves, but the key implication is that it will expand the toolkit of techniques we can apply to create artificial intelligence. We will then be able to create nonbiological systems that match human intelligence in the ways that humans are now superior, for example, our pattern-recognition abilities. These superintelligent computers will be able to do things we are not able to do, such as share knowledge and skills at electronic speeds.

By 2030, a thousand dollars of computation will be about a thousand times more powerful than a human brain. Keep in mind also that computers will not be organized as discrete objects as they are today. There will be a web of computing deeply integrated into the environment, our bodies and brains.

You mentioned the AI tool kit. Hasn't AI failed to live up to its expectations?

There was a boom and bust cycle in AI during the 1980s, similar to what we saw recently in e-commerce and telecommunications. Such boom-bust cycles are often harbingers of true revolutions; recall the railroad boom and bust in the 19th century. But just as the Internet bust was not the end of the Internet, the so-called AI Winter was not the end of the story for AI either. There are hundreds of applications of narrow AI (machine intelligence that equals or exceeds human intelligence for specific tasks) now permeating our modern infrastructure. Every time you send an email or make a cell phone call, intelligent algorithms route the information. AI programs diagnose electrocardiograms with an accuracy rivaling doctors, evaluate medical images, fly and land airplanes, guide intelligent autonomous weapons, make automated investment decisions for over a trillion dollars of funds, and guide industrial processes. These were all research projects a couple of decades ago. If all the intelligent software in the world were to suddenly stop functioning, modern civilization would grind to a halt. Of course, our AI programs are not intelligent enough to organize such a conspiracy, at least not yet.

Why don't more people see these profound changes ahead?

Hopefully after they read my new book, they will. But the primary failure is the inability of many observers to think in exponential terms. Most long-range forecasts of what is technically feasible in future time periods dramatically underestimate the power of future developments because they are based on what I call the "intuitive linear" view of history rather than the "historical exponential" view. My models show that we are doubling the paradigm-shift rate every decade. Thus the 20th century was gradually speeding up to the rate of progress at the end of the century; its achievements, therefore, were equivalent to about twenty years of progress at the rate in 2000. We'll make another twenty years of progress in just fourteen years (by 2014), and then do the same again in only seven years. To express this another way, we won't experience one hundred years of technological advance in the 21st century; we will witness on the order of 20,000 years of progress (again, when measured by the rate of progress in 2000), or about 1,000 times greater than what was achieved in the 20th century.
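The "20,000 years of progress" figure follows from integrating a rate of progress that doubles every decade. A minimal sketch of that calculation (the smooth continuous-doubling model is an assumption; it reproduces the order of magnitude rather than Kurzweil's exact numbers):

```python
import math

def equivalent_years(t0: float, t1: float) -> float:
    """Integrate a progress rate r(t) = 2**(t/10), normalized to 1 at t = 0
    (the year 2000), over [t0, t1] in years relative to 2000. The result is
    progress measured in "year-2000 years"."""
    k = math.log(2) / 10.0  # continuous growth rate for decade doubling
    return (math.exp(k * t1) - math.exp(k * t0)) / k

# 20th century (1900-2000): about 14 year-2000 years (his "about twenty").
print(round(equivalent_years(-100, 0)))
# 21st century (2000-2100): about 14,759 (his "on the order of 20,000").
print(round(equivalent_years(0, 100)))
```

The century-over-century ratio comes out near 2^10 ≈ 1,000, which is exactly the "about 1,000 times greater" claim in the text.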

The exponential growth of information technologies is even greater: we're doubling the power of information technologies, as measured by price-performance, bandwidth, capacity and many other types of measures, about every year. That's a factor of a thousand in ten years, a million in twenty years, and a billion in thirty years. This goes far beyond Moore's law (the shrinking of transistors on an integrated circuit, allowing us to double the price-performance of electronics each year). Electronics is just one example of many. As another example, it took us 14 years to sequence HIV; we recently sequenced SARS in only 31 days.
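The thousand/million/billion progression is just powers of two. A quick check of the arithmetic, taking the one-doubling-per-year rate stated in the text as given:

```python
# Doubling every year: the cumulative growth factor after n years is 2**n.
# 2**10 ~ 10**3, 2**20 ~ 10**6, 2**30 ~ 10**9.
for years in (10, 20, 30):
    print(f"{years} years: {2**years:,}x")
```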

So this acceleration of information technologies applies to biology as well?

Absolutely. It's not just computer devices like cell phones and digital cameras that are accelerating in capability. Ultimately, everything of importance will be comprised essentially of information technology. With the advent of nanotechnology-based manufacturing in the 2020s, we'll be able to use inexpensive table-top devices to manufacture on demand just about anything from very inexpensive raw materials, using information processes that will rearrange matter and energy at the molecular level.

We'll meet our energy needs using nanotechnology-based solar panels that will capture the energy in 0.03 percent of the sunlight that falls on the Earth, which is all we need to meet our projected energy needs in 2030. We'll store the energy in highly distributed fuel cells.

I want to come back to both biology and nanotechnology, but how can you be so sure of these developments? Isn't technical progress on specific projects essentially unpredictable?

Predicting specific projects is indeed not feasible. But the result of the overall complex, chaotic evolutionary process of technological progress is predictable.

People intuitively assume that the current rate of progress will continue for future periods. Even for those who have been around long enough to experience how the pace of change increases over time, unexamined intuition leaves one with the impression that change occurs at the same rate that we have experienced most recently. From the mathematician's perspective, the reason for this is that an exponential curve looks like a straight line when examined for only a brief duration. As a result, even sophisticated commentators, when considering the future, typically use the current pace of change to determine their expectations in extrapolating progress over the next ten years or one hundred years. This is why I describe this way of looking at the future as the intuitive linear view. But a serious assessment of the history of technology reveals that technological change is exponential. Exponential growth is a feature of any evolutionary process, of which technology is a primary example.

As I show in the book, this has also been true of biological evolution. Indeed, technological evolution emerges from biological evolution. You can examine the data in different ways, on different timescales, and for a wide variety of technologies, ranging from electronic to biological, as well as for their implications, ranging from the amount of human knowledge to the size of the economy, and you get the same exponential, not linear, progression. I have over forty graphs in the book from a broad variety of fields that show the exponential nature of progress in information-based measures. For the price-performance of computing, this goes back over a century, well before Gordon Moore was even born.

Aren't there a lot of predictions of the future from the past that look a little ridiculous now?

Yes, any number of bad predictions from other futurists in earlier eras can be cited to support the notion that we cannot make reliable predictions. In general, these prognosticators were not using a methodology based on a sound theory of technology evolution. I say this not just looking backwards now. I've been making accurate forward-looking predictions for over twenty years based on these models.

But how can it be the case that we can reliably predict the overall progression of these technologies if we cannot even predict the outcome of a single project?

Predicting which company or product will succeed is indeed very difficult, if not impossible. The same difficulty occurs in predicting which technical design or standard will prevail. For example, how will the wireless-communication protocols WiMAX, CDMA, and 3G fare over the next several years? However, as I argue extensively in the book, we find remarkably precise and predictable exponential trends when assessing the overall effectiveness (as measured in a variety of ways) of information technologies. And as I mentioned above, information technology will ultimately underlie everything of value.

But how can that be?

We see examples in other areas of science of very smooth and reliable outcomes resulting from the interaction of a great many unpredictable events. Consider that predicting the path of a single molecule in a gas is essentially impossible, but predicting the properties of the entire gas, comprised of a great many chaotically interacting molecules, can be done very reliably through the laws of thermodynamics. Analogously, it is not possible to reliably predict the results of a specific project or company, but the overall capabilities of information technology, comprised of many chaotic activities, can nonetheless be dependably anticipated through what I call the law of accelerating returns.
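The gas analogy can be demonstrated numerically. A small sketch under an assumed toy model (each molecule's velocity drawn from a standard normal distribution): any single draw is essentially unpredictable, yet the ensemble average is reliably close to its expected value.

```python
import random

random.seed(0)  # fixed seed so the demonstration is repeatable

# Toy model: 100,000 "molecules", each with an unpredictable velocity.
velocities = [random.gauss(0, 1) for _ in range(100_000)]

one = velocities[0]                       # any individual value: unpredictable
mean = sum(velocities) / len(velocities)  # the aggregate: reliably near 0

print(f"one molecule: {one:+.3f}, ensemble mean: {mean:+.5f}")
```

The standard error of the mean shrinks as one over the square root of the sample size, which is the statistical fact the interview's aggregate-predictability argument leans on.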

What will the impact of these developments be?

Radical life extension, for one.

Sounds interesting, how does that work?

In the book, I talk about three great overlapping revolutions that go by the letters GNR, which stand for genetics, nanotechnology, and robotics. Each will provide a dramatic increase to human longevity, among other profound impacts. We're in the early stages of the genetics (also called biotechnology) revolution right now. Biotechnology is providing the means to actually change your genes: not just designer babies but designer baby boomers. We'll also be able to rejuvenate all of your body's tissues and organs by transforming your skin cells into youthful versions of every other cell type. Already, new drug development is precisely targeting key steps in the process of atherosclerosis (the cause of heart disease), cancerous tumor formation, and the metabolic processes underlying each major disease and aging process. The biotechnology revolution is already in its early stages and will reach its peak in the second decade of this century, at which point we'll be able to overcome most major diseases and dramatically slow down the aging process.

That will bring us to the nanotechnology revolution, which will achieve maturity in the 2020s. With nanotechnology, we will be able to go beyond the limits of biology, and replace your current human body version 1.0 with a dramatically upgraded version 2.0, providing radical life extension.

And how does that work?

The killer app of nanotechnology is nanobots, which are blood-cell sized robots that can travel in the bloodstream destroying pathogens, removing debris, correcting DNA errors, and reversing aging processes.

Human body version 2.0?

We're already in the early stages of augmenting and replacing each of our organs, even portions of our brains with neural implants, the most recent versions of which allow patients to download new software to their neural implants from outside their bodies. In the book, I describe how each of our organs will ultimately be replaced. For example, nanobots could deliver to our bloodstream an optimal set of all the nutrients, hormones, and other substances we need, as well as remove toxins and waste products. The gastrointestinal tract could be reserved for culinary pleasures rather than the tedious biological function of providing nutrients. After all, we've already in some ways separated the communication and pleasurable aspects of sex from its biological function.

And the third revolution?

The robotics revolution, which really refers to strong AI, that is, artificial intelligence at the human level, which we talked about earlier. We'll have both the hardware and software to recreate human intelligence by the end of the 2020s. We'll be able to improve these methods and harness the speed, memory capabilities, and knowledge-sharing ability of machines.

We'll ultimately be able to scan all the salient details of our brains from inside, using billions of nanobots in the capillaries. We can then back up the information. Using nanotechnology-based manufacturing, we could recreate your brain, or better yet reinstantiate it in a more capable computing substrate.

Which means?

Our biological brains use chemical signaling, which transmits information at only a few hundred feet per second. Electronics is already millions of times faster than this. In the book, I show how one cubic inch of nanotube circuitry would be about one hundred million times more powerful than the human brain. So we'll have more powerful means of instantiating our intelligence than the extremely slow speeds of our interneuronal connections.
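The "millions of times faster" claim is a simple ratio of signal speeds. A rough check under two assumed figures (neither stated precisely in the text): neural signaling at roughly 100 m/s (a few hundred feet per second) and electronic signal propagation at roughly two-thirds the speed of light in a conductor.

```python
# Assumed order-of-magnitude figures, not measurements from the book:
NEURAL_M_PER_S = 100.0      # chemical/neural signaling, ~a few hundred ft/s
ELECTRONIC_M_PER_S = 2e8    # ~2/3 of c, typical signal speed in a conductor

ratio = ELECTRONIC_M_PER_S / NEURAL_M_PER_S
print(f"electronics ~{ratio:.0e}x faster")  # on the order of millions
```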

So we'll just replace our biological brains with circuitry?

I see this starting with nanobots in our bodies and brains. The nanobots will keep us healthy, provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the Internet, and otherwise greatly expand human intelligence. But keep in mind that nonbiological intelligence is doubling in capability each year, whereas our biological intelligence is essentially fixed in capacity. As we get to the 2030s, the nonbiological portion of our intelligence will predominate.

The closest life extension technology, however, is biotechnology, isn't that right?

There's certainly overlap in the G, N, and R revolutions, but that's essentially correct.

So tell me more about how genetics or biotechnology works.

As we are learning about the information processes underlying biology, we are devising ways of mastering them to overcome disease and aging and extend human potential. One powerful approach is to start with biology's information backbone: the genome. With gene technologies, we're now on the verge of being able to control how genes express themselves. We now have a powerful new tool called RNA interference (RNAi), which is capable of turning specific genes off. It blocks the messenger RNA of specific genes, preventing them from creating proteins. Since viral diseases, cancer, and many other diseases use gene expression at some crucial point in their life cycle, this promises to be a breakthrough technology. One gene we'd like to turn off is the fat insulin receptor gene, which tells the fat cells to hold on to every calorie. When that gene was blocked in mice, those mice ate a lot but remained thin and healthy, and actually lived 20 percent longer.

New means of adding new genes, called gene therapy, are also emerging that have overcome earlier problems with achieving precise placement of the new genetic information. One company I'm involved with, United Therapeutics, cured pulmonary hypertension in animals using a new form of gene therapy, and it has now been approved for human trials.

So we're going to essentially reprogram our DNA.

That's a good way to put it, but that's only one broad approach. Another important line of attack is to regrow our own cells, tissues, and even whole organs, and introduce them into our bodies without surgery. One major benefit of this therapeutic cloning technique is that we will be able to create these new tissues and organs from versions of our cells that have also been made younger (the emerging field of rejuvenation medicine). For example, we will be able to create new heart cells from your skin cells and introduce them into your system through the bloodstream. Over time, your heart cells get replaced with these new cells, and the result is a rejuvenated young heart with your own DNA.

Drug discovery was once a matter of finding substances that produced some beneficial effect without excessive side effects. This process was similar to early humans' tool discovery, which was limited to simply finding rocks and natural implements that could be used for helpful purposes. Today, we are learning the precise biochemical pathways that underlie both disease and aging processes, and are able to design drugs to carry out precise missions at the molecular level. The scope and scale of these efforts are vast.

But perfecting our biology will only get us so far. The reality is that biology will never be able to match what we will be capable of engineering, now that we are gaining a deep understanding of biologys principles of operation.

Isn't nature optimal?

Not at all. Our interneuronal connections compute at about 200 transactions per second, at least a million times slower than electronics. As another example, a nanotechnology theorist, Rob Freitas, has a conceptual design for nanobots that replace our red blood cells. A conservative analysis shows that if you replaced 10 percent of your red blood cells with Freitas's respirocytes, you could sit at the bottom of a pool for four hours without taking a breath.

If people stop dying, isn't that going to lead to overpopulation?

A common mistake that people make when considering the future is to envision a major change to today's world, such as radical life extension, as if nothing else were going to change. The GNR revolutions will result in other transformations that address this issue. For example, nanotechnology will enable us to create virtually any physical product from information and very inexpensive raw materials, leading to radical wealth creation. We'll have the means to meet the material needs of any conceivable size population of biological humans. Nanotechnology will also provide the means of cleaning up environmental damage from earlier stages of industrialization.

So we'll overcome disease, pollution, and poverty. Sounds like a utopian vision.

It's true that the dramatic scale of the technologies of the next couple of decades will enable human civilization to overcome problems that we have struggled with for eons. But these developments are not without their dangers. Technology is a double-edged sword; we don't have to look past the 20th century to see the intertwined promise and peril of technology.

What sort of perils?

G, N, and R each have their downsides. The existential threat from genetic technologies is already here: the same technology that will soon make major strides against cancer, heart disease, and other diseases could also be employed by a bioterrorist to create a bioengineered virus that combines ease of transmission, deadliness, and stealthiness (that is, a long incubation period). The tools and knowledge to do this are far more widespread than the tools and knowledge to create an atomic bomb, and the impact could be far worse.

So maybe we shouldn't go down this road.

It's a little late for that. But the idea of relinquishing new technologies such as biotechnology and nanotechnology is already being advocated. I argue in the book that this would be the wrong strategy. Besides depriving human society of the profound benefits of these technologies, such a strategy would actually make the dangers worse by driving development underground, where responsible scientists would not have easy access to the tools needed to defend us.

So how do we protect ourselves?

I discuss strategies for protecting against dangers from abuse or accidental misuse of these very powerful technologies in chapter 8. The overall message is that we need to give a higher priority to preparing protective strategies and systems. We need to put a few more stones on the defense side of the scale. I've given testimony to Congress on a specific proposal for a Manhattan-style project to create a rapid-response system that could protect society from a new virulent biological virus. One strategy would be to use RNAi, which has been shown to be effective against viral diseases. We would set up a system that could quickly sequence a new virus, prepare an RNA-interference medication, and rapidly gear up production. We have the knowledge to create such a system, but we have not done so. We need to have something like this in place before it's needed.

Ultimately, however, nanotechnology will provide a completely effective defense against biological viruses.

But doesn't nanotechnology have its own self-replicating danger?

Yes, but that potential won't exist for a couple more decades. The existential threat from engineered biological viruses exists right now.

Okay, but how will we defend against self-replicating nanotechnology?

There are already proposals for ethical standards for nanotechnology that are based on the Asilomar conference standards that have worked well thus far in biotechnology. These standards will be effective against unintentional dangers. For example, we do not need to provide self-replication to accomplish nanotechnology manufacturing.

But what about intentional abuse, as in terrorism?

We'll need to create a nanotechnology immune system: good nanobots that can protect us from the bad ones.

Blue goo to protect us from the gray goo!

Yes, well put. And ultimately we'll need the nanobots comprising the immune system to be self-replicating. I've debated this particular point with a number of other theorists, but I show in the book why the nanobot immune system we put in place will need the ability to self-replicate. That's basically the same lesson that biological evolution learned.

Ultimately, however, strong AI will provide a completely effective defense against self-replicating nanotechnology.

Okay, what's going to protect us against a pathological AI?

Yes, well, that would have to be a yet more intelligent AI.

This is starting to sound like that story about the universe being on the back of a turtle, and that turtle standing on the back of another turtle, and so on all the way down. So what if this more intelligent AI is unfriendly? Another even smarter AI?

History teaches us that the more intelligent civilization, the one with the most advanced technology, prevails. But I do have an overall strategy for dealing with unfriendly AI, which I discuss in chapter 8.

Okay, so I'll have to read the book for that one. But aren't there limits to exponential growth? You know the story about rabbits in Australia; they didn't keep growing exponentially forever.

There are limits to the exponential growth inherent in each paradigm. Moore's law was not the first paradigm to bring exponential growth to computing, but rather the fifth. In the 1950s they were shrinking vacuum tubes to keep the exponential growth going, and then that paradigm hit a wall. But the exponential growth of computing didn't stop. It kept going, with the new paradigm of transistors taking over. Each time we can see the end of the road for a paradigm, it creates research pressure to create the next one. That's happening now with Moore's law, even though we are still about fifteen years away from the end of our ability to shrink transistors on a flat integrated circuit. We're making dramatic progress in creating the sixth paradigm, which is three-dimensional molecular computing.

But isn't there an overall limit to our ability to expand the power of computation?

Yes, I discuss these limits in the book. The ultimate 2-pound computer could provide 10^42 cps, which will be about 10 quadrillion (10^16) times more powerful than all human brains put together today. And that's if we restrict the computer to staying at a cold temperature. If we allow it to get hot, we could improve that by a factor of another 100 million. And, of course, we'll be devoting more than two pounds of matter to computing. Ultimately, we'll use a significant portion of the matter and energy in our vicinity. So, yes, there are limits, but they're not very limiting.
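A quick check of the arithmetic, using the figure from the text (an ideal 2-pound computer at 10^42 cps) plus two labeled assumptions consistent with the "10 quadrillion times" claim: roughly 10^16 cps per human brain and a population on the order of 10^10.

```python
# 1e42 cps comes from the text; the other two figures are assumptions
# chosen to match the "10 quadrillion times all human brains" claim.
ULTIMATE_COMPUTER_CPS = 1e42
CPS_PER_BRAIN = 1e16   # assumed per-brain estimate
POPULATION = 1e10      # assumed order-of-magnitude population

all_brains_cps = CPS_PER_BRAIN * POPULATION          # ~1e26 cps total
advantage = ULTIMATE_COMPUTER_CPS / all_brains_cps   # ~1e16, "10 quadrillion"
print(f"advantage: ~{advantage:.0e}x all human brains combined")
```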

And when we saturate the ability of the matter and energy in our solar system to support intelligent processes, what happens then?

Then we'll expand to the rest of the Universe.

Which will take a long time, I presume.

Well, that depends on whether we can use wormholes to get to other places in the Universe quickly, or otherwise circumvent the speed of light. If wormholes are feasible, and analyses show they are consistent with general relativity, we could saturate the universe with our intelligence within a couple of centuries. I discuss the prospects for this in chapter 6. But regardless of speculation on wormholes, we'll get to the limits of computing in our solar system within this century. At that point, we'll have expanded the powers of our intelligence by trillions of trillions.

Getting back to life extension, isn't it natural to age, to die?

Other natural things include malaria, Ebola, appendicitis, and tsunamis. Many natural things are worth changing. Aging may be natural, but I don't see anything positive in losing my mental agility, sensory acuity, physical limberness, sexual desire, or any other human ability.

In my view, death is a tragedy. It's a tremendous loss of personality, skills, knowledge, relationships. We've rationalized it as a good thing because that's really been the only alternative we've had. But disease, aging, and death are problems we are now in a position to overcome.

Wait, you said that the golden era of biotechnology was still a decade away. We don't have radical life extension today, do we?

Read more from the original source:

Singularity Q&A | KurzweilAI


Singularity – Microsoft Research

Posted: June 22, 2016 at 11:42 pm

OS and tools for building dependable systems. The Singularity research codebase and design evolved to become the Midori advanced-development OS project. Though it never reached commercial release, Midori at one time powered all of Microsoft's natural-language search service for the West Coast and Asia.

"...it is impossible to predict how a singularity will affect objects in its causal future." - NCSA Cyberia Glossary

The Singularity Research Development Kit (RDK) 2.0 is now available for academic, non-commercial use. You can download it from CodePlex, Microsoft's open-source project hosting website, here.

Our recent article in Operating Systems Review, Singularity: Rethinking the Software Stack, is a concise introduction to the Singularity project. It summarizes research in the current Singularity releases and highlights ongoing Singularity research.

Singularity is a research project focused on the construction of dependable systems through innovation in the areas of systems, languages, and tools. We are building a research operating system prototype (called Singularity), extending programming languages, and developing new techniques and tools for specifying and verifying program behavior.

Advances in languages, compilers, and tools open the possibility of significantly improving software. For example, Singularity uses type-safe languages and an abstract instruction set to enable what we call Software Isolated Processes (SIPs). SIPs provide the strong isolation guarantees of OS processes (isolated object space, separate GCs, separate runtimes) without the overhead of hardware-enforced protection domains. In the current Singularity prototype, SIPs are extremely cheap; they run in ring 0 in the kernel's address space.

Singularity uses these advances to build more reliable systems and applications. For example, because SIPs are so cheap to create and enforce, Singularity runs each program, device driver, or system extension in its own SIP. SIPs are not allowed to share memory or modify their own code. As a result, we can make strong reliability guarantees about the code running in a SIP. We can verify much broader properties about a SIP at compile or install time than can be done for code running in traditional OS processes. Broader application of static verification is critical to predicting system behavior and providing users with strong guarantees about reliability.
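As a loose analogy only (Singularity itself enforces isolation at compile time through type-safe languages, not through OS-level processes), the SIP communication model resembles processes that share no memory and interact solely by exchanging messages over channels. A minimal Python sketch of that pattern, using hypothetical names:

```python
from multiprocessing import Process, Queue

def driver(inbox: Queue, outbox: Queue):
    """A 'device driver' isolated in its own process: it shares no memory
    with its client and interacts only via explicit message channels."""
    request = inbox.get()
    outbox.put(f"handled: {request}")

if __name__ == "__main__":
    to_driver, from_driver = Queue(), Queue()
    p = Process(target=driver, args=(to_driver, from_driver))
    p.start()
    to_driver.put("read block 7")      # send a request over the channel
    print(from_driver.get())           # handled: read block 7
    p.join()
```

The contrast with Singularity is the cost: here isolation is enforced by hardware page tables and separate address spaces, whereas SIPs get equivalent guarantees from language-level type safety, cheaply enough to give every driver and extension its own SIP.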

View post:

Singularity - Microsoft Research

