
Category Archives: Superintelligence

The AI Revolution: The Road to Superintelligence (PDF)

Posted: June 3, 2017 at 12:41 pm


See more here:

The AI Revolution: The Road to Superintelligence (PDF)

Posted in Superintelligence | Comments Off on The AI Revolution: The Road to Superintelligence (PDF)

Today’s Kids Could Live Through Machine Superintelligence, Martian Colonies, and a Nuclear Attack – Motherboard

Posted: May 28, 2017 at 7:56 am

It has become a cliche to declare that the future is full of both "great promise and great peril." Nonetheless, this aphorism expresses an important fact about the Janus-faced nature of our increasingly powerful technologies. If humanity realizes the best possible future, we could quite possibly usher in an era of unprecedented human flourishing, happiness, and value. But if the great experiment of civilization fails, our species could meet the same fate as the dinosaurs.

I find it helpful to think about what a child born today could plausibly expect to witness in her or his lifetime. Since the rate of technological change appears to be unfolding according to Ray Kurzweil's Law of Accelerating Returns, this imaginative activity can actually yield some fascinating insights about our evolving human condition, which may soon become a posthuman condition as "person-engineering technologies" turn us into increasingly artificial cyborgs.

In a billion years or so, the sun will sterilize the planet as it turns into a red giant, eventually swallowing our planet whole in, according to one study, 7.59 billion years. If we want to survive beyond this point, we will need to find a new planetary spaceship to call home. But even more immediately, evolutionary biology tells us that the more geographically spread out a species is, the greater its probability of survival. Elon Musk claims that "there is a strong humanitarian argument for making life multi-planetary in order to safeguard the existence of humanity in the event that something catastrophic were to happen." Similarly, Stephen Hawking, who recently booked a trip to space on Richard Branson's Virgin Galactic spaceship, believes that humanity has about 100 years to colonize space or face extinction.

There are good reasons to believe that this will happen in the coming decades. Musk has stated that SpaceX will build a city on the fourth rock from the sun "in our lifetimes." And NASA has announced that it "is developing the capabilities needed to send humans to an asteroid by 2025 and Mars in the 2030s." NASA is even planning to "send a robotic mission to capture and redirect an asteroid to orbit the moon. Astronauts aboard the Orion spacecraft will explore the asteroid in the 2020s, returning to Earth with samples."

According to a Pew study, the global population will reach approximately 9.3 billion by 2050. To put this in perspective, there were only 6 billion people alive in 2000, and roughly 200 million living when Jesus was (supposedly) born. This explosion has led to numerous Malthusian predictions of civilizational collapse. Fortunately, the Green Revolution of the mid-twentieth century obviated such a disaster, although it also introduced new and significant environmental externalities that humanity has yet to overcome.

It appears that "in the next 50 years we will need to produce as much food as has been consumed over our entire human history," to quote Megan Clark, who heads Australia's Commonwealth Scientific and Industrial Research Organisation. She said this "means in the working life of my children, more grain than ever produced since the Egyptians, more fish than eaten to date, more milk than from all the cows that have ever been milked on every frosty morning humankind has ever known." Although technology has enabled the world to effectively double its food output between 1960 and 2000, we face unprecedented challenges such as climate change and the Anthropocene extinction.

Theoretical physicist Michio Kaku has claimed that human civilization could transition to a Type 1 civilization on the Kardashev scale within the next 100 years. A Type 1 civilization can harness virtually all of the energy available to its planet (including all the electromagnetic radiation sent from its sun), perhaps even controlling the weather, earthquakes, and volcanoes. The Oxford philosopher Nick Bostrom tacitly equates a Type 1 civilization with the posthuman condition of "technological maturity," which he describes as "the attainment of capabilities affording a level of economic productivity and control over nature close to the maximum that could feasibly be achieved."
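Carl Sagan's interpolation formula makes the Type 0/Type 1 talk concrete: K = (log10(P) - 6) / 10, where P is the power a civilization commands in watts. A minimal sketch (the roughly 2e13 W figure for present-day humanity is an outside estimate used for illustration, not a number from the article):

```python
import math

def kardashev_type(power_watts: float) -> float:
    """Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

# Illustrative estimates (assumptions, not figures from the article):
print(f"Humanity today (~2e13 W): Type {kardashev_type(2e13):.2f}")   # ~0.73
print(f"Type 1 threshold (1e16 W): Type {kardashev_type(1e16):.2f}")  # 1.00
```

On this continuous version of the scale, present-day humanity sits at roughly Type 0.7, which is what Kaku means by calling us a Type 0 civilization below.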

"The danger period is now because we still have the savagery."

Right now, human civilization would qualify as a Type 0, although emerging "world-engineering technologies" could change this in the coming decades, as they enable our species to manipulate and rearrange the physical world in increasingly significant ways. But Kaku worries that the transition from a Type 0 to a Type 1 civilization carries immense risks to our survival. As he puts it, "the danger period is now because we still have the savagery. We still have all the passions. We have all the sectarian fundamentalist ideas circulating around. But we also have nuclear weapons. We have chemical, biological weapons capable of wiping out life on Earth." In other words, as I have written, archaic beliefs about how the world ought to be are on a collision course with neoteric technologies that could turn the entire planet into one huge graveyard.

Defeating aging is a primary goal of many transhumanists, who see aging as an ongoing horror show that kills some 55.3 million people each year. It is, transhumanists say, "deathist" to argue that halting senescence through technological interventions is wrong: dying from old age should be no more involuntary than dying from childhood leukemia.

The topic of anti-aging technology has gained a great deal of attention over the past few decades due to the work of Aubrey de Grey, who cofounded the Peter Thiel-funded Methuselah Foundation. According to the Harvard geneticist George Church, scientists could effectively reverse aging within, wait for it, the next decade or so. This means actually making older people young again, not just stabilizing the healthy physiological state of people in their mid-20s. As Church puts it, the ultimate goal isn't "about stalling or curing, it's about reversing." One possible way of achieving this end involves the breakthrough gene-editing technology CRISPR/Cas9, as Oliver Medvedik discusses in a 2016 TED talk.

According to a 2012 article in Nature, we could be approaching a sudden, irreversible, catastrophic collapse of the global ecosystem that unfolds on timescales of a decade or so. It would usher in a new biospheric normal that could make our continued existence impossible. In fact, studies confirm that our industrial society has initiated only the sixth mass extinction event in the last 3.8 billion years, and other reports find that the global population of wild vertebrates declined between 1970 and 2012 by a staggering 58 percent. Among the causes of this global disaster in slow motion are industrial pollution, ecosystem fragmentation, habitat destruction, overexploitation, overpopulation, and of course climate change.

Deforestation. Image: Dikshajhingan/Wikimedia

Yet another major study claims that there are nine "planetary boundaries" that demarcate a "safe operating space for humanity." As the authors of this influential paper write, "anthropogenic pressures on the Earth System have reached a scale where abrupt global environmental change can no longer be excluded... Transgressing one or more planetary boundaries may be deleterious or even catastrophic" to our systems. Unfortunately, humanity has already crossed three of these do-not-cross boundaries, namely climate change, the rate of biodiversity loss (i.e., the sixth mass extinction), and the global nitrogen cycle. As Fredric Jameson has famously said, "it has become easier to imagine the end of the world than the end of capitalism."

The only time nuclear weapons were used in conflict was at the end of World War II, when the US dropped two atomic bombs on the unsuspecting inhabitants of the Japanese archipelago. But there are strong reasons for believing that another bomb will be used in the coming years, decades, or century. First, consider that the US appears to have entered into a "new Cold War" with Russia, as the Russian Prime Minister Dmitry Medvedev puts it. Second, North Korea continues to both develop its nuclear capabilities and threaten to use nuclear weapons against its perceived enemies. Third, when Donald Trump was elected US president, the venerable Bulletin of the Atomic Scientists moved the Doomsday Clock's minute hand forward by 30 seconds, in part because of his "disturbing comments about the use and proliferation of nuclear weapons."

And fourth, terrorists are more eager than ever to acquire and detonate a nuclear weapon somewhere in the Western world. In a recent issue of its propaganda magazine, the Islamic State fantasized about acquiring a nuclear weapon from Pakistan and exploding it in a major urban center of North America. According to Martin Hellman, the Stanford cryptologist who founded NuclearRisk.org, the probability of a nuclear bomb going off is roughly 1 percent every year from the present, meaning that "in 10 years the likelihood is almost 10 percent, and in 50 years 40 percent if there is no substantial change." As the leading public intellectual Lawrence Krauss told me in a previous interview for Motherboard, unless humanity destroys every last nuclear weapon on the planet, the use of a nuclear weapon is more or less inevitable.
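Hellman's figures are just compound probability: with an independent 1 percent chance each year, the chance of at least one detonation over n years is 1 - 0.99^n. A quick sketch verifying the quoted numbers:

```python
def cumulative_risk(annual_prob: float, years: int) -> float:
    """Chance of at least one event over `years`, assuming independent years."""
    return 1 - (1 - annual_prob) ** years

for n in (10, 50):
    print(f"{n} years: {cumulative_risk(0.01, n):.1%}")
# 10 years: 9.6%  -- Hellman's "almost 10 percent"
# 50 years: 39.5% -- roughly the quoted 40 percent
```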

Homo sapiens is currently the most intelligent species on the planet, where "intelligence" is defined as the mental capacity to attain suitable means to achieve one's ends. But this could change if scientists successfully create a machine-based general intelligence that exceeds human-level intelligence. As scholars have observed for decades, this would be the most significant event in human history, since our collective fate would then depend more on the superintelligence than on ourselves, just as the fate of the mountain gorilla now depends more on human actions than on its own. Intelligence confers power, so a greater-than-human-level intelligence would have greater-than-human-level power over the future of our species, and over the biosphere more generally.

This would be the most significant event in human history.

According to one survey, nearly every AI expert who was polled agrees that one or more machine superintelligences will join our species on planet Earth by the end of this century. Although the field of AI has a poor track record of predicting the future (just consider Marvin Minsky's claim in 1967 that "Within a generation... the problem of creating artificial intelligence will substantially be solved"), recent breakthroughs in AI suggest that real progress is being made, and that this progress could put us on a trajectory toward machine superintelligence.

In their 1955 manifesto, Bertrand Russell and Albert Einstein famously wrote:

Many warnings have been uttered by eminent men of science and by authorities in military strategy. None of them will say that the worst results are certain. What they do say is that these results are possible, and no one can be sure that they will not be realized... We have found that the men who know most are the most gloomy.

This more or less describes the situation with respect to existential risk scholars, where "existential risks" are worst-case scenarios that would, as two researchers put it, cause the permanent "loss of a large fraction of expected value." Those who actually study these risks assign shockingly high probabilities to an existential catastrophe in the foreseeable future.

An informal 2008 survey of scholars at an Oxford University conference suggests a 19 percent chance of human extinction before 2100. And the world-renowned cosmologist Lord Martin Rees writes in his 2003 book Our Final Hour that civilization has a mere 50-50 chance of surviving the present century intact. Other scholars claim that humans will probably be extinct by 2110 (Frank Fenner) and that the likelihood of an existential catastrophe is at least 25 percent (Bostrom). Similarly, the Canadian biologist Neil Dawe says he wouldn't be surprised if the generation after him witnesses the extinction of humanity. Even Stephen Hawking seems to agree with these doomsday estimates, as suggested above, by arguing that humanity will go extinct unless we colonize space within the next 100 years.

So, the "promise and peril" cliche should weigh heavily on people's mindsespecially when they head to the voting booth. If humanity can get its act together, the future could be unprecedentedly good; but if tribalism, ignorance, and myopic thinking continue to dominate, the last generation may already have been born.

Phil Torres is the founding director of the X-Risks Institute. He has written about apocalyptic terrorism, emerging technologies, and global catastrophic risks. His forthcoming book is called Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks.

Here is the original post:

Today's Kids Could Live Through Machine Superintelligence, Martian Colonies, and a Nuclear Attack - Motherboard

Posted in Superintelligence | Comments Off on Today’s Kids Could Live Through Machine Superintelligence, Martian Colonies, and a Nuclear Attack – Motherboard

Summoning the Demon: Why superintelligence is humanity’s biggest threat – GeekWire

Posted: May 26, 2017 at 4:20 am

[Editor's Note: This guest commentary is by Richard A. Clarke and R.P. Eddy, authors of the new book, Warnings: Finding Cassandras To Stop Catastrophes.]

Artificial intelligence is a broad term, maybe overly broad. It simply means a computer program that can perform tasks that would otherwise require human action. Such tasks include decision making, language translation, and data analysis. When most people think of AI, they are really thinking of what computer scientists call weak artificial intelligence: the type of AI that runs everyday devices like computers, smartphones, even cars. It is any computer program that can analyze various inputs, then select and execute from a set of preprogrammed responses. Today, weak AI performs simple (or narrow) tasks: commanding robots to stack boxes, trading stocks autonomously, calibrating car engines, or running smartphones' voice-command interfaces.

Machine learning is a type of computer programming that helps make AI possible. Machine-learning programs have the ability to learn without being explicitly programmed, optimizing themselves to most efficiently meet a set of pre-established goals. Machine learning is still in its infancy, but as it matures, its capacity for self-improvement sets AI apart from any other invention in history.
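To make "learning without being explicitly programmed" concrete, here is a toy sketch of our own (not the authors' example): the program receives only data and an objective, and adjusts its own parameters by gradient descent rather than following hand-written rules.

```python
# A program given only data and an objective, with no hand-coded rule
# for the answer: it fits the line y = w*x + b by gradient descent.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # made-up (x, y) observations
w, b = 0.0, 0.0   # model parameters, initially arbitrary
lr = 0.01         # learning rate

for _ in range(5000):
    # gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w  # the "learning" step: parameters adjust themselves
    b -= lr * grad_b

print(f"learned: y ~= {w:.2f}x + {b:.2f}")  # approximates the trend in the data
```

Nothing in the code states what the line should be; the slope and intercept emerge from the optimization loop, which is the sense in which the program "learns" its behavior rather than having it programmed in.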

The compounding effect of computers teaching themselves leads us to superintelligence. Superintelligence is an artificial intelligence that will be smarter than its human creators. Superintelligence does not yet exist, but when it does, some believe it could solve many of humanity's greatest challenges: aging, energy, and food shortages, perhaps even climate change. Self-perpetuating and untiring, this advanced AI would continue improving at a remarkably fast rate and eventually surpass the level of complexity humans can understand. While this promises great potential, it is not without its dangers.

As the excitement for superintelligence grows, so too does concern. The astrophysicist Dr. Stephen Hawking warns that AI is "likely to be either the best or worst thing ever to happen to humanity, so there's huge value in getting it right." Hawking is not alone in his concern about superintelligence. Icons of the tech revolution, including former Microsoft chairman Bill Gates, Amazon founder Jeff Bezos, and Tesla and SpaceX CEO Elon Musk, echo his concern. And it terrifies Eliezer Yudkowsky.

A divisive figure, Yudkowsky is well-known in academic circles and the Silicon Valley scene as the coiner of the term "friendly AI." His thesis is simple, though his solution is not: if we are to have any hope against superintelligence, we need to code it properly from the beginning. The answer, Eliezer believes, is one of morality. AI must be programmed with a set of ethical codes that align with humanity's. Though it is his life's only work, Yudkowsky is pretty sure he will fail. Humanity, he says, is likely doomed.

Humanity has a long history of ignoring seers carrying accurate messages of our doom. You may not remember Cassandra, the tragic figure in Greek mythology for whom this phenomenon is named, but you will likely recall the 1986 Space Shuttle Challenger disaster. That explosion, and the resultant deaths of the seven astronauts, was specifically presaged in warnings by the selfsame engineers responsible for the O-ring technology that failed and caused the explosion. They warned, they were right, and they were ignored. Is Yudkowsky a modern-day Cassandra? Are there others?

Regardless of the warnings of Yudkowsky, Gates, Musk, Hawking, and others, humans will almost certainly pursue the creation of superintelligence relentlessly, as it holds unimaginable promise to transform the world. If or when it is born, many believe it will rapidly become more and more capable, able to tackle and solve the most advanced and perplexing challenges scientists pursue, and even those that they can't yet. A superintelligent computer will recursively self-improve to as-yet-uncomprehended levels of intelligence, although only time will tell whether this self-improvement happens gradually or within the first second of being turned on. It will carve new paths in fields yet undiscovered, fueled by perpetual self-improvements to its own source code and the creation of new robotic tools.

Artificial intelligence has the potential to be dramatically more powerful than any previous scientific advancement. Superintelligence, according to Nick Bostrom at Oxford, is not just another technology, another tool that will add incrementally to human capabilities. It is, he says, radically different, and it may be the last invention humans ever need to make.

Yudkowsky and others concerned about superintelligence view the issue through a Darwinian lens. Once humans are no longer the most intelligent species on the planet, humankind will survive only at the whim of whatever is. He fears that such superintelligent software would exploit the Internet, seizing control of anything connected to it: electrical infrastructure, telecommunications systems, manufacturing plants. Its first order of business may be to covertly replicate itself on many other servers all over the globe as a measure of redundancy. It could build machines and robots, or even secretly influence the decisions of ordinary people in pursuit of its own goals. Humanity and its welfare may be of little interest to an entity so profoundly smart.

Elon Musk calls creating artificial intelligence "summoning the demon" and thinks it's humanity's biggest existential threat. When we asked Eliezer what was at stake, his answer was simple: everything. Superintelligence gone wrong is a species-level threat, a human extinction event.

Humans are neither the fastest nor the strongest creatures on the planet but dominate for one reason: humans are the smartest. How might the balance of power shift if AI becomes superintelligent? Yudkowsky told us, "By the time it's starting to look like [an AI system] might be smarter than you, the stuff that is way smarter than you is not very far away." He believes this is crunch time "for the whole human species, and not just for us but for the [future] intergalactic civilization whose existence depends on us. This is the hour before the final exam and we're trying to get as much studying done as possible." As he has famously put it, the AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

Self-aware computers and killer robots are nothing new to the big screen, but some believe the intelligence explosion will be far worse than anything Hollywood has imagined. In a 2011 interview on NPR, AI programmer Keefe Roedersheimer discussed The Terminator and the follow-up series, which pits the superintelligent Skynet computer system against humanity. Below is a transcript of their conversation:

Mr. Roedersheimer: The Terminator [is an example of an] AI that could get out of control. But if you really think about it, it's much worse than that.

NPR: Much worse than Terminator?

Mr. Roedersheimer: Much, much worse.

NPR: How could it possibly... that's a moonscape with people hiding under burnt-out buildings and being shot by lasers. I mean, what could be worse than that?

Mr. Roedersheimer: All the people are dead.

NPR: In other words, forget the heroic human resistance. There'd be no time to organize one. Somebody presses enter, and we're done.

Yudkowsky believes superintelligence must be designed from the start with something approximating ethics. He envisions this as a system of checks and balances so that its growth is auditable and controllable; so that even as it continues to learn, advance, and reprogram itself, it will not evolve out of its own benign coding. Such preprogrammed measures will ensure that superintelligence behaves as we intend even in the absence of immediate human supervision. Eliezer calls this "friendly AI."

According to Yudkowsky, once AI gains the ability to broadly reprogram itself, it will be too late to implement safeguards, so society needs to prepare now for the intelligence explosion. Yet, this preparation is complicated by the sporadic and unpredictable nature of scientific advancement and the numerous covert efforts to create superintelligence around the world. No supranational organization can track all of the efforts, much less predict when or which one of them will succeed.

Eli and his supporters believe a wait-and-see approach (a form of satisficing) is a Kevorkian prescription. "[The birth of superintelligence] could be five years out; it could be forty years out; it could be sixty years out," Yudkowsky told us. "You don't know. I don't know. Nobody on the planet knows. And by the time you actually know, it's going to be [too late] to do anything about it."

Richard A. Clarke, a veteran of thirty years in national security and over a decade in the White House, is now the CEO of Good Harbor Security Risk Management and author, with R.P. Eddy, of Warnings: Finding Cassandras To Stop Catastrophes. Clarke is an adviser to Seattle-based AI cybersecurity company Versive.

R.P. Eddy is the CEO of Ergo, one of the worlds leading intelligence firms. His multi-decade career in national security includes serving as Director at the White House National Security Council.

Link:

Summoning the Demon: Why superintelligence is humanity's biggest threat - GeekWire

Posted in Superintelligence | Comments Off on Summoning the Demon: Why superintelligence is humanity’s biggest threat – GeekWire


Artificial Superintelligence Review: Reigns Supreme? – Gamezebo

Posted: May 18, 2017 at 2:41 pm

Artificial Superintelligence is a game about making choices. They're binary choices: you can swipe left or right when you've got a decision to make. The outcome affects the comings and goings of the AI company that you're the head of. It's a game of balancing and making the right decisions.

It's also a game about multiverses, the dangers of artificial intelligence, and a cute cat that wanders around your office making cute cat noises. It's essentially another take on the Tinder-as-a-game play of Reigns, but with a sci-fi setting.

And while it's fun, it's not quite as fun as its inspiration. There's some good writing here, and some nice ideas, but you never quite feel compelled enough to push on to the next universe to see what you can mess up this time round.

The game starts off with an AI you've built destroying the human race. From there you move from alternate universe to alternate universe, seeing if you can avoid making the same mistakes. Or, more likely, make a lot of different ones.

The key to success is balancing four different bars. These represent your employees, your investors, the government, and the tech press. Annoy one of them too much and it's game over. But on the flip side, if you favour one of them too much, you'll find your dreams crashing down too.

You have to make sure you're not giving too much power to any of the factions. With every choice you make, you'll see how it's going to change the meters of all the sides. But there's a twist: you can see how powerful that change is going to be, but not which direction the bar is going to slide.
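In pseudocode terms, the mechanic described here amounts to four bounded meters and choices that shift them by a visible magnitude in a hidden direction. The sketch below is a hypothetical reconstruction under those assumptions; the faction names, starting values, and thresholds are invented for illustration, not taken from the game:

```python
import random

METERS = ["employees", "investors", "government", "press"]

def apply_choice(meters: dict, magnitudes: dict) -> str:
    """Shift each affected meter by its shown magnitude in a hidden direction."""
    for faction, size in magnitudes.items():
        meters[faction] += size * random.choice((-1, 1))  # direction is hidden
    for faction, value in meters.items():
        if value <= 0 or value >= 100:  # annoyed too much, or favoured too much
            return f"game over: {faction} at {value}"
    return "still in business"

meters = {m: 50 for m in METERS}  # every faction starts balanced
print(apply_choice(meters, {"investors": 10, "government": 5}))
```

The double-sided failure condition (lose if any meter empties or fills) is what forces the balancing act the review describes.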

Some of the choices are pretty obvious. Others are a little less clear, and some of them don't really make any sense. It can be frustrating when a decision you thought was definitely going to go well with one of your backers actually has the opposite reaction.

Sometimes you'll get a second chance, with investors or the government stepping in at the last second to offer you a lifeline. Other times you'll end up disemboweled because you tried to use a keyboard to defend yourself during an annual purge that your AI has set up.

Each of the universes has its own set of rules you need to contend with. Some are in the grip of a horrible virus that makes your skin fall off, others are populated by robots. Some of them have senators that really like dressing up as rabbits.

The whole thing plays out pretty well, but there's frustration as well as fun here. Making the wrong choice can lead to disaster, and when it feels like you've been shortchanged by the game, it means you're less likely to jump to the next universe and try again.

And that's a bit of a shame. There's some excellent writing here, and some interesting ideas, but after a while you're going to stop caring that much and wander away to play something else.

Artificial Superintelligence could have been much better than it is, and who knows, maybe with a few tweaks down the line it'll be worth a second look. As it stands it's a very cautious recommendation. There's definitely enjoyment to be sucked out of the game, but just be prepared for that enjoyment to be tempered with annoyance.

Continue reading here:

Artificial Superintelligence Review: Reigns Supreme? - Gamezebo

Posted in Superintelligence | Comments Off on Artificial Superintelligence Review: Reigns Supreme? – Gamezebo

Artificial Superintelligence review – Reigns for a new generation – Pocket Gamer

Posted: May 17, 2017 at 2:06 am

If you're the type to enjoy making seemingly minor choices that can make a huge difference, then you can sometimes find your options slightly limited on mobile.

With Reigns 2 still a ways off, there's currently very little to get excited about, though Artificial Superintelligence will certainly scratch the itch for a short time.

It's a more modern take on the binary choice-focused title, though with a slightly less-intuitive UI and a simpler art style that will be off-putting to some.

Boot up

Playing Artificial Superintelligence couldn't be simpler: you're presented with a problem, and you have to choose between two solutions, each of which will impact different people in different ways.

Instead of taking charge of a nation, you're a startup tech firm in Silicon Valley, tasked with building the titular artificial intelligence through any means necessary.

You have to keep your employees and investors happy, while also balancing the wants of the government and the Internet, making sure not to make any faction too happy or too angry.

It's never made clear how your choices will impact the different factions, but the game does let you know the severity of your actions in advance; it's up to you to work out whether this will be in your favour or not.

The goal is to keep everyone balanced enough until your AI can reach its full potential, but failure simply means you jump to a new multiverse and start over again.

Blue screen

For the most part, this is actually pretty enjoyable. The whole affair is very light-hearted, and has you starting flamewars with Internet trolls, dealing with malicious employees leaving dumb responses in your AI, and the eventual apocalypse caused by a computer that deems humanity a waste of time.

But somehow, while this game takes so many cues from Nerial's Reigns, it misses something incredibly key: a slick user interface.

To make a choice, you have to move a slider to one end of a spectrum, a movement that somehow lacks the effortlessness of swiping through a deck of cards.

It's an odd choice, given that you can only make two choices and a slider effectively allows for any number of options, but Artificial Superintelligence never uses it for this purpose and suffers because of it.

On top of that, some presentation choices feel weak. There's a decent amount of chunky pixel art throughout the game, giving it a pleasant retro feel, but the bulk of it is a more simplistic cartoon style which feels cheap in comparison.

How about a nice game of chess?

Overall, Artificial Superintelligence is fairly enjoyable throughout. It's fast-paced fun with plenty of fresh ideas, and enough dumb jokes scattered throughout that you'll never go long without a laugh.

But it suffers from a lacklustre UI and some graphical choices that are more of an eyesore than endearing due to their simplicity.

It'll almost certainly make up for a lack of a new Reigns for now, but it needs a bit more spit and polish to be truly great.

See the article here:

Artificial Superintelligence review - Reigns for a new generation - Pocket Gamer

Posted in Superintelligence | Comments Off on Artificial Superintelligence review – Reigns for a new generation – Pocket Gamer

Artificial Superintelligence is an interesting Sci-Fi take on Reigns swiping mechanic – Pocket Gamer

Posted: May 13, 2017 at 6:03 am


See the original post here:

Artificial Superintelligence is an interesting Sci-Fi take on Reigns swiping mechanic - Pocket Gamer

Posted in Superintelligence | Comments Off on Artificial Superintelligence is an interesting Sci-Fi take on Reigns swiping mechanic – Pocket Gamer

Listen, Meatbag! Artificial Superintelligence is a New Game Starring the Snarky Carrot AI – AppAdvice

Posted: at 6:03 am

If you've never heard of Carrot, now is the time to take notice and listen. The snarky computer stars in a number of great titles including Carrot Weather, Carrot To-Do, and Carrot Fit. The large, sardonic dose of humor in each of the titles makes them stand out and makes them some of our favorites in each category.

And that wit makes Artificial Superintelligence a game you don't want to miss.

Part sci-fi, part comedy, the game has you take the role of a startup founder building the world's first sentient supercomputer. For each major decision, you will have only two choices, activated by sliding left or right.

Each decision will add up and eventually put you on the path to one of 52 different and unique endings. One ending, no kidding, sees your cat Captain Whiskers enslave humanity. Each time you restart, the game begins in a parallel universe, some with subtle changes and others that will make you laugh out loud.

Along with the great humor, the app also teaches players about how real AI works and how it's both good and bad for humans and the future.

If you've downloaded any of the previous Carrot apps, you'll definitely want to check out Artificial Superintelligence.

Artificial Superintelligence is designed for the iPhone/iPod touch and all iPad models. There is also a companion iMessage app with fun stickers. For a limited time, the app is available on the App Store for $2.99. Its usual price will be $3.99.

The rest is here:

Listen, Meatbag! Artificial Superintelligence is a New Game Starring the Snarky Carrot AI - AppAdvice

Posted in Superintelligence | Comments Off on Listen, Meatbag! Artificial Superintelligence is a New Game Starring the Snarky Carrot AI – AppAdvice

U.S. Navy calls out to gamers for assistance with …

Posted: May 11, 2017 at 1:05 pm


The Maritime Singularity simulation is yet another example of real-world value stemming from playing video games.

The next time someone tells you that playing video games doesn't have real-world applications, you might be able to say that your gaming skills assisted the U.S. Navy. As originally reported by Engadget, the U.S. Navy has put out a call for participants for its Maritime Singularity MMOWGLI (massively multiplayer online war game leveraging the internet).

The technological singularity hypothesis holds that if and when artificial superintelligence is invented, it will set off a swift chain reaction that will change human society forever, and not necessarily for the better. As it develops strategies for dealing with the possibility of a post-singularity world, the U.S. Navy thinks that gamers are ideal for problem-solving the future.

Dr. Eric Gulovsen, director of disruptive technology at the Office of Naval Research, claimed that technology has already reached the point where the singularity is in the foreseeable future. "What we can't see yet is what lies over that horizon. That's where we need help from players. This is a complex, open-ended problem, so we're looking for people from all walks of life, Navy, non-Navy, technologist, non-technologist, to help us design our Navy for a post-singularity world," he said.

If Maritime Singularity is set up like the Navy's previous MMOWGLIs, such as the recent effort to foster a more prosperous and secure South China Sea, participants will come up with opportunities and challenges pertaining to the singularity and play out various scenarios.

If the Navy's interest in the singularity doesn't sound enough like dystopian science fiction already, the game's blurb certainly sounds like it was ripped from the back cover of a William Gibson novel:

A tidal wave of change is rapidly approaching today's Navy. We can ride this wave and harness its energy, or get crushed by it. There is no middle ground. What is the nature of this change? The SINGULARITY. We can see the SINGULARITY on the horizon. What we can't see, YET, is what lies OVER that horizon. That's where you come in.

Maritime Singularity is open for signups now, and will run for a week beginning March 27. For more information, check out the overview video above.

See the rest here:

U.S. Navy calls out to gamers for assistance with ...

Posted in Superintelligence | Comments Off on U.S. Navy calls out to gamers for assistance with …

You’re invited: Strategies for an Artificially Superintelligent Future – FutureFive NZ

Posted: at 1:05 pm

David Miller, a Wellington business consultant with a social science background, invites you to join a discussion on the future of artificial intelligence. The talk is part of the Hutt City Council's STEMM Festival.

The topic of the session is something that fascinates Miller: the future arrival of the Singularity, a point at which machine intelligence exceeds human intelligence and then rapidly accelerates beyond it.

Readily admitting no domain expertise in the technical disciplines associated with artificial intelligence, Miller finds the prospect of superintelligence an interesting one worth talking about. He notes that some of the world's leading thinkers on the subject, such as Stephen Hawking and Elon Musk, have expressed their concerns. Miller will draw on the writings and thinking of Professor Nick Bostrom at Oxford University.

Miller says that a common belief is that such a future gives no cause for concern, but he thinks keeping an open mind is important.

"There are of course some who assure us that there is no risk. But remember the bright sparks (sometimes experts in their day) who assured us that aeroplanes, computers and telephones had no future when they were first invented?" he says.

The session he has initiated is not concerned with the short-term technicalities of artificial intelligence, but rather operates on the assumption that superintelligence poses significant threats to humans.

The topics he will cover include:

The session is designed to get people engaged in healthy discussion on the topic. There will be plenty of time for questions and discussion and perhaps to cover off some global issues which Miller says have not been well covered in the literature to date.

The session takes place Thursday 18th May, 5.30pm to 6.30pm, at The Dowse Art Museum in Lower Hutt.

For those interested, you can sign up at EXOSphere here or simply email david@vantagegroup.co.nz to book or ask any questions.

Excerpt from:

You're invited: Strategies for an Artificially Superintelligent Future - FutureFive NZ

Posted in Superintelligence | Comments Off on You’re invited: Strategies for an Artificially Superintelligent Future – FutureFive NZ
