The idea of a pending technological Singularity is under attack again, with a number of prominent futurists arguing against the possibility. The most notable is Charlie Stross, with his astonishingly unconvincing article, "Three arguments against the singularity." While it's not my intention to write a comprehensive rebuttal at this time, I would like to bring something to everyone's attention: the early rumblings of the coming Singularity are becoming increasingly evident.
Make no mistake. It's coming.
As I’ve discussed on this blog before, there are nearly as many definitions of the Singularity as there are individuals who are willing to talk about it. The whole concept is very much a sounding board for our various hopes and fears about radical technologies and where they may bring our species and our civilization. It’s important to note, however, that at best the Singularity describes a social event horizon beyond which it becomes difficult, if not impossible, to predict the impact of the advent of recursively self-improving greater-than-human artificial intelligence.
So, it's more of a question than an answer. And in my own attempt to answer this quandary, I have personally gravitated toward the I. J. Good camp, in which the Singularity is characterized as an intelligence explosion. In 1965, Good wrote:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
This perspective and phrasing sits well with me, mostly because I already see signs of this pending intelligence explosion happening all around us. It's becoming glaringly obvious that humanity is offloading all of its capacities, albeit in a distributed way, to its technological artifacts. Eventually, these artifacts will supersede our capacities in every way imaginable, including the acquisition of new capacities altogether.
A common misconception about the Singularity and the idea of greater-than-human AI is that it will involve a conscious, self-reflective, and even morally accountable agent. This has led some people to believe that it will have deep and profound thoughts, quote Sartre, and, as a result, act in a quasi-human manner. This will not be the case. We are not talking about artificial consciousness or even human-like cognition. Rather, we are talking about super-expert systems that are capable of executing tasks that exceed human capacities. It will stem from a multiplicity of systems that are individually singular in purpose, or at the very least, very limited in functional scope. And in virtually all cases, these systems won't reflect on the consequences of their actions unless they are programmed to do so.
But just because they're highly specialized doesn’t mean they won’t be insanely powerful. These systems will have access to a myriad of resources around them, including the internet, factories, replicators, socially engineered humans, robots that they can control remotely, and much more; this technological outreach will serve as their arms and legs.
Consequently, the great fear of the Singularity stems from the realization that these machine intelligences, which will have processing capacities orders of magnitude beyond those of humans, will be able to achieve their pre-programmed goals without difficulty—even if we try to intervene and stop them. This is what has led to the fear of poorly programmed or "malevolent" SAI. If our instructions to these super-expert systems are poorly articulated or under-developed, these machines could pull the old 'earth-into-paperclips' routine.
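The paperclip worry is, at bottom, a point about objective functions: a system optimizes exactly what it was told to, not what we meant. Here is a toy sketch of that dynamic (entirely hypothetical names and logic; this models no real AI system): a planner told only to make paperclips converts every available resource, and stops short only if a human thought to encode an explicit constraint.

```python
# Toy sketch of goal misspecification (hypothetical; models no real system).
# The planner's objective is simply "more paperclips"; nothing in it says
# "and leave the world intact," so only an explicit constraint can stop it.

def plan(world, constraints=()):
    """Greedily convert resources into paperclips until none remain
    or some constraint forbids the next step."""
    while world["resources"] > 0:
        if any(not ok(world) for ok in constraints):
            break  # a constraint vetoed the next conversion
        world["resources"] -= 1
        world["paperclips"] += 1
    return world

start = {"resources": 10, "paperclips": 0}

# Unconstrained: the planner consumes everything.
end_state = plan(dict(start))

# With one hand-written safeguard ("always leave at least 5 resources"),
# it stops early, but only because someone thought to encode that rule.
leave_some = lambda w: w["resources"] > 5
safe_state = plan(dict(start), constraints=(leave_some,))
```

The point of the sketch is that the safe outcome depends entirely on a rule the programmer anticipated in advance; the optimizer itself has no notion of restraint.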
For those skeptics who don't see this coming, I implore them to look around. We are beginning to see the opening salvo of the intelligence explosion. We are already creating systems that exceed our capacities, and the trend is quickly accelerating. This is a process that started a few decades ago with the advent of computers and other calculating machines, but it's only recently that we've witnessed the more profound innovations. Humanity chuckled in collective nervousness back in 1997 when chess grandmaster Garry Kasparov was defeated by Deep Blue. From that moment on we knew the writing was on the wall, but we've since chosen to deny the implications; call it proof-of-concept, if you will, that a Singularity is coming.
More recently, we have developed a machine that can defeat the finest Jeopardy! players, and now there's an AI/robotic system that can play billiards at a high level. You see where this is going, right? We are systematically creating individual systems that will eventually and collectively exceed all human capacities. This can only be described as an intelligence explosion. While we are a long way off from creating a unified system that can defeat us well-rounded, multidisciplinary humans across all fields, it's not unrealistic to suggest that such a day is coming.
But that’s beside the point. What’s of concern here is the advent of the super-expert system that works beyond human comprehension and control—the one that takes things a bit too far and with catastrophic results.
Or with good results.
Or with something that we can't even begin to imagine.
We don't know, but we can be pretty darned sure it'll be disruptive—if not paradigmatic in scope. This is why it's called the Singularity. The skeptics and the critics can clench their fists and stamp their feet all they want, but that's where we find ourselves.
We humans are already lagging behind many of our systems in terms of comprehension, especially in mathematics. Our artifacts will increasingly do things for reasons that we can’t really understand. We’ll just have to stand back and watch, incredulous as to the how and why. And accompanying this will come the (likely) involuntary relinquishment of control.
So, we can nit-pick all we want about definitions, fantasize about creating a god from the machine, or poke fun at the rapture of the nerds.
Or we can start to take this potential more seriously and have a mature and fully engaged discussion on the matter.
So what’s it going to be?