Yudkowsky – Staring into the Singularity 1.2.5

This document has been marked as wrong, obsolete, deprecated by an improved version, or just plain old.

The address of this document is http://sysopmind.com/singularity.html. If you found it elsewhere, please visit the foregoing link for the most recent version.

Computing speed doubles every two years.
Computing speed doubles every two years of work.
Computing speed doubles every two subjective years of work.

Two years after Artificial Intelligences reach human equivalence, their speed doubles. One year later, their speed doubles again.

Six months - three months - 1.5 months ... Singularity.

Plug in the numbers for current computing speeds, the current doubling time, and an estimate for the raw processing power of the human brain, and the numbers match in: 2021.

But personally, I'd like to do it sooner.

It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of all earthly life.

It began four million years ago, when brain volumes began climbing rapidly in the hominid line.

Fifty thousand years ago with the rise of Homo sapiens sapiens.
Ten thousand years ago with the invention of civilization.
Five hundred years ago with the invention of the printing press.
Fifty years ago with the invention of the computer.

In less than thirty years, it will end.

At some point in the near future, someone will come up with a method of increasing the maximum intelligence on the planet - either coding a true Artificial Intelligence or enhancing human intelligence. An enhanced human would be better at thinking up ways of enhancing humans; would have an "increased capacity for invention". What would this increased ability be directed at? Creating the next generation of enhanced humans.

And what would those doubly enhanced minds do? Research methods for producing triply enhanced humans, or build AI minds operating at computer speeds. And an AI would be able to reprogram itself, directly, to run faster - or smarter. And then our crystal ball explodes, "life as we know it" is over, and everything we know goes out the window.

A civilization with high technology is unstable; it ends when the species destroys itself or improves on itself. If the current trends continue - if we don't run up against some unexpected theoretical cap on intelligence, or turn the Earth into a radioactive wasteland, or bury the planet under a tidal wave of voracious self-reproducing nanodevices - the Singularity is inevitable. The most-quoted estimate for the Singularity is 2035 - within your lifetime! - although many, including me, think that the Singularity may occur substantially sooner.

Some terminology, due to Vernor Vinge's Hugo-winning A Fire Upon The Deep:

Power - An entity from beyond the Singularity.
Transcend, Transcended, Transcendence - The act of reprogramming oneself to be smarter, reprogramming (with one's new intelligence) to be smarter still, and so on ad Singularitum. The "Transcend" is the metaphorical area where the Powers live.
Beyond - The grey area between being human and being a Power; the domain inhabited by entities smarter than human, but not possessing the technology to reprogram themselves directly and Transcend.

"I imagine bugs and girls have a dim perception that Nature played a cruel trick on them, but they lack the intelligence to really comprehend its magnitude."
-- Calvin and Hobbes

But why should the Powers be so much more than we are now? Why not assume that we'll get a little smarter, and that's it?

Consider the sequence 1, 2, 4, 8, 16, 32. Consider the iteration of F(x) = (x + x). Every couple of years, computer performance doubles. (1) That is the demonstrated rate of improvement as overseen by constant, unenhanced minds - progress according to mortals.

Right now the amount of networked silicon computing power on the planet is slightly above the power of a human brain. The power of a human brain is 10^17 ops/sec, or one hundred million billion operations per second (2), versus a billion or so computers on the Internet with somewhere between 100 million ops/sec and 1 billion ops/sec apiece. The total amount of computing power on the planet is the amount of power in a human brain, 10^17 ops/sec, multiplied by the number of humans, presently six billion or 6x10^9. The amount of artificial computing power is so small as to be irrelevant, not because there are so many humans, but because of the sheer raw power of a single human brain.
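
A back-of-the-envelope check of those totals, using only the round figures quoted above (a sketch; the per-machine speed is taken at the upper end of the stated range):

    # Back-of-the-envelope totals, using the figures quoted in the text.
    BRAIN_OPS = 1e17            # estimated ops/sec of one human brain
    HUMANS = 6e9                # world population, circa 2000
    COMPUTERS = 1e9             # rough count of machines on the Internet
    OPS_PER_COMPUTER = 1e9      # upper end of the per-machine estimate

    biological = BRAIN_OPS * HUMANS          # ~6e26 ops/sec
    silicon = COMPUTERS * OPS_PER_COMPUTER   # ~1e18 ops/sec

    print(f"biological: {biological:.1e} ops/sec")
    print(f"silicon:    {silicon:.1e} ops/sec")
    print(f"silicon is {silicon / BRAIN_OPS:.0f}x one brain, "
          f"but only {silicon / biological:.1e} of the biological total")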

At the old rate of progress, when the original Singularity calculations were performed in 1988 (3), computers were expected to reach human-equivalent levels - 10^17 floating-point operations per second, or one hundred petaflops - at around 2035. But at that rate of progress, one-teraflops machines were expected in 2000; as it turned out, one-teraflops machines were around in 1996, when this document was first written. In 1998 the top speed was 3.2 teraflops, and in 1999 IBM announced the Blue Gene project to build a petaflops machine by 2005. So the old estimates may be a little conservative.
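
As a sketch of the kind of extrapolation being made here - assuming nothing but a constant doubling time and the data points quoted above (one teraflops in 1996, a petaflops target in 2005):

    import math

    def years_to_reach(target_ops, current_ops, doubling_time_years):
        """Years until target_ops is reached at a constant doubling rate."""
        doublings = math.log2(target_ops / current_ops)
        return doublings * doubling_time_years

    # Old-style projection: 1 teraflops in 1996, doubling every two years.
    print(1996 + years_to_reach(1e17, 1e12, 2.0))   # roughly 2029
    # Blue Gene-style projection: 1 petaflops in 2005, doubling every 18 months.
    print(2005 + years_to_reach(1e17, 1e15, 1.5))   # roughly 2015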

Once we have human-equivalent computers, the amount of computing power on the planet is equal to the number of humans plus the number of computers. The amount of intelligence available takes a huge jump. Ten years later, humans become a vanishing quantity in the equation.

That doubling sequence is actually a pessimistic projection, because it assumes that computing power continues to double at the same rate. But why? Computer speeds don't double due to some inexorable physical law, but because researchers and engineers find ways to make faster chips. If some of the researchers and engineers are themselves computers...

A group of human-equivalent computers spends 2 years to double computer speeds. Then they spend another 2 subjective years, or 1 year in human terms, to double it again. Then they spend another 2 subjective years, or six months, to double it again. After four years total, the computing power goes to infinity.

That is the "Transcended" version of the doubling sequence. Let's call the "Transcend" of a sequence {a0, a1, a2...} the function where the interval between a(n) and a(n+1) is inversely proportional to a(n). (4). So a Transcended doubling function starts with 1, in which case it takes 1 time-unit to go to 2. Then it takes 1/2 time-units to go to 4. Then it takes 1/4 time-units to go to 8. This function, if it were continuous, would be the hyperbolic function y = 2/(2 - x). When x = 2, then (2 - x) = 0 and y = infinity. The behavior at that point is known mathematically as a singularity.
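
To see the arithmetic, here is a minimal sketch of the Transcended doubling sequence just defined; the elapsed time tracks the hyperbola y = 2/(2 - x) and piles up against x = 2 (the 2^20 cutoff is arbitrary):

    # Transcended doubling: start at 1; the time to the next doubling
    # is inversely proportional to the current value.
    value, elapsed = 1, 0.0
    while value < 2**20:          # arbitrary cutoff; the true sequence never stops
        print(f"t = {elapsed:.6f}   value = {value}")   # value == 2/(2 - t) at each step
        elapsed += 1.0 / value    # interval shrinks as the value grows
        value *= 2
    print(f"total elapsed time approaches 2, never reaching it: {elapsed:.6f}")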

And the Transcended doubling sequence is also a pessimistic projection, not a Singularity at all, because it assumes that only speed is enhanced. What if the quality of thought were enhanced? Right now, two years of work - well, these days, eighteen months of work - eighteen subjective months of work suffices to double computing speeds. Shouldn't this improve a bit with thought-sharing and eidetic memories? Shouldn't this improve if, say, the total sum of human scientific knowledge is stored in predigested, cognitive, ready-to-think format? Shouldn't this improve with short-term memories capable of holding the whole of human knowledge? A human-equivalent AI isn't "equivalent" - if Kasparov had had even the smallest, meanest automatic chess-playing program integrated solidly with his intuitions, he would have beaten Deep Blue into a pulp. That's The AI Advantage: Simple tasks carried out at blinding speeds and without error, conscious tasks carried out with perfect memory and total self-awareness.

I haven't even started on the subject of AIs redesigning their cognitive architectures, although they'll have a far easier time of it than we would - especially if they can make backups. Transcended doubling might run up against the laws of physics before reaching infinity... but even the laws of physics as now understood would allow one gram (more or less) to store and run the entire human race at a million subjective years per second. (5).

Let's take a deep breath and think about that for a moment. One gram. The entire human race. One million years per second. That means, using only this planetary mass for computing power, it would be possible to support more people than the entire Universe could support if biological humans colonized every single planet. It means that, in a single day, a civilization could live over 80 billion years, several times the age of the Universe to date.
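
The arithmetic behind that last claim, taking the million-subjective-years-per-second figure above at face value (a quick sketch):

    SUBJECTIVE_YEARS_PER_SECOND = 1e6      # figure quoted above
    SECONDS_PER_DAY = 60 * 60 * 24
    AGE_OF_UNIVERSE_YEARS = 1.4e10         # rough current estimate

    subjective_years_per_day = SUBJECTIVE_YEARS_PER_SECOND * SECONDS_PER_DAY
    print(subjective_years_per_day)                          # 8.64e10, i.e. ~86 billion years
    print(subjective_years_per_day / AGE_OF_UNIVERSE_YEARS)  # roughly 6 times the age of the Universe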

The peculiar thing is that most people who talk about "the laws of physics" setting hard limits on Powers would never even dream of setting the same limits on a (merely) galaxy-spanning civilization of (normal) humans a (brief) billion years old. Part of that is simply a cultural convention of science fiction; interstellar civilizations can break any physical law they please, because the readers are used to it. But part of that is because scientists and science-fiction authors have been taught, so many times, that Ultimate Unbreakable Limits usually fall to human ingenuity and a few generations of time. Nobody dares say what might be possible a billion years from now because that is a simply unimaginable amount of time.

We know that change crept at a snail's pace a mere millennium ago, and that even a hundred years ago it would have been impossible to place correct limits on the ultimate power of technology. We know that the past could never have placed limits on the present, and so we don't try to place limits on the future. But with transhumans, the analogy is not to Lord Kelvin, nor Aristotle, nor to a hunter-gatherer - all of whom had human intelligence - but to a Neanderthal. With Powers, to a fish. And yet, because the power of higher intelligence is not as publicly recognized as the power of a few million years - because we have no history of naysayers being embarrassed by transhumans instead of mere time - some of us still sit, grunting around the fire, setting ultimate limits on the sharpness of spears; some of us still swim about, unblinking, unable to engage in abstract thought, but knowing that the entire Universe is, must be, wet.

To convey the rate of progress driven by smarter researchers, I needed to invent a function more complex than the doubling function used above. We'll call this new function T(n). You can think of T(n) as representing the largest number conceivable to someone with an n-neuron brain. More formally, T(n) is defined as the longest block of 1s produced by any halting n-state Turing Machine acting on an initially blank tape. If you're familiar with computers but not Turing Machines, consider T(n) to be the largest number that can be produced by a computer program with n instructions. Or, if you're an information theorist, think of T(n) as the inverse function of complexity; it produces the largest number with complexity n or less.

The sequence produced by iterating T(n), S{n} = T(S{n - 1}), is constant for very low values of n. S{0} is defined to be 0; a program of length zero produces no output. This corresponds to a Universe empty of intelligence. T(1) = 1. This corresponds to an intelligence not capable of enhancing itself; this corresponds to where we are now. T(2) = 3. Here begins the leap into the Abyss. Once this function increases at all, it immediately tapdances off the brink of the knowable. T(3) = 6? T(6) = 64?
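
Restated in conventional notation - this is only the definition above written out, nothing new; T is closely related to what logicians call the Busy Beaver function:

    \[
      T(n) \;=\; \max\bigl\{\,\text{length of the longest block of 1s left on the tape by } M
        \;:\; M \text{ an $n$-state Turing machine that halts on a blank tape}\,\bigr\}
    \]
    \[
      S_0 = 0, \qquad S_n = T(S_{n-1}) \quad \text{for } n \ge 1.
    \]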

T(64) = vastly more than 10^80, the number of atoms in the Universe. T(10^80) is something that only a Transcendent entity will ever be able to calculate, and that only if Transcendent entities can create new Universes, maybe even new laws of physics, to supply the necessary computing power. Even T(64) will probably never be known to any strictly human being.

Now take the Transcended version of S{n}, starting at 2. Half a time-unit later, we have 3. A third of a time-unit after that, 6. A sixth later - one whole unit after this function started - we have 64. A sixty-fourth later, 10^80. An unimaginably tiny fraction of a second later... Singularity.

Is S{n} really a good model of the Singularity? Of course not. "Good model of the Singularity" is an oxymoron; that's the whole point; the Singularity will outrun any model a human could have formulated a hundred years ago, and the Singularity will outrun any model we formulate today. (6)

The main objection, though, would be that S{n} is an ungrounded metaphor. The Transcended doubling sequence models faster researchers. It's easy to say that S{n} models smarter researchers, but what does smarter actually mean in this context?

Smartness is the measure of what you see as obvious, what you can see as obvious in retrospect, what you can invent, and what you can comprehend. To be more precise about it, smartness is the measure of your semantic primitives (what is simple in retrospect), the way in which you manipulate the semantic primitives (what is obvious), the structures your semantic primitives can form (what you can comprehend), and the way you can manipulate those structures (what you can invent). If you speak complexity theory, the difference between obvious and obvious in retrospect, or inventable and comprehensible, is like the difference between NP and P.

All humans who have not suffered neural injuries have the same semantic primitives. What is obvious in retrospect to one is obvious in retrospect to all. (Four notes: First, by "neural injuries" I do not mean anything derogatory - it's just that a person missing the visual cortex will not have visual semantic primitives. If certain neural pathways are severed, people not only lose their ability to see colors; they lose their ability to remember or imagine colors. Second, theorems in math may be obvious in retrospect only to mathematicians - but anyone else who acquired the skill would have the ability to see it. Third, to some extent what we speak of as obvious involves not just the symbolic primitives but very short links between them. I am counting the primitive link types as being included under "semantic primitives". When we look at a thought-sequence and see it as being obvious in retrospect, it is not necessarily a single semantic primitive, but is composed of a very short chain of semantic primitives and link types. Fourth, I apologize for my tendency to dissect my own metaphors; I really can't help it.)

Similarly, the human cognitive architecture is universal. We all have the same sorts of underlying mindstuff. Though the nature of this mindstuff is not necessarily known, our ability to communicate with each other indicates that, whatever we are communicating, it is the same on both sides. If any two humans share a set of concepts, any structure composed of those concepts that is understood by one will be understood by the other.

Different humans may have different degrees of the ability to manipulate and structure concepts; different humans may see and invent different things. The great breakthroughs of physics and engineering did not occur because a group of people plodded and plodded and plodded for generations until they found an explanation so complex, a string of ideas so long, that only time could invent it. Relativity and quantum physics and buckyballs and object-oriented programming all happened because someone put together a short, simple, elegant semantic structure in a way that nobody had ever thought of before. Being a little bit smarter is where revolutions come from. Not time. Not hard work. Although hard work and time were usually necessary, others had worked far harder and longer without result. The essence of revolution is raw smartness.

Now think about the Singularity. Think about a chimpanzee trying to understand integral calculus. Think about the people with damaged visual neurology who cannot remember what it was like to see, who cannot imagine the color red or visualize two-dimensional structures. Think about a visual cortex with trillions of times as many neuron-equivalents. Think about twenty thousand distinct colors in the rainbow, none a shade of any other. Think about rotating fifty-dimensional objects. Think about attaching semantic primitives to the pixels, so that one could see a rainbow of ideas in the same way that we see a rainbow of colors.

Our semantic primitives even determine what we can know. Why does anything exist at all? Nobody knows. And yet the answer is obvious. The First Cause must be obvious. It has to be obvious to Nothing, present in the absence of anything else, a substance formed from -blank-, a conclusion derived without data or initial assumptions. What is it that evokes conscious experience, the stuff that minds are made of? We are made of conscious experiences. There is nothing we experience more directly. How does it work? We don't have a clue. Two and a half millennia of trying to solve it and nothing to show for it but "I think therefore I am." The solutions seem to be necessarily simple, yet are demonstrably imperceptible. Perhaps the solutions operate outside the representations that can be formed with the human brain.

If so, then our descendants, successors, future selves will figure out the semantic primitives necessary and alter themselves to perceive them. The Powers will dissect the Universe and the Reality until they understand why anything exists at all, analyze neurons until they understand qualia. And that will only be the beginning. It won't end there. Why should there be only two hard problems? After all, if not for humans, the Universe would apparently contain only one hard problem, for how could a non-conscious thinker formulate the hard problem of consciousness? Might there be states of existence beyond mere consciousness - transsentience? Might solving the nature of reality create the ability to create new Universes, manipulate the laws of physics, even alter the kind of things that can be real - "ontotechnology"? That's what the Singularity is all about.

So before you talk about life as a Power or the Utopia to come - a favorite pastime of transhumanists and Extropians is to discuss the problems of uploading, life after being uploaded, and so on - just remember that you probably have a much better chance of solving both hard problems than you do of making a valid statement about the future. This goes for me too. I'll stand by everything I said about humans, including our inability to understand certain things, but everything I said about the Powers is almost certainly wrong. "They'll figure out the semantic primitives necessary and alter themselves to perceive them." Wrong. "Figure out." "Semantic primitives." "Alter." "Perceive." I would bet on all of these terms becoming obsolete after the Singularity. There are better ways and I'm sure They - or It, or [sound of exploding brain] - will "find them".

I would like to introduce a unit of post-Singularity progress, the Perceptual Transcend or PT.

[Brief pause while audience collapses in helpless laughter.]

A Perceptual Transcend occurs when all things that were comprehensible become obvious in retrospect, and all things that were inventable become obvious. A Perceptual Transcend occurs when the semantic structures of one generation become the semantic primitives of the next. To put it another way, one PT from now, the whole of human knowledge becomes perceivable in a single flash of experience, in the same way that we now perceive an entire picture at once.

Computers are a PT above humans when it comes to arithmetic - sort of. While we need to manipulate an entire precarious pyramid of digits, rows and columns in order to multiply 62305 by 10358, a computer can spit out the answer - 645355190 - in a single obvious step. These computers aren't actually a PT above us at all, for two reasons. First of all, they just handle numbers up to two billion instead of 9; after that they need to manipulate pyramids too. Far more importantly, they don't notice anything about the numbers they manipulate, as humans do. If you multiply 23704 by 14223, using the wedding-cake method of multiplication, you won't multiply 23704 by 2 twice in a row; you'll just steal the results from last time. If one of the interim results is 12345 or 99999 or 314159, you'll notice that, too. The way computers manipulate numbers is actually less powerful than the way we manipulate numbers.
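
As a toy illustration of the difference being described - long multiplication that reuses repeated partial products and notices "interesting" interim results (a sketch; the list of interesting values is just the ones named above):

    def attentive_multiply(a, b):
        """Long multiplication that caches partial products and flags notable ones."""
        interesting = {12345, 99999, 314159}   # the examples named above
        partials = {}                           # digit -> a * digit, reused when a digit repeats
        total = 0
        for place, digit_char in enumerate(reversed(str(b))):
            digit = int(digit_char)
            if digit not in partials:
                partials[digit] = a * digit     # compute once, "steal the results" next time
            partial = partials[digit]
            if partial in interesting:
                print(f"noticed an interesting partial product: {partial}")
            total += partial * 10 ** place
        return total

    print(attentive_multiply(23704, 14223))     # 337141992; 23704 * 2 is computed only once
    print(attentive_multiply(62305, 10358))     # 645355190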

Would the Powers settle for less? A PT above us, multiplication is carried out automatically but with full attention to interim results, numbers that happen to be prime, and the like. If I were designing one of the first Powers - and, down at the Singularity Institute, this is what we're doing - I would create an entire subsystem for manipulating numbers, one that would pick up on primality, complexity, and all the numeric properties known to humanity. A Power would understand why 62305 times 10358 equals 645355190, with the same understanding that would be achieved by a top human mathematician who spent hours studying all the numbers involved. And at the same time, the Power will multiply the two numbers automatically.

For such a Power, to whom numbers were true semantic primitives, Fermat's Last Theorem and the Goldbach Conjecture and the Riemann Hypothesis might be obvious. Somewhere in the back of its mind, the Power would test each statement with a million trials, subconsciously manipulating all the numbers involved to find why they were not the sum of two cubes or why they were the sum of two primes or why their real part was equal to one-half. From there, the Power could intuit the most basic, simple solution simply by generalizing. Perhaps human mathematicians, if they could perform the arithmetic for a thousand trials of the Riemann Hypothesis, examining every intermediate step, looking for common properties and interesting shortcuts, could intuit a formal solution. But they can't, and they certainly can't do it subconsciously, which is why the Riemann Hypothesis remains unobvious and unproven - it is a conceptual structure instead of a conceptual primitive.

Perhaps an even more thought-provoking example is provided by our visual cortex. On the surface, the visual cortex seems to be an image processor. In a modern computer graphics engine, an image is represented by a two-dimensional array of pixels (7). To rotate this image - to cite one operation - each pixel's rectangular coordinates {x, y} are converted to polar coordinates {theta, r}. All thetas, representing the angle, have a constant added. The polar coordinates are then converted back to rectangular. There are ways to optimize this process, and ways to account for intersecting and empty pixels on the new array, but the essence is clear: To perform an operation on an entire picture, perform the operation on each pixel in that picture.
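
A minimal sketch of the per-pixel rotation just described (Python; the "image" is just a nested list of values, and the empty-pixel bookkeeping mentioned above is ignored):

    import math

    def rotate_image(pixels, angle):
        """Rotate a square image about its center, one pixel at a time."""
        n = len(pixels)
        cx = cy = (n - 1) / 2
        out = [[0] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                # rectangular -> polar, relative to the center
                r = math.hypot(x - cx, y - cy)
                theta = math.atan2(y - cy, x - cx)
                # add a constant to every angle
                theta += angle
                # polar -> rectangular, rounding to the nearest destination pixel
                nx = round(cx + r * math.cos(theta))
                ny = round(cy + r * math.sin(theta))
                if 0 <= nx < n and 0 <= ny < n:
                    out[ny][nx] = pixels[y][x]
        return out

    image = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]   # a vertical bar
    for row in rotate_image(image, math.pi / 2):
        print(row)                              # a horizontal bar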

At this point, one could say that a Perceptual Transcend depends on the level at which you're looking at the operation. If you view yourself as carrying out the operation pixel by pixel, it is an unimaginably tedious cognitive structure, but if you view the whole thing in a single lump, it is a cognitive primitive - a point made in Hofstadter's Ant Fugue when discussing ants and colonies. Not very exciting unless it's Hofstadter explaining it, but there's more to the visual cortex than that.

For one thing, we consciously experience redness. (If you're not sure what conscious experience, a.k.a. "qualia", means, the short version is that you are not the one who speaks your thoughts, you are the one who hears your thoughts.) Qualia are the stuff making up the indescribable difference between red and green.

The term "semantic primitive" describes more than just the level at which symbols are discrete, compact objects. It describes the level of conscious perception. Unlike the computer manipulating numbers formed of bits, and like the imagined Power manipulating theorems formed of numbers, we don't lose any resolution in passing from the pixel level to the picture level. We don't suddenly perceive the idea "there is a bear in front of me"; we see a picture of a bear, containing millions of pixels, every one of which is consciously experienced simultaneously. A Perceptual Transcend isn't "just" the imposition of a new cognitive level; it turns the cognitive structures into consciously experienced primitives.

"To put it another way, one PT from now, the whole of human knowledge becomes perceivable in a single flash of experience, in the same way that we now perceive an entire picture at once."

Of course, the PT won't be used as a post-Singularity unit of progress. Even if it were initially, it won't be too long before "PT" itself is Transcended and the Powers jump out of the system yet again. After all, the Singularity is ultimately as far beyond me, the author, as it is beyond any other human, and so my PTs will be as worthless a description as the doubling sequence discarded so long ago. Even if we accept the PT as the basic unit of measure, it simply introduces a secondary Singularity. Maybe the Perceptual Transcends will occur every two consciously experienced years at first, but then will occur every conscious year, and then every conscious six months - get the picture?

It's like the "Birthday Cantatatata..." in Hofstadter's book Godel, Escher, Bach. You can start with the sequence {1, 2, 3, 4 ...} and jump out of it to w (omega), the symbol for infinity. But then one has {w, w + 1, w + 2 ... }, and we jump out again to 2w. Then 3w, and 4w, and w^2 and w^3 and w^w and w^(w^w) and higher towers of w until we jump out to the ordinal epsilon-zero (e0), which includes all exponential towers of ws.

The PTs may introduce a second Singularity, and a third Singularity, and a fourth, until Singularities are coming faster and faster and the first w-Singularity is imminent -

Or the Powers may simply jump beyond that system. The Birthday Cantatatata... was written by a human - admittedly Douglas Hofstadter, but still a human - and the concepts involved in it may be Transcended by the very first transhuman.

The Powers are beyond our ability to comprehend.

Get the picture?

It's hard to appreciate the Singularity properly without first appreciating really large numbers. I'm not talking about little tiny numbers, barely distinguishable from zero, like the number of atoms in the Universe or the number of years it would take a monkey to duplicate the works of Shakespeare. I invite you to consider what was, circa 1977, the largest number ever to be used in a serious mathematical proof. The proof, by Ronald L. Graham, is an upper bound to a certain question of Ramsey theory. In order to explain the proof, one must introduce a new notation, due to Donald E. Knuth in the article Coping With Finiteness. The notation is usually a small arrow, pointing upwards, here abbreviated as ^. Written as a function:
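
Here is one way to write it - a minimal sketch in Python, using recursion and exact integers; anything beyond the tiniest arguments will, of course, never finish running:

    def arrow(num, power, arrows):
        """Knuth's up-arrow notation: arrow(a, b, n) = a ^^...^^ b with n arrows."""
        if arrows == 1:
            return num ** power                  # a single ^ is ordinary exponentiation
        if power == 0:
            return 1
        # a ^^..^^ b = a ^^..^ (a ^^..^^ (b - 1)), with one fewer arrow on the outside
        return arrow(num, arrow(num, power - 1, arrows), arrows - 1)

    print(arrow(2, 4, 1))    # 2^4 = 16
    print(arrow(3, 3, 1))    # 3^3 = 27
    print(arrow(3, 3, 2))    # 3^^3 = 3^27 = 7,625,597,484,987
    # arrow(3, 3, 3) is 3^^^3, an exponential tower of threes
    # 7,625,597,484,987 levels high - don't try to run it.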

2^4 = 2 * 2 * 2 * 2 = 16.

3^^4 = 3^(3^(3^3)) = 3^(3^27) = 3^7,625,597,484,987

7^^^^3 = 7^^^(7^^^7).

3^3 = 3 * 3 * 3 = 27. This number is small enough to visualize.

3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987. Larger than 27, but so small I can actually type it. Nobody can visualize seven trillion of anything, but we can easily understand it as being on roughly the same order as, say, the gross national product.

3^^^3 = 3^^(3^^3) = 3^(3^(3^(3^...^(3^3)...))). The "..." is 7,625,597,484,987 threes long. In other words, 3^^^3 or arrow(3, 3, 3) is an exponential tower of threes 7,625,597,484,987 levels high. The number is now beyond the human ability to understand, but the procedure for producing it can be visualized. You take x=1. You let x equal 3^x. Repeat seven trillion times. While the very first stages of the number are far too large to be contained in the entire Universe, the exponential tower, written as "3^3^3^3...^3", is still so small that it could be stored on a modern supercomputer.

3^^^^3 = 3^^^(3^^^3) = 3^^(3^^(3^^...^^(3^^3)...)). Both the number and the procedure for producing it are now beyond human visualization, although the procedure can be understood. Take a number x=1. Let x equal an exponential tower of threes of height x. Repeat 3^^^3 times, where 3^^^3 equals an exponential tower seven trillion threes high.

And yet, in the words of Martin Gardner: "3^^^^3 is unimaginably larger than 3^^^3, but it is still small as finite numbers go, since most finite numbers are very much larger."

And now, Graham's number. Let x equal 3^^^^3, or the unimaginable number just described above. Let x equal 3^^^...(x arrows)...^^^3. Repeat 63 times, or 64 including the starting 3^^^^3.

Graham's number is far beyond my ability to grasp. I can describe it, but I cannot properly appreciate it. (Perhaps Graham can appreciate it, having written a mathematical proof that uses it.) This number is far larger than most people's conception of infinity. I know that it was larger than mine. My sense of awe when I first encountered this number was beyond words. It was the sense of looking upon something so much larger than the world inside my head that my conception of the Universe was shattered and rebuilt to fit. All theologians should face a number like that, so they can properly appreciate what they invoke by talking about the "infinite" intelligence of God.

My happiness was completed when I learned that the actual answer to the Ramsey problem that gave birth to that number - rather than the upper bound - was probably six.

Why was all of this necessary, mathematical aesthetics aside? Because until you understand the hollowness of the words "infinity", "large" and "transhuman", you cannot appreciate the Singularity. Even appreciating the Singularity is as far beyond us as visualizing Graham's number is to a chimpanzee. Farther beyond us than that. No human analogies will ever be able to describe the Singularity, because we are only human.

The number above was forged of the human mind. It is nothing but a finite positive integer, though a large one. It is composite and odd, rather than prime or even; it is perfectly divisible by three. Encoded in the decimal digits of that number, by almost any encoding scheme one cares to name, are all the works ever written by the human hand, and all the works that could have been written, at a hundred thousand words per minute, over the age of the Universe raised to its own power a thousand times. And yet, if we add up all the base-ten digits the result will be divisible by nine. The number is still a finite positive integer. It may contain Universes unimaginably larger than this one, but it is still only a number. It is a number so small that the algorithm to produce it can be held in a single human mind.
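
Those divisibility claims can't be checked on Graham's number directly, but they hold for any power of three whose exponent is at least two - and Graham's number is such a power. A quick check (a sketch) on the far smaller 3^^3:

    n = 3 ** 27                      # 3^^3 = 7,625,597,484,987; a stand-in for Graham's number
    print(n % 2 == 1)                # odd
    print(n % 3 == 0)                # divisible by three
    print(sum(int(d) for d in str(n)) % 9 == 0)   # digit sum divisible by nine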

The Singularity is beyond that. We cannot pigeonhole it by stating that it will be a finite positive integer. We cannot say anything at all about it, except that it will be beyond our understanding.

If you thought that Knuth's arrow notation produced some fairly large numbers, what about T(n)? How many states does a Turing machine need to implement the calculation above? What is the complexity of Graham's number, C(Graham)? Probably on the order of 100. And moreover, T(C(Graham)) is likely to be much, much larger than Graham's number. Why go through x = 3^^^...(x arrows)...^^^3 only 64 times? Why not 3^^^^3 times? That'd probably be easier, since we already need to generate 3^^^^3, but not 64. And with the extra space, we might even be able to introduce an even more computationally complex algorithm. In fact, Knuth's arrow notation may not be the most powerful algorithm that fits into C(Knuth) states.

T(n) is the metaphor for the growth rate of a self-enhancing entity because it conveys the concept of having additional intelligence with which to enhance oneself. I don't know when T(n) passes beyond the threshold of what human mathematicians can, in theory, calculate. Probably more than n=10 and less than n=100. The point is that after a few iterations, we wind up with T(4294967296). Now, I don't know what T(4294967296) will be equal to, but the winning Turing machine will probably generate a Power whose purpose is to think of a really large number. That's what the term "large" means.

It's all very well to talk about cognitive primitives and obviousness, but again - what does smarter mean? The meaning of smart can't be grounded in the Singularity - I haven't been there yet. So what's my practical definition?

"Of course, I never wrote the 'important' story, the sequel about the first amplified human. Once I tried something similar. John Campbell's letter of rejection began: 'Sorry - you can't write this story. Neither can anyone else.'"
-- Vernor Vinge

Let's take a concrete example, the story Flowers for Algernon (later the movie Charly), by Daniel Keyes. (I'm afraid I'll have to tell you how the story comes out, but it's a Character story, not an Idea story, so that shouldn't spoil it.) Flowers for Algernon is about a neurosurgical procedure for intelligence enhancement. This procedure was first tested on a mouse, Algernon, and later on a retarded human, Charlie Gordon. The enhanced Charlie has the standard science-fictional set of superhuman characteristics; he thinks fast, learns a lifetime of knowledge in a few weeks, and discusses arcane mathematics (not shown). Then the mouse, Algernon, gets sick and dies. Charlie analyzes the enhancement procedure (not shown) and concludes that the process is basically flawed. Later, Charlie dies.

That's a science-fictional enhanced human. A real enhanced human would not have been taken by surprise. A real enhanced human would realize that any simple intelligence enhancement will be a net evolutionary disadvantage - if enhancing intelligence were a matter of a simple surgical procedure, it would have long ago occurred as a natural mutation. This goes double for a procedure that works on rats! (As far as I know, this never occurred to Keyes. I selected Flowers, out of all the famous stories of intelligence enhancement, because, for reasons of dramatic unity, this story shows what happens to be the correct outcome.)

Note that I didn't dazzle you with an abstruse technobabble explanation for Charlie's death; my explanation is two sentences long and can be understood by someone who isn't an expert in the field. It's the simplicity of smartness that's so impossible to convey in fiction, and so shocking when we encounter it in person. All that science fiction can do to show intelligence is jargon and gadgetry. A truly ultrasmart Charlie Gordon wouldn't have been taken by surprise; he would have deduced his probable fate using the above, very simple, line of reasoning. He would have accepted that probability, rearranged his priorities, and acted accordingly until his time ran out - or, more probably, figured out an equally simple and obvious-in-retrospect way to avoid his fate. If Charlie Gordon had really been ultrasmart, there would have been no story.

There are some gaps so vast that they make all problems new. Imagine whatever field you happen to be an expert in - neuroscience, programming, plumbing, whatever - and consider the gap between a novice, just approaching a problem for the first time, and an expert. Even if a thousand novices try to solve a problem and fail, there's no way to say that a single expert couldn't solve the problem casually, offhandedly. If a hundred well-educated physicists try to solve a problem and fail, an Einstein might still be able to succeed. If a thousand twelve-year-olds try for a year to solve a problem, it says nothing about whether or not an adult is likely to be able to solve the problem. If a million hunter-gatherers try to solve a problem for a century, the answer might still be obvious to any educated twenty-first-century human. And no number of chimpanzees, however long they try, could ever say anything about whether the least human moron could solve the problem without even thinking. There are some gaps so vast that they make all problems new; and some of them, such as the gap between novice and expert, or the gap between hunter-gatherer and educated citizen, are not even hardware gaps - they deal not with the magic of intelligence, but the magic of knowledge, or of lack of stupidity.

I think back to before I started studying evolutionary psychology and cognitive science. I know that I could not then have come close to predicting the course of the Singularity. "If I couldn't have gotten it right then, what makes me think I can get it right now?" I am a human, and an educated citizen, and an adult, and an expert, and a genius... but if there is even one more gap of similar magnitude remaining between myself and the Singularity, then my speculations will be no better than those of an eighteenth-century scientist.

We're all familiar with individual variations in human intelligence, distributed along the great Gaussian curve; this is the only referent most of us have for "smarter". But precisely because these variations fall within the design range of the human brain, they're nothing out of the ordinary. One of the very deep truths about the human mind is that evolution designed us to be stupid - to be blinded by ideology, to refuse to admit we're wrong, to think "the enemy" is inhuman, to be affected by peer pressure. Variations in intelligence that fall within the normal design range don't directly affect this stupidity. That's where we get the folk wisdom that intelligence doesn't imply wisdom, and within the human range this is mostly correct (8). The variations we see don't hit hard enough to make people appreciate what "smarter" means.

I am a Singularitarian because I have some small appreciation of how utterly, finally, absolutely impossible it is to think like someone even a little tiny bit smarter than you are. I know that we are all missing the obvious, every day. There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from "impossible" to "obvious". Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards...

And I know that my picture of the Singularity will still fall short of the truth. I may not be modest, but I have my humility - if I can spot anthropomorphisms and gaping logical flaws in every alleged transhuman in every piece of science fiction, it follows that a slightly higher-order genius (never mind a real transhuman!) could read this page and laugh at my lack of imagination. Call it experience, call it humility, call it self-awareness, call it the Principle of Mediocrity; I've crossed enough gaps to believe there are more. I know, in a dim way, just how dumb I am.

I've tried to show the Beyondness of the Singularity by brute force, but it doesn't take infinite speeds and PTs and ws to place something utterly beyond us. All it takes is a little tiny bit of edge, a bit smarter, and the Beyond stares us in the face once more. I've never been through the Singularity. I've never been to the Transcend. I just staked out an area of the Low Beyond. This page is devoted to communicating a sense of awe that comes from personal experience, and is, therefore, merely human.

From my cortex, to yours; every concept here was born of a plain old Homo sapiens - and any impression it has made on you was likewise born of a plain old Homo sapiens. Someone who has devoted a bit more thought, or someone a bit more extreme; it makes no difference. Whatever impression you got from this page has not been an accurate picture of the far future; it has, unavoidably, been an impression of me. And I am not the far future. Only a version of "Staring into the Singularity" written by an actual Power could convey experience of the actual Singularity.

Take whatever future shock this page evoked, and associate it not with the Singularity; associate it with me, the mild, quiet-spoken fellow infinitesimally different from the rest of humanity. Don't bother trying to extrapolate beyond that. You can't. Nobody can - not you, not me.

2035. Probably earlier.

Since the Internet exploded across the planet, there has been enough networked computing power for intelligence. If the Internet were properly reprogrammed, it would be enough to run a human brain, or a seed AI. On the nanotechnology side, we possess machines capable of producing arbitrary DNA sequences, and we know how to turn arbitrary DNA sequences into arbitrary proteins (9). We have machines - Atomic Force Probes - that can put single atoms anywhere we like, and which have recently [1999] been demonstrated to be capable of forming atomic bonds. Hundredth-nanometer precision positioning, atomic-scale tweezers... the news just keeps on piling up.

If we had a time machine, 100K of information from the future could specify a protein that built a device that would give us nanotechnology overnight. 100K could contain the code for a seed AI. Ever since the late 90's, the Singularity has been only a problem of software. And software is information, the magic stuff that changes at arbitrarily high speeds. As far as technology is concerned, the Singularity could happen tomorrow. One breakthrough - just one major insight - in the science of protein engineering or atomic manipulation or Artificial Intelligence, one really good day at Webmind or Zyvex, and the door to Singularity sweeps open.

Drexler has written a detailed, technical, how-to book for nanotechnology. After stalling for thirty years, AI is making a comeback. Computers are growing in power even faster than their usual, pedestrian rate of doubling in power every two years. Quate has constructed a 16-head parallel Scanning Tunnelling Probe. [Written in '96.] I'm starting to work out methods of coding a transhuman AI. [Written in '98.] The first chemical bond has been formed using an atomic-force microscope. The U.S. government has announced its intent to spend hundreds of millions of dollars on nanotechnology research. IBM has announced the Blue Gene project to achieve petaflops (10) computing power by 2005, with intent to crack the protein folding problem. The Singularity Institute for Artificial Intelligence, Inc. has been incorporated as a nonprofit with the express purpose of coding a seed AI. [Written in '00.]

The exact time of Singularity is customarily predicted by taking a trend and extrapolating it, much as The Population Bomb predicted that we'd run out of food in 1977. For example, population growth is hyperbolic. (Maybe you learned it was exponential in math class, but it's hyperbolic to a much better fit than exponential.) If that trend continues, world population reaches infinity on Aug 17, 2027, plus or minus 1.8 years. It is, of course, impossible for the human population to reach infinity. Some say that if we can create AIs, then the graph might measure sentient population instead of human population. These people are torturing the metaphor. Nobody designed the population curve to take into account developments in AI. It's just a curve, a bunch of numbers. It can't distort the future course of technology just to remain on track.
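
For the shape of that hyperbolic extrapolation, here is a sketch calibrated only to figures already used in this essay - roughly six billion people around 2000 and a pole in mid-2027. The constants are illustrative, not a fit to real census data:

    POLE = 2027.6                    # the extrapolated date quoted above
    C = 6e9 * (POLE - 2000.0)        # calibrate N(t) = C / (POLE - t) to ~6 billion in 2000

    def hyperbolic_population(year):
        return C / (POLE - year)

    for year in (2000, 2010, 2020, 2025, 2027):
        print(year, f"{hyperbolic_population(year):.2e}")
    # The curve blows up as the year approaches the pole - which is exactly why
    # it can't literally be describing the number of biological humans.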

If you project on a graph the minimum size of the materials we can manipulate, it reaches the atomic level - nanotechnology - in I forget how many years (the page vanished), but I think around 2035. This, of course, was before the time of the Scanning Tunnelling Microscope and "IBM" spelled out in xenon atoms. For that matter, we now have the artificial atom ("You can make any kind of artificial atom - long, thin atoms and big, round atoms."), which has in a sense obsoleted merely molecular nanotechnology. As of '95, Drexler was giving the ballpark figure of 2015 (11). I suspect the timetable has been accelerated a bit since then. My own guess would be no later than 2010.

Similarly, computing power doubles every two years (make that eighteen months). If we extrapolate forty - no, thirty - no, fifteen years ahead, we find computers with as much raw power (10^17 ops/sec) as some people think humans have, arriving in 2035 - no, 2025 - no, 2015. [The previous sentence was written in 1996, revised later that year, and then revised again in 2000; hence the peculiar numbers.] Does this mean we have the software to spin minds? No. Does this mean we can program smarter people? No. Does this take into account any breakthroughs between now and then? No. Does this take into account the laws of physics? No. Is this a detailed model of all the researchers around the planet? No.

It's just a graph. The "amazing constancy" of Moore's Law entitles it to consideration as a thought-provoking metaphor of the future, but nothing more. The Transcended doubling sequence doesn't account for how the faster computer-based researchers can get the physical manufacturing technology for the next generation set up in picoseconds, or how they can beat the laws of physics. That's not to say that such things are impossible - it doesn't actually strike me as all that likely that modern-day physics has really reached the ultimate bottom level. Maybe there are no physical limits. The point is that Moore's Law doesn't explain how physics can be bypassed.

Mathematics can't predict when the Singularity is coming. (Well, it can, but it won't get it right.) Even the remarkably steady numbers, such as the one describing the doubling rate of computing power, (A) describe unaided human minds and (B) are speeding up, perhaps due to computer-aided design programs. Statistics may be used to predict the future, but they don't model it. What I'm trying to say here is that "2035" is just a wild guess, and it might as well be next Tuesday.
