
Category Archives: The Singularity

Singularity Q&A | KurzweilAI

Posted: June 26, 2016 at 10:50 am

Originally published in 2005 with the launch of The Singularity Is Near.

Questions and Answers

So what is the Singularity?

Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge. Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity, full-immersion virtual reality incorporating all of the senses (like The Matrix), experience beaming (like Being John Malkovich), and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned.

And that's the Singularity?

No, that's just the precursor. Nonbiological intelligence will have access to its own design and will be able to improve itself in an increasingly rapid redesign cycle. We'll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it. That will mark the Singularity.

When will that occur?

I set the date for the Singularity, representing a profound and disruptive transformation in human capability, as 2045. The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.

Why is this called the Singularity?

The term Singularity in my book is comparable to the use of this term by the physics community. Just as we find it hard to see beyond the event horizon of a black hole, we also find it difficult to see beyond the event horizon of the historical Singularity. How can we, with our limited biological brains, imagine what our future civilization, with its intelligence multiplied trillions-fold, will be capable of thinking and doing? Nevertheless, just as we can draw conclusions about the nature of black holes through our conceptual thinking, despite never having actually been inside one, our thinking today is powerful enough to have meaningful insights into the implications of the Singularity. That's what I've tried to do in this book.

Okay, let's break this down. It seems a key part of your thesis is that we will be able to capture the intelligence of our brains in a machine.

Indeed.

So how are we going to achieve that?

We can break this down further into hardware and software requirements. In the book, I show how we need about 10 quadrillion (10^16) calculations per second (cps) to provide a functional equivalent to all the regions of the brain. Some estimates are lower than this by a factor of 100. Supercomputers are already at 100 trillion (10^14) cps, and will hit 10^16 cps around the end of this decade. Several supercomputers with 1 quadrillion cps are already on the drawing board, with two Japanese efforts targeting 10 quadrillion cps around the end of the decade. By 2020, 10 quadrillion cps will be available for around $1,000. Achieving the hardware requirement was controversial when my last book on this topic, The Age of Spiritual Machines, came out in 1999, but is now pretty much a mainstream view among informed observers. Now the controversy is focused on the algorithms.
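As a back-of-the-envelope illustration of the arithmetic behind these timelines (my own sketch, not a calculation from the book), assuming the roughly annual doubling of price-performance described later in this interview:

```python
# Rough sketch of the doubling arithmetic in the paragraph above.
# The cps figures are the interview's; the steady doubling time is an
# assumption used for illustration, not Kurzweil's own model.
import math

def years_to_target(current_cps: float, target_cps: float,
                    doubling_time_years: float = 1.0) -> float:
    """Years until capability reaches the target, given a fixed doubling time."""
    return doubling_time_years * math.log2(target_cps / current_cps)

# From ~10^14 cps (supercomputers "today") to the ~10^16 cps estimate:
print(years_to_target(1e14, 1e16))  # ~6.6 years of annual doubling
```

About seven doublings separate 10^14 from 10^16 cps, which is what puts the crossover "around the end of this decade" from a 2005 vantage point.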

And how will we recreate the algorithms of human intelligence?

To understand the principles of human intelligence we need to reverse-engineer the human brain. Here, progress is far greater than most people realize. The spatial and temporal (time) resolution of brain scanning is also progressing at an exponential rate, roughly doubling each year, like most everything else having to do with information. Scanning tools can now see individual interneuronal connections and watch them fire in real time. Already, we have mathematical models and simulations of a couple dozen regions of the brain, including the cerebellum, which comprises more than half the neurons in the brain. IBM is now creating a simulation of about 10,000 cortical neurons, including tens of millions of connections. The first version will simulate the electrical activity, and a future version will also simulate the relevant chemical activity. By the mid-2020s, it's conservative to conclude that we will have effective models for all of the brain.

So at that point we'll just copy a human brain into a supercomputer?

I would rather put it this way: At that point, we'll have a full understanding of the methods of the human brain. One benefit will be a deep understanding of ourselves, but the key implication is that it will expand the toolkit of techniques we can apply to create artificial intelligence. We will then be able to create nonbiological systems that match human intelligence in the ways in which humans are now superior, for example, our pattern-recognition abilities. These superintelligent computers will be able to do things we are not able to do, such as share knowledge and skills at electronic speeds.

By 2030, a thousand dollars of computation will be about a thousand times more powerful than a human brain. Keep in mind also that computers will not be organized as discrete objects as they are today. There will be a web of computing deeply integrated into the environment, our bodies and brains.

You mentioned the AI tool kit. Hasn't AI failed to live up to its expectations?

There was a boom and bust cycle in AI during the 1980s, similar to what we saw recently in e-commerce and telecommunications. Such boom-bust cycles are often harbingers of true revolutions; recall the railroad boom and bust in the 19th century. But just as the Internet bust was not the end of the Internet, the so-called AI Winter was not the end of the story for AI either. There are hundreds of applications of narrow AI (machine intelligence that equals or exceeds human intelligence for specific tasks) now permeating our modern infrastructure. Every time you send an email or make a cell phone call, intelligent algorithms route the information. AI programs diagnose electrocardiograms with an accuracy rivaling doctors, evaluate medical images, fly and land airplanes, guide intelligent autonomous weapons, make automated investment decisions for over a trillion dollars of funds, and guide industrial processes. These were all research projects a couple of decades ago. If all the intelligent software in the world were to suddenly stop functioning, modern civilization would grind to a halt. Of course, our AI programs are not intelligent enough to organize such a conspiracy, at least not yet.

Why don't more people see these profound changes ahead?

Hopefully after they read my new book, they will. But the primary failure is the inability of many observers to think in exponential terms. Most long-range forecasts of what is technically feasible in future time periods dramatically underestimate the power of future developments because they are based on what I call the intuitive linear view of history rather than the historical exponential view. My models show that we are doubling the paradigm-shift rate every decade. Thus the 20th century was gradually speeding up to the rate of progress at the end of the century; its achievements, therefore, were equivalent to about twenty years of progress at the rate in 2000. We'll make another twenty years of progress in just fourteen years (by 2014), and then do the same again in only seven years. To express this another way, we won't experience one hundred years of technological advance in the 21st century; we will witness on the order of 20,000 years of progress (again, when measured by the rate of progress in 2000), or about 1,000 times greater than what was achieved in the 20th century.
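One way to make that arithmetic explicit (my own reconstruction, treating the rate of progress $t$ years after 2000 as proportional to $2^{t/10}$, i.e., doubling every decade): the 21st century then delivers

$$\int_{0}^{100} 2^{t/10}\,dt \;=\; \frac{10}{\ln 2}\bigl(2^{10}-1\bigr) \;\approx\; 1.5\times 10^{4}$$

year-2000-equivalent years of progress, i.e., on the order of 20,000 years, while the same integral taken over the 20th century, $\int_{-100}^{0} 2^{t/10}\,dt \approx 14$, reproduces the "about twenty years" figure.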

The exponential growth of information technologies is even greater: we're doubling the power of information technologies, as measured by price-performance, bandwidth, capacity and many other types of measures, about every year. That's a factor of a thousand in ten years, a million in twenty years, and a billion in thirty years. This goes far beyond Moore's law (the shrinking of transistors on an integrated circuit, allowing us to double the price-performance of electronics each year). Electronics is just one example of many. As another example, it took us 14 years to sequence HIV; we recently sequenced SARS in only 31 days.
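The thousand/million/billion factors follow directly from yearly doubling:

$$2^{10}\approx 1.02\times 10^{3},\qquad 2^{20}\approx 1.05\times 10^{6},\qquad 2^{30}\approx 1.07\times 10^{9}.$$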

So this acceleration of information technologies applies to biology as well?

Absolutely. It's not just computer devices like cell phones and digital cameras that are accelerating in capability. Ultimately, everything of importance will be comprised essentially of information technology. With the advent of nanotechnology-based manufacturing in the 2020s, we'll be able to use inexpensive table-top devices to manufacture on-demand just about anything from very inexpensive raw materials using information processes that will rearrange matter and energy at the molecular level.

We'll meet our energy needs using nanotechnology-based solar panels that will capture the energy in 0.03 percent of the sunlight that falls on the Earth, which is all we need to meet our projected energy needs in 2030. We'll store the energy in highly distributed fuel cells.
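A rough sanity check on the 0.03 percent figure (my own arithmetic; the solar constant and Earth radius values are standard round numbers, not figures from the interview):

```python
# Rough check of the "0.03 percent of sunlight" claim, using standard
# approximate values (assumptions, not figures from the interview).
import math

SOLAR_CONSTANT_W_PER_M2 = 1361.0      # solar irradiance at top of atmosphere
EARTH_RADIUS_M = 6.371e6

intercepted_w = SOLAR_CONSTANT_W_PER_M2 * math.pi * EARTH_RADIUS_M**2
captured_w = intercepted_w * 0.0003   # 0.03 percent

print(f"Sunlight intercepted by Earth: {intercepted_w:.2e} W")  # ~1.7e17 W
print(f"0.03 percent of that:          {captured_w:.2e} W")     # ~5e13 W, i.e. ~50 TW
# For comparison, current world primary power use is roughly 2e13 W (~20 TW),
# so ~50 TW is comfortably above projected 2030 demand.
```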

I want to come back to both biology and nanotechnology, but how can you be so sure of these developments? Isn't technical progress on specific projects essentially unpredictable?

Predicting specific projects is indeed not feasible. But the result of the overall complex, chaotic evolutionary process of technological progress is predictable.

People intuitively assume that the current rate of progress will continue for future periods. Even for those who have been around long enough to experience how the pace of change increases over time, unexamined intuition leaves one with the impression that change occurs at the same rate that we have experienced most recently. From the mathematician's perspective, the reason for this is that an exponential curve looks like a straight line when examined for only a brief duration. As a result, even sophisticated commentators, when considering the future, typically use the current pace of change to determine their expectations in extrapolating progress over the next ten years or one hundred years. This is why I describe this way of looking at the future as the intuitive linear view. But a serious assessment of the history of technology reveals that technological change is exponential. Exponential growth is a feature of any evolutionary process, of which technology is a primary example.
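The mathematician's point fits in one line: over a short enough window an exponential is indistinguishable from its tangent line, since for small $kt$

$$e^{kt} \;=\; 1 + kt + \tfrac{1}{2}(kt)^{2} + \cdots \;\approx\; 1 + kt,$$

so extrapolating from recent experience amounts to keeping only the linear term.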

As I show in the book, this has also been true of biological evolution. Indeed, technological evolution emerges from biological evolution. You can examine the data in different ways, on different timescales, and for a wide variety of technologies, ranging from electronic to biological, as well as for their implications, ranging from the amount of human knowledge to the size of the economy, and you get the same exponential, not linear, progression. I have over forty graphs in the book from a broad variety of fields that show the exponential nature of progress in information-based measures. For the price-performance of computing, this goes back over a century, well before Gordon Moore was even born.

Aren't there a lot of predictions of the future from the past that look a little ridiculous now?

Yes, any number of bad predictions from other futurists in earlier eras can be cited to support the notion that we cannot make reliable predictions. In general, these prognosticators were not using a methodology based on a sound theory of technology evolution. I say this not just looking backwards now. I've been making accurate forward-looking predictions for over twenty years based on these models.

But how can it be the case that we can reliably predict the overall progression of these technologies if we cannot even predict the outcome of a single project?

Predicting which company or product will succeed is indeed very difficult, if not impossible. The same difficulty occurs in predicting which technical design or standard will prevail. For example, how will the wireless-communication protocols Wimax, CDMA, and 3G fare over the next several years? However, as I argue extensively in the book, we find remarkably precise and predictable exponential trends when assessing the overall effectiveness (as measured in a variety of ways) of information technologies. And as I mentioned above, information technology will ultimately underlie everything of value.

But how can that be?

We see examples in other areas of science of very smooth and reliable outcomes resulting from the interaction of a great many unpredictable events. Consider that predicting the path of a single molecule in a gas is essentially impossible, but predicting the properties of the entire gas, comprised of a great many chaotically interacting molecules, can be done very reliably through the laws of thermodynamics. Analogously, it is not possible to reliably predict the results of a specific project or company, but the overall capabilities of information technology, comprised of many chaotic activities, can nonetheless be dependably anticipated through what I call the law of accelerating returns.
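A toy simulation (my own illustration of the statistical point, not a model from the book): if industry-wide improvement is the sum of a great many independent, individually unpredictable project outcomes, the aggregate growth rate is nearly constant even though any single project is essentially a coin flip.

```python
# Toy illustration of the gas-law analogy: many unpredictable "projects",
# one smooth aggregate trend. All numbers here are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_projects, years = 10_000, 30

# Each project, each year: 5% chance of success, with a highly variable payoff
# (its contribution to the industry's overall log capability).
success = rng.random((years, n_projects)) < 0.05
payoff = rng.lognormal(mean=0.0, sigma=1.0, size=(years, n_projects))
aggregate_growth = (success * payoff).mean(axis=1)   # industry-wide, per year

print("mean yearly growth:", aggregate_growth.mean())
print("relative variation:", aggregate_growth.std() / aggregate_growth.mean())
# The relative variation is small (several percent): the aggregate is far more
# predictable than any individual project, as with molecules and gas laws.
```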

What will the impact of these developments be?

Radical life extension, for one.

Sounds interesting, how does that work?

In the book, I talk about three great overlapping revolutions that go by the letters GNR, which stand for genetics, nanotechnology, and robotics. Each will provide a dramatic increase in human longevity, among other profound impacts. We're in the early stages of the genetics (also called biotechnology) revolution right now. Biotechnology is providing the means to actually change your genes: not just designer babies but designer baby boomers. We'll also be able to rejuvenate all of your body's tissues and organs by transforming your skin cells into youthful versions of every other cell type. Already, new drug development is precisely targeting key steps in the process of atherosclerosis (the cause of heart disease), cancerous tumor formation, and the metabolic processes underlying each major disease and aging process. The biotechnology revolution is already in its early stages and will reach its peak in the second decade of this century, at which point we'll be able to overcome most major diseases and dramatically slow down the aging process.

That will bring us to the nanotechnology revolution, which will achieve maturity in the 2020s. With nanotechnology, we will be able to go beyond the limits of biology, and replace your current human body version 1.0 with a dramatically upgraded version 2.0, providing radical life extension.

And how does that work?

The killer app of nanotechnology is nanobots, which are blood-cell sized robots that can travel in the bloodstream destroying pathogens, removing debris, correcting DNA errors, and reversing aging processes.

Human body version 2.0?

We're already in the early stages of augmenting and replacing each of our organs, even portions of our brains with neural implants, the most recent versions of which allow patients to download new software to their neural implants from outside their bodies. In the book, I describe how each of our organs will ultimately be replaced. For example, nanobots could deliver to our bloodstream an optimal set of all the nutrients, hormones, and other substances we need, as well as remove toxins and waste products. The gastrointestinal tract could be reserved for culinary pleasures rather than the tedious biological function of providing nutrients. After all, we've already in some ways separated the communication and pleasurable aspects of sex from its biological function.

And the third revolution?

The robotics revolution, which really refers to strong AI, that is, artificial intelligence at the human level, which we talked about earlier. We'll have both the hardware and software to recreate human intelligence by the end of the 2020s. We'll be able to improve these methods and harness the speed, memory capabilities, and knowledge-sharing ability of machines.

We'll ultimately be able to scan all the salient details of our brains from inside, using billions of nanobots in the capillaries. We can then back up the information. Using nanotechnology-based manufacturing, we could recreate your brain, or, better yet, reinstantiate it in a more capable computing substrate.

Which means?

Our biological brains use chemical signaling, which transmits information at only a few hundred feet per second. Electronics is already millions of times faster than this. In the book, I show how one cubic inch of nanotube circuitry would be about one hundred million times more powerful than the human brain. So we'll have more powerful means of instantiating our intelligence than the extremely slow speeds of our interneuronal connections.
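The "millions of times faster" comparison is just a ratio of signal speeds: a few hundred feet per second is on the order of $10^{2}$ m/s, while electronic signals propagate at an appreciable fraction of the speed of light, $c \approx 3\times 10^{8}$ m/s, so

$$\frac{3\times 10^{8}\ \text{m/s}}{\sim 10^{2}\ \text{m/s}} \;\sim\; 10^{6}\text{--}10^{7}.$$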

So we'll just replace our biological brains with circuitry?

I see this starting with nanobots in our bodies and brains. The nanobots will keep us healthy, provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the Internet, and otherwise greatly expand human intelligence. But keep in mind that nonbiological intelligence is doubling in capability each year, whereas our biological intelligence is essentially fixed in capacity. As we get to the 2030s, the nonbiological portion of our intelligence will predominate.

The closest life extension technology, however, is biotechnology, isn't that right?

There's certainly overlap in the G, N and R revolutions, but that's essentially correct.

So tell me more about how genetics or biotechnology works.

As we are learning about the information processes underlying biology, we are devising ways of mastering them to overcome disease and aging and extend human potential. One powerful approach is to start with biology's information backbone: the genome. With gene technologies, we're now on the verge of being able to control how genes express themselves. We now have a powerful new tool called RNA interference (RNAi), which is capable of turning specific genes off. It blocks the messenger RNA of specific genes, preventing them from creating proteins. Since viral diseases, cancer, and many other diseases use gene expression at some crucial point in their life cycle, this promises to be a breakthrough technology. One gene we'd like to turn off is the fat insulin receptor gene, which tells the fat cells to hold on to every calorie. When that gene was blocked in mice, those mice ate a lot but remained thin and healthy, and actually lived 20 percent longer.

New means of adding new genes, called gene therapy, are also emerging that have overcome earlier problems with achieving precise placement of the new genetic information. One company I'm involved with, United Therapeutics, cured pulmonary hypertension in animals using a new form of gene therapy, and it has now been approved for human trials.

So we're going to essentially reprogram our DNA.

That's a good way to put it, but that's only one broad approach. Another important line of attack is to regrow our own cells, tissues, and even whole organs, and introduce them into our bodies without surgery. One major benefit of this therapeutic cloning technique is that we will be able to create these new tissues and organs from versions of our cells that have also been made younger (the emerging field of rejuvenation medicine). For example, we will be able to create new heart cells from your skin cells and introduce them into your system through the bloodstream. Over time, your heart cells get replaced with these new cells, and the result is a rejuvenated young heart with your own DNA.

Drug discovery was once a matter of finding substances that produced some beneficial effect without excessive side effects. This process was similar to early humans' tool discovery, which was limited to simply finding rocks and natural implements that could be used for helpful purposes. Today, we are learning the precise biochemical pathways that underlie both disease and aging processes, and are able to design drugs to carry out precise missions at the molecular level. The scope and scale of these efforts is vast.

But perfecting our biology will only get us so far. The reality is that biology will never be able to match what we will be capable of engineering, now that we are gaining a deep understanding of biology's principles of operation.

Isn't nature optimal?

Not at all. Our interneuronal connections compute at about 200 transactions per second, at least a million times slower than electronics. As another example, a nanotechnology theorist, Rob Freitas, has a conceptual design for nanobots that replace our red blood cells. A conservative analysis shows that if you replaced 10 percent of your red blood cells with Freitas' respirocytes, you could sit at the bottom of a pool for four hours without taking a breath.

If people stop dying, isn't that going to lead to overpopulation?

A common mistake that people make when considering the future is to envision a major change to today's world, such as radical life extension, as if nothing else were going to change. The GNR revolutions will result in other transformations that address this issue. For example, nanotechnology will enable us to create virtually any physical product from information and very inexpensive raw materials, leading to radical wealth creation. We'll have the means to meet the material needs of any conceivable size population of biological humans. Nanotechnology will also provide the means of cleaning up environmental damage from earlier stages of industrialization.

So we'll overcome disease, pollution, and poverty; sounds like a utopian vision.

It's true that the dramatic scale of the technologies of the next couple of decades will enable human civilization to overcome problems that we have struggled with for eons. But these developments are not without their dangers. Technology is a double-edged sword; we don't have to look past the 20th century to see the intertwined promise and peril of technology.

What sort of perils?

G, N, and R each have their downsides. The existential threat from genetic technologies is already here: the same technology that will soon make major strides against cancer, heart disease, and other diseases could also be employed by a bioterrorist to create a bioengineered biological virus that combines ease of transmission, deadliness, and stealthiness, that is, a long incubation period. The tools and knowledge to do this are far more widespread than the tools and knowledge to create an atomic bomb, and the impact could be far worse.

So maybe we shouldn't go down this road.

It's a little late for that. But the idea of relinquishing new technologies such as biotechnology and nanotechnology is already being advocated. I argue in the book that this would be the wrong strategy. Besides depriving human society of the profound benefits of these technologies, such a strategy would actually make the dangers worse by driving development underground, where responsible scientists would not have easy access to the tools needed to defend us.

So how do we protect ourselves?

I discuss strategies for protecting against dangers from abuse or accidental misuse of these very powerful technologies in chapter 8. The overall message is that we need to give a higher priority to preparing protective strategies and systems. We need to put a few more stones on the defense side of the scale. I've given testimony to Congress on a specific proposal for a Manhattan-style project to create a rapid response system that could protect society from a new virulent biological virus. One strategy would be to use RNAi, which has been shown to be effective against viral diseases. We would set up a system that could quickly sequence a new virus, prepare an RNA interference medication, and rapidly gear up production. We have the knowledge to create such a system, but we have not done so. We need to have something like this in place before it's needed.

Ultimately, however, nanotechnology will provide a completely effective defense against biological viruses.

But doesn't nanotechnology have its own self-replicating danger?

Yes, but that potential won't exist for a couple more decades. The existential threat from engineered biological viruses exists right now.

Okay, but how will we defend against self-replicating nanotechnology?

There are already proposals for ethical standards for nanotechnology that are based on the Asilomar conference standards, which have worked well thus far in biotechnology. These standards will be effective against unintentional dangers. For example, we do not need to provide self-replication to accomplish nanotechnology manufacturing.

But what about intentional abuse, as in terrorism?

We'll need to create a nanotechnology immune system: good nanobots that can protect us from the bad ones.

Blue goo to protect us from the gray goo!

Yes, well put. And ultimately we'll need the nanobots comprising the immune system to be self-replicating. I've debated this particular point with a number of other theorists, but I show in the book why the nanobot immune system we put in place will need the ability to self-replicate. That's basically the same lesson that biological evolution learned.

Ultimately, however, strong AI will provide a completely effective defense against self-replicating nanotechnology.

Okay, what's going to protect us against a pathological AI?

Yes, well, that would have to be a yet more intelligent AI.

This is starting to sound like that story about the universe being on the back of a turtle, and that turtle standing on the back of another turtle, and so on all the way down. So what if this more intelligent AI is unfriendly? Another even smarter AI?

History teaches us that the more intelligent civilization, the one with the most advanced technology, prevails. But I do have an overall strategy for dealing with unfriendly AI, which I discuss in chapter 8.

Okay, so I'll have to read the book for that one. But aren't there limits to exponential growth? You know the story about rabbits in Australia; they didn't keep growing exponentially forever.

There are limits to the exponential growth inherent in each paradigm. Moore's law was not the first paradigm to bring exponential growth to computing, but rather the fifth. In the 1950s they were shrinking vacuum tubes to keep the exponential growth going, and then that paradigm hit a wall. But the exponential growth of computing didn't stop. It kept going, with the new paradigm of transistors taking over. Each time we can see the end of the road for a paradigm, it creates research pressure to create the next one. That's happening now with Moore's law, even though we are still about fifteen years away from the end of our ability to shrink transistors on a flat integrated circuit. We're making dramatic progress in creating the sixth paradigm, which is three-dimensional molecular computing.

But isn't there an overall limit to our ability to expand the power of computation?

Yes, I discuss these limits in the book. The ultimate two-pound computer could provide 10^42 cps, which will be about 10 quadrillion (10^16) times more powerful than all human brains put together today. And that's if we restrict the computer to staying at a cold temperature. If we allow it to get hot, we could improve that by a factor of another 100 million. And, of course, we'll be devoting more than two pounds of matter to computing. Ultimately, we'll use a significant portion of the matter and energy in our vicinity. So, yes, there are limits, but they're not very limiting.
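The "ten quadrillion times" figure follows from the per-brain estimate used earlier in the interview, 10^16 cps, together with a world population rounded to 10^10 (my rounding):

$$\frac{10^{42}\ \text{cps}}{10^{16}\ \text{cps/brain}\times 10^{10}\ \text{brains}} \;=\; 10^{16}.$$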

And when we saturate the ability of the matter and energy in our solar system to support intelligent processes, what happens then?

Then we'll expand to the rest of the Universe.

Which will take a long time, I presume.

Well, that depends on whether we can use wormholes to get to other places in the Universe quickly, or otherwise circumvent the speed of light. If wormholes are feasible, and analyses show they are consistent with general relativity, we could saturate the universe with our intelligence within a couple of centuries. I discuss the prospects for this in chapter 6. But regardless of speculation on wormholes, we'll get to the limits of computing in our solar system within this century. At that point, we'll have expanded the powers of our intelligence by trillions of trillions.

Getting back to life extension, isn't it natural to age, to die?

Other natural things include malaria, Ebola, appendicitis, and tsunamis. Many natural things are worth changing. Aging may be natural, but I don't see anything positive in losing my mental agility, sensory acuity, physical limberness, sexual desire, or any other human ability.

In my view, death is a tragedy. It's a tremendous loss of personality, skills, knowledge, relationships. We've rationalized it as a good thing because that's really been the only alternative we've had. But disease, aging, and death are problems we are now in a position to overcome.

Wait, you said that the golden era of biotechnology was still a decade away. We don't have radical life extension today, do we?


Singularities and Black Holes (Stanford Encyclopedia of …

Posted: February 1, 2016 at 7:44 pm

General relativity, Einstein's theory of space, time, and gravity, allows for the existence of singularities. On this nearly all agree. However, when it comes to the question of how, precisely, singularities are to be defined, there is widespread disagreement. Singularities in some way signal a breakdown of the geometry itself, but this presents an obvious difficulty in referring to a singularity as a thing that resides at some location in spacetime: without a well-behaved geometry, there can be no location. For this reason, some philosophers and physicists have suggested that we should not speak of singularities at all, but rather of singular spacetimes. In this entry, we shall generally treat these two formulations as being equivalent, but we will highlight the distinction when it becomes significant.

Singularities are often conceived of metaphorically as akin to a tear in the fabric of spacetime. The most common attempts to define singularities center on one of two core ideas that this image readily suggests.

The first is that a spacetime has a singularity just in case it contains an incomplete path, one that cannot be continued indefinitely, but draws up short, as it were, with no possibility of extension. (Where is the path supposed to go after it runs into the tear? Where did it come from when it emerged from the tear?). The second is that a spacetime is singular just in case there are points missing from it. (Where are the spacetime points that used to be or should be where the tear is?) Another common thought, often adverted to in discussion of the two primary notions, is that singular structure, whether in the form of missing points or incomplete paths, must be related to pathological behavior of some sort on the part of the singular spacetime's curvature, that is, the fundamental deformation of spacetime that manifests itself as the gravitational field. For example, some measure of the intensity of the curvature (the strength of the gravitational field) may increase without bound as one traverses the incomplete path. Each of these three ideas will be considered in turn below.

There is likewise considerable disagreement over the significance of singularities. Many eminent physicists believe that general relativity's prediction of singular structure signals a serious deficiency in the theory; singularities are an indication that the description offered by general relativity is breaking down. Others believe that singularities represent an exciting new horizon for physicists to aim for and explore in cosmology, holding out the promise of physical phenomena differing so radically from any that we have yet experienced as to ensure, in our attempt to observe, quantify and understand them, a profound advance in our comprehension of the physical world.

While there are competing definitions of spacetime singularities, the most central, and widely accepted, criterion rests on the possibility that some spacetimes contain incomplete paths. Indeed, the rival definitions (in terms of missing points or curvature pathology) still make use of the notion of path incompleteness.

(The reader unfamiliar with general relativity may find it helpful to review the Hole Argument entry's Beginner's Guide to Modern Spacetime Theories, which presents a brief and accessible introduction to the concepts of a spacetime manifold, a metric, and a worldline.)

A path in spacetime is a continuous chain of events through space and time. If I snap my fingers continually, without pause, then the collection of snaps forms a path. The paths used in the most important singularity theorems represent possible trajectories of particles and observers. Such paths are known as world-lines; they consist of the events occupied by an object throughout its lifetime. That the paths be incomplete and inextendible means, roughly speaking, that, after a finite amount of time, a particle or observer following that path would run out of world, as it were; it would hurtle into the tear in the fabric of spacetime and vanish. Alternatively, a particle or observer could leap out of the tear to follow such a path. While there is no logical or physical contradiction in any of this, it appears on the face of it physically suspect for an observer or a particle to be allowed to pop in or out of existence right in the middle of spacetime, so to speak; if that does not suffice for concluding that the spacetime is singular, it is difficult to imagine what else would. At the same time, the ground-breaking work predicting the existence of such pathological paths produced no consensus on what ought to count as a necessary condition for singular structure according to this criterion, and thus no consensus on a fixed definition for it.

In this context, an incomplete path in spacetime is one that is both inextendible and of finite proper length, which means that any particle or observer traversing the path would experience only a finite interval of existence that in principle cannot be continued any longer. However, for this criterion to do the work we want it to, we'll need to limit the class of spacetimes under discussion. Specifically, we shall be concerned with spacetimes that are maximally extended (or just maximal). In effect, this condition says that one's representation of spacetime is as big as it possibly can be; there is, from the mathematical point of view, no way to treat the spacetime as being a proper subset of a larger, more extensive spacetime.

If there is an incomplete path in a spacetime, goes the thinking behind the requirement, then perhaps the path is incomplete only because one has not made one's model of spacetime big enough. If one were to extend the spacetime manifold maximally, then perhaps the previously incomplete path could be extended into the new portions of the larger spacetime, indicating that no physical pathology underlay the incompleteness of the path. The inadequacy would merely reside in the incomplete physical model we had been using to represent spacetime.

An example of a non-maximally extended spacetime can be easily had, along with a sense of why such spacetimes intuitively seem in some way or other deficient. For the moment, imagine spacetime is only two-dimensional, and flat. Now, excise from somewhere on the plane a closed set shaped like Ingrid Bergman. Any path that had passed through one of the points in the removed set is now incomplete.

In this case, the maximal extension of the resulting spacetime is obvious, and does indeed fix the problem of all such incomplete paths: re-incorporate the previously excised set. The seemingly artificial and contrived nature of such examples, along with the ease of rectifying them, seems to militate in favor of requiring spacetimes to be maximal.

Once we've established that we're interested in maximal spacetimes, the next issue is what sort of path incompleteness is relevant for singularities. Here we find a good deal of controversy. Criteria of incompleteness typically look at how some parameter naturally associated with the path (such as its proper length) grows. One generally also places further restrictions on the paths that are worth considering (for example, one rules out paths that could only be taken by particles undergoing unbounded acceleration in a finite period of time). A spacetime is said to be singular if it possesses a path such that the specified parameter associated with that path cannot increase without bound as one traverses the entirety of the maximally extended path. The idea is that the parameter at issue will serve as a marker for something like the time experienced by a particle or observer, and so, if the value of that parameter remains finite along the whole path, then we've run out of path in a finite amount of time, as it were. We've hit an edge or a tear in spacetime.

For a path that is everywhere timelike (i.e., that does not involve speeds at or above that of light), it is natural to take as the parameter the proper time a particle or observer would experience along the path, that is, the time measured along the path by a natural clock, such as one based on the natural vibrational frequency of an atom. (There are also fairly natural choices that one can make for spacelike paths (i.e., those that consist of points at a single time) and null paths (those followed by light signals). However, because the spacelike and null cases add yet another level of difficulty, we shall not discuss them here.) The physical interpretation of this sort of incompleteness for timelike paths is more or less straightforward: a timelike path incomplete with respect to proper time in the future direction would represent the possible trajectory of a massive body that would, say, never age beyond a certain point in its existence (an analogous statement can be made, mutatis mutandis, if the path were incomplete in the past direction).
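For concreteness (a standard formula, not something specific to this entry): the proper time along a timelike worldline $x^{\mu}(\lambda)$, with metric signature $({-},{+},{+},{+})$ and $c=1$, is

$$\tau \;=\; \int \sqrt{-g_{\mu\nu}\,\frac{dx^{\mu}}{d\lambda}\,\frac{dx^{\nu}}{d\lambda}}\;d\lambda,$$

which in flat spacetime reduces to the special-relativistic $\tau = \int \sqrt{1-v^{2}}\,dt$.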

We cannot, however, simply stipulate that a maximal spacetime is singular just in case it contains paths of finite proper length that cannot be extended. Such a criterion would imply that even the flat spacetime described by special relativity is singular, which is surely unacceptable. This would follow because, even in flat spacetime, there are timelike paths with unbounded acceleration which have only a finite proper length (proper time, in this case) and are also inextendible.
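A minimal example of such a path, in flat spacetime with $c=1$ (my own illustration, not one drawn from the entry): let a particle move with speed $v(t)=\sqrt{1-e^{-2t}}$ for $t\ge 0$. Its total proper time,

$$\tau \;=\; \int_{0}^{\infty}\sqrt{1-v(t)^{2}}\;dt \;=\; \int_{0}^{\infty} e^{-t}\,dt \;=\; 1,$$

is finite even though the worldline runs over all coordinate time and so has no future endpoint, while its proper acceleration $\gamma^{3}\,dv/dt \sim e^{t}$ grows without bound. Flat spacetime is certainly not singular, so paths like this must be excluded by the criterion.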

The most obvious option is to define a spacetime as singular if and only if it contains incomplete, inextendible timelike geodesics, i.e., paths representing the trajectories of inertial observers, those in free-fall experiencing no acceleration other than that due to gravity. However, this criterion seems too permissive, in that it would count as non-singular some spacetimes whose geometry seems quite pathological. For example, Geroch (1968) demonstrates that a spacetime can be geodesically complete and yet possess an incomplete timelike path of bounded total acceleration, that is to say, an inextendible path in spacetime traversable by a rocket with a finite amount of fuel, along which an observer could experience only a finite amount of proper time. Surely the intrepid astronaut in such a rocket, who would never age beyond a certain point but who also would never necessarily die or cease to exist, would have just cause to complain that something was singular about this spacetime.

We therefore want a definition that is not restricted to geodesics when deciding whether a spacetime is singular. However, we need some way of overcoming the fact that non-singular spacetimes include inextendible paths of finite proper length. The most widely accepted solution to this problem makes use of a slightly different (and slightly technical) notion of length, known as generalized affine length.[1] Unlike proper length, this generalized affine length depends on some arbitrary choices (roughly speaking, the length will vary depending on the coordinates one chooses). However, if the length is infinite for one such choice, it will be infinite for all other choices. Thus the question of whether a path has a finite or infinite generalized affine length is a perfectly well-defined question, and that is all we'll need.
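Roughly, the construction behind generalized affine length (a sketch of the standard treatment; details are in the entry's footnote) goes as follows: choose a basis $\{e_{a}\}$ at one point of the path, parallel-propagate it along the path, expand the tangent vector in that basis as $V = V^{a}e_{a}$, and set

$$\ell \;=\; \int \Bigl(\sum_{a}(V^{a})^{2}\Bigr)^{1/2} dt.$$

Different choices of initial basis give different values of $\ell$, but whether $\ell$ is finite is the same for every choice, and that is the only feature the definition needs.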

The definition that has won the most widespread acceptance, leading Earman (1995, p. 36) to dub this the "semiofficial definition" of singularities, is the following: a maximal spacetime is singular just in case it contains a maximally extended path of bounded (generalized affine) length.

To say that a spacetime is singular, then, is to say that there is at least one maximally extended path that has a bounded (generalized affine) length. To put it another way, a spacetime is nonsingular when it is complete in the sense that the only reason any given path might not be extendible is that it's already infinitely long (in this technical sense).

The chief problem facing this definition of singularities is that the physical significance of generalized affine length is opaque, and thus it is unclear what the relevance of singularities, defined in this way, might be. It does nothing, for example, to clarify the physical status of the spacetime described by Geroch; it seems as though the new criterion does nothing more than sweep the troubling aspects of such examples under the rug. It does not explain why we ought not take such prima facie puzzling and troubling examples as physically pathological; it merely declares by fiat that they are not.

So where does this leave us? The consensus seems to be that, while it is easy in specific examples to conclude that incomplete paths of various sorts represent singular structure, no entirely satisfactory, strict definition of singular structure in their terms has yet been formulated. For a philosopher, the issues offer deep and rich veins for those contemplating, among other matters, the role of explanatory power in the determination of the adequacy of physical theories, the role of metaphysics and intuition, questions about the nature of the existence attributable to physical entities in spacetime and to spacetime itself, and the status of mathematical models of physical systems in the determination of our understanding of those systems as opposed to in the mere representation of our knowledge of them.

We have seen that one runs into difficulties if one tries to define singularities as things that have locations, and how some of those difficulties can be avoided by defining singular spacetimes in terms of incomplete paths. However, it would be desirable for many reasons to have a characterization of a spacetime singularity in general relativity as, in some sense or other, a spatiotemporal place. If one had a precise characterization of a singularity in terms of points that are missing from spacetime, one might then be able to analyze the structure of the spacetime locally at the singularity, instead of taking troublesome, perhaps ill-defined limits along incomplete paths. Many discussions of singular structure in relativistic spacetimes, therefore, are premised on the idea that a singularity represents a point or set of points that in some sense or other is missing from the spacetime manifold, that spacetime has a hole or tear in it that we could fill in or patch by the appendage of a boundary to it.

In trying to determine whether an ordinary web of cloth has a hole in it, for example, one would naturally rely on the fact that the web exists in space and time. In this case one can, so to speak, point to a hole in the cloth by specifying points of space at a particular moment of time not currently occupied by any of the cloth but which would, as it were, complete the cloth were they so occupied. When trying to conceive of a singular spacetime, however, one does not have the luxury of imagining it embedded in a larger space with respect to which one can say there are points missing from it. In any event, the demand that the spacetime be maximal rules out the possibility of embedding the spacetime manifold in any larger spacetime manifold of any ordinary sort. It would seem, then, that making precise the idea that a singularity is a marker of missing points ought to devolve upon some idea of intrinsic structural incompleteness in the spacetime manifold rather than extrinsic incompleteness with respect to an external structure.

Force of analogy suggests that one define a spacetime to have points missing from it if and only if it contains incomplete, inextendible paths, and then try to use these incomplete paths to construct in some fashion or other new, properly situated points for the spacetime, the addition of which will make the previously inextendible paths extendible. These constructed points would then be our candidate singularities. Missing points on this view would correspond to a boundary for a singular spacetime: actual points of an extended spacetime at which paths incomplete in the original spacetime would terminate. (We will, therefore, alternate between speaking of missing points and speaking of boundary points, with no difference of sense intended.) The goal then is to construct this extended space using the incomplete paths as one's guide.

Now, in trivial examples of spacetimes with missing points such as the one offered before, flat spacetime with a closed set in the shape of Ingrid Bergman excised from it, one does not need any technical machinery to add the missing points back in. One can do it by hand, as it were. Many spacetimes with incomplete paths, however, do not allow missing points to be attached in any obvious way by hand, as this example does. For this program to be viable, which is to say, in order to give substance to the idea that there really are points that in some sense ought to have been included in the spacetime in the first place, we require a physically natural completion procedure based on the incomplete paths that can be applied to incomplete paths in arbitrary spacetimes.

Several problems with this program make themselves felt immediately. Consider, for example, an instance of spacetime representing the final state of the complete gravitational collapse of a spherically symmetric body resulting in a black hole. (See Section 3 below for a description of black holes.) In this spacetime, any timelike path entering the black hole will necessarily be extendible for only a finite amount of proper time; it then runs into the singularity at the center of the black hole. In its usual presentation, however, there are no obvious points missing from the spacetime at all. It is, to all appearances, as complete as the Cartesian plane, excepting only the existence of incomplete curves, no class of which indicates by itself a place in the manifold to add a point to it to make the paths in the class complete. Likewise, in our own spacetime every inextendible, past-directed timelike path is incomplete (and our spacetime is singular): they all run into the Big Bang. Insofar as there is no moment of time at which the Big Bang occurred (there is no moment of time at which time began, so to speak), there is no point to serve as the past endpoint of such a path.

The reaction to the problems faced by these boundary constructions is varied, to say the least, ranging from blithe acceptance of the pathology (Clarke 1993), to the attitude that there is no satisfying boundary construction currently available without ruling out the possibility of better ones in the future (Wald 1984), to not even mentioning the possibility of boundary constructions when discussing singular structure (Joshi 1993), to rejection of the need for such constructions at all (Geroch, Can-bin and Wald, 1982).

Nonetheless, many eminent physicists seem convinced that general relativity stands in need of such a construction, and have exerted extraordinary efforts in the service of trying to devise such constructions. This fact raises several fascinating philosophical problems. Though physicists offer as strong motivation the possibility of gaining the ability to analyze singular phenomena locally in a mathematically well-defined manner, they more often speak in terms that strongly suggest they suffer a metaphysical, even an ontological, itch that can be scratched only by the sharp point of a localizable, spatiotemporal entity serving as the locus of their theorizing. However, even were such a construction forthcoming, what sort of physical and theoretical status could accrue to these missing points? They would not be idealizations of a physical system in any ordinary sense of the term, insofar as they would not represent a simplified model of a system formed by ignoring various of its physical features, as, for example, one may idealize the modeling of a fluid by ignoring its viscosity. Neither would they seem necessarily to be only convenient mathematical fictions, as, for example, are the physically impossible dynamical evolutions of a system one integrates over in the variational derivation of the Euler-Lagrange equations, for, as we have remarked, many physicists and philosophers seem eager to find such a construction for the purpose of bestowing substantive and clear ontic status on singular structure. What sorts of theoretical entities, then, could they be, and how could they serve in physical theory?

While the point of this project may seem at bottom identical to the path incompleteness account discussed in Section 1.1, insofar as singular structure will be defined by the presence of incomplete, inextendible paths, there is a crucial semantic and logical difference between the two. Here, the existence of the incomplete path is not taken itself to constitute the singular structure, but rather serves only as a marker for the presence of singular structure in the sense of missing points: the incomplete path is incomplete because it runs into a hole in the spacetime that, were it filled, would allow the path to be continued; this hole is the singular structure, and the points constructed to fill it compose its locus.

Currently, however, there seems to be even less consensus on how (and whether) one should define singular structure in terms of missing points than there is regarding definitions in terms of path incompleteness. Moreover, this project also faces even more technical and philosophical problems. For these reasons, path incompleteness is generally considered the default definition of singularities.

While path incompleteness seems to capture an important aspect of the intuitive picture of singular structure, it completely ignores another seemingly integral aspect of it: curvature pathology. If there are incomplete paths in a spacetime, it seems that there should be a reason that the path cannot go farther. The most obvious candidate explanation of this sort is something going wrong with the dynamical structure of the spacetime, which is to say, with the curvature of the spacetime. This suggestion is bolstered by the fact that local measures of curvature do in fact blow up as one approaches the singularity of a standard black hole or the big bang singularity. However, there is one problem with this line of thought: no species of curvature pathology we know how to define is either necessary or sufficient for the existence of incomplete paths. (For a discussion of defining singularities in terms of curvature pathologies, see Curiel 1998.)

To make the notion of curvature pathology more precise, we will use the manifestly physical idea of tidal force. Tidal force is generated by the differential in intensity of the gravitational field, so to speak, at neighboring points of spacetime. For example, when you stand, your head is farther from the center of the Earth than your feet, so it feels a slightly (in practice negligibly) smaller pull downward than your feet do. (For a diagram illustrating the nature of tidal forces, see Figure 9 of the entry on Inertial Frames.) Tidal forces are a physical manifestation of spacetime curvature, and one gets direct observational access to curvature by measuring these forces. For our purposes, it is important that in regions of extreme curvature, tidal forces can grow without bound.
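Two standard formulas (general textbook material, not specific to this entry) make the idea quantitative. In Newtonian terms, the difference in gravitational acceleration across a body of height $h$ at distance $r$ from a mass $M$ is approximately

$$\Delta a \;\approx\; \frac{2GM}{r^{3}}\,h,$$

and in general relativity the same physics is captured by the geodesic deviation equation, which ties the relative acceleration of neighboring free-falling particles directly to the curvature tensor:

$$\frac{D^{2}\xi^{\mu}}{d\tau^{2}} \;=\; -R^{\mu}{}_{\nu\rho\sigma}\,u^{\nu}\xi^{\rho}u^{\sigma}.$$

Unbounded tidal force along a path corresponds to curvature components, as measured in the traveller's frame, growing without bound.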

It is perhaps surprising that the state of motion of the observer as it traverses an incomplete path (e.g., whether the observer is accelerating or spinning) can be decisive in determining the physical response of an object to the curvature pathology. Whether the object is spinning on its axis or not, for example, or accelerating slightly in the direction of motion, may determine whether the object gets crushed to zero volume along such a path or whether it survives (roughly) intact all the way along it, as in examples offered by Ellis and Schmidt (1977). The effect of the observer's state of motion on his or her experience of tidal forces can be even more pronounced than this. There are examples of spacetimes in which an observer cruising along a certain kind of path would experience unbounded tidal forces and so be torn apart, while another observer, in a certain technical sense approaching the same limiting point as the first observer, accelerating and decelerating in just the proper way, would experience a perfectly well-behaved tidal force, though she would approach as near as one likes to the other fellow who is in the midst of being ripped to shreds.[2]

Things can get stranger still. There are examples of incomplete geodesics contained entirely within a well-defined area of a spacetime, each having as its limiting point an honest-to-goodness point of spacetime, such that an observer freely falling along such a path would be torn apart by unbounded tidal forces; it can easily be arranged in such cases, however, that a separate observer, who actually travels through the limiting point, will experience perfectly well-behaved tidal forces.[3] Here we have an example of an observer being ripped apart by unbounded tidal forces right in the middle of spacetime, as it were, while other observers cruising peacefully by could reach out to touch him or her in solace during the final throes of agony. This example also provides a nice illustration of the inevitable difficulties attendant on attempts to localize singular structure.

It would seem, then, that curvature pathology as standardly quantified is not in any physical sense a well-defined property of a region of spacetime simpliciter. When we consider the curvature of four-dimensional spacetime, the motion of the device that we use to probe a region (as well as the nature of the device) becomes crucially important for the question of whether pathological behavior manifests itself. This fact raises questions about the nature of quantitative measures of properties of entities in general relativity, and what ought to count as observable, in the sense of reflecting the underlying physical structure of spacetime. Because apparently pathological phenomena may occur or not depending on the types of measurements one is performing, it does not seem that this pathology reflects anything about the state of spacetime itself, or at least not in any localizable way. What then may it reflect, if anything? Much work remains to be done by both physicists and by philosophers in this area: the determination of the nature of physical quantities in general relativity and what ought to count as an observable with intrinsic physical significance. See Bergmann (1977), Bergmann and Komar (1962), Bertotti (1962), Coleman and Korté (1992), and Rovelli (1991, 2001, 2002a, 2002b) for discussion of many different topics in this area, approached from several different perspectives.

When considering the implications of spacetime singularities, it is important to note that we have good reasons to believe that the spacetime of our universe is singular. In the late 1960s, Hawking, Penrose, and Geroch proved several singularity theorems, using the path-incompleteness definition of singularities (see, e.g., Hawking and Ellis 1973). These theorems showed that if certain reasonable premises were satisfied, then in certain circumstances singularities could not be avoided. Notable among these conditions was the positive energy condition that captures the idea that energy is never negative. These theorems indicate that our universe began with an initial singularity, the Big Bang, 13.7 billion years ago. They also indicate that in certain circumstances (discussed below) collapsing matter will form a black hole with a central singularity.
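One standard way the "positive energy" premise is formalized is the weak energy condition (the theorems use this and closely related conditions): every observer measures a non-negative energy density, i.e.,

$$T_{\mu\nu}\,\xi^{\mu}\xi^{\nu} \;\ge\; 0 \quad \text{for every timelike vector } \xi^{\mu},$$

where $T_{\mu\nu}$ is the stress-energy tensor.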

Should these results lead us to believe that singularities are real? Many physicists and philosophers resist this conclusion. Some argue that singularities are too repugnant to be real. Others argue that the singular behavior at the center of black holes and at the beginning of time points to the limit of the domain of applicability of general relativity. However, some are inclined to take general relativity at its word, and simply accept its prediction of singularities as a surprising but perfectly consistent account of the geometry of our world.

As we have seen, there is no commonly accepted, strict definition of singularity, no physically reasonable definition of missing point, and no necessary connection of singular structure, at least as characterized by the presence of incomplete paths, to the presence of curvature pathology. What conclusions should be drawn from this state of affairs? There seem to be two primary responses, that of Clarke (1993) and Earman (1995) on the one hand, and that of Geroch, Can-bin and Wald (1982), and Curiel (1998) on the other. The former holds that the mettle of physics and philosophy demands that we find a precise, rigorous and univocal definition of singularity. On this view, the host of philosophical and physical questions surrounding general relativity's prediction of singular structure would best be addressed with such a definition in hand, so as better to frame and answer these questions with precision in its terms, and thus perhaps find other, even better questions to pose and attempt to answer. The latter view is perhaps best summarized by a remark of Geroch, Can-bin and Wald (1982): "The purpose of a construction [of singular points], after all, is merely to clarify the discussion of various physical issues involving singular space-times: general relativity as it stands is fully viable with no precise notion of singular points." On this view, the specific physics under investigation in any particular situation should dictate which definition of singularity to use in that situation, if, indeed, any at all.

In sum, the question becomes the following: Is there a need for a single, blanket definition of singularity or does the urge for one bespeak only an old Platonic, essentialist prejudice? This question has obvious connections to the broader question of natural kinds in science. One sees debates similar to those canvassed above when one tries to find, for example, a strict definition of biological species. Clearly part of the motivation for searching for a single exceptionless definition is the impression that there is some real feature of the world (or at least of our spacetime models) which we can hope to capture precisely. Further, we might hope that our attempts to find a rigorous and exceptionless definition will help us to better understand the feature itself. Nonetheless, it is not entirely clear why we shouldn't be happy with a variety of types of singular structure, and with the permissive attitude that none should be considered the right definition of singularities.

Even without an accepted, strict definition of singularity for relativistic spacetimes, the question can be posed of what it may mean to ascribe existence to singular structure under any of the available open possibilities. It is not farfetched to think that answers to this question may bear on the larger question of the existence of spacetime points in general.

It would be difficult to argue that an incomplete path in a maximal relativistic spacetime does not exist in at least some sense of the term. It is not hard to convince oneself, however, that the incompleteness of the path does not exist at any particular point of the spacetime in the same way, say, as this glass of beer at this moment exists at this point of spacetime. If there were a point on the manifold where the incompleteness of the path could be localized, surely that would be the point at which the incomplete path terminated. But if there were such a point, then the path could be extended by having it pass through that point. It is perhaps this fact that lies behind much of the urgency surrounding the attempt to define singular structure as missing points.

The demand that singular structure be localized at a particular place bespeaks an old Aristotelian substantivalism that invokes the maxim, "To exist is to exist in space and time" (Earman 1995, p. 28). Aristotelian substantivalism here refers to the idea contained in Aristotle's contention that everything that exists is a substance and that all substances can be qualified by the Aristotelian categories, two of which are location in time and location in space. One need not consider anything so outré as incomplete, inextendible paths, though, in order to produce examples of entities that seem undeniably to exist in some sense of the term or other, and yet which cannot have any even vaguely determined location in time and space predicated of them. Indeed, several essential features of a relativistic spacetime, singular or not, cannot be localized in the way that an Aristotelian substantivalist would demand. For example, the Euclidean (or non-Euclidean) nature of a space is not something with a precise location. Likewise, various spacetime geometrical structures (such as the metric, the affine structure, etc.) cannot be localized in the way that the Aristotelian would demand. The existential status of such entities vis-à-vis more traditionally considered objects is an open and largely ignored issue. Because of the way the issue of singular structure in relativistic spacetimes ramifies into almost every major open question in relativistic physics today, both physical and philosophical, it provides a peculiarly rich and attractive focus for these sorts of questions.

At the heart of all of our conceptions of a spacetime singularity is the notion of some sort of failing: a path that disappears, points that are torn out, spacetime curvature that becomes pathological. However, perhaps the failing lies not in the spacetime of the actual world (or of any physically possible world), but rather in the theoretical description of the spacetime. That is, perhaps we shouldn't think that general relativity is accurately describing the world when it posits singular structure.

Indeed, in most scientific arenas, singular behavior is viewed as an indication that the theory being used is deficient. It is therefore common to claim that general relativity, in predicting that spacetime is singular, is predicting its own demise, and that classical descriptions of space and time break down at black hole singularities and at the Big Bang. Such a view seems to deny that singularities are real features of the actual world, and to assert that they are instead merely artifices of our current (flawed) physical theories. A more fundamental theory, presumably a full theory of quantum gravity, will be free of such singular behavior. For example, Ashtekar and Bojowald (2006) and Ashtekar, Pawlowski and Singh (2006) argue that, in the context of loop quantum gravity, neither the Big Bang singularity nor black hole singularities appear.

On this reading, many of the earlier worries about the status of singularities become moot. Singularities don't exist, nor is the question of how to define them, as such, particularly urgent. Instead, the pressing question is this: what marks the boundary of the domain of applicability of general relativity? We pick up this question below in Section 5 on quantum black holes, for it is in this context that many of the explicit debates over the limits of general relativity play out.

The simplest picture of a black hole is that of a body whose gravity is so strong that nothing, not even light, can escape from it. Bodies of this type are already possible in the familiar Newtonian theory of gravity. The escape velocity of a body is the velocity at which an object would have to travel to escape the gravitational pull of the body and continue flying out to infinity. Because the escape velocity is measured from the surface of an object, it becomes higher if a body contracts down and becomes more dense. (Under such contraction, the mass of the body remains the same, but its surface gets closer to its center of mass; thus the gravitational force at the surface increases.) If the object were to become sufficiently dense, the escape velocity could therefore exceed the speed of light, and light itself would be unable to escape.
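
The Newtonian argument can be made quantitative with the standard textbook formulas (not quoted in the article itself): equating kinetic energy with the gravitational potential energy to be overcome gives the escape velocity, and setting that velocity equal to the speed of light gives the critical radius,

\[
\tfrac{1}{2} m v_{\rm esc}^2 = \frac{GMm}{R}
\;\;\Longrightarrow\;\;
v_{\rm esc} = \sqrt{\frac{2GM}{R}},
\qquad
v_{\rm esc} = c
\;\;\Longrightarrow\;\;
R = \frac{2GM}{c^2}.
\]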

This much of the argument makes no appeal to relativistic physics, and the possibility of such classical black holes was noted in the late 18th century by Michell (1784) and Laplace (1796). These Newtonian black holes do not precipitate quite the same sense of crisis as do relativistic black holes. While light hurled ballistically from the surface of the collapsed body cannot escape, a rocket with powerful motors firing could still gently pull itself free.

Taking relativistic considerations into account, however, we find that black holes are far more exotic entities. Given the usual understanding that relativity theory rules out any physical process going faster than light, we conclude that not only is light unable to escape from such a body: nothing would be able to escape this gravitational force. That includes the powerful rocket that could escape a Newtonian black hole. Further, once the body has collapsed down to the point where its escape velocity is the speed of light, no physical force whatsoever could prevent the body from continuing to collapse down further, for this would be equivalent to accelerating something to speeds beyond that of light. Thus once this critical amount of collapse is reached, the body will get smaller and smaller, more and more dense, without limit. It has formed a relativistic black hole; at its center lies a spacetime singularity.

For any given body, this critical stage of unavoidable collapse occurs when the object has collapsed to within its so-called Schwarzschild radius, which is proportional to the mass of the body. Our sun has a Schwarzschild radius of approximately three kilometers; the Earth's Schwarzschild radius is a little less than a centimeter. This means that if you could collapse all the Earth's matter down to a sphere the size of a pea, it would form a black hole. It is worth noting, however, that one does not need an extremely high density of matter to form a black hole if one has enough mass. Thus for example, if one has a couple hundred million solar masses of water at its standard density, it will be contained within its Schwarzschild radius and will form a black hole. Some supermassive black holes at the centers of galaxies are thought to be even more massive than this, at several billion solar masses.
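
These figures are easy to check numerically. The short sketch below (which assumes standard values for G, c, the solar mass, and the Earth's mass; none of these constants appear in the article) reproduces the Schwarzschild radii quoted above and estimates the mass of water at ordinary density that would already lie within its own Schwarzschild radius:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
M_EARTH = 5.972e24  # Earth mass, kg

def schwarzschild_radius(mass_kg):
    """Schwarzschild radius r_s = 2GM/c^2, in metres."""
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(M_SUN) / 1e3)    # ~2.95 km for the Sun
print(schwarzschild_radius(M_EARTH) * 1e2)  # ~0.89 cm for the Earth

# Mass M of a ball of water (density ~1000 kg/m^3) whose radius equals its
# own Schwarzschild radius: solving (4/3) * pi * r_s^3 * rho = M for M gives
# M = sqrt(3 c^6 / (32 pi G^3 rho)).
rho_water = 1000.0
M_water = math.sqrt(3 * c**6 / (32 * math.pi * G**3 * rho_water))
print(M_water / M_SUN)  # ~1.4e8 solar masses, i.e. of order a hundred million
```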

The event horizon of a black hole is the point of no return. That is, it comprises the last events in the spacetime around the singularity at which a light signal can still escape to the external universe. For a standard (uncharged, non-rotating) black hole, the event horizon lies at the Schwarzschild radius. A flash of light that originates at an event inside the black hole will not be able to escape, but will instead end up in the central singularity of the black hole. A light flash originating at an event outside of the event horizon will escape, but it will be more strongly red-shifted the closer its origin is to the horizon. An outgoing beam of light that originates at an event on the event horizon itself, by definition, remains on the event horizon until the temporal end of the universe.

General relativity tells us that clocks running at different locations in a gravitational field will generally not agree with one another. In the case of a black hole, this manifests itself in the following way. Imagine someone falls into a black hole, and, while falling, she flashes a light signal to us every time her watch hand ticks. Observing from a safe distance outside the black hole, we would find the times between the arrival of successive light signals to grow larger without limit. That is, it would appear to us that time were slowing down for the falling person as she approached the event horizon. The ticking of her watch (and every other process as well) would seem to go slower and slower as she got closer and closer to the event horizon. We would never actually see the light signals she emits when she crosses the event horizon; instead, she would seem to be eternally frozen just above the horizon. (This talk of seeing the person is somewhat misleading, because the light coming from the person would rapidly become severely red-shifted, and soon would not be practically detectable.)
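
For a non-rotating black hole, this slowing of clocks can be stated precisely using the standard Schwarzschild-metric relation (a textbook result, not derived in the article): a clock held static at radial coordinate r ticks off proper time

\[
d\tau = \sqrt{1 - \frac{r_s}{r}}\; dt,
\]

where \(r_s\) is the Schwarzschild radius and t is the time registered by a distant observer. The factor goes to zero as r approaches \(r_s\), which is why signals emitted at regular intervals near the horizon arrive at the distant observer with ever longer gaps between them.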

From the perspective of the infalling person, however, nothing unusual happens at the event horizon. She would experience no slowing of clocks, nor see any evidence that she is passing through the event horizon of a black hole. Her passing the event horizon is simply the last moment in her history at which a light signal she emits would be able to escape from the black hole. The concept of an event horizon is a global concept that depends on how the events on the event horizon relate to the overall structure of the spacetime. Locally there is nothing noteworthy about the events at the event horizon. If the black hole is fairly small, then the tidal gravitational forces there would be quite strong. This just means that the gravitational pull on one's feet, closer to the singularity, would be much stronger than the gravitational pull on one's head. That difference of force would be great enough to pull one apart. For a sufficiently large black hole, the difference in gravitation at one's feet and head would be small enough for these tidal forces to be negligible.
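
The dependence of tidal stress on the size of the black hole can be seen from a rough Newtonian estimate (offered only as a heuristic, not a claim made in the article): the difference in gravitational acceleration across a body of height L at radius r is roughly

\[
\Delta a \approx \frac{2GML}{r^3},
\qquad
\Delta a\Big|_{r = 2GM/c^2} \approx \frac{L\,c^6}{4\,G^2 M^2} \;\propto\; \frac{1}{M^2},
\]

so the tidal stress at the horizon falls off as the square of the mass: negligible for a supermassive black hole, lethal for a small one.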

As in the case of singularities, alternative definitions of black holes have been explored. These definitions typically focus on the one-way nature of the event horizon: things can go in, but nothing can get out. Such accounts have not won widespread support, however, and we do not have the space here to elaborate on them further.[4]

One of the most remarkable features of relativistic black holes is that they are purely gravitational entities. A pure black hole spacetime contains no matter whatsoever. It is a vacuum solution to the Einstein field equations, which just means that it is a solution of Einstein's gravitational field equations in which the matter density is everywhere zero. (Of course, one can also consider a black hole with matter present.) In pre-relativistic physics we think of gravity as a force produced by the mass contained in some matter. In the context of general relativity, however, we do away with gravitational force, and instead postulate a curved spacetime geometry that produces all the effects we standardly attribute to gravity. Thus a black hole is not a thing in spacetime; it is instead a feature of spacetime itself.

A careful definition of a relativistic black hole will therefore rely only on the geometrical features of spacetime. We'll need to be a little more precise about what it means to be a region from which nothing, not even light, can escape. First, there will have to be someplace to escape to if our definition is to make sense. The most common method of making this idea precise and rigorous employs the notion of escaping to infinity. If a particle or light ray cannot travel arbitrarily far from a definite, bounded region in the interior of spacetime but must remain always in the region, the idea is, then that region is one of no escape, and is thus a black hole. The boundary of the region is called the event horizon. Once a physical entity crosses the event horizon into the hole, it never crosses it again.
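
In the standard mathematical treatment of asymptotically flat spacetimes, this idea is written compactly (a conventional formulation, included here only for reference): the black hole region B is the part of the spacetime M that is not in the causal past of future null infinity \(\mathscr{I}^+\),

\[
B = M \setminus J^-(\mathscr{I}^+),
\]

and the event horizon is the boundary of B.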

Second, we will need a clear notion of the geometry that allows for escape, or makes such escape impossible. For this, we need the notion of the causal structure of spacetime. At any event in the spacetime, the possible trajectories of all light signals form a cone (or, more precisely, the four-dimensional analog of a cone). Since light travels at the fastest speed allowed in the spacetime, these cones map out the possible causal processes in the spacetime. If an occurrence at an event A is able to causally affect another occurrence at event B, there must be a continuous trajectory in spacetime from event A to event B such that the trajectory lies in or on the lightcones of every event along it. (For more discussion, see the Supplementary Document: Lightcones and Causal Structure.)

Figure 1 is a spacetime diagram of a sphere of matter collapsing down to form a black hole. The curvature of the spacetime is represented by the tilting of the light cones away from 45 degrees. Notice that the light cones tilt inwards more and more as one approaches the center of the black hole. The jagged line running vertically up the center of the diagram depicts the black hole central singularity. As we emphasized in Section 1, this is not actually part of the spacetime, but might be thought of as an edge of space and time itself. Thus, one should not imagine the possibility of traveling through the singularity; this would be as nonsensical as something's leaving the diagram (i.e., the spacetime) altogether.

What makes this a black hole spacetime is the fact that it contains a region from which it is impossible to exit while traveling at or below the speed of light. This region is marked off by the events at which the outside edge of the forward light cone points straight upward. As one moves inward from these events, the light cone tilts so much that one is always forced to move inward toward the central singularity. This point of no return is, of course, the event horizon; and the spacetime region inside it is the black hole. In this region, one inevitably moves towards the singularity; the impossibility of avoiding the singularity is exactly like the impossibility of preventing ourselves from moving forward in time.

Notice that the matter of the collapsing star disappears into the black hole singularity. All the details of the matter are completely lost; all that is left is the geometrical properties of the black hole, which can be identified with mass, charge, and angular momentum. Indeed, there are so-called no-hair theorems which make rigorous the claim that a black hole in equilibrium is entirely characterized by its mass, its angular momentum, and its electric charge. This has the remarkable consequence that no matter what the particulars may be of any body that collapses to form a black hole (it may be as intricate, complicated, and Byzantine as one likes, composed of the most exotic materials), the final result after the system has settled down to equilibrium will be identical in every respect to a black hole that formed from the collapse of any other body having the same total mass, angular momentum, and electric charge. For this reason Chandrasekhar (1983) called black holes the most perfect objects in the universe.

While spacetime singularities in general are frequently viewed with suspicion, physicists often offer the reassurance that we expect most of them to be hidden away behind the event horizons of black holes. Such singularities therefore could not affect us unless we were actually to jump into the black hole. A naked singularity, on the other hand, is one that is not hidden behind an event horizon. Such singularities appear much more threatening because they are uncontained, accessible to vast areas of spacetime.

The heart of the worry is that singular structure would seem to signify some sort of breakdown in the fundamental structure of spacetime to such a profound depth that it could wreak havoc on any region of the universe to which it was visible. Because the structures that break down in singular spacetimes are required for the formulation of our known physical laws in general, and of initial-value problems for individual physical systems in particular, one such fear is that determinism would collapse entirely wherever the singular breakdown were causally visible. As Earman (1995, pp. 65-6) characterizes the worry, nothing would seem to stop the singularity from disgorging any manner of unpleasant jetsam, from TVs showing Nixon's Checkers Speech to old lost socks, in a way completely undetermined by the state of spacetime in any region whatsoever, and in such a way as to render strictly indeterminable all regions in causal contact with what it spews out.

One form that such a naked singularity could take is that of a white hole, which is a time-reversed black hole. Imagine taking a film of a black hole forming, and various astronauts, rockets, etc. falling into it. Now imagine that film being run backwards. This is the picture of a white hole: one starts with a naked singularity, out of which might appear people, artifacts, and eventually a star bursting forth. Absolutely nothing in the causal past of such a white hole would determine what would pop out of it (just as items that fall into a black hole leave no trace on the future). Because the field equations of general relativity do not pick out a preferred direction of time, if the formation of a black hole is allowed by the laws of spacetime and gravity, then white holes will also be permitted by these laws.

Roger Penrose famously suggested that although naked singularities are compatible with general relativity, in physically realistic situations naked singularities will never form; that is, any process that results in a singularity will safely deposit that singularity behind an event horizon. This suggestion, known as the Cosmic Censorship Hypothesis, has met with a fair degree of success and popularity; however, it also faces several difficulties.

Penrose's original formulation relied on black holes: a suitably generic singularity will always be contained in a black hole (and so causally invisible outside the black hole). As the counter-examples to various ways of articulating the hypothesis in terms of this idea have accumulated over the years, it has gradually been abandoned.

More recent approaches either begin with an attempt to provide necessary and sufficient conditions for cosmic censorship itself, yielding an indirect characterization of a naked singularity as any phenomenon violating those conditions, or else they begin with an attempt to provide a characterization of a naked singularity and so conclude with a definite statement of cosmic censorship as the absence of such phenomena. The variety of proposals made using both approaches is too great to canvass here; the interested reader is referred to Joshi (2003) for a review of the current state of the art, and to Earman (1995, ch. 3) for a philosophical discussion of many of the proposals.

The challenge of uniting quantum theory and general relativity in a successful theory of quantum gravity has arguably been the greatest challenge facing theoretical physics for the past eighty years. One avenue that has seemed particularly promising here is the attempt to apply quantum theory to black holes. This is in part because, as completely gravitational entities, black holes present an especially pure case to study the quantization of gravity. Further, because the gravitational force grows without bound as one nears a standard black hole singularity, one would expect quantum gravitational effects (which should come into play at extremely high energies) to manifest themselves in black holes.

Studies of quantum mechanics in black hole spacetimes have revealed several surprises that threaten to overturn our traditional views of space, time, and matter. A remarkable parallel between the laws of black hole mechanics and the laws of thermodynamics indicates that spacetime and thermodynamics may be linked in a fundamental (and previously unimagined) way. This linkage hints at a fundamental limitation on how much entropy can be contained in a spatial region. A further topic of foundational importance is found in the so-called information loss paradox, which suggests that standard quantum evolution will not hold when black holes are present. While many of these suggestions are somewhat speculative, they nevertheless touch on deep issues in the foundations of physics.

In the early 1970s, Bekenstein argued that the second law of thermodynamics requires one to assign a finite entropy to a black hole. His worry was that one could collapse any amount of highly entropic matter into a black hole (which, as we have emphasized, is an extremely simple object), leaving no trace of the original disorder. This seems to violate the second law of thermodynamics, which asserts that the entropy (disorder) of a closed system can never decrease. However, adding mass to a black hole will increase its size, which led Bekenstein to suggest that the area of a black hole is a measure of its entropy. This conviction grew when, in 1972, Hawking proved that the surface area of a black hole, like the entropy of a closed system, can never decrease.

The similarity between black holes and thermodynamic systems was considerably strengthened when Bardeen, Carter, and Hawking (1973) proved three other laws of black hole mechanics that parallel exactly the first, third, and zeroth laws of thermodynamics. Although this parallel was extremely suggestive, taking it seriously would require one to assign a non-zero temperature to a black hole, which all then agreed was absurd: All hot bodies emit thermal radiation (like the heat given off from a stove). However, according to general relativity, a black hole ought to be a perfect sink for energy, mass, and radiation, insofar as it absorbs everything (including light), and emits nothing (including light). The only temperature one might be able to assign it would be absolute zero.

This seemingly obvious conclusion was overthrown when Hawking (1974, 1975) demonstrated that black holes are not completely black after all. His analysis of quantum fields in black hole spacetimes revealed that black holes will emit particles: black holes generate heat at a temperature that is inversely proportional to their mass and directly proportional to their so-called surface gravity. A black hole glows like a lump of smoldering coal even though light should not be able to escape from it! The temperature of this Hawking radiation is extremely low for stellar-scale black holes, but for very small black holes the temperatures would be quite high. This means that a very small black hole should rapidly evaporate away, as all of its mass-energy is emitted in high-temperature Hawking radiation.
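
The standard expression for the Hawking temperature makes this inverse dependence on mass explicit (the numerical value is the usual estimate for a solar-mass hole, not a figure given in the article):

\[
T_H = \frac{\hbar c^3}{8\pi G M k_B} \approx 6\times 10^{-8}\,\mathrm{K}\,\left(\frac{M_\odot}{M}\right),
\]

far colder than the cosmic microwave background for any stellar or supermassive black hole, but enormous for a sufficiently small one.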

These results were taken to establish that the parallel between the laws of black hole mechanics and the laws of thermodynamics was not a mere fluke: it seems they really are getting at the same deep physics. The Hawking effect establishes that the surface gravity of a black hole can indeed be interpreted as a physical temperature. Further, mass in black hole mechanics is mirrored by energy in thermodynamics, and we know from relativity theory that mass and energy are actually equivalent. Connecting the two sets of laws also requires linking the surface area of a black hole with entropy, as Bekenstein had suggested. This black hole entropy is called its Bekenstein entropy, and is proportional to the area of the event horizon of the black hole.
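
Written out, the Bekenstein (or Bekenstein-Hawking) entropy is proportional to the area A of the event horizon measured in units of the Planck length (the factor of one quarter is the conventional one):

\[
S_{BH} = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4\,\ell_P^2},
\qquad
\ell_P = \sqrt{\frac{G\hbar}{c^3}}.
\]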

In the context of thermodynamic systems containing black holes, one can construct apparent violations of the laws of thermodynamics, and of the laws of black hole mechanics, if one considers these laws to be independent of each other. So for example, if a black hole gives off radiation through the Hawking effect, then it will lose mass in apparent violation of the area increase theorem. Likewise, as Bekenstein argued, we could violate the second law of thermodynamics by dumping matter with high entropy into a black hole. However, the price of dropping matter into the black hole is that its event horizon will increase in size. Likewise, the price of allowing the event horizon to shrink by giving off Hawking radiation is that the entropy of the external matter fields will go up. We can consider a combination of the two laws that stipulates that the sum of a black hole's area and the entropy of the external system can never decrease. This is the generalized second law of (black hole) thermodynamics.
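
Schematically, the generalized second law requires of any physical process that

\[
\delta\left(S_{\rm outside} + S_{BH}\right) \;\ge\; 0,
\]

where \(S_{\rm outside}\) is the ordinary thermodynamic entropy of matter and radiation outside all horizons and \(S_{BH}\) is the Bekenstein entropy of the black holes. (This compact statement follows the usual formulation; the article itself states the law only in words.)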

From the time that Bekenstein first proposed that the area of a black hole could be a measure of its entropy, the proposal was known to face difficulties that appeared insurmountable. Geroch (1971) proposed a scenario that seems to allow a violation of the generalized second law. If we have a box full of energetic radiation with a high entropy, that box will have a certain weight as it is attracted by the gravitational force of a black hole. One can use this weight to drive an engine to produce energy (e.g., to produce electricity) while slowly lowering the box towards the event horizon of the black hole. This process extracts energy, but not entropy, from the radiation in the box; once the box reaches the event horizon itself, it can have an arbitrarily small amount of energy remaining. If one then opens the box to let the radiation fall into the black hole, the size of the event horizon will not increase any appreciable amount (because the mass-energy of the black hole has barely been increased), but the thermodynamic entropy outside the black hole has decreased. Thus we seem to have violated the generalized second law.

The question of whether we should be troubled by this possible violation of the generalized law touches on several issues in the foundations of physics. The status of the ordinary second law of thermodynamics is itself a thorny philosophical puzzle, quite apart from the issue of black holes. Many physicists and philosophers deny that the ordinary second law holds universally, so one might question whether we should insist on its validity in the presence of black holes. On the other hand, the second law clearly captures some significant feature of our world, and the analogy between black hole mechanics and thermodynamics seems too rich to be thrown out without a fight. Indeed, the generalized second law is our only law that joins together the fields of general relativity, quantum mechanics, and thermodynamics. As such, it seems the most promising window we have into the truly fundamental nature of the physical world.

In response to this apparent violation of the generalized second law, Bekenstein pointed out that one could never get all of the radiation in the box arbitrarily close to the event horizon, because the box itself would have to have some volume. This observation by itself is not enough to save the second law, however, unless there is some limit to how much entropy can be contained in a given volume of space. Current physics poses no such limit, so Bekenstein (1981) postulated that the limit would be enforced by the underlying theory of quantum gravity, which black hole thermodynamics is providing a glimpse of.
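
The limit Bekenstein proposed is usually stated as a bound on the entropy S of a system of total energy E that fits inside a sphere of radius R (this is the standard form of the Bekenstein bound, quoted here for reference; the article itself does not write it out):

\[
S \;\le\; \frac{2\pi k_B R E}{\hbar c}.
\]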

However, Unruh and Wald (1982) argue that there is a less ad hoc way to save the generalized second law. The heat given off by any hot body, including a black hole, will produce a kind of buoyancy force on any object (like our box) that blocks thermal radiation. This means that when we are lowering our box of high-entropy radiation towards the black hole, the optimal place to release that radiation will not be just above the event horizon, but rather at the floating point for the container. Unruh and Wald demonstrate that this fact is enough to guarantee that the decrease in outside entropy will be compensated by an increase in the area of the event horizon. It therefore seems that there is no reliable way to violate the generalized second law of black hole thermodynamics.

There is, however, a further reason that one might think that black hole thermodynamics implies a fundamental bound on the amount of entropy that can be contained in a region. Suppose that there were more entropy in some region of space than the Bekenstein entropy of a black hole of the same size. Then one could collapse that entropic matter into a black hole, which obviously could not be larger than the size of the original region (or the mass-energy would have already formed a black hole). But this would violate the generalized second law, for the Bekenstein entropy of the resulting black hole would be less than that of the matter that formed it. Thus the second law appears to imply a fundamental limit on how much entropy a region can contain. If this is right, it seems to be a deep insight into the nature of quantum gravity.

Arguments along these lines led 't Hooft (1985) to postulate the Holographic Principle (though the name is due to Susskind). This principle claims that the number of fundamental degrees of freedom in any spherical region is given by the Bekenstein entropy of a black hole of the same size as that region. The Holographic Principle is notable not only because it postulates a well-defined, finite number of degrees of freedom for any region, but also because this number grows as the area surrounding the region, and not as the volume of the region. This flies in the face of standard physical pictures, whether of particles or fields. On those pictures, the entropy measures the number of possible ways something can be, and that number of ways increases with the volume of any spatial region. The Holographic Principle does get some support from a result in string theory known as the AdS/CFT correspondence. If the Principle is correct, then one spatial dimension can, in a sense, be viewed as superfluous: the fundamental physical story of a spatial region is actually a story that can be told merely about the boundary of the region.
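
In this form, the holographic counting says that the maximum entropy of, and hence the number of fundamental degrees of freedom available to, a region bounded by a surface of area A scales with that area in Planck units rather than with the enclosed volume (again following the usual formulation rather than anything written out explicitly in the article):

\[
S_{\rm max} \sim \frac{k_B A}{4\,\ell_P^2} \;\propto\; A, \qquad \text{not} \;\propto\; V.
\]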

In classical thermodynamics, that a system possesses entropy is often attributed to the fact that in practice we are never able to give a complete description of it. When describing a cloud of gas, we do not specify values for the position and velocity of every molecule in it; we rather describe it in terms of quantities, such as pressure and temperature, constructed as statistical measures over underlying, more finely grained quantities, such as the momentum and energy of the individual molecules. The entropy of the gas then measures the incompleteness, as it were, of the gross description. In the attempt to take seriously the idea that a black hole has a true physical entropy, it is therefore natural to attempt to construct such a statistical origin for it. The tools of classical general relativity cannot provide such a construction, for they allow no way to describe a black hole as a system whose physical attributes arise as gross statistical measures over underlying, more finely grained quantities. Not even the tools of quantum field theory on curved spacetime can provide it, for they still treat the black hole as an entity defined entirely in terms of the classical geometry of the spacetime. Any such statistical accounting, therefore, must come from a theory that attributes to the classical geometry a description in terms of an underlying, discrete collection of micro-states. Explaining what these states are that are counted by the Bekenstein entropy has been a challenge that has been eagerly pursued by quantum gravity researchers.

In 1996, superstring theorists were able to give an account of how M-theory (which is an extension of superstring theory) generates a number of the string-states for a certain class of black holes, and this number matched that given by the Bekenstein entropy (Strominger and Vafa, 1996). A counting of black hole states using loop quantum gravity has also recovered the Bekenstein entropy (Ashtekar et al., 1998). It is philosophically noteworthy that this is treated as a significant success for these theories (i.e., it is presented as a reason for thinking that these theories are on the right track) even though Hawking radiation has never been experimentally observed (in part, because for macroscopic black holes the effect is minute).

Hawking's discovery that black holes give off radiation presented an apparent problem for the possibility of describing black holes quantum mechanically. According to standard quantum mechanics, the entropy of a closed system never changes; this is captured formally by the unitary nature of quantum evolution. Such evolution guarantees that the initial conditions, together with the quantum Schrödinger equation, will fix the future state of the system. Likewise, a reverse application of the Schrödinger equation will take us from the later state back to the original initial state. The states at each time are rich enough, detailed enough, to fix (via the dynamical equations) the states at all other times. Thus there is a sense in which the completeness of the state is maintained by unitary time evolution.
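
The sense in which unitary evolution preserves information can be stated compactly (a textbook formulation, included only for reference): the state at time t is obtained from the initial state by an invertible unitary operator, and the von Neumann entropy of the state is left unchanged,

\[
|\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad U^\dagger U = \mathbb{1}, \qquad S\big(\rho(t)\big) \equiv -\mathrm{Tr}\big[\rho(t)\ln\rho(t)\big] = S\big(\rho(0)\big).
\]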

It is typical to characterize this feature with the claim that quantum evolution preserves information. If one begins with a system in a precisely known quantum state, then unitary evolution guarantees that the details about that system will evolve in such a way that one can infer the precise quantum state of the system at some later time (as long as one knows the law of evolution and can perform the relevant calculations), and vice versa. This quantum preservation of details implies that if we burn a chair, for example, it would in principle be possible to perform a complete set of measurements on all the outgoing radiation, the smoke, and the ashes, and reconstruct exactly what the chair looked like. However, if we were instead to throw the chair into a black hole, then it would be physically impossible for the details about the chair ever to escape to the outside universe. This might not be a problem if the black hole continued to exist for all time, but Hawking tells us that the black hole is giving off energy, and thus it will shrink down and presumably will eventually disappear altogether. At that point, the details about the chair will be irrevocably lost; thus such evolution cannot be described unitarily. This problem has been labeled the information loss paradox of quantum black holes.

(A brief technical explanation for those familiar with quantum mechanics: The argument is simply that the interior and the exterior of the black hole will generally be entangled. However, microcausality implies that the entangled degrees of freedom in the black hole cannot coherently recombine with the external universe. Thus once the black hole has completely evaporated away, the entropy of the universe will have increased in violation of unitary evolution.)

The attitude physicists adopted towards this paradox was apparently strongly influenced by their vision of which theory, general relativity or quantum theory, would have to yield to achieve a consistent theory of quantum gravity. Spacetime physicists tended to view non-unitary evolution as a fairly natural consequence of singular spacetimes: one wouldn't expect all the details to be available at late times if they were lost in a singularity. Hawking, for example, argued that the paradox shows that the full theory of quantum gravity will be a non-unitary theory, and he began working to develop such a theory. (He has since abandoned this position.)

However, particle physicists (such as superstring theorists) tended to view black holes as being just another quantum state. If two particles were to collide at extremely high (i.e., Planck-scale) energies, they would form a very small black hole. This tiny black hole would have a very high Hawking temperature, and thus it would very quickly give off many high-energy particles and disappear. Such a process would look very much like a standard high-energy scattering experiment: two particles collide and their mass-energy is then converted into showers of outgoing particles. The fact that all known scattering processes are unitary then seems to give us some reason to expect that black hole formation and evaporation should also be unitary.

These considerations led many physicists to propose scenarios that might allow for the unitary evolution of quantum black holes, while not violating other basic physical principles, such as the requirement that no physical influences be allowed to travel faster than light (the requirement of microcausality), at least not when we are far from the domain of quantum gravity (the Planck scale). Once energies do enter the domain of quantum gravity, e.g. near the central singularity of a black hole, then we might expect the classical description of spacetime to break down; thus, physicists were generally prepared to allow for the possibility of violations of microcausality in this region.

A very helpful overview of this debate can be found in Belot, Earman, and Ruetsche (1999). Most of the scenarios proposed to escape Hawking's argument faced serious difficulties and have been abandoned by their supporters. The proposal that currently enjoys the most widespread (though certainly not universal) support is known as black hole complementarity. This proposal has been the subject of philosophical controversy because it includes apparently incompatible claims, and then tries to escape the contradiction by making a controversial appeal to quantum complementarity or (so charge the critics) verificationism.

The challenge of saving information from a black hole lies in the fact that it is impossible to copy the quantum details (especially the quantum correlations) that are preserved by unitary evolution. This implies that if the details pass behind the event horizon, for example, if an astronaut falls into a black hole, then those details are lost forever. Advocates of black hole complementarity (Susskind et al. 1993), however, point out that an outside observer will never see the infalling astronaut pass through the event horizon. Instead, as we saw in Section 2, she will seem to hover at the horizon for all time. But all the while, the black hole will also be giving off heat, and shrinking down, and getting hotter, and shrinking more. The black hole complementarian therefore suggests that an outside observer should conclude that the infalling astronaut gets burned up before she crosses the event horizon, and all the details about her state will be returned in the outgoing radiation, just as would be the case if she and her belongings were incinerated in a more conventional manner; thus the information (and standard quantum evolution) is saved.

However, this suggestion flies in the face of the fact (discussed earlier) that for an infalling observer, nothing out of the ordinary should be experienced at the event horizon. Indeed, for a large enough black hole, one wouldn't even know that she was passing through an event horizon at all. This obviously contradicts the suggestion that she might be burned up as she passes through the horizon. The black hole complementarian tries to resolve this contradiction by agreeing that the infalling observer will notice nothing remarkable at the horizon. This is followed by a suggestion that the account of the infalling astronaut should be considered to be complementary to the account of the external observer, rather in the same way that position and momentum are complementary descriptions of quantum particles (Susskind et al. 1993). The fact that the infalling observer cannot communicate to the external world that she survived her passage through the event horizon is supposed to imply that there is no genuine contradiction here.

This solution to the information loss paradox has been criticized for making an illegitimate appeal to verificationism (Belot, Earman, and Ruetsche 1999). However, the proposal has nevertheless won widespread support in the physics community, in part because models of M-theory seem to behave somewhat as the black hole complementarian scenario suggests (for a philosophical discussion, see van Dongen and de Haro 2004). Bokulich (2005) argues that the most fruitful way of viewing black hole complementarity is as a novel suggestion for how a non-local theory of quantum gravity will recover the local behavior of quantum field theory when black holes are involved.

The physical investigation of spacetime singularities and black holes has touched on numerous philosophical issues. To begin, we were confronted with the question of the definition and significance of singularities. Should they be defined in terms of incomplete paths, missing points, or curvature pathology? Should we even think that there is a single correct answer to this question? Need we include such things in our ontology, or do they instead merely indicate the break-down of a particular physical theory? Are they edges of spacetime, or merely inadequate descriptions that will be dispensed with by a truly fundamental theory of quantum gravity?

This has obvious connections to the issue of how we are to interpret the ontology of merely effective physical descriptions. The debate over the information loss paradox also highlights the conceptual importance of the relationship between different effective theories. At root, the debate is over where and how our effective physical theories will break down: when can they be trusted, and where must they be replaced by a more adequate theory?

Black holes appear to be crucial for our understanding of the relationship between matter and spacetime. As discussed in Section 3, when matter forms a black hole, it is transformed into a purely gravitational entity. When a black hole evaporates, spacetime curvature is transformed into ordinary matter. Thus black holes offer an important arena for investigating the ontology of spacetime and ordinary objects.

Black holes were also seen to provide an important testing ground to investigate the conceptual problems underlying quantum theory and general relativity. The question of whether black hole evolution is unitary raises the issue of how the unitary evolution of standard quantum mechanics serves to guarantee that no experiment can reveal a violation of energy conservation or of microcausality. Likewise, the debate over the information loss paradox can be seen as a debate over whether spacetime or an abstract dynamical state space (Hilbert space) should be viewed as being more fundamental. Might spacetime itself be an emergent entity belonging only to an effective physical theory?

Singularities and black holes are arguably our best windows into the details of quantum gravity, which would seem to be the best candidate for a truly fundamental physical description of the world (if such a fundamental description exists). As such, they offer glimpses into the deepest nature of matter, dynamical laws, and space and time; and these glimpses seem to call for a conceptual revision at least as great as that required by quantum mechanics or relativity theory alone.

Go here to read the rest:
Singularities and Black Holes (Stanford Encyclopedia of ...

Posted in The Singularity | Comments Off on Singularities and Black Holes (Stanford Encyclopedia of …

What Is The Singularity And Will You Live To See It?

Posted: January 31, 2016 at 2:44 am

If you read any science fiction or futurism, you've probably heard people using the term "singularity" to describe the world of tomorrow. But what exactly does it mean, and where does the idea come from? We answer in today's backgrounder.

What is the singularity?

The term singularity describes the moment when a civilization changes so much that its rules and technologies are incomprehensible to previous generations. Think of it as a point-of-no-return in history.

Most thinkers believe the singularity will be jump-started by extremely rapid technological and scientific changes. These changes will be so fast, and so profound, that every aspect of our society will be transformed, from our bodies and families to our governments and economies.

A good way to understand the singularity is to imagine explaining the internet to somebody living in the year 1200. Your frames of reference would be so different that it would be almost impossible to convey how the internet works, let alone what it means to our society. You are on the other side of what seems like a singularity to our person from the Middle Ages. But from the perspective of a future singularity, we are the medieval ones. Advances in science and technology mean that singularities might happen over periods much shorter than 800 years. And nobody knows for sure what the hell they'll bring.

Talking about the singularity is a paradox, because it is an attempt to imagine something that is by definition unimaginable to people in the present day. But that hasn't stopped hundreds of science fiction writers and futurists from doing it.

Where does the term "singularity" come from?

Science fiction writer Vernor Vinge popularized the idea of the singularity in his 1993 essay "Technological Singularity." There he described the singularity this way:

It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown.

Specifically, Vinge pinned the Singularity to the emergence of artificial intelligence. "We are on the edge of change comparable to the rise of human life on Earth," he wrote. "The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence."

Author Ken MacLeod has a character describe the singularity as "the Rapture for nerds" in his novel The Cassini Division, and the turn of phrase stuck, becoming a popular way to describe the singularity. (Note: MacLeod didn't actually coin this phrase - he says he got the phrase from a satirical essay in an early-1990s issue of Extropy.) Catherynne Valente argued recently for an expansion of the term to include what she calls "personal singularities," moments where a person is altered so much that she becomes unrecognizable to her former self. This definition could include posthuman experiences.

What technologies are likely to cause the next singularity?

As we mentioned earlier, artificial intelligence is the technology that most people believe will usher in the singularity. Authors like Vinge and singulatarian Ray Kurzweil think AI will usher in the singularity for a twofold reason. First, creating a new form of intelligent life will completely change our understanding of ourselves as humans. Second, AI will allow us to develop new technologies so much faster than we could before that our civilization will transform rapidly. A corollary to AI is the development of robots who can work alongside - and beyond - humans.

Another singularity technology is the self-replicating molecular machine, also called autonomous nanobots, "gray goo," and a host of other things. Basically the idea is that if we can build machines that manipulate matter at the atomic level, we can control our world in the most granular way imaginable. And if these machines can work on their own? Who knows what will happen. For a dark vision of this singularity, see Greg Bear's novel Blood Music or Bill Joy's essay "Why the Future Doesn't Need Us"; for a more optimistic vision, Rudy Rucker's Postsingular.

And finally, a lot of singulatarian thought is devoted to the idea that synthetic biology, genetic engineering, and other life sciences will eventually give us control of the human genome. Two world-altering events would come out of that. One, we could engineer new forms of life and change the course of human evolution in one generation. Two, it's likely that control over our genomes will allow us to tinker with the mechanisms that make us age, thus dramatically increasing our lifespans. Many futurists, from Kurzweil and Stewart Brand to scientists like Aubrey de Grey, have suggested that extreme human longevity (in the hundreds of years) is a crucial part of the singularity.

Have we had a singularity before?

The singularity is usually anticipated as a future transformation, but it can also be used to describe past transformations like the one in our example earlier with the person from 1200. The industrial revolution could be said to represent a singularity, as could the information age.

When will the singularity happen?

In 1992, Vinge predicted that "in 30 years" we would have artificial intelligence. We've still got 12 years to go - it could happen! In his groundbreaking 2000 essay for Wired, "Why the Future Doesn't Need Us," technologist Joy opined:

The enabling breakthrough to assemblers seems quite likely within the next 20 years. Molecular electronics - the new subfield of nanotechnology where individual molecules are circuit elements - should mature quickly and become enormously lucrative within this decade, causing a large incremental investment in all nanotechnologies.

And in the 2005 book The Singularity Is Near, Ray Kurzweil says the singularity will come "within several decades."

Longevity scientist de Grey says that our biotech is advanced enough that a child born in 2010 might live to be 150, or even 500, years old. MIT AI researcher Rodney Brooks writes in his excellent book Flesh and Machines that it's "unlikely that we will be able to simply download our brains into a computer anytime soon." Though Brooks does add:

The lives of our grandchildren and great-grandchildren will be as unrecognizable to us as our use of information technology in all its forms would be incomprehensible to someone from the dawn of the twentieth century.

So when will the singularity really happen? It depends on your perspective. But it always seems like it's just a few decades off.

Image of gray goo by Giacomo Costa.

Read the original here:
What Is The Singularity And Will You Live To See It?

Posted in The Singularity | Comments Off on What Is The Singularity And Will You Live To See It?

Singularity (operating system) – Wikipedia, the free …

Posted: January 23, 2016 at 1:47 pm

Singularity was an experimental operating system built by Microsoft Research between 2003 and 2010.[1] It was designed as a highly dependable OS in which the kernel, device drivers, and applications were all written in managed code.

The lowest-level x86 interrupt dispatch code is written in assembly language and C. Once this code has done its job, it invokes the kernel, whose runtime and garbage collector are written in Sing# (an extended version of Spec#, itself an extension of C#) and runs in unprotected mode. The hardware abstraction layer is written in C++ and runs in protected mode. There is also some C code to handle debugging. The computer's BIOS is invoked during the 16-bit real mode bootstrap stage; once in 32-bit mode, Singularity never invokes the BIOS again, but invokes device drivers written in Sing#. During installation, Common Intermediate Language (CIL) opcodes are compiled into x86 opcodes using the Bartok compiler.

Singularity is a microkernel operating system. Unlike most historical microkernels, its components execute in the same address space (process), which contains "software-isolated processes" (SIPs). Each SIP has its own data and code layout, and is independent from other SIPs. These SIPs behave like normal processes, but avoid the cost of task-switches.

Protection in this system is provided by a set of rules called invariants that are verified by static analysis. For example, the memory invariant states that there must be no cross-references (or memory pointers) between two SIPs; communication between SIPs occurs via higher-order communication channels managed by the operating system. Invariants are checked during installation of the application. (In Singularity, installation is managed by the operating system.)

Most of the invariants rely on the use of safer memory-managed languages, such as Sing#, which have a garbage collector, allow no arbitrary pointers, and allow code to be verified to meet a certain policy.
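
As a very loose illustration of this communication model (not of Sing# itself, whose channel contracts are declared in the language and checked statically at install time), one can mimic two isolated processes that share no memory and interact only by exchanging messages over a channel. The sketch below uses ordinary Python operating-system processes, whereas Singularity's SIPs actually share a single address space and rely on language-level verification rather than hardware protection, so the analogy is only partial:

```python
from multiprocessing import Process, Pipe

def client(conn):
    # A stand-in for one "software-isolated process": it owns its own data
    # and can reach the other process only through its channel endpoint.
    conn.send({"op": "read_block", "block": 42})
    reply = conn.recv()
    print("client received:", reply)
    conn.close()

def server(conn):
    # The peer process handles the request; nothing is shared with the sender
    # except the message that travelled over the channel.
    request = conn.recv()
    conn.send({"status": "ok", "echo": request})
    conn.close()

if __name__ == "__main__":
    client_end, server_end = Pipe()
    processes = [Process(target=client, args=(client_end,)),
                 Process(target=server, args=(server_end,))]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
```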

Singularity 1.0 was completed in 2007. A Singularity Research Development Kit (RDK) has been released under a Shared Source license that permits academic non-commercial use and is available from CodePlex. Version 1.1 was released in March 2007 and version 2.0 was released on November 14, 2008.

The rest is here:
Singularity (operating system) - Wikipedia, the free ...

Posted in The Singularity | Comments Off on Singularity (operating system) – Wikipedia, the free …

Singularity: The Influence Of New Order

Posted: January 14, 2016 at 9:44 am


Simon Stephens is a playwright, previously Resident Dramatist at the prestigious Royal Court Theatre (UK) and he is currently Artist Associate at the Lyric in Hammersmith, London. He's won numerous awards, most recently for his script for The Curious Incident of the Dog in the Night-Time, which won an Olivier and a Tony. He was also bassist in the band Country Teasers. Photo by Kevin Cummins.

I grew up wanting to write because of the music that I heard, not because of the plays that I saw or the books that I read.

I went to the kind of school where admitting to a love of literature or an interest in theatre would lead to getting your head kicked in. So I kept these things to myself. But I could be fearless about music. And I grew up in Stockport, on the edge of a city where the best music in the world was being made.

The angst and the shit of a teenage suburban life were soothed by the dissonant snarl of Mark E Smith and melancholic comedy of Morrissey and the promise of swagger and glory of the Happy Mondays and the Stone Roses. And then there was New Order.

I would stay up until two in the morning on a Sunday night listening to Manchester's bespoke John Peel, Tony Michaelides' Last Radio Programme, and it was there that I properly heard New Order for the first time. The gorgeous rumble of Blue Monday might have been one of those surprisingly cool chart sensations that existed on the more alert peripheries of my 12-year-old consciousness, but I never properly listened to it. It was Perfect Kiss that ripped my skull open.

It is an astonishing song. Astonishing because of its gorgeous rhythms and haunting, deeply sad and weirdly uplifting melody, but astonishing, to my teenage head, because of the jarring drama of Bernard Sumner's lyric. For eighteen months the couplet "pretending not to see his gun, I said let's go out and have some fun" rattled round my head. What world was being opened by this man's pop heart? A world of secrecy and danger and battered attempts at friendship and compassion. I remember playing the song to my friends and trying to get them to see how much of a work of genius it was. I'm not sure they ever got it like I did. But when I look at my plays now I think that the same tension between danger and compassion underpins all the stories I have tried to tell in the plays that I have written. Like I'm still trying to see if people get it in the way that New Order palpably did.

Lowlife, the album Perfect Kiss came from, was a masterpiece. I listened to it constantly. Technique, though, was something else altogether. Coming, in my blurred, inaccurate memory, in the wake of the arrival of the Stone Roses and Happy Mondays on Top of the Pops, it took their swagger and, with humour and grace and a deeper, proper sadness, made those two brilliant bands look like children. It is, I think, the best album of the eighties.

I've listened to it now for twenty-five years. With its ecstasy-driven rhythms and glorious melodies it still catches me by surprise. And those dramas are there. That search for the possibility of love or compassion in a battered Manchester that had yet to be changed by the investments of the nineties haunted me and made sense of that 192 bus route from Stockport into town.

It is an album that sits in the metabolism of my play Port. Port dramatizes the life of a girl growing up around the same time that I did, in the same town, with the same sense of self and yearning for escape. It was directed by Marianne Elliot at the Royal Exchange Theatre in 2002 and revived by her at the National Theatre in 2013.

The music of Manchester at that time was rich in its spirit and was used in the play's sound design.

In the scene change from the first scene, where Racheal, the play's heroine, watches her mother leave her forever, to the second, where she has become a surrogate mother for her brother, she grows from the age of 9 to the age of 11. On the huge, cavernous stage of the Royal Exchange, and ten years later on the enormous Lyttleton stage at the National, Marianne and sound designer Ian Dickinson cranked All The Way up to maximum volume as Racheal faced her future.

In those huge rooms Bernie sang out: "It takes years to find the nerve to be apart from what you've done, to find the truth beside yourself and not depend on anyone."

What a glorious lyric to hear hollering around the magical halls of England's greatest theatres. Every time I heard it in performance, and when I hear it still, it's hard to stop the hairs on my arm from standing on end. Bernie was singing Racheal's song. But he was singing a song that cut to the quick of my teenage sense of self and still, to a degree, my sense of self now. With that quicksilver mix of tenderness and defiance that always galvanised me. He is a writer who makes me want to write. Always was. Always will be.

I am very, very glad he and his band are back. We are far richer for their presence.

-Simon Stephens

The rest is here:
Singularity: The Influence Of New Order

Posted in The Singularity | Comments Off on Singularity: The Influence Of New Order

Big Bang Theory

Posted: at 9:44 am


Big Bang Theory - The Premise

The Big Bang theory is an effort to explain what happened at the very beginning of our universe. Discoveries in astronomy and physics have shown beyond a reasonable doubt that our universe did in fact have a beginning. Prior to that moment there was nothing; during and after that moment there was something: our universe. The Big Bang theory is an effort to explain what happened during and after that moment.

According to the standard theory, our universe sprang into existence as a "singularity" around 13.7 billion years ago. What is a "singularity" and where does it come from? Well, to be honest, we don't know for sure. Singularities are zones which defy our current understanding of physics. They are thought to exist at the core of "black holes." Black holes are areas of intense gravitational pressure. The pressure is thought to be so intense that finite matter is actually squished into infinite density (a mathematical concept which truly boggles the mind). These zones of infinite density are called "singularities." Our universe is thought to have begun as an infinitesimally small, infinitely hot, infinitely dense something - a singularity. Where did it come from? We don't know. Why did it appear? We don't know.

After its initial appearance, it apparently inflated (the "Big Bang"), expanded and cooled, going from very, very small and very, very hot, to the size and temperature of our current universe. It continues to expand and cool to this day and we are inside of it: incredible creatures living on a unique planet, circling a beautiful star clustered together with several hundred billion other stars in a galaxy soaring through the cosmos, all of which is inside of an expanding universe that began as an infinitesimal singularity which appeared out of nowhere for reasons unknown. This is the Big Bang theory.

Big Bang Theory - Common Misconceptions

There are many misconceptions surrounding the Big Bang theory. For example, we tend to imagine a giant explosion. Experts, however, say that there was no explosion; there was (and continues to be) an expansion. Rather than imagining a balloon popping and releasing its contents, imagine a balloon expanding: an infinitesimally small balloon expanding to the size of our current universe.

Another misconception is that we tend to imagine the singularity as a little fireball appearing somewhere in space. According to many experts, however, space didn't exist prior to the Big Bang. Back in the late '60s and early '70s, when men first walked upon the moon, "three British astrophysicists, Stephen Hawking, George Ellis, and Roger Penrose turned their attention to the Theory of Relativity and its implications regarding our notions of time. In 1968 and 1970, they published papers in which they extended Einstein's Theory of General Relativity to include measurements of time and space.1, 2 According to their calculations, time and space had a finite beginning that corresponded to the origin of matter and energy."3 The singularity didn't appear in space; rather, space began inside of the singularity. Prior to the singularity, nothing existed, not space, time, matter, or energy - nothing. So where and in what did the singularity appear if not in space? We don't know. We don't know where it came from, why it's here, or even where it is. All we really know is that we are inside of it and at one time it didn't exist and neither did we.

Big Bang Theory - Evidence for the Theory

What are the major lines of evidence that support the Big Bang theory?

Big Bang Theory - The Only Plausible Theory?

Is the standard Big Bang theory the only model consistent with this evidence? No, it's just the most popular one. Internationally renowned astrophysicist George F. R. Ellis explains: "People need to be aware that there is a range of models that could explain the observations. For instance, I can construct you a spherically symmetrical universe with Earth at its center, and you cannot disprove it based on observations. You can only exclude it on philosophical grounds. In my view there is absolutely nothing wrong in that. What I want to bring into the open is the fact that we are using philosophical criteria in choosing our models. A lot of cosmology tries to hide that."4

In 2003, physicist Robert Gentry proposed an attractive alternative to the standard theory, an alternative which also accounts for the evidence listed above.5 Dr. Gentry claims that the standard Big Bang model is founded upon a faulty paradigm (the Friedmann-Lemaitre expanding-spacetime paradigm) which he claims is inconsistent with the empirical data. He chooses instead to base his model on Einstein's static-spacetime paradigm, which he claims is the "genuine cosmic Rosetta." Gentry has published several papers outlining what he considers to be serious flaws in the standard Big Bang model.6 Other high-profile dissenters include Nobel laureate Dr. Hannes Alfvén, Professor Geoffrey Burbidge, Dr. Halton Arp, and the renowned British astronomer Sir Fred Hoyle, who is credited with coining the term "the Big Bang" during a BBC radio broadcast in 1950.

Big Bang Theory - What About God?

Any discussion of the Big Bang theory would be incomplete without asking the question: what about God? This is because cosmogony (the study of the origin of the universe) is an area where science and theology meet. Creation was a supernatural event. That is, it took place outside of the natural realm. This raises the question: is there anything else which exists outside of the natural realm? Specifically, is there a master Architect out there? We know that this universe had a beginning. Was God the "First Cause"? We won't attempt to answer that question in this short article. We just ask the question:

Does God Exist?




See the original post:
Big Bang Theory

Posted in The Singularity | Comments Off on Big Bang Theory

Gravitational singularity – Wikipedia, the free encyclopedia

Posted: October 23, 2015 at 9:44 am

A gravitational singularity or spacetime singularity is a location where the force of gravity has become effectively infinite, and the quantities used to measure the gravitational field become infinite in a way that does not depend on the coordinate system of any observer. These quantities are the scalar invariant curvatures of spacetime, which include a measure of the density of matter. The laws of normal spacetime could not exist within a singularity, and it is currently postulated that matter cannot cross the event horizon of a singularity due to the effects of time dilation.[1][2][3] Singularities are theorized to exist at the center of black holes, within cosmic strings, and as leftover remnants from the early formation of the universe following the Big Bang. Although gravitational singularities are predicted by Einstein's theory of general relativity, their existence has not been confirmed.[4][5][6][7]

For the purposes of proving the Penrose-Hawking singularity theorems, a spacetime with a singularity is defined to be one that contains geodesics that cannot be extended in a smooth manner.[8] The end of such a geodesic is considered to be the singularity. This is a different definition, useful for proving theorems.[9][10]

The two most important types of spacetime singularities are curvature singularities and conical singularities.[11] Singularities can also be divided according to whether or not they are covered by an event horizon (naked singularities are not covered).[12] According to modern general relativity, the initial state of the universe, at the beginning of the Big Bang, was a singularity.[13] Both general relativity and quantum mechanics break down in describing the Big Bang,[14] but in general, quantum mechanics does not permit particles to inhabit a space smaller than their wavelengths (see wave-particle duality).[15]

Another type of singularity predicted by general relativity is inside a black hole: any star collapsing beyond a certain point (the Schwarzschild radius) would form a black hole, inside which a singularity (covered by an event horizon) would be formed, as all the matter would flow into a certain point (or a circular line, if the black hole is rotating).[16] This is again according to general relativity without quantum mechanics, which forbids wavelike particles entering a space smaller than their wavelength. These hypothetical singularities are also known as curvature singularities.
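For a sense of scale (a standard textbook formula, not something stated in the article), the Schwarzschild radius mentioned above is

```latex
% Schwarzschild radius of a mass M (standard result)
r_s = \frac{2GM}{c^2} \approx 3\ \text{km} \times \frac{M}{M_\odot}
```

that is, roughly 3 km per solar mass: a collapsing star must fall inside this radius before an event horizon, and the singularity behind it, can form.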

In theoretical modeling with supersymmetry, a singularity in the moduli space (a geometric space using coordinates to model objects, observers, or locations) usually happens when there are additional massless degrees of freedom at a certain point. Similarly, in string theory and M-theory, it is thought that singularities in spacetime often mean that there are additional degrees of freedom (physical dimensions beyond the four described by general relativity) that exist only within the vicinity of the singularity. The fields related to the whole spacetime, for example the electromagnetic field, are postulated to also exist according to this theory. In known examples of string theory, the latter degrees of freedom are related to closed strings, while the degrees of freedom "stuck" to the singularity are related either to open strings or to the twisted sector of an orbifold (a theoretical construct of abstract mathematics). This is, however, only a theory.[17][18]

A proposal supported by Stephen Hawking, concerning the black hole information paradox, postulates that matter cannot cross the event horizon of a singularity or black hole and remains as stored information just beyond the event horizon, slowly released as Hawking radiation or held at the event horizon permanently due to the effects of time dilation. "The information is not stored in the interior of the black hole as one might expect, but in its boundary - the event horizon," he told a conference at the KTH Royal Institute of Technology in Stockholm, Sweden. (In other words, as matter enters the event horizon of a black hole, the deeper it travels the slower time flows for it relative to an outside observer watching it fall; time essentially slows until it virtually stops as the matter reaches the event horizon, so it can never make it to the center and is held there forever.)[19]

Solutions to the equations of general relativity or another theory of gravity (such as supergravity) often result in encountering points where the metric blows up to infinity. However, many of these points are completely regular, and the infinities are merely a result of using an inappropriate coordinate system at that point (Einstein's partial differential equations describing spacetime curvature and gravity produce infinite values if they are fed an ill-suited coordinate description). In order to test whether there is a singularity at a certain point, one must check whether the diffeomorphism-invariant quantities which are scalars become infinite at that point (a diffeomorphism-invariant quantity is one whose value does not depend on which coordinate system or observer is used to describe it; a scalar is a pure number representing a value, like length). Such quantities are the same in every coordinate system, so these infinities will not "go away" by a change of coordinates: whatever proper coordinate system is employed, a true singularity will always produce these infinities.

An example is the Schwarzschild solution that describes a non-rotating, uncharged black hole. In coordinate systems convenient for working in regions far away from the black hole, a part of the metric becomes infinite at the event horizon. However, spacetime at the event horizon is regular. The regularity becomes evident when changing to another coordinate system (such as the Kruskal coordinates), where the metric is perfectly smooth. On the other hand, in the center of the black hole, where the metric becomes infinite as well, the solutions suggest a singularity exists. The existence of the singularity can be verified by noting that the Kretschmann scalar, the square of the Riemann tensor, K = R_{abcd} R^{abcd}, which is diffeomorphism invariant, is infinite there. In a non-rotating black hole the singularity occurs at a single point in the model coordinates, called a "point singularity", while in a rotating black hole, also known as a Kerr black hole, the singularity occurs on a ring (a circular line), known as a "ring singularity". Such a singularity may also theoretically become a wormhole.[20]
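To make the coordinate-versus-curvature distinction concrete, here are the standard results for the non-rotating case, quoted for illustration in units with G = c = 1:

```latex
% Schwarzschild metric in Schwarzschild coordinates (G = c = 1)
ds^2 = -\left(1 - \frac{2M}{r}\right) dt^2
       + \left(1 - \frac{2M}{r}\right)^{-1} dr^2
       + r^2 \left(d\theta^2 + \sin^2\theta \, d\phi^2\right)

% The g_{rr} component blows up at the horizon r = 2M, but the
% diffeomorphism-invariant Kretschmann scalar is
K = R_{abcd} R^{abcd} = \frac{48 M^2}{r^6}
```

K is finite at r = 2M (it equals 3/(4M^4) there) and diverges only as r tends to 0, which is why the horizon is a coordinate artifact while r = 0 is a genuine curvature singularity.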

A conical singularity occurs when there is a point where the limit of every diffeomorphism invariant quantity is finite, in which case spacetime is not smooth at the point of the limit itself. Thus, spacetime looks like a cone around this point, where the singularity is located at the tip of the cone. The metric can be finite everywhere if a suitable coordinate system is used. An example of such a conical singularity is a cosmic string. Cosmic strings are theoretical, and their existence has not yet been confirmed. [21]

Until the early 1990s, it was widely believed that general relativity hides every singularity behind an event horizon, making naked singularities impossible. This is referred to as the cosmic censorship hypothesis. However, in 1991, physicists Stuart Shapiro and Saul Teukolsky performed computer simulations of a rotating plane of dust that indicated that general relativity might allow for "naked" singularities. What these objects would actually look like in such a model is unknown. Nor is it known whether singularities would still arise if the simplifying assumptions used to make the simulation were removed.[22][23][24]

Some theories, such as the theory of loop quantum gravity, suggest that singularities may not exist. The idea is that due to quantum gravity effects, there is a minimum distance beyond which the force of gravity no longer continues to increase as the distance between the masses becomes shorter.[25][26]

The Einstein-Cartan-Sciama-Kibble theory of gravity naturally averts the gravitational singularity at the Big Bang. This theory extends general relativity to matter with intrinsic angular momentum (spin) by removing a constraint on the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a variable in varying the action. The minimal coupling between torsion and Dirac spinors generates a spin-spin interaction in fermionic matter, which becomes dominant at extremely high densities and prevents the scale factor of the Universe from reaching zero. The Big Bang is replaced by a cusp-like Big Bounce, at which the matter has an enormous but finite density and before which the Universe was contracting (the idea being that the spin, or intrinsic angular momentum, present in all fermionic matter exerts a counterforce that resists gravity beyond a certain point of compression, so that a singularity can never fully form).[27]

Read the rest here:
Gravitational singularity - Wikipedia, the free encyclopedia

Posted in The Singularity | Comments Off on Gravitational singularity – Wikipedia, the free encyclopedia

Paul Allen: The Singularity Isn’t Near | MIT Technology Review

Posted: September 24, 2015 at 11:47 pm

The Singularity Summit approaches this weekend in New York. But the Microsoft cofounder and a colleague say the singularity itself is a long way off.

Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they'll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It's heady stuff.

While we suppose this kind of singularity might one day occur, we don't think it is near. In fact, we think it will be a very long time coming. Kurzweil disagrees, based on his extrapolations about the rate of relevant scientific and technical progress. He reasons that the rate of progress toward the singularity isn't just a progression of steadily increasing capability, but is in fact exponentially accelerating: what Kurzweil calls the Law of Accelerating Returns. He writes that:

So we won't experience 100 years of progress in the 21st century; it will be more like 20,000 years of progress (at today's rate). The returns, such as chip speed and cost-effectiveness, also increase exponentially. There's even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity. [1]

By working through a set of models and historical data, Kurzweil famously calculates that the singularity will arrive around 2045.
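For a rough sense of where a figure like 20,000 years comes from (a back-of-envelope reading of the quoted claim, not Kurzweil's own derivation), suppose the rate of progress doubles every decade; integrating that rate over a century gives

```latex
% Progress over 100 years if the rate doubles every 10 years,
% measured in "years of progress at today's rate"
\int_0^{100} 2^{t/10} \, dt = \frac{10}{\ln 2}\left(2^{10} - 1\right)
\approx 1.5 \times 10^{4}
```

year-equivalents, i.e. on the order of the tens of thousands of years Kurzweil cites.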

This prediction seems to us quite far-fetched. Of course, we are aware that the history of science and technology is littered with people who confidently assert that some event can't happen, only to be later proven wrong, often in spectacular fashion. We acknowledge that it is possible but highly unlikely that Kurzweil will eventually be vindicated. An adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress.

Kurzweil's reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these laws will work until they don't. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer's hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn't enough to just run today's software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this.

This prior need to understand the basic science of cognition is where the "singularity is near" arguments fail to persuade us. It is true that computer hardware technology can develop amazingly quickly once we have a solid scientific framework and adequate economic incentives. However, creating the software for a real singularity-level computer intelligence will require fundamental scientific progress beyond where we are today. This kind of progress is very different than the Moore's Law-style evolution of computer hardware capabilities that inspired Kurzweil and Vinge. Building the complex software that would allow the singularity to happen requires us to first have a detailed scientific understanding of how the human brain works that we can use as an architectural guide, or else create it all de novo. This means not just knowing the physical structure of the brain, but also how the brain reacts and changes, and how billions of parallel neuron interactions can result in human consciousness and original thought. Getting this kind of comprehensive understanding of the brain is not impossible. If the singularity is going to occur on anything like Kurzweil's timeline, though, then we absolutely require a massive acceleration of our scientific progress in understanding every facet of the human brain.

But history tells us that the process of original scientific discovery just doesn't behave this way, especially in complex areas like neuroscience, nuclear fusion, or cancer research. Overall scientific progress in understanding the brain rarely resembles an orderly, inexorable march to the truth, let alone an exponentially accelerating one. Instead, scientific advances are often irregular, with unpredictable flashes of insight punctuating the slow grind-it-out lab work of creating and testing theories that can fit with experimental observations. Truly significant conceptual breakthroughs don't arrive when predicted, and every so often new scientific paradigms sweep through the field and cause scientists to re-evaluate portions of what they thought they had settled. We see this in neuroscience with the discovery of long-term potentiation, the columnar organization of cortical areas, and neuroplasticity. These kinds of fundamental shifts don't support the overall Moore's Law-style acceleration needed to get to the singularity on Kurzweil's schedule.

The Complexity Brake

The foregoing points at a basic issue with how quickly a scientifically adequate account of human intelligence can be developed. We call this issue the complexity brake. As we go deeper and deeper in our understanding of natural systems, we typically find that we require more and more specialized knowledge to characterize them, and we are forced to continuously expand our scientific theories in more and more complex ways. Understanding the detailed mechanisms of human cognition is a task that is subject to this complexity brake. Just think about what is required to thoroughly understand the human brain at a micro level. The complexity of the brain is simply awesome. Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors. The closer we look at the brain, the greater the degree of neural variation we find. Understanding the neural structure of the human brain is getting harder as we learn more. Put another way, the more we learn, the more we realize there is to know, and the more we have to go back and revise our earlier understandings. We believe that one day this steady increase in complexity will end; the brain is, after all, a finite set of neurons and operates according to physical principles. But for the foreseeable future, it is the complexity brake and arrival of powerful new theories, rather than the Law of Accelerating Returns, that will govern the pace of scientific progress required to achieve the singularity.

So, while we think a fine-grained understanding of the neural structure of the brain is ultimately achievable, it has not shown itself to be the kind of area in which we can make exponentially accelerating progress. But suppose scientists make some brilliant new advance in brain scanning technology. Singularity proponents often claim that we can achieve computer intelligence just by numerically simulating the brain bottom up from a detailed neural-level picture. For example, Kurzweil predicts the development of nondestructive brain scanners that will allow us to precisely take a snapshot of a person's living brain at the subneuron level. He suggests that these scanners would most likely operate from inside the brain via millions of injectable medical nanobots. But, regardless of whether nanobot-based scanning succeeds (and we aren't even close to knowing if this is possible), Kurzweil essentially argues that this is the needed scientific advance that will gate the singularity: computers could exhibit human-level intelligence simply by loading the state and connectivity of each of a brain's neurons inside a massive digital brain simulator, hooking up inputs and outputs, and pressing start.

However, the difficulty of building human-level software goes deeper than computationally modeling the structural connections and biology of each of our neurons. Brain duplication strategies like these presuppose that there is no fundamental issue in getting to human cognition other than having sufficient computer power and neuron structure maps to do the simulation.[2] While this may be true theoretically, it has not worked out that way in practice, because it doesn't address everything that is actually needed to build the software. For example, if we wanted to build software to simulate a bird's ability to fly in various conditions, simply having a complete diagram of bird anatomy isn't sufficient. To fully simulate the flight of an actual bird, we also need to know how everything functions together. In neuroscience, there is a parallel situation. Hundreds of attempts have been made (using many different organisms) to chain together simulations of different neurons along with their chemical environment. The uniform result of these attempts is that in order to create an adequate simulation of the real ongoing neural activity of an organism, you also need a vast amount of knowledge about the functional role that these neurons play, how their connection patterns evolve, how they are structured into groups to turn raw stimuli into information, and how neural information processing ultimately affects an organism's behavior. Without this information, it has proven impossible to construct effective computer-based simulation models. Especially for the cognitive neuroscience of humans, we are not close to the requisite level of functional knowledge. Brain simulation projects underway today model only a small fraction of what neurons do and lack the detail to fully simulate what occurs in a brain. The pace of research in this area, while encouraging, hardly seems to be exponential. Again, as we learn more and more about the actual complexity of how the brain functions, the main thing we find is that the problem is actually getting harder.

The AI Approach

Singularity proponents occasionally appeal to developments in artificial intelligence (AI) as a way to get around the slow rate of overall scientific progress in bottom-up, neuroscience-based approaches to cognition. It is true that AI has had great successes in duplicating certain isolated cognitive tasks, most recently with IBM's Watson system for Jeopardy! question answering. But when we step back, we can see that overall AI-based capabilities haven't been exponentially increasing either, at least when measured against the creation of a fully general human intelligence. While we have learned a great deal about how to build individual AI systems that do seemingly intelligent things, our systems have always remained brittle: their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific focus areas. A computer program that plays excellent chess can't leverage its skill to play other games. The best medical diagnosis programs contain immensely detailed knowledge of the human body but can't deduce that a tightrope walker would have a great sense of balance.

Why has it proven so difficult for AI researchers to build human-like intelligence, even at a small scale? One answer involves the basic scientific framework that AI researchers use. As humans grow from infants to adults, they begin by acquiring a general knowledge about the world, and then continuously augment and refine this general knowledge with specific knowledge about different areas and contexts. AI researchers have typically tried to do the opposite: they have built systems with deep knowledge of narrow areas, and tried to create a more general capability by combining these systems. This strategy has not generally been successful, although Watson's performance on Jeopardy! indicates paths like this may yet have promise. The few attempts that have been made to directly create a large amount of general knowledge of the world, and then add the specialized knowledge of a domain (for example, the work of Cycorp), have also met with only limited success. And in any case, AI researchers are only just beginning to theorize about how to effectively model the complex phenomena that give human cognition its unique flexibility: uncertainty, contextual sensitivity, rules of thumb, self-reflection, and the flashes of insight that are essential to higher-level thought. Just as in neuroscience, the AI-based route to achieving singularity-level computer intelligence seems to require many more discoveries, some new Nobel-quality theories, and probably even whole new research approaches that are incommensurate with what we believe now. This kind of basic scientific progress doesn't happen on a reliable exponential growth curve. So although developments in AI might ultimately end up being the route to the singularity, again the complexity brake slows our rate of progress, and pushes the singularity considerably into the future.

The amazing intricacy of human cognition should serve as a caution to those who claim the singularity is close. Without having a scientifically deep understanding of cognition, we can't create the software that could spark the singularity. Rather than the ever-accelerating advancement predicted by Kurzweil, we believe that progress toward this understanding is fundamentally slowed by the complexity brake. Our ability to achieve this understanding, via either the AI or the neuroscience approaches, is itself a human cognitive act, arising from the unpredictable nature of human ingenuity and discovery. Progress here is deeply affected by the ways in which our brains absorb and process new information, and by the creativity of researchers in dreaming up new theories. It is also governed by the ways that we socially organize research work in these fields, and disseminate the knowledge that results. At Vulcan and at the Allen Institute for Brain Science, we are working on advanced tools to help researchers deal with this daunting complexity, and speed them in their research. Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.

Paul G. Allen, who cofounded Microsoft in 1975, is a philanthropist and chairman of Vulcan, which invests in an array of technology, aerospace, entertainment, and sports businesses. Mark Greaves is a computer scientist who serves as Vulcan's director for knowledge systems.

[1] Kurzweil, The Law of Accelerating Returns, March 2001.

[2] We are beginning to get within range of the computer power we might need to support this kind of massive brain simulation. Petaflop-class computers (such as IBM's BlueGene/P, which was used in the Watson system) are now available commercially. Exaflop-class computers are currently on the drawing boards. These systems could probably deploy the raw computational capability needed to simulate the firing patterns for all of a brain's neurons, though such a simulation would currently run many times more slowly than an actual brain.
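For scale, a rough estimate of the raw arithmetic such a simulation implies (our own illustrative numbers, not figures from the article): with on the order of 10^11 neurons, roughly 10^4 synapses per neuron, update rates around 100 per second, and perhaps 10 operations per synaptic update, the total is

```latex
% back-of-envelope cost of a neuron- and synapse-level simulation
10^{11} \times 10^{4} \times 10^{2} \times 10 \approx 10^{18} \ \text{ops/s}
```

which is roughly exaflop-class for real-time operation, consistent with the footnote's point that today's petaflop machines could run such a simulation only far more slowly than an actual brain.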

UPDATE: Ray Kurzweil responds here.

See the rest here:
Paul Allen: The Singularity Isn't Near | MIT Technology Review

Posted in The Singularity | Comments Off on Paul Allen: The Singularity Isn’t Near | MIT Technology Review

SINGULARITY: a Joshua Gates, Destination Truth …

Posted: at 11:47 pm

Going to miParacon to see Josh and other paranormal personalities? Tweet your experiences and photos to @joshuagatesfans. I will also be monitoring for stuff to share on the fan page once the convention is over!

Josh has been pretty quiet on social media lately. Could it be because we're going to get some news soon? Here is a clue, to your left. I won't say what/where it is or my sources, but I'll just say to "stay tuned" 🙂

The show has been met with much praise, and according to Brad at the production company, viewer numbers have been good. For now, the only criticisms fans have had are that they miss the ghost hunting and cryptid-search elements, and that they'd like the crew who follows Josh and helps make the show to be featured. Fans cannot deny the better quality of filming, the fact that each episode focuses on only one case, and that, because the episodes are less rushed, we get to see more of the destination. Humor is definitely not missing from EXU.

In the meantime, here's some news!

Be sure to follow Josh on Twitter HERE and follow this fan page on Twitter for fan interaction and exclusives HERE. Photo credit to Brandt, who you can follow on Twitter HERE.

Read more from the original source:
SINGULARITY: a Joshua Gates, Destination Truth ...

Posted in The Singularity | Comments Off on SINGULARITY: a Joshua Gates, Destination Truth …

Singularity University – Solving Humanity’s Grand …

Posted: September 4, 2015 at 12:44 pm

What is Singularity University?

Our mission is to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges.

Even more discussions, stories and topics.

Incubating companies, one experiment at a time

Eager to connect with SU enthusiasts near you? Learn more here

Thoughtful coverage on science, technology, and the singularity

Our custom program for Fortune 500 companies

David Roberts

Salim Ismail

Kathryn Myronuk

Catherine Mohr

Neil Jacobstein

Ralph Merkle

Raymond McCauley

Marc Goodman

Daniel Kraft, MD

Brad Templeton

Gregg Maryniak

Robert Freitas

Andrew Hessel

Paul Saffo

Jonathan Knowles

Jeremy Howard

Eric Ries

Avi Reichental

Peter Diamandis

Ray Kurzweil

Nicholas Haan

John Hagel

Robert Hariri, MD, PhD

Ramez Naam

June 3rd, 2015 /

Top 10 exponential companies Salim Ismail, Singularity University, discusses which organizations are best at keeping up with rapid technological

June 2nd, 2015 /

Exponential Finance conference lineup: CNBC's Bob Pisani provides a preview for his presentation at the Singularity University/CNBC Exponential

June 2nd, 2015 /

a rapid period of evolution, says Peter Diamandis, Singularity University, explaining how technological change is disrupting the financial industry

NASA Research Park Building 20 S. Akron Rd. MS 20-1 Moffett Field CA 94035-0001 Phone: +1-650-200-3434

Singularity University, Singularity Hub, Singularity Summit, SU Labs, Singularity Labs, Exponential Medicine, Exponential Finance and all associated logos and design elements are trademarks and/or service marks of Singularity Education Group.

Singularity University is not a degree granting institution.

Follow this link:

Singularity University - Solving Humanity's Grand ...

Posted in The Singularity | Comments Off on Singularity University – Solving Humanity’s Grand …
