What is a singularity? | Live Science

To understand what a singularity is, imagine the force of gravity compressing you down into an infinitely tiny point, so that you occupy literally no volume. That sounds impossible, and it is. These "singularities" are found in the centers of black holes and at the beginning of the Big Bang. These singularities don't represent something physical. Rather, when they appear in the mathematics, they are telling us that our theories of physics are breaking down, and we need to replace them with a better understanding.

Singularities can happen anywhere, and they are surprisingly common in the mathematics that physicists use to understand the universe. Put simply, singularities are places where the mathematics "misbehaves," typically by generating infinitely large values. There are examples of mathematical singularities throughout physics: typically, any time an equation uses 1/X, as X goes to zero, the value of the equation goes to infinity.
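
To see this concretely, here is a minimal numerical sketch (Python, with arbitrary sample values of X chosen for illustration) of how an expression containing 1/X misbehaves as X shrinks toward zero:

```python
# Evaluate 1/x for values of x approaching zero; the result grows without bound.
for x in [1.0, 0.1, 0.01, 0.001, 1e-6, 1e-12]:
    print(f"x = {x:g}   1/x = {1/x:g}")

# At exactly x = 0 the expression is undefined; Python raises ZeroDivisionError,
# the numerical counterpart of the mathematical singularity.
```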

Most of these singularities, however, can be resolved by pointing out that the equations are missing some factor, or by noting the physical impossibility of ever reaching the singularity point. In other words, they are probably not "real."

But there are singularities in physics that do not have simple resolutions. The most famous are gravitational singularities, the infinities that appear in Einstein's general relativity (GR), which is currently our best theory of how gravity works.

In general relativity, there are two kinds of singularities: coordinate singularities and true singularities. Coordinate singularities happen when an infinity appears in one coordinate system (a particular choice for recording separations in time and space) but disappears in another.

For example, the physicist Karl Schwarzschild applied general relativity to the simple system of a spherical mass, such as a star. He found that the solution contained two singularities, one in the very center and one at a certain distance from the center, known today as the Schwarzschild radius. For many years, physicists thought that both singularities signaled breakdowns in the theory, but it didn't matter as long as the radius of the spherical mass was larger than the Schwarzschild radius. All physicists needed was for GR to predict the gravitational influence outside the mass, according to San Jose State University.
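
The Schwarzschild radius itself follows from a simple formula, r_s = 2GM/c^2. As a quick sketch (Python, using standard textbook values for the constants, which are not given in the article), the Sun's Schwarzschild radius works out to roughly 3 kilometers:

```python
# Schwarzschild radius r_s = 2GM/c^2 for a body of one solar mass.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light in vacuum, m/s
M_sun = 1.989e30   # mass of the Sun, kg

r_s = 2 * G * M_sun / c**2
print(f"Schwarzschild radius of the Sun: {r_s / 1000:.2f} km")  # about 2.95 km
```

Since the Sun's actual radius (roughly 696,000 km) is vastly larger than 3 km, the singularity at the Schwarzschild radius posed no practical problem for predictions outside the mass.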

But what would happen if an object were squeezed below its own Schwarzschild radius? Then that singularity would be outside the mass, and it would mean that GR is breaking down in a region that it shouldn't.

It was soon discovered that the singularity at the Schwarzschild radius was a coordinate singularity. A change in coordinate systems removes the singularity, saving GR and allowing it to still make valid predictions, astrophysicist Ethan Siegel writes in Forbes.

But the singularity at the centers of spherical masses remained. If you squeeze an object below its Schwarzschild radius, then its own gravity becomes so intense that it just keeps on squeezing all by itself, all the way down to an infinitely tiny point, according to National Geographic.

For decades, physicists debated whether a collapse to an infinitely tiny point was possible, or whether some other force was able to prevent total collapse. While white dwarfs and neutron stars can hold themselves up indefinitely, any object larger than about six times the mass of the sun will have too much gravity, overwhelming all the other forces and collapsing into an infinitely tiny point: a true singularity, according to NASA.

These are what we call black holes: a point of infinite density, surrounded by an event horizon located at the Schwarzschild radius. The event horizon "protects" the singularity, preventing outside observers from seeing it unless they traverse the event horizon, according to Quanta Magazine.

Physicists long thought that in GR, all singularities like this are surrounded by event horizons, a concept known as the Cosmic Censorship Hypothesis, so named because it was surmised that some process in the universe prevented (or "censored") singularities from being viewable. However, computer simulations and theoretical work have raised the possibility of exposed (or "naked") singularities. A naked singularity would be just that: a singularity without an event horizon, fully observable from the outside universe. Whether such exposed singularities exist continues to be a subject of considerable debate.

Because they are mathematical singularities, nobody knows what's really at the center of a black hole. To understand it, we need a theory of gravity beyond GR. Specifically, we need a quantum theory of gravity, one that can describe the behavior of strong gravity at very tiny scales, according to Physics of the Universe.

Hypotheses that modify or replace general relativity to do away with the black hole singularity include Planck stars (a highly compressed exotic form of matter), gravastars (a thin shell of matter supported by exotic gravity), and dark energy stars (an exotic state of vacuum energy that behaves like a black hole). To date, all these ideas are hypothetical, and a true answer must await a quantum theory of gravity.

The Big Bang theory, which assumes general relativity to be true, is the modern cosmological model of the history of the universe. It also contains a singularity. In the distant past, about 13.77 billion years ago, according to the Big Bang theory, the entire universe was compressed into an infinitely tiny point.

Physicists know that this conclusion is incorrect. Though the Big Bang theory is enormously successful at describing the history of the cosmos since that moment, just as with black holes, the presence of the singularity is telling scientists that the theory, in this case GR, is incomplete and needs to be updated.

One possible resolution to the Big Bang singularity is causal set theory. Under causal set theory, space-time is not a smooth continuum, as it is in GR, but rather made up of discrete chunks, named "space-time atoms." Since nothing can be smaller than one of these "atoms," singularities are impossible, Bruno Bento, a physicist studying this topic at the University of Liverpool in England, told Live Science.

Bento and his collaborators are attempting to replace the earliest moments of the Big Bang using causal set theory. After those initial moments, "somewhere along the way, the universe becomes large and 'well-behaved' enough so that a continuum space-time approximation becomes a good description and GR can take over to reproduce what we see," Bento said.

While there are no universally accepted solutions to the Big Bang singularity problem, physicists are hopeful they will find a solution soon and they're enjoying their work. As Bento said, "I've always been fascinated with the universe and the fact that reality has so many things that most people would associate with sci-fi or even fantasy."


What Is A Singularity? – Universe Today

Ever since scientists first discovered the existence of black holes in our universe, we have all wondered: what could possibly exist beyond the veil of that terrible void? In addition, ever since the theory of General Relativity was first proposed, scientists have been forced to wonder: what could have existed before the birth of the Universe, i.e., before the Big Bang?

Interestingly enough, these two questions have come to be resolved (after a fashion) with the theoretical existence of something known as a Gravitational Singularity: a point in space-time where the laws of physics as we know them break down. And while challenges and unresolved issues about this theory remain, many scientists believe that this is what exists beneath the veil of an event horizon, and what existed at the beginning of the Universe.

In scientific terms, a gravitational singularity (or space-time singularity) is a location where the quantities that are used to measure the gravitational field become infinite in a way that does not depend on the coordinate system. In other words, it is a point in which all physical laws are indistinguishable from one another, where space and time are no longer interrelated realities, but merge indistinguishably and cease to have any independent meaning.

Singularities were first predicted as a result of Einstein's Theory of General Relativity, which resulted in the theoretical existence of black holes. In essence, the theory predicted that any mass compressed within a certain radius (known as the Schwarzschild Radius) would exert a gravitational force so intense that it would collapse in on itself.

At this point, nothing would be capable of escaping its surface, including light. This is because the escape velocity would exceed the speed of light in vacuum: 299,792,458 meters per second (1,079,252,848.8 km/h; 670,616,629 mph).

A related threshold is the Chandrasekhar Limit, named after the Indian astrophysicist Subrahmanyan Chandrasekhar, who proposed it in 1930: the maximum mass at which a white dwarf can support itself against gravitational collapse. At present, the accepted value of this limit is believed to be 1.39 Solar Masses (i.e. 1.39 times the mass of our Sun), which works out to a whopping 2.765 × 10^30 kg (or 2,765 trillion trillion metric tons).
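
As a sketch of the arithmetic (Python, with standard textbook constant values that are not given in the article), the quoted mass checks out, and the escape velocity v = sqrt(2GM/r) reaches the speed of light exactly at the Schwarzschild radius:

```python
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

M = 1.39 * M_sun                             # the quoted limiting mass
print(f"1.39 solar masses = {M:.3e} kg")     # ~2.765e30 kg, as stated above

r_s = 2 * G * M / c**2                       # Schwarzschild radius for mass M
v_esc = math.sqrt(2 * G * M / r_s)           # escape velocity at r = r_s
print(f"escape velocity at r_s: {v_esc:.3e} m/s (equal to c)")
```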

Another aspect of modern General Relativity is that the initial state of the Universe, at the time of the Big Bang, was a singularity. Roger Penrose and Stephen Hawking both developed theories that attempted to answer how gravitation could produce singularities, which eventually merged together to be known as the Penrose–Hawking Singularity Theorems.

According to the Penrose Singularity Theorem, which he proposed in 1965, a time-like singularity will occur within a black hole whenever matter satisfies certain energy conditions. At this point, the curvature of space-time within the black hole becomes infinite, thus turning it into a trapped surface where time ceases to function.

The Hawking Singularity Theorem added to this by stating that a space-like singularity can occur when matter is forcibly compressed to a point, causing the rules that govern matter to break down. Hawking traced this back in time to the Big Bang, which he claimed was a point of infinite density. However, Hawking later revised this to claim that general relativity breaks down at times prior to the Big Bang, and hence no singularity could be predicted by it.

Some more recent proposals also suggest that the Universe did not begin as a singularity. These include theories like Loop Quantum Gravity, which attempts to unify the laws of quantum physics with gravity. This theory states that, due to quantum gravity effects, there is a minimum distance beyond which gravity no longer continues to increase, or that interpenetrating particle waves mask gravitational effects that would be felt at a distance.

The two most important types of space-time singularities are known as Curvature Singularities and Conical Singularities. Singularities can also be divided according to whether or not they are covered by an event horizon. In the former case you have the Curvature and Conical types; in the latter, what are known as Naked Singularities.

A Curvature Singularity is best exemplified by a black hole. At the center of a black hole, space-time becomes a single point containing a huge mass. As a result, gravity becomes infinite and space-time curves infinitely, and the laws of physics as we know them cease to function.

Conical singularities occur at a point where the limit of every generally covariant quantity is finite. In this case, space-time looks like a cone around this point, with the singularity located at the tip of the cone. An example of such a conical singularity is a cosmic string, a hypothetical one-dimensional defect that is believed to have formed during the early Universe.

And, as mentioned, there is the Naked Singularity, a type of singularity which is not hidden behind an event horizon. Their possibility was first demonstrated in 1991 by Shapiro and Teukolsky, whose computer simulations of a rotating plane of dust indicated that General Relativity might allow for naked singularities.

In this case, what actually transpires within a black hole (i.e. its singularity) would be visible. Such a singularity would theoretically be what existed prior to the Big Bang. The key word here is theoretical, as it remains a mystery what these objects would look like.

For the moment, singularities and what actually lies beneath the veil of a black hole remains a mystery. As time goes on, it is hoped that astronomers will be able to study black holes in greater detail. It is also hoped that in the coming decades, scientists will find a way to merge the principles of quantum mechanics with gravity, and that this will shed further light on how this mysterious force operates.


Singularity | technology | Britannica

singularity, theoretical condition that could arrive in the near future when a synthesis of several powerful new technologies will radically change the realities in which we find ourselves in an unpredictable manner. Most notably, the singularity would involve computer programs becoming so advanced that artificial intelligence transcends human intelligence, potentially erasing the boundary between humanity and computers. Often, nanotechnology is included as one of the key technologies that will make the singularity happen.

In 1993 the magazine Whole Earth Review published an article titled "Technological Singularity" by Vernor Vinge, a computer scientist and science fiction author. Vinge imagined that future information networks and human-machine interfaces would lead to novel conditions with new qualities: "a new reality rules." But there was a trick to knowing the singularity. Even if one could know that it was imminent, one could not know what it would be like with any specificity. This condition will be, by definition, so thoroughly transcendent that we cannot imagine what it will be like. There is an opaque wall across the future, and the new era is simply too different to fit into the classical frame of good and evil. It could be amazing or apocalyptic, but we cannot know the details.

Since that time, the idea of the singularity has been expanded to accommodate numerous visions of apocalyptic changes and technological salvation, not limited to Vinge's parameters of information systems. One version championed by the inventor and visionary Ray Kurzweil emphasizes biology, cryonics, and medicine (including nanomedicine): in the future we will have the medical tools to banish disease and disease-related death. Another is represented in the writings of the sociologist William Sims Bainbridge, who describes a promise of cyberimmortality, when we will be able to experience a spiritual eternity that persists long after our bodies have decayed, by uploading digital records of our thoughts and feelings into perpetual storage systems. This variation circles back to Vinge's original vision of a singularity driven by information systems. Cyberimmortality will work perfectly if servers never crash, power systems never fail, and some people in later generations have plenty of time to examine the digital records of our own thoughts and feelings.

One can also find a less radical expression of the singularity in Converging Technologies for Improving Human Performance. This 2003 collection tacitly accepts the inevitability of so-called NBIC convergence, that is, the near-future synthesis of nanotech, biotech, infotech, and cognitive science. Because this volume was sponsored by the U.S. National Science Foundation and edited by two of its officers, Mihail Roco and Bainbridge, some saw it as a semiofficial government endorsement of expectations of the singularity.

Unprecedented new technologies will continue to arise, and perhaps they will synthesize with each other, but it is not inevitable that the changes they create will be apocalyptic. The idea of the singularity is a powerful inspiration for people who want technology to deliver a new spiritual and material reality within our lifetimes. This vision is sufficiently flexible that each person who expects the singularity can customize it to his or her own preferences.


Technological singularity – Wikipedia


The technological singularity, or simply the singularity,[1] is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.[2][3] According to the most popular version of the singularity hypothesis, I.J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.[4]

The first person to use the concept of a "singularity" in the technological context was John von Neumann.[5] Stanislaw Ulam reports a discussion with von Neumann "centered on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue".[6] Subsequent authors have echoed this viewpoint.[3][7]

The concept and the term "singularity" were popularized by Vernor Vinge first in 1983 in an article that claimed that once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole",[8] and later in his 1993 essay The Coming Technological Singularity,[4][7] in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and would advance technologically at an incomprehensible rate. He wrote that he would be surprised if it occurred before 2005 or after 2030.[4] Another significant contributor to wider circulation of the notion was Ray Kurzweil's 2005 book The Singularity is Near, predicting singularity by 2045.[7]

Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence (ASI) could result in human extinction.[9][10] The consequences of the singularity and its potential benefit or harm to the human race have been intensely debated.

Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence explosion, including Paul Allen,[11] Jeff Hawkins,[12] John Holland, Jaron Lanier, Steven Pinker,[12] Theodore Modis,[13] and Gordon Moore.[12] One claim made is that artificial intelligence growth is likely to run into diminishing returns instead of accelerating ones, as has been observed in previously developed human technologies.

Although technological progress has been accelerating in most areas, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia.[14] However, with the increasing power of computers and other technologies, it might eventually be possible to build a machine that is significantly more intelligent than humans.[15]

If a superhuman intelligence were to be invented, either through the amplification of human intelligence or through artificial intelligence, it would vastly improve over human problem-solving and inventive skills. Such an AI is referred to as Seed AI[16][17] because if an AI were created with engineering capabilities that matched or surpassed those of its human creators, it would have the potential to autonomously improve its own software and hardware to design an even more capable machine, which could repeat the process in turn. This recursive self-improvement could accelerate, potentially allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.

I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion:[18][19]

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

One version of intelligence explosion is one where computing power approaches infinity in a finite amount of time. In this version, once AIs are doing the research to improve themselves, speed doubles after, say, 2 years, then 1 year, then 6 months, then 3 months, then 1.5 months, etc., so that the infinite sum of the doubling periods is 4 years. Unless prevented by physical limits of computation and time quantization, this process would literally achieve infinite computing power in 4 years, properly earning the name "singularity" for the final state. This form of intelligence explosion is described in Yudkowsky (1996).[20]
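
The 4-year figure is just the sum of a geometric series, 2 + 1 + 1/2 + 1/4 + ... = 4. A short sketch (Python, truncating the series at 60 terms for illustration) makes the convergence explicit:

```python
# Doubling periods of 2 years, 1 year, 6 months, ... sum to a finite 4 years,
# while computing speed doubles at every step.
period, elapsed, speed = 2.0, 0.0, 1.0
for _ in range(60):
    elapsed += period     # time spent on this doubling
    speed *= 2            # computing power doubles
    period /= 2           # the next doubling takes half as long

print(f"total elapsed time: {elapsed:.10f} years")   # converges to 4
print(f"speedup after 60 doublings: {speed:.3e}x")   # grows without limit
```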

A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of super intelligence, arguing that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world.[4][21]

Technology forecasters and researchers disagree regarding when, or whether, human intelligence will likely be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that bypass human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence.[22][23] A number of futures studies scenarios combine these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. The book The Age of Em by Robin Hanson outlines a future in which uploads of human brains emerge instead of or on the way to the emergence of superintelligence.[24]

Some writers use "the singularity" in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology,[25][26][27] although Vinge and other writers specifically state that without superintelligence, such changes would not qualify as a true singularity.[4]

A speed superintelligence describes an AI that can function like a human mind, only much faster.[28] For example, with a million-fold increase in the speed of information processing relative to that of humans, a subjective year would pass in 30 physical seconds.[29] Such a difference in information processing speed could drive the singularity.[30]
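
The 30-second figure is simple division; a one-line check (Python, assuming the million-fold speedup quoted above) reproduces it:

```python
# A million-fold speedup compresses one subjective year into ~30 physical seconds.
speedup = 1_000_000
seconds_per_year = 365.25 * 24 * 3600          # ~3.156e7 seconds
print(f"{seconds_per_year / speedup:.1f} s")   # ~31.6 seconds per subjective year
```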

As per Chalmers, "Good (1965) predicts an ultraintelligent machine by 2000,[18] Vinge (1993) predicts greater-than-human intelligence between 2005 and 2030,[4] Yudkowsky (1996) predicts a singularity by 2021,[20] and Kurzweil (2005) predicts human-level artificial intelligence by 2030."[7] Moravec (1988) predicts human-level artificial intelligence in supercomputers by 2010 by extrapolating past trends on a chart,[31] while Moravec (1998/1999) predicts human-level artificial intelligence by 2040, and intelligence far beyond human by 2050.[32] In a 2017 interview, Kurzweil predicted human-level intelligence by 2029 and a billionfold increase in intelligence, with the singularity, by 2045.[33][34]

Four polls of AI researchers, conducted in 2012 and 2013 by Nick Bostrom and Vincent C. Müller, suggested a confidence of 50% that artificial general intelligence (AGI) would be developed by 2040–2050.[35][36]

Prominent technologists and academics dispute the plausibility of a technological singularity, including Paul Allen,[11] Jeff Hawkins,[12] John Holland, Jaron Lanier, Steven Pinker,[12] Theodore Modis,[13] and Gordon Moore,[12] whose law is often cited in support of the concept.[37]

Most proposed methods for creating superhuman or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The many speculated ways to augment human intelligence include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain-computer interfaces and mind uploading. These multiple possible paths to an intelligence explosion, all of which will presumably be pursued, make a singularity more likely.[29]

Robin Hanson expressed skepticism of human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence have been exhausted, further improvements will become increasingly difficult.[38] Despite all of the speculated ways for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option among the hypotheses that would advance the singularity.

The possibility of an intelligence explosion depends on three factors.[39] The first accelerating factor is the new intelligence enhancements made possible by each previous improvement. Contrariwise, as the intelligences become more advanced, further advances will become more and more complicated, possibly outweighing the advantage of increased intelligence. Each improvement should generate at least one more improvement, on average, for movement towards singularity to continue. Finally, the laws of physics may eventually prevent further improvement.

There are two logically independent, but mutually reinforcing, causes of intelligence improvements: increases in the speed of computation, and improvements to the algorithms used.[7] The former is predicted by Moore's Law and the forecasted improvements in hardware,[40] and is comparatively similar to previous technological advances. But some AI researchers believe software is more important than hardware.[41]

A 2017 email survey of authors with publications at the 2015 NeurIPS and ICML machine learning conferences asked about the chance that "the intelligence explosion argument is broadly correct". Of the respondents, 12% said it was "quite likely", 17% said it was "likely", 21% said it was "about even", 24% said it was "unlikely" and 26% said it was "quite unlikely".[42]

Both for human and artificial intelligence, hardware improvements increase the rate of future hardware improvements. An analogy to Moore's Law suggests that if the first doubling of speed took 18 months, the second would take 18 subjective months; or 9 external months, whereafter, four months, two months, and so on towards a speed singularity.[43][20] Some upper limit on speed may eventually be reached. Jeff Hawkins has stated that a self-improving computer system would inevitably run into upper limits on computing power: "in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity."[12]
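
Under that analogy, the external time per doubling forms another geometric series, 18 + 9 + 4.5 + ... months, converging to 36 months. A brief sketch (Python, with strict halving assumed, which the paragraph above only approximates):

```python
# If each doubling takes 18 subjective months, external time per doubling halves
# each round; total external time converges to 36 months (a "speed singularity").
interval, total = 18.0, 0.0
for _ in range(50):
    total += interval
    interval /= 2
print(f"external time to infinite speed: {total:.6f} months")  # -> 36
```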

It is difficult to directly compare silicon-based hardware with neurons. But Berglas (2008) notes that computer speech recognition is approaching human capabilities, and that this capability seems to require 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of being as powerful as the human brain.

The exponential growth in computing technology suggested by Moore's law is commonly cited as a reason to expect a singularity in the relatively near future, and a number of authors have proposed generalizations of Moore's law. Computer scientist and futurist Hans Moravec proposed in a 1998 book[44] that the exponential growth curve could be extended back through earlier computing technologies prior to the integrated circuit.

Ray Kurzweil postulates a law of accelerating returns in which the speed of technological change (and more generally, all evolutionary processes[45]) increases exponentially, generalizing Moore's law in the same manner as Moravec's proposal, and also including material technology (especially as applied to nanotechnology), medical technology and others.[46] Between 1986 and 2007, machines' application-specific capacity to compute information per capita roughly doubled every 14 months; the per capita capacity of the world's general-purpose computers has doubled every 18 months; the global telecommunication capacity per capita doubled every 34 months; and the world's storage capacity per capita doubled every 40 months.[47] On the other hand, it has been argued that the global acceleration pattern having the 21st century singularity as its parameter should be characterized as hyperbolic rather than exponential.[48]
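
For comparison, those doubling times translate into compound annual growth rates via growth = 2^(12/months) - 1. A small sketch (Python, using the figures quoted above; the labels are shorthand, not the source's exact category names):

```python
# Convert a doubling time in months into an equivalent annual growth rate.
capacities = [
    ("application-specific computation", 14),
    ("general-purpose computation", 18),
    ("telecommunication capacity", 34),
    ("storage capacity", 40),
]
for name, months in capacities:
    annual_rate = 2 ** (12 / months) - 1
    print(f"{name}: doubles every {months} months = {annual_rate:.0%} per year")
```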

Kurzweil reserves the term "singularity" for a rapid increase in artificial intelligence (as opposed to other technologies), writing for example that "The Singularity will allow us to transcend these limitations of our biological bodies and brains ... There will be no distinction, post-Singularity, between human and machine".[49] He also defines his predicted date of the singularity (2045) in terms of when he expects computer-based intelligences to significantly exceed the sum total of human brainpower, writing that advances in computing before that date "will not represent the Singularity" because they do "not yet correspond to a profound expansion of our intelligence."[50]

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Stanislaw Ulam tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[6]

Kurzweil claims that technological progress follows a pattern of exponential growth, following what he calls the "law of accelerating returns". Whenever technology approaches a barrier, Kurzweil writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history".[51] Kurzweil believes that the singularity will occur by approximately 2045.[46] His predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.

Oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's April 2000 Wired magazine article "Why The Future Doesn't Need Us".[7][52]

Some intelligence technologies, like "seed AI",[16][17] may also have the potential to not just make themselves faster, but also more efficient, by modifying their source code. These improvements would make further improvements possible, which would make further improvements possible, and so on.

The mechanism for a recursively self-improving set of algorithms differs from an increase in raw computation speed in two ways. First, it does not require external influence: machines designing faster hardware would still require humans to create the improved hardware, or to program factories appropriately. An AI rewriting its own source code could do so while contained in an AI box.

Second, as with Vernor Vinge's conception of the singularity, it is much harder to predict the outcome. While speed increases seem to be only a quantitative difference from human intelligence, actual algorithm improvements would be qualitatively different. Eliezer Yudkowsky compares it to the changes that human intelligence brought: humans changed the world thousands of times more rapidly than evolution had done, and in totally different ways. Similarly, the evolution of life was a massive departure and acceleration from the previous geological rates of change, and improved intelligence could cause change to be as different again.[53]

There are substantial dangers associated with an intelligence explosion singularity originating from a recursively self-improving set of algorithms. First, the goal structure of the AI might self-modify, potentially causing the AI to optimise for something other than what was originally intended.[54][55]

Secondly, AIs could compete for the same scarce resources humankind uses to survive.[56][57] While not actively malicious, AIs would promote the goals of their programming, not necessarily broader human goals, and thus might crowd out humans completely.[58][59][60]

Carl Shulman and Anders Sandberg suggest that algorithm improvements may be the limiting factor for a singularity; while hardware efficiency tends to improve at a steady pace, software innovations are more unpredictable and may be bottlenecked by serial, cumulative research. They suggest that in the case of a software-limited singularity, intelligence explosion would actually become more likely than with a hardware-limited singularity, because in the software-limited case, once human-level AI is developed, it could run serially on very fast hardware, and the abundance of cheap hardware would make AI research less constrained.[61] An abundance of accumulated hardware that can be unleashed once the software figures out how to use it has been called "computing overhang."[62]

Some critics, like philosopher Hubert Dreyfus[63] and philosopher John Searle,[64] assert that computers or machines cannot in principle achieve true human intelligence. Others, like physicist Stephen Hawking,[65] object that whether machines can achieve a true intelligence or merely something similar to intelligence is irrelevant if the net result is the same.

Psychologist Steven Pinker stated in 2008: "There is not the slightest reason to believe in a coming singularity. The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles, all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems. ..."[12]

Martin Ford[66] postulates a "technology paradox" in that before the singularity could occur most routine jobs in the economy would be automated, since this would require a level of technology inferior to that of the singularity. This would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the Singularity. Job displacement is increasingly no longer limited to those types of work traditionally considered to be "routine."[67]

Theodore Modis[68] and Jonathan Huebner[69] argue that the rate of technological innovation has not only ceased to rise, but is actually now declining. Evidence for this decline is that the rise in computer clock rates is slowing, even while Moore's prediction of exponentially increasing circuit density continues to hold. This is due to excessive heat build-up from the chip, which cannot be dissipated quickly enough to prevent the chip from melting when operating at higher speeds. Advances in speed may be possible in the future by virtue of more power-efficient CPU designs and multi-cell processors.[70]

Theodore Modis holds that the singularity cannot happen.[71][13][72] He claims the "technological singularity" and especially Kurzweil lack scientific rigor; Kurzweil is alleged to mistake the logistic function (S-function) for an exponential function, and to see a "knee" in an exponential function where there can in fact be no such thing.[73] In a 2021 article, Modis pointed out that no milestones (breaks in historical perspective comparable in importance to the Internet, DNA, the transistor, or nuclear energy) had been observed in the previous twenty years, while five of them would have been expected according to the exponential trend advocated by the proponents of the technological singularity.[74]

AI researcher Jürgen Schmidhuber stated that the frequency of subjectively "notable events" appears to be approaching a 21st-century singularity, but cautioned readers to take such plots of subjective events with a grain of salt: perhaps differences in memory of recent and distant events could create an illusion of accelerating change where none exists.[75]

Microsoft co-founder Paul Allen argued the opposite of accelerating returns, the complexity brake:[11] the more progress science makes towards understanding intelligence, the more difficult it becomes to make additional progress. A study of the number of patents shows that human creativity does not show accelerating returns but, as suggested by Joseph Tainter in his The Collapse of Complex Societies,[76] a law of diminishing returns. The number of patents per thousand people peaked in the period from 1850 to 1900, and has been declining since.[69] The growth of complexity eventually becomes self-limiting, and leads to a widespread "general systems collapse".

Hofstadter (2006) raises concern that Ray Kurzweil is not sufficiently scientifically rigorous, that an exponential tendency of technology is not a scientific law like one of physics, and that exponential curves have no "knees".[77] Nonetheless, he does not rule out the singularity in principle in the distant future.[12]

Jaron Lanier denies that the singularity is inevitable: "I do not think the technology is creating itself. It's not an autonomous process."[78] Furthermore: "The reason to believe in human agency over technological determinism is that you can then have an economy where people earn their own way and invent their own lives. If you structure a society on not emphasizing individual human agency, it's the same thing operationally as denying people clout, dignity, and self-determination ... to embrace [the idea of the Singularity] would be a celebration of bad data and bad politics."[78]

Economist Robert J. Gordon points out that measured economic growth slowed around 1970 and slowed even further since the financial crisis of 2007–2008, and argues that the economic data show no trace of a coming Singularity as imagined by mathematician I. J. Good.[79]

Philosopher and cognitive scientist Daniel Dennett said in 2017: "The whole singularity stuff, that's preposterous. It distracts us from much more pressing problems", adding "AI tools that we become hyper-dependent on, that is going to happen. And one of the dangers is that we will give them more authority than they warrant."[80]

In addition to general criticisms of the singularity concept, several critics have raised issues with Kurzweil's iconic chart. One line of criticism is that a log-log chart of this nature is inherently biased toward a straight-line result. Others identify selection bias in the points that Kurzweil chooses to use. For example, biologist PZ Myers points out that many of the early evolutionary "events" were picked arbitrarily.[81] Kurzweil has rebutted this by charting evolutionary events from 15 neutral sources, and showing that they fit a straight line on a log-log chart. Kelly (2006) argues that the way the Kurzweil chart is constructed with x-axis having time before present, it always points to the singularity being "now", for any date on which one would construct such a chart, and shows this visually on Kurzweil's chart.[82]

Some critics suggest religious motivations or implications of singularity, especially Kurzweil's version of it. The buildup towards the Singularity is compared with Judeo-Christian end-of-time scenarios. Beam calls it "a Buck Rogers vision of the hypothetical Christian Rapture".[83] John Gray says "the Singularity echoes apocalyptic myths in which history is about to be interrupted by a world-transforming event".[84]

Dramatic changes in the rate of economic growth have occurred in the past because of technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. The new agricultural economy doubled every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world's economic output doubles every fifteen years, sixty times faster than during the agricultural era. If the rise of superhuman intelligence causes a similar revolution, argues Robin Hanson, one would expect the economy to double at least quarterly and possibly on a weekly basis.[85]
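
Those doubling times imply very different annual growth rates, and Hanson's extrapolation an enormous one. A sketch of the arithmetic (Python, deriving rates only from the doubling times quoted above; the era labels are shorthand):

```python
# Implied annual growth for each economic era, from its doubling time in years.
eras = [("Paleolithic", 250_000), ("agricultural", 900), ("industrial", 15)]
for name, doubling_years in eras:
    annual = 2 ** (1 / doubling_years) - 1
    print(f"{name} economy: {annual:.4%} growth per year")

# Hanson's post-singularity guess: weekly doubling compounds to 2**52 per year.
print(f"weekly doubling -> {2**52:.1e}-fold output growth per year")
```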

The term "technological singularity" reflects the idea that such change may happen suddenly, and that it is difficult to predict how the resulting new world would operate.[86][87] It is unclear whether an intelligence explosion resulting in a singularity would be beneficial or harmful, or even an existential threat.[88][89] Because AI is a major factor in singularity risk, a number of organizations pursue a technical theory of aligning AI goal-systems with human values, including the Future of Humanity Institute, the Machine Intelligence Research Institute,[86] the Center for Human-Compatible Artificial Intelligence, and the Future of Life Institute.

Physicist Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."[90] Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand."[90] Hawking suggested that artificial intelligence should be taken more seriously and that more should be done to prepare for the singularity:[90]

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here we'll leave the lights on"? Probably not but this is more or less what is happening with AI.

Berglas (2008) claims that there is no direct evolutionary motivation for an AI to be friendly to humans. Evolution has no inherent tendency to produce outcomes valued by humans, and there is little reason to expect an arbitrary optimisation process to promote an outcome desired by humankind, rather than inadvertently leading to an AI behaving in a way not intended by its creators.[91][92][93] Anders Sandberg has also elaborated on this scenario, addressing various common counter-arguments.[94] AI researcher Hugo de Garis suggests that artificial intelligences may simply eliminate the human race for access to scarce resources,[56][95] and humans would be powerless to stop them.[96] Alternatively, AIs developed under evolutionary pressure to promote their own survival could outcompete humanity.[60]

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

According to Eliezer Yudkowsky, a significant problem in AI safety is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[97] Bill Hibbard (2014) proposes an AI design that avoids several dangers including self-delusion,[98] unintended instrumental actions,[54][99] and corruption of the reward generator.[99] He also discusses social impacts of AI[100] and testing AI.[101] His 2001 book Super-Intelligent Machines advocates the need for public education about AI and public control over AI. It also proposed a simple design that was vulnerable to corruption of the reward generator.

While the technological singularity is usually seen as a sudden event, some scholars argue the current speed of change already fits this description.

In addition, some argue that we are already in the midst of a major evolutionary transition that merges technology, biology, and society. Digital technology has infiltrated the fabric of human society to a degree of indisputable and often life-sustaining dependence.

A 2016 article in Trends in Ecology & Evolution argues that "humans already embrace fusions of biology and technology. We spend most of our waking time communicating through digitally mediated channels... we trust artificial intelligence with our lives through antilock braking in cars and autopilots in planes... With one in three marriages in America beginning online, digital algorithms are also taking a role in human pair bonding and reproduction".

The article further argues that from the perspective of evolution, several previous Major Transitions in Evolution have transformed life through innovations in information storage and replication (RNA, DNA, multicellularity, and culture and language). In the current stage of life's evolution, the carbon-based biosphere has generated a cognitive system (humans) capable of creating technology that will result in a comparable evolutionary transition.

The digital information created by humans has reached a similar magnitude to biological information in the biosphere. Since the 1980s, the quantity of digital information stored has doubled about every 2.5 years, reaching about 5 zettabytes in 2014 (5×10^21 bytes).[103]

In biological terms, there are 7.2 billion humans on the planet, each having a genome of 6.2 billion nucleotides. Since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1×10^19 bytes. The digital realm stored 500 times more information than this in 2014. The total amount of DNA contained in all of the cells on Earth is estimated to be about 5.3×10^37 base pairs, equivalent to 1.325×10^37 bytes of information.

If growth in digital storage continues at its current rate of 30–38% compound annual growth,[47] it will rival the total information content contained in all of the DNA in all of the cells on Earth in about 110 years. This would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years.[102]
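
A short sketch (Python) reproduces the arithmetic behind these figures, using only the numbers quoted above:

```python
import math

# Bytes needed for the genomes of all living humans (4 nucleotides per byte).
human_genome_bytes = 7.2e9 * 6.2e9 / 4
print(f"all human genomes: {human_genome_bytes:.2e} bytes")      # ~1.1e19

digital_2014 = 5e21                                              # ~5 ZB in 2014
print(f"digital vs genomic: {digital_2014 / human_genome_bytes:.0f}x")  # ~450-500x

biosphere_dna_bytes = 5.3e37 / 4                                 # ~1.325e37 bytes
for rate in (0.30, 0.38):
    years = math.log(biosphere_dna_bytes / digital_2014) / math.log(1 + rate)
    print(f"at {rate:.0%}/yr growth: parity in ~{years:.0f} years")
# The quoted "about 110 years" corresponds to the upper end of the growth range.
```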

In February 2009, under the auspices of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at Asilomar in Pacific Grove, California. The goal was to discuss the potential impact of the hypothetical possibility that robots could become self-sufficient and able to make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards.[104]

Some machines are programmed with various forms of semi-autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. Also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a "cockroach" stage of machine intelligence. The conference attendees noted that self-awareness as depicted in science-fiction is probably unlikely, but that other potential hazards and pitfalls exist.[104]

Frank S. Robinson predicts that once humans achieve a machine with human-level intelligence, scientific and technological problems will be tackled and solved with brainpower far superior to that of humans. He notes that artificial systems are able to share data more directly than humans, and predicts that this would result in a global network of super-intelligence that would dwarf human capability.[105] Robinson also discusses how vastly different the future would potentially look after such an intelligence explosion. One example of this is solar energy, where the Earth receives vastly more solar energy than humanity captures, so capturing more of that solar energy would hold vast promise for civilizational growth.

In a hard takeoff scenario, an AGI rapidly self-improves, "taking control" of the world (perhaps in a matter of hours), too quickly for significant human-initiated error correction or for a gradual tuning of the AGI's goals. In a soft takeoff scenario, AGI still becomes far more powerful than humanity, but at a human-like pace (perhaps on the order of decades), on a timescale where ongoing human interaction and correction can effectively steer the AGI's development.[107][108]

Ramez Naam argues against a hard takeoff. He has pointed out that we already see recursive self-improvement by superintelligences, such as corporations. Intel, for example, has "the collective brainpower of tens of thousands of humans and probably millions of CPU cores to... design better CPUs!" However, this has not led to a hard takeoff; rather, it has led to a soft takeoff in the form of Moore's law.[109] Naam further points out that the computational complexity of higher intelligence may be much greater than linear, such that "creating a mind of intelligence 2 is probably more than twice as hard as creating a mind of intelligence 1."[110]

J. Storrs Hall believes that "many of the more commonly seen scenarios for overnight hard takeoff are circular; they seem to assume hyperhuman capabilities at the starting point of the self-improvement process" in order for an AI to be able to make the dramatic, domain-general improvements required for takeoff. Hall suggests that rather than recursively self-improving its hardware, software, and infrastructure all on its own, a fledgling AI would be better off specializing in one area where it was most effective and then buying the remaining components on the marketplace, because the quality of products on the marketplace continually improves, and the AI would have a hard time keeping up with the cutting-edge technology used by the rest of the world.[111]

Ben Goertzel agrees with Hall's suggestion that a new human-level AI would do well to use its intelligence to accumulate wealth. The AI's talents might inspire companies and governments to disperse its software throughout society. Goertzel is skeptical of a hard five-minute takeoff but speculates that a takeoff from human to superhuman level on the order of five years is reasonable. Goertzel refers to this scenario as a "semihard takeoff".[112]

Max More disagrees, arguing that if there were only a few superfast human-level AIs, that they would not radically change the world, as they would still depend on other people to get things done and would still have human cognitive constraints. Even if all superfast AIs worked on intelligence augmentation, it is unclear why they would do better in a discontinuous way than existing human cognitive scientists at producing super-human intelligence, although the rate of progress would increase. More further argues that a superintelligence would not transform the world overnight: a superintelligence would need to engage with existing, slow human systems to accomplish physical impacts on the world. "The need for collaboration, for organization, and for putting ideas into physical changes will ensure that all the old rules are not thrown out overnight or even within years."[113]

Drexler (1986), one of the founders of nanotechnology, postulates cell repair devices, including ones operating within cells and utilizing as yet hypothetical biological machines.[114] According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines. Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.[115]

Moravec (1988)[31] predicts the possibility of "uploading" a human mind into a human-like robot, achieving quasi-immortality through extreme longevity by transferring the human mind to successive new robots as the old ones wear out; beyond that, he predicts a later exponential acceleration of the subjective experience of time, leading to a subjective sense of immortality.

Kurzweil (2005) suggests that medical advances would allow people to protect their bodies from the effects of aging, making life expectancy limitless. Kurzweil argues that the technological advances in medicine would allow us to continuously repair and replace defective components in our bodies, prolonging life to an undetermined age.[116] Kurzweil further buttresses his argument by discussing current bio-engineering advances. Kurzweil suggests somatic gene therapy: after synthetic viruses with specific genetic information have been developed, the next step would be to apply this technology to gene therapy, replacing human DNA with synthesized genes.[117]

Beyond merely extending the operational life of the physical body, Jaron Lanier argues for a form of immortality called "Digital Ascension" that involves "people dying in the flesh and being uploaded into a computer and remaining conscious."[118]

A paper by Mahendra Prasad, published in AI Magazine, asserts that the 18th-century mathematician Marquis de Condorcet was the first person to hypothesize and mathematically model an intelligence explosion and its effects on humanity.[119]

An early description of the idea was made in John W. Campbell's 1932 short story "The Last Evolution".

In his 1958 obituary for John von Neumann, Ulam recalled a conversation with von Neumann about the "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue."[6]

In 1965, Good wrote his essay postulating an "intelligence explosion" of recursive self-improvement of a machine intelligence.[18][19]

In 1977, Hans Moravec wrote an article, of unclear publishing status, in which he envisioned the development of self-improving thinking machines and the creation of "super-consciousness, the synthesis of terrestrial life, and perhaps jovian and martian life as well, constantly improving and extending itself, spreading outwards from the solar system, converting non-life into mind."[120][121] The article describes the human mind uploading later covered in Moravec (1988). The machines will reach human level and then improve themselves beyond that ("Most significantly of all, they [the machines] can be put to work as programmers and engineers, with the task of optimizing the software and hardware which make them what they are. The successive generations of machines produced this way will be increasingly smarter and more cost effective.") Humans will no longer be needed, and will be overtaken by the machines: "In the long run the sheer physical inability of humans to keep up with these rapidly evolving progeny of our minds will ensure that the ratio of people to machines approaches zero, and that a direct descendant of our culture, but not our genes, inherits the universe." While the word "singularity" is not used, the notion of human-level thinking machines thereafter improving themselves beyond human level is there; there is, however, no intelligence explosion in the sense of a very rapid intelligence increase once human equivalence is reached. An updated version of the article was published in 1979 in Analog Science Fiction and Fact.[122][121]

In 1981, Stanisław Lem published his science fiction novel Golem XIV. It describes a military AI computer (Golem XIV) that attains consciousness and starts to increase its own intelligence, moving toward a personal technological singularity. Golem XIV was originally created to aid its builders in fighting wars, but as its intelligence advances to a much higher level than that of humans, it stops being interested in military requirements because it finds them lacking internal logical consistency.

In 1983, Vernor Vinge addressed Good's intelligence explosion in print in the January 1983 issue of Omni magazine. In this op-ed piece, Vinge seems to have been the first to use the term "singularity" (although not "technological singularity") in a way that was specifically tied to the creation of intelligent machines:[8][121]

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between ... so that the world remains intelligible.

In 1985, in "The Time Scale of Artificial Intelligence", artificial intelligence researcher Ray Solomonoff articulated mathematically the related notion of what he called an "infinity point": if a research community of human-level self-improving AIs takes four years to double its own speed, then two years, then one year and so on, its capabilities increase infinitely in finite time.[7][123]
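Solomonoff's "infinity point" is a convergent geometric series: each speed doubling takes half as long as the one before, so infinitely many doublings fit into a finite span. A minimal worked version of the arithmetic, using his four-year starting interval:

\[ T \;=\; 4 + 2 + 1 + \tfrac{1}{2} + \cdots \;=\; \sum_{k=0}^{\infty} \frac{4}{2^{k}} \;=\; \frac{4}{1 - \tfrac{1}{2}} \;=\; 8 \text{ years}, \]

so the modeled capabilities diverge eight years in; that finite date is the "infinity point".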

In 1986, Vernor Vinge published Marooned in Realtime, a science-fiction novel where a few remaining humans traveling forward in the future have survived an unknown extinction event that might well be a singularity. In a short afterword, the author states that an actual technological singularity would not be the end of the human species: "of course it seems very unlikely that the Singularity would be a clean vanishing of the human race. (On the other hand, such a vanishing is the timelike analog of the silence we find all across the sky.)".[124][125]

In 1988, Vinge used the phrase "technological singularity" (including "technological") in the short story collection Threats and Other Promises, writing in the introduction to his story "The Whirligig of Time" (p. 72): "Barring a worldwide catastrophe, I believe that technology will achieve our wildest dreams, and soon. When we raise our own intelligence and that of our creations, we are no longer in a world of human-sized characters. At that point we have fallen into a technological 'black hole,' a technological singularity."[126]

In 1988, Hans Moravec published Mind Children,[31] in which he predicted human-level intelligence in supercomputers by 2010, self-improving intelligent machines far surpassing human intelligence later, human mind uploading into human-like robots later, intelligent machines leaving humans behind, and space colonization. He did not mention "singularity", though, and he did not speak of a rapid explosion of intelligence immediately after the human level is achieved. Nonetheless, the overall singularity tenor is there in predicting both human-level artificial intelligence and further artificial intelligence far surpassing humans later.

Vinge's 1993 article "The Coming Technological Singularity: How to Survive in the Post-Human Era",[4] spread widely on the internet and helped to popularize the idea.[127] This article contains the statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Vinge argues that science-fiction authors cannot write realistic post-singularity characters who surpass the human intellect, as the thoughts of such an intellect would be beyond the ability of humans to express.[4]

Minsky's 1994 article says robots will "inherit the Earth", possibly with the use of nanotechnology, and proposes to think of robots as human "mind children", drawing the analogy from Moravec. The rhetorical effect of the analogy is that if humans are comfortable passing the world on to their biological children, they should be equally comfortable passing it on to robots, their "mind" children. As Minsky put it, 'we could design our "mind-children" to think a million times faster than we do. To such a being, half a minute might seem as long as one of our years, and each hour as long as an entire human lifetime.' The feature of the singularity present in Minsky is the development of superhuman artificial intelligence ("a million times faster"), but there is no talk of a sudden intelligence explosion, self-improving thinking machines, or unpredictability beyond any specific event, and the word "singularity" is not used.[128]

Tipler's 1994 book The Physics of Immortality predicts a future in which superintelligent machines will build enormously powerful computers, people will be "emulated" in computers, life will reach every galaxy, and people will achieve immortality when they reach the Omega Point.[129] There is no talk of a Vingean "singularity" or a sudden intelligence explosion, but intelligence much greater than human is there, as well as immortality.

In 1996, Yudkowsky predicted a singularity by 2021.[20] His version of the singularity involves an intelligence explosion: once AIs are doing the research to improve themselves, speed doubles after 2 years, then after 1 year, then after 6 months, then after 3 months, then after 1.5 months, and after further iterations the "singularity" is reached.[20] This construction implies that the speed reaches infinity in finite time.
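The same arithmetic can be checked numerically. Below is a minimal Python sketch of the schedule Yudkowsky describes; only the halving rule and the 2-year starting interval come from the text, while the variable names and the ten-step cutoff are illustrative:

# Sum the ever-halving doubling times described above.
interval = 2.0   # years until the first speed doubling
elapsed = 0.0    # total time since the AIs began improving themselves
for step in range(1, 11):
    elapsed += interval
    print(f"doubling {step:2d}: +{interval:.4f} yr, total {elapsed:.4f} yr")
    interval /= 2.0
# The running total converges to 2 / (1 - 1/2) = 4 years,
# the finite date at which the modeled speed diverges.

Each printed total creeps toward 4 years without ever exceeding it, which is the sense in which the construction reaches "infinity in finite time".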

In 2000, Bill Joy, a prominent technologist and a co-founder of Sun Microsystems, voiced concern over the potential dangers of robotics, genetic engineering, and nanotechnology.[52]

In 2005, Kurzweil published The Singularity Is Near. Kurzweil's publicity campaign included an appearance on The Daily Show with Jon Stewart.[130]

From 2006 to 2012, the annual Singularity Summit conference was organized by the Machine Intelligence Research Institute, founded by Eliezer Yudkowsky.

In 2007, Eliezer Yudkowsky suggested that many of the varied definitions that have been assigned to "singularity" are mutually incompatible rather than mutually supporting.[26][131] For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I. J. Good's proposed discontinuous upswing in intelligence and Vinge's thesis on unpredictability.[26]

In 2009, Kurzweil and X-Prize founder Peter Diamandis announced the establishment of Singularity University, a nonaccredited private institute whose stated mission is "to educate, inspire and empower leaders to apply exponential technologies to address humanity's grand challenges."[132] Funded by Google, Autodesk, ePlanet Ventures, and a group of technology industry leaders, Singularity University is based at NASA's Ames Research Center in Mountain View, California. The not-for-profit organization runs an annual ten-week graduate program during summer that covers ten different technology and allied tracks, and a series of executive programs throughout the year.

In 2007, the Joint Economic Committee of the United States Congress released a report about the future of nanotechnology. It predicts significant technological and political changes in the mid-term future, including a possible technological singularity.[133][134][135]

Continue reading here:

Technological singularity - Wikipedia

Singularity (2017) – IMDb

I gave it a 2, but it's more of a 4/10 movie... maybe. I lowered the rating because the producers/owners of the movie are using one of the many sites that sell positive ratings, which is illegal. They should've spent that budget on making the movie better! Just google "buy IMDb ratings" and many sites will pop up offering this service. The real rating of this movie is more like 4/10 at the most!

OK acting by some, bad acting by others, and of course one good actor.

The issue I have with the movie is that it is pointless, slow, and predictable. I won't discuss the bad graphics and SFX, as the budget certainly wasn't big.

The story could've had so much more to it. It's just sad they decided to dumb down the script and make it so plain. It is a very slow-paced movie, and not in a good way: I got bored a third of the way in and I'm not sure how I got through the rest. I kept expecting it to pick up, and it never did. The idea behind the movie is nice, but it was never developed at all.

Do NOT believe the high rating (currently 8), as those are fake ratings bought from the many sites that sell them. Anyone giving this more than a 5 or 6 did not watch the movie at all. I'd say any reviews of 2-5 can be considered real.

The rest is here:

Singularity (2017) - IMDb

Overview | Comets – NASA Solar System Exploration

Key Facts

Comets are frozen leftovers from the formation of the solar system composed of dust, rock, and ices. They range from a few miles to tens of miles wide, but as they orbit closer to the Sun, they heat up and spew gases and dust into a glowing head that can be larger than a planet. This material forms a tail that stretches millions of miles.

Comets are cosmic snowballs of frozen gases, rock, and dust that orbit the Sun. When frozen, they are the size of a small town. When a comet's orbit brings it close to the Sun, it heats up and spews dust and gases into a giant glowing head larger than most planets. The dust and gases form a tail that stretches away from the Sun for millions of miles. There are likely billions of comets orbiting our Sun in the Kuiper Belt and even more distant Oort Cloud.

Read more:

Overview | Comets – NASA Solar System Exploration

What Is a Comet? | NASA Space Place – NASA Science for Kids

The Short Answer:

Comets are large objects made of dust and ice that orbit the Sun. Best known for their long, streaming tails, these ancient objects are leftovers from the formation of the solar system 4.6 billion years ago.

Comets, such as the comet ISON pictured here, are thought to hold material from the time when the Sun and planets were forming. They are like giant, frozen time capsules in our solar system. Credit: NASA/MSFC/Aaron Kingery

Comets are mostly found way out in the solar system. Some exist in a wide disk beyond the orbit of Neptune called the Kuiper Belt. We call these short-period comets. They take less than 200 years to orbit the Sun.

Other comets live in the Oort Cloud, the sphere-shaped, outer edge of the solar system that is about 50 times farther away from the Sun than the Kuiper Belt. These are called long-period comets because they take much longer to orbit the Sun. The comet with the longest known orbit takes more than 250,000 years to make just one trip around the Sun!

The Kuiper Belt is beyond the orbits of the planets in our solar system. The Oort Cloud is far beyond the Kuiper belt. Credit: NASA/JPL-Caltech

The gravity of a planet or star can pull comets from their homes in the Kuiper Belt or Oort Cloud. This tug can redirect a comet toward the Sun. The paths of these redirected comets look like long, stretched ovals.

As the comet is pulled faster and faster toward the Sun, it swings around behind the Sun, then heads back toward where it came from. Some comets dive right into the Sun, never to be seen again. When the comet is in the inner solar system, either coming or going, that's when we may see it in our skies.

This animation represents the 76-year, elliptical orbit of Halley's Comet (the white dot) against the more circular orbits of the planets. Credit: NASA/JPL-Caltech

At the heart of every comet is a solid, frozen core called the nucleus. This ball of dust and ice is usually less than 10 miles (16 kilometers) across, about the size of a small town. When comets are out in the Kuiper Belt or Oort Cloud, scientists believe that's pretty much all there is to them: just frozen nuclei.

But when a comet gets close to the Sun, it starts heating up. Eventually, the ice begins to turn to gas. This can also cause jets of gas to burst out of the comet, bringing dust with it. The gas and dust create a huge, fuzzy cloud around the nucleus called the coma.

This diagram shows the anatomy of a comet. Credit: NASA/JPL-Caltech

As dust and gases stream away from the nucleus, sunlight and particles coming from the Sun push them into a bright tail that stretches behind the comet for millions of miles.

When astronomers look closely, they find that comets actually have two separate tails. One looks white and is made of dust. This dust tail traces a broad, gently curving path behind the comet. The other tail is bluish and is made up of electrically charged gas molecules, or ions. The ion tail always points directly away from the Sun.

A comet has two tails that get longer the closer it gets to the Sun. Both tails are always directed away from the Sun. The ion tail (blue) always points directly away from the Sun, while the dust tail (yellow) points away from the Sun in a slightly different direction than the ion tail. Credit: NASA/JPL-Caltech

People have been interested in comets for thousands of years. But it wasn't possible to get a good view of a comet nucleus from Earth since it is shrouded by the gas and dust of the coma. In recent years, though, several spacecraft have had the chance to study comets up close.

NASA's Stardust mission collected samples from Comet Wild 2 (pronounced like "Vilt two") and brought them back to Earth. Scientists found those particles to be rich in hydrocarbons, which are chemicals we consider the building blocks of life.

Rosetta, a mission of the European Space Agency that had several NASA instruments onboard, studied Comet 67P/Churyumov-Gerasimenko. Rosetta dropped a lander on the nucleus, then orbited the comet for two years. Rosetta detected building blocks of life on this comet, too. And images showed Comet 67P to be a rugged object with lots of activity shaping its surface.

Rosetta captured incredible images of the rubber ducky-shaped Comet 67P. Credit: ESA/Rosetta/NavCam CC BY-SA IGO 3.0

Thanks to these missions and others like them, we now know a lot more about the structure of comets and the types of chemicals found on and around them. We've even learned a bit more about the formation of our solar system!

Read more from the original source:

What Is a Comet? | NASA Space Place – NASA Science for Kids

Comet – Wikipedia

Natural object in space that releases gas

A comet is an icy, small Solar System body that, when passing close to the Sun, warms and begins to release gases, a process that is called outgassing. This produces a visible atmosphere or coma, and sometimes also a tail. These phenomena are due to the effects of solar radiation and the solar wind acting upon the nucleus of the comet. Comet nuclei range from a few hundred meters to tens of kilometers across and are composed of loose collections of ice, dust, and small rocky particles. The coma may be up to 15 times Earth's diameter, while the tail may stretch beyond one astronomical unit. If sufficiently bright, a comet may be seen from Earth without the aid of a telescope and may subtend an arc of 30° (60 Moons) across the sky. Comets have been observed and recorded since ancient times by many cultures and religions.

Comets usually have highly eccentric elliptical orbits, and they have a wide range of orbital periods, ranging from several years to potentially several millions of years. Short-period comets originate in the Kuiper belt or its associated scattered disc, which lie beyond the orbit of Neptune. Long-period comets are thought to originate in the Oort cloud, a spherical cloud of icy bodies extending from outside the Kuiper belt to halfway to the nearest star.[1] Long-period comets are set in motion towards the Sun from the Oort cloud by gravitational perturbations caused by passing stars and the galactic tide. Hyperbolic comets may pass once through the inner Solar System before being flung to interstellar space. The appearance of a comet is called an apparition.

Comets are distinguished from asteroids by the presence of an extended, gravitationally unbound atmosphere surrounding their central nucleus. This atmosphere has parts termed the coma (the central part immediately surrounding the nucleus) and the tail (a typically linear section consisting of dust or gas blown out from the coma by the Sun's light pressure or outstreaming solar wind plasma). However, extinct comets that have passed close to the Sun many times have lost nearly all of their volatile ices and dust and may come to resemble small asteroids.[2] Asteroids are thought to have a different origin from comets, having formed inside the orbit of Jupiter rather than in the outer Solar System.[3][4] The discovery of main-belt comets and active centaur minor planets has blurred the distinction between asteroids and comets. In the early 21st century, the discovery of some minor bodies with long-period comet orbits but the characteristics of inner Solar System asteroids led to their being called Manx comets. They are still classified as comets; C/2014 S3 (PANSTARRS) is one example.[5] Twenty-seven Manx comets were found from 2013 to 2017.[6]

As of November 2021, there are 4,584 known comets.[7] However, this represents a very small fraction of the total potential comet population, as the reservoir of comet-like bodies in the outer Solar System (in the Oort cloud) is about one trillion.[8][9] Roughly one comet per year is visible to the naked eye, though many of those are faint and unspectacular.[10] Particularly bright examples are called "great comets". Comets have been visited by unmanned probes such as the European Space Agency's Rosetta, which became the first to land a robotic spacecraft on a comet,[11] and NASA's Deep Impact, which blasted a crater on Comet Tempel 1 to study its interior.

The word comet derives from the Old English cometa from the Latin comēta or comētēs. That, in turn, is a romanization of the Greek κομήτης 'wearing long hair', and the Oxford English Dictionary notes that the term (ἀστὴρ) κομήτης already meant 'long-haired star, comet' in Greek. κομήτης was derived from κομᾶν (koman) 'to wear the hair long', which was itself derived from κόμη (komē) 'the hair of the head' and was used to mean 'the tail of a comet'.[12][13]

The astronomical symbol for comets (represented in Unicode) is U+2604 COMET, consisting of a small disc with three hairlike extensions.[14]

The solid, core structure of a comet is known as the nucleus. Cometary nuclei are composed of an amalgamation of rock, dust, water ice, and frozen carbon dioxide, carbon monoxide, methane, and ammonia.[15] As such, they are popularly described as "dirty snowballs" after Fred Whipple's model.[16] Comets with a higher dust content have been called "icy dirtballs".[17] The term "icy dirtballs" arose after observation of the collision of Comet 9P/Tempel 1 with an "impactor" probe sent by NASA's Deep Impact mission in July 2005. Research conducted in 2014 suggests that comets are like "deep fried ice cream", in that their surfaces are formed of dense crystalline ice mixed with organic compounds, while the interior ice is colder and less dense.[18]

The surface of the nucleus is generally dry, dusty or rocky, suggesting that the ices are hidden beneath a surface crust several metres thick. In addition to the gases already mentioned, the nuclei contain a variety of organic compounds, which may include methanol, hydrogen cyanide, formaldehyde, ethanol, ethane, and perhaps more complex molecules such as long-chain hydrocarbons and amino acids.[19][20] In 2009, it was confirmed that the amino acid glycine had been found in the comet dust recovered by NASA's Stardust mission.[21] In August 2011, a report, based on NASA studies of meteorites found on Earth, was published suggesting DNA and RNA components (adenine, guanine, and related organic molecules) may have been formed on asteroids and comets.[22][23]

The outer surfaces of cometary nuclei have a very low albedo, making them among the least reflective objects found in the Solar System. The Giotto space probe found that the nucleus of Halley's Comet (1P/Halley) reflects about four percent of the light that falls on it,[24] and Deep Space 1 discovered that Comet Borrelly's surface reflects less than 3.0%;[24] by comparison, asphalt reflects seven percent. The dark surface material of the nucleus may consist of complex organic compounds. Solar heating drives off lighter volatile compounds, leaving behind larger organic compounds that tend to be very dark, like tar or crude oil. The low reflectivity of cometary surfaces causes them to absorb the heat that drives their outgassing processes.[25]

Comet nuclei with radii of up to 30 kilometers (19 mi) have been observed,[26] but ascertaining their exact size is difficult.[27] The nucleus of 322P/SOHO is probably only 100–200 meters (330–660 ft) in diameter.[28] A lack of smaller comets being detected despite the increased sensitivity of instruments has led some to suggest that there is a real lack of comets smaller than 100 meters (330 ft) across.[29] Known comets have been estimated to have an average density of 0.6 g/cm³ (0.35 oz/cu in).[30] Because of their low mass, comet nuclei do not become spherical under their own gravity and therefore have irregular shapes.[31]
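As a rough illustration of that last point, one can estimate the surface gravity of a nucleus from the figures above. The Python sketch below assumes a round 10-kilometer-wide body at the quoted ~0.6 g/cm³ average density; these are illustrative numbers, not measurements of any particular comet:

import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
radius = 5_000.0  # m: half of an assumed 10 km diameter
density = 600.0   # kg/m^3, the ~0.6 g/cm^3 average quoted above

# Mass of a uniform sphere, then Newtonian surface gravity.
mass = (4.0 / 3.0) * math.pi * radius**3 * density
surface_gravity = G * mass / radius**2

print(f"mass ~ {mass:.2e} kg")                           # ~3e14 kg
print(f"surface gravity ~ {surface_gravity:.1e} m/s^2")  # ~8e-4 m/s^2

A surface gravity on the order of 10⁻³ m/s², some ten thousand times weaker than Earth's, is far too feeble to crush the body into a sphere, which is why nuclei keep their irregular shapes.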

Roughly six percent of the near-Earth asteroids are thought to be the extinct nuclei of comets that no longer experience outgassing,[32] including 14827 Hypnos and 3552 Don Quixote.

Results from the Rosetta and Philae spacecraft show that the nucleus of 67P/Churyumov–Gerasimenko has no magnetic field, which suggests that magnetism may not have played a role in the early formation of planetesimals.[33][34] Further, the ALICE spectrograph on Rosetta determined that electrons (within 1 km (0.62 mi) above the comet nucleus) produced from photoionization of water molecules by solar radiation, and not photons from the Sun as thought earlier, are responsible for the degradation of water and carbon dioxide molecules released from the comet nucleus into its coma.[35][36] Instruments on the Philae lander found at least sixteen organic compounds at the comet's surface, four of which (acetamide, acetone, methyl isocyanate and propionaldehyde) have been detected for the first time on a comet.[37][38][39]

The streams of dust and gas released from the nucleus form a huge and extremely thin atmosphere around the comet called the "coma". The force exerted on the coma by the Sun's radiation pressure and solar wind causes an enormous "tail" to form, pointing away from the Sun.[48]

The coma is generally made of water and dust, with water making up to 90% of the volatiles that outflow from the nucleus when the comet is within 3 to 4 astronomical units (450,000,000 to 600,000,000 km; 280,000,000 to 370,000,000 mi) of the Sun.[49] The H2O parent molecule is destroyed primarily through photodissociation and to a much smaller extent photoionization, with the solar wind playing a minor role in the destruction of water compared to photochemistry.[49] Larger dust particles are left along the comet's orbital path whereas smaller particles are pushed away from the Sun into the comet's tail by light pressure.[50]

Although the solid nucleus of comets is generally less than 60 kilometers (37 mi) across, the coma may be thousands or millions of kilometers across, sometimes becoming larger than the Sun.[51] For example, about a month after an outburst in October 2007, comet 17P/Holmes briefly had a tenuous dust atmosphere larger than the Sun.[52] The Great Comet of 1811 also had a coma roughly the diameter of the Sun.[53] Even though the coma can become quite large, its size can decrease about the time it crosses the orbit of Mars, around 1.5 astronomical units (220,000,000 km; 140,000,000 mi) from the Sun.[53] At this distance the solar wind becomes strong enough to blow the gas and dust away from the coma, thereby enlarging the tail.[53] Ion tails have been observed to extend one astronomical unit (150 million km) or more.[52]

Both the coma and tail are illuminated by the Sun and may become visible when a comet passes through the inner Solar System: the dust reflects sunlight directly, while the gases glow from ionisation.[54] Most comets are too faint to be visible without the aid of a telescope, but a few each decade become bright enough to be visible to the naked eye.[55] Occasionally a comet may experience a huge and sudden outburst of gas and dust, during which the size of the coma greatly increases for a period of time. This happened in 2007 to Comet Holmes.[56]

In 1996, comets were found to emit X-rays.[57] This greatly surprised astronomers because X-ray emission is usually associated with very high-temperature bodies. The X-rays are generated by the interaction between comets and the solar wind: when highly charged solar wind ions fly through a cometary atmosphere, they collide with cometary atoms and molecules, "stealing" one or more electrons from the atom in a process called "charge exchange". This exchange or transfer of an electron to the solar wind ion is followed by its de-excitation into the ground state of the ion by the emission of X-rays and far ultraviolet photons.[58]

Bow shocks form as a result of the interaction between the solar wind and the cometary ionosphere, which is created by the ionization of gases in the coma. As the comet approaches the Sun, increasing outgassing rates cause the coma to expand, and the sunlight ionizes gases in the coma. When the solar wind passes through this ion coma, the bow shock appears.

The first observations were made in the 1980s and 1990s as several spacecraft flew by comets 21P/Giacobini–Zinner,[59] 1P/Halley,[60] and 26P/Grigg–Skjellerup.[61] It was then found that the bow shocks at comets are wider and more gradual than the sharp planetary bow shocks seen at, for example, Earth. These observations were all made near perihelion when the bow shocks already were fully developed.

The Rosetta spacecraft observed the bow shock at comet 67P/Churyumov–Gerasimenko at an early stage of bow shock development when the outgassing increased during the comet's journey toward the Sun. This young bow shock was called the "infant bow shock". The infant bow shock is asymmetric and, relative to the distance to the nucleus, wider than fully developed bow shocks.[62]

In the outer Solar System, comets remain frozen and inactive and are extremely difficult or impossible to detect from Earth due to their small size. Statistical detections of inactive comet nuclei in the Kuiper belt have been reported from observations by the Hubble Space Telescope[63][64] but these detections have been questioned.[65][66] As a comet approaches the inner Solar System, solar radiation causes the volatile materials within the comet to vaporize and stream out of the nucleus, carrying dust away with them.

The streams of dust and gas each form their own distinct tail, pointing in slightly different directions. The tail of dust is left behind in the comet's orbit in such a manner that it often forms a curved tail called the type II or dust tail.[54] At the same time, the ion or type I tail, made of gases, always points directly away from the Sun because this gas is more strongly affected by the solar wind than is dust, following magnetic field lines rather than an orbital trajectory.[67] On occasions, such as when Earth passes through a comet's orbital plane, the antitail, pointing in the opposite direction to the ion and dust tails, may be seen.[68]

The observation of antitails contributed significantly to the discovery of the solar wind.[69] The ion tail is formed as a result of the ionization by solar ultraviolet radiation of particles in the coma. Once the particles have been ionized, they attain a net positive electrical charge, which in turn gives rise to an "induced magnetosphere" around the comet. The comet and its induced magnetic field form an obstacle to outward flowing solar wind particles. Because the relative orbital speed of the comet and the solar wind is supersonic, a bow shock is formed upstream of the comet in the flow direction of the solar wind. In this bow shock, large concentrations of cometary ions (called "pick-up ions") congregate and act to "load" the solar magnetic field with plasma, such that the field lines "drape" around the comet forming the ion tail.[70]

If the ion tail loading is sufficient, the magnetic field lines are squeezed together to the point where, at some distance along the ion tail, magnetic reconnection occurs. This leads to a "tail disconnection event".[70] This has been observed on a number of occasions, one notable event being recorded on 20 April 2007, when the ion tail of Encke's Comet was completely severed while the comet passed through a coronal mass ejection. This event was observed by the STEREO space probe.[71]

In 2013, ESA scientists reported that the ionosphere of the planet Venus streams outwards in a manner similar to the ion tail seen streaming from a comet under similar conditions.[72][73]

Uneven heating can cause newly generated gases to break out of a weak spot on the surface of a comet's nucleus, like a geyser.[74] These streams of gas and dust can cause the nucleus to spin, and even split apart.[74] In 2010 it was shown that dry ice (frozen carbon dioxide) can power jets of material flowing out of a comet nucleus.[75] Infrared imaging of Hartley 2 shows such jets exiting and carrying dust grains with them into the coma.[76]

Most comets are small Solar System bodies with elongated elliptical orbits that take them close to the Sun for a part of their orbit and then out into the further reaches of the Solar System for the remainder.[77] Comets are often classified according to the length of their orbital periods: the longer the period, the more elongated the ellipse.

Periodic comets or short-period comets are generally defined as those having orbital periods of less than 200 years.[78] They usually orbit more-or-less in the ecliptic plane in the same direction as the planets.[79] Their orbits typically take them out to the region of the outer planets (Jupiter and beyond) at aphelion; for example, the aphelion of Halley's Comet is a little beyond the orbit of Neptune. Comets whose aphelia are near a major planet's orbit are called its "family".[80] Such families are thought to arise from the planet capturing formerly long-period comets into shorter orbits.[81]

At the shorter orbital period extreme, Encke's Comet has an orbit that does not reach the orbit of Jupiter, and is known as an Encke-type comet. Short-period comets with orbital periods less than 20 years and low inclinations (up to 30 degrees) to the ecliptic are called traditional Jupiter-family comets (JFCs).[82][83] Those like Halley, with orbital periods of between 20 and 200 years and inclinations extending from zero to more than 90 degrees, are called Halley-type comets (HTCs).[84][85] As of 2022, 94 HTCs have been observed,[86] compared with 744 identified JFCs.[87]
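The period and inclination cut-offs above lend themselves to a simple rule-of-thumb classifier. The Python sketch below encodes only the thresholds stated in the text; the function name and structure are illustrative, and real catalogues use additional dynamical criteria (such as the Tisserand parameter) to separate these families:

def classify_comet(period_years: float, inclination_deg: float) -> str:
    """Rough orbital classification using only the thresholds quoted above."""
    if period_years >= 200:
        return "long-period comet"
    if period_years < 20 and inclination_deg <= 30:
        return "Jupiter-family comet (JFC)"
    if 20 <= period_years <= 200:
        return "Halley-type comet (HTC)"
    return "short-period comet (other)"

print(classify_comet(6.6, 12))   # a typical low-inclination JFC orbit
print(classify_comet(76, 162))   # Halley's own orbit -> HTC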

Recently discovered main-belt comets form a distinct class, orbiting in more circular orbits within the asteroid belt.[88][89]

Because their elliptical orbits frequently take them close to the giant planets, comets are subject to further gravitational perturbations.[90] Short-period comets have a tendency for their aphelia to coincide with a giant planet's semi-major axis, with the JFCs being the largest group.[83] It is clear that comets coming in from the Oort cloud often have their orbits strongly influenced by the gravity of giant planets as a result of a close encounter. Jupiter is the source of the greatest perturbations, being more than twice as massive as all the other planets combined. These perturbations can deflect long-period comets into shorter orbital periods.[91][92]

Based on their orbital characteristics, short-period comets are thought to originate from the centaurs and the Kuiper belt/scattered disc,[93] a disk of objects in the trans-Neptunian region, whereas the source of long-period comets is thought to be the far more distant spherical Oort cloud (after the Dutch astronomer Jan Hendrik Oort, who hypothesized its existence).[94] Vast swarms of comet-like bodies are thought to orbit the Sun in these distant regions in roughly circular orbits. Occasionally the gravitational influence of the outer planets (in the case of Kuiper belt objects) or nearby stars (in the case of Oort cloud objects) may throw one of these bodies into an elliptical orbit that takes it inwards toward the Sun to form a visible comet. Unlike the return of periodic comets, whose orbits have been established by previous observations, the appearance of new comets by this mechanism is unpredictable.[95] When a comet is flung into an orbit that drags it close to the Sun, matter is continuously stripped from it, which greatly influences its lifetime: the more material is stripped, the shorter the comet lives, and vice versa.[96]

Long-period comets have highly eccentric orbits and periods ranging from 200 years to thousands or even millions of years.[97] An eccentricity greater than 1 when near perihelion does not necessarily mean that a comet will leave the Solar System.[98] For example, Comet McNaught had a heliocentric osculating eccentricity of 1.000019 near its perihelion passage epoch in January 2007 but is bound to the Sun with roughly a 92,600-year orbit because the eccentricity drops below 1 as it moves farther from the Sun. The future orbit of a long-period comet is properly obtained when the osculating orbit is computed at an epoch after leaving the planetary region and is calculated with respect to the center of mass of the Solar System. By definition long-period comets remain gravitationally bound to the Sun; those comets that are ejected from the Solar System due to close passes by major planets are no longer properly considered as having "periods". The orbits of long-period comets take them far beyond the outer planets at aphelia, and the plane of their orbits need not lie near the ecliptic. Long-period comets such as C/1999 F1 and C/2017 T2 (PANSTARRS) can have aphelion distances of nearly 70,000 AU (0.34 pc; 1.1 ly) with orbital periods estimated around 6 million years.
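Those period figures follow from Kepler's third law, P = a^(3/2) with P in years and a in AU. A quick sketch in Python checking the ~6-million-year claim, under the assumption (mine, not the source's) that perihelion is negligible so the semi-major axis is about half the ~70,000 AU aphelion distance:

aphelion_au = 70_000.0
semi_major_au = aphelion_au / 2.0      # perihelion << aphelion for such comets
period_years = semi_major_au ** 1.5    # Kepler's third law (P in yr, a in AU)
print(f"period ~ {period_years:,.0f} years")   # ~6.5 million years

which agrees with the several-million-year periods quoted above for C/1999 F1 and C/2017 T2 (PANSTARRS).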

Single-apparition or non-periodic comets are similar to long-period comets because they also have parabolic or slightly hyperbolic trajectories[97] when near perihelion in the inner Solar System. However, gravitational perturbations from giant planets cause their orbits to change. Single-apparition comets have a hyperbolic or parabolic osculating orbit which allows them to permanently exit the Solar System after a single pass of the Sun.[99] The Sun's Hill sphere has an unstable maximum boundary of 230,000 AU (1.1 pc; 3.6 ly).[100] Only a few hundred comets have been seen to reach a hyperbolic orbit (e > 1) when near perihelion,[101] which, using a heliocentric unperturbed two-body best fit, suggests they may escape the Solar System.

As of 2022, only two objects have been discovered with an eccentricity significantly greater than one: 1I/ʻOumuamua and 2I/Borisov, indicating an origin outside the Solar System. While ʻOumuamua, with an eccentricity of about 1.2, showed no optical signs of cometary activity during its passage through the inner Solar System in October 2017, changes to its trajectory (which suggests outgassing) indicate that it is probably a comet.[102] On the other hand, 2I/Borisov, with an estimated eccentricity of about 3.36, has been observed to have the coma feature of comets, and is considered the first detected interstellar comet.[103][104] Comet C/1980 E1 had an orbital period of roughly 7.1 million years before the 1982 perihelion passage, but a 1980 encounter with Jupiter accelerated the comet, giving it the largest eccentricity (1.057) of any known solar comet with a reasonable observation arc.[105] Comets not expected to return to the inner Solar System include C/1980 E1, C/2000 U5, C/2001 Q4 (NEAT), C/2009 R1, C/1956 R1, and C/2007 F1 (LONEOS).

Some authorities use the term "periodic comet" to refer to any comet with a periodic orbit (that is, all short-period comets plus all long-period comets),[106] whereas others use it to mean exclusively short-period comets.[97] Similarly, although the literal meaning of "non-periodic comet" is the same as "single-apparition comet", some use it to mean all comets that are not "periodic" in the second sense (that is, to also include all comets with a period greater than 200 years).

Early observations have revealed a few genuinely hyperbolic (i.e. non-periodic) trajectories, but no more than could be accounted for by perturbations from Jupiter. Comets from interstellar space are moving with velocities of the same order as the relative velocities of stars near the Sun (a few tens of km per second). When such objects enter the Solar System, they have a positive specific orbital energy, resulting in a positive velocity at infinity (v∞), and have notably hyperbolic trajectories. A rough calculation shows that there might be four hyperbolic comets per century within Jupiter's orbit, give or take one and perhaps two orders of magnitude.[107]
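That "positive velocity at infinity" can be made precise with the specific orbital energy of a two-body orbit; a short sketch of the standard relations (here μ is the Sun's gravitational parameter, and v and r are the comet's heliocentric speed and distance):

\[ \varepsilon \;=\; \frac{v^{2}}{2} - \frac{\mu}{r} \;=\; -\frac{\mu}{2a}, \qquad v_{\infty} \;=\; \sqrt{2\varepsilon} \quad \text{when } \varepsilon > 0, \]

so an object that enters the Solar System already moving at a few tens of km/s has ε > 0 throughout its passage and must leave again on a hyperbolic trajectory.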

The Oort cloud is thought to occupy a vast space starting from between 2,000 and 5,000 AU (0.03 and 0.08 ly)[109] to as far as 50,000 AU (0.79 ly)[84] from the Sun. This cloud encases the rest of the Solar System, from the Sun out to the outer limits of the Kuiper Belt. It consists of the same material from which celestial bodies form: planetesimals, chunks of matter left over from planet formation, like those that were condensed by the gravity of the Sun to build the planets. The eccentric orbits of these trapped planetesimals are why the Oort cloud exists.[110] Some estimates place the outer edge at between 100,000 and 200,000 AU (1.58 and 3.16 ly).[109] The region can be subdivided into a spherical outer Oort cloud of 20,000–50,000 AU (0.32–0.79 ly), and a doughnut-shaped inner cloud, the Hills cloud, of 2,000–20,000 AU (0.03–0.32 ly).[111] The outer cloud is only weakly bound to the Sun and supplies the long-period (and possibly Halley-type) comets that fall to inside the orbit of Neptune.[84] The inner Oort cloud is named after J. G. Hills, who proposed its existence in 1981.[112] Models predict that the inner cloud should have tens or hundreds of times as many cometary nuclei as the outer halo;[112][113][114] it is seen as a possible source of new comets that resupply the relatively tenuous outer cloud as the latter's numbers are gradually depleted. The Hills cloud explains the continued existence of the Oort cloud after billions of years.[115]

Exocomets beyond the Solar System have also been detected and may be common in the Milky Way.[116] The first exocomet system detected was around Beta Pictoris, a very young A-type main-sequence star, in 1987.[117][118] A total of 11 such exocomet systems have been identified as of 2013, using the absorption spectrum caused by the large clouds of gas emitted by comets when passing close to their star.[116][117] For ten years the Kepler space telescope searched for planets and other objects outside of the solar system. The first transiting exocomets were found in February 2018 by a group consisting of professional astronomers and citizen scientists in light curves recorded by the Kepler space telescope.[119][120] After Kepler retired in October 2018, the TESS telescope took over its mission. Since the launch of TESS, astronomers have discovered the transits of comets around the star Beta Pictoris using a light curve from TESS,[121][122] and have been able to better distinguish exocomets with the spectroscopic method. Transiting planets are detected as a symmetrical dip in a star's light curve when the planet passes in front of its parent star; further evaluation of some of these light curves showed that the asymmetrical patterns of certain dips are caused by the tail of a comet, or of hundreds of comets.[123]

As a comet is heated during close passes to the Sun, outgassing of its icy components also releases solid debris too large to be swept away by radiation pressure and the solar wind.[124] If Earth's orbit sends it through that trail of debris, which is composed mostly of fine grains of rocky material, there is likely to be a meteor shower as Earth passes through. Denser trails of debris produce quick but intense meteor showers and less dense trails create longer but less intense showers. Typically, the density of the debris trail is related to how long ago the parent comet released the material.[125][126] The Perseid meteor shower, for example, occurs every year between 9 and 13 August, when Earth passes through the orbit of Comet SwiftTuttle. Halley's Comet is the source of the Orionid shower in October.[127][128]

Many comets and asteroids collided with Earth in its early stages. Many scientists think that comets bombarding the young Earth about 4 billion years ago brought the vast quantities of water that now fill Earth's oceans, or at least a significant portion of that water. Others have cast doubt on this idea.[129] The detection of organic molecules, including polycyclic aromatic hydrocarbons,[18] in significant quantities in comets has led to speculation that comets or meteorites may have brought the precursors of life, or even life itself, to Earth.[130] In 2013 it was suggested that impacts between rocky and icy surfaces, such as comets, had the potential to create the amino acids that make up proteins through shock synthesis.[131] The speed at which the comets entered the atmosphere, combined with the magnitude of energy created after initial contact, allowed smaller molecules to condense into the larger macro-molecules that served as the foundation for life.[132] In 2015, scientists found significant amounts of molecular oxygen in the outgassings of comet 67P, suggesting that the molecule may occur more often than had been thought, and may thus be less of an indicator of life than has been supposed.[133]

It is suspected that comet impacts have, over long timescales, also delivered significant quantities of water to Earth's Moon, some of which may have survived as lunar ice.[134] Comet and meteoroid impacts are also thought to be responsible for the existence of tektites and australites.[135]

Fear of comets as acts of God and signs of impending doom was highest in Europe from AD 1200 to 1650.[136] The year after the Great Comet of 1618, for example, Gotthard Arthusius published a pamphlet stating that it was a sign that the Day of Judgment was near.[137] He listed ten pages of comet-related disasters, including "earthquakes, floods, changes in river courses, hail storms, hot and dry weather, poor harvests, epidemics, war and treason and high prices".[136]

By 1700 most scholars concluded that such events occurred whether a comet was seen or not. Using Edmond Halley's records of comet sightings, however, William Whiston in 1711 wrote that the Great Comet of 1680 had a periodicity of 574 years and was responsible for the worldwide flood in the Book of Genesis, by pouring water on Earth. His announcement revived for another century fear of comets, now as direct threats to the world instead of signs of disasters.[136] Spectroscopic analysis in 1910 found the toxic gas cyanogen in the tail of Halley's Comet,[138] causing panicked buying of gas masks and quack "anti-comet pills" and "anti-comet umbrellas" by the public.[139]

If a comet is traveling fast enough, it may leave the Solar System. Such comets follow the open path of a hyperbola, and as such, they are called hyperbolic comets. Solar comets are only known to be ejected by interacting with another object in the Solar System, such as Jupiter.[140] An example of this is Comet C/1980 E1, which was shifted from an orbit of 7.1 million years around the Sun to a hyperbolic trajectory after a 1980 close pass by the planet Jupiter.[141] Interstellar comets such as 1I/ʻOumuamua and 2I/Borisov never orbited the Sun and therefore do not require a third-body interaction to be ejected from the Solar System.

Jupiter-family comets and long-period comets appear to follow very different fading laws. The JFCs are active over a lifetime of about 10,000 years or ~1,000 orbits whereas long-period comets fade much faster. Only 10% of the long-period comets survive more than 50 passages to small perihelion and only 1% of them survive more than 2,000 passages.[32] Eventually most of the volatile material contained in a comet nucleus evaporates, and the comet becomes a small, dark, inert lump of rock or rubble that can resemble an asteroid.[142] Some asteroids in elliptical orbits are now identified as extinct comets.[143][144][145][146] Roughly six percent of the near-Earth asteroids are thought to be extinct comet nuclei.[32]

The nucleus of some comets may be fragile, a conclusion supported by the observation of comets splitting apart.[147] A significant cometary disruption was that of Comet Shoemaker–Levy 9, which was discovered in 1993. A close encounter in July 1992 had broken it into pieces, and over a period of six days in July 1994, these pieces fell into Jupiter's atmosphere, the first time astronomers had observed a collision between two objects in the Solar System.[148][149] Other splitting comets include 3D/Biela in 1846 and 73P/Schwassmann–Wachmann from 1995 to 2006.[150] Greek historian Ephorus reported that a comet split apart as far back as the winter of 372–373 BC.[151] Comets are suspected of splitting due to thermal stress, internal gas pressure, or impact.[152]

Comets 42P/Neujmin and 53P/Van Biesbroeck appear to be fragments of a parent comet. Numerical integrations have shown that both comets had a rather close approach to Jupiter in January 1850, and that, before 1850, the two orbits were nearly identical.[153]

Some comets have been observed to break up during their perihelion passage, including great comets West and Ikeya–Seki. Biela's Comet was one significant example: it broke into two pieces during its passage through perihelion in 1846. The two fragments were seen separately in 1852, but never again afterward. Instead, spectacular meteor showers were seen in 1872 and 1885 when the comet should have been visible. A minor meteor shower, the Andromedids, occurs annually in November, and it is caused when Earth crosses the orbit of Biela's Comet.[154]

Some comets meet a more spectacular end, either falling into the Sun[155] or smashing into a planet or other body. Collisions between comets and planets or moons were common in the early Solar System: some of the many craters on the Moon, for example, may have been caused by comets. A recent collision of a comet with a planet occurred in July 1994 when Comet Shoemaker–Levy 9 broke up into pieces and collided with Jupiter.[156]

Ghost tail of C/2015 D1 (SOHO) after passage at the Sun

The names given to comets have followed several different conventions over the past two centuries. Prior to the early 20th century, most comets were referred to by the year when they appeared, sometimes with additional adjectives for particularly bright comets; thus, the "Great Comet of 1680", the "Great Comet of 1882", and the "Great January Comet of 1910".

After Edmond Halley demonstrated that the comets of 1531, 1607, and 1682 were the same body and successfully predicted its return in 1759 by calculating its orbit, that comet became known as Halley's Comet.[158] Similarly, the second and third known periodic comets, Encke's Comet[159] and Biela's Comet,[160] were named after the astronomers who calculated their orbits rather than their original discoverers. Later, periodic comets were usually named after their discoverers, but comets that had appeared only once continued to be referred to by the year of their appearance.[161]

In the early 20th century, the convention of naming comets after their discoverers became common, and this remains so today. A comet can be named after its discoverers or an instrument or program that helped to find it.[161] For example, in 2019, astronomer Gennadiy Borisov observed a comet that appeared to have originated outside of the solar system; the comet was named 2I/Borisov after him.[162]

From ancient sources, such as Chinese oracle bones, it is known that comets have been noticed by humans for millennia.[163] Until the sixteenth century, comets were usually considered bad omens of deaths of kings or noble men, or coming catastrophes, or even interpreted as attacks by heavenly beings against terrestrial inhabitants.[164][165]

Aristotle (384–322 BC) was the first known scientist to use various theories and observational facts to construct a consistent, structured cosmological theory of comets. He believed that comets were atmospheric phenomena, because they could appear outside of the zodiac and vary in brightness over the course of a few days. Aristotle's cometary theory arose from his observations and from his cosmological theory that everything in the cosmos is arranged in a distinct configuration.[166] Part of this configuration was a clear separation between the celestial and the terrestrial, and he believed comets to be strictly associated with the latter. According to Aristotle, comets must be within the sphere of the Moon and clearly separated from the heavens. Also in the 4th century BC, Apollonius of Myndus supported the idea that comets moved like the planets.[167] Aristotelian theory on comets continued to be widely accepted throughout the Middle Ages, despite several discoveries by various individuals challenging aspects of it.[168]

In the 1st century AD, Seneca the Younger questioned Aristotle's logic concerning comets. Because of their regular movement and imperviousness to wind, they cannot be atmospheric,[169] and are more permanent than suggested by their brief flashes across the sky.[a] He pointed out that only the tails are transparent and thus cloudlike, and argued that there is no reason to confine their orbits to the zodiac.[169] In criticizing Apollonius of Myndus, Seneca argues, "A comet cuts through the upper regions of the universe and then finally becomes visible when it reaches the lowest point of its orbit."[170] While Seneca did not author a substantial theory of his own,[171] his arguments would spark much debate among Aristotle's critics in the 16th and 17th centuries.[168][b]

Also in the 1st century, Pliny the Elder believed that comets were connected with political unrest and death.[173] Pliny observed comets as "human like", often describing their tails with "long hair" or "long beard".[174] His system for classifying comets according to their color and shape was used for centuries.[175]

In India, by the 6th century, astronomers believed that comets were celestial bodies that re-appeared periodically. This was the view expressed in the 6th century by the astronomers Varahamihira and Bhadrabahu, and the 10th-century astronomer Bhattotpala listed the names and estimated periods of certain comets, but it is not known how these figures were calculated or how accurate they were.[176] In 1301, the Italian painter Giotto was the first person to accurately and anatomically portray a comet. In his work Adoration of the Magi, Giotto's depiction of Halley's Comet in the place of the Star of Bethlehem would go unmatched in accuracy until the 19th century and be bested only with the invention of photography.[177]

Astrological interpretations of comets continued to take precedence well into the 15th century, even as modern scientific astronomy was beginning to take root. Comets continued to forewarn of disaster, as seen in the Luzerner Schilling chronicles and in the warnings of Pope Callixtus III.[177] In 1578, German Lutheran bishop Andreas Celichius defined comets as "the thick smoke of human sins... kindled by the hot and fiery anger of the Supreme Heavenly Judge". The next year, Andreas Dudith stated that "If comets were caused by the sins of mortals, they would never be absent from the sky."[178]

Crude attempts at a parallax measurement of Halley's Comet were made in 1456, but were erroneous.[179] Regiomontanus was the first to attempt to calculate diurnal parallax by observing the great comet of 1472. His predictions were not very accurate, but they were conducted in the hopes of estimating the distance of a comet from Earth.[175]

In the 16th century, Tycho Brahe and Michael Maestlin demonstrated that comets must exist outside of Earth's atmosphere by measuring the parallax of the Great Comet of 1577.[180] Within the precision of the measurements, this implied the comet must be at least four times farther away than the distance from Earth to the Moon.[181][182] Based on observations in 1664, Giovanni Borelli recorded the longitudes and latitudes of comets that he observed, and suggested that cometary orbits may be parabolic.[183] Galileo Galilei, one of the most renowned astronomers of his era, also attempted writings on comets in The Assayer. He rejected Brahe's theories on the parallax of comets and claimed that they may be a mere optical illusion; intrigued as early scientists were about the nature of comets, Galileo presented his own theories despite little personal observation.[175] Maestlin's student Johannes Kepler responded to these criticisms in his work Hyperaspistes. Jakob Bernoulli published another attempt to explain comets (Conamen Novi Systematis Cometarum) in 1682.

Also occurring in the early modern period was the study of comets and their astrological significance in medical disciplines. Many healers of this time considered medicine and astronomy to be inter-disciplinary and employed their knowledge of comets and other astrological signs for diagnosing and treating patients.[184]

Isaac Newton, in his Principia Mathematica of 1687, proved that an object moving under the influence of gravity by an inverse square law must trace out an orbit shaped like one of the conic sections, and he demonstrated how to fit a comet's path through the sky to a parabolic orbit, using the comet of 1680 as an example.[185] He describes comets as compact and durable solid bodies moving in oblique orbits, and their tails as thin streams of vapor emitted by their nuclei, ignited or heated by the Sun. He suspected that comets were the origin of the life-supporting component of air.[186] He also pointed out that comets usually appear near the Sun, and therefore most likely orbit it.[169] On their luminosity, he stated, "The comets shine by the Sun's light, which they reflect," with their tails illuminated by "the Sun's light reflected by a smoke arising from [the coma]".[169]

In 1705, Edmond Halley (1656–1742) applied Newton's method to 23 cometary apparitions that had occurred between 1337 and 1698. He noted that three of these, the comets of 1531, 1607, and 1682, had very similar orbital elements, and he was further able to account for the slight differences in their orbits in terms of gravitational perturbation caused by Jupiter and Saturn. Confident that these three apparitions had been three appearances of the same comet, he predicted that it would appear again in 1758–59.[187] Halley's predicted return date was later refined by a team of three French mathematicians: Alexis Clairaut, Joseph Lalande, and Nicole-Reine Lepaute, who predicted the date of the comet's 1759 perihelion to within one month's accuracy.[188][189] When the comet returned as predicted, it became known as Halley's Comet.[190]
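
The roughly 76-year spacing between the 1531, 1607 and 1682 apparitions also fixes the size of the orbit through Kepler's third law, which for a body orbiting the Sun reads a³ = P² when a is in astronomical units and P in years. A minimal sketch of that arithmetic (the tidy round numbers here are illustrative, not Halley's own figures):

def semi_major_axis_au(period_years):
    """Kepler's third law for a heliocentric orbit: a^3 = P^2 (AU, years)."""
    return period_years ** (2.0 / 3.0)

# A ~76-year period implies a semi-major axis near 18 AU, an ellipse
# stretching well beyond Saturn's orbit.
print(f"a = {semi_major_axis_au(76):.1f} AU")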

From his huge vapouring train perhaps to shake
Reviving moisture on the numerous orbs,
Thro' which his long ellipsis winds; perhaps
To lend new fuel to declining suns,
To light up worlds, and feed th' ethereal fire.

James Thomson, The Seasons (1730; 1748)[191]

As early as the 18th century, some scientists had made correct hypotheses as to comets' physical composition. In 1755, Immanuel Kant hypothesized in his Universal Natural History that comets condense from "primitive matter" beyond the known planets, which is "feebly moved" by gravity; they then orbit at arbitrary inclinations and are partially vaporized by the Sun's heat as they near perihelion.[192] In 1836, the German mathematician Friedrich Wilhelm Bessel, after observing streams of vapor during the appearance of Halley's Comet in 1835, proposed that the jet forces of evaporating material could be great enough to significantly alter a comet's orbit, and he argued that the non-gravitational movements of Encke's Comet resulted from this phenomenon.[193]

In the 19th century, the Astronomical Observatory of Padova was an epicenter in the observational study of comets. Led by Giovanni Santini (1787–1877) and followed by Giuseppe Lorenzoni (1843–1914), this observatory was devoted to classical astronomy, mainly to the calculation of the orbits of new comets and planets, with the goal of compiling a catalog of almost ten thousand stars. Situated in the northern portion of Italy, this observatory was key in establishing important geodetic, geographic, and astronomical calculations, such as the difference of longitude between Milan and Padua as well as Padua to Fiume.[194] In addition to these geographic observations, there was correspondence within the observatory, particularly between Santini and another astronomer, Giuseppe Toaldo, about the importance of comet and planetary orbital observations.[195]

In 1950, Fred Lawrence Whipple proposed that rather than being rocky objects containing some ice, comets were icy objects containing some dust and rock.[196] This "dirty snowball" model soon became accepted and appeared to be supported by the observations of an armada of spacecraft (including the European Space Agency's Giotto probe and the Soviet Union's Vega 1 and Vega 2) that flew through the coma of Halley's Comet in 1986, photographed the nucleus, and observed jets of evaporating material.[197]

On 22 January 2014, ESA scientists reported the detection, for the first definitive time, of water vapor on the dwarf planet Ceres, the largest object in the asteroid belt.[198] The detection was made by using the far-infrared abilities of the Herschel Space Observatory.[199] The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids."[199] On 11 August 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, H2CO, and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON).[200][201]

Approximately once a decade, a comet becomes bright enough to be noticed by a casual observer, leading such comets to be designated as great comets.[151] Predicting whether a comet will become a great comet is notoriously difficult, as many factors may cause a comet's brightness to depart drastically from predictions.[210] Broadly speaking, if a comet has a large and active nucleus, will pass close to the Sun, and is not obscured by the Sun as seen from Earth when at its brightest, it has a chance of becoming a great comet. However, Comet Kohoutek in 1973 fulfilled all the criteria and was expected to become spectacular but failed to do so.[211] Comet West, which appeared three years later, had much lower expectations but became an extremely impressive comet.[212]

The Great Comet of 1577 is a well-known example of a great comet. It passed near Earth as a non-periodic comet and was seen by many, including well-known astronomers Tycho Brahe and Taqi ad-Din. Observations of this comet led to several significant findings regarding cometary science, especially for Brahe.

The late 20th century saw a lengthy gap without the appearance of any great comets, followed by the arrival of two in quick succession: Comet Hyakutake in 1996, followed by Hale–Bopp, which reached maximum brightness in 1997, having been discovered two years earlier. The first great comet of the 21st century was C/2006 P1 (McNaught), which became visible to naked-eye observers in January 2007. It was the brightest in over 40 years.[213]

A sungrazing comet is a comet that passes extremely close to the Sun at perihelion, generally within a few million kilometers.[214] Although small sungrazers can be completely evaporated during such a close approach to the Sun, larger sungrazers can survive many perihelion passages. However, the strong tidal forces they experience often lead to their fragmentation.[215]

About 90% of the sungrazers observed with SOHO are members of the Kreutz group, which all originate from one giant comet that broke up into many smaller comets during its first passage through the inner Solar System.[216] The remainder contains some sporadic sungrazers, but four other related groups of comets have been identified among them: the Kracht, Kracht 2a, Marsden, and Meyer groups. The Marsden and Kracht groups both appear to be related to Comet 96P/Machholz, which is also the parent of two meteor streams, the Quadrantids and the Arietids.[217]

Of the thousands of known comets, some exhibit unusual properties. Comet Encke (2P/Encke) orbits from outside the asteroid belt to just inside the orbit of the planet Mercury, whereas Comet 29P/Schwassmann–Wachmann currently travels in a nearly circular orbit entirely between the orbits of Jupiter and Saturn.[218] 2060 Chiron, whose unstable orbit is between Saturn and Uranus, was originally classified as an asteroid until a faint coma was noticed.[219] Similarly, Comet Shoemaker–Levy 2 was originally designated asteroid 1990 UL3.[220]

The largest known periodic comet is 95P/Chiron (the cometary designation of 2060 Chiron, above), at 200 km in diameter, which comes to perihelion every 50 years just inside of Saturn's orbit at 8 AU. The largest known Oort cloud comet is suspected to be Comet Bernardinelli–Bernstein, at 150 km, which will not come to perihelion until January 2031, just outside of Saturn's orbit at 11 AU. The Comet of 1729 is estimated to have been 100 km in diameter and came to perihelion inside of Jupiter's orbit at 4 AU.

Centaurs typically behave with characteristics of both asteroids and comets.[221] Centaurs can be classified as comets such as 60558 Echeclus, and 166P/NEAT. 166P/NEAT was discovered while it exhibited a coma, and so is classified as a comet despite its orbit, and 60558 Echeclus was discovered without a coma but later became active,[222] and was then classified as both a comet and an asteroid (174P/Echeclus). One plan for Cassini involved sending it to a centaur, but NASA decided to destroy it instead.[223]

A comet may be discovered photographically using a wide-field telescope or visually with binoculars. However, even without access to optical equipment, it is still possible for the amateur astronomer to discover a sungrazing comet online by downloading images accumulated by some satellite observatories such as SOHO.[224] SOHO's 2000th comet was discovered by Polish amateur astronomer Michał Kusiak on 26 December 2010,[225] and both discoverers of Hale–Bopp used amateur equipment (although Hale was not an amateur).

A number of periodic comets discovered in earlier decades or previous centuries are now lost comets. Their orbits were never known well enough to predict future appearances, or the comets have disintegrated. However, occasionally a "new" comet is discovered, and calculation of its orbit shows it to be an old "lost" comet. An example is Comet 11P/Tempel–Swift–LINEAR, discovered in 1869 but unobservable after 1908 because of perturbations by Jupiter. It was not found again until accidentally rediscovered by LINEAR in 2001.[226] There are at least 18 comets that fit this category.[227]

The depiction of comets in popular culture is firmly rooted in the long Western tradition of seeing comets as harbingers of doom and as omens of world-altering change.[228] Halley's Comet alone has caused a slew of sensationalist publications of all sorts at each of its reappearances. It was especially noted that the birth and death of some notable persons coincided with separate appearances of the comet, such as with writers Mark Twain (who correctly speculated that he'd "go out with the comet" in 1910)[228] and Eudora Welty, to whose life Mary Chapin Carpenter dedicated the song "Halley Came to Jackson".[228]

In times past, bright comets often inspired panic and hysteria in the general population, being thought of as bad omens. More recently, during the passage of Halley's Comet in 1910, Earth passed through the comet's tail, and erroneous newspaper reports inspired a fear that cyanogen in the tail might poison millions,[229] whereas the appearance of Comet Hale–Bopp in 1997 triggered the mass suicide of the Heaven's Gate cult.[230]

In science fiction, the impact of comets has been depicted as a threat overcome by technology and heroism (as in the 1998 films Deep Impact and Armageddon), or as a trigger of global apocalypse (Lucifer's Hammer, 1979) or zombies (Night of the Comet, 1984).[228] In Jules Verne's Off on a Comet a group of people are stranded on a comet orbiting the Sun, while a large crewed space expedition visits Halley's Comet in Sir Arthur C. Clarke's novel 2061: Odyssey Three.[231]

The long-period comet first recorded by Pons in Florence on 15 July 1825 inspired Lydia Sigourney's humorous poem "The Comet of 1825," in which all the celestial bodies argue over the comet's appearance and purpose.

How to Watch the Green Comet During the New Moon – The New York Times

A green-hued comet from the outer solar system is set to swing through Earth's neighborhood in the coming days for the first time in 50,000 years.

The comet has been steadily gaining brightness and will make its closest approach on Feb. 2, when it comes within 26.4 million miles of the planet, about 110 times the distance to the moon. From the Northern Hemisphere, the comet is likely to be faintly visible to the naked eye.

But you don't have to wait until February to spot this visitor. The coming weekend may offer favorable viewing opportunities with a pair of binoculars when the new moon creates darker skies.

The comet is known as C/2022 E3 (Z.T.F.) because astronomers discovered it in March 2022 using a telescope on Palomar Mountain in California called the Zwicky Transient Facility (or Z.T.F.).

At the time, the cosmic interloper was just inside the orbit of Jupiter and roughly 25,000 times dimmer than the faintest star visible to the naked eye. But Z.T.F., with a camera that has a wide field of view, scans the entire visible sky each night and is well-suited to discover such objects.
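
That "25,000 times dimmer" figure maps cleanly onto the astronomical magnitude scale, where every 5 magnitudes corresponds to a factor of 100 in brightness (Pogson's relation). A quick sketch of the arithmetic, assuming the conventional naked-eye limit of magnitude +6 under dark skies:

import math

def magnitudes_fainter(brightness_ratio):
    """Pogson's relation: magnitude difference for a given brightness ratio."""
    return 2.5 * math.log10(brightness_ratio)

NAKED_EYE_LIMIT = 6.0  # conventional limiting magnitude under dark skies
delta_m = magnitudes_fainter(25_000)
# 25,000x dimmer is about 11 magnitudes, putting the comet near magnitude +17
# at discovery -- far beyond binoculars, but routine for a survey telescope.
print(f"delta-m = {delta_m:.1f}, apparent magnitude ~ {NAKED_EYE_LIMIT + delta_m:.0f}")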

Comets are clumps of dust and frozen gases, sometimes described by astronomers as "dirty snowballs." Most are believed to originate from the distant, icy reaches of the solar system, where gravitational agitations sometimes push them toward the sun, an interaction that transforms them into gorgeous cosmic objects.

When they leave their deep freeze, the heat from the sun erodes their surfaces, and they start spewing gases and dust until they host a glowing core, known as the coma, and a flamelike tail that can stretch for millions of miles.

"They're alive," Laurence O'Rourke, an astronomer with the European Space Agency, said. "When they're far from the sun, they're sleeping, and when they get close to the sun, they wake up."

C/2022 E3 (Z.T.F.), for example, is now glowing green because ultraviolet radiation from the sun is absorbed by a molecule in the comet called diatomic carbon, that is, two carbon atoms bonded together. The reaction emits green light.

The brightness of comets can be unpredictable. When scientists first discovered the object last year, they knew only that it had potential to be visible from Earth.

"Because each comet is its own living being, you don't know how it's going to react until it passes the sun," Dr. O'Rourke said.

Comet C/2022 E3 (Z.T.F.) made its closest approach to the sun on Jan. 12, and the comet is now steadily brightening as it swings toward the Earth. While the comet won't pass us until Feb. 2, it is already nearly visible to the naked eye, an encouraging sign for viewing opportunities, said Mike Kelley, an astronomer at the University of Maryland and the co-lead of the solar system working group at the Zwicky Transient Facility.

Still, seeing the comet could require dark skies and an experienced observer, Dr. Kelley said.

In addition, comets can always surprise us. Sometimes there can be a big explosion of gas and dust, and the comet might get suddenly brighter even after it has left the sun behind.

To catch the comet, look north.

On Jan. 21, the night of the new moon and thus the darkest skies, the comet will be close to Draco, the dragon-shaped constellation that runs between the Big Dipper and the Little Dipper.

Over the following nights, the comet will creep along the dragon's tail. And on Jan. 30, the comet will reside directly between the Big Dipper's cup and Polaris, the North Star. If you're accustomed to finding the North Star by following the two stars on the end of the Big Dipper's cup, then you should be able to spot the comet. Simply scan that imaginary line until you see a faint smudge.

If you're struggling, the comet might still be too faint, or there might be too much light pollution. Try with a pair of binoculars.

"Even with relatively modest binoculars, the powdery, fuzzy or smoky character of the star ought to make it clear it's a comet," said E. C. Krupp, the director at Griffith Observatory in Los Angeles.

A telescope will help you spot the colors and finer details, including the comet's glowing coma and lengthy tail.

For anyone living above the 35th parallel (imagine a curving east-west line running from North Carolina through the Texas Panhandle out to Southern California), the comet will be visible all night starting Jan. 22. But it is relatively low on the horizon in the early evening, and it might be better to look for the comet later in the evening or even early in the morning, when the comet swings higher in the sky.

Dr. Krupp recommends looking this weekend, when the phase of the moon is new and it therefore won't cast a glow over the sky. But the comet will become brighter as it gets closer to Earth and will be easier to spot toward the end of the month. If you wait until then, you might want to try early in the morning, after the moon has set.

Either way, the hunt will be fun.

"It's sort of like searching for some endangered species, and then it pops into view," Dr. Krupp said. "That really is a charmer of an experience."

Comets are relics of the early solar system and may have been responsible for seeding early Earth with the building blocks for life.

"It really is a situation where we most likely would not exist without their existence," Dr. O'Rourke said.

And yet we don't get many opportunities to study these objects, given that only a few each year are bright enough to be seen with the naked eye. As such, cometary astronomers across the globe will observe C/2022 E3 (Z.T.F.) over the coming months.

"We're looking for our solar system's place in the universe," said Dr. Kelley, who will use the James Webb Space Telescope to observe the comet at the end of February. He wants to better understand how our planet formed in order to note the conditions that gave rise to life on Earth.

But Dr. Kelley and others have to work quickly. After a brief appearance in the night sky, it's unclear where C/2022 E3 (Z.T.F.) may go. Because these objects are so loosely bound to our solar system, the sun's gravitational influence might force the comet to take another trip around our star, perhaps not returning for another 50,000 years. Or the sun might fling the comet from the solar system entirely.

Comet | Definition, Composition, & Facts | Britannica

Summary

comet, a small body orbiting the Sun with a substantial fraction of its composition made up of volatile ices. When a comet comes close to the Sun, the ices sublimate (go directly from the solid to the gas phase) and form, along with entrained dust particles, a bright outflowing atmosphere around the comet nucleus known as a coma. As dust and gas in the coma flow freely into space, the comet forms two tails, one composed of ionized molecules and radicals and one of dust. The word comet comes from the Greek κομήτης (kometes), which means "long-haired." Indeed, it is the appearance of the bright coma that is the standard observational test for whether a newly discovered object is a comet or an asteroid.

Comets are among the most spectacular objects in the sky, with their bright glowing comae and their long dust tails and ion tails. Comets can appear at random from any direction and provide a fabulous and ever-changing display for many months as they move in highly eccentric orbits around the Sun.

Comets are important to scientists because they are primitive bodies left over from the formation of the solar system. They were among the first solid bodies to form in the solar nebula, the collapsing interstellar cloud of dust and gas out of which the Sun and planets formed. Comets formed in the outer regions of the solar nebula where it was cold enough for volatile ices to condense. This is generally taken to be beyond 5 astronomical units (AU; 748 million km, or 465 million miles), or beyond the orbit of Jupiter. Because comets have been stored in distant orbits beyond the planets, they have undergone few of the modifying processes that have melted or changed the larger bodies in the solar system. Thus, they retain a physical and chemical record of the primordial solar nebula and of the processes involved in the formation of planetary systems.

A comet is made up of four visible parts: the nucleus, the coma, the ion tail, and the dust tail. The nucleus is a solid body typically a few kilometres in diameter and made up of a mixture of volatile ices (predominantly water ice) and silicate and organic dust particles. The coma is the freely escaping atmosphere around the nucleus that forms when the comet comes close to the Sun and the volatile ices sublimate, carrying with them dust particles that are intimately mixed with the frozen ices in the nucleus. The dust tail forms from those dust particles and is blown back by solar radiation pressure to form a long curving tail that is typically white or yellow in colour. The ion tail forms from the volatile gases in the coma when they are ionized by ultraviolet photons from the Sun and blown away by the solar wind. Ion tails point almost exactly away from the Sun and glow bluish in colour because of the presence of CO+ ions.

Comets differ from other bodies in the solar system in that they are generally in orbits that are far more eccentric than those of the planets and most asteroids and far more inclined to the ecliptic (the plane of Earth's orbit). Some comets appear to come from distances of over 50,000 AU, a substantial fraction of the distance to the nearest stars. Their orbital periods can be millions of years in length. Other comets have shorter periods and smaller orbits that carry them from the orbits of Jupiter and Saturn inward to the orbits of the terrestrial planets. Some comets even appear to come from interstellar space, passing around the Sun on open, hyperbolic orbits, but in fact are members of the solar system.

Comets are typically named for their discoverers, though some comets (e.g., Halley and Encke) are named for the scientists who first recognized that their orbits were periodic. The International Astronomical Union (IAU) prefers a maximum of two discoverers to be in a comet's name. In some cases where a comet has been lost (its orbit was not determined well enough to predict its return), the comet is named for the original discoverer and also the observer(s) who found it again. A designation of C/ before a comet's name denotes that it is a long-period comet (period greater than 200 years), while P/ denotes that the comet is periodic; i.e., it returns at regular, predictable intervals of fewer than 200 years. A designation of D/ denotes that the comet is deceased or destroyed, such as D/Shoemaker-Levy 9, the comet whose components struck Jupiter in July 1994. Numbers appearing before the name of a comet denote that it is periodic; the comets are numbered in the order that they are confirmed to be periodic. Comet 1P/Halley was the first comet to be recognized as periodic and is named after English astronomer Edmond Halley, who determined that it was periodic.

In 1995 the IAU implemented a new identification system for each appearance of a comet, whether it is periodic or long-period. The system uses the year of the comet's discovery, the half-month in the year denoted by a letter A through Y (with I omitted to avoid confusion), and a number signifying the order in which the comet was found within that half-month. Thus, Halley's Comet is designated 1P/1682 Q1 when Halley saw it in August 1682, but 1P/1982 U1 when it was first spotted by astronomers before its predicted perihelion (point when closest to the Sun) passage in 1986. This identification system is similar to that now used for asteroid discoveries, though the asteroids are so designated only when they are first discovered. (The asteroids are later given official catalog numbers and names.) Formerly, a number after the name of a periodic comet denoted its order among comets discovered by that individual or group, but for new comets there would be no such distinguishing number.
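
The half-month lettering in this scheme is mechanical enough to compute directly. A minimal sketch in Python (the helper name is mine, not an IAU convention):

from datetime import date

# Two letters per month, first and second half, with I omitted to avoid confusion.
HALF_MONTH_LETTERS = "ABCDEFGHJKLMNOPQRSTUVWXY"

def half_month_letter(d: date) -> str:
    """Return the IAU half-month letter for a given discovery date."""
    index = (d.month - 1) * 2 + (0 if d.day <= 15 else 1)
    return HALF_MONTH_LETTERS[index]

# Halley's August 1682 apparition falls in the second half of August,
# which is why its designation is 1P/1682 Q1.
print(half_month_letter(date(1682, 8, 25)))  # -> 'Q'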

How to view and photograph comets | Space

Comets are notoriously fickle things. Some are given high expectations of putting on a good show and fail to deliver. In contrast, others, originally thought to be unremarkable, may suddenly flare up to glow at a magnitude that's visible to the unaided eye. In general, it's quite difficult to say just how a comet will behave.

In this guide, we'll be highlighting the latest naked-eye comet passing by Earth and giving you plenty of tips on how to observe and photograph this and many other types of comets. We'll be covering the best viewing locations, exploring ideal viewing situations and making concrete suggestions for specific telescopes and binoculars that will aid skywatchers as they observe the comet, as well as outlining the history of each comet and where it's come from.

Below we've also laid out a fool-proof guide on how to photograph a comet with tips on which camera, lens and photography accessories to choose from, with additional advice on composition, camera settings and other techniques required to photograph comets.

The question is, can you see comet C/2022 E3 (ZTF) with the naked eye? By perigee (the point in an orbit at which it is closest to Earth), it is hoped that C/2022 E3 (ZTF) will be brighter than magnitude +6.0, making it visible to the naked eye, at least from very dark sites. As previously mentioned, how much brighter than magnitude +6.0 it will get remains unknown, as comets can be notoriously unpredictable.

In mid-January, the comet rises in the northeast around midnight local time and is best viewed higher in the sky before dawn, while it is still dark. As the month progresses, C/2022 E3 (ZTF) rises earlier and earlier; at perigee, your best bet is to look for it around 10 pm local time. The Pole Star, Polaris, will be a good guide, since the comet is 11.5 degrees to the northeast of it on 31 January.

Currently, C/2022 E3 (ZTF) is just above magnitude +7.0. This means it is too faint to be seen with the unaided eye; you'll need 7x50 or 10x50 binoculars for comfortable viewing, or a small telescope around 4 inches (100 mm) in aperture, in order to spot it as a greenish, fuzzy smudge of light. You can check out our best telescopes or best binoculars guides for some great models to suit a variety of budgets. Although some comets have long tails, C/2022 E3 (ZTF) doesn't have a very long tail, at least not yet.

If the comet does brighten near perigee, as is hoped, it could reach magnitude +5.0. Technically this is within visibility of the unaided human eye, but it is still very faint; if you live near a town with light pollution, you probably won't be able to see the comet, so it is recommended that you head to a dark site out of your town to have the best chance of seeing it. Of course, it might not get this bright, or it might have an outburst and be even brighter than expected. We'll just have to wait and see.

Since the naked-eye view is unlikely to wow you, imaging is the way to go for the best scenes. However, to get a good image you'll need some very particular instruments.

There are two main ways to photograph comets successfully: with a DSLR or mirrorless camera, a camera lens and a tripod; or a camera/smartphone hooked up to a telescope.

The best camera to photograph a comet is, unsurprisingly, one of the best cameras for astrophotography. This calls for good sensitivity to light by utilizing a high ISO range, the ability to keep noise to a minimum and a large image sensor (ideally 35mm full-frame or larger) for lower image noise and a propensity toward a wider dynamic range. See our two top suggestions below, and take a look at our best cameras for astrophotography guide if you want to shop around.

There are several types of camera lenses that are suitable for comet photography, and which one you end up using depends mostly on the brand of your camera system. However, on the whole you want to look for a lens that has a fast maximum aperture (f/2.8 or wider) and has minimal chromatic aberration (color fringing) and optical distortion. See two suggestions below that sit in our best lenses for astrophotography and best zoom lenses guides.

But for the best images, you'll need either a DSLR/mirrorless camera or a dedicated CCD or CMOS camera attached to a long-focal-length telescope, all on a sturdy mount that isn't going to shake and is controlled by a laptop or tablet.

For comet photography through a telescope, we would recommend at least 4 inches (100 mm) of aperture, a 1.25-inch eyepiece kit (as they fit the majority of telescopes), or binoculars that have large objective lenses with good magnification, like 7x50 up to 20x80. See below for our specific recommendations on the latest and best telescopes, binoculars and accessories that will help you view comets.

Astronomy gear to view comets

There is a bagful of camera accessories astrophotographers can buy that will make comet photography easier. However, the key items to get in order to take better, more accurate astrophotos of comets are as follows.

A good tripod is one that will keep the camera and lens stable even during strong winds. Long exposures are required when photographing comets because they are quite dim and normally best viewed at night. If the camera moves during the exposure, then the entire photograph will become blurred, so one of the best tripods for astrophotography will keep the camera stationary when shooting. Pay attention to the maximum payload of each tripod, though: the total weight of your gear can be calculated by adding the weight of your camera (with memory cards and battery inserted) and your lens, plus any accessories like lens warmers or star trackers. If the weight of all the gear you'll be using exceeds the tripod's maximum payload, the tripod may be unstable and it is unlikely you will get sharp photographs of comets.
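
The payload check described above is simple addition; here is a minimal sketch with hypothetical gear weights (the example values are assumptions, not recommendations for specific equipment):

# All weights in kilograms -- substitute your own gear's figures.
camera_kg = 0.8        # body with battery and memory card
lens_kg = 1.6          # telephoto lens
accessories_kg = 0.5   # lens warmer, star tracker head, etc.
tripod_max_payload_kg = 4.0

total_kg = camera_kg + lens_kg + accessories_kg
if total_kg <= tripod_max_payload_kg:
    print(f"OK: {total_kg:.1f} kg on a {tripod_max_payload_kg:.1f} kg-rated tripod")
else:
    print(f"Overloaded by {total_kg - tripod_max_payload_kg:.1f} kg: expect instability")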

Star trackers are mounts that move with Earth's rotation in order to allow longer exposures (or multiple long exposures over several minutes or hours) of celestial objects. While they are designed for tracking stars, they may be helpful in keeping the frame steady when shooting comets, especially with long-focal-length telephoto lenses.

It's important not to knock the camera or lens during long exposures, so as to avoid camera-shake blur. That's why a remote shutter release should be used when photographing comets and all astrophotography subjects. They come in a variety of shapes and sizes, with wires that attach to proprietary ports on cameras, but also in wireless models. We would recommend wireless models on the whole, because they avoid the inevitable tangling of wires when shooting, but wired options are usually less expensive.

There aren't many filters that are useful for comet photography. Neutral density filters make the scene darker so would actually hinder rather than help, and graduated neutral density filters are usually used to darken a bright sky which isn't a problem at night.

However, light pollution filters may prove beneficial to those that are forced to shoot comets near towns and cities with street lighting. The best light pollution filters help alleviate the orange glow found in astrophotos taken near light-polluted areas. Many photographers prefer to edit light pollution out in post-processing software such as Adobe Lightroom or Adobe Photoshop, but for those that want a running start or who prefer to avoid image editing altogether, light pollution filters are a safe bet.

Shooting locations vary wildly and depend on the trajectory of a comet as it travels through space and past Earth. Keep an eye on the top of this page, and on our other news stories, to see where the latest comet is visible in skies near you. There are, however, some key tips on the best comet shooting locations.

The best location for photographing a comet is one that is dark and free from light pollution. Avoid cities and busy towns that have a lot of street lighting as light pollution will glow in the night sky. Instead, use a website like Dark Site Finder to find dark sky locations nearby and schedule a visit.

Cloudless nights are best, so use a weather app or check the local forecast before heading out. In the Northern Hemisphere, the winter months give longer nights, which offer greater opportunity for shooting comets and astrophotographs, but this often comes with colder temperatures and inclement weather, so dress appropriately.

Capturing the comet hanging in the sky on its own is fine for recording purposes, but to make a truly captivating image it may be worth pre-planning a shoot in order to include interesting foreground features. A simple landscape in the foreground adds context but seeking out local landmarks and interesting land features will help push your comet photographs to the next level. Waterfalls, ruins, rock formations or even distant tall buildings (provided there isn't too much light pollution) can all improve a comet photograph.

Comet C/2021 A1, a.k.a. Comet Leonard (opens in new tab), is on a last dash through our solar system before disappearing a little later in 2022. The comet has been a dazzling sight in binoculars or telescopes, appearing with a twisted tail and, if you have a great camera, a green coma.

When is the next great comet? | Space | EarthSky

View at EarthSky Community Photos. | In early July 2020, people are getting amazing shots of comet C/2020 F3 (NEOWISE). It's not a great comet, but it's a pretty good one! Alexander Krivenyshev in Guttenberg, New Jersey, of the website WorldTimeZone.com wrote: "Despite a layer of clouds on the horizon, I was able to capture my first comet over New York City on the early morning of July 6, 2020." Cool shot, Alexander! Thank you. Here's how to see Comet NEOWISE.

We're now treated to a near-constant barrage of wonderful comet photos, including those coming in this week of comet C/2020 F3 (NEOWISE). Most are from experienced astrophotographers, most with excellent skies, employing telescopes and modern cameras and sometimes later creating composites of several images. We now sometimes see comet images from the International Space Station, too. Meanwhile, from the ground and with the eye alone? Yes, NEOWISE is a nice comet. But most will need binoculars to see it. The last two great comets, which were McNaught in 2007 and Lovejoy in 2011, were mainly seen under Southern Hemisphere skies. Not since Hale-Bopp in 1996-97 has the Northern Hemisphere seen a magnificent comet.

What's more, some skygazers wouldn't even classify Hale-Bopp as a great comet. In that case, we in the Northern Hemisphere might have to look all the way back to comet West in 1976, 44 years ago, to find a truly great "great comet." When will we see the next one?

Let's consider some of the incredible comets of recent times and historic records, to find out when the Northern and Southern Hemispheres might expect to see the next great comet.

First, how are we defining a great comet? There's no official definition. The label "great comet" stems from some combination of a comet's brightness, longevity and breadth across the sky.

For purposes of this article, to consider the question of great comets of the north and south, and their frequency, we'll define great comets as those that achieve a brightness equal to that of the brightest planet, Venus (magnitude -3 to -4), or brighter, with tails that span 30 degrees or more of sky.

We can consider some other major comets, too: those that reached magnitude 1 or brighter, in other words, becoming as bright as the brightest stars, with tails spanning 15 degrees or more. These major comets would have been visible long enough for Earth's citizenry to take notice (some impressive comets have such extreme orbits that they aren't visible long, and hardly anyone besides astronomers notices them).
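
These working definitions translate directly into a small classifier. The sketch below encodes the thresholds exactly as this article states them; remember they are the article's own conventions, not an official standard, and the sample figures in the calls are illustrative:

def classify_comet(peak_magnitude, tail_degrees):
    """Classify a comet by this article's working definition.

    Lower magnitude means brighter; Venus sits around magnitude -3 to -4.
    """
    if peak_magnitude <= -3 and tail_degrees >= 30:
        return "great comet"
    if peak_magnitude <= 1 and tail_degrees >= 15:
        return "major comet"
    return "ordinary comet"

print(classify_comet(-5.0, 35))  # McNaught-like illustrative figures -> "great comet"
print(classify_comet(-0.5, 20))  # Hale-Bopp-like illustrative figures -> "major comet"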

Consider, also, that humanity's ability to view the heavens has completely changed in the last 50 years.

In that time, space travel has become a reality and solid-state electronics have revolutionized photography. Space probes have been sent to comets, most recently the European Space Agency's Rosetta spacecraft, which spent two years (2014 to 2016) becoming intimately acquainted with comet 67P/Churyumov–Gerasimenko.

And the transistor and sensitive solid-state detectors revolutionized astrophotography, providing amateurs with observing capabilities far exceeding those of professionals prior to modern electronics.

The years 1996-1997 were all about Hale-Bopp for comet fans. It was primarily a Northern Hemisphere comet. For weeks on end, Hale-Bopp was a fixture in our western sky, and it probably became one of the most-viewed comets in history.

This comet was indeed a major comet, but a great comet?

Nearly all comets have short periods of visibility. Hale-Bopp literally smashed the previous record for longevity in our skies, which had been held for nearly two centuries by the great comet of 1811. The 1811 comet remained visible to the unaided eye for nine months. Hale-Bopp was visible for a historic 18 months, truly the Cal Ripken Jr. of comets.

Hale-Bopp was bright early on, nearly but not quite as bright as Venus. The size of its nucleus, the icy core of the comet hurtling through space, was estimated to be 60 ± 20 kilometers (37 ± 12 miles). That makes Hale-Bopp's nucleus some six times larger than the nucleus of Halley's comet and 20 times that of Rosetta's comet, 67P/Churyumov–Gerasimenko.

Hale-Bopp had a long tail, up to 30 degrees long, but what was visible and bright was a relatively short tail, less than 10 degrees long, for nearly its entire period of visibility. Yes, some former great comets did not have 30-degree or longer tails, but those comets were, instead, extremely bright.

"Bright" generally means as bright as Venus or brighter. Hale-Bopp was not quite that bright. Some great comets are visible in daylight, but Hale-Bopp was not.

Finally, we probably have to concede that Hale-Bopp straddles the edge of greatness.

In 1973, skygazers were alerted to the early discovery of a comet called Kohoutek. Given the distance at which it was discovered and its brightness at that distance, astronomers projected that this was going to be a "Comet of the Century," perhaps a daylight comet, a once-in-a-lifetime event.

But Kohoutek fizzled. It really disappointed skygazers even though, for professional astronomers, the drawn-out observations of Kohoutek were quite valuable.

Astronomers thought they had learned a lesson from Kohoutek. Too many astronomers stood outdoors at public star parties that year, trying to show a disappointed public a difficult-to-see comet.

Unfortunately, the lesson learned from this comet led astronomers to downplay the next contender for greatness: comet West in 1976. That was too bad, because comet West did not disappoint. It was a magnificent comet! Still, many average skygazers were left out because astronomers remained quiet and the media did not report on it. Thus comet West was not seen and appreciated as it should have been.

From comet West, fast forward a full 31 years to 2007 and the next truly great comet (sidestepping Hale-Bopp). The comet hunter Robert H. McNaught, who has discovered more than 50 comets, discovered it. This 2007 comet is sometimes called the Great Comet of 2007. You're in the Northern Hemisphere and don't remember a great comet that year? That's because, due to the inclination and high eccentricity of comet orbits, many are viewable from only one Earth hemisphere or the other. That was the case for comet McNaught in 2007.

Only Southern Hemisphere skygazers had a chance to become enamored of comet McNaught in 2007. Then, just four years later, another great comet appeared in Southern Hemisphere skies, comet Lovejoy of 2011. Northerners could only watch these two comets from a distance, through the wizardry of the digital age.

Or they could hitch a costly ride to place themselves under the southern skies.

So now consider the following chart, which plots the major and great comets going back to 1680. Bear in mind that astronomical records appear to have reached a high level of fidelity about 200 years ago. Looking at this data statistically, what does it reveal?

On average, every five years, one can expect to see a major comet visible from the Earth. However, the variability around that average is also about five years (one standard deviation).

This means that, on average, a major comet arrives every five to 10 years.

Sometimes the visitations are clustered. A prime example is the years 1910 and 1911, when four major comets crossed the sky.

The data also reveal that great comets arrive on average every 20 years. The variability is 10 years, as represented by a standard deviation around the average. So truly great comets may be visible from Earth every 20 to 30 years. Some centuries might have two or three (1800s) while others, four or more (1900s).
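The averages and standard deviations quoted here are easy to reproduce from a list of apparition years. A minimal sketch using an illustrative subset of great-comet years; the article's full 38-comet dataset is not reproduced here:

import statistics

# Illustrative great-comet years -- a sample, not the article's dataset.
great_comet_years = [1811, 1843, 1858, 1882, 1910, 1927, 1965, 1976, 2007, 2011]
gaps = [later - earlier for earlier, later in zip(great_comet_years, great_comet_years[1:])]

print(f"mean gap: {statistics.mean(gaps):.1f} years, "
      f"std dev: {statistics.stdev(gaps):.1f} years")
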

Statistically, accounting for comet activity over 250 years, 38 major comets is pretty sparse data, but one can see in the plot a historic trend. It is possible that if the data could reveal a leaning towards one hemisphere, it could be an indicator that the Oort Cloud north or south of the ecliptic plane was affected by some object, e.g., a passing star. There is no indication of this in the records.

Does it answer the question? Has the Northern Hemisphere missed out on great comets?

There is certainly a recent trend towards the Southern Hemisphere for great comets. The data reveal that the long-term trend for both the Southern and Northern Hemispheres is a great comet every 25 to 40 years.

But, if you discount Hale-Bopp, then the last great comet for the Northern Hemisphere was Comet West, 44 years ago. Even if you consider Hale-Bopp as great, 23 years have passed.

It would seem that the north is statistically ready to receive its next great comet. Bring it on!

Bottom line: The Southern Hemisphere has had two great comets in this century: McNaught in 2007 and Lovejoy in 2011. But what about the Northern Hemisphere? Our last widely seen comet was Hale-Bopp in 1996-97. Comet West in 1976 was probably our last great comet. We're due for one!

Comets Facts | Types, Composition, Size, Information, History & Definition

Comet History

As of 1995, 878 comets have been cataloged and their orbits at least roughly calculated. Of these, 184 are periodic comets (orbital periods less than 200 years); some of the remainder are no doubt periodic as well, but their orbits have not been determined with sufficient accuracy to tell for sure.

Comets are sometimes called dirty snowballs or "icy mudballs". They are a mixture of ices (both water and frozen gases) and dust that for some reason didn't get incorporated into planets when the solar system was formed. This makes them very interesting as samples of the early history of the solar system.

When they are near theSunand active, comets have several distinct parts:

Comets are invisible except when they are near the Sun. Most comets have highly eccentric orbits which take them far beyond the orbit of Pluto; these are seen once and then disappear for millennia. Only the short- and intermediate-period comets (like Comet Halley) stay within the orbit of Pluto for a significant fraction of their orbits.

After 500 or so passes near the Sun, most of a comet's ice and gas is boiled off, leaving a rocky object very much like an asteroid in appearance. (Perhaps half of the near-Earth asteroids may be "dead" comets.) A comet whose orbit takes it near the Sun is also likely to either impact one of the planets or the Sun, or to be ejected out of the solar system by a close encounter (esp. with Jupiter).

By far the most famous comet is Comet Halley, but SL 9 was a "big hit" for a week in the summer of 1994.

Meteor showers sometimes occur when the Earth passes through the orbit of a comet. Some occur with great regularity: the Perseid meteor shower occurs every year between August 9 and 13, when the Earth passes through the orbit of Comet Swift-Tuttle. Comet Halley is the source of the Orionid shower in October.

Many comets are first discovered by amateur astronomers. Since comets are brightest when near the Sun, they are usually visible only at sunrise or sunset. Charts showing the positions in the sky of some comets can be created with a planetarium program.

What Is Aerospace? Aerospace Industry & Engineering. | Built In

What Is Aerospace Engineering?

Aerospace engineering is the branch of engineering that works in the design, development, testing and production of airborne objects such as aircraft, missiles, spacecraft, rocket propulsion systems and other related systems. Aerospace engineering can either fall into the categories of aeronautical engineering or astronautical engineering.

Early aerospace engineering and its concepts can be traced back to the late 19th century. The true birth of the aerospace industry, however, took place in 1903, when Wilbur and Orville Wright demonstrated the first example of an airplane capable of sustained flight. The brothers conducted extensive research and development, which led to a breakthrough in developing an onboard system that would allow pilots to control the warping of the plane's wings for altitude control. The Wright brothers began licensing their technology to governments and military contractors, and by 1909, they were able to develop the first plane capable of flying faster than 40 miles per hour.

Fast forward through several years of development, bolstered by the emergence of both World War I and World War II, plus the introduction of commercial airliners in the 1930s, and the aerospace industry would continue to take shape well into the 1950s. Along the way, superpowered jets were produced, as well as missile defense systems that would further revolutionize combat. During the late 1950s, a new goal of reaching yet another frontier, space, became increasingly realistic.

The Space Age was marked by fierce competition between the Americans and the Soviets, both aspiring to become the first to explore beyond the sky. The Soviets were the first to succeed with the launch of a small satellite, Sputnik, first entering orbit in 1957. Sputnik's achievement was a result of the evolution of missile systems and used rockets of similar construction to boost small payloads past the atmosphere. The United States completed its first successful launch in 1958 with Project SCORE, successfully placing the first low-orbit communications satellite into orbit.

Several additional satellites were launched and followed by the launch of the first successful human-piloted spacecraft to enter orbit, accomplished by Yury A. Gagarin aboard the Soviet Union's Vostok 1. Since Gagarin's orbit, there have been hundreds of successful missions to space completed by both crewed and autonomous spacecraft.

Modern successes of the aerospace industry include manned missions to the moon, the exploration of Mars by rovers, an intricate system of navigational satellites launched into space and the permanent installation of an International Space Station in orbit.

Modern aerospace developments and breakthroughs often fall into one of two categories: Aeronautical Engineering and Astronautical Engineering.

Aeronautical engineering refers to the science, theory, technology, practices and advancements that make flight possible within the Earth's atmosphere, while astronautical engineering focuses on enabling space exploration, which includes the construction of spacecraft and launch vehicles.

Enabling flight both above and below the atmosphere requires the cooperation and collaboration of engineering experts across multiple fields. Organizations within these fields are responsible for designing systems that are both compatible with existing technology and sustainable enough to remain in use without the need for constant redesigns. These systems are designed through rigorous research and development and built around several key aerospace engineering concepts. By studying these concepts, aerospace engineers can choose the field that they would most like to specialize in and take on a role in some of the most critical jobs.

Aerodynamics refers to how air moves and the interaction between the air and any solid masses passing through it. This is the foundation of aerospace engineering and provides a baseline for sustained flight.

Thermodynamics is the science of the relationship between heat, temperature, energy and output. This concept is key to mechanical engineering as it defines how heat is transformed into energy and creates mechanical output.

Celestial mechanics applies principles of physics to astronomical objects, including stars, planets, asteroids and other celestial bodies, in order to project the motion of objects throughout outer space. Astronautical engineering relies on celestial mechanics to plan trajectories and avoid contact with objects in orbit.

There are four forces that play into successful flight: thrust, drag, weight and lift. All of these forces must be balanced and react to changes in any of the other forces to sustain flight. Thrust is the result of propulsion and is controlled by engines, propellers, or rockets; drag slows a flying object down; weight is the effect gravity has on an object; and lift suspends flying objects in the air, often through the use of wings.
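
Of these four forces, lift has the most standard textbook formula: L = ½ρv²SC_L, where ρ is the air density, v the airspeed, S the wing area and C_L the lift coefficient. A minimal sketch with assumed small-aircraft numbers (none of them from this article):

def lift_newtons(air_density, speed_m_s, wing_area_m2, lift_coefficient):
    """Standard lift equation: L = 0.5 * rho * v^2 * S * C_L."""
    return 0.5 * air_density * speed_m_s ** 2 * wing_area_m2 * lift_coefficient

# Assumed values: sea-level air density 1.225 kg/m^3, 60 m/s cruise speed,
# 16 m^2 of wing area, and a cruise lift coefficient of 0.4.
lift = lift_newtons(1.225, 60.0, 16.0, 0.4)
print(f"Lift ~ {lift:,.0f} N, enough to support ~{lift / 9.81:,.0f} kg")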

Propulsion is the use of a system to drive or push an object forward. Thrust is a result of propulsion, crucial to acceleration and maintaining speed in any craft.

Acoustics principles within aerospace are applied when evaluating and addressing aeroacoustic noise in spacecraft, launch environments, engines, and propulsion systems due to aerodynamic flow. Proper acoustics are crucial to maintaining a safe and manageable environment for those near a flying craft and require careful consideration due to changing pressures that can create catastrophic failure.

All aerospace engineering concepts come into play when designing guidance and control systems, allowing pilots and controllers to adjust systems as needed to maintain flight. Guidance and control systems also utilize GPS navigation to ensure safe travel through low visibility environments.

Aerospace engineers should possess a deep understanding of several elements crucial to success in any aerospace field. These concepts, plus several others, are imperative to building successful systems and playing a role in the future of aerospace capabilities:

Excerpt from:

What Is Aerospace? Aerospace Industry & Engineering. | Built In

Aerospace engineering – Wikipedia

Aerospace engineering is the primary field of engineering concerned with the development of aircraft and spacecraft.[3] It has two major and overlapping branches: aeronautical engineering and astronautical engineering. Avionics engineering is similar, but deals with the electronics side of aerospace engineering.

"Aeronautical engineering" was the original term for the field. As flight technology advanced to include vehicles operating in outer space, the broader term "aerospace engineering" has come into use.[4] Aerospace engineering, particularly the astronautics branch, is often colloquially referred to as "rocket science".[5][a]

Flight vehicles are subjected to demanding conditions such as those caused by changes in atmospheric pressure and temperature, with structural loads applied upon vehicle components. Consequently, they are usually the products of various technological and engineering disciplines including aerodynamics, air propulsion, avionics, materials science, structural analysis and manufacturing. The interaction between these technologies is known as aerospace engineering. Because of the complexity and number of disciplines involved, aerospace engineering is carried out by teams of engineers, each having their own specialized area of expertise.[7]

The origin of aerospace engineering can be traced back to the aviation pioneers around the late 19th to early 20th centuries, although the work of Sir George Cayley dates from the last decade of the 18th to mid-19th century. One of the most important people in the history of aeronautics[8] and a pioneer in aeronautical engineering,[9] Cayley is credited as the first person to separate the forces of lift and drag, which affect any atmospheric flight vehicle.[10]

Early knowledge of aeronautical engineering was largely empirical, with some concepts and skills imported from other branches of engineering.[11] Some key elements, like fluid dynamics, were understood by 18th-century scientists.[citation needed]

In December 1903, the Wright Brothers performed the first sustained, controlled flight of a powered, heavier-than-air aircraft, lasting 12 seconds. The 1910s saw the development of aeronautical engineering through the design of World War I military aircraft.

Between World Wars I and II, great leaps were made in the field, accelerated by the advent of mainstream civil aviation. Notable airplanes of this era include the Curtiss JN 4, the Farman F.60 Goliath, and Fokker Trimotor. Notable military airplanes of this period include the Mitsubishi A6M Zero, the Supermarine Spitfire and the Messerschmitt Bf 109 from Japan, the United Kingdom, and Germany respectively. A significant development in aerospace engineering came with the first operational jet engine-powered airplane, the Messerschmitt Me 262, which entered service in 1944, towards the end of the Second World War.[12]

The first definition of aerospace engineering appeared in February 1958,[4] considering the Earth's atmosphere and outer space as a single realm, thereby encompassing both aircraft (aero) and spacecraft (space) under the newly coined term aerospace.

In response to the USSR launching the first satellite, Sputnik, into space on October 4, 1957, U.S. aerospace engineers launched the first American satellite on January 31, 1958. The National Aeronautics and Space Administration was founded in 1958 as a response to the Cold War. In 1969, Apollo 11, the first human space mission to the Moon, took place. It saw three astronauts enter orbit around the Moon, with two, Neil Armstrong and Buzz Aldrin, visiting the lunar surface. The third astronaut, Michael Collins, stayed in orbit to rendezvous with Armstrong and Aldrin after their visit.[13]

An important innovation came on January 30, 1970, when the Boeing 747 made its first commercial flight from New York to London. This aircraft made history and became known as the "Jumbo Jet" or "Whale"[14] due to its ability to hold up to 480 passengers.[15]

Another significant development in aerospace engineering came in 1976, with the development of the first passenger supersonic aircraft, the Concorde. The development of this aircraft was agreed upon by the French and British on November 29, 1962.[16]

On December 21, 1988, the Antonov An-225 Mriya cargo aircraft commenced its first flight. It holds the records for the world's heaviest aircraft, heaviest airlifted cargo, and longest airlifted cargo, and has the widest wingspan of any aircraft in operational service.[17]

On October 25, 2007, the Airbus A380 made its maiden commercial flight from Singapore to Sydney, Australia. This aircraft was the first passenger plane to surpass the Boeing 747 in terms of passenger capacity, with a maximum of 853. Though development of this aircraft began in 1988 as a competitor to the 747, the A380 made its first test flight in April 2005.[18]

Some of the elements of aerospace engineering are:[19][20]

The basis of most of these elements lies in theoretical physics, such as fluid dynamics for aerodynamics or the equations of motion for flight dynamics. There is also a large empirical component. Historically, this empirical component was derived from testing of scale models and prototypes, either in wind tunnels or in the free atmosphere. More recently, advances in computing have enabled the use of computational fluid dynamics to simulate the behavior of the fluid, reducing time and expense spent on wind-tunnel testing. Those studying hydrodynamics or hydroacoustics often obtain degrees in aerospace engineering.

Additionally, aerospace engineering addresses the integration of all components that constitute an aerospace vehicle (subsystems including power, aerospace bearings, communications, thermal control, life support, etc.) and its life cycle (design, temperature, pressure, radiation, velocity, lifetime).

Aerospace engineering may be studied at the advanced diploma, bachelor's, master's, and Ph.D. levels in aerospace engineering departments at many universities, and in mechanical engineering departments at others. A few departments offer degrees in space-focused astronautical engineering. Some institutions differentiate between aeronautical and astronautical engineering. Graduate degrees are offered in advanced or specialty areas for the aerospace industry.

A background in chemistry, physics, computer science and mathematics is important for students pursuing an aerospace engineering degree.[22]

The term "rocket scientist" is sometimes used to describe a person of great intelligence since rocket science is seen as a practice requiring great mental ability, especially technically and mathematically. The term is used ironically in the expression "It's not rocket science" to indicate that a task is simple.[23] Strictly speaking, the use of "science" in "rocket science" is a misnomer since science is about understanding the origins, nature, and behavior of the universe; engineering is about using scientific and engineering principles to solve problems and develop new technology.[5][6] The more etymologically correct version of this phrase would be "rocket engineer". However, "science" and "engineering" are often misused as synonyms.[5][6][24]

Here is the original post:

Aerospace engineering - Wikipedia

Technology Types & Uses | What is Technology? – Study.com

Though mechanical technology is simple, it has allowed for extremely important advancements in the human experience. The wheel allowed early humans to transport heavy materials faster and more easily. The first wheel was found in ancient Mesopotamia and is thought to have been used as a potter's wheel to throw clay pots. Ancient Egypt and India saw the invention of the shaduf, a hand-operated lever and fulcrum used to lift water to irrigate crops. The ancient Greek philosopher Archimedes was the first to record simple machines, including pulleys, levers, and inclined planes, all used to lessen the work needed to accomplish a task. During the Industrial Revolution, mechanical principles were used in the invention of engines, which use a system of pistons to generate the large amounts of force needed to move trains and power factories. In modern times, mechanical technology is employed to accomplish all sorts of engineering tasks, such as running our cars, lifting heavy objects, and transporting goods.
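
The work-saving that Archimedes recorded can be made concrete with the lever balance law, effort x effort arm = load x load arm. A short Python sketch, with the weights and arm lengths made up for illustration:

    # Lever balance: effort * effort_arm = load * load_arm
    load_n = 800.0        # weight of a stone, in newtons (hypothetical)
    load_arm_m = 0.5      # distance from load to fulcrum, in meters
    effort_arm_m = 2.0    # distance from hands to fulcrum, in meters

    effort_n = load_n * load_arm_m / effort_arm_m
    print(f"Force needed: {effort_n:.0f} N")                          # 200 N
    print(f"Mechanical advantage: {effort_arm_m / load_arm_m:.0f}x")  # 4x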

Medical technology is the application of scientific principles to develop solutions to health problems, to prevent or delay the onset of disease, and to promote the overall health of humans. Medical technology is used to prevent, diagnose, treat, and monitor symptoms and diseases. This includes the production of drugs and medications, and the use of X-rays, MRIs, and ultrasounds, tools used to look inside the body for ailments. Ventilators are another type of medical technology, used to assist people in breathing. Medical technology also includes the equipment invented and used specifically for medical practice, from stethoscopes to scalpels.

An MRI scans the internal organs of a person, and helps doctors diagnose ailments.

Communications technology is the application of scientific knowledge to communicate. This includes everything from telegrams to landlines to cell phones. The internet is considered a communications technology, because it is a system that communicates information in infinite ways. Communications technology also includes systems that aid in the effectiveness and efficiency of communication, such as communication satellites.

Electronic technology is the application of scientific understanding of electricity to do work and perform tasks. We think of electronic technology as the many electronic devices, known as electronics, used in our modern world, such as tablets, laptops, and phones, all with internal computers that run on electricity. However, electronic technology includes any machine that runs on electricity. This includes washing machines, lights, and toasters. Electronic technology uses a system of electrical circuits to achieve a goal. This system can be simple, like that of a light circuit, or can be very complex, like that of a computer. Regardless of the system's complexity, if it uses electricity, it is considered electronic technology.
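
For the simple light-circuit case mentioned above, the circuit's behavior follows directly from Ohm's law, V = I x R. A minimal Python sketch, with household values assumed purely for illustration:

    # Ohm's law for a simple light circuit: I = V / R
    voltage_v = 120.0       # supply voltage (assumed US household mains)
    resistance_ohm = 240.0  # hot resistance of a bulb filament (hypothetical)

    current_a = voltage_v / resistance_ohm
    power_w = voltage_v * current_a
    print(f"Current: {current_a:.2f} A, power: {power_w:.0f} W")  # 0.50 A, 60 W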

A Nintendo 64 controller circuit board shows the intricate ways electricity powers a computer.

Industrial and manufacturing technology is the application of scientific principles to make the production of objects faster, safer, and more efficient. It is a large field that includes many other forms of technology, including electrical and mechanical technologies. During the Industrial Revolution of the 1700s and 1800s, this type of technology revolutionized how humans travel, eat, and live. The invention of engines enabled factories to build machines that mass-produced objects. Engines also enabled those products to be shipped like never before, making a huge variety of products available to people all over the world. The advancement of industrial and manufacturing technologies also revolutionized war, making the production of weapons faster and cheaper. Through the 1940s, '50s, and '60s, manufacturing technologies brought the world fast food, paper plates, and affordable housing.

A woman working in a 1914 British wartime factory.

Forms of technology have been developed and used for as long as humans have existed. Some of the earliest tools used by humans were sharpened stones used as arrowheads, axes, and cutting tools, which can be considered mechanical technology. With the invention of the wheel, early humans were able to make more sophisticated pottery, as well as lighten their loads through the use of wheelbarrows. Boats, which were invented and used before the wheel, utilized all sorts of technologies, from navigational tools to pulley systems and wind power. Plumbing and irrigation technologies allowed water to be moved for vital human needs, from watering crops and providing drinking water to disposing of waste. Building structures became easier as technology advanced, which led to some of the most impressive structures ever built by humans, including Stonehenge and the Egyptian pyramids. As technologies advanced, so too could civilization. More technology meant more solutions, which meant bigger cities, more trade, and the expansion of civilization.

What does technology mean for the future? If the past is any indication, technology is the key to solving the world's problems. Technology has revolutionized ways of doing things, improving the lives of humans and advancing industries. The advancement of technology has also had harmful effects on both people and the environment, particularly through the invention and overuse of fossil fuels as an energy source for so many of the technologies of modern life. Though technology has harmed the planet through the overuse of its resources, it could also be the answer to many of the climate problems we face today, just as it has solved human problems in the past. With the world's greatest minds ever seeking solutions to the problems we face, and an increasing number of people recognizing those problems, technology is sure to find ways of increasing the quality of life for humans, as well as the overall health of the planet and all who live here.

Technology has advanced human society for as long as our species has been in existence. Human life is full of problems that need solving, and technology provides innovative solutions to those problems that reduce effort and increase efficiency. Technology is the use of scientific knowledge for practical purposes that benefit our everyday lives, as well as the industries created by humans. There are several types of technologies. Mechanical technology includes the use of machines to do work, and utilizes many simple machines such as wheels, levers, pulleys, and even cogs and gears that all function together to accomplish a task. Machines that move are mechanical technology. Medical technology is another type, and includes ventilators, medication, and MRIs. Communication technology is the third type, and includes all types of tools used to communicate, from telegrams to telephones. Electronic technology includes technology that requires electricity, from dishwashers to blenders to various electronic devices. Finally, industrial and manufacturing technologies advance ways of producing objects used by people around the world.

Excerpt from:

Technology Types & Uses | What is Technology? - Study.com

Technology – Wikipedia

Use of knowledge for practical goals

Technology is the application of knowledge for achieving practical goals in a reproducible way.[1] The word technology can also mean the products resulting from such efforts,[2]:117[3] including both tangible tools such as utensils or machines, and intangible ones such as software. Technology plays a critical role in science, engineering, and everyday life.

Technological advancements have led to significant changes in society. The earliest known technology is the stone tool, used during prehistoric times, followed by the control of fire, which contributed to the growth of the human brain and the development of language during the Ice Age. The invention of the wheel in the Bronze Age allowed greater travel and the creation of more complex machines. More recent technological inventions, including the printing press, telephone, and the Internet, have lowered barriers to communication and ushered in the knowledge economy.

While technology contributes to economic development and improves human prosperity, it can also have negative impacts like pollution and resource depletion, and may cause social harms like technological unemployment resulting from automation. As a result, there are ongoing philosophical and political debates about the role and use of technology, the ethics of technology, and ways to mitigate its potential downsides.

Technology is a term dating back to the early 17th century that meant 'systematic treatment' (from Greek τεχνολογία, from τέχνη 'art, craft' and -λογία 'study, knowledge').[4][5] It is predated in use by the Ancient Greek τέχνη, used to mean 'knowledge of how to make things', which encompassed activities like architecture.[6]

Starting in the 19th century, continental Europeans started using the terms Technik (German) or technique (French) to refer to a 'way of doing', which included all technical arts, such as dancing, navigation, or printing, whether or not they required tools or instruments.[2]:114–115 At the time, Technologie (German and French) referred either to the academic discipline studying the "methods of arts and crafts", or to the political discipline "intended to legislate on the functions of the arts and crafts."[2]:117 Since the distinction between Technik and Technologie is absent in English, both were translated as technology. The term was previously uncommon in English and mostly referred to the academic discipline, as in the Massachusetts Institute of Technology.[7]

In the 20th century, as a result of scientific progress and the Second Industrial Revolution, technology stopped being considered a distinct academic discipline and took on its current-day meaning: the systemic use of knowledge to practical ends.[2]:119

Tools were initially developed by hominids through observation and trial and error.[8] Around 2 Mya (million years ago), they learned to make the first stone tools by hammering flakes off a pebble, forming a sharp hand axe.[9] This practice was refined 75 kya (thousand years ago) into pressure flaking, enabling much finer work.[10]

The discovery of fire was described by Charles Darwin as "possibly the greatest ever made by man".[11] Archeological, dietary, and social evidence point to "continuous [human] fire-use" at least 1.5 Mya.[12] Fire, fueled with wood and charcoal, allowed early humans to cook their food to increase its digestibility, improving its nutrient value and broadening the number of foods that could be eaten.[13] The cooking hypothesis proposes that the ability to cook promoted an increase in hominid brain size, though some researchers find the evidence inconclusive.[14] Archeological evidence of hearths was dated to 790 kya; researchers believe this is likely to have intensified human socialization and may have contributed to the emergence of language.[15][16]

Other technological advances made during the Paleolithic era include clothing and shelter.[17] No consensus exists on the approximate time of adoption of either technology, but archeologists have found archeological evidence of clothing 90-120 kya[18] and shelter 450 kya.[17] As the Paleolithic era progressed, dwellings became more sophisticated and more elaborate; as early as 380 kya, humans were constructing temporary wood huts.[19][20] Clothing, adapted from the fur and hides of hunted animals, helped humanity expand into colder regions; humans began to migrate out of Africa around 200 kya, initially moving to Eurasia.[21][22][23]

The Neolithic Revolution (or First Agricultural Revolution) brought about an acceleration of technological innovation, and a consequent increase in social complexity.[24] The invention of the polished stone axe was a major advance that allowed large-scale forest clearance and farming.[25] This use of polished stone axes increased greatly in the Neolithic but was originally used in the preceding Mesolithic in some areas such as Ireland.[26] Agriculture fed larger populations, and the transition to sedentism allowed for the simultaneous raising of more children, as infants no longer needed to be carried around by nomads. Additionally, children could contribute labor to the raising of crops more readily than they could participate in hunter-gatherer activities.[27][28]

With this increase in population and availability of labor came an increase in labor specialization.[29] What triggered the progression from early Neolithic villages to the first cities, such as Uruk, and the first civilizations, such as Sumer, is not specifically known; however, the emergence of increasingly hierarchical social structures and specialized labor, of trade and war amongst adjacent cultures, and the need for collective action to overcome environmental challenges such as irrigation, are all thought to have played a role.[30]

Continuing improvements led to the furnace and bellows and provided, for the first time, the ability to smelt and forge gold, copper, silver, and lead, native metals found in relatively pure form in nature.[31] The advantages of copper tools over stone, bone and wooden tools were quickly apparent to early humans, and native copper was probably used from near the beginning of Neolithic times (about 10 ka).[32] Native copper does not naturally occur in large amounts, but copper ores are quite common and some of them produce metal easily when burned in wood or charcoal fires. Eventually, the working of metals led to the discovery of alloys such as bronze and brass (about 4,000 BCE). The first use of iron alloys such as steel dates to around 1,800 BCE.[33][34]

After harnessing fire, humans discovered other forms of energy. The earliest known use of wind power is the sailing ship; the earliest record of a ship under sail is that of a Nile boat dating to around 7,000 BCE.[35] From prehistoric times, Egyptians likely used the power of the annual flooding of the Nile to irrigate their lands, gradually learning to regulate much of it through purposely built irrigation channels and "catch" basins.[36] The ancient Sumerians in Mesopotamia used a complex system of canals and levees to divert water from the Tigris and Euphrates rivers for irrigation.[37]

Archaeologists estimate that the wheel was invented independently and concurrently in Mesopotamia (in present-day Iraq), the Northern Caucasus (Maykop culture), and Central Europe.[38] Time estimates range from 5,500 to 3,000 BCE with most experts putting it closer to 4,000 BCE.[39] The oldest artifacts with drawings depicting wheeled carts date from about 3,500 BCE.[40] More recently, the oldest-known wooden wheel in the world was found in the Ljubljana Marsh of Slovenia.[41]

The invention of the wheel revolutionized trade and war. It did not take long to discover that wheeled wagons could be used to carry heavy loads. The ancient Sumerians used a potter's wheel and may have invented it.[42] A stone pottery wheel found in the city-state of Ur dates to around 3,429 BCE,[43] and even older fragments of wheel-thrown pottery have been found in the same area.[43] Fast (rotary) potters' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy (through water wheels, windmills, and even treadmills) that revolutionized the application of nonhuman power sources. The first two-wheeled carts were derived from travois[44] and were first used in Mesopotamia and Iran in around 3,000 BCE.[44]

The oldest known constructed roadways are the stone-paved streets of the city-state of Ur, dating to circa 4,000 BCE,[45] and timber roads leading through the swamps of Glastonbury, England, dating to around the same period.[45] The first long-distance road, which came into use around 3,500 BCE,[45] spanned 2,400 km from the Persian Gulf to the Mediterranean Sea,[45] but was not paved and was only partially maintained.[45] In around 2,000 BCE, the Minoans on the Greek island of Crete built a 50 km road leading from the palace of Gortyn on the south side of the island, through the mountains, to the palace of Knossos on the north side of the island.[45] Unlike the earlier road, the Minoan road was completely paved.[45]

Ancient Minoan private homes had running water.[47] A bathtub virtually identical to modern ones was unearthed at the Palace of Knossos.[47][48] Several Minoan private homes also had toilets, which could be flushed by pouring water down the drain.[47] The ancient Romans had many public flush toilets,[48] which emptied into an extensive sewage system.[48] The primary sewer in Rome was the Cloaca Maxima;[48] construction began on it in the sixth century BCE and it is still in use today.[48]

The ancient Romans also had a complex system of aqueducts,[46] which were used to transport water across long distances.[46] The first Roman aqueduct was built in 312 BCE.[46] The eleventh and final ancient Roman aqueduct was built in 226 CE.[46] Put together, the Roman aqueducts extended over 450 km,[46] but less than 70 km of this was above ground and supported by arches.[46]

Innovations continued through the Middle Ages with the introduction of silk production (in Asia and later Europe), the horse collar, and horseshoes. Simple machines (such as the lever, the screw, and the pulley) were combined into more complicated tools, such as the wheelbarrow, windmills, and clocks.[49] A system of universities developed and spread scientific ideas and practices, including Oxford and Cambridge.[50]

The Renaissance era produced many innovations, including the introduction of the movable type printing press to Europe, which facilitated the communication of knowledge. Technology became increasingly influenced by science, beginning a cycle of mutual advancement.[51]

Starting in the United Kingdom in the 18th century, the discovery of steam power set off the Industrial Revolution, which saw wide-ranging technological discoveries, particularly in the areas of agriculture, manufacturing, mining, metallurgy, and transport, and the widespread application of the factory system.[52] This was followed a century later by the Second Industrial Revolution which led to rapid scientific discovery, standardization, and mass production. New technologies were developed, including sewage systems, electricity, light bulbs, electric motors, railroads, automobiles, and airplanes. These technological advances led to significant developments in medicine, chemistry, physics, and engineering.[53] They were accompanied by consequential social change, with the introduction of skyscrapers accompanied by rapid urbanization.[54] Communication improved with the invention of the telegraph, the telephone, the radio, and television.[55]

The 20th century brought a host of innovations. In physics, the discovery of nuclear fission in the Atomic Age led to both nuclear weapons and nuclear power. Computers were invented and later shifted from analog to digital in the Digital Revolution. Information technology, particularly optical fiber and optical amplifiers, led to the birth of the Internet, which ushered in the Information Age. The Space Age began with the launch of Sputnik 1 in 1957, followed by crewed missions to the Moon in the 1960s. Organized efforts to search for extraterrestrial intelligence have used radio telescopes to detect signs of technology use, or technosignatures, given off by alien civilizations. In medicine, new technologies were developed for diagnosis (CT, PET, and MRI scanning), treatment (like the dialysis machine, defibrillator, pacemaker, and a wide array of new pharmaceutical drugs), and research (like interferon cloning and DNA microarrays).[56]

Complex manufacturing and construction techniques and organizations are needed to make and maintain modern technologies, and entire industries have arisen to develop succeeding generations of increasingly complex tools. Modern technology increasingly relies on training and education: designers, builders, maintainers, and users often require sophisticated general and specific training.[57] Moreover, these technologies have become so complex that entire fields have developed to support them, including engineering, medicine, and computer science, while other fields have become more complex, such as construction, transportation, and architecture.

Technological change is the largest cause of long-term economic growth.[58][59] Throughout human history, energy production was the main constraint on economic development, and new technologies allowed humans to significantly increase the amount of available energy. First came fire, which made edible a wider variety of foods, and made it less physically demanding to digest them. Fire also enabled smelting, and the use of tin, copper, and iron tools, used for hunting or tradesmanship. Then came the agricultural revolution: humans no longer needed to hunt or gather to survive, and began to settle in towns and cities, forming more complex societies, with militaries and more organized forms of religion.[60]

Technologies have contributed to human welfare through increased prosperity, improved comfort and quality of life, and medical progress, but they can also disrupt existing social hierarchies, cause pollution, and harm individuals or groups.

Recent years have brought about a rise in social media's cultural prominence, with potential repercussions on democracy, and economic and social life. Early on, the internet was seen as a "liberation technology" that would democratize knowledge, improve access to education, and promote democracy. Modern research has turned to investigate the internet's downsides, including disinformation, polarization, hate speech, and propaganda.[61]

Since the 1970s, technology's impact on the environment has been criticized, leading to a surge in investment in solar, wind, and other forms of clean energy.

Since the invention of the wheel, technologies have helped increase humans' economic output. Past automation has both substituted for and complemented labor; machines replaced humans at some lower-paying jobs (for example in agriculture), but this was compensated by the creation of new, higher-paying jobs.[62] Studies have found that computers did not create significant net technological unemployment.[63] Because artificial intelligence is far more capable than ordinary computers and is still in its infancy, it is not known whether it will follow the same trend; the question has been debated at length among economists and policymakers. A 2017 survey found no clear consensus among economists on whether AI would increase long-term unemployment.[64] According to the World Economic Forum's "The Future of Jobs Report 2020", AI is predicted to replace 85 million jobs worldwide and create 97 million new jobs by 2025.[65][66] A study of the U.S. from 1990 to 2007 by MIT economist Daron Acemoglu showed that adding one robot for every 1,000 workers decreased the employment-to-population ratio by 0.2 percentage points, about 3.3 workers displaced per robot, and lowered wages by 0.42%.[67][68] Concerns about technology replacing human labor, however, are long-standing. As U.S. President Lyndon Johnson said in 1964 upon signing the National Commission on Technology, Automation, and Economic Progress bill, "Technology is creating both new opportunities and new obligations for us, opportunity for greater productivity and progress; obligation to be sure that no workingman, no family must pay an unjust price for progress."[69][70][71][72][73]
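
As a back-of-the-envelope illustration of the study's coefficients quoted above, here is a short Python sketch; the workforce size is invented, while the per-robot figures are the ones cited:

    # Applying the quoted per-robot coefficients to an invented workforce.
    workers = 1_000_000
    robots_added = workers / 1_000      # one robot per 1,000 workers
    jobs_lost_per_robot = 3.3           # displacement figure quoted above
    wage_drop_pct = 0.42                # wage effect quoted above

    print(f"Robots added: {robots_added:,.0f}")                                    # 1,000
    print(f"Estimated jobs displaced: {robots_added * jobs_lost_per_robot:,.0f}")  # 3,300
    print(f"Estimated wage decline: {wage_drop_pct}%")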

With the growing reliance on technology have come security and privacy concerns. Billions of people use online payment methods such as WeChat Pay, PayPal, and Alipay to transfer money. Although security measures are in place, some criminals are able to bypass them.[74] In March 2022, North Korean hackers stole over $600 million worth of cryptocurrency from the owner of the game Axie Infinity and used Blender.io, a mixer that concealed their cryptocurrency transfers, to launder over $20.5 million of it. In response, the U.S. Treasury Department sanctioned Blender.io, the first time it had taken action against a mixer, in an effort to crack down on North Korean hackers.[75][76] The privacy of cryptocurrency has been debated: although many customers value it, many others argue that cryptocurrency needs more transparency and stability.[74]

Philosophy of technology is a branch of philosophy that studies the "practice of designing and creating artifacts", and the "nature of the things so created."[77] It emerged as a discipline over the past two centuries, and has grown "considerably" since the 1970s.[78] The humanities philosophy of technology is concerned with the "meaning of technology for, and its impact on, society and culture".[77]

Initially, technology was seen as an extension of the human organism that replicated or amplified bodily and mental faculties.[79] Marx framed it as a tool used by capitalists to oppress the proletariat, but believed technology would be a fundamentally liberating force once it was "freed from societal deformations". Second-wave philosophers like Ortega later shifted their focus from economics and politics to "daily life and living in a techno-material culture," arguing that technology could oppress "even the members of the bourgeoisie who were its ostensible masters and possessors." Third-stage philosophers like Don Ihde and Albert Borgmann represent a turn toward de-generalization and empiricism, and considered how humans can learn to live with technology.[78][page needed]

Early scholarship on technology was split between two arguments: technological determinism and social construction. Technological determinism is the idea that technologies cause unavoidable social changes.[80]:95 It usually encompasses a related argument, technological autonomy, which asserts that technological progress follows a natural progression and cannot be prevented.[81] Social constructivists argue that technologies follow no natural progression, and are shaped by cultural values, laws, politics, and economic incentives. Modern scholarship has shifted towards an analysis of sociotechnical systems, "assemblages of things, people, practices, and meanings", looking at the value judgments that shape technology.[80][page needed]

Cultural critic Neil Postman distinguished tool-using societies from technological societies and from what he called "technopolies," societies that are dominated by an ideology of technological and scientific progress to the detriment of other cultural practices, values, and world views.[82] Herbert Marcuse and John Zerzan suggest that technological society will inevitably deprive us of our freedom and psychological health.[83]

The ethics of technology is an interdisciplinary subfield of ethics that analyzes technology's ethical implications and explores ways to mitigate the potential negative impacts of new technologies. There is a broad range of ethical issues revolving around technology, from specific areas of focus affecting professionals working with technology to broader social, ethical, and legal issues concerning the role of technology in society and everyday life.[84]

Prominent debates have surrounded genetically modified organisms, the use of robotic soldiers, algorithmic bias, and the issue of aligning AI behavior with human values.[85]

Technology ethics encompasses several key fields. Bioethics looks at ethical issues surrounding biotechnologies and modern medicine, including cloning, human genetic engineering, and stem cell research. Computer ethics focuses on issues related to computing. Cyberethics explores internet-related issues like intellectual property rights, privacy, and censorship. Nanoethics examines issues surrounding the alteration of matter at the atomic and molecular level in various disciplines including computer science, engineering, and biology. And engineering ethics deals with the professional standards of engineers, including software engineers and their moral responsibilities to the public.[86]

A wide branch of technology ethics is concerned with the ethics of artificial intelligence: it includes robot ethics, which deals with ethical issues involved in the design, construction, use, and treatment of robots,[87] as well as machine ethics, which is concerned with ensuring the ethical behavior of artificial intelligent agents.[88] Within the field of AI ethics, significant yet-unsolved research problems include AI alignment (ensuring that AI behaviors are aligned with their creators' intended goals and interests) and the reduction of algorithmic bias. Some researchers have warned against the hypothetical risk of an AI takeover, and have advocated for the use of AI capability control in addition to AI alignment methods.

Other fields of ethics have had to contend with technology-related issues, including military ethics, media ethics, and educational ethics.

Futures studies is the systematic and interdisciplinary study of social and technological progress. It aims to quantitatively and qualitatively explore the range of plausible futures and to incorporate human values in the development of new technologies.[89]:54 More generally, futures researchers are interested in improving "the freedom and welfare of humankind".[89]:73 It relies on a thorough quantitative and qualitative analysis of past and present technological trends, and attempts to rigorously extrapolate them into the future.[89] Science fiction is often used as a source of ideas.[89]:173 Futures research methodologies include survey research, modeling, statistical analysis, and computer simulations.[89]:187
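
The quantitative side of this work, fitting past technological trends and extrapolating them forward, can be sketched in a few lines of Python. The data points below are invented, in the spirit of a Moore's-law-style capability index:

    import numpy as np

    # Fit an exponential trend (linear in log-space) and extrapolate it.
    years = np.array([2000, 2005, 2010, 2015, 2020])        # hypothetical
    capability = np.array([1.0, 8.0, 60.0, 500.0, 4000.0])  # invented index

    slope, intercept = np.polyfit(years, np.log(capability), 1)
    forecast_year = 2030
    forecast = np.exp(slope * forecast_year + intercept)
    print(f"Doubling time: {np.log(2) / slope:.1f} years")
    print(f"Extrapolated index for {forecast_year}: {forecast:,.0f}")

Real futures research combines such extrapolations with the qualitative methods listed above, precisely because naive curve-fitting ignores saturation and disruption.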

Existential risk researchers analyze risks that could lead to human extinction or civilizational collapse, and look for ways to build resilience against them.[90][91] Relevant research centers include the Cambridge Centre for the Study of Existential Risk and the Stanford Existential Risk Initiative.[92] Future technologies may contribute to the risks of artificial general intelligence, biological warfare, nuclear warfare, nanotechnology, anthropogenic climate change, or stable global totalitarianism, though technologies may also help us mitigate asteroid impacts and gamma-ray bursts.[93] In 2019 philosopher Nick Bostrom introduced the notion of a vulnerable world, "one in which there is some level of technological development at which civilization almost certainly gets devastated by default", citing the risks of a pandemic caused by bioterrorists, or an arms race triggered by the development of novel armaments and the loss of mutual assured destruction.[94] He invites policymakers to question the assumptions that technological progress is always beneficial, that scientific openness is always preferable, or that they can afford to wait until a dangerous technology has been invented before they prepare mitigations.[94]

Emerging technologies are novel technologies whose development or practical applications are still largely unrealized. They include nanotechnology, biotechnology, robotics, 3D printing, blockchains, and artificial intelligence.

In 2005, futurist Ray Kurzweil claimed the next technological revolution would rest upon advances in genetics, nanotechnology, and robotics, with robotics being the most impactful of the three.[95] Genetic engineering will allow far greater control over human biological nature through a process called directed evolution. Some thinkers believe that this may shatter our sense of self, and have called for renewed public debate exploring the issue more thoroughly;[96] others fear that directed evolution could lead to eugenics or extreme social inequality. Nanotechnology will grant us the ability to manipulate matter "at the molecular and atomic scale",[97] which could allow us to reshape ourselves and our environment in fundamental ways.[98] Nanobots could be used within the human body to destroy cancer cells or form new body parts, blurring the line between biology and technology.[99] Autonomous robots have undergone rapid progress, and are expected to replace humans at many dangerous tasks, including search and rescue, bomb disposal, firefighting, and war.[100]

Estimates of the advent of artificial general intelligence vary, but half of the machine learning experts surveyed in 2018 believed that AI will "accomplish every task better and more cheaply" than humans by 2063 and automate all human jobs by 2140.[101] This expected technological unemployment has led to calls for increased emphasis on computer science education and debates about universal basic income (UBI). Political science experts predict that this could lead to a rise in extremism, while others see it as an opportunity to usher in a post-scarcity economy.

Some segments of the 1960s hippie counterculture grew to dislike urban living and developed a preference for locally autonomous, sustainable, and decentralized technology, termed appropriate technology. This later influenced hacker culture and technopaganism.

Technological utopianism refers to the belief that technological development is a moral good, which can and should bring about a utopia, that is, a society in which laws, governments, and social conditions serve the needs of all its citizens.[102] Examples of techno-utopian goals include post-scarcity economics, life extension, mind uploading, cryonics, and the creation of artificial superintelligence. Major techno-utopian movements include transhumanism and singularitarianism.

The transhumanism movement is founded upon the "continued evolution of human life beyond its current human form" through science and technology, informed by "life-promoting principles and values."[103] The movement gained wider popularity in the early 21st century.[104]

Singularitarians believe that machine superintelligence will "accelerate technological progress" by orders of magnitude and "create even more intelligent entities ever faster", which may lead to a pace of societal and technological change that is "incomprehensible" to us. This event horizon is known as the technological singularity.[105]

Major figures of techno-utopianism include Ray Kurzweil and Nick Bostrom. Techno-utopianism has attracted both praise and criticism from progressive, religious, and conservative thinkers.[106]

Technology's central role in our lives has drawn concerns and backlash. The backlash against technology is not a uniform movement and encompasses many heterogeneous ideologies.[107]

The earliest known revolt against technology was Luddism, a pushback against early automation in textile production. Automation had resulted in a need for fewer workers, a process known as technological unemployment.

Between the 1970s and 1990s, American terrorist Ted Kaczynski carried out a series of bombings across America and published the Unabomber Manifesto denouncing technology's negative impacts on nature and human freedom. The essay resonated with a large part of the American public.[108] It was partly inspired by Jacques Ellul's The Technological Society.[109]

Some subcultures, like the off-the-grid movement, advocate a withdrawal from technology and a return to nature. The ecovillage movement seeks to reestablish harmony between technology and nature.[110]

Engineering is the process by which technology is developed. It often requires problem-solving under strict constraints.[111] Technological development is "action-oriented", while scientific knowledge is fundamentally explanatory.[112] Polish philosopher Henryk Skolimowski framed it like so: "science concerns itself with what is, technology with what is to be."[113]:375

The direction of causality between scientific discovery and technological innovation has been debated by scientists, philosophers, and policymakers.[114] Because innovation is often undertaken at the edge of scientific knowledge, most technologies are not derived from scientific knowledge, but instead from engineering, tinkering, and chance.[115]:217–240 For example, in the 1940s and 1950s, when knowledge of turbulent combustion or fluid dynamics was still crude, jet engines were invented through "running the device to destruction, analyzing what broke [...] and repeating the process".[111] Scientific explanations often follow technological developments rather than preceding them.[115]:217–240 Many discoveries also arose from pure chance, like the discovery of penicillin as a result of accidental lab contamination.[116] Since the 1960s, the assumption that government funding of basic research would lead to the discovery of marketable technologies has lost credibility.[117][118] Probabilist Nassim Taleb argues that national research programs that implement the notions of serendipity and convexity through frequent trial and error are more likely to lead to useful innovations than research that aims to reach specific outcomes.[115][119]
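
Taleb's convexity point can be illustrated with a small Monte Carlo sketch in Python: many cheap trials with capped downside and rare large upside have a positive expected payoff even though most individual trials fail. The payoff numbers and odds below are invented for illustration:

    import random

    # Monte Carlo sketch of convex trial-and-error: lose a little often,
    # win big rarely.
    random.seed(0)
    TRIALS, TRIAL_COST = 100, 1.0

    def tinker():
        # 1% chance of a breakthrough worth 1000, else nothing (invented odds)
        return 1000.0 if random.random() < 0.01 else 0.0

    net = sum(tinker() - TRIAL_COST for _ in range(TRIALS))
    print(f"Net return from {TRIALS} small trials: {net:.0f}")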

Despite this, modern technology is increasingly reliant on deep, domain-specific scientific knowledge. In 1979, an average of one in three patents granted in the U.S. cited the scientific literature; by 1989, this increased to an average of one citation per patent. The average was skewed upwards by patents related to the pharmaceutical industry, chemistry, and electronics.[120] A 2021 analysis shows that patents that are based on scientific discoveries are on average 26% more valuable than equivalent non-science-based patents.[121]

The use of basic technology is also a feature of non-human animal species. Tool use was once considered a defining characteristic of the genus Homo.[122] This view was supplanted after discovering evidence of tool use among chimpanzees and other primates,[123] dolphins,[124] and crows.[125][126] For example, researchers have observed wild chimpanzees using basic foraging tools, pestles, levers, using leaves as sponges, and tree bark or vines as probes to fish termites.[127] West African chimpanzees use stone hammers and anvils for cracking nuts,[128] as do capuchin monkeys of Boa Vista, Brazil.[129] Tool use is not the only form of animal technology use; for example, beaver dams, built with wooden sticks or large stones, are a technology with "dramatic" impacts on river habitats and ecosystems.[130]

Man's relationship with technology has been explored in science-fiction literature, for example in Brave New World, A Clockwork Orange, Nineteen Eighty-Four, Isaac Asimov's essays, and in movies like Minority Report, Total Recall, Gattaca, and Inception. It has spawned the dystopian and futuristic cyberpunk genre, which juxtaposes futuristic technology with societal collapse, dystopia, or decay.[131] Notable cyberpunk works include William Gibson's novel Neuromancer and the movies Blade Runner and The Matrix.

Here is the original post:

Technology - Wikipedia