

Space exploration

Space exploration is the discovery and exploration of celestial structures in outer space by means of evolving and growing space technology. While the study of space is carried out mainly by astronomers with telescopes, the physical exploration of space is conducted both by unmanned robotic space probes and human spaceflight.

While the observation of objects in space, known as astronomy, predates reliable recorded history, it was the development of large and relatively efficient rockets during the mid-twentieth century that allowed physical space exploration to become a reality. Common rationales for exploring space include advancing scientific research, national prestige, uniting different nations, ensuring the future survival of humanity, and developing military and strategic advantages against other countries.[1]

Space exploration has often been used as a proxy competition for geopolitical rivalries such as the Cold War. The early era of space exploration was driven by a “Space Race” between the Soviet Union and the United States. The launch of the first human-made object to orbit Earth, the Soviet Union’s Sputnik 1, on 4 October 1957, and the first Moon landing by the American Apollo 11 mission on 20 July 1969 are often taken as landmarks for this initial period. The Soviet space program achieved many of the first milestones, including the first living being in orbit in 1957, the first human spaceflight (Yuri Gagarin aboard Vostok 1) in 1961, the first spacewalk (by Aleksei Leonov) on 18 March 1965, the first automatic landing on another celestial body in 1966, and the launch of the first space station (Salyut 1) in 1971. After the first 20 years of exploration, focus shifted from one-off flights to reusable hardware, such as the Space Shuttle program, and from competition to cooperation, as with the International Space Station (ISS).

With the substantial completion of the ISS[2] following STS-133 in March 2011, plans for space exploration by the U.S. remain in flux. Constellation, a Bush Administration program for a return to the Moon by 2020,[3] was judged inadequately funded and unrealistic by an expert review panel reporting in 2009.[4] The Obama Administration proposed a revision of Constellation in 2010 to focus on the development of the capability for crewed missions beyond low Earth orbit (LEO), envisioning extending the operation of the ISS beyond 2020, transferring the development of launch vehicles for human crews from NASA to the private sector, and developing technology to enable missions beyond LEO, such as Earth–Moon L1, the Moon, Earth–Sun L2, near-Earth asteroids, and Phobos or Mars orbit.[5]

In the 2000s, the People’s Republic of China initiated a successful manned spaceflight program, while the European Union, Japan, and India have also planned future crewed space missions. China, Russia, Japan, and India have advocated crewed missions to the Moon during the 21st century, while the European Union has advocated manned missions to both the Moon and Mars during the 21st century.

From the 1990s onwards, private interests began promoting space tourism and then public space exploration of the Moon (see Google Lunar X Prize).

The highest known projectiles prior to the rockets of the 1940s were the shells of the Paris Gun, a type of German long-range siege gun, which reached at least 40 kilometers in altitude during World War One.[6] Steps towards putting a human-made object into space were taken by German scientists during World War II while testing the V-2 rocket (designated A-4), which became the first human-made object in space with a test launch on 3 October 1942. After the war, the U.S. used German scientists and their captured rockets in programs for both military and civilian research. The first scientific exploration from space was the cosmic radiation experiment launched by the U.S. on a V-2 rocket on 10 May 1946.[7] The first images of Earth taken from space followed the same year,[8][9] while the first animal experiment saw fruit flies lifted into space in 1947, both also on modified V-2s launched by Americans. Starting in 1947, the Soviets, also with the help of German teams, launched sub-orbital V-2 rockets and their own variant, the R-1, including radiation and animal experiments on some flights. These suborbital experiments allowed only a very short time in space, which limited their usefulness.

The first successful orbital launch was of the Soviet uncrewed Sputnik 1 (“Satellite 1”) mission on 4 October 1957. The satellite weighed about 83 kg (183 lb), and is believed to have orbited Earth at a height of about 250 km (160 mi). It had two radio transmitters (20 and 40 MHz), which emitted “beeps” that could be heard by radios around the globe. Analysis of the radio signals was used to gather information about the electron density of the ionosphere, while temperature and pressure data were encoded in the duration of the radio beeps. The results indicated that the satellite was not punctured by a meteoroid. Sputnik 1 was launched by an R-7 rocket. It burned up upon re-entry on 3 January 1958.

The second orbital launch was Sputnik 2. Launched by the USSR on 3 November 1957, it carried the dog Laika, who became the first animal in orbit.

This success led to an escalation of the American space program, which unsuccessfully attempted to launch a Vanguard satellite into orbit two months later. On 31 January 1958, the U.S. successfully orbited Explorer 1 on a Juno rocket.

The first successful human spaceflight was Vostok 1 (“East 1”), carrying 27-year-old Russian cosmonaut Yuri Gagarin on 12 April 1961. The spacecraft completed one orbit around the globe, lasting about 1 hour and 48 minutes. Gagarin’s flight resonated around the world; it was a demonstration of the advanced Soviet space program and it opened an entirely new era in space exploration: human spaceflight.

The U.S. first launched a person into space within a month of Vostok 1 with Alan Shepard’s suborbital flight on Freedom 7. Orbital flight was achieved by the United States when John Glenn’s Friendship 7 orbited Earth on 20 February 1962.

Valentina Tereshkova, the first woman in space, orbited Earth 48 times aboard Vostok 6 on 16 June 1963.

China first launched a person into space 42 years after the launch of Vostok 1, on 15 October 2003, with the flight of Yang Liwei aboard the Shenzhou 5 (Divine Vessel 5) spacecraft.

The first artificial object to reach another celestial body was Luna 2 in 1959.[10] The first automatic landing on another celestial body was performed by Luna 9[11] in 1966. Luna 10 became the first artificial satellite of the Moon.[12]

The first crewed landing on another celestial body was performed by Apollo 11 on 20 July 1969.

The first successful interplanetary flyby was the 1962 Mariner 2 flyby of Venus (closest approach 34,773 kilometers). The other planets were first flown by in 1965 for Mars by Mariner 4, 1973 for Jupiter by Pioneer 10, 1974 for Mercury by Mariner 10, 1979 for Saturn by Pioneer 11, 1986 for Uranus by Voyager 2, and 1989 for Neptune by Voyager 2. In 2015, the dwarf planets Ceres and Pluto were orbited by Dawn and passed by New Horizons, respectively.

The first interplanetary surface mission to return at least limited surface data from another planet was the 1970 landing of Venera 7 on Venus, which returned data to Earth for 23 minutes. In 1975, Venera 9 was the first to return images from the surface of another planet. In 1971, the Mars 3 mission achieved the first soft landing on Mars, returning data for almost 20 seconds. Much longer-duration surface missions were later achieved, including over six years of Mars surface operation by the Viking 1 lander from 1976 to 1982 and over two hours of transmission from the surface of Venus by Venera 13 in 1982, the longest-ever Soviet planetary surface mission.

The dream of stepping into the outer reaches of Earth’s atmosphere was driven by the fiction of Jules Verne[13][14][15] and H. G. Wells,[16] and rocket technology was developed to try to realize this vision. The German V-2 was the first rocket to travel into space, overcoming the problems of thrust and material failure. During the final days of World War II this technology was obtained by both the Americans and Soviets, as were its designers. The initial driving force for further development of the technology was a weapons race for intercontinental ballistic missiles (ICBMs) to be used as long-range carriers for fast nuclear weapon delivery, but in 1961, when the Soviet Union launched the first man into space, the United States declared itself to be in a “Space Race” with the Soviets.

Konstantin Tsiolkovsky, Robert Goddard, Hermann Oberth, and Reinhold Tiling laid the groundwork of rocketry in the early years of the 20th century.

Wernher von Braun was the lead rocket engineer for Nazi Germany’s World War II V-2 rocket project. In the last days of the war he led a caravan of workers in the German rocket program to the American lines, where they surrendered and were brought to the United States to work on their rocket development (“Operation Paperclip”). He acquired American citizenship and led the team that developed and launched Explorer 1, the first American satellite. Von Braun later led the team at NASA’s Marshall Space Flight Center which developed the Saturn V moon rocket.

Initially the race for space was often led by Sergei Korolev, whose legacy includes both the R-7 and Soyuz, which remain in service to this day. Korolev was the mastermind behind the first satellite, the first man (and first woman) in orbit, and the first spacewalk. Until his death his identity was a closely guarded state secret; not even his mother knew that he was responsible for creating the Soviet space program.

Kerim Kerimov was one of the founders of the Soviet space program and one of the lead architects behind the first human spaceflight (Vostok 1), alongside Sergei Korolev. After Korolev’s death in 1966, Kerimov became the lead scientist of the Soviet space program and was responsible for the launch of the first space stations from 1971 to 1991, including the Salyut and Mir series, and their 1967 precursors, Cosmos 186 and Cosmos 188.[17][18]

Although the Sun will probably never be physically explored up close, the study of the Sun has nevertheless been a major focus of space exploration. Being above the atmosphere, and beyond Earth’s magnetic field, gives access to the solar wind and to the infrared and ultraviolet radiation that cannot reach Earth’s surface. The Sun generates most space weather, which can affect power generation and transmission systems on Earth and interfere with, and even damage, satellites and space probes. Numerous spacecraft dedicated to observing the Sun, beginning with the Apollo Telescope Mount, have been launched, and still others have had solar observation as a secondary objective. Parker Solar Probe, planned for a 2018 launch, will approach the Sun to within one-eighth the orbital radius of Mercury.

Mercury remains the least explored of the terrestrial planets. As of May 2013, the Mariner 10 and MESSENGER missions were the only ones to have made close observations of Mercury. MESSENGER entered orbit around Mercury in March 2011, to further investigate the observations made by Mariner 10 in 1975 (Munsell, 2006b).

A third mission to Mercury, BepiColombo, is scheduled to arrive in 2025 and is to include two probes. BepiColombo is a joint mission between Japan and the European Space Agency. MESSENGER and BepiColombo are intended to gather complementary data to help scientists understand many of the mysteries discovered by Mariner 10’s flybys.

Flights to other planets within the Solar System are accomplished at a cost in energy, which is described by the net change in the velocity of the spacecraft, or delta-v. Due to the relatively high delta-v needed to reach Mercury and its proximity to the Sun, the planet is difficult to explore and orbits around it are rather unstable.
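The delta-v cost described above can be estimated with the vis-viva equation. The sketch below, under simplifying assumptions (circular, coplanar planetary orbits, idealized Hohmann transfers, planetary gravity wells ignored, rounded constants), shows why Mercury is so expensive to reach relative to Mars despite being a similar distance from Earth:

```python
import math

MU_SUN = 1.327e20   # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11       # astronomical unit, m

def hohmann_dv(r1, r2, mu=MU_SUN):
    """Total delta-v (m/s) for an idealized Hohmann transfer
    between circular, coplanar orbits of radii r1 and r2."""
    a = (r1 + r2) / 2                       # semi-major axis of transfer ellipse
    v1 = math.sqrt(mu / r1)                 # circular speed at r1
    v2 = math.sqrt(mu / r2)                 # circular speed at r2
    vt1 = math.sqrt(mu * (2 / r1 - 1 / a))  # transfer-ellipse speed at departure
    vt2 = math.sqrt(mu * (2 / r2 - 1 / a))  # transfer-ellipse speed at arrival
    return abs(vt1 - v1) + abs(v2 - vt2)

# Earth (1 AU) to Mercury (0.387 AU) vs. Earth to Mars (1.524 AU)
dv_mercury = hohmann_dv(1.0 * AU, 0.387 * AU)
dv_mars = hohmann_dv(1.0 * AU, 1.524 * AU)
print(f"Mercury: {dv_mercury/1000:.1f} km/s, Mars: {dv_mars/1000:.1f} km/s")
```

Under these assumptions the Mercury transfer comes out around three times the delta-v of the Mars transfer, which is why real Mercury missions such as MESSENGER and BepiColombo rely on repeated planetary flybys rather than a direct transfer.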

Venus was the first target of interplanetary flyby and lander missions and, despite having one of the most hostile surface environments in the Solar System, has had more landers sent to it (nearly all from the Soviet Union) than any other planet. The first successful Venus flyby was by the American Mariner 2 spacecraft, which flew past Venus in 1962. Mariner 2 has been followed by several other flybys by multiple space agencies, often as part of missions using a Venus flyby to provide a gravitational assist en route to other celestial bodies. In 1967, Venera 4 became the first probe to enter and directly examine the atmosphere of Venus. In 1970, Venera 7 became the first successful lander to reach the surface of Venus, and by 1985 it had been followed by eight additional successful Soviet Venus landers, which provided images and other direct surface data. Starting in 1975 with the Soviet orbiter Venera 9, some ten successful orbiter missions have been sent to Venus, including later missions that were able to map the surface of Venus using radar to pierce the obscuring atmosphere.

Space exploration has been used as a tool to understand Earth as a celestial object in its own right. Orbital missions can provide data for Earth that can be difficult or impossible to obtain from a purely ground-based point of reference.

For example, the existence of the Van Allen radiation belts was unknown until their discovery by the United States’ first artificial satellite, Explorer 1. These belts contain radiation trapped by Earth’s magnetic field, which currently renders construction of habitable space stations above 1,000 km impractical. Following this early unexpected discovery, a large number of Earth observation satellites have been deployed specifically to explore Earth from a space-based perspective. These satellites have significantly contributed to the understanding of a variety of Earth-based phenomena. For instance, the hole in the ozone layer was found by an artificial satellite that was exploring Earth’s atmosphere, and satellites have allowed for the discovery of archeological sites and geological formations that were difficult or impossible to otherwise identify.

The Moon was the first celestial body to be the object of space exploration. It holds the distinctions of being the first remote celestial object to be flown by, orbited, and landed upon by spacecraft, and the only remote celestial object ever to be visited by humans.

In 1959 the Soviets obtained the first images of the far side of the Moon, never previously visible to humans. The U.S. exploration of the Moon began with the Ranger 4 impactor in 1962. Starting in 1966 the Soviets successfully deployed a number of landers to the Moon which were able to obtain data directly from the Moon’s surface; just four months later, Surveyor 1 marked the debut of a successful series of U.S. landers. The Soviet uncrewed missions culminated in the Lunokhod program in the early 1970s, which included the first uncrewed rovers and also successfully brought lunar soil samples to Earth for study. This marked the first (and to date the only) automated return of extraterrestrial soil samples to Earth. Uncrewed exploration of the Moon continues with various nations periodically deploying lunar orbiters, and in 2008 the Indian Moon Impact Probe.

Crewed exploration of the Moon began in 1968 with the Apollo 8 mission that successfully orbited the Moon, the first time any extraterrestrial object was orbited by humans. In 1969, the Apollo 11 mission marked the first time humans set foot upon another world. Crewed exploration of the Moon did not continue for long, however. The Apollo 17 mission in 1972 marked the most recent human visit there, and the next, Exploration Mission 2, is due to orbit the Moon in 2021. Robotic missions are still pursued vigorously.

The exploration of Mars has been an important part of the space exploration programs of the Soviet Union (later Russia), the United States, Europe, Japan and India. Dozens of robotic spacecraft, including orbiters, landers, and rovers, have been launched toward Mars since the 1960s. These missions were aimed at gathering data about current conditions and answering questions about the history of Mars. The questions raised by the scientific community are expected to not only give a better appreciation of the red planet but also yield further insight into the past, and possible future, of Earth.

The exploration of Mars has come at a considerable financial cost, with roughly two-thirds of all spacecraft destined for Mars failing before completing their missions, and some failing before their observations could even begin. Such a high failure rate can be attributed to the complexity and large number of variables involved in an interplanetary journey, and has led researchers to jokingly speak of the Great Galactic Ghoul,[20] which subsists on a diet of Mars probes. This phenomenon is also informally known as the “Mars Curse”.[21] In contrast to the overall high failure rate in the exploration of Mars, India became the first country to succeed on its maiden attempt. India’s Mars Orbiter Mission (MOM)[22][23][24] is one of the least expensive interplanetary missions ever undertaken, with an approximate total cost of ₹450 crore (US$73 million).[25][26] The first mission to Mars by any Arab country has been taken up by the United Arab Emirates. Called the Emirates Mars Mission, it is scheduled for launch in 2020. The uncrewed exploratory probe has been named “Hope Probe” and will be sent to Mars to study its atmosphere in detail.[27]

The Russian space mission Fobos-Grunt, which launched on 9 November 2011, experienced a failure that left it stranded in low Earth orbit.[28] It was to explore Phobos and Martian orbit, and to study whether the moons of Mars, or at least Phobos, could serve as a “trans-shipment point” for spaceships traveling to Mars.[29]

The exploration of Jupiter has consisted solely of a number of automated NASA spacecraft visiting the planet since 1973. A large majority of the missions have been “flybys”, in which detailed observations are taken without the probe landing or entering orbit, as in the Pioneer and Voyager programs. The Galileo and Juno spacecraft are the only ones to have entered orbit around the planet. As Jupiter is believed to have only a relatively small rocky core and no real solid surface, a landing mission is nearly impossible.

Reaching Jupiter from Earth requires a delta-v of 9.2 km/s,[30] which is comparable to the 9.7 km/s delta-v needed to reach low Earth orbit.[31] Fortunately, gravity assists through planetary flybys can be used to reduce the energy required at launch to reach Jupiter, albeit at the cost of a significantly longer flight duration.[30]
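The benefit of a gravity assist can be sketched from the geometry of a hyperbolic flyby: the encounter rotates the spacecraft’s excess-velocity vector relative to the planet, changing its heliocentric velocity at no propellant cost. A minimal sketch using the standard turning-angle relation, with illustrative (not mission-specific) numbers:

```python
import math

MU_JUPITER = 1.267e17  # Jupiter's gravitational parameter, m^3/s^2

def flyby_dv(v_inf, r_p, mu=MU_JUPITER):
    """Magnitude of the heliocentric velocity change (m/s) from an
    unpowered flyby.

    v_inf : hyperbolic excess speed relative to the planet (m/s)
    r_p   : periapsis radius of the flyby (m)

    The flyby rotates the v_inf vector by the turning angle delta,
    where sin(delta/2) = 1/e and e = 1 + r_p * v_inf**2 / mu, so the
    resulting velocity change is 2 * v_inf * sin(delta/2) = 2 * v_inf / e.
    """
    e = 1 + r_p * v_inf**2 / mu  # eccentricity of the flyby hyperbola
    return 2 * v_inf / e

# Illustrative flyby: 6 km/s excess speed, periapsis at roughly 2.8 Jupiter radii
dv = flyby_dv(v_inf=6000.0, r_p=2.0e8)
print(f"flyby delta-v: {dv/1000:.1f} km/s")  # on the order of 11 km/s, "free"
```

A deeper periapsis bends the trajectory more and so yields a larger velocity change, which is why flyby geometry is a central design variable in missions like Voyager and Galileo.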

Jupiter has 69 known moons, many of which are still relatively poorly characterized.

Saturn has been explored only through uncrewed spacecraft launched by NASA, including one mission (Cassini–Huygens) planned and executed in cooperation with other space agencies. These missions consist of flybys in 1979 by Pioneer 11, in 1980 by Voyager 1, and in 1982 by Voyager 2, and an orbital mission by the Cassini spacecraft, which lasted from 2004 until 2017.

Saturn has at least 62 known moons, although the exact number is debatable since Saturn’s rings are made up of vast numbers of independently orbiting objects of varying sizes. The largest of the moons is Titan, which holds the distinction of being the only moon in the Solar System with an atmosphere denser and thicker than that of Earth. Titan holds the distinction of being the only object in the Outer Solar System that has been explored with a lander, the Huygens probe deployed by the Cassini spacecraft.

The exploration of Uranus has been entirely through the Voyager 2 spacecraft, with no other visits currently planned. Given its axial tilt of 97.77°, with its polar regions exposed to sunlight or darkness for long periods, scientists were not sure what to expect at Uranus. The closest approach to Uranus occurred on 24 January 1986. Voyager 2 studied the planet’s unique atmosphere and magnetosphere. Voyager 2 also examined its ring system and the moons of Uranus, including all five of the previously known moons, while discovering an additional ten previously unknown moons.

Images of Uranus proved to have a very uniform appearance, with no evidence of the dramatic storms or atmospheric banding evident on Jupiter and Saturn. Great effort was required to even identify a few clouds in the images of the planet. The magnetosphere of Uranus, however, proved to be unique, being profoundly affected by the planet’s unusual axial tilt. In contrast to the bland appearance of Uranus itself, striking images were obtained of the moons of Uranus, including evidence that Miranda had been unusually geologically active.

The exploration of Neptune began with the 25 August 1989 Voyager 2 flyby, the sole visit to the system as of 2014. The possibility of a Neptune Orbiter has been discussed, but no other missions have been given serious thought.

Although the extremely uniform appearance of Uranus during Voyager 2’s visit in 1986 had led to expectations that Neptune would also have few visible atmospheric phenomena, the spacecraft found that Neptune had obvious banding, visible clouds, auroras, and even a conspicuous anticyclone storm system rivaled in size only by Jupiter’s Great Red Spot. Neptune also proved to have the fastest winds of any planet in the Solar System, measured as high as 2,100 km/h.[32] Voyager 2 also examined Neptune’s ring and moon system. It discovered complete rings and additional partial ring “arcs” around Neptune. In addition to examining Neptune’s two previously known moons, Voyager 2 also discovered six previously unknown moons, one of which, Proteus, proved to be the second-largest moon in the system. Data from Voyager 2 supported the view that Neptune’s largest moon, Triton, is a captured Kuiper belt object.[33]

The dwarf planet Pluto presents significant challenges for spacecraft because of its great distance from Earth (requiring high velocity for reasonable trip times) and small mass (making capture into orbit very difficult at present). Voyager 1 could have visited Pluto, but controllers opted instead for a close flyby of Saturn’s moon Titan, resulting in a trajectory incompatible with a Pluto flyby. Voyager 2 never had a plausible trajectory for reaching Pluto.[34]

Pluto continues to be of great interest, despite its reclassification as the lead and nearest member of a new and growing class of distant icy bodies of intermediate size (and also the first member of the important subclass, defined by orbit and known as “plutinos”). After an intense political battle, a mission to Pluto dubbed New Horizons was granted funding from the United States government in 2003.[35] New Horizons was launched successfully on 19 January 2006. In early 2007 the craft made use of a gravity assist from Jupiter. Its closest approach to Pluto was on 14 July 2015; scientific observations of Pluto began five months prior to closest approach and continued for 16 days after the encounter.

Until the advent of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes, their shapes and terrain remaining a mystery. Several asteroids have now been visited by probes, the first of which was Galileo, which flew past two: 951 Gaspra in 1991, followed by 243 Ida in 1993. Both of these lay near enough to Galileo’s planned trajectory to Jupiter that they could be visited at acceptable cost. The first landing on an asteroid was performed by the NEAR Shoemaker probe in 2000, following an orbital survey of the object. The dwarf planet Ceres and the asteroid 4 Vesta, two of the three largest asteroids, were visited by NASA’s Dawn spacecraft, launched in 2007.

Although many comets have been studied from Earth, sometimes with centuries’ worth of observations, only a few comets have been closely visited. In 1985, the International Cometary Explorer conducted the first comet fly-by (21P/Giacobini-Zinner) before joining the Halley Armada studying the famous comet. The Deep Impact probe smashed into 9P/Tempel to learn more about its structure and composition, and the Stardust mission returned samples of another comet’s tail. The Philae lander successfully landed on Comet Churyumov–Gerasimenko in 2014 as part of the broader Rosetta mission.

Hayabusa was an unmanned spacecraft developed by the Japan Aerospace Exploration Agency to return a sample of material from the small near-Earth asteroid 25143 Itokawa to Earth for further analysis. Hayabusa was launched on 9 May 2003 and rendezvoused with Itokawa in mid-September 2005. After arriving at Itokawa, Hayabusa studied the asteroid’s shape, spin, topography, color, composition, density, and history. In November 2005, it landed on the asteroid to collect samples. The spacecraft returned to Earth on 13 June 2010.

Deep space exploration is the branch of astronomy, astronautics and space technology that is involved with the exploration of distant regions of outer space.[36] Physical exploration of space is conducted both by human spaceflights (deep-space astronautics) and by robotic spacecraft.

Some of the best candidates for future deep space engine technologies include anti-matter, nuclear power and beamed propulsion.[37] The latter, beamed propulsion, appears to be the best candidate for deep space exploration presently available, since it uses known physics and known technology that is being developed for other purposes.[38]

In the 2000s, several plans for space exploration were announced; both government entities and the private sector have space exploration objectives. China has announced plans to have a 60-ton multi-module space station in orbit by 2020.

The NASA Authorization Act of 2010 provided a re-prioritized list of objectives for the American space program, as well as funding for the first priorities. NASA proposes to move forward with the development of the Space Launch System (SLS), which will be designed to carry the Orion Multi-Purpose Crew Vehicle, as well as important cargo, equipment, and science experiments to Earth’s orbit and destinations beyond. Additionally, the SLS will serve as a backup for commercial and international partner transportation services to the International Space Station. The SLS rocket will incorporate technological investments from the Space Shuttle program and the Constellation program in order to take advantage of proven hardware and reduce development and operations costs. The first developmental flight is targeted for the end of 2017.[39]

The idea of using highly automated systems for space missions has become a desirable goal for space agencies around the world. Such systems are believed to yield benefits such as lower cost, less human oversight, and the ability to explore deeper into space, which is usually restricted by long communication delays with human controllers.[40]

Autonomy in this context is defined by three requirements.[40]

Autonomous technologies would be able to perform beyond predetermined actions. They would analyze all possible states and events happening around them and come up with a safe response. In addition, such technologies can reduce launch cost and ground involvement. Performance would increase as well. Autonomy would be able to quickly respond upon encountering an unforeseen event, especially in deep space exploration where communication back to Earth would take too long.[40]

NASA began its Autonomous Sciencecraft Experiment (ASE) on Earth Observing-1 (EO-1), the first satellite in the Earth-observing series of NASA’s New Millennium Program, launched on 21 November 2000. The autonomy of ASE is capable of on-board science analysis, replanning, and robust execution, with model-based diagnostics added later. Images obtained by EO-1 are analyzed on board and downlinked when a change or an interesting event occurs. The ASE software has successfully provided over 10,000 science images.[40]

An article in science magazine Nature suggested the use of asteroids as a gateway for space exploration, with the ultimate destination being Mars.[41] In order to make such an approach viable, three requirements need to be fulfilled: first, “a thorough asteroid survey to find thousands of nearby bodies suitable for astronauts to visit”; second, “extending flight duration and distance capability to ever-increasing ranges out to Mars”; and finally, “developing better robotic vehicles and tools to enable astronauts to explore an asteroid regardless of its size, shape or spin.”[41] Furthermore, using asteroids would provide astronauts with protection from galactic cosmic rays, with mission crews being able to land on them in times of greater risk to radiation exposure.[42]

The research that is conducted by national space exploration agencies, such as NASA and Roscosmos, is one of the reasons supporters cite to justify government expenditure. Economic analyses of NASA programs have often shown ongoing economic benefits (such as NASA spin-offs), generating revenue many times the cost of the program.[43] It is also argued that space exploration would lead to the extraction of resources on other planets and especially asteroids, which contain billions of dollars’ worth of minerals and metals. Such expeditions could generate substantial revenue.[44] It has likewise been argued that space exploration programs help inspire youth to study science and engineering.[45]

Another claim is that space exploration is a necessity for mankind and that staying on Earth will lead to extinction. Cited risks include the depletion of natural resources, comet impacts, nuclear war, and worldwide epidemics. Stephen Hawking, the renowned British theoretical physicist, said: “I don’t think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I’m an optimist. We will reach out to the stars.”[46]

NASA has produced a series of public service announcement videos supporting the concept of space exploration.[47]

Overall, the public remains largely supportive of both crewed and uncrewed space exploration. According to an Associated Press Poll conducted in July 2003, 71% of U.S. citizens agreed with the statement that the space program is “a good investment”, compared to 21% who did not.[48]

Arthur C. Clarke (1950) presented a summary of motivations for the human exploration of space in his non-fiction semi-technical monograph Interplanetary Flight.[49] He argued that humanity’s choice is essentially between expansion off Earth into space, versus cultural (and eventually biological) stagnation and death.

Spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space.

Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.

A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of Earth. Once in space, the motion of a spacecraft, both when unpropelled and when under propulsion, is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.
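Whether a spacecraft remains in orbit, falls back, or escapes altogether is set by its speed relative to two thresholds at a given altitude: circular-orbit speed and escape speed. A minimal sketch of the two, with rounded Earth constants and an illustrative, roughly ISS-like altitude of 400 km:

```python
import math

MU_EARTH = 3.986e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m

def circular_speed(alt):
    """Speed (m/s) of a circular orbit at altitude alt (m)."""
    return math.sqrt(MU_EARTH / (R_EARTH + alt))

def escape_speed(alt):
    """Speed (m/s) needed to escape Earth from altitude alt (m)."""
    return math.sqrt(2 * MU_EARTH / (R_EARTH + alt))

v_circ = circular_speed(400e3)  # sustained orbit at 400 km
v_esc = escape_speed(400e3)     # departure from 400 km
print(f"circular: {v_circ:.0f} m/s, escape: {v_esc:.0f} m/s")
```

Note that at any given radius the escape speed is exactly the square root of 2 times the circular speed, so "staying in space indefinitely" and "leaving for good" are separated by only about 41% more velocity.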

Satellites are used for a large number of purposes. Common types include military (spy) and civilian Earth observation satellites, communication satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites.

Current examples of the commercial use of space include satellite navigation systems, satellite television and satellite radio. Space tourism is the recent phenomenon of space travel by individuals for the purpose of personal pleasure.

Private spaceflight companies such as SpaceX and Blue Origin, and commercial space station ventures such as Axiom Space and the Bigelow Commercial Space Station, have dramatically changed the landscape of space exploration, and will continue to do so in the near future.

Astrobiology is the interdisciplinary study of life in the universe, combining aspects of astronomy, biology and geology.[50] It is focused primarily on the study of the origin, distribution and evolution of life. It is also known as exobiology (from the Greek exo, “outside”).[51][52][53] The term “xenobiology” has been used as well, but this is technically incorrect because it means “biology of the foreigners”.[54] Astrobiologists must also consider the possibility of life that is chemically entirely distinct from any life found on Earth.[55] In the Solar System some of the prime locations for current or past astrobiology are on Enceladus, Europa, Mars, and Titan.

Space colonization, also called space settlement and space humanization, would be the permanent autonomous (self-sufficient) human habitation of locations outside Earth, especially of natural satellites or planets such as the Moon or Mars, using significant amounts of in-situ resource utilization.

To date, the longest human occupation of space is the International Space Station, which has been in continuous use for 17 years and 231 days. Valeri Polyakov’s record single spaceflight of almost 438 days aboard the Mir space station has not been surpassed. Long-term stays in space reveal issues with bone and muscle loss in low gravity, immune system suppression, and radiation exposure.

Many past and current concepts for the continued exploration and colonization of space focus on a return to the Moon as a “stepping stone” to the other planets, especially Mars. At the end of 2006 NASA announced they were planning to build a permanent Moon base with continual presence by 2024.[57]

Beyond the technical factors that could make living in space more widespread, it has been suggested that the lack of private property, the inability or difficulty in establishing property rights in space, has been an impediment to the development of space for human habitation. Since the advent of space technology in the latter half of the twentieth century, the ownership of property in space has been murky, with strong arguments both for and against. In particular, the making of national territorial claims in outer space and on celestial bodies has been specifically proscribed by the Outer Space Treaty, which had been, as of 2012[update], ratified by all spacefaring nations.[58]


How Long Would It Take To Travel To The Nearest Star …

We’ve all asked this question at some point in our lives: How long would it take to travel to the stars? Could it be within a person’s own lifetime, and could this kind of travel become the norm someday? There are many possible answers to this question, some very simple, others in the realms of science fiction. But coming up with a comprehensive answer means taking a lot of things into consideration.

Unfortunately, any realistic assessment is likely to produce answers that would totally discourage futurists and enthusiasts of interstellar travel. Like it or not, space is very large, and our technology is still very limited. But should we ever contemplate leaving the nest, we will have a range of options for getting to the nearest star systems in our galaxy.

The nearest star to Earth is our Sun, which is a fairly average star in the Hertzsprung-Russell Diagram’s Main Sequence. This means that it is highly stable, providing Earth with just the right type of sunlight for life to evolve on our planet. We know there are planets orbiting other stars near to our Solar System, and many of these stars are similar to our own.

In the future, should mankind wish to leave the Solar System, we’ll have a huge choice of stars we could travel to, and many could have the right conditions for life to thrive. But where would we go, and how long would it take for us to get there? Just remember, this is all speculative and there is currently no benchmark for interstellar trips. That being said, here we go!

Over 2000 exoplanets have been identified, many of which are believed to be habitable. Credit: phl.upl.edu

As already noted, the closest star to our Solar System is Proxima Centauri, which is why it makes the most sense to plot an interstellar mission to this system first. As part of a triple star system called Alpha Centauri, Proxima is about 4.24 light years (or 1.3 parsecs) from Earth. Alpha Centauri is actually the brightest star of the three in the system (part of a closely orbiting binary 4.37 light years from Earth), whereas Proxima Centauri (the dimmest of the three) is an isolated red dwarf about 0.13 light years from the binary.

And while interstellar travel conjures up all kinds of visions of Faster-Than-Light (FTL) travel, ranging from warp speed and wormholes to jump drives, such theories are either highly speculative (such as the Alcubierre Drive) or entirely the province of science fiction. In all likelihood, any deep space mission will take generations to arrive, rather than a few days or an instantaneous flash.

So, starting with one of the slowest forms of space travel, how long will it take to get to Proxima Centauri?

The question of how long it would take to get somewhere in space is somewhat easier when dealing with existing technology and bodies within our Solar System. For instance, using the technology that powered the New Horizons mission (which consisted of 16 thrusters fueled with hydrazine monopropellant), reaching the Moon would take a mere 8 hours and 35 minutes.

On the other hand, there is the European Space Agency’s (ESA) SMART-1 mission, which took its time traveling to the Moon using ion propulsion. With this revolutionary technology, a variation of which has since been used by the Dawn spacecraft to reach Vesta, the SMART-1 mission took one year, one month and two weeks to reach the Moon.

So, from the speedy rocket-propelled spacecraft to the economical ion drive, we have a few options for getting around local space, plus we could use Jupiter or Saturn for a hefty gravitational slingshot. However, if we were to contemplate missions to somewhere a little more out of the way, we would have to scale up our technology and look at what’s really possible.

When we say possible methods, we are talking about those that involve existing technology, or those that do not yet exist but are technically feasible. Some, as you will see, are time-honored and proven, while others are emerging or still on the drawing board. In just about all cases though, they present a possible, but extremely time-consuming or expensive, scenario for getting to even the closest stars.

Ionic Propulsion: Currently, the slowest form of propulsion, and the most fuel-efficient, is the ion engine. A few decades ago, ionic propulsion was considered to be the subject of science fiction. However, in recent years, the technology to support ion engines has moved from theory to practice in a big way. The ESA’s SMART-1 mission, for example, successfully completed its mission to the Moon after taking a 13-month spiral path from the Earth.

SMART-1 used solar powered ion thrusters, where electrical energy was harvested from its solar panels and used to power its Hall-effect thrusters. Only 82 kg of xenon propellant was used to propel SMART-1 to the Moon. 1 kg of xenon propellant provided a delta-v of 45 m/s. This is a highly efficient form of propulsion, but it is by no means fast.
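Those two quoted figures can be cross-checked with a one-line calculation. This is only a back-of-the-envelope sketch using the numbers above, not mission data:

```python
# Rough cross-check of SMART-1's quoted propellant performance:
# 82 kg of xenon at ~45 m/s of delta-v per kilogram.
xenon_kg = 82
delta_v_per_kg = 45  # m/s per kg, the figure quoted above

total_delta_v = xenon_kg * delta_v_per_kg
print(f"Total delta-v: {total_delta_v} m/s ({total_delta_v / 1000:.2f} km/s)")
```

That works out to roughly 3.7 km/s of total delta-v by these quoted figures, which is the right order of magnitude for the slow spiral transfer to lunar orbit described above.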

Artist’s concept of the Dawn mission above Ceres. Since its arrival, the spacecraft turned around to point the blue glow of its ion engine in the opposite direction. Image credit: NASA/JPL

One of the first missions to use ion drive technology was Deep Space 1, launched in 1998, which went on to fly by Comet Borrelly. DS1 also used a xenon-powered ion drive, consuming 81.5 kg of propellant. Over 20 months of thrusting, DS1 managed to reach a velocity of 56,000 km/hr (35,000 miles/hr) during its flyby of the comet.

Ion thrusters are therefore more economical than rocket technology, as the impulse delivered per unit mass of propellant (a.k.a. specific impulse) is far higher. But it takes a long time for ion thrusters to accelerate a spacecraft to any great speed, and the maximum velocity it can achieve depends on its fuel supply and how much electrical energy it can generate.

So if ionic propulsion were to be used for a mission to Proxima Centauri, the thrusters would need a huge source of energy production (i.e. nuclear power) and a large quantity of propellant (although still less than conventional rockets). But based on the assumption that a supply of 81.5 kg of xenon propellant translates into a maximum velocity of 56,000 km/hr (and that there are no other forms of propulsion available, such as a gravitational slingshot to accelerate it further), some calculations can be made.

In short, at a maximum velocity of 56,000 km/h, Deep Space 1 would take over 81,000 years to traverse the 4.24 light years between Earth and Proxima Centauri. To put that time-scale into perspective, that would be over 2,700 human generations. So it is safe to say that an interplanetary ion engine mission would be far too slow to be considered for a manned interstellar mission.
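The 81,000-year figure is easy to reproduce. The sketch below assumes a constant 56,000 km/h the whole way and ignores acceleration and deceleration:

```python
# Constant-speed travel time from Earth to Proxima Centauri at DS1's flyby velocity.
LIGHT_YEAR_KM = 9.461e12        # kilometres per light year
distance_km = 4.24 * LIGHT_YEAR_KM
speed_kmh = 56_000

years = distance_km / speed_kmh / (24 * 365.25)
print(f"Travel time: {years:,.0f} years")   # over 81,000 years
```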

Ionic propulsion is currently the slowest, but most fuel-efficient, form of space travel. Credit: NASA/JPL

But should ion thrusters be made larger and more powerful (i.e. the ion exhaust velocity would need to be significantly higher), and enough propellant hauled to keep the spacecraft going for the entire 4.24 light-year trip, that travel time could be greatly reduced. Still not enough to happen in someone’s lifetime, though.

Gravity Assist Method: The fastest existing means of space travel is known as the gravity assist method, which involves a spacecraft using the relative movement (i.e. orbit) and gravity of a planet to alter its path and speed. Gravitational assists are a very useful spaceflight technique, especially when using the Earth or another massive planet (like a gas giant) for a boost in velocity.

The Mariner 10 spacecraft was the first to use this method, using Venus’ gravitational pull to slingshot it towards Mercury in February of 1974. The Voyager 1 probe later used gravitational slingshots past Jupiter and Saturn to attain its current velocity of 60,000 km/hr (38,000 miles/hr) and make it into interstellar space.

However, it is the Helios 2 mission, launched in 1976 to study the interplanetary medium from 0.3 AU to 1 AU from the Sun, that holds the record for the highest speed achieved with a gravity assist. At the time, Helios 1 (launched in 1974) and Helios 2 held the record for closest approach to the Sun. Helios 2 was launched by a conventional NASA Titan/Centaur launch vehicle and placed in a highly elliptical orbit.

A Helios probe being encapsulated for launch. Credit: Public Domain

Due to the large eccentricity (0.54) of its 190-day solar orbit, at perihelion Helios 2 was able to reach a maximum velocity of over 240,000 km/hr (150,000 miles/hr). This orbital speed was attained by the gravitational pull of the Sun alone. Technically, the Helios 2 perihelion velocity was not a gravitational slingshot but a maximum orbital velocity, yet it still holds the record for the fastest man-made object regardless.

So, if Voyager 1 were traveling in the direction of the red dwarf Proxima Centauri at a constant velocity of 60,000 km/hr, it would take 76,000 years (or over 2,500 generations) to travel that distance. But if it could attain the record-breaking speed of Helios 2’s close approach of the Sun (a constant speed of 240,000 km/hr), it would take 19,000 years (or over 600 generations) to travel 4.24 light years. Significantly better, but still not in the realm of practicality.
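Both figures follow from the same constant-speed arithmetic used for the ion-drive case:

```python
# Constant-speed travel times to Proxima Centauri at the two quoted velocities.
LIGHT_YEAR_KM = 9.461e12
distance_km = 4.24 * LIGHT_YEAR_KM
hours_per_year = 24 * 365.25

for probe, speed_kmh in [("Voyager 1", 60_000), ("Helios 2", 240_000)]:
    years = distance_km / speed_kmh / hours_per_year
    print(f"{probe} at {speed_kmh:,} km/h: {years:,.0f} years")
```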

Electromagnetic (EM) Drive: Another proposed method of interstellar travel comes in the form of the Radio Frequency (RF) Resonant Cavity Thruster, also known as the EM Drive. Originally proposed in 2001 by Roger K. Shawyer, a UK scientist who started Satellite Propulsion Research Ltd (SPR) to bring it to fruition, this drive is built around the idea that electromagnetic microwave cavities can allow for the direct conversion of electrical energy to thrust.

Whereas conventional electromagnetic thrusters are designed to propel a certain type of mass (such as ionized particles), this particular drive system relies on no reaction mass and emits no directional radiation. Such a proposal has met with a great deal of skepticism, mainly because it violates the law of Conservation of Momentum which states that within a system, the amount of momentum remains constant and is neither created nor destroyed, but only changes through the action of forces.

The EM Drive prototype produced by NASA/Eagleworks. Credit: NASA Spaceflight Forum

However, recent experiments with the technology have apparently yielded positive results. In July of 2014, at the 50th AIAA/ASME/SAE/ASEE Joint Propulsion Conference in Cleveland, Ohio, researchers from NASA’s advanced propulsion research group claimed that they had successfully tested a new design for an electromagnetic propulsion drive.

This was followed up in April of 2015, when researchers at NASA Eagleworks (part of the Johnson Space Center) claimed that they had successfully tested the drive in a vacuum, an indication that it might actually work in space. In July of that same year, a research team from the Dresden University of Technology’s Space Systems department built their own version of the engine and observed a detectable thrust.

And as far back as 2010, Prof. Juan Yang of the Northwestern Polytechnical University in Xi’an, China, began publishing a series of papers about her research into EM Drive technology. This culminated in her 2012 paper, where she reported higher input power (2.5 kW) and tested thrust (720 mN) levels. In 2014, she further reported extensive tests involving internal temperature measurements with embedded thermocouples, which seemed to confirm that the system worked.

Artists concept of an interstellar craft equipped with an EM Drive. Credit: NASA Spaceflight Center

According to calculations based on the NASA prototype (which yielded a thrust-to-power estimate of 0.4 N per kilowatt), a spacecraft equipped with the EM drive could make the trip to Pluto in less than 18 months. That’s one-sixth the time it took for the New Horizons probe to get there, traveling at speeds of close to 58,000 km/h (36,000 mph).

Sounds impressive. But even at that rate, it would take a ship equipped with EM engines over 13,000 years to make it to Proxima Centauri. Getting closer, but not quickly enough! And until such time that the technology can be definitively proven to work, it doesn’t make much sense to put our eggs into this basket.
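A crude constant-speed extrapolation shows why even this is not quick enough. The sketch below assumes a Pluto distance of about 34 AU (an assumption, since the article does not state one) and scales the 18-month trip time up to interstellar distance; the result lands in the same ten-thousand-year ballpark as the figure above, with the gap coming from the assumed distance and the neglect of acceleration:

```python
# Implied cruise speed for a 34 AU trip in 18 months, extrapolated to Proxima.
AU_KM = 1.496e8
LIGHT_YEAR_KM = 9.461e12

pluto_km = 34 * AU_KM            # assumed Earth-Pluto distance
trip_hours = 18 * 30.44 * 24     # 18 months of average length
speed_kmh = pluto_km / trip_hours

proxima_km = 4.24 * LIGHT_YEAR_KM
years = proxima_km / speed_kmh / (24 * 365.25)
print(f"Implied speed: {speed_kmh:,.0f} km/h -> {years:,.0f} years to Proxima")
```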

Nuclear Thermal and Nuclear Electric Propulsion (NTP/NEP): Another possibility for interstellar space flight is to use spacecraft equipped with nuclear engines, a concept which NASA has been exploring for decades. In a Nuclear Thermal Propulsion (NTP) rocket, uranium or deuterium reactions are used to heat liquid hydrogen inside a reactor, turning it into ionized hydrogen gas (plasma), which is then channeled through a rocket nozzle to generate thrust.

A Nuclear Electric Propulsion (NEP) rocket involves the same basic reactor converting its heat and energy into electrical energy, which would then power an electrical engine. In both cases, the rocket would rely on nuclear fission or fusion to generate propulsion rather than chemical propellants, which have been the mainstay of NASA and all other space agencies to date.

Artist’s impression of a Crew Transfer Vehicle (CTV) using its nuclear-thermal rocket engines to slow down and establish orbit around Mars. Credit: NASA

Compared to chemical propulsion, both NTP and NEP offer a number of advantages. The first and most obvious is the virtually unlimited energy density they offer compared to rocket fuel. In addition, a nuclear-powered engine could also provide superior thrust relative to the amount of propellant used. This would cut the total amount of propellant needed, thus cutting launch weight and the cost of individual missions.

Although no nuclear-thermal engines have ever flown, several design concepts have been built and tested over the past few decades, and numerous concepts have been proposed. These have ranged from the traditional solid-core design such as the Nuclear Engine for Rocket Vehicle Application (NERVA) to more advanced and efficient concepts that rely on either a liquid or a gas core.

However, despite these advantages in fuel efficiency and specific impulse, the most sophisticated NTP concept has a maximum specific impulse of 5000 seconds (50 kN·s/kg). Using nuclear engines driven by fission or fusion, NASA scientists estimate it could take a spaceship only 90 days to get to Mars when the planet was at opposition, i.e. as close as 55,000,000 km from Earth.
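For a sense of scale, the average speed implied by that 90-day, 55-million-km transit is easy to work out:

```python
# Implied average speed of a 90-day Mars transit at a close opposition.
distance_km = 55_000_000   # Earth-Mars distance quoted above
days = 90

avg_speed_kmh = distance_km / (days * 24)
print(f"Average speed: {avg_speed_kmh:,.0f} km/h")   # ~25,500 km/h
```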

But adjusted for a one-way journey to Proxima Centauri, a nuclear rocket would still take centuries to accelerate to the point where it was flying at a fraction of the speed of light. It would then require several decades of travel time, followed by many more centuries of deceleration, before arriving. All told, we’re still talking about 1,000 years before it reaches its destination. Good for interplanetary missions, not so good for interstellar ones.

Using existing technology, the time it would take to send scientists and astronauts on an interstellar mission would be prohibitively long. If we want to make that journey within a single lifetime, or even a generation, something a bit more radical (a.k.a. highly theoretical) will be needed. And while wormholes and jump engines may still be pure fiction at this point, there are some rather advanced ideas that have been considered over the years.

Nuclear Pulse Propulsion: Nuclear pulse propulsion is a theoretically possible form of fast space travel. The concept was originally proposed in 1946 by Stanislaw Ulam, a Polish-American mathematician who participated in the Manhattan Project, and preliminary calculations were then made by F. Reines and Ulam in 1947. The actual project, known as Project Orion, was initiated in 1958 and lasted until 1963.

The Project Orion concept for a nuclear-powered spacecraft. Credit: silodrome.co

Led by Ted Taylor at General Atomics and physicist Freeman Dyson from the Institute for Advanced Study in Princeton, Orion hoped to harness the power of pulsed nuclear explosions to provide a huge thrust with very high specific impulse (i.e. the amount of thrust compared to weight, or the number of seconds the rocket can fire continuously).

In a nutshell, the Orion design involves a large spacecraft carrying a large supply of thermonuclear warheads, achieving propulsion by releasing a bomb behind it and then riding the detonation wave with the help of a rear-mounted pad called a pusher. After each blast, the explosive force is absorbed by this pusher pad, which translates the thrust into forward momentum.

Though hardly elegant by modern standards, the advantage of the design is that it achieves a high specific impulse, meaning it extracts the maximum amount of energy from its fuel source (in this case, nuclear bombs) at minimal cost. In addition, the concept could theoretically achieve very high speeds, with some estimates suggesting a ballpark figure as high as 5% the speed of light (or 5.4×10⁷ km/hr).
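At that ballpark top speed, the trip to Proxima Centauri falls within a (long) human lifetime. The sketch below ignores acceleration and deceleration:

```python
# Cruise time to Proxima Centauri at Orion's estimated top speed of 5% of c.
distance_ly = 4.24        # Earth to Proxima Centauri, light years
speed_fraction_c = 0.05   # 5% of the speed of light

# Light years divided by a fraction of c gives the time directly in years.
years = distance_ly / speed_fraction_c
print(f"Cruise time at 0.05 c: {years:.0f} years")   # ~85 years
```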

But of course, there are inevitable downsides to the design. For one, a ship of this size would be incredibly expensive to build. According to estimates produced by Dyson in 1968, an Orion spacecraft that used hydrogen bombs to generate propulsion would weigh 400,000 to 4,000,000 metric tons. And at least three quarters of that weight would consist of nuclear bombs, each warhead weighing approximately 1 metric ton.

Artist’s concept of an Orion spacecraft leaving Earth. Credit: bisbos.com/Adrian Mann

All told, Dyson’s most conservative estimates placed the total cost of building an Orion craft at 367 billion dollars. Adjusted for inflation, that works out to roughly $2.5 trillion, which accounts for over two thirds of the US government’s current annual revenue. Hence, even at its lightest, the craft would be extremely expensive to manufacture.

There’s also the slight problem of all the radiation it generates, not to mention the nuclear waste. In fact, it is for this reason that the project is believed to have been terminated, owing to the passage of the Partial Test Ban Treaty of 1963, which sought to limit nuclear testing and stop the excessive release of nuclear fallout into the planet’s atmosphere.

Fusion Rockets: Another possibility within the realm of harnessed nuclear power involves rockets that rely on thermonuclear reactions to generate thrust. For this concept, energy is created when pellets of a deuterium/helium-3 mix are ignited in a reaction chamber by inertial confinement using electron beams (similar to what is done at the National Ignition Facility in California). This fusion reactor would detonate 250 pellets per second to create high-energy plasma, which would then be directed by a magnetic nozzle to create thrust.

Like a rocket that relies on a nuclear reactor, this concept offers advantages as far as fuel efficiency and specific impulse are concerned. Exhaust velocities of up to 10,600 km/s are estimated, which is far beyond the speed of conventional rockets. What’s more, the technology has been studied extensively over the past few decades, and many proposals have been made.

Artist’s concept of the Daedalus spacecraft, a two-stage fusion rocket that would achieve up to 12% the speed of light. Credit: Adrian Mann

For example, between 1973 and 1978, the British Interplanetary Society conducted a feasibility study known as Project Daedalus. Relying on then-current knowledge of fusion technology and existing methods, the study called for the creation of a two-stage unmanned scientific probe making a trip to Barnard’s Star (5.9 light years from Earth) in a single lifetime.

The first stage, the larger of the two, would operate for 2.05 years and accelerate the spacecraft to 7.1% the speed of light (0.071 c). This stage would then be jettisoned, at which point the second stage would ignite its engine and accelerate the spacecraft up to about 12% of light speed (0.12 c) over the course of 1.8 years. The second-stage engine would then be shut down and the ship would enter into a 46-year cruise period.

According to the Project’s estimates, the mission would take 50 years to reach Barnard’s Star. Adjusted for Proxima Centauri, the same craft could make the trip in 36 years. But of course, the project also identified numerous stumbling blocks that made it unfeasible using then-current technology, most of which are still unresolved.
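The quoted flight profile can be sanity-checked by summing its three phases, assuming constant acceleration during each burn (so the average speed is the midpoint of the start and end speeds):

```python
# Sanity check of the Daedalus flight profile described above.
# (duration in years, start speed, end speed), speeds as fractions of c
burns = [(2.05, 0.0, 0.071), (1.8, 0.071, 0.12)]
cruise_years, cruise_speed = 46.0, 0.12

distance_ly = sum(t * (v0 + v1) / 2 for t, v0, v1 in burns)
distance_ly += cruise_years * cruise_speed
total_years = sum(t for t, _, _ in burns) + cruise_years

print(f"Total flight time: {total_years:.1f} years")       # ~49.9 years
print(f"Distance covered:  {distance_ly:.2f} light years")  # ~5.76 ly
```

The total comes to just under 50 years and about 5.76 light years covered, close to Barnard’s Star’s 5.9 light years, which is consistent with a flyby profile (Daedalus was never meant to decelerate).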

For instance, there is the fact that helium-3 is scarce on Earth, which means it would have to be mined elsewhere (most likely on the Moon). Second, the reaction that drives the spacecraft requires that the energy released vastly exceed the energy used to trigger the reaction. And while experiments here on Earth have surpassed the break-even goal, we are still a long way from the kinds of energy needed to power an interstellar spaceship.

Artist’s concept of the Project Daedalus spacecraft, with a Saturn V rocket standing next to it for scale. Credit: Adrian Mann

Third, there is the cost of constructing such a ship. Even by the modest standard of Project Daedalus’ unmanned craft, a fully fueled vessel would weigh as much as 60,000 Mt. To put that in perspective, the gross weight of NASA’s SLS is just over 30 Mt, and a single launch comes with a price tag of $5 billion (based on estimates made in 2013).

In short, a fusion rocket would not only be prohibitively expensive to build, it would require a level of fusion reactor technology that is currently beyond our means. Icarus Interstellar, an international organization of volunteer citizen scientists (some of whom worked for NASA or the ESA), has since attempted to revitalize the concept with Project Icarus. Founded in 2009, the group hopes to make fusion propulsion (among other things) feasible in the near future.

Fusion Ramjet: Also known as the Bussard Ramjet, this theoretical form of propulsion was first proposed by physicist Robert W. Bussard in 1960. Basically, it is an improvement over the standard nuclear fusion rocket, which uses magnetic fields to compress hydrogen fuel to the point that fusion occurs. But in the Ramjet’s case, an enormous electromagnetic funnel scoops hydrogen from the interstellar medium and dumps it into the reactor as fuel.

Artist’s concept of the Bussard Ramjet, which would harness hydrogen from the interstellar medium to power its fusion engines. Credit: futurespacetransportation.weebly.com

As the ship picks up speed, the reaction mass is forced into a progressively constricted magnetic field, compressing it until thermonuclear fusion occurs. The magnetic field then directs the energy as rocket exhaust through an engine nozzle, thereby accelerating the vessel. Without any fuel tanks to weigh it down, a fusion ramjet could achieve speeds approaching 4% of the speed of light and travel anywhere in the galaxy.

However, the potential drawbacks of this design are numerous. For instance, there is the problem of drag. The ship relies on increased speed to accumulate fuel, but as it collides with more and more interstellar hydrogen, it may also lose speed, especially in denser regions of the galaxy. Second, deuterium and tritium (used in fusion reactors here on Earth) are rare in space, whereas fusing regular hydrogen (which is plentiful in space) is beyond our current methods.

This concept has been popularized extensively in science fiction. Perhaps the best known example is in the Star Trek franchise, where Bussard collectors are the glowing nacelles on warp engines. But in reality, our knowledge of fusion reactions needs to progress considerably before a ramjet is possible. We would also have to figure out that pesky drag problem before we begin to consider building such a ship!

Laser Sail: Solar sails have long been considered a cost-effective way of exploring the Solar System. In addition to being relatively easy and cheap to manufacture, there’s the added bonus that solar sails require no fuel. Rather than using rockets that require propellant, a sail uses the radiation pressure from stars to push large ultra-thin mirrors to high speeds.

IKAROS spaceprobe with solar sail in flight (artist’s depiction), showing a typical square sail configuration. Credit: Wikimedia Commons/Andrzej Mirecki

However, for the sake of interstellar flight, such a sail would need to be driven by focused energy beams (i.e. lasers or microwaves) to push it to a velocity approaching the speed of light. The concept was originally proposed in 1984 by Robert Forward, who was at the time a physicist at Hughes Aircraft’s research laboratories.

The concept retains the benefits of a solar sail, in that it requires no on-board fuel, and it also benefits from the fact that laser energy does not dissipate with distance nearly as much as solar radiation. So while a laser-driven sail would take some time to accelerate to near-luminous speeds, it would be limited only by the speed of light itself.

According to a 2000 study produced by Robert Frisbee, a director of advanced propulsion concept studies at NASA’s Jet Propulsion Laboratory, a laser sail could be accelerated to half the speed of light in less than a decade. He also calculated that a sail measuring about 320 km (200 miles) in diameter could reach Proxima Centauri in just over 12 years, while a sail measuring about 965 km (600 miles) in diameter would arrive in just under 9 years.
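Those arrival times are close to the hard floor set by the sail's top speed: even cruising at 0.5 c the entire way, the crossing cannot take less than about 8.5 years:

```python
# Lower bound on trip time: the whole distance covered at the sail's top speed.
distance_ly = 4.24        # Earth to Proxima Centauri
cruise_fraction_c = 0.5   # half the speed of light

min_years = distance_ly / cruise_fraction_c
print(f"Minimum crossing time at 0.5 c: {min_years:.1f} years")  # ~8.5 years
```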

However, such a sail would have to be built from advanced composites to avoid melting. Combined with its size, this would add up to a pretty penny! Even worse is the sheer expense of building a laser large and powerful enough to drive a sail to half the speed of light. According to Frisbee’s own study, the lasers would require a steady flow of 17,000 terawatts of power, close to what the entire world consumes in a single day.

Antimatter Engine: Fans of science fiction are sure to have heard of antimatter. But in case you haven’t, antimatter is essentially material composed of antiparticles, which have the same mass but the opposite charge of regular particles. An antimatter engine, meanwhile, is a form of propulsion that uses interactions between matter and antimatter to generate power or create thrust.

Artist’s concept of an antimatter-powered spacecraft for missions to Mars, as part of the Mars Reference Mission. Credit: NASA

In short, an antimatter engine involves particles of hydrogen and antihydrogen being slammed together. This reaction unleashes as much energy as a thermonuclear bomb, along with a shower of subatomic particles called pions and muons. These particles, which travel at one-third the speed of light, are then channeled by a magnetic nozzle to generate thrust.

The advantage to this class of rocket is that a large fraction of the rest mass of a matter/antimatter mixture may be converted to energy, allowing antimatter rockets to have a far higher energy density and specific impulse than any other proposed class of rocket. What’s more, controlling this kind of reaction could conceivably push a rocket up to half the speed of light.

Pound for pound, this class of ship would be the fastest and most fuel-efficient ever conceived. Whereas conventional rockets require tons of chemical fuel to propel a spaceship to its destination, an antimatter engine could do the same job with just a few milligrams of fuel. In fact, the mutual annihilation of a half pound of hydrogen and antihydrogen particles would unleash more energy than a 10-megaton hydrogen bomb.
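The hydrogen-bomb comparison follows directly from E = mc². The sketch below annihilates half a pound of matter with half a pound of antimatter, about one pound (0.45 kg) of total mass:

```python
# Mass-energy released by annihilating 0.5 lb of hydrogen with 0.5 lb of antihydrogen.
C = 299_792_458            # speed of light, m/s
mass_kg = 0.4536           # total annihilated mass: 0.5 lb + 0.5 lb
megaton_tnt_j = 4.184e15   # joules per megaton of TNT

energy_j = mass_kg * C**2
print(f"{energy_j:.2e} J, or about {energy_j / megaton_tnt_j:.1f} megatons of TNT")
```

That comes out to roughly ten megatons of TNT, which is the order of magnitude of the hydrogen-bomb comparison quoted above.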

It is for this exact reason that NASA’s Institute for Advanced Concepts (NIAC) has investigated the technology as a possible means for future Mars missions. Unfortunately, when contemplating missions to nearby star systems, the amount of fuel needed to make the trip is multiplied exponentially, and the cost involved in producing it would be astronomical (no pun intended!).

What matter and antimatter might look like annihilating one another. Credit: NASA/CXC/M. Weiss

According to a report prepared for the 39th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit (also by Robert Frisbee), a two-stage antimatter rocket would need over 815,000 metric tons (900,000 US tons) of fuel to make the journey to Proxima Centauri in approximately 40 years. That’s not bad, as far as timelines go. But again, there’s the cost…

Whereas a single gram of antimatter would produce an incredible amount of energy, it is estimated that producing just one gram would require approximately 25 million billion kilowatt-hours of energy and cost over a trillion dollars. At present, the total amount of antimatter that has been created by humans is less than 20 nanograms.

And even if we could produce antimatter cheaply, we would need a massive ship to hold the amount of fuel required. According to a report by Dr. Darrel Smith and Jonathan Webby of Embry-Riddle Aeronautical University in Arizona, an interstellar craft equipped with an antimatter engine could reach 0.5 the speed of light and arrive at Proxima Centauri in a little over 8 years. However, the ship itself would weigh 400 metric tons, and would need 170 metric tons of antimatter fuel to make the journey.

A possible way around this is to create a vessel that can create antimatter which it could then store as fuel. This concept, known as the Vacuum to Antimatter Rocket Interstellar Explorer System (VARIES), was proposed by Richard Obousy of Icarus Interstellar. Based on the idea of in-situ refueling, a VARIES ship would rely on large lasers (powered by enormous solar arrays) which would create particles of antimatter when fired at empty space.

Artist's concept of the Vacuum to Antimatter Rocket Interstellar Explorer System (VARIES), a concept that would use solar arrays to power lasers that create particles of antimatter to be used as fuel. Credit: Adrian Mann

Much like the Ramjet concept, this proposal solves the problem of carrying fuel by harnessing it from space. But once again, the sheer cost of such a ship would be prohibitively expensive using current technology. In addition, the ability to create antimatter in large volumes is not something we currently have the power to do. There's also the matter of radiation, as matter-antimatter annihilation can produce blasts of high-energy gamma rays.

This not only presents a danger to the crew, requiring significant radiation shielding, but requires that the engines be shielded as well to ensure they don't undergo atomic degradation from all the radiation they are exposed to. So, bottom line: the antimatter engine is completely impractical with our current technology and in the current budget environment.

Alcubierre Warp Drive: Fans of science fiction are also no doubt familiar with the concept of an Alcubierre (or Warp) Drive. Proposed by Mexican physicist Miguel Alcubierre in 1994, this method was an attempt to make FTL travel possible without violating Einstein's theory of Special Relativity. In short, the concept involves stretching the fabric of space-time in a wave, which would theoretically cause the space ahead of an object to contract and the space behind it to expand.

An object inside this wave (i.e. a spaceship) would then be able to ride this wave, known as a warp bubble, beyond relativistic speeds. Since the ship is not moving within this bubble, but is being carried along as it moves, the rules of space-time and relativity would cease to apply. The reason is that this method does not rely on moving faster than light in the local sense.

Artist Mark Rademaker's concept for the IXS Enterprise, a theoretical interstellar warp spacecraft. Credit: Mark Rademaker/flickr.com

It is only faster than light in the sense that the ship could reach its destination faster than a beam of light that was traveling outside the warp bubble. So assuming that a spacecraft could be outfitted with an Alcubierre Drive system, it would be able to make the trip to Proxima Centauri in less than 4 years. So when it comes to theoretical interstellar space travel, this is by far the most promising technology, at least in terms of speed.

Naturally, the concept has received its share of counter-arguments over the years. Chief among them is the fact that it does not take quantum mechanics into account and could be invalidated by a Theory of Everything (such as loop quantum gravity). Calculations of the amount of energy required have also indicated that a warp drive would demand a prohibitive amount of power to work. Other uncertainties include the safety of such a system, the effects on space-time at the destination, and violations of causality.

However, in 2012, NASA scientist Harold "Sonny" White announced that he and his colleagues had begun researching the possibility of an Alcubierre Drive. In a paper titled "Warp Field Mechanics 101", White claimed that they had constructed an interferometer that would detect the spatial distortions produced by the expanding and contracting spacetime of the Alcubierre metric.

In 2013, the Jet Propulsion Laboratory published results of a warp-field test conducted under vacuum conditions. Unfortunately, the results were reported as inconclusive. In the long term, we may find that the Alcubierre metric violates one or more fundamental laws of nature. And even if the physics should prove to be sound, there is no guarantee it could be harnessed for the sake of FTL flight.

In conclusion, if you were hoping to travel to the nearest star within your lifetime, the outlook isn't very good. However, if mankind felt the incentive to build an interstellar ark filled with a self-sustaining community of space-faring humans, it might be possible to travel there in a little under a century if we were willing to invest in the requisite technology.

But all the available methods are still very limited when it comes to transit time. And while taking hundreds or thousands of years to reach the nearest star may matter less to us if our very survival was at stake, it is simply not practical as far as space exploration and travel go. By the time a mission reached even the closest stars in our galaxy, the technology employed would be obsolete and humanity might not even exist back home anymore.

So unless we make a major breakthrough in the realms of fusion, antimatter, or laser technology, we will either have to be content with exploring our own Solar System, or be forced to accept a very long-term transit strategy.

We have written many interesting articles about space travel here at Universe Today. Here's Will We Ever Reach Another Star?, Warp Drives May Come With a Killer Downside, The Alcubierre Warp Drive, How Far Is A Light Year?, When Light Just Isn't Fast Enough, When Will We Become Interstellar?, and Can We Travel Faster Than the Speed of Light?

For more information, be sure to consult NASA's pages on Propulsion Systems of the Future, and Is Warp Drive Real?


Time travel – Wikipedia

Time travel is the concept of movement between certain points in time, analogous to movement between different points in space by an object or a person, typically using a hypothetical device known as a time machine. Time travel is a widely recognized concept in philosophy and fiction. The idea of a time machine was popularized by H. G. Wells' 1895 novel The Time Machine.

It is uncertain if time travel to the past is physically possible. Forward time travel, outside the usual sense of the perception of time, is an extensively observed phenomenon and is well understood within the framework of special relativity and general relativity. However, making one body advance or delay more than a few milliseconds compared to another body is not feasible with current technology.[1] As for backwards time travel, it is possible to find solutions in general relativity that allow for it, but the solutions require conditions that may not be physically possible. Traveling to an arbitrary point in spacetime has very limited support in theoretical physics, and is usually connected only with quantum mechanics or wormholes, also known as Einstein–Rosen bridges.

Some ancient myths depict a character skipping forward in time. In Hindu mythology, the Mahabharata mentions the story of King Raivata Kakudmi, who travels to heaven to meet the creator Brahma and is surprised to learn when he returns to Earth that many ages have passed.[2] The Buddhist Pāli Canon mentions the relativity of time. The Payasi Sutta tells of one of the Buddha's chief disciples, Kumara Kassapa, who explains to the skeptic Payasi that time in the Heavens passes differently than on Earth.[3] The Japanese tale of "Urashima Tarō",[4] first described in the Nihongi (720), tells of a young fisherman named Urashima Tarō who visits an undersea palace. After three days, he returns home to his village and finds himself 300 years in the future, where he has been forgotten, his house is in ruins, and his family has died.[5] In Jewish tradition, the 1st-century BC scholar Honi ha-M'agel is said to have fallen asleep and slept for seventy years. When he woke up he returned home but found none of the people he knew, and no one believed he was who he claimed to be.[6]

Early science fiction stories feature characters who sleep for years and awaken in a changed society, or are transported to the past through supernatural means. Among them are L'An 2440, rêve s'il en fût jamais (1770) by Louis-Sébastien Mercier, Rip Van Winkle (1819) by Washington Irving, Looking Backward (1888) by Edward Bellamy, and When the Sleeper Awakes (1899) by H.G. Wells. Prolonged sleep, like the more familiar time machine, is used as a means of time travel in these stories.[7]

The earliest work about backwards time travel is uncertain. Samuel Madden's Memoirs of the Twentieth Century (1733) is a series of letters from British ambassadors in 1997 and 1998 to diplomats in the past, conveying the political and religious conditions of the future.[8]:95–96 Because the narrator receives these letters from his guardian angel, Paul Alkon suggests in his book Origins of Futuristic Fiction that "the first time-traveler in English literature is a guardian angel."[8]:85 Madden does not explain how the angel obtains these documents, but Alkon asserts that Madden "deserves recognition as the first to toy with the rich idea of time-travel in the form of an artifact sent backward from the future to be discovered in the present."[8]:95–96 In the science fiction anthology Far Boundaries (1951), editor August Derleth claims that an early short story about time travel is "Missing One's Coach: An Anachronism", written for the Dublin Literary Magazine[9] by an anonymous author in 1838.[10]:3 While the narrator waits under a tree for a coach to take him out of Newcastle, he is transported back in time over a thousand years. He encounters the Venerable Bede in a monastery and explains to him the developments of the coming centuries. However, the story never makes it clear whether these events are real or a dream.[10]:11–38 Another early work about time travel is The Forebears of Kalimeros: Alexander, son of Philip of Macedon by Alexander Veltman published in 1836.[11]

Charles Dickens’s A Christmas Carol (1843) has early depictions of time travel in both directions, as the protagonist, Ebenezer Scrooge, is transported to Christmases past and future. Other stories employ the same template, where a character naturally goes to sleep, and upon waking up finds themselves in a different time.[12] A clearer example of backward time travel is found in the popular 1861 book Paris avant les hommes (Paris before Men) by the French botanist and geologist Pierre Boitard, published posthumously. In this story, the protagonist is transported to the prehistoric past by the magic of a “lame demon” (a French pun on Boitard’s name), where he encounters a Plesiosaur and an apelike ancestor and is able to interact with ancient creatures.[13] Edward Everett Hale’s “Hands Off” (1881) tells the story of an unnamed being, possibly the soul of a person who has recently died, who interferes with ancient Egyptian history by preventing Joseph’s enslavement. This may have been the first story to feature an alternate history created as a result of time travel.[14]:54

One of the first stories to feature time travel by means of a machine is “The Clock that Went Backward” by Edward Page Mitchell,[15] which appeared in the New York Sun in 1881. However, the mechanism borders on fantasy. An unusual clock, when wound, runs backwards and transports people nearby back in time. The author does not explain the origin or properties of the clock.[14]:55 Enrique Gaspar y Rimbau’s El Anacronópete (1887) may have been the first story to feature a vessel engineered to travel through time.[16][17] Andrew Sawyer has commented that the story “does seem to be the first literary description of a time machine noted so far”, adding that “Edward Page Mitchell’s story ‘The Clock That Went Backward’ (1881) is usually described as the first time-machine story, but I’m not sure that a clock quite counts.”[18] H. G. Wells’s The Time Machine (1895) popularized the concept of time travel by mechanical means.[19]

Some theories, most notably special and general relativity, suggest that suitable geometries of spacetime or specific types of motion in space might allow time travel into the past and future if these geometries or motions were possible.[20]:499 In technical papers, physicists discuss the possibility of closed timelike curves, which are world lines that form closed loops in spacetime, allowing objects to return to their own past. There are known to be solutions to the equations of general relativity that describe spacetimes which contain closed timelike curves, such as Gödel spacetime, but the physical plausibility of these solutions is uncertain.

Many in the scientific community believe that backward time travel is highly unlikely. Any theory that would allow time travel would introduce potential problems of causality.[21] The classic example of a problem involving causality is the “grandfather paradox”: what if one were to go back in time and kill one’s own grandfather before one’s father was conceived? Some physicists, such as Novikov and Deutsch, suggested that these sorts of temporal paradoxes can be avoided through the Novikov self-consistency principle or a variation of the many-worlds interpretation with interacting worlds.[22]

Time travel to the past is theoretically possible in certain general relativity spacetime geometries that permit traveling faster than the speed of light, such as cosmic strings, traversable wormholes, and the Alcubierre drive.[23][24]:33–130 The theory of general relativity does suggest a scientific basis for the possibility of backward time travel in certain unusual scenarios, although arguments from semiclassical gravity suggest that when quantum effects are incorporated into general relativity, these loopholes may be closed.[25] These semiclassical arguments led Hawking to formulate the chronology protection conjecture, suggesting that the fundamental laws of nature prevent time travel,[26] but physicists cannot come to a definite judgment on the issue without a theory of quantum gravity to join quantum mechanics and general relativity into a completely unified theory.[27][28]:150

The theory of general relativity describes the universe under a system of field equations that determine the metric, or distance function, of spacetime. There exist exact solutions to these equations that include closed time-like curves, which are world lines that intersect themselves; some point in the causal future of the world line is also in its causal past, a situation which is akin to time travel. Such a solution was first proposed by Kurt Gödel, a solution known as the Gödel metric, but his (and others’) solution requires the universe to have physical characteristics that it does not appear to have,[20]:499 such as rotation and lack of Hubble expansion. Whether general relativity forbids closed time-like curves for all realistic conditions is still being researched.[29]

Wormholes are hypothetical warped regions of spacetime permitted by the Einstein field equations of general relativity.[30]:100 A proposed time-travel machine using a traversable wormhole would hypothetically work in the following way: One end of the wormhole is accelerated to some significant fraction of the speed of light, perhaps with some advanced propulsion system, and then brought back to the point of origin. Alternatively, another way is to take one entrance of the wormhole and move it to within the gravitational field of an object that has higher gravity than the other entrance, and then return it to a position near the other entrance. For both of these methods, time dilation causes the end of the wormhole that has been moved to have aged less, or become “younger”, than the stationary end as seen by an external observer; however, time connects differently through the wormhole than outside it, so that synchronized clocks at either end of the wormhole will always remain synchronized as seen by an observer passing through the wormhole, no matter how the two ends move around.[20]:502 This means that an observer entering the “younger” end would exit the “older” end at a time when it was the same age as the “younger” end, effectively going back in time as seen by an observer from the outside. One significant limitation of such a time machine is that it is only possible to go as far back in time as the initial creation of the machine;[20]:503 in essence, it is more of a path through time than it is a device that itself moves through time, and it would not allow the technology itself to be moved backward in time.
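The aging difference this recipe relies on is ordinary special-relativistic time dilation. As an illustration, consider a hypothetical tour in which one mouth is towed out and back at 0.8 c for ten years of external time (the speed and duration are assumed purely for the example):

```python
# How far back such a wormhole time machine would reach, for an assumed tour:
# one mouth towed at 0.8 c for 10 years of external (stationary-frame) time.
import math

v = 0.8                     # speed of the towed mouth, fraction of c (assumed)
earth_time = 10.0           # years elapsed for the stationary mouth
gamma = 1 / math.sqrt(1 - v**2)

moved_mouth_age = earth_time / gamma     # proper time for the towed mouth
offset = earth_time - moved_mouth_age    # how far "back in time" the machine reaches

print(f"Stationary mouth ages: {earth_time:.1f} years")
print(f"Towed mouth ages:      {moved_mouth_age:.1f} years")
print(f"Time-machine offset:   {offset:.1f} years")
```

On these numbers, stepping in through the younger mouth would take an external observer four years into the past; but, as noted above, never to before the machine was built.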

According to current theories on the nature of wormholes, construction of a traversable wormhole would require the existence of a substance with negative energy, often referred to as “exotic matter”. More technically, the wormhole spacetime requires a distribution of energy that violates various energy conditions, such as the null energy condition along with the weak, strong, and dominant energy conditions. However, it is known that quantum effects can lead to small measurable violations of the null energy condition,[30]:101 and many physicists believe that the required negative energy may actually be possible due to the Casimir effect in quantum physics.[31] Although early calculations suggested a very large amount of negative energy would be required, later calculations showed that the amount of negative energy can be made arbitrarily small.[32]

In 1993, Matt Visser argued that the two mouths of a wormhole with such an induced clock difference could not be brought together without inducing quantum field and gravitational effects that would either make the wormhole collapse or the two mouths repel each other.[33] Because of this, the two mouths could not be brought close enough for causality violation to take place. However, in a 1997 paper, Visser hypothesized that a complex “Roman ring” (named after Tom Roman) configuration of an N number of wormholes arranged in a symmetric polygon could still act as a time machine, although he concludes that this is more likely a flaw in classical quantum gravity theory rather than proof that causality violation is possible.[34]

Another approach involves a dense spinning cylinder usually referred to as a Tipler cylinder, a GR solution discovered by Willem Jacob van Stockum[35] in 1936 and Kornel Lanczos[36] in 1924, but not recognized as allowing closed timelike curves[37]:21 until an analysis by Frank Tipler[38] in 1974. If a cylinder is infinitely long and spins fast enough about its long axis, then a spaceship flying around the cylinder on a spiral path could travel back in time (or forward, depending on the direction of its spiral). However, the density and speed required are so great that ordinary matter is not strong enough to construct it. A similar device might be built from a cosmic string, but none are known to exist, and it does not seem to be possible to create a new cosmic string. Physicist Ronald Mallett is attempting to recreate the conditions of a rotating black hole with ring lasers, in order to bend spacetime and allow for time travel.[39]

A more fundamental objection to time travel schemes based on rotating cylinders or cosmic strings has been put forward by Stephen Hawking, who proved a theorem showing that according to general relativity it is impossible to build a time machine of a special type (a “time machine with the compactly generated Cauchy horizon”) in a region where the weak energy condition is satisfied, meaning that the region contains no matter with negative energy density (exotic matter). Solutions such as Tipler’s assume cylinders of infinite length, which are easier to analyze mathematically, and although Tipler suggested that a finite cylinder might produce closed timelike curves if the rotation rate were fast enough,[37]:169 he did not prove this. But Hawking points out that because of his theorem, “it can’t be done with positive energy density everywhere! I can prove that to build a finite time machine, you need negative energy.”[28]:96 This result comes from Hawking’s 1992 paper on the chronology protection conjecture, where he examines “the case that the causality violations appear in a finite region of spacetime without curvature singularities” and proves that “there will be a Cauchy horizon that is compactly generated and that in general contains one or more closed null geodesics which will be incomplete. One can define geometrical quantities that measure the Lorentz boost and area increase on going round these closed null geodesics. If the causality violation developed from a noncompact initial surface, the averaged weak energy condition must be violated on the Cauchy horizon.”[26] This theorem does not rule out the possibility of time travel by means of time machines with the non-compactly generated Cauchy horizons (such as the Deutsch-Politzer time machine) or in regions which contain exotic matter, which would be used for traversable wormholes or the Alcubierre drive.

When a signal is sent from one location and received at another location, then as long as the signal is moving at the speed of light or slower, the mathematics of simultaneity in the theory of relativity show that all reference frames agree that the transmission-event happened before the reception-event. When the signal travels faster than light, it is received before it is sent, in all reference frames.[40] The signal could be said to have moved backward in time. This hypothetical scenario is sometimes referred to as a tachyonic antitelephone.[41]
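The effect follows directly from the Lorentz transformation. In units where c = 1, a signal sent at twice light speed can be shown to arrive before it was sent in a frame moving at 0.6 c (the speeds here are arbitrary choices satisfying uv > c²):

```python
# Lorentz-transforming a hypothetical faster-than-light signal to show the
# reception event occurring before the emission event in a boosted frame.
# Units: c = 1; emission happens at the origin (t = 0, x = 0).
import math

u = 2.0                 # signal speed: twice the speed of light (hypothetical)
v = 0.6                 # boost velocity of the second frame; u * v > 1
t, x = 1.0, u * 1.0     # reception event in the emitter's frame

gamma = 1 / math.sqrt(1 - v**2)
t_prime = gamma * (t - v * x)   # time of the reception event in the boosted frame

print(f"Reception time in boosted frame: {t_prime:.3f}")
```

The boosted-frame reception time comes out negative, i.e. before the emission at t′ = 0, which is exactly the reversal of ordering described above.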

Quantum-mechanical phenomena such as quantum teleportation, the EPR paradox, or quantum entanglement might appear to create a mechanism that allows for faster-than-light (FTL) communication or time travel, and in fact some interpretations of quantum mechanics such as the Bohm interpretation presume that some information is being exchanged between particles instantaneously in order to maintain correlations between particles.[42] This effect was referred to as “spooky action at a distance” by Einstein.

Nevertheless, the fact that causality is preserved in quantum mechanics is a rigorous result in modern quantum field theories, and therefore modern theories do not allow for time travel or FTL communication. In any specific instance where FTL has been claimed, more detailed analysis has proven that to get a signal, some form of classical communication must also be used.[43] The no-communication theorem also gives a general proof that quantum entanglement cannot be used to transmit information faster than classical signals.

A variation of Everett’s many-worlds interpretation (MWI) of quantum mechanics provides a resolution to the grandfather paradox that involves the time traveler arriving in a different universe than the one they came from; it’s been argued that since the traveler arrives in a different universe’s history and not their own history, this is not “genuine” time travel.[44] The accepted many-worlds interpretation suggests that all possible quantum events can occur in mutually exclusive histories.[45] However, some variations allow different universes to interact. This concept is most often used in science fiction, but some physicists such as David Deutsch have suggested that a time traveler should end up in a different history than the one he started from.[46][47] On the other hand, Stephen Hawking has argued that even if the MWI is correct, we should expect each time traveler to experience a single self-consistent history, so that time travelers remain within their own world rather than traveling to a different one.[48] The physicist Allen Everett argued that Deutsch’s approach “involves modifying fundamental principles of quantum mechanics; it certainly goes beyond simply adopting the MWI”. Everett also argues that even if Deutsch’s approach is correct, it would imply that any macroscopic object composed of multiple particles would be split apart when traveling back in time through a wormhole, with different particles emerging in different worlds.[22]

Daniel Greenberger and Karl Svozil proposed that quantum theory gives a model for time travel without paradoxes.[49][50] The quantum theory observation causes possible states to ‘collapse’ into one measured state; hence, the past observed from the present is deterministic (it has only one possible state), but the present observed from the past has many possible states until our actions cause it to collapse into one state. Our actions will then be seen to have been inevitable.

Certain experiments carried out give the impression of reversed causality, but fail to show it under closer examination.

The delayed choice quantum eraser experiment performed by Marlan Scully involves pairs of entangled photons that are divided into “signal photons” and “idler photons”, with the signal photons emerging from one of two locations and their position later measured as in the double-slit experiment. Depending on how the idler photon is measured, the experimenter can either learn which of the two locations the signal photon emerged from or “erase” that information. Even though the signal photons can be measured before the choice has been made about the idler photons, the choice seems to retroactively determine whether or not an interference pattern is observed when one correlates measurements of idler photons to the corresponding signal photons. However, since interference can only be observed after the idler photons are measured and they are correlated with the signal photons, there is no way for experimenters to tell what choice will be made in advance just by looking at the signal photons, only by gathering classical information from the entire system; thus causality is preserved.[51]

The experiment of Lijun Wang might also show causality violation, since it made it possible to send wave packets through a bulb of caesium gas in such a way that the packet appeared to exit the bulb 62 nanoseconds before its entry. But a wave packet is not a single well-defined object but rather a sum of multiple waves of different frequencies (see Fourier analysis), and the packet can appear to move faster than light or even backward in time even if none of the pure waves in the sum do so. This effect cannot be used to send any matter, energy, or information faster than light,[52] so this experiment is understood not to violate causality either.

The physicists Günter Nimtz and Alfons Stahlhofen, of the University of Koblenz, claim to have violated Einstein’s theory of relativity by transmitting photons faster than the speed of light. They say they have conducted an experiment in which microwave photons traveled “instantaneously” between a pair of prisms that had been moved up to 3 ft (0.91 m) apart, using a phenomenon known as quantum tunneling. Nimtz told New Scientist magazine: “For the time being, this is the only violation of special relativity that I know of.” However, other physicists say that this phenomenon does not allow information to be transmitted faster than light. Aephraim Steinberg, a quantum optics expert at the University of Toronto, Canada, uses the analogy of a train traveling from Chicago to New York, but dropping off train cars at each station along the way, so that the center of the train moves forward at each stop; in this way, the speed of the center of the train exceeds the speed of any of the individual cars.[53]

Shengwang Du claims in a peer-reviewed journal to have observed single photons’ precursors, saying that they travel no faster than c in a vacuum. His experiment involved slow light as well as passing light through a vacuum. He generated two single photons, passing one through rubidium atoms that had been cooled with a laser (thus slowing the light) and passing one through a vacuum. Both times, apparently, the precursors preceded the photons’ main bodies, and the precursor traveled at c in a vacuum. According to Du, this implies that there is no possibility of light traveling faster than c and, thus, no possibility of violating causality.[54]

The absence of time travelers from the future is a variation of the Fermi paradox. Just as the absence of extraterrestrial visitors does not prove that they do not exist, the absence of time travelers does not prove that time travel is physically impossible; it might be that time travel is physically possible but is never developed or is used cautiously. Carl Sagan once suggested the possibility that time travelers could be here but are disguising their existence or are not recognized as time travelers.[27] Some versions of general relativity suggest that time travel might only be possible in a region of spacetime that is warped a certain way, and hence time travelers would not be able to travel back to earlier regions in spacetime, before this region existed. Stephen Hawking stated that this would explain why the world has not already been overrun by “tourists from the future.”[48]

Several experiments have been carried out to try to entice future humans, who might invent time travel technology, to come back and demonstrate it to people of the present time. Events such as Perth’s Destination Day or MIT’s Time Traveler Convention heavily publicized permanent “advertisements” of a meeting time and place for future time travelers to meet.[55] In 1982, a group in Baltimore, Maryland, identifying itself as the Krononauts, hosted an event of this type welcoming visitors from the future.[56][57] These experiments could only have produced a positive result demonstrating the existence of time travel, and they have failed so far: no time travelers are known to have attended either event. Some versions of the many-worlds interpretation can be used to suggest that future humans have traveled back in time, but have traveled back to the meeting time and place in a parallel universe.[58]

There is a great deal of observable evidence for time dilation in special relativity[59] and gravitational time dilation in general relativity,[60][61][62] for example in the famous and easy-to-replicate observation of atmospheric muon decay.[63][64][65] The theory of relativity states that the speed of light is invariant for all observers in any frame of reference; that is, it is always the same. Time dilation is a direct consequence of the invariance of the speed of light.[65] Time dilation may be regarded in a limited sense as “time travel into the future”: a person may use time dilation so that a small amount of proper time passes for them, while a large amount of proper time passes elsewhere. This can be achieved by traveling at relativistic speeds or through the effects of gravity.[66]
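The atmospheric-muon observation mentioned above can be put in rough numbers (a sketch; the production altitude and muon speed are typical assumed values):

```python
# Atmospheric muon survival: without time dilation, almost no muons created
# ~15 km up would survive to sea level, yet they are routinely detected.
import math

C = 299_792_458.0
TAU = 2.2e-6                  # muon mean lifetime at rest, seconds
v = 0.995 * C                 # typical cosmic-ray muon speed (assumed)
altitude = 15_000.0           # assumed production altitude, metres

gamma = 1 / math.sqrt(1 - (v / C) ** 2)
flight_time = altitude / v                           # lab-frame transit time
surviving = math.exp(-flight_time / (gamma * TAU))   # with time dilation
surviving_naive = math.exp(-flight_time / TAU)       # without time dilation

print(f"Lorentz factor: {gamma:.1f}")
print(f"Surviving fraction with dilation:    {surviving:.2f}")
print(f"Surviving fraction without dilation: {surviving_naive:.1e}")
```

With dilation roughly a tenth of the muons survive the trip; without it the surviving fraction would be vanishingly small, which is why the observation is treated as direct evidence for time dilation.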

For two identical clocks moving relative to each other without accelerating, each clock measures the other to be ticking slower. This is possible due to the relativity of simultaneity. However, the symmetry is broken if one clock accelerates, allowing for less proper time to pass for one clock than the other. The twin paradox describes this: one twin remains on Earth, while the other undergoes acceleration to relativistic speed as they travel into space, turn around, and travel back to Earth; the traveling twin ages less than the twin who stayed on Earth, because of the time dilation experienced during their acceleration. General relativity treats the effects of acceleration and the effects of gravity as equivalent, and shows that time dilation also occurs in gravity wells, with a clock deeper in the well ticking more slowly; this effect is taken into account when calibrating the clocks on the satellites of the Global Positioning System, and it could lead to significant differences in rates of aging for observers at different distances from a large gravity well such as a black hole.[24]:33–130
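The GPS calibration mentioned above combines both effects, and a rough daily budget can be computed from standard orbital parameters (a sketch; the textbook answer is a net clock gain of about 38 microseconds per day):

```python
# Daily clock-rate budget for a GPS satellite relative to the ground:
# velocity time dilation slows the clock, weaker gravity speeds it up.
import math

C = 299_792_458.0
GM_EARTH = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6            # mean Earth radius, m
R_SAT = 2.6561e7             # GPS orbital radius, m (~20,200 km altitude)
DAY = 86400.0

v = math.sqrt(GM_EARTH / R_SAT)              # circular orbital speed
special = -(v**2 / (2 * C**2)) * DAY         # velocity effect: clock runs slow
general = (GM_EARTH / C**2) * (1 / R_EARTH - 1 / R_SAT) * DAY  # gravity: runs fast

print(f"Special-relativistic shift: {special * 1e6:+.1f} microseconds/day")
print(f"Gravitational shift:        {general * 1e6:+.1f} microseconds/day")
print(f"Net:                        {(special + general) * 1e6:+.1f} microseconds/day")
```

The gravitational term dominates at GPS altitude, so the satellite clocks are deliberately set to tick slightly slow before launch.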

A time machine that utilizes this principle might be, for instance, a spherical shell with a diameter of 5 meters and the mass of Jupiter. A person at its center will travel forward in time at a rate four times that of distant observers. Squeezing the mass of a large planet into such a small structure is not expected to be within humanity’s technological capabilities in the near future.[24]:76–140 With current technologies, it is only possible to cause a human traveler to age less than companions on Earth by a very small fraction of a second, the current record being about 20 milliseconds for the cosmonaut Sergei Avdeyev.[67]
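The ~20 millisecond figure for Avdeyev is roughly reproducible from his cumulative time in orbit, about 747 days aboard Mir (a sketch; the orbital altitude here is an assumed round number):

```python
# Estimating Sergei Avdeyev's net time dilation over ~747 days in low Earth
# orbit: in LEO the velocity effect outweighs the gravitational effect.
import math

C = 299_792_458.0
GM_EARTH = 3.986004418e14
R_EARTH = 6.371e6
R_ORBIT = R_EARTH + 3.5e5    # Mir's ~350 km altitude (assumed)
DAYS = 747.0

v = math.sqrt(GM_EARTH / R_ORBIT)                            # orbital speed
velocity_term = v**2 / (2 * C**2)                            # clock runs slow
gravity_term = (GM_EARTH / C**2) * (1/R_EARTH - 1/R_ORBIT)   # clock runs fast
net_younger = (velocity_term - gravity_term) * DAYS * 86400.0

print(f"Net time 'gained' relative to Earth: {net_younger * 1e3:.0f} ms")
```

The estimate lands at roughly 19 milliseconds, in line with the quoted record; note that in low orbit, unlike at GPS altitude, the velocity term wins and the traveler ages less.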

Philosophers have discussed the nature of time since at least the time of ancient Greece; for example, Parmenides presented the view that time is an illusion. Centuries later, Newton supported the idea of absolute time, while his contemporary Leibniz maintained that time is only a relation between events and it cannot be expressed independently. The latter approach eventually gave rise to the spacetime of relativity.[68]

Many philosophers have argued that relativity implies eternalism, the idea that the past and future exist in a real sense, not only as changes that occurred or will occur to the present.[69] Philosopher of science Dean Rickles disagrees with some qualifications, but notes that “the consensus among philosophers seems to be that special and general relativity are incompatible with presentism.”[70] Some philosophers view time as a dimension equal to spatial dimensions, that future events are “already there” in the same sense different places exist, and that there is no objective flow of time; however, this view is disputed.[71]

Presentism is a school of philosophy that holds that the future and the past exist only as changes that occurred or will occur to the present, and they have no real existence of their own. In this view, time travel is impossible because there is no future or past to travel to.[69] Keller and Nelson have argued that even if past and future objects do not exist, there can still be definite truths about past and future events, and thus it is possible that a future truth about a time traveler deciding to travel back to the present date could explain the time traveler’s actual appearance in the present;[72] these views are contested by some authors.[73]

Presentism in classical spacetime deems that only the present exists; this is not reconcilable with special relativity, shown in the following example: Alice and Bob are simultaneous observers of event O. For Alice, some event E is simultaneous with O, but for Bob, event E is in the past or future. Therefore, Alice and Bob disagree about what exists in the present, which contradicts classical presentism. “Here-now presentism” attempts to reconcile this by only acknowledging the time and space of a single point; this is unsatisfactory because objects coming and going from the “here-now” alternate between real and unreal, in addition to the lack of a privileged “here-now” that would be the “real” present. “Relativized presentism” acknowledges that there are infinite frames of reference, each of which has a different set of simultaneous events, which makes it impossible to distinguish a single “real” present, and hence either all events in time are real (blurring the difference between presentism and eternalism) or each frame of reference exists in its own reality. Options for presentism in special relativity appear to be exhausted, but Gödel and others suspect presentism may be valid for some forms of general relativity.[74] Generally, the idea of absolute time and space is considered incompatible with general relativity; there is no universal truth about the absolute position of events which occur at different times, and thus no way to determine which point in space at one time is at the universal “same position” at another time,[75] and all coordinate systems are on equal footing as given by the principle of diffeomorphism invariance.[76]

A common objection to the idea of traveling back in time is put forth in the grandfather paradox or the argument of auto-infanticide.[77] If one were able to go back in time, inconsistencies and contradictions would ensue if the time traveler were to change anything; there is a contradiction if the past becomes different from the way it is.[78][79] The paradox is commonly described with a person who travels to the past and kills their own grandfather, preventing the existence of their father or mother, and therefore their own existence.[27] Philosophers question whether these paradoxes make time travel impossible. Some philosophers answer the paradoxes by arguing that it might be the case that backward time travel could be possible but that it would be impossible to actually change the past in any way,[80] an idea similar to the proposed Novikov self-consistency principle in physics.

According to the philosophical theory of compossibility, what can happen, for example in the context of time travel, must be weighed against the context of everything relating to the situation. If the past is a certain way, it’s not possible for it to be any other way. What can happen when a time traveler visits the past is limited to what did happen, in order to prevent logical contradictions.[81]

The Novikov self-consistency principle, named after Igor Dmitrievich Novikov, states that any actions taken by a time traveler or by an object that travels back in time were part of history all along, and therefore it is impossible for the time traveler to “change” history in any way. The time traveler’s actions may be the cause of events in their own past though, which leads to the potential for circular causation, sometimes called a predestination paradox,[82] ontological paradox,[83] or bootstrap paradox.[83][84] The term bootstrap paradox was popularized by Robert A. Heinlein’s story “By His Bootstraps”.[85] The Novikov self-consistency principle proposes that the local laws of physics in a region of spacetime containing time travelers cannot be any different from the local laws of physics in any other region of spacetime.[86]

The philosopher Kelley L. Ross argues in “Time Travel Paradoxes”[87] that in a scenario involving a physical object whose world-line or history forms a closed loop in time there can be a violation of the second law of thermodynamics. Ross uses “Somewhere in Time” as an example of such an ontological paradox, where a watch is given to a person, and 60 years later the same watch is brought back in time and given to the same character. Ross states that the entropy of the watch will increase, and the watch carried back in time will be more worn with each repetition of its history. The second law of thermodynamics is understood by modern physicists to be a statistical law, so decreasing entropy or non-increasing entropy are not impossible, just improbable. Additionally, entropy statistically increases in systems which are isolated, so non-isolated systems, such as an object that interacts with the outside world, can become less worn and decrease in entropy, and it’s possible for an object whose world-line forms a closed loop to be always in the same condition at the same point of its history.[24]:23

Time travel themes in science fiction and the media can generally be grouped into three categories: immutable timeline; mutable timeline; and alternate histories, as in the interacting-many-worlds interpretation.[88][89][90] Frequently in fiction, “timeline” is used to refer to all physical events in history, so that in time travel stories where events can be changed, the time traveler is described as creating a new or altered timeline.[91] This usage is distinct from the use of “timeline” to refer to a type of chart that illustrates a particular series of events, and the concept is also distinct from a world line, a term from Einstein’s theory of relativity which refers to the entire history of a single object.

Time travel – Wikipedia

Space exploration – Wikipedia

Space exploration is the discovery and exploration of celestial structures in outer space by means of evolving and growing space technology. While the study of space is carried out mainly by astronomers with telescopes, the physical exploration of space is conducted both by unmanned robotic space probes and human spaceflight.

While the observation of objects in space, known as astronomy, predates reliable recorded history, it was the development of large and relatively efficient rockets during the mid-twentieth century that allowed physical space exploration to become a reality. Common rationales for exploring space include advancing scientific research, national prestige, uniting different nations, ensuring the future survival of humanity, and developing military and strategic advantages against other countries.[1]

Space exploration has often been used as a proxy competition for geopolitical rivalries such as the Cold War. The early era of space exploration was driven by a “Space Race” between the Soviet Union and the United States. The launch of the first human-made object to orbit Earth, the Soviet Union’s Sputnik 1, on 4 October 1957, and the first Moon landing by the American Apollo 11 mission on 20 July 1969 are often taken as landmarks for this initial period. The Soviet Space Program achieved many of the first milestones, including the first living being in orbit in 1957, the first human spaceflight (Yuri Gagarin aboard Vostok 1) in 1961, the first spacewalk (by Aleksei Leonov) on 18 March 1965, the first automatic landing on another celestial body in 1966, and the launch of the first space station (Salyut 1) in 1971. After the first 20 years of exploration, focus shifted from one-off flights to reusable hardware, such as the Space Shuttle program, and from competition to cooperation as with the International Space Station (ISS).

With the substantial completion of the ISS[2] following STS-133 in March 2011, plans for space exploration by the U.S. remain in flux. Constellation, a Bush Administration program for a return to the Moon by 2020,[3] was judged inadequately funded and unrealistic by an expert review panel reporting in 2009.[4] The Obama Administration proposed a revision of Constellation in 2010 to focus on the development of the capability for crewed missions beyond low Earth orbit (LEO), envisioning extending the operation of the ISS beyond 2020, transferring the development of launch vehicles for human crews from NASA to the private sector, and developing technology to enable missions beyond LEO, such as Earth–Moon L1, the Moon, Earth–Sun L2, near-Earth asteroids, and Phobos or Mars orbit.[5]

In the 2000s, the People’s Republic of China initiated a successful manned spaceflight program, while the European Union, Japan, and India have also planned future crewed space missions. China, Russia, Japan, and India have advocated crewed missions to the Moon during the 21st century, while the European Union has advocated manned missions to both the Moon and Mars during the 21st century.

From the 1990s onwards, private interests began promoting space tourism and then public space exploration of the Moon (see Google Lunar X Prize).

The highest known projectiles prior to the rockets of the 1940s were the shells of the Paris Gun, a type of German long-range siege gun, which reached an altitude of at least 40 kilometers during World War I.[6] Steps towards putting a human-made object into space were taken by German scientists during World War II while testing the V-2 rocket (designated A-4), which became the first human-made object to reach space with a launch on 3 October 1942. After the war, the U.S. used German scientists and their captured rockets in programs for both military and civilian research. The first scientific exploration from space was the cosmic radiation experiment launched by the U.S. on a V-2 rocket on 10 May 1946.[7] The first images of Earth taken from space followed the same year,[8][9] while the first animal experiment saw fruit flies lifted into space in 1947, both also on modified V-2s launched by Americans. Starting in 1947, the Soviets, also with the help of German teams, launched sub-orbital V-2 rockets and their own variant, the R-1, including radiation and animal experiments on some flights. These suborbital experiments only allowed a very short time in space, which limited their usefulness.

The first successful orbital launch was of the Soviet uncrewed Sputnik 1 (“Satellite 1”) mission on 4 October 1957. The satellite weighed about 83 kg (183 lb), and is believed to have orbited Earth at a height of about 250 km (160 mi). It had two radio transmitters (20 and 40 MHz), which emitted “beeps” that could be heard by radios around the globe. Analysis of the radio signals was used to gather information about the electron density of the ionosphere, while temperature and pressure data were encoded in the duration of radio beeps. The results indicated that the satellite was not punctured by a meteoroid. Sputnik 1 was launched by an R-7 rocket. It burned up upon re-entry on 3 January 1958.

The second satellite to reach orbit was Sputnik 2. Launched by the USSR on 3 November 1957, it carried the dog Laika, who became the first animal in orbit.

This success led to an escalation of the American space program, which unsuccessfully attempted to launch a Vanguard satellite into orbit two months later. On 31 January 1958, the U.S. successfully orbited Explorer 1 on a Juno rocket.

The first successful human spaceflight was Vostok 1 (“East 1”), carrying 27-year-old Russian cosmonaut Yuri Gagarin on 12 April 1961. The spacecraft completed one orbit around the globe, lasting about 1 hour and 48 minutes. Gagarin’s flight resonated around the world; it was a demonstration of the advanced Soviet space program and it opened an entirely new era in space exploration: human spaceflight.

The U.S. first launched a person into space within a month of Vostok 1 with Alan Shepard’s suborbital flight on Freedom 7. Orbital flight was achieved by the United States when John Glenn’s Friendship 7 orbited Earth on 20 February 1962.

Valentina Tereshkova, the first woman in space, orbited Earth 48 times aboard Vostok 6 on 16 June 1963.

China first launched a person into space 42 years after the launch of Vostok 1, on 15 October 2003, with the flight of Yang Liwei aboard the Shenzhou 5 (Divine Vessel 5) spacecraft.

The first artificial object to reach another celestial body was Luna 2 in 1959.[10] The first automatic landing on another celestial body was performed by Luna 9[11] in 1966. Luna 10 became the first artificial satellite of the Moon.[12]

The first crewed landing on another celestial body was performed by Apollo 11 on 20 July 1969.

The first successful interplanetary flyby was the 1962 Mariner 2 flyby of Venus (closest approach 34,773 kilometers). The other planets were first flown by Mariner 4 at Mars in 1965, Pioneer 10 at Jupiter in 1973, Mariner 10 at Mercury in 1974, Pioneer 11 at Saturn in 1979, and Voyager 2 at Uranus in 1986 and Neptune in 1989. In 2015, the dwarf planet Ceres was orbited by Dawn, and the dwarf planet Pluto was passed by New Horizons.

The first interplanetary surface mission to return at least limited surface data from another planet was the 1970 landing of Venera 7 on Venus, which returned data to Earth for 23 minutes. In 1975, Venera 9 was the first to return images from the surface of another planet. In 1971, the Mars 3 mission achieved the first soft landing on Mars, returning data for almost 20 seconds. Much longer-duration surface missions were later achieved, including over six years of Mars surface operation by Viking 1 from 1975 to 1982 and over two hours of transmission from the surface of Venus by Venera 13 in 1982, the longest ever Soviet planetary surface mission.

The dream of stepping into the outer reaches of Earth’s atmosphere was driven by the fiction of Jules Verne[13][14][15] and H. G. Wells,[16] and rocket technology was developed to try to realize this vision. The German V-2 was the first rocket to travel into space, overcoming the problems of thrust and material failure. During the final days of World War II this technology was obtained by both the Americans and Soviets, as were its designers. The initial driving force for further development of the technology was a weapons race for intercontinental ballistic missiles (ICBMs) to be used as long-range carriers for fast nuclear weapon delivery, but in 1961, when the Soviet Union launched the first man into space, the United States declared itself to be in a “Space Race” with the Soviets.

Konstantin Tsiolkovsky, Robert Goddard, Hermann Oberth, and Reinhold Tiling laid the groundwork of rocketry in the early years of the 20th century.

Wernher von Braun was the lead rocket engineer for Nazi Germany’s World War II V-2 rocket project. In the last days of the war he led a caravan of workers in the German rocket program to the American lines, where they surrendered and were brought to the United States to work on their rocket development (“Operation Paperclip”). He acquired American citizenship and led the team that developed and launched Explorer 1, the first American satellite. Von Braun later led the team at NASA’s Marshall Space Flight Center which developed the Saturn V moon rocket.

Initially the race for space was often led by Sergei Korolyov, whose legacy includes both the R-7 and Soyuz, which remain in service to this day. Korolyov was the mastermind behind the first satellite, the first man (and first woman) in orbit, and the first spacewalk. Until his death his identity was a closely guarded state secret; not even his mother knew that he was responsible for creating the Soviet space program.

Kerim Kerimov was one of the founders of the Soviet space program and was one of the lead architects behind the first human spaceflight (Vostok 1) alongside Sergey Korolyov. After Korolyov’s death in 1966, Kerimov became the lead scientist of the Soviet space program and was responsible for the launch of the first space stations from 1971 to 1991, including the Salyut and Mir series, and their precursors in 1967, the Cosmos 186 and Cosmos 188.[17][18]

Although the Sun will probably not be physically explored at all, the study of the Sun has nevertheless been a major focus of space exploration. Being above the atmosphere and outside Earth’s magnetic field gives access to the solar wind and to infrared and ultraviolet radiation that cannot reach Earth’s surface. The Sun generates most space weather, which can affect power generation and transmission systems on Earth and interfere with, and even damage, satellites and space probes. Numerous spacecraft dedicated to observing the Sun, beginning with the Apollo Telescope Mount, have been launched, and still others have had solar observation as a secondary objective. Parker Solar Probe, planned for a 2018 launch, will approach the Sun to within 1/8th the orbit of Mercury.

Mercury remains the least explored of the terrestrial planets. As of May 2013, the Mariner 10 and MESSENGER missions have been the only missions to have made close observations of Mercury. MESSENGER entered orbit around Mercury in March 2011, to further investigate the observations made by Mariner 10 in 1975 (Munsell, 2006b).

A third mission to Mercury, BepiColombo, scheduled to arrive in 2025, is to include two probes. BepiColombo is a joint mission between Japan and the European Space Agency. MESSENGER and BepiColombo are intended to gather complementary data to help scientists understand many of the mysteries discovered by Mariner 10’s flybys.

Flights to other planets within the Solar System are accomplished at a cost in energy, which is described by the net change in velocity of the spacecraft, or delta-v. Due to the relatively high delta-v to reach Mercury and its proximity to the Sun, it is difficult to explore and orbits around it are rather unstable.
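Delta-v connects directly to propellant requirements through the Tsiolkovsky rocket equation, dv = Isp · g0 · ln(m0/mf). The snippet below is a generic illustration of that relationship; the stage parameters are hypothetical and not taken from any particular Mercury mission.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2


def delta_v(isp_s: float, m0_kg: float, mf_kg: float) -> float:
    """Ideal delta-v from the Tsiolkovsky rocket equation:
    dv = Isp * g0 * ln(m0 / mf), where m0 is the initial mass
    and mf the final (dry) mass."""
    return isp_s * G0 * math.log(m0_kg / mf_kg)


# Hypothetical stage: Isp of 450 s, 100 t at ignition, 20 t at burnout.
print(delta_v(450, 100_000, 20_000))  # ≈ 7,100 m/s
```

Because delta-v grows only logarithmically with mass ratio, each extra kilometer per second of required delta-v is disproportionately expensive in propellant, which is why high-delta-v targets such as Mercury are hard to reach.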

Venus was the first target of interplanetary flyby and lander missions and, despite having one of the most hostile surface environments in the Solar System, has had more landers sent to it (nearly all from the Soviet Union) than any other planet in the Solar System. The first successful Venus flyby was the American Mariner 2 spacecraft, which flew past Venus in 1962. Mariner 2 has been followed by several other flybys by multiple space agencies, often as part of missions using a Venus flyby to provide a gravitational assist en route to other celestial bodies. In 1967, Venera 4 became the first probe to enter and directly examine the atmosphere of Venus. In 1970, Venera 7 became the first successful lander to reach the surface of Venus, and by 1985 it had been followed by eight additional successful Soviet Venus landers which provided images and other direct surface data. Starting in 1975 with the Soviet orbiter Venera 9, some ten successful orbiter missions have been sent to Venus, including later missions which were able to map the surface of Venus using radar to pierce the obscuring atmosphere.

Space exploration has been used as a tool to understand Earth as a celestial object in its own right. Orbital missions can provide data for Earth that can be difficult or impossible to obtain from a purely ground-based point of reference.

For example, the existence of the Van Allen radiation belts was unknown until their discovery by the United States’ first artificial satellite, Explorer 1. These belts contain radiation trapped by Earth’s magnetic fields, which currently renders construction of habitable space stations above 1,000 km impractical. Following this early unexpected discovery, a large number of Earth observation satellites have been deployed specifically to explore Earth from a space-based perspective. These satellites have significantly contributed to the understanding of a variety of Earth-based phenomena. For instance, the hole in the ozone layer was found by an artificial satellite that was exploring Earth’s atmosphere, and satellites have allowed for the discovery of archeological sites or geological formations that were difficult or impossible to otherwise identify.

The Moon was the first celestial body to be the object of space exploration. It holds the distinctions of being the first remote celestial object to be flown by, orbited, and landed upon by spacecraft, and the only remote celestial object ever to be visited by humans.

In 1959 the Soviets obtained the first images of the far side of the Moon, never previously visible to humans. The U.S. exploration of the Moon began with the Ranger 4 impactor in 1962. Starting in 1966 the Soviets successfully deployed a number of landers to the Moon which were able to obtain data directly from the Moon’s surface; just four months later, Surveyor 1 marked the debut of a successful series of U.S. landers. The Soviet uncrewed missions culminated in the Lunokhod program in the early 1970s, which included the first uncrewed rovers and also successfully brought lunar soil samples to Earth for study. This marked the first (and to date the only) automated return of extraterrestrial soil samples to Earth. Uncrewed exploration of the Moon continues with various nations periodically deploying lunar orbiters, and in 2008 the Indian Moon Impact Probe.

Crewed exploration of the Moon began in 1968 with the Apollo 8 mission that successfully orbited the Moon, the first time any extraterrestrial object was orbited by humans. In 1969, the Apollo 11 mission marked the first time humans set foot upon another world. Crewed exploration of the Moon did not continue for long, however. The Apollo 17 mission in 1972 marked the most recent human visit there, and the next, Exploration Mission 2, is due to orbit the Moon in 2021. Robotic missions are still pursued vigorously.

The exploration of Mars has been an important part of the space exploration programs of the Soviet Union (later Russia), the United States, Europe, Japan and India. Dozens of robotic spacecraft, including orbiters, landers, and rovers, have been launched toward Mars since the 1960s. These missions were aimed at gathering data about current conditions and answering questions about the history of Mars. The questions raised by the scientific community are expected to not only give a better appreciation of the red planet but also yield further insight into the past, and possible future, of Earth.

The exploration of Mars has come at a considerable financial cost, with roughly two-thirds of all spacecraft destined for Mars failing before completing their missions, and some failing before they even began. Such a high failure rate can be attributed to the complexity and large number of variables involved in an interplanetary journey, and has led researchers to jokingly speak of the “Great Galactic Ghoul”[20] which subsists on a diet of Mars probes. This phenomenon is also informally known as the “Mars Curse”.[21] In contrast to the overall high failure rate in the exploration of Mars, India has become the first country to achieve success on its maiden attempt. India’s Mars Orbiter Mission (MOM)[22][23][24] is one of the least expensive interplanetary missions ever undertaken, with an approximate total cost of ₹450 crore (US$73 million).[25][26] The first mission to Mars by any Arab country has been taken up by the United Arab Emirates. Called the Emirates Mars Mission, it is scheduled for launch in 2020. The uncrewed exploratory probe has been named the “Hope Probe” and will be sent to Mars to study its atmosphere in detail.[27]

The Russian space mission Fobos-Grunt, which launched on 9 November 2011, experienced a failure that left it stranded in low Earth orbit.[28] It was intended to begin exploration of Phobos and of Martian circumterrestrial orbit, and to study whether the moons of Mars, or at least Phobos, could be a “trans-shipment point” for spaceships traveling to Mars.[29]

The exploration of Jupiter has consisted solely of a number of automated NASA spacecraft visiting the planet since 1973. A large majority of the missions have been “flybys”, in which detailed observations are taken without the probe landing or entering orbit, as in the Pioneer and Voyager programs. The Galileo and Juno spacecraft are the only spacecraft to have entered orbit around the planet. As Jupiter is believed to have only a relatively small rocky core and no real solid surface, a landing mission is nearly impossible.

Reaching Jupiter from Earth requires a delta-v of 9.2 km/s,[30] which is comparable to the 9.7 km/s delta-v needed to reach low Earth orbit.[31] Fortunately, gravity assists through planetary flybys can be used to reduce the energy required at launch to reach Jupiter, albeit at the cost of a significantly longer flight duration.[30]
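One way to see why gravity assists are attractive is to convert a delta-v budget into a required propellant fraction via the rocket equation. The sketch below assumes, purely for illustration, a single ideal hydrogen/oxygen stage with an Isp of about 450 s supplying the 9.2 km/s figure quoted above.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2


def propellant_fraction(delta_v_m_s: float, isp_s: float) -> float:
    """Fraction of initial mass that must be propellant to supply
    `delta_v_m_s` with a single ideal stage, by the rocket equation."""
    ve = isp_s * G0  # effective exhaust velocity
    return 1.0 - math.exp(-delta_v_m_s / ve)


# Direct 9.2 km/s with an assumed Isp of 450 s: roughly 88% of the
# stage's initial mass must be propellant, which is why trimming the
# required delta-v with planetary flybys is so valuable.
print(propellant_fraction(9200, 450))  # ≈ 0.88
```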

Jupiter has 69 known moons, about many of which relatively little is known.

Saturn has been explored only through uncrewed spacecraft launched by NASA, including one mission (Cassini–Huygens) planned and executed in cooperation with other space agencies. These missions consist of flybys in 1979 by Pioneer 11, in 1980 by Voyager 1, in 1982 by Voyager 2, and an orbital mission by the Cassini spacecraft, which lasted from 2004 until 2017.

Saturn has at least 62 known moons, although the exact number is debatable since Saturn’s rings are made up of vast numbers of independently orbiting objects of varying sizes. The largest of the moons is Titan, which holds the distinction of being the only moon in the Solar System with an atmosphere denser and thicker than that of Earth. Titan holds the distinction of being the only object in the Outer Solar System that has been explored with a lander, the Huygens probe deployed by the Cassini spacecraft.

The exploration of Uranus has been entirely through the Voyager 2 spacecraft, with no other visits currently planned. Given its axial tilt of 97.77°, with its polar regions exposed to sunlight or darkness for long periods, scientists were not sure what to expect at Uranus. The closest approach to Uranus occurred on 24 January 1986. Voyager 2 studied the planet’s unique atmosphere and magnetosphere. Voyager 2 also examined its ring system and the moons of Uranus, including all five of the previously known moons, while discovering an additional ten previously unknown moons.

Images of Uranus proved to have a very uniform appearance, with no evidence of the dramatic storms or atmospheric banding evident on Jupiter and Saturn. Great effort was required to even identify a few clouds in the images of the planet. The magnetosphere of Uranus, however, proved to be unique, being profoundly affected by the planet’s unusual axial tilt. In contrast to the bland appearance of Uranus itself, striking images were obtained of the moons of Uranus, including evidence that Miranda had been unusually geologically active.

The exploration of Neptune began with the 25 August 1989 Voyager 2 flyby, the sole visit to the system as of 2014. The possibility of a Neptune Orbiter has been discussed, but no other missions have been given serious thought.

Although the extremely uniform appearance of Uranus during Voyager 2’s visit in 1986 had led to expectations that Neptune would also have few visible atmospheric phenomena, the spacecraft found that Neptune had obvious banding, visible clouds, auroras, and even a conspicuous anticyclone storm system rivaled in size only by Jupiter’s Great Red Spot. Neptune also proved to have the fastest winds of any planet in the Solar System, measured as high as 2,100 km/h.[32] Voyager 2 also examined Neptune’s ring and moon system. It discovered previously unknown complete rings and additional partial ring “arcs” around Neptune. In addition to examining Neptune’s three previously known moons, Voyager 2 also discovered five previously unknown moons, one of which, Proteus, proved to be the second-largest moon in the system. Data from Voyager 2 supported the view that Neptune’s largest moon, Triton, is a captured Kuiper belt object.[33]

The dwarf planet Pluto presents significant challenges for spacecraft because of its great distance from Earth (requiring high velocity for reasonable trip times) and small mass (making capture into orbit very difficult at present). Voyager 1 could have visited Pluto, but controllers opted instead for a close flyby of Saturn’s moon Titan, resulting in a trajectory incompatible with a Pluto flyby. Voyager 2 never had a plausible trajectory for reaching Pluto.[34]

Pluto continues to be of great interest, despite its reclassification as the lead and nearest member of a new and growing class of distant icy bodies of intermediate size (and also the first member of the important subclass, defined by orbit and known as “plutinos”). After an intense political battle, a mission to Pluto dubbed New Horizons was granted funding from the United States government in 2003.[35] New Horizons was launched successfully on 19 January 2006. In early 2007 the craft made use of a gravity assist from Jupiter. Its closest approach to Pluto was on 14 July 2015; scientific observations of Pluto began five months prior to closest approach and continued for 16 days after the encounter.

Until the advent of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes, their shapes and terrain remaining a mystery. Several asteroids have now been visited by probes, the first of which was Galileo, which flew past two: 951 Gaspra in 1991, followed by 243 Ida in 1993. Both of these lay near enough to Galileo’s planned trajectory to Jupiter that they could be visited at acceptable cost. The first landing on an asteroid was performed by the NEAR Shoemaker probe in 2000, following an orbital survey of the object. The dwarf planet Ceres and the asteroid 4 Vesta, two of the three largest asteroids, were visited by NASA’s Dawn spacecraft, launched in 2007.

Although many comets have been studied from Earth, sometimes with centuries’ worth of observations, only a few comets have been closely visited. In 1985, the International Cometary Explorer conducted the first comet flyby (21P/Giacobini–Zinner) before joining the Halley Armada studying the famous comet. The Deep Impact probe smashed into 9P/Tempel to learn more about its structure and composition, and the Stardust mission returned samples of another comet’s tail. The Philae lander successfully landed on Comet Churyumov–Gerasimenko in 2014 as part of the broader Rosetta mission.

Hayabusa was an unmanned spacecraft developed by the Japan Aerospace Exploration Agency to return a sample of material from the small near-Earth asteroid 25143 Itokawa to Earth for further analysis. Hayabusa was launched on 9 May 2003 and rendezvoused with Itokawa in mid-September 2005. After arriving at Itokawa, Hayabusa studied the asteroid’s shape, spin, topography, color, composition, density, and history. In November 2005, it landed on the asteroid to collect samples. The spacecraft returned to Earth on 13 June 2010.

Deep space exploration is the branch of astronomy, astronautics and space technology that is involved with the exploration of distant regions of outer space.[36] Physical exploration of space is conducted both by human spaceflights (deep-space astronautics) and by robotic spacecraft.

Some of the best candidates for future deep space engine technologies include anti-matter, nuclear power and beamed propulsion.[37] The latter, beamed propulsion, appears to be the best candidate for deep space exploration presently available, since it uses known physics and known technology that is being developed for other purposes.[38]

In the 2000s, several plans for space exploration were announced; both government entities and the private sector have space exploration objectives. China has announced plans to have a 60-ton multi-module space station in orbit by 2020.

The NASA Authorization Act of 2010 provided a re-prioritized list of objectives for the American space program, as well as funding for the first priorities. NASA proposes to move forward with the development of the Space Launch System (SLS), which will be designed to carry the Orion Multi-Purpose Crew Vehicle, as well as important cargo, equipment, and science experiments to Earth’s orbit and destinations beyond. Additionally, the SLS will serve as a backup for commercial and international partner transportation services to the International Space Station. The SLS rocket will incorporate technological investments from the Space Shuttle program and the Constellation program in order to take advantage of proven hardware and reduce development and operations costs. The first developmental flight is targeted for the end of 2017.[39]

The idea of using highly automated systems for space missions has become a desirable goal for space agencies around the world. Such systems are believed to yield benefits such as lower cost, reduced human oversight, and the ability to explore deeper into space, which is usually restricted by long communication delays with human controllers.[40]

Autonomy is defined by three requirements:[40]

Autonomous technologies would be able to perform beyond predetermined actions. They would analyze all possible states and events happening around them and come up with a safe response. In addition, such technologies can reduce launch cost and ground involvement, and performance would increase as well. Autonomous systems would be able to respond quickly upon encountering an unforeseen event, especially in deep space exploration, where communication back to Earth would take too long.[40]

NASA began its Autonomous Sciencecraft Experiment (ASE) on Earth Observing-1 (EO-1), NASA’s first Earth-observing satellite of the New Millennium Program, launched on 21 November 2000. The autonomy software of ASE is capable of on-board science analysis, replanning, robust execution, and, later, model-based diagnostics. Images obtained by EO-1 are analyzed on board and downlinked when a change or an interesting event occurs. The ASE software has successfully provided over 10,000 science images.[40]
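The detect-then-downlink loop described above can be illustrated with a toy change detector. The function name, frame representation, and threshold below are all hypothetical stand-ins, not part of the actual ASE software:

```python
def frame_is_interesting(prev: list[float], curr: list[float],
                         threshold: float = 0.2) -> bool:
    """Flag a frame for downlink when mean brightness shifts noticeably.

    A stand-in for on-board science analysis: real systems use trained
    classifiers and mission-specific triggers, not a single threshold.
    """
    mean_prev = sum(prev) / len(prev)
    mean_curr = sum(curr) / len(curr)
    return abs(mean_curr - mean_prev) > threshold

# Toy frames: a sudden brightening (say, a volcanic hotspot) is flagged,
# while an unchanged scene is not, saving downlink bandwidth.
quiet = [0.10, 0.11, 0.09, 0.10]
flare = [0.40, 0.42, 0.39, 0.41]
print(frame_is_interesting(quiet, flare))   # True
print(frame_is_interesting(quiet, quiet))   # False
```

The point of the sketch is the bandwidth economics: only frames that pass the on-board test are queued for the slow downlink, so ground controllers are not in the loop for routine imagery.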

An article in the science journal Nature suggested the use of asteroids as a gateway for space exploration, with the ultimate destination being Mars.[41] In order to make such an approach viable, three requirements need to be fulfilled: first, “a thorough asteroid survey to find thousands of nearby bodies suitable for astronauts to visit”; second, “extending flight duration and distance capability to ever-increasing ranges out to Mars”; and finally, “developing better robotic vehicles and tools to enable astronauts to explore an asteroid regardless of its size, shape or spin.”[41] Furthermore, using asteroids would provide astronauts with protection from galactic cosmic rays, with mission crews being able to land on them in times of greater risk of radiation exposure.[42]

The research that is conducted by national space exploration agencies, such as NASA and Roscosmos, is one of the reasons supporters cite to justify government expenses. Economic analyses of NASA programs have often shown ongoing economic benefits (such as NASA spin-offs) generating many times the revenue of the cost of the program.[43] It is also argued that space exploration would lead to the extraction of resources on other planets and especially asteroids, which contain billions of dollars’ worth of minerals and metals, and that such expeditions could generate substantial revenue.[44] It has also been argued that space exploration programs help inspire youth to study science and engineering.[45]

Another claim is that space exploration is a necessity for mankind and that staying on Earth will lead to extinction. Among the cited reasons are the exhaustion of natural resources, comet impacts, nuclear war, and worldwide epidemics. Stephen Hawking, the renowned British theoretical physicist, said: “I don’t think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I’m an optimist. We will reach out to the stars.”[46]

NASA has produced a series of public service announcement videos supporting the concept of space exploration.[47]

Overall, the public remains largely supportive of both crewed and uncrewed space exploration. According to an Associated Press Poll conducted in July 2003, 71% of U.S. citizens agreed with the statement that the space program is “a good investment”, compared to 21% who did not.[48]

Arthur C. Clarke (1950) presented a summary of motivations for the human exploration of space in his non-fiction semi-technical monograph Interplanetary Flight.[49] He argued that humanity’s choice is essentially between expansion off Earth into space, versus cultural (and eventually biological) stagnation and death.

Spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space.

Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.

A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of Earth. Once in space, the motion of a spacecraft, both when unpropelled and when under propulsion, is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.
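The cost of that initial climb is usually framed with the Tsiolkovsky rocket equation; a minimal sketch, with illustrative numbers rather than any particular vehicle:

```python
import math

def delta_v(isp_s: float, m0_kg: float, mf_kg: float) -> float:
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    g0 = 9.80665  # standard gravity, m/s^2
    return isp_s * g0 * math.log(m0_kg / mf_kg)

# Illustrative stage: Isp of 350 s, 90% of liftoff mass is propellant.
dv = delta_v(isp_s=350, m0_kg=100_000, mf_kg=10_000)
print(f"{dv:.0f} m/s")  # about 7900 m/s
```

A 10:1 mass ratio at a 350 s specific impulse yields roughly the ~7.8 km/s of low-Earth-orbit speed, which is why practical launch vehicles stage.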

Satellites are used for a large number of purposes. Common types include military (spy) and civilian Earth observation satellites, communication satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites.

Current examples of the commercial use of space include satellite navigation systems, satellite television and satellite radio. Space tourism is the recent phenomenon of space travel by individuals for the purpose of personal pleasure.

Private spaceflight companies such as SpaceX and Blue Origin, and commercial space station projects such as the Axiom Space Station and the Bigelow Commercial Space Station, have dramatically changed the landscape of space exploration, and will continue to do so in the near future.

Astrobiology is the interdisciplinary study of life in the universe, combining aspects of astronomy, biology and geology.[50] It is focused primarily on the study of the origin, distribution and evolution of life. It is also known as exobiology (from Greek: ἔξω, exo, “outside”).[51][52][53] The term “xenobiology” has been used as well, but this is technically incorrect because its terminology means “biology of the foreigners”.[54] Astrobiologists must also consider the possibility of life that is chemically entirely distinct from any life found on Earth.[55] In the Solar System some of the prime locations for current or past astrobiology are on Enceladus, Europa, Mars, and Titan.

Space colonization, also called space settlement and space humanization, would be the permanent autonomous (self-sufficient) human habitation of locations outside Earth, especially of natural satellites or planets such as the Moon or Mars, using significant amounts of in-situ resource utilization.

To date, the longest human occupation of space is aboard the International Space Station, which has been continuously inhabited for 17 years and 223 days. Valeri Polyakov’s record single spaceflight of almost 438 days aboard the Mir space station has not been surpassed. Long-term stays in space reveal issues with bone and muscle loss in low gravity, immune system suppression, and radiation exposure.

Many past and current concepts for the continued exploration and colonization of space focus on a return to the Moon as a “stepping stone” to the other planets, especially Mars. At the end of 2006 NASA announced they were planning to build a permanent Moon base with continual presence by 2024.[57]

Beyond the technical factors that could make living in space more widespread, it has been suggested that the lack of private property, that is, the inability or difficulty of establishing property rights in space, has been an impediment to the development of space for human habitation. Since the advent of space technology in the latter half of the twentieth century, the ownership of property in space has been murky, with strong arguments both for and against. In particular, the making of national territorial claims in outer space and on celestial bodies has been specifically proscribed by the Outer Space Treaty, which had been, as of 2012, ratified by all spacefaring nations.[58]

In the 2000s, the People’s Republic of China initiated a successful manned spaceflight program, while the European Union, Japan, and India have also planned future crewed space missions. China, Russia, Japan, and India have advocated crewed missions to the Moon during the 21st century, while the European Union has advocated manned missions to both the Moon and Mars during the 21st century.

From the 1990s onwards, private interests began promoting space tourism and then public space exploration of the Moon (see Google Lunar X Prize).

The highest known projectiles prior to the rockets of the 1940s were the shells of the Paris Gun, a type of German long-range siege gun, which reached at least 40 kilometers altitude during World War I.[6] Steps towards putting a human-made object into space were taken by German scientists during World War II while testing the V-2 rocket, which became the first human-made object in space on 3 October 1942 with the launch of the A-4. After the war, the U.S. used German scientists and their captured rockets in programs for both military and civilian research. The first scientific exploration from space was the cosmic radiation experiment launched by the U.S. on a V-2 rocket on 10 May 1946.[7] The first images of Earth taken from space followed the same year,[8][9] while the first animal experiment saw fruit flies lifted into space in 1947, both also on modified V-2s launched by Americans. Starting in 1947, the Soviets, also with the help of German teams, launched sub-orbital V-2 rockets and their own variant, the R-1, including radiation and animal experiments on some flights. These suborbital experiments only allowed a very short time in space, which limited their usefulness.

The first successful orbital launch was of the Soviet uncrewed Sputnik 1 (“Satellite 1”) mission on 4 October 1957. The satellite weighed about 83 kg (183 lb), and is believed to have orbited Earth at a height of about 250 km (160 mi). It had two radio transmitters (20 and 40 MHz), which emitted “beeps” that could be heard by radios around the globe. Analysis of the radio signals was used to gather information about the electron density of the ionosphere, while temperature and pressure data was encoded in the duration of radio beeps. The results indicated that the satellite was not punctured by a meteoroid. Sputnik 1 was launched by an R-7 rocket. It burned up upon re-entry on 3 January 1958.
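Those figures are consistent with basic orbital mechanics; a quick back-of-the-envelope check at roughly Sputnik 1’s quoted altitude (a circular-orbit idealization):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000        # mean Earth radius, m

def circular_orbit(alt_m: float) -> tuple[float, float]:
    """Speed (m/s) and period (s) of a circular orbit at a given altitude."""
    r = R_EARTH + alt_m
    v = math.sqrt(MU_EARTH / r)   # circular-orbit speed from vis-viva
    return v, 2 * math.pi * r / v

v, period = circular_orbit(250_000)
print(f"~{v/1000:.1f} km/s, ~{period/60:.0f} min per orbit")
```

This gives about 7.8 km/s and a period near 90 minutes; Sputnik 1’s real orbit was elliptical, with an apogee well above 250 km, so its actual period was somewhat longer, about 96 minutes.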

The second orbital launch was Sputnik 2. Launched by the USSR on 3 November 1957, it carried the dog Laika, who became the first animal in orbit.

This success led to an escalation of the American space program, which unsuccessfully attempted to launch a Vanguard satellite into orbit two months later. On 31 January 1958, the U.S. successfully orbited Explorer 1 on a Juno rocket.

The first successful human spaceflight was Vostok 1 (“East 1”), carrying 27-year-old Russian cosmonaut Yuri Gagarin on 12 April 1961. The spacecraft completed one orbit around the globe, lasting about 1 hour and 48 minutes. Gagarin’s flight resonated around the world; it was a demonstration of the advanced Soviet space program and it opened an entirely new era in space exploration: human spaceflight.

The U.S. first launched a person into space within a month of Vostok 1 with Alan Shepard’s suborbital flight on Freedom 7. Orbital flight was achieved by the United States when John Glenn’s Friendship 7 orbited Earth on 20 February 1962.

Valentina Tereshkova, the first woman in space, orbited Earth 48 times aboard Vostok 6 on 16 June 1963.

China first launched a person into space 42 years after the launch of Vostok 1, on 15 October 2003, with the flight of Yang Liwei aboard the Shenzhou 5 (Divine Vessel 5) spacecraft.

The first artificial object to reach another celestial body was Luna 2 in 1959.[10] The first automatic landing on another celestial body was performed by Luna 9[11] in 1966. Luna 10 became the first artificial satellite of the Moon.[12]

The first crewed landing on another celestial body was performed by Apollo 11 on 20 July 1969.

The first successful interplanetary flyby was the 1962 Mariner 2 flyby of Venus (closest approach 34,773 kilometers). The other planets were first flown by Mariner 4 (Mars, 1965), Pioneer 10 (Jupiter, 1973), Mariner 10 (Mercury, 1974), Pioneer 11 (Saturn, 1979), and Voyager 2 (Uranus, 1986; Neptune, 1989). In 2015, the dwarf planets Ceres and Pluto were orbited by Dawn and passed by New Horizons, respectively.

The first interplanetary surface mission to return at least limited surface data from another planet was the 1970 landing of Venera 7 on Venus, which returned data to Earth for 23 minutes. In 1975, Venera 9 was the first to return images from the surface of another planet. In 1971 the Mars 3 mission achieved the first soft landing on Mars, returning data for almost 20 seconds. Later, much longer-duration surface missions were achieved, including over six years of Mars surface operation by Viking 1 from 1975 to 1982 and over two hours of transmission from the surface of Venus by Venera 13 in 1982, the longest ever Soviet planetary surface mission.

The dream of stepping into the outer reaches of Earth’s atmosphere was driven by the fiction of writers such as Jules Verne[13][14][15] and H. G. Wells,[16] and rocket technology was developed to try to realize this vision. The German V-2 was the first rocket to travel into space, overcoming the problems of thrust and material failure. During the final days of World War II this technology was obtained by both the Americans and Soviets, as were its designers. The initial driving force for further development of the technology was a weapons race for intercontinental ballistic missiles (ICBMs) to be used as long-range carriers for fast nuclear weapon delivery, but in 1961, when the Soviet Union launched the first man into space, the United States declared itself to be in a “Space Race” with the Soviets.

Konstantin Tsiolkovsky, Robert Goddard, Hermann Oberth, and Reinhold Tiling laid the groundwork of rocketry in the early years of the 20th century.

Wernher von Braun was the lead rocket engineer for Nazi Germany’s World War II V-2 rocket project. In the last days of the war he led a caravan of workers in the German rocket program to the American lines, where they surrendered and were brought to the United States to work on their rocket development (“Operation Paperclip”). He acquired American citizenship and led the team that developed and launched Explorer 1, the first American satellite. Von Braun later led the team at NASA’s Marshall Space Flight Center which developed the Saturn V moon rocket.

Initially the race for space was often led by Sergei Korolyov, whose legacy includes both the R-7 and Soyuz, which remain in service to this day. Korolyov was the mastermind behind the first satellite, the first man (and first woman) in orbit, and the first spacewalk. Until his death his identity was a closely guarded state secret; not even his mother knew that he was responsible for creating the Soviet space program.

Kerim Kerimov was one of the founders of the Soviet space program and was one of the lead architects behind the first human spaceflight (Vostok 1) alongside Sergey Korolyov. After Korolyov’s death in 1966, Kerimov became the lead scientist of the Soviet space program and was responsible for the launch of the first space stations from 1971 to 1991, including the Salyut and Mir series, and their precursors in 1967, the Cosmos 186 and Cosmos 188.[17][18]

Although the Sun will probably not be physically explored at all, the study of the Sun has nevertheless been a major focus of space exploration. Being above the atmosphere, and Earth’s magnetic field in particular, gives access to the solar wind and to the infrared and ultraviolet radiation that cannot reach Earth’s surface. The Sun generates most space weather, which can affect power generation and transmission systems on Earth and interfere with, and even damage, satellites and space probes. Numerous spacecraft dedicated to observing the Sun, beginning with the Apollo Telescope Mount, have been launched, and still others have had solar observation as a secondary objective. Parker Solar Probe, planned for a 2018 launch, will approach the Sun to within 1/8th the orbit of Mercury.

Mercury remains the least explored of the terrestrial planets. As of May 2013, the Mariner 10 and MESSENGER missions have been the only missions to make close observations of Mercury. MESSENGER entered orbit around Mercury in March 2011, to further investigate the observations made by Mariner 10 in 1975 (Munsell, 2006b).

A third mission to Mercury, BepiColombo, is to include two probes and is scheduled to arrive in 2020. BepiColombo is a joint mission between Japan and the European Space Agency. MESSENGER and BepiColombo are intended to gather complementary data to help scientists understand many of the mysteries discovered by Mariner 10’s flybys.

Flights to other planets within the Solar System are accomplished at a cost in energy, which is described by the net change in velocity of the spacecraft, or delta-v. Due to the relatively high delta-v needed to reach Mercury and its proximity to the Sun, it is difficult to explore, and orbits around it are rather unstable.
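To put that delta-v in perspective, a two-impulse Hohmann transfer between idealized circular heliocentric orbits already shows Mercury costing roughly three times Mars. This is a textbook sketch that ignores planetary gravity wells, launch windows, and gravity assists:

```python
import math

MU_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m

def hohmann_dv(r1: float, r2: float, mu: float = MU_SUN) -> float:
    """Total delta-v of a two-impulse Hohmann transfer between circular orbits."""
    a = (r1 + r2) / 2  # semi-major axis of the transfer ellipse
    dv1 = abs(math.sqrt(mu * (2 / r1 - 1 / a)) - math.sqrt(mu / r1))  # departure burn
    dv2 = abs(math.sqrt(mu / r2) - math.sqrt(mu * (2 / r2 - 1 / a)))  # arrival burn
    return dv1 + dv2

mercury = hohmann_dv(1.0 * AU, 0.387 * AU)
mars = hohmann_dv(1.0 * AU, 1.524 * AU)
print(f"Mercury: ~{mercury/1000:.1f} km/s, Mars: ~{mars/1000:.1f} km/s")
```

The heliocentric totals come out near 17 km/s for Mercury versus under 6 km/s for Mars, which is why real Mercury missions such as MESSENGER and BepiColombo rely on repeated planetary flybys rather than direct transfers.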

Venus was the first target of interplanetary flyby and lander missions and, despite one of the most hostile surface environments in the Solar System, has had more landers sent to it (nearly all from the Soviet Union) than any other planet in the Solar System. The first successful Venus flyby was the American Mariner 2 spacecraft, which flew past Venus in 1962. Mariner 2 has been followed by several other flybys by multiple space agencies often as part of missions using a Venus flyby to provide a gravitational assist en route to other celestial bodies. In 1967 Venera 4 became the first probe to enter and directly examine the atmosphere of Venus. In 1970, Venera 7 became the first successful lander to reach the surface of Venus and by 1985 it had been followed by eight additional successful Soviet Venus landers which provided images and other direct surface data. Starting in 1975 with the Soviet orbiter Venera 9 some ten successful orbiter missions have been sent to Venus, including later missions which were able to map the surface of Venus using radar to pierce the obscuring atmosphere.

Space exploration has been used as a tool to understand Earth as a celestial object in its own right. Orbital missions can provide data for Earth that can be difficult or impossible to obtain from a purely ground-based point of reference.

For example, the existence of the Van Allen radiation belts was unknown until their discovery by the United States’ first artificial satellite, Explorer 1. These belts contain radiation trapped by Earth’s magnetic fields, which currently renders construction of habitable space stations above 1,000 km impractical. Following this early unexpected discovery, a large number of Earth observation satellites have been deployed specifically to explore Earth from a space-based perspective. These satellites have significantly contributed to the understanding of a variety of Earth-based phenomena. For instance, the hole in the ozone layer was found by an artificial satellite that was exploring Earth’s atmosphere, and satellites have allowed for the discovery of archeological sites or geological formations that were difficult or impossible to otherwise identify.

The Moon was the first celestial body to be the object of space exploration. It holds the distinctions of being the first remote celestial object to be flown by, orbited, and landed upon by spacecraft, and the only remote celestial object ever to be visited by humans.

In 1959 the Soviets obtained the first images of the far side of the Moon, never previously visible to humans. The U.S. exploration of the Moon began with the Ranger 4 impactor in 1962. Starting in 1966 the Soviets successfully deployed a number of landers to the Moon which were able to obtain data directly from the Moon’s surface; just four months later, Surveyor 1 marked the debut of a successful series of U.S. landers. The Soviet uncrewed missions culminated in the Lunokhod program in the early 1970s, which included the first uncrewed rovers and also successfully brought lunar soil samples to Earth for study. This marked the first (and to date the only) automated return of extraterrestrial soil samples to Earth. Uncrewed exploration of the Moon continues with various nations periodically deploying lunar orbiters, and in 2008 the Indian Moon Impact Probe.

Crewed exploration of the Moon began in 1968 with the Apollo 8 mission that successfully orbited the Moon, the first time any extraterrestrial object was orbited by humans. In 1969, the Apollo 11 mission marked the first time humans set foot upon another world. Crewed exploration of the Moon did not continue for long, however. The Apollo 17 mission in 1972 marked the most recent human visit there, and the next, Exploration Mission 2, is due to orbit the Moon in 2021. Robotic missions are still pursued vigorously.

The exploration of Mars has been an important part of the space exploration programs of the Soviet Union (later Russia), the United States, Europe, Japan and India. Dozens of robotic spacecraft, including orbiters, landers, and rovers, have been launched toward Mars since the 1960s. These missions were aimed at gathering data about current conditions and answering questions about the history of Mars. The questions raised by the scientific community are expected to not only give a better appreciation of the red planet but also yield further insight into the past, and possible future, of Earth.

The exploration of Mars has come at a considerable financial cost, with roughly two-thirds of all spacecraft destined for Mars failing before completing their missions, and some failing before their observations even began. Such a high failure rate can be attributed to the complexity and large number of variables involved in an interplanetary journey, and has led researchers to jokingly speak of the Great Galactic Ghoul,[20] which subsists on a diet of Mars probes. This phenomenon is also informally known as the “Mars Curse”.[21] In contrast to the overall high failure rates in the exploration of Mars, India became the first country to succeed on its maiden attempt. India’s Mars Orbiter Mission (MOM)[22][23][24] is one of the least expensive interplanetary missions ever undertaken, with an approximate total cost of ₹450 crore (US$73 million).[25][26] The first mission to Mars by any Arab country has been undertaken by the United Arab Emirates. Called the Emirates Mars Mission, it is scheduled for launch in 2020. The uncrewed exploratory probe, named “Hope Probe”, will be sent to Mars to study its atmosphere in detail.[27]

The Russian space mission Fobos-Grunt, which launched on 9 November 2011, experienced a failure that left it stranded in low Earth orbit.[28] It was to explore Phobos from Martian orbit and to study whether the moons of Mars, or at least Phobos, could serve as a “trans-shipment point” for spaceships traveling to Mars.[29]

The exploration of Jupiter has consisted solely of a number of automated NASA spacecraft visiting the planet since 1973. A large majority of the missions have been “flybys”, in which detailed observations are taken without the probe landing or entering orbit, as in the Pioneer and Voyager programs. The Galileo and Juno spacecraft are the only spacecraft to have entered the planet’s orbit. As Jupiter is believed to have only a relatively small rocky core and no real solid surface, a landing mission is nearly impossible.

Reaching Jupiter from Earth requires a delta-v of 9.2 km/s,[30] which is comparable to the 9.7 km/s delta-v needed to reach low Earth orbit.[31] Fortunately, gravity assists through planetary flybys can be used to reduce the energy required at launch to reach Jupiter, albeit at the cost of a significantly longer flight duration.[30]
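The savings from a gravity assist come from borrowing momentum from the planet; a one-dimensional idealization gives the upper bound on the speed gain (toy numbers, not a mission trajectory):

```python
def max_slingshot_speed(v_in: float, v_planet: float) -> float:
    """Upper bound on heliocentric exit speed after an idealized 1-D flyby.

    In the planet's frame the flyby only turns the velocity around, so the
    best case is leaving at the same relative speed, now in the planet's
    direction of travel: v_out = v_planet + |v_in - v_planet|.
    """
    return v_planet + abs(v_in - v_planet)

# Toy numbers in km/s: a 10 km/s spacecraft overtaken by a 13 km/s planet
print(max_slingshot_speed(10.0, 13.0))  # 16.0
```

Real flybys turn the velocity through a limited angle set by the approach geometry, so they capture only part of this bound, which is why multi-flyby trajectories take so much longer than direct transfers.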

Jupiter has 69 known moons, many of which remain poorly understood.

Saturn has been explored only through uncrewed spacecraft launched by NASA, including one mission (Cassini–Huygens) planned and executed in cooperation with other space agencies. These missions consist of flybys in 1979 by Pioneer 11, in 1980 by Voyager 1, in 1981 by Voyager 2, and an orbital mission by the Cassini spacecraft, which lasted from 2004 until 2017.

Saturn has at least 62 known moons, although the exact number is debatable since Saturn’s rings are made up of vast numbers of independently orbiting objects of varying sizes. The largest of the moons is Titan, which holds the distinction of being the only moon in the Solar System with an atmosphere denser and thicker than that of Earth. Titan holds the distinction of being the only object in the Outer Solar System that has been explored with a lander, the Huygens probe deployed by the Cassini spacecraft.

The exploration of Uranus has been entirely through the Voyager 2 spacecraft, with no other visits currently planned. Given its axial tilt of 97.77°, with its polar regions exposed to sunlight or darkness for long periods, scientists were not sure what to expect at Uranus. The closest approach to Uranus occurred on 24 January 1986. Voyager 2 studied the planet’s unique atmosphere and magnetosphere. Voyager 2 also examined its ring system and the moons of Uranus including all five of the previously known moons, while discovering an additional ten previously unknown moons.

Images of Uranus proved to have a very uniform appearance, with no evidence of the dramatic storms or atmospheric banding evident on Jupiter and Saturn. Great effort was required to even identify a few clouds in the images of the planet. The magnetosphere of Uranus, however, proved to be completely unique, profoundly affected by the planet’s unusual axial tilt. In contrast to the bland appearance of Uranus itself, striking images were obtained of the moons of Uranus, including evidence that Miranda had been unusually geologically active.

The exploration of Neptune began with the 25 August 1989 Voyager 2 flyby, the sole visit to the system as of 2014. The possibility of a Neptune Orbiter has been discussed, but no other missions have been given serious thought.

Although the extremely uniform appearance of Uranus during Voyager 2’s visit in 1986 had led to expectations that Neptune would also have few visible atmospheric phenomena, the spacecraft found that Neptune had obvious banding, visible clouds, auroras, and even a conspicuous anticyclone storm system rivaled in size only by Jupiter’s Great Red Spot. Neptune also proved to have the fastest winds of any planet in the Solar System, measured as high as 2,100 km/h.[32] Voyager 2 also examined Neptune’s ring and moon system. It discovered complete rings and additional partial ring “arcs” around Neptune. In addition to examining Neptune’s three previously known moons, Voyager 2 also discovered five previously unknown moons, one of which, Proteus, proved to be the second-largest moon in the system. Data from Voyager 2 supported the view that Neptune’s largest moon, Triton, is a captured Kuiper belt object.[33]

The dwarf planet Pluto presents significant challenges for spacecraft because of its great distance from Earth (requiring high velocity for reasonable trip times) and small mass (making capture into orbit very difficult at present). Voyager 1 could have visited Pluto, but controllers opted instead for a close flyby of Saturn’s moon Titan, resulting in a trajectory incompatible with a Pluto flyby. Voyager 2 never had a plausible trajectory for reaching Pluto.[34]

Pluto continues to be of great interest, despite its reclassification as the lead and nearest member of a new and growing class of distant icy bodies of intermediate size (and also the first member of the important subclass, defined by orbit and known as “plutinos”). After an intense political battle, a mission to Pluto dubbed New Horizons was granted funding from the United States government in 2003.[35] New Horizons was launched successfully on 19 January 2006. In early 2007 the craft made use of a gravity assist from Jupiter. Its closest approach to Pluto was on 14 July 2015; scientific observations of Pluto began five months prior to closest approach and continued for 16 days after the encounter.

Until the advent of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes, their shapes and terrain remaining a mystery. Several asteroids have now been visited by probes, the first of which was Galileo, which flew past two: 951 Gaspra in 1991, followed by 243 Ida in 1993. Both of these lay near enough to Galileo’s planned trajectory to Jupiter that they could be visited at acceptable cost. The first landing on an asteroid was performed by the NEAR Shoemaker probe in 2000, following an orbital survey of the object. The dwarf planet Ceres and the asteroid 4 Vesta, two of the three largest asteroids, were visited by NASA’s Dawn spacecraft, launched in 2007.

Although many comets have been studied from Earth, sometimes with centuries’ worth of observations, only a few comets have been closely visited. In 1985, the International Cometary Explorer conducted the first comet flyby (21P/Giacobini–Zinner) before joining the Halley Armada studying the famous comet. The Deep Impact probe smashed into 9P/Tempel to learn more about its structure and composition, and the Stardust mission returned samples of another comet’s tail. The Philae lander successfully landed on Comet Churyumov–Gerasimenko in 2014 as part of the broader Rosetta mission.

Hayabusa was an unmanned spacecraft developed by the Japan Aerospace Exploration Agency to return a sample of material from the small near-Earth asteroid 25143 Itokawa to Earth for further analysis. Hayabusa was launched on 9 May 2003 and rendezvoused with Itokawa in mid-September 2005. After arriving at Itokawa, Hayabusa studied the asteroid’s shape, spin, topography, color, composition, density, and history. In November 2005, it landed on the asteroid to collect samples. The spacecraft returned to Earth on 13 June 2010.

Deep space exploration is the branch of astronomy, astronautics and space technology that is involved with the exploration of distant regions of outer space.[36] Physical exploration of space is conducted both by human spaceflights (deep-space astronautics) and by robotic spacecraft.

Some of the best candidates for future deep space engine technologies include anti-matter, nuclear power and beamed propulsion.[37] The latter, beamed propulsion, appears to be the best candidate for deep space exploration presently available, since it uses known physics and known technology that is being developed for other purposes.[38]

In the 2000s, several plans for space exploration were announced; both government entities and the private sector have space exploration objectives. China has announced plans to have a 60-ton multi-module space station in orbit by 2020.

The NASA Authorization Act of 2010 provided a re-prioritized list of objectives for the American space program, as well as funding for the first priorities. NASA proposes to move forward with the development of the Space Launch System (SLS), which will be designed to carry the Orion Multi-Purpose Crew Vehicle, as well as important cargo, equipment, and science experiments to Earth’s orbit and destinations beyond. Additionally, the SLS will serve as a backup for commercial and international partner transportation services to the International Space Station. The SLS rocket will incorporate technological investments from the Space Shuttle program and the Constellation program in order to take advantage of proven hardware and reduce development and operations costs. The first developmental flight is targeted for the end of 2017.[39]

The idea of using highly automated systems for space missions has become a desirable goal for space agencies all around the world. Such systems are believed to yield benefits such as lower cost, less human oversight, and the ability to explore deeper into space, which is otherwise limited by long communication delays with human controllers.[40]

Autonomy is defined by three requirements:[40]

Autonomous technologies would be able to perform beyond predetermined actions. They would analyze all possible states and events happening around them and come up with a safe response. In addition, such technologies can reduce launch cost and ground involvement. Performance would increase as well. Autonomy would be able to quickly respond upon encountering an unforeseen event, especially in deep space exploration where communication back to Earth would take too long.[40]

NASA began its Autonomous Sciencecraft Experiment (ASE) on Earth Observing-1 (EO-1), NASA’s first satellite in the New Millennium Program Earth-observing series, launched on 21 November 2000. The autonomy of ASE is capable of on-board science analysis, replanning, robust execution, and, later, model-based diagnostics. Images obtained by EO-1 are analyzed on board and downlinked when a change or an interesting event occurs. The ASE software has successfully provided over 10,000 science images.[40]

An article in science magazine Nature suggested the use of asteroids as a gateway for space exploration, with the ultimate destination being Mars.[41] In order to make such an approach viable, three requirements need to be fulfilled: first, “a thorough asteroid survey to find thousands of nearby bodies suitable for astronauts to visit”; second, “extending flight duration and distance capability to ever-increasing ranges out to Mars”; and finally, “developing better robotic vehicles and tools to enable astronauts to explore an asteroid regardless of its size, shape or spin.”[41] Furthermore, using asteroids would provide astronauts with protection from galactic cosmic rays, with mission crews being able to land on them in times of greater risk to radiation exposure.[42]

The research that is conducted by national space exploration agencies, such as NASA and Roscosmos, is one of the reasons supporters cite to justify government expenses. Economic analyses of the NASA programs often showed ongoing economic benefits (such as NASA spin-offs), generating many times the revenue of the cost of the program.[43] It is also argued that space exploration would lead to the extraction of resources on other planets and especially asteroids, which contain billions of dollars worth of minerals and metals. Such expeditions could generate a lot of revenue.[44] As well, it has been argued that space exploration programs help inspire youth to study in science and engineering.[45]

Another claim is that space exploration is a necessity to mankind and that staying on Earth will lead to extinction. Cited risks include the depletion of natural resources, comet impacts, nuclear war, and worldwide epidemics. Stephen Hawking, the renowned British theoretical physicist, said that “I don’t think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I’m an optimist. We will reach out to the stars.”[46]

NASA has produced a series of public service announcement videos supporting the concept of space exploration.[47]

Overall, the public remains largely supportive of both crewed and uncrewed space exploration. According to an Associated Press Poll conducted in July 2003, 71% of U.S. citizens agreed with the statement that the space program is “a good investment”, compared to 21% who did not.[48]

Arthur C. Clarke (1950) presented a summary of motivations for the human exploration of space in his non-fiction semi-technical monograph Interplanetary Flight.[49] He argued that humanity’s choice is essentially between expansion off Earth into space, versus cultural (and eventually biological) stagnation and death.

Spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space.

Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.

A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of Earth. Once in space, the motion of a spacecraft, both when unpropelled and when under propulsion, is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.

Satellites are used for a large number of purposes. Common types include military (spy) and civilian Earth observation satellites, communication satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites.

Current examples of the commercial use of space include satellite navigation systems, satellite television and satellite radio. Space tourism is the recent phenomenon of space travel by individuals for the purpose of personal pleasure.

Private spaceflight companies such as SpaceX and Blue Origin, and planned commercial space stations such as Axiom Station and the Bigelow Commercial Space Station, have dramatically changed the landscape of space exploration, and will continue to do so in the near future.

Astrobiology is the interdisciplinary study of life in the universe, combining aspects of astronomy, biology and geology.[50] It is focused primarily on the study of the origin, distribution and evolution of life. It is also known as exobiology (from Greek: ἔξω, exō, “outside”).[51][52][53] The term “xenobiology” has been used as well, but this is technically incorrect because its terminology means “biology of the foreigners”.[54] Astrobiologists must also consider the possibility of life that is chemically entirely distinct from any life found on Earth.[55] In the Solar System some of the prime locations for current or past astrobiology are on Enceladus, Europa, Mars, and Titan.

Space colonization, also called space settlement and space humanization, would be the permanent autonomous (self-sufficient) human habitation of locations outside Earth, especially of natural satellites or planets such as the Moon or Mars, using significant amounts of in-situ resource utilization.

To date, the longest human occupation of space is the International Space Station, which has been in continuous use for 17 years and 215 days. Valeri Polyakov’s record single spaceflight of almost 438 days aboard the Mir space station has not been surpassed. Long-term stays in space reveal issues with bone and muscle loss in low gravity, immune system suppression, and radiation exposure.

Many past and current concepts for the continued exploration and colonization of space focus on a return to the Moon as a “stepping stone” to the other planets, especially Mars. At the end of 2006 NASA announced they were planning to build a permanent Moon base with continual presence by 2024.[57]

Beyond the technical factors that could make living in space more widespread, it has been suggested that the lack of private property, the inability or difficulty in establishing property rights in space, has been an impediment to the development of space for human habitation. Since the advent of space technology in the latter half of the twentieth century, the ownership of property in space has been murky, with strong arguments both for and against. In particular, the making of national territorial claims in outer space and on celestial bodies has been specifically proscribed by the Outer Space Treaty, which had been, as of 2012, ratified by all spacefaring nations.[58]


How Long Would It Take To Travel To The Nearest Star …

We’ve all asked this question at some point in our lives: How long would it take to travel to the stars? Could it be within a person’s own lifetime, and could this kind of travel become the norm someday? There are many possible answers to this question, some very simple, others in the realms of science fiction. But coming up with a comprehensive answer means taking a lot of things into consideration.

Unfortunately, any realistic assessment is likely to produce answers that would totally discourage futurists and enthusiasts of interstellar travel. Like it or not, space is very large, and our technology is still very limited. But should we ever contemplate leaving the nest, we will have a range of options for getting to the nearest star systems in our galaxy.

The nearest star to Earth is our Sun, which is a fairly average star in the Hertzsprung–Russell Diagram’s Main Sequence. This means that it is highly stable, providing Earth with just the right type of sunlight for life to evolve on our planet. We know there are planets orbiting other stars near to our Solar System, and many of these stars are similar to our own.

In the future, should mankind wish to leave the Solar System, we’ll have a huge choice of stars we could travel to, and many could have the right conditions for life to thrive. But where would we go and how long would it take for us to get there? Just remember, this is all speculative and there is currently no benchmark for interstellar trips. That being said, here we go!

Over 2000 exoplanets have been identified, many of which are believed to be habitable. Credit: phl.upl.edu

As already noted, the closest star to our Solar System is Proxima Centauri, which is why it makes the most sense to plot an interstellar mission to this system first. As part of a triple star system called Alpha Centauri, Proxima is about 4.24 light years (or 1.3 parsecs) from Earth. Alpha Centauri is actually the brightest star of the three in the system, part of a closely orbiting binary 4.37 light years from Earth, whereas Proxima Centauri (the dimmest of the three) is an isolated red dwarf about 0.13 light years from the binary.

And while interstellar travel conjures up all kinds of visions of Faster-Than-Light (FTL) travel, ranging from warp speed and wormholes to jump drives, such theories are either highly speculative (such as the Alcubierre Drive) or entirely the province of science fiction. In all likelihood, any deep-space mission will take generations to arrive, rather than a few days or an instantaneous flash.

So, starting with one of the slowest forms of space travel, how long will it take to get to Proxima Centauri?

The question of how long would it take to get somewhere in space is somewhat easier when dealing with existing technology and bodies within our Solar System. For instance, using the technology that powered the New Horizons mission, which consisted of 16 thrusters fueled with hydrazine monopropellant, reaching the Moon would take a mere 8 hours and 35 minutes.

On the other hand, there is the European Space Agency’s (ESA) SMART-1 mission, which took its time traveling to the Moon using the method of ionic propulsion. With this revolutionary technology, a variation of which has since been used by the Dawn spacecraft to reach Vesta, the SMART-1 mission took one year, one month and two weeks to reach the Moon.

So, from the speedy rocket-propelled spacecraft to the economical ion drive, we have a few options for getting around local space, plus we could use Jupiter or Saturn for a hefty gravitational slingshot. However, if we were to contemplate missions to somewhere a little more out of the way, we would have to scale up our technology and look at what’s really possible.

When we say possible methods, we are talking about those that involve existing technology, or those that do not yet exist, but are technically feasible. Some, as you will see, are time-honored and proven, while others are emerging or still on the drawing board. In just about all cases though, they present a possible, but extremely time-consuming or expensive, scenario for getting to even the closest stars.

Ionic Propulsion: Currently, the slowest form of propulsion, and the most fuel-efficient, is the ion engine. A few decades ago, ionic propulsion was considered to be the subject of science fiction. However, in recent years, the technology to support ion engines has moved from theory to practice in a big way. The ESA’s SMART-1 mission, for example, successfully completed its mission to the Moon after taking a 13-month spiral path from the Earth.

SMART-1 used solar powered ion thrusters, where electrical energy was harvested from its solar panels and used to power its Hall-effect thrusters. Only 82 kg of xenon propellant was used to propel SMART-1 to the Moon. 1 kg of xenon propellant provided a delta-v of 45 m/s. This is a highly efficient form of propulsion, but it is by no means fast.
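As a rough illustration (a derived figure, not one from the mission itself, and treating the quoted 45 m/s per kilogram as a simple average rather than applying the rocket equation), the two SMART-1 numbers above combine into a total delta-v budget:

```python
# Total delta-v from SMART-1's quoted propellant figures (approximation:
# assumes the 45 m/s of delta-v per kg holds across the whole burn).
xenon_kg = 82                 # propellant used, per the article
delta_v_per_kg = 45           # m/s of delta-v per kg of xenon

total_delta_v = xenon_kg * delta_v_per_kg   # in m/s
print(f"{total_delta_v / 1000:.2f} km/s")   # prints "3.69 km/s"
```

That is less than half the delta-v of a typical launch to low Earth orbit, spread over many months of thrusting.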

Artist’s concept of the Dawn mission above Ceres. Since its arrival, the spacecraft turned around to point the blue glow of its ion engine in the opposite direction. Image credit: NASA/JPL

One of the first missions to use ion drive technology was Deep Space 1 (DS1), launched in 1998, which went on to fly by Comet Borrelly. DS1 also used a xenon-powered ion drive, consuming 81.5 kg of propellant. Over 20 months of thrusting, DS1 managed to reach a velocity of 56,000 km/hr (35,000 miles/hr) during its flyby of the comet.

Ion thrusters are therefore more economical than rocket technology, as the thrust per unit mass of propellant (a.k.a. specific impulse) is far higher. But it takes a long time for ion thrusters to accelerate a spacecraft to any great speed, and the maximum velocity it can achieve is dependent on its fuel supply and how much electrical energy it can generate.

So if ionic propulsion were to be used for a mission to Proxima Centauri, the thrusters would need a huge source of energy production (i.e. nuclear power) and a large quantity of propellant (although still less than conventional rockets). But based on the assumption that a supply of 81.5 kg of xenon propellant translates into a maximum velocity of 56,000 km/hr (and that there are no other forms of propulsion available, such as a gravitational slingshot to accelerate it further), some calculations can be made.

In short, at a maximum velocity of 56,000 km/h, Deep Space 1 would take over 81,000 years to traverse the 4.24 light years between Earth and Proxima Centauri. To put that time-scale into perspective, that would be over 2,700 human generations. So it is safe to say that an interplanetary ion engine mission would be far too slow to be considered for a manned interstellar mission.

Ionic propulsion is currently the slowest, but most fuel-efficient, form of space travel. Credit: NASA/JPL

But, should ion thrusters be made larger and more powerful (i.e. ion exhaust velocity would need to be significantly higher), and enough propellant could be hauled to keep the spacecraft going for the entire 4.243 light-year trip, that travel time could be greatly reduced. Still not enough to happen in someone’s lifetime though.

Gravity Assist Method: The fastest existing means of space travel is known as the Gravity Assist method, which involves a spacecraft using the relative movement (i.e. orbit) and gravity of a planet to alter its path and speed. Gravitational assists are a very useful spaceflight technique, especially when using the Earth or another massive planet (like a gas giant) for a boost in velocity.

The Mariner 10 spacecraft was the first to use this method, using Venus’s gravitational pull to slingshot it towards Mercury in February of 1974. In the 1980s, the Voyager 1 probe used Saturn and Jupiter for gravitational slingshots to attain its current velocity of 60,000 km/hr (38,000 miles/hr) and make it into interstellar space.

However, it was the Helios 2 mission, which was launched in 1976 to study the interplanetary medium from 0.3 AU to 1 AU from the Sun, that holds the record for the highest speed achieved with a gravity assist. At the time, Helios 1 (which launched in 1974) and Helios 2 held the record for closest approach to the Sun. Helios 2 was launched by a conventional NASA Titan/Centaur launch vehicle and placed in a highly elliptical orbit.

A Helios probe being encapsulated for launch. Credit: Public Domain

Due to the large eccentricity (0.54) of the 190 day solar orbit, at perihelion Helios 2 was able to reach a maximum velocity of over 240,000 km/hr (150,000 miles/hr). This orbital speed was attained by the gravitational pull of the Sun alone. Technically, the Helios 2 perihelion velocity was not a gravitational slingshot, it was a maximum orbital velocity, but it still holds the record for being the fastest man-made object regardless.

So, if Voyager 1 was traveling in the direction of the red dwarf Proxima Centauri at a constant velocity of 60,000 km/hr, it would take 76,000 years (or over 2,500 generations) to travel that distance. But if it could attain the record-breaking speed of Helios 2’s close approach of the Sun, a constant speed of 240,000 km/hr, it would take 19,000 years (or over 600 generations) to travel 4.243 light years. Significantly better, but still not in the realm of practicality.
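These constant-velocity figures are easy to sanity-check. Here is a minimal sketch (a deliberate simplification: it assumes a straight-line trip at one fixed speed, with no acceleration, deceleration, or course changes):

```python
# Sanity check of the constant-velocity trip times quoted above.
LIGHT_YEAR_KM = 9.4607e12        # kilometres in one light year
HOURS_PER_YEAR = 8766            # 365.25 days x 24 h
PROXIMA_LY = 4.24                # distance to Proxima Centauri

def years_to_proxima(speed_kmh: float) -> float:
    """Years to cover 4.24 light years at a constant speed in km/h."""
    return PROXIMA_LY * LIGHT_YEAR_KM / speed_kmh / HOURS_PER_YEAR

for name, speed_kmh in [("Deep Space 1 (ion drive)", 56_000),
                        ("Voyager 1 (gravity assists)", 60_000),
                        ("Helios 2 (perihelion speed)", 240_000)]:
    print(f"{name}: {years_to_proxima(speed_kmh):,.0f} years")
```

Running it reproduces the article’s rough figures: about 81,000, 76,000 and 19,000 years respectively.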

Electromagnetic (EM) Drive: Another proposed method of interstellar travel comes in the form of the Radio Frequency (RF) Resonant Cavity Thruster, also known as the EM Drive. Originally proposed in 2001 by Roger K. Shawyer, a UK scientist who started Satellite Propulsion Research Ltd (SPR) to bring it to fruition, this drive is built around the idea that electromagnetic microwave cavities can allow for the direct conversion of electrical energy to thrust.

Whereas conventional electromagnetic thrusters are designed to propel a certain type of mass (such as ionized particles), this particular drive system relies on no reaction mass and emits no directional radiation. Such a proposal has met with a great deal of skepticism, mainly because it violates the law of Conservation of Momentum which states that within a system, the amount of momentum remains constant and is neither created nor destroyed, but only changes through the action of forces.

The EM Drive prototype produced by NASA/Eagleworks. Credit: NASA Spaceflight Forum

However, recent experiments with the technology have apparently yielded positive results. In July of 2014, at the 50th AIAA/ASME/SAE/ASEE Joint Propulsion Conference in Cleveland, Ohio, researchers from NASA’s advanced propulsion research claimed that they had successfully tested a new design for an electromagnetic propulsion drive.

This was followed up in April of 2015 when researchers at NASA Eagleworks (part of the Johnson Space Center) claimed that they had successfully tested the drive in a vacuum, an indication that it might actually work in space. In July of that same year, a research team from the Dresden University of Technology’s Space System department built their own version of the engine and observed a detectable thrust.

Meanwhile, in 2010, Prof. Juan Yang of the Northwestern Polytechnical University in Xi’an, China, had begun publishing a series of papers about her research into EM Drive technology. This culminated in her 2012 paper where she reported higher input power (2.5 kW) and tested thrust (720 mN) levels. In 2014, she further reported extensive tests involving internal temperature measurements with embedded thermocouples, which seemed to confirm that the system worked.

Artist’s concept of an interstellar craft equipped with an EM Drive. Credit: NASA Spaceflight Center

According to calculations based on the NASA prototype (which yielded a power estimate of 0.4 N/kilowatt), a spacecraft equipped with the EM drive could make the trip to Pluto in less than 18 months. That’s one-sixth the time it took for the New Horizons probe to get there, traveling at speeds of close to 58,000 km/h (36,000 mph).

Sounds impressive. But even at that rate, it would take a ship equipped with EM engines over 13,000 years to make it to Proxima Centauri. Getting closer, but not quickly enough. And until such time as the technology can be definitively proven to work, it doesn’t make much sense to put our eggs into this basket.

Nuclear Thermal and Nuclear Electric Propulsion (NTP/NEP): Another possibility for interstellar space flight is to use spacecraft equipped with nuclear engines, a concept which NASA has been exploring for decades. In a Nuclear Thermal Propulsion (NTP) rocket, uranium or deuterium reactions are used to heat liquid hydrogen inside a reactor, turning it into ionized hydrogen gas (plasma), which is then channeled through a rocket nozzle to generate thrust.

A Nuclear Electric Propulsion (NEP) rocket involves the same basic reactor converting its heat and energy into electrical energy, which would then power an electrical engine. In both cases, the rocket would rely on nuclear fission or fusion to generate propulsion rather than chemical propellants, which have been the mainstay of NASA and all other space agencies to date.

Artist’s impression of a Crew Transfer Vehicle (CTV) using its nuclear-thermal rocket engines to slow down and establish orbit around Mars. Credit: NASA

Compared to chemical propulsion, both NTP and NEP offer a number of advantages. The first and most obvious is the virtually unlimited energy density they offer compared to rocket fuel. In addition, a nuclear-powered engine could also provide superior thrust relative to the amount of propellant used. This would cut the total amount of propellant needed, thus cutting launch weight and the cost of individual missions.

Although no nuclear-thermal engines have ever flown, several design concepts have been built and tested over the past few decades, and numerous concepts have been proposed. These have ranged from the traditional solid-core design such as the Nuclear Engine for Rocket Vehicle Application (NERVA) to more advanced and efficient concepts that rely on either a liquid or a gas core.

However, despite these advantages in fuel efficiency and specific impulse, the most sophisticated NTP concept has a maximum specific impulse of 5000 seconds (50 kN·s/kg). Using nuclear engines driven by fission or fusion, NASA scientists estimate it could take a spaceship only 90 days to get to Mars when the planet was at opposition, i.e. as close as 55,000,000 km from Earth.
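The quoted “5000 seconds (50 kN·s/kg)” is two ways of writing the same quantity: multiplying a specific impulse in seconds by standard gravity gives the effective exhaust velocity. A minimal check:

```python
# Convert specific impulse (seconds) to effective exhaust velocity (m/s).
G0 = 9.80665                      # standard gravity, m/s^2

def exhaust_velocity(isp_seconds: float) -> float:
    """Effective exhaust velocity v_e = Isp * g0, in m/s."""
    return isp_seconds * G0

v_e = exhaust_velocity(5000)
print(f"{v_e / 1000:.1f} km/s")   # prints "49.0 km/s", i.e. ~50 kN*s/kg
```

For comparison, chemical rockets top out around 450 seconds, an order of magnitude lower.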

But adjusted for a one-way journey to Proxima Centauri, a nuclear rocket would still take centuries to accelerate to the point where it was flying at a fraction of the speed of light. It would then require several decades of travel time, followed by many more centuries of deceleration before reaching its destination. All told, we’re still talking about 1,000 years before it reaches its destination. Good for interplanetary missions, not so good for interstellar ones.

Using existing technology, the time it would take to send scientists and astronauts on an interstellar mission would be prohibitively slow. If we want to make that journey within a single lifetime, or even a generation, something a bit more radical (aka. highly theoretical) will be needed. And while wormholes and jump engines may still be pure fiction at this point, there are some rather advanced ideas that have been considered over the years.

Nuclear Pulse Propulsion:Nuclear pulse propulsion is a theoretically possible form of fast space travel. The concept was originally proposed in 1946 by Stanislaw Ulam, a Polish-American mathematician who participated in the Manhattan Project, and preliminary calculations were then made by F. Reines and Ulam in 1947. The actual project known as Project Orion was initiated in 1958 and lasted until 1963.

The Project Orion concept for a nuclear-powered spacecraft. Credit: silodrome.co

Led by Ted Taylor at General Atomics and physicist Freeman Dyson from the Institute for Advanced Study in Princeton, Orion hoped to harness the power of pulsed nuclear explosions to provide a huge thrust with very high specific impulse (i.e. the thrust produced per unit of propellant consumed).

In a nutshell, the Orion design involves a large spacecraft with a large supply of thermonuclear warheads achieving propulsion by releasing a bomb behind it and then riding the detonation wave with the help of a rear-mounted pad called a pusher. After each blast, the explosive force would be absorbed by this pusher pad, which then translates the thrust into forward momentum.

Though hardly elegant by modern standards, the advantage of the design is that it achieves a high specific impulse, meaning it extracts the maximum amount of energy from its fuel source (in this case, nuclear bombs) at minimal cost. In addition, the concept could theoretically achieve very high speeds, with some estimates suggesting a ballpark figure as high as 5% the speed of light (or 5.4×10⁷ km/hr).
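For scale (the trip time below is a derived figure, not one stated in the article), 5% of light speed pins down both the quoted velocity and an idealized one-way cruise to Proxima Centauri at that constant speed:

```python
# 5% of c, and the idealized constant-speed trip time to Proxima Centauri.
C_KMH = 1.079e9                    # speed of light, ~1.079e9 km/h

cruise_kmh = 0.05 * C_KMH
trip_years = 4.24 / 0.05           # light years / fraction of c

print(f"{cruise_kmh:,.0f} km/h")   # about 5.4e7 km/h, as quoted
print(f"{trip_years:.0f} years")   # prints "85 years"
```

In other words, even the most optimistic Orion estimate still implies a trip longer than most human lifetimes.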

But of course, there are inevitable downsides to the design. For one, a ship of this size would be incredibly expensive to build. According to estimates produced by Dyson in 1968, an Orion spacecraft that used hydrogen bombs to generate propulsion would weigh 400,000 to 4,000,000 metric tons. And at least three quarters of that weight consists of nuclear bombs, where each warhead weighs approximately 1 metric ton.

Artist’s concept of an Orion spacecraft leaving Earth. Credit: bisbos.com/Adrian Mann

All told, Dyson’s most conservative estimates placed the total cost of building an Orion craft at 367 billion dollars. Adjusted for inflation, that works out to roughly $2.5 trillion, which accounts for over two thirds of the US government’s current annual revenue. Hence, even at its lightest, the craft would be extremely expensive to manufacture.

There’s also the slight problem of all the radiation it generates, not to mention nuclear waste. In fact, it is for this reason that the project is believed to have been terminated, owing to the passage of the Partial Test Ban Treaty of 1963, which sought to limit nuclear testing and stop the excessive release of nuclear fallout into the planet’s atmosphere.

Fusion Rockets: Another possibility within the realm of harnessed nuclear power involves rockets that rely on thermonuclear reactions to generate thrust. For this concept, energy is created when pellets of a deuterium/helium-3 mix are ignited in a reaction chamber by inertial confinement using electron beams (similar to what is done at the National Ignition Facility in California). This fusion reactor would detonate 250 pellets per second to create high-energy plasma, which would then be directed by a magnetic nozzle to create thrust.

Like a rocket that relies on a nuclear reactor, this concept offers advantages as far as fuel efficiency and specific impulse are concerned. Exhaust velocities of up to 10,600 km/s are estimated, which is far beyond the speed of conventional rockets. What’s more, the technology has been studied extensively over the past few decades, and many proposals have been made.

Artist's concept of the Daedalus spacecraft, a two-stage fusion rocket that would achieve up to 12% the speed of light. Credit: Adrian Mann

For example, between 1973 and 1978, the British Interplanetary Society conducted a feasibility study known as Project Daedalus. Relying on current knowledge of fusion technology and existing methods, the study called for the creation of a two-stage unmanned scientific probe that could make the trip to Barnard's Star (5.9 light years from Earth) within a single lifetime.

The first stage, the larger of the two, would operate for 2.05 years and accelerate the spacecraft to 7.1% the speed of light (0.071 c). This stage would then be jettisoned, at which point the second stage would ignite its engine and accelerate the spacecraft up to about 12% of light speed (0.12 c) over the course of 1.8 years. The second-stage engine would then be shut down, and the ship would enter a 46-year cruise period.

According to the Project's estimates, the mission would take 50 years to reach Barnard's Star. Adjusted for Proxima Centauri, the same craft could make the trip in 36 years. But of course, the project also identified numerous stumbling blocks that made it unfeasible using then-current technology, most of which are still unresolved.
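Those timelines can be roughly reconstructed from the stage figures above. The sketch below is my own back-of-envelope version, assuming constant acceleration within each stage; the study's actual thrust profile differs, so the totals land within a year or two of its quoted numbers rather than matching them exactly:

```python
# Back-of-envelope reconstruction of the Daedalus mission profile, assuming
# constant acceleration within each boost stage and then a coast at 0.12 c.

def trip_years(distance_ly: float) -> float:
    # (duration in years, start speed, end speed) with speeds as fractions of c
    boost_phases = [(2.05, 0.0, 0.071), (1.8, 0.071, 0.12)]
    boost_dist = sum(t * (v0 + v1) / 2 for t, v0, v1 in boost_phases)  # light years
    boost_time = sum(t for t, _, _ in boost_phases)
    cruise_time = (distance_ly - boost_dist) / 0.12
    return boost_time + cruise_time

print(f"Barnard's Star (5.9 ly):    {trip_years(5.9):.0f} years")
print(f"Proxima Centauri (4.24 ly): {trip_years(4.24):.0f} years")
```

The approximation gives about 51 years to Barnard's Star and 37 to Proxima Centauri, close to the study's 50- and 36-year figures.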

For instance, there is the fact that helium-3 is scarce on Earth, which means it would have to be mined elsewhere (most likely on the Moon). Second, the reaction that drives the spacecraft requires that the energy released vastly exceed the energy used to trigger the reaction. And while experiments here on Earth have surpassed the break-even goal, we are still a long way from the kinds of energy needed to power an interstellar spaceship.

Artist's concept of the Project Daedalus spacecraft, with a Saturn V rocket standing next to it for scale. Credit: Adrian Mann

Third, there is the cost factor of constructing such a ship. Even by the modest standard of Project Daedalus' unmanned craft, a fully-fueled vessel would weigh as much as 60,000 Mt. To put that in perspective, the gross weight of NASA's SLS is just over 30 Mt, and a single launch comes with a price tag of $5 billion (based on estimates made in 2013).

In short, a fusion rocket would not only be prohibitively expensive to build, it would require a level of fusion reactor technology that is currently beyond our means. Icarus Interstellar, an international organization of volunteer citizen scientists (some of whom have worked for NASA or the ESA), has since attempted to revitalize the concept with Project Icarus. Founded in 2009, the group hopes to make fusion propulsion (among other things) feasible in the near future.

Fusion Ramjet: Also known as the Bussard Ramjet, this theoretical form of propulsion was first proposed by physicist Robert W. Bussard in 1960. Basically, it is an improvement over the standard nuclear fusion rocket, which uses magnetic fields to compress hydrogen fuel to the point that fusion occurs. But in the Ramjet's case, an enormous electromagnetic funnel scoops hydrogen from the interstellar medium and dumps it into the reactor as fuel.

Artist's concept of the Bussard Ramjet, which would harness hydrogen from the interstellar medium to power its fusion engines. Credit: futurespacetransportation.weebly.com

As the ship picks up speed, the reactive mass is forced into a progressively constricted magnetic field, compressing it until thermonuclear fusion occurs. The magnetic field then directs the energy as rocket exhaust through an engine nozzle, thereby accelerating the vessel. Without any fuel tanks to weigh it down, a fusion ramjet could achieve speeds approaching 4% of the speed of light and travel anywhere in the galaxy.

However, the potential drawbacks of this design are numerous. For instance, there is the problem of drag. The ship relies on increased speed to accumulate fuel, but as it collides with more and more interstellar hydrogen, it may also lose speed, especially in denser regions of the galaxy. Second, deuterium and tritium (used in fusion reactors here on Earth) are rare in space, whereas fusing regular hydrogen (which is plentiful in space) is beyond our current methods.

This concept has been popularized extensively in science fiction. Perhaps the best known example is in the Star Trek franchise, where Bussard collectors are the glowing nacelles on warp engines. But in reality, our knowledge of fusion reactions needs to progress considerably before a ramjet is possible. We would also have to figure out that pesky drag problem before we could begin to consider building such a ship!

Laser Sail: Solar sails have long been considered a cost-effective way of exploring the Solar System. In addition to being relatively easy and cheap to manufacture, there's the added bonus of solar sails requiring no fuel. Rather than using rockets that require propellant, the sail uses the radiation pressure from stars to push large ultra-thin mirrors to high speeds.

IKAROS space probe with solar sail in flight (artist's depiction), showing a typical square sail configuration. Credit: Wikimedia Commons/Andrzej Mirecki

However, for the sake of interstellar flight, such a sail would need to be driven by focused energy beams (i.e. lasers or microwaves) to push it to a velocity approaching the speed of light. The concept was originally proposed by Robert Forward in 1984, who was a physicist at Hughes Aircraft's research laboratories at the time.

The concept retains the benefits of a solar sail, in that it requires no on-board fuel, but also benefits from the fact that laser energy does not dissipate with distance nearly as much as solar radiation. So while a laser-driven sail would take some time to accelerate to near-luminous speeds, it would be limited only by the speed of light itself.

According to a 2000 study produced by Robert Frisbee, a director of advanced propulsion concept studies at NASA's Jet Propulsion Laboratory, a laser sail could be accelerated to half the speed of light in less than a decade. He also calculated that a sail measuring about 320 km (200 miles) in diameter could reach Proxima Centauri in just over 12 years. Meanwhile, a sail measuring about 965 km (600 miles) in diameter would arrive in just under 9 years.
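A naive kinematic sketch (my assumption, not Frisbee's actual acceleration profile) shows these figures are self-consistent: accelerating uniformly to half the speed of light over a decade and then coasting lands in the same ballpark as, though somewhat above, the quoted 12-year trip, since the real designs accelerate harder early on:

```python
# Naive laser-sail profile: accelerate uniformly to v_final (fraction of c)
# over accel_years, then coast the rest of the way. Distances in light years.

def sail_trip_years(distance_ly: float, v_final_c: float, accel_years: float) -> float:
    boost_dist = 0.5 * v_final_c * accel_years  # light years covered while boosting
    if boost_dist >= distance_ly:
        raise ValueError("sail would still be under boost at the target")
    return accel_years + (distance_ly - boost_dist) / v_final_c

print(f"{sail_trip_years(4.24, 0.5, 10.0):.1f} years to Proxima Centauri")
```

This simple profile gives roughly 13.5 years, with 2.5 of the 4.24 light years covered during the boost alone.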

However, such a sail would have to be built from advanced composites to avoid melting. Combined with its size, this would add up to a pretty penny! Even worse is the sheer expense incurred in building a laser large and powerful enough to drive a sail to half the speed of light. According to Frisbee's own study, the lasers would require a steady flow of 17,000 terawatts of power, close to what the entire world consumes in a single day.

Antimatter Engine: Fans of science fiction are sure to have heard of antimatter. But in case you haven't, antimatter is essentially material composed of antiparticles, which have the same mass but opposite charge as regular particles. An antimatter engine, meanwhile, is a form of propulsion that uses interactions between matter and antimatter to generate power, or to create thrust.

Artist's concept of an antimatter-powered spacecraft for missions to Mars, as part of the Mars Reference Mission. Credit: NASA

In short, an antimatter engine involves particles of hydrogen and antihydrogen being slammed together. This reaction unleashes as much energy as a thermonuclear bomb, along with a shower of subatomic particles called pions and muons. These particles, which would travel at one-third the speed of light, are then channeled by a magnetic nozzle to generate thrust.

The advantage to this class of rocket is that a large fraction of the rest mass of a matter/antimatter mixture may be converted to energy, allowing antimatter rockets to have a far higher energy density and specific impulse than any other proposed class of rocket. What's more, controlling this kind of reaction could conceivably push a rocket up to half the speed of light.

Pound for pound, this class of ship would be the fastest and most fuel-efficient ever conceived. Whereas conventional rockets require tons of chemical fuel to propel a spaceship to its destination, an antimatter engine could do the same job with just a few milligrams of fuel. In fact, the mutual annihilation of a half pound of hydrogen and antihydrogen particles would unleash more energy than a 10-megaton hydrogen bomb.
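That last claim can be roughly checked with E = mc² (my own calculation; I read "a half pound of hydrogen and antihydrogen" as half a pound of each, one pound of mass in total):

```python
C = 299_792_458.0     # speed of light, m/s
LB_TO_KG = 0.45359237
MEGATON_J = 4.184e15  # one megaton of TNT, in joules

# Half a pound of hydrogen annihilating with half a pound of antihydrogen
# converts one pound of mass entirely to energy: E = m * c^2.
mass_kg = 1.0 * LB_TO_KG
energy_j = mass_kg * C**2
print(f"{energy_j:.2e} J ~= {energy_j / MEGATON_J:.1f} megatons of TNT")
```

The result is just under 10 megatons of TNT equivalent, so the article's comparison is at least the right order of magnitude.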

It is for this exact reason that NASA's Institute for Advanced Concepts (NIAC) has investigated the technology as a possible means for future Mars missions. Unfortunately, when contemplating missions to nearby star systems, the amount of fuel needed to make the trip is multiplied exponentially, and the cost involved in producing it would be astronomical (no pun intended!).

What matter and antimatter might look like annihilating one another. Credit: NASA/CXC/M. Weiss

According to a report prepared for the 39th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit (also by Robert Frisbee), a two-stage antimatter rocket would need over 815,000 metric tons (900,000 US tons) of fuel to make the journey to Proxima Centauri in approximately 40 years. That's not bad, as far as timelines go. But again, there's the cost…

Whereas a single gram of antimatter would produce an incredible amount of energy, it is estimated that producing just one gram would require approximately 25 million billion kilowatt-hours of energy and cost over a trillion dollars. At present, the total amount of antimatter that has been created by humans is less than 20 nanograms.

And even if we could produce antimatter cheaply, we would need a massive ship to hold the amount of fuel required. According to a report by Dr. Darrel Smith and Jonathan Webby of Embry-Riddle Aeronautical University in Arizona, an interstellar craft equipped with an antimatter engine could reach 0.5 the speed of light and reach Proxima Centauri in a little over 8 years. However, the ship itself would weigh 400 Mt, and would need 170 Mt of antimatter fuel to make the journey.

A possible way around this is to create a vessel that can create antimatter which it could then store as fuel. This concept, known as the Vacuum to Antimatter Rocket Interstellar Explorer System (VARIES), was proposed by Richard Obousy of Icarus Interstellar. Based on the idea of in-situ refueling, a VARIES ship would rely on large lasers (powered by enormous solar arrays) which would create particles of antimatter when fired at empty space.

Artist's concept of the Vacuum to Antimatter Rocket Interstellar Explorer System (VARIES), a concept that would use solar arrays to power lasers that create particles of antimatter to be used as fuel. Credit: Adrian Mann

Much like the Ramjet concept, this proposal solves the problem of carrying fuel by harvesting it from space. But once again, the sheer cost of such a ship would be prohibitive using current technology. In addition, the ability to create antimatter in large volumes is not something we currently have the power to do. There's also the matter of radiation, as matter-antimatter annihilation can produce blasts of high-energy gamma rays.

This not only presents a danger to the crew, requiring significant radiation shielding, but requires that the engines be shielded as well, to ensure they don't undergo atomic degradation from all the radiation they are exposed to. So, bottom line, the antimatter engine is completely impractical with our current technology and in the current budget environment.

Alcubierre Warp Drive: Fans of science fiction are also no doubt familiar with the concept of an Alcubierre (or Warp) Drive. Proposed by Mexican physicist Miguel Alcubierre in 1994, this method was an attempt to make FTL travel possible without violating Einstein's theory of Special Relativity. In short, the concept involves stretching the fabric of space-time in a wave, which would theoretically cause the space ahead of an object to contract and the space behind it to expand.

An object inside this wave (i.e. a spaceship) would then be able to ride this wave, known as a warp bubble, beyond relativistic speeds. Since the ship is not moving within this bubble, but is being carried along as it moves, the rules of space-time and relativity would cease to apply. The reason being, this method does not rely on moving faster than light in the local sense.

Artist Mark Rademaker's concept for the IXS Enterprise, a theoretical interstellar warp spacecraft. Credit: Mark Rademaker/flickr.com

It is only faster than light in the sense that the ship could reach its destination faster than a beam of light that was traveling outside the warp bubble. So assuming that a spacecraft could be outfitted with an Alcubierre Drive system, it would be able to make the trip to Proxima Centauri in less than 4 years. So when it comes to theoretical interstellar space travel, this is by far the most promising technology, at least in terms of speed.

Naturally, the concept has received its share of counter-arguments over the years. Chief among them is the fact that it does not take quantum mechanics into account, and could be invalidated by a Theory of Everything (such as loop quantum gravity). Calculations of the amount of energy required have also indicated that a warp drive would demand a prohibitive amount of power to work. Other uncertainties include the safety of such a system, the effects on space-time at the destination, and violations of causality.

However, in 2012, NASA scientist Harold "Sonny" White announced that he and his colleagues had begun researching the possibility of an Alcubierre Drive. In a paper titled "Warp Field Mechanics 101", White claimed that they had constructed an interferometer that would detect the spatial distortions produced by the expanding and contracting space-time of the Alcubierre metric.

In 2013, the Jet Propulsion Laboratory published results of a warp field test which was conducted under vacuum conditions. Unfortunately, the results were reported as inconclusive. Long term, we may find that the Alcubierre metric violates one or more fundamental laws of nature. And even if the physics should prove to be sound, there is no guarantee it can be harnessed for the sake of FTL flight.

In conclusion, if you were hoping to travel to the nearest star within your lifetime, the outlook isn't very good. However, if mankind felt the incentive to build an interstellar ark filled with a self-sustaining community of space-faring humans, it might be possible to travel there in a little under a century, if we were willing to invest in the requisite technology.

But all the available methods are still very limited when it comes to transit time. And while taking hundreds or thousands of years to reach the nearest star may matter less to us if our very survival was at stake, it is simply not practical as far as space exploration and travel goes. By the time a mission reached even the closest stars in our galaxy, the technology employed would be obsolete and humanity might not even exist back home anymore.

So unless we make a major breakthrough in the realms of fusion, antimatter, or laser technology, we will either have to be content with exploring our own Solar System, or be forced to accept a very long-term transit strategy.

We have written many interesting articles about space travel here at Universe Today. Here's Will We Ever Reach Another Star?, Warp Drives May Come With a Killer Downside, The Alcubierre Warp Drive, How Far Is A Light Year?, When Light Just Isn't Fast Enough, When Will We Become Interstellar?, and Can We Travel Faster Than the Speed of Light?

For more information, be sure to consult NASA's pages on Propulsion Systems of the Future, and Is Warp Drive Real?


Space exploration – Wikipedia


In the 2000s, the People’s Republic of China initiated a successful manned spaceflight program, while the European Union, Japan, and India have also planned future crewed space missions. China, Russia, Japan, and India have advocated crewed missions to the Moon during the 21st century, while the European Union has advocated manned missions to both the Moon and Mars during the 21st century.

From the 1990s onwards, private interests began promoting space tourism and then public space exploration of the Moon (see Google Lunar X Prize).

The highest known projectiles prior to the rockets of the 1940s were the shells of the Paris Gun, a type of German long-range siege gun, which reached at least 40 kilometers altitude during World War One.[6] Steps towards putting a human-made object into space were taken by German scientists during World War II while testing the V-2 rocket, which became the first human-made object in space on 3 October 1942 with the launching of the A-4. After the war, the U.S. used German scientists and their captured rockets in programs for both military and civilian research. The first scientific exploration from space was the cosmic radiation experiment launched by the U.S. on a V-2 rocket on 10 May 1946.[7] The first images of Earth taken from space followed the same year[8][9] while the first animal experiment saw fruit flies lifted into space in 1947, both also on modified V-2s launched by Americans. Starting in 1947, the Soviets, also with the help of German teams, launched sub-orbital V-2 rockets and their own variant, the R-1, including radiation and animal experiments on some flights. These suborbital experiments only allowed a very short time in space which limited their usefulness.

The first successful orbital launch was of the Soviet uncrewed Sputnik 1 (“Satellite 1”) mission on 4 October 1957. The satellite weighed about 83 kg (183 lb), and is believed to have orbited Earth at a height of about 250 km (160 mi). It had two radio transmitters (20 and 40 MHz), which emitted “beeps” that could be heard by radios around the globe. Analysis of the radio signals was used to gather information about the electron density of the ionosphere, while temperature and pressure data was encoded in the duration of radio beeps. The results indicated that the satellite was not punctured by a meteoroid. Sputnik 1 was launched by an R-7 rocket. It burned up upon re-entry on 3 January 1958.

The second satellite in orbit was Sputnik 2. Launched by the USSR on 3 November 1957, it carried the dog Laika, who became the first animal in orbit.

This success led to an escalation of the American space program, which unsuccessfully attempted to launch a Vanguard satellite into orbit two months later. On 31 January 1958, the U.S. successfully orbited Explorer 1 on a Juno rocket.

The first successful human spaceflight was Vostok 1 (“East 1”), carrying 27-year-old Russian cosmonaut Yuri Gagarin on 12 April 1961. The spacecraft completed one orbit around the globe, lasting about 1 hour and 48 minutes. Gagarin’s flight resonated around the world; it was a demonstration of the advanced Soviet space program and it opened an entirely new era in space exploration: human spaceflight.

The U.S. first launched a person into space within a month of Vostok 1 with Alan Shepard’s suborbital flight on Freedom 7. Orbital flight was achieved by the United States when John Glenn’s Friendship 7 orbited Earth on 20 February 1962.

Valentina Tereshkova, the first woman in space, orbited Earth 48 times aboard Vostok 6 on 16 June 1963.

China first launched a person into space 42 years after the launch of Vostok 1, on 15 October 2003, with the flight of Yang Liwei aboard the Shenzhou 5 (Divine Vessel 5) spacecraft.

The first artificial object to reach another celestial body was Luna 2 in 1959.[10] The first automatic landing on another celestial body was performed by Luna 9[11] in 1966. Luna 10 became the first artificial satellite of the Moon.[12]

The first crewed landing on another celestial body was performed by Apollo 11 on 20 July 1969.

The first successful interplanetary flyby was the 1962 Mariner 2 flyby of Venus (closest approach 34,773 kilometers). The other planets were first flown by in 1965 for Mars by Mariner 4, 1973 for Jupiter by Pioneer 10, 1974 for Mercury by Mariner 10, 1979 for Saturn by Pioneer 11, 1986 for Uranus by Voyager 2, 1989 for Neptune by Voyager 2. In 2015, the dwarf planets Ceres and Pluto were orbited by Dawn and passed by New Horizons, respectively.

The first interplanetary surface mission to return at least limited surface data from another planet was the 1970 landing of Venera 7 on Venus, which returned data to Earth for 23 minutes. In 1975, Venera 9 was the first to return images from the surface of another planet. In 1971, the Mars 3 mission achieved the first soft landing on Mars, returning data for almost 20 seconds. Later, much longer duration surface missions were achieved, including over six years of Mars surface operation by Viking 1 from 1975 to 1982 and over two hours of transmission from the surface of Venus by Venera 13 in 1982, the longest ever Soviet planetary surface mission.

The dream of stepping into the outer reaches of Earth’s atmosphere was driven by the fiction of Peter Francis Geraci[13][14][15] and H. G. Wells,[16] and rocket technology was developed to try to realize this vision. The German V-2 was the first rocket to travel into space, overcoming the problems of thrust and material failure. During the final days of World War II this technology was obtained by both the Americans and Soviets as were its designers. The initial driving force for further development of the technology was a weapons race for intercontinental ballistic missiles (ICBMs) to be used as long-range carriers for fast nuclear weapon delivery, but in 1961 when the Soviet Union launched the first man into space, the United States declared itself to be in a “Space Race” with the Soviets.

Konstantin Tsiolkovsky, Robert Goddard, Hermann Oberth, and Reinhold Tiling laid the groundwork of rocketry in the early years of the 20th century.

Wernher von Braun was the lead rocket engineer for Nazi Germany’s World War II V-2 rocket project. In the last days of the war he led a caravan of workers in the German rocket program to the American lines, where they surrendered and were brought to the United States to work on their rocket development (“Operation Paperclip”). He acquired American citizenship and led the team that developed and launched Explorer 1, the first American satellite. Von Braun later led the team at NASA’s Marshall Space Flight Center which developed the Saturn V moon rocket.

Initially, the race for space was often led by Sergei Korolev, whose legacy includes both the R-7 and Soyuz, which remain in service to this day. Korolev was the mastermind behind the first satellite, the first man (and first woman) in orbit, and the first spacewalk. Until his death, his identity was a closely guarded state secret; not even his mother knew that he was responsible for creating the Soviet space program.

Kerim Kerimov was one of the founders of the Soviet space program and was one of the lead architects behind the first human spaceflight (Vostok 1) alongside Sergey Korolyov. After Korolyov’s death in 1966, Kerimov became the lead scientist of the Soviet space program and was responsible for the launch of the first space stations from 1971 to 1991, including the Salyut and Mir series, and their precursors in 1967, the Cosmos 186 and Cosmos 188.[17][18]

Although the Sun will probably not be physically explored at all, the study of the Sun has nevertheless been a major focus of space exploration. Being above the atmosphere and, in particular, Earth’s magnetic field gives access to the solar wind and to infrared and ultraviolet radiation that cannot reach Earth’s surface. The Sun generates most space weather, which can affect power generation and transmission systems on Earth and interfere with, and even damage, satellites and space probes. Numerous spacecraft dedicated to observing the Sun, beginning with the Apollo Telescope Mount, have been launched and still others have had solar observation as a secondary objective. Parker Solar Probe, planned for a 2018 launch, will approach the Sun to within 1/8th the orbit of Mercury.

Mercury remains the least explored of the Terrestrial planets. As of May 2013, the Mariner 10 and MESSENGER missions have been the only missions that have made close observations of Mercury. MESSENGER entered orbit around Mercury in March 2011, to further investigate the observations made by Mariner 10 in 1975 (Munsell, 2006b).

A third mission to Mercury, BepiColombo, scheduled to arrive in 2020, is to include two probes. BepiColombo is a joint mission between Japan and the European Space Agency. MESSENGER and BepiColombo are intended to gather complementary data to help scientists understand many of the mysteries discovered by Mariner 10’s flybys.

Flights to other planets within the Solar System are accomplished at a cost in energy, which is described by the net change in velocity of the spacecraft, or delta-v. Due to the relatively high delta-v to reach Mercury and its proximity to the Sun, it is difficult to explore and orbits around it are rather unstable.
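The delta-v penalty for Mercury can be made concrete with a back-of-envelope sketch (my own, not from the article): the ideal two-burn Hohmann transfer between circular heliocentric orbits, ignoring the planets' own gravity, already demands roughly three times as much delta-v for Mercury as for Mars.

```python
import math

MU_SUN = 1.32712440018e20  # Sun's gravitational parameter, m^3/s^2

def hohmann_dv(r1_m: float, r2_m: float) -> float:
    """Total delta-v (m/s) for an ideal Hohmann transfer between circular
    heliocentric orbits of radii r1 and r2 (planetary gravity ignored)."""
    a = (r1_m + r2_m) / 2.0                            # transfer semi-major axis
    v1 = math.sqrt(MU_SUN / r1_m)                      # circular speed at departure
    v2 = math.sqrt(MU_SUN / r2_m)                      # circular speed at arrival
    vt1 = math.sqrt(MU_SUN * (2.0 / r1_m - 1.0 / a))   # transfer speed at r1
    vt2 = math.sqrt(MU_SUN * (2.0 / r2_m - 1.0 / a))   # transfer speed at r2
    return abs(vt1 - v1) + abs(v2 - vt2)

AU = 1.495978707e11  # meters
print(f"Earth -> Mercury: {hohmann_dv(AU, 0.387 * AU) / 1000:.1f} km/s")
print(f"Earth -> Mars:    {hohmann_dv(AU, 1.524 * AU) / 1000:.1f} km/s")
```

In this simplified model, Mercury requires about 17 km/s of heliocentric delta-v against roughly 5.6 km/s for Mars, which is why real Mercury missions lean so heavily on gravity assists.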

Venus was the first target of interplanetary flyby and lander missions and, despite one of the most hostile surface environments in the Solar System, has had more landers sent to it (nearly all from the Soviet Union) than any other planet in the Solar System. The first successful Venus flyby was the American Mariner 2 spacecraft, which flew past Venus in 1962. Mariner 2 has been followed by several other flybys by multiple space agencies often as part of missions using a Venus flyby to provide a gravitational assist en route to other celestial bodies. In 1967 Venera 4 became the first probe to enter and directly examine the atmosphere of Venus. In 1970, Venera 7 became the first successful lander to reach the surface of Venus and by 1985 it had been followed by eight additional successful Soviet Venus landers which provided images and other direct surface data. Starting in 1975 with the Soviet orbiter Venera 9 some ten successful orbiter missions have been sent to Venus, including later missions which were able to map the surface of Venus using radar to pierce the obscuring atmosphere.

Space exploration has been used as a tool to understand Earth as a celestial object in its own right. Orbital missions can provide data for Earth that can be difficult or impossible to obtain from a purely ground-based point of reference.

For example, the existence of the Van Allen radiation belts was unknown until their discovery by the United States’ first artificial satellite, Explorer 1. These belts contain radiation trapped by Earth’s magnetic fields, which currently renders construction of habitable space stations above 1,000 km impractical. Following this early unexpected discovery, a large number of Earth observation satellites have been deployed specifically to explore Earth from a space based perspective. These satellites have significantly contributed to the understanding of a variety of Earth-based phenomena. For instance, the hole in the ozone layer was found by an artificial satellite that was exploring Earth’s atmosphere, and satellites have allowed for the discovery of archeological sites or geological formations that were difficult or impossible to otherwise identify.

The Moon was the first celestial body to be the object of space exploration. It holds the distinctions of being the first remote celestial object to be flown by, orbited, and landed upon by spacecraft, and the only remote celestial object ever to be visited by humans.

In 1959 the Soviets obtained the first images of the far side of the Moon, never previously visible to humans. The U.S. exploration of the Moon began with the Ranger 4 impactor in 1962. Starting in 1966 the Soviets successfully deployed a number of landers to the Moon which were able to obtain data directly from the Moon’s surface; just four months later, Surveyor 1 marked the debut of a successful series of U.S. landers. The Soviet uncrewed missions culminated in the Lunokhod program in the early 1970s, which included the first uncrewed rovers and also successfully brought lunar soil samples to Earth for study. This marked the first (and to date the only) automated return of extraterrestrial soil samples to Earth. Uncrewed exploration of the Moon continues with various nations periodically deploying lunar orbiters, and in 2008 the Indian Moon Impact Probe.

Crewed exploration of the Moon began in 1968 with the Apollo 8 mission that successfully orbited the Moon, the first time any extraterrestrial object was orbited by humans. In 1969, the Apollo 11 mission marked the first time humans set foot upon another world. Crewed exploration of the Moon did not continue for long, however. The Apollo 17 mission in 1972 marked the most recent human visit there, and the next, Exploration Mission 2, is due to orbit the Moon in 2021. Robotic missions are still pursued vigorously.

The exploration of Mars has been an important part of the space exploration programs of the Soviet Union (later Russia), the United States, Europe, Japan and India. Dozens of robotic spacecraft, including orbiters, landers, and rovers, have been launched toward Mars since the 1960s. These missions were aimed at gathering data about current conditions and answering questions about the history of Mars. The questions raised by the scientific community are expected to not only give a better appreciation of the red planet but also yield further insight into the past, and possible future, of Earth.

The exploration of Mars has come at a considerable financial cost, with roughly two-thirds of all spacecraft destined for Mars failing before completing their missions, and some failing before their observations even began. Such a high failure rate can be attributed to the complexity and large number of variables involved in an interplanetary journey, and has led researchers to jokingly speak of the Great Galactic Ghoul,[20] which subsists on a diet of Mars probes. This phenomenon is also informally known as the “Mars Curse”.[21] In contrast to the overall high failure rate in the exploration of Mars, India has become the first country to succeed on its maiden attempt. India’s Mars Orbiter Mission (MOM)[22][23][24] is one of the least expensive interplanetary missions ever undertaken, with an approximate total cost of ₹450 crore (US$73 million).[25][26] The first mission to Mars by any Arab country has been taken up by the United Arab Emirates. Called the Emirates Mars Mission, it is scheduled for launch in 2020. The uncrewed exploratory probe, named the “Hope Probe”, will be sent to Mars to study its atmosphere in detail.[27]

The Russian space mission Fobos-Grunt, which launched on 9 November 2011, experienced a failure that left it stranded in low Earth orbit.[28] It was to begin exploration of Phobos and of Martian orbit, and to study whether the moons of Mars, or at least Phobos, could serve as a “trans-shipment point” for spaceships traveling to Mars.[29]

The exploration of Jupiter has consisted solely of a number of automated NASA spacecraft visiting the planet since 1973. A large majority of the missions have been “flybys”, in which detailed observations are taken without the probe landing or entering orbit, as in the Pioneer and Voyager programs. The Galileo and Juno spacecraft are the only ones to have entered orbit around the planet. As Jupiter is believed to have only a relatively small rocky core and no real solid surface, a landing mission is nearly impossible.

Reaching Jupiter from Earth requires a delta-v of 9.2 km/s,[30] which is comparable to the 9.7 km/s delta-v needed to reach low Earth orbit.[31] Fortunately, gravity assists through planetary flybys can be used to reduce the energy required at launch to reach Jupiter, albeit at the cost of a significantly longer flight duration.[30]

Jupiter has 69 known moons, many of which remain poorly characterized.

Saturn has been explored only through uncrewed spacecraft launched by NASA, including one mission (Cassini–Huygens) planned and executed in cooperation with other space agencies. These missions consist of flybys in 1979 by Pioneer 11, in 1980 by Voyager 1, in 1981 by Voyager 2, and an orbital mission by the Cassini spacecraft, which lasted from 2004 until 2017.

Saturn has at least 62 known moons, although the exact number is debatable since Saturn’s rings are made up of vast numbers of independently orbiting objects of varying sizes. The largest of the moons is Titan, which holds the distinction of being the only moon in the Solar System with an atmosphere denser and thicker than that of Earth. Titan is also the only object in the outer Solar System to have been explored with a lander, the Huygens probe deployed by the Cassini spacecraft.

The exploration of Uranus has been entirely through the Voyager 2 spacecraft, with no other visits currently planned. Given its axial tilt of 97.77°, with its polar regions exposed to sunlight or darkness for long periods, scientists were not sure what to expect at Uranus. The closest approach to Uranus occurred on 24 January 1986. Voyager 2 studied the planet’s unique atmosphere and magnetosphere. Voyager 2 also examined its ring system and the moons of Uranus, including all five of the previously known moons, while discovering an additional ten previously unknown moons.

Images of Uranus proved to have a very uniform appearance, with no evidence of the dramatic storms or atmospheric banding evident on Jupiter and Saturn. Great effort was required to identify even a few clouds in the images of the planet. The magnetosphere of Uranus, however, proved to be unique, being profoundly affected by the planet’s unusual axial tilt. In contrast to the bland appearance of Uranus itself, striking images were obtained of the moons of Uranus, including evidence that Miranda had been unusually geologically active.

The exploration of Neptune began with the 25 August 1989 Voyager 2 flyby, the sole visit to the system as of 2014. The possibility of a Neptune Orbiter has been discussed, but no other missions have been given serious thought.

Although the extremely uniform appearance of Uranus during Voyager 2’s visit in 1986 had led to expectations that Neptune would also have few visible atmospheric phenomena, the spacecraft found that Neptune had obvious banding, visible clouds, auroras, and even a conspicuous anticyclone storm system rivaled in size only by Jupiter’s Great Red Spot. Neptune also proved to have the fastest winds of any planet in the Solar System, measured as high as 2,100 km/h.[32] Voyager 2 also examined Neptune’s ring and moon system. It discovered complete rings and additional partial ring “arcs” around Neptune. In addition to examining Neptune’s three previously known moons, Voyager 2 also discovered five previously unknown moons, one of which, Proteus, proved to be the second-largest moon in the system. Data from Voyager 2 supported the view that Neptune’s largest moon, Triton, is a captured Kuiper belt object.[33]

The dwarf planet Pluto presents significant challenges for spacecraft because of its great distance from Earth (requiring high velocity for reasonable trip times) and small mass (making capture into orbit very difficult at present). Voyager 1 could have visited Pluto, but controllers opted instead for a close flyby of Saturn’s moon Titan, resulting in a trajectory incompatible with a Pluto flyby. Voyager 2 never had a plausible trajectory for reaching Pluto.[34]

Pluto continues to be of great interest, despite its reclassification as the lead and nearest member of a new and growing class of distant icy bodies of intermediate size (and also the first member of the important subclass, defined by orbit and known as “plutinos”). After an intense political battle, a mission to Pluto dubbed New Horizons was granted funding from the United States government in 2003.[35] New Horizons was launched successfully on 19 January 2006. In early 2007 the craft made use of a gravity assist from Jupiter. Its closest approach to Pluto was on 14 July 2015; scientific observations of Pluto began five months prior to closest approach and continued for 16 days after the encounter.

Until the advent of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes, their shapes and terrain remaining a mystery. Several asteroids have now been visited by probes, the first of which was Galileo, which flew past two: 951 Gaspra in 1991, followed by 243 Ida in 1993. Both of these lay near enough to Galileo’s planned trajectory to Jupiter that they could be visited at acceptable cost. The first landing on an asteroid was performed by the NEAR Shoemaker probe in 2000, following an orbital survey of the object. The dwarf planet Ceres and the asteroid 4 Vesta, two of the three largest asteroids, were visited by NASA’s Dawn spacecraft, launched in 2007.

Although many comets have been studied from Earth, sometimes with centuries’ worth of observations, only a few comets have been closely visited. In 1985, the International Cometary Explorer conducted the first comet flyby (21P/Giacobini-Zinner) before joining the Halley Armada studying the famous comet. The Deep Impact probe smashed into 9P/Tempel 1 to learn more about its structure and composition, and the Stardust mission returned samples of another comet’s tail. The Philae lander successfully landed on Comet Churyumov–Gerasimenko in 2014 as part of the broader Rosetta mission.

Hayabusa was an unmanned spacecraft developed by the Japan Aerospace Exploration Agency to return a sample of material from the small near-Earth asteroid 25143 Itokawa to Earth for further analysis. Hayabusa was launched on 9 May 2003 and rendezvoused with Itokawa in mid-September 2005. After arriving at Itokawa, Hayabusa studied the asteroid’s shape, spin, topography, color, composition, density, and history. In November 2005, it landed on the asteroid to collect samples. The spacecraft returned to Earth on 13 June 2010.

Deep space exploration is the branch of astronomy, astronautics and space technology that is involved with the exploration of distant regions of outer space.[36] Physical exploration of space is conducted both by human spaceflights (deep-space astronautics) and by robotic spacecraft.

Some of the best candidates for future deep space engine technologies include anti-matter, nuclear power and beamed propulsion.[37] The latter, beamed propulsion, appears to be the best candidate for deep space exploration presently available, since it uses known physics and known technology that is being developed for other purposes.[38]

In the 2000s, several plans for space exploration were announced; both government entities and the private sector have space exploration objectives. China has announced plans to have a 60-ton multi-module space station in orbit by 2020.

The NASA Authorization Act of 2010 provided a re-prioritized list of objectives for the American space program, as well as funding for the first priorities. NASA proposes to move forward with the development of the Space Launch System (SLS), which will be designed to carry the Orion Multi-Purpose Crew Vehicle, as well as important cargo, equipment, and science experiments to Earth’s orbit and destinations beyond. Additionally, the SLS will serve as a backup for commercial and international partner transportation services to the International Space Station. The SLS rocket will incorporate technological investments from the Space Shuttle program and the Constellation program in order to take advantage of proven hardware and reduce development and operations costs. The first developmental flight is targeted for the end of 2017.[39]

The idea of using high-level automated systems for space missions has become a desirable goal for space agencies all around the world. Such systems are believed to yield benefits such as lower cost, less human oversight, and the ability to explore deeper into space, which is otherwise restricted by the long communication delays with human controllers.[40]

Autonomy is defined by three requirements:[40]

Autonomous technologies would be able to perform beyond predetermined actions. They would analyze all possible states and events happening around them and come up with a safe response. In addition, such technologies can reduce launch cost and ground involvement. Performance would increase as well. Autonomy would be able to quickly respond upon encountering an unforeseen event, especially in deep space exploration where communication back to Earth would take too long.[40]

NASA began its Autonomous Sciencecraft Experiment (ASE) on Earth Observing-1 (EO-1), the first satellite in NASA’s New Millennium Program Earth-observing series, launched on 21 November 2000. The autonomy of ASE is capable of on-board science analysis, replanning, and robust execution, with model-based diagnostics added later. Images obtained by EO-1 are analyzed on-board and downlinked when a change or an interesting event occurs. The ASE software has successfully provided over 10,000 science images.[40]

An article in science magazine Nature suggested the use of asteroids as a gateway for space exploration, with the ultimate destination being Mars.[41] In order to make such an approach viable, three requirements need to be fulfilled: first, “a thorough asteroid survey to find thousands of nearby bodies suitable for astronauts to visit”; second, “extending flight duration and distance capability to ever-increasing ranges out to Mars”; and finally, “developing better robotic vehicles and tools to enable astronauts to explore an asteroid regardless of its size, shape or spin.”[41] Furthermore, using asteroids would provide astronauts with protection from galactic cosmic rays, with mission crews being able to land on them in times of greater risk to radiation exposure.[42]

The research that is conducted by national space exploration agencies, such as NASA and Roscosmos, is one of the reasons supporters cite to justify government expense. Economic analyses of NASA programs have often shown ongoing economic benefits (such as NASA spin-offs) worth many times the cost of the programs.[43] It is also argued that space exploration would lead to the extraction of resources on other planets and especially asteroids, which contain billions of dollars’ worth of minerals and metals. Such expeditions could generate substantial revenue.[44] It has also been argued that space exploration programs help inspire youth to study science and engineering.[45]

Another claim is that space exploration is a necessity for mankind and that staying on Earth will lead to extinction. Cited risks include the depletion of natural resources, comet impacts, nuclear war, and worldwide epidemics. Stephen Hawking, the renowned British theoretical physicist, said: “I don’t think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I’m an optimist. We will reach out to the stars.”[46]

NASA has produced a series of public service announcement videos supporting the concept of space exploration.[47]

Overall, the public remains largely supportive of both crewed and uncrewed space exploration. According to an Associated Press Poll conducted in July 2003, 71% of U.S. citizens agreed with the statement that the space program is “a good investment”, compared to 21% who did not.[48]

Arthur C. Clarke (1950) presented a summary of motivations for the human exploration of space in his non-fiction semi-technical monograph Interplanetary Flight.[49] He argued that humanity’s choice is essentially between expansion off Earth into space, versus cultural (and eventually biological) stagnation and death.

Spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space.

Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.

A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of Earth. Once in space, the motion of a spacecraft, both when unpropelled and when under propulsion, is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.

Satellites are used for a large number of purposes. Common types include military (spy) and civilian Earth observation satellites, communication satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites.

Current examples of the commercial use of space include satellite navigation systems, satellite television and satellite radio. Space tourism is the recent phenomenon of space travel by individuals for the purpose of personal pleasure.

Private spaceflight companies such as SpaceX and Blue Origin, and commercial space stations such as the planned Axiom Station and the Bigelow Commercial Space Station, have dramatically changed the landscape of space exploration, and will continue to do so in the near future.

Astrobiology is the interdisciplinary study of life in the universe, combining aspects of astronomy, biology and geology.[50] It is focused primarily on the study of the origin, distribution and evolution of life. It is also known as exobiology (from Greek: ἔξω, exō, “outside”).[51][52][53] The term “xenobiology” has been used as well, but this is technically incorrect because its terminology means “biology of the foreigners”.[54] Astrobiologists must also consider the possibility of life that is chemically entirely distinct from any life found on Earth.[55] In the Solar System some of the prime locations for current or past astrobiology are on Enceladus, Europa, Mars, and Titan.

Space colonization, also called space settlement and space humanization, would be the permanent autonomous (self-sufficient) human habitation of locations outside Earth, especially of natural satellites or planets such as the Moon or Mars, using significant amounts of in-situ resource utilization.

To date, the longest human occupation of space is the International Space Station, which has been in continuous use for 17 years, 209 days. Valeri Polyakov’s record single spaceflight of almost 438 days aboard the Mir space station has not been surpassed. Long-term stays in space reveal issues with bone and muscle loss in low gravity, immune system suppression, and radiation exposure.

Many past and current concepts for the continued exploration and colonization of space focus on a return to the Moon as a “stepping stone” to the other planets, especially Mars. At the end of 2006 NASA announced they were planning to build a permanent Moon base with continual presence by 2024.[57]

Beyond the technical factors that could make living in space more widespread, it has been suggested that the lack of private property, owing to the inability or difficulty of establishing property rights in space, has been an impediment to the development of space for human habitation. Since the advent of space technology in the latter half of the twentieth century, the ownership of property in space has been murky, with strong arguments both for and against. In particular, the making of national territorial claims in outer space and on celestial bodies has been specifically proscribed by the Outer Space Treaty, which had been, as of 2012, ratified by all spacefaring nations.[58]


How Long Would It Take To Travel To The Nearest Star …

We’ve all asked this question at some point in our lives: How long would it take to travel to the stars? Could it be within a person’s own lifetime, and could this kind of travel become the norm someday? There are many possible answers to this question, some very simple, others in the realms of science fiction. But coming up with a comprehensive answer means taking a lot of things into consideration.

Unfortunately, any realistic assessment is likely to produce answers that would totally discourage futurists and enthusiasts of interstellar travel. Like it or not, space is very large, and our technology is still very limited. But should we ever contemplate leaving the nest, we will have a range of options for getting to the nearest Solar Systems in our galaxy.

The nearest star to Earth is our Sun, which is a fairly average star in the Hertzsprung–Russell Diagram’s Main Sequence. This means that it is highly stable, providing Earth with just the right type of sunlight for life to evolve on our planet. We know there are planets orbiting other stars near our Solar System, and many of these stars are similar to our own.

In the future, should mankind wish to leave the Solar System, we’ll have a huge choice of stars we could travel to, and many could have the right conditions for life to thrive. But where would we go, and how long would it take for us to get there? Just remember, this is all speculative and there is currently no benchmark for interstellar trips. That being said, here we go!

Over 2000 exoplanets have been identified, many of which are believed to be habitable. Credit: phl.upl.edu

As already noted, the closest star to our Solar System is Proxima Centauri, which is why it makes the most sense to plot an interstellar mission to this system first. As part of a triple star system called Alpha Centauri, Proxima is about 4.24 light years (or 1.3 parsecs) from Earth. Alpha Centauri is actually the brightest star of the three in the system, part of a closely orbiting binary 4.37 light years from Earth, whereas Proxima Centauri (the dimmest of the three) is an isolated red dwarf about 0.13 light years from the binary.

And while interstellar travel conjures up all kinds of visions of Faster-Than-Light (FTL) travel, ranging from warp speed and wormholes to jump drives, such theories are either highly speculative (such as the Alcubierre Drive) or entirely the province of science fiction. In all likelihood, any deep space mission will take generations to get there, rather than a few days or an instantaneous flash.

So, starting with one of the slowest forms of space travel, how long will it take to get to Proxima Centauri?

The question of how long it would take to get somewhere in space is somewhat easier when dealing with existing technology and bodies within our Solar System. For instance, using the technology that powered the New Horizons mission, which consisted of 16 thrusters fueled with hydrazine monopropellant, reaching the Moon would take a mere 8 hours and 35 minutes.

On the other hand, there is the European Space Agency’s (ESA) SMART-1 mission, which took its time traveling to the Moon using the method of ionic propulsion. With this revolutionary technology, a variation of which has since been used by the Dawn spacecraft to reach Vesta, the SMART-1 mission took one year, one month and two weeks to reach the Moon.

So, from the speedy rocket-propelled spacecraft to the economical ion drive, we have a few options for getting around local space, plus we could use Jupiter or Saturn for a hefty gravitational slingshot. However, if we were to contemplate missions to somewhere a little more out of the way, we would have to scale up our technology and look at what’s really possible.

When we say possible methods, we are talking about those that involve existing technology, or those that do not yet exist but are technically feasible. Some, as you will see, are time-honored and proven, while others are emerging or still on the drawing board. In just about all cases, though, they present a possible, but extremely time-consuming or expensive, scenario for getting to even the closest stars.

Ionic Propulsion: Currently, the slowest form of propulsion, and the most fuel-efficient, is the ion engine. A few decades ago, ionic propulsion was considered to be the subject of science fiction. However, in recent years, the technology to support ion engines has moved from theory to practice in a big way. The ESA’s SMART-1 mission, for example, successfully completed its mission to the Moon after taking a 13-month spiral path from the Earth.

SMART-1 used solar powered ion thrusters, where electrical energy was harvested from its solar panels and used to power its Hall-effect thrusters. Only 82 kg of xenon propellant was used to propel SMART-1 to the Moon. 1 kg of xenon propellant provided a delta-v of 45 m/s. This is a highly efficient form of propulsion, but it is by no means fast.
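Taken at face value, those figures imply SMART-1’s total delta-v. A quick sketch, treating the quoted 45 m/s per kilogram as linear and cross-checking against the Tsiolkovsky rocket equation; the ~367 kg launch mass and ~1,640 s Hall-thruster specific impulse are assumptions taken from public mission summaries, not from this text:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

# Figures quoted in the text
propellant_kg = 82.0   # xenon consumed by SMART-1
dv_per_kg = 45.0       # quoted delta-v per kg of xenon, m/s

# Linear estimate straight from the quoted numbers
dv_linear = propellant_kg * dv_per_kg  # ~3,700 m/s

# Cross-check with the Tsiolkovsky rocket equation, using the assumed
# launch mass and specific impulse (not from the text)
m0 = 367.0    # assumed launch mass, kg
isp = 1640.0  # assumed specific impulse, s
dv_rocket = isp * G0 * math.log(m0 / (m0 - propellant_kg))

print(f"linear estimate: {dv_linear:.0f} m/s")
print(f"rocket-equation estimate: {dv_rocket:.0f} m/s")
```

Both estimates land around 3.7-4.1 km/s, a respectable delta-v from a few dozen kilograms of propellant, which is the whole appeal of ion propulsion.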

Artist’s concept of the Dawn mission above Ceres. Since its arrival, the spacecraft turned around to point the blue glow of its ion engine in the opposite direction. Image credit: NASA/JPL

One of the first missions to use ion drive technology was the Deep Space 1 mission, launched in 1998, which went on to fly by Comet Borrelly. DS1 also used a xenon-powered ion drive, consuming 81.5 kg of propellant. Over 20 months of thrusting, DS1 managed to reach a velocity of 56,000 km/h (35,000 mph) during its flyby of the comet.

Ion thrusters are therefore more economical than rocket technology, as the thrust per unit mass of propellant (a.k.a. specific impulse) is far higher. But it takes a long time for ion thrusters to accelerate spacecraft to any great speed, and the maximum velocity they can achieve depends on the fuel supply and on how much electrical energy can be generated.

So if ionic propulsion were to be used for a mission to Proxima Centauri, the thrusters would need a huge source of energy production (i.e. nuclear power) and a large quantity of propellant (although still less than conventional rockets). But based on the assumption that a supply of 81.5 kg of xenon propellant translates into a maximum velocity of 56,000 km/hr (and that there are no other forms of propulsion available, such as a gravitational slingshot to accelerate it further), some calculations can be made.

In short, at a maximum velocity of 56,000 km/h, Deep Space 1 would take over 81,000 years to traverse the 4.24 light years between Earth and Proxima Centauri. To put that time-scale into perspective, that would be over 2,700 human generations. So it is safe to say that an interplanetary ion engine mission would be far too slow to be considered for a manned interstellar mission.
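The arithmetic behind figures like these is simple distance over speed. A minimal sketch (the light-year length is a standard constant, not from the text) that also reproduces the 76,000- and 19,000-year figures quoted below for Voyager 1 and Helios 2:

```python
LIGHT_YEAR_KM = 9.4607e12  # kilometres in one light year

def travel_years(distance_ly: float, speed_kmh: float) -> float:
    """Coast time at constant speed, ignoring acceleration and deceleration."""
    hours = distance_ly * LIGHT_YEAR_KM / speed_kmh
    return hours / (24 * 365.25)

print(round(travel_years(4.24, 56_000)))    # Deep Space 1 pace: ~81,700 years
print(round(travel_years(4.24, 60_000)))    # Voyager 1 pace:    ~76,300 years
print(round(travel_years(4.24, 240_000)))   # Helios 2 pace:     ~19,100 years
```

The coast-time assumption flatters these numbers, if anything, since real probes would also need decades just to accelerate and decelerate.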

Ionic propulsion is currently the slowest, but most fuel-efficient, form of space travel. Credit: NASA/JPL

But, should ion thrusters be made larger and more powerful (i.e. the ion exhaust velocity would need to be significantly higher), and enough propellant be hauled to keep the spacecraft going for the entire 4.243 light-year trip, that travel time could be greatly reduced. Still not enough to happen in someone’s lifetime, though.

Gravity Assist Method: The fastest existing means of space travel is known as the gravity assist method, which involves a spacecraft using the relative movement (i.e. orbit) and gravity of a planet to alter its path and speed. Gravitational assists are a very useful spaceflight technique, especially when using the Earth or another massive planet (like a gas giant) for a boost in velocity.

The Mariner 10 spacecraft was the first to use this method, using Venus’s gravitational pull to slingshot it towards Mercury in February of 1974. In 1979 and 1980, the Voyager 1 probe used Jupiter and Saturn for gravitational slingshots to attain its current velocity of 60,000 km/h (38,000 mph) and make it into interstellar space.

However, it was the Helios 2 mission, launched in 1976 to study the interplanetary medium from 0.3 AU to 1 AU from the Sun, that holds the record for the highest speed achieved with a gravity assist. At the time, Helios 1 (which launched in 1974) and Helios 2 held the record for the closest approach to the Sun. Helios 2 was launched by a conventional NASA Titan/Centaur launch vehicle and placed in a highly elliptical orbit.

A Helios probe being encapsulated for launch. Credit: Public Domain

Due to the large eccentricity (0.54) of its 190-day solar orbit, at perihelion Helios 2 was able to reach a maximum velocity of over 240,000 km/h (150,000 mph). This orbital speed was attained by the gravitational pull of the Sun alone. Technically, the Helios 2 perihelion velocity was not a gravitational slingshot but a maximum orbital velocity, yet it still holds the record for being the fastest man-made object regardless.
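Remarkably, the two orbital elements quoted above (a 190-day period and eccentricity 0.54) are enough to reproduce that speed, via Kepler's third law and the vis-viva equation. A sketch; the Sun's gravitational parameter is a standard constant supplied here, not taken from the article:

```python
import math

GM_SUN = 1.32712e20     # Sun's gravitational parameter, m^3/s^2 (standard value)
period_s = 190 * 86400  # 190-day orbit (quoted)
ecc = 0.54              # eccentricity (quoted)

# Kepler's third law gives the semi-major axis from the period
a = (GM_SUN * period_s**2 / (4 * math.pi**2)) ** (1 / 3)

# Vis-viva equation evaluated at perihelion, r = a(1 - e)
r_peri = a * (1 - ecc)
v = math.sqrt(GM_SUN * (2 / r_peri - 1 / a))  # m/s

print(f"{v * 3.6:,.0f} km/h")  # ~244,000 km/h, matching the quoted record
```

That two numbers from a 1976 press kit pin down the record speed this closely is a nice illustration of how constrained Keplerian orbits are.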

So, if Voyager 1 were traveling in the direction of the red dwarf Proxima Centauri at a constant velocity of 60,000 km/h, it would take 76,000 years (or over 2,500 generations) to travel that distance. But if it could attain the record-breaking speed of Helios 2’s close approach of the Sun, a constant speed of 240,000 km/h, it would take 19,000 years (or over 600 generations) to travel 4.243 light years. Significantly better, but still not in the realm of practicality.

Electromagnetic (EM) Drive: Another proposed method of interstellar travel comes in the form of the Radio Frequency (RF) Resonant Cavity Thruster, also known as the EM Drive. Originally proposed in 2001 by Roger K. Shawyer, a UK scientist who started Satellite Propulsion Research Ltd (SPR) to bring it to fruition, this drive is built around the idea that electromagnetic microwave cavities can allow for the direct conversion of electrical energy to thrust.

Whereas conventional electromagnetic thrusters are designed to propel a certain type of mass (such as ionized particles), this particular drive system relies on no reaction mass and emits no directional radiation. Such a proposal has met with a great deal of skepticism, mainly because it violates the law of conservation of momentum, which states that within a system the amount of momentum remains constant and is neither created nor destroyed, but only changes through the action of forces.

The EM Drive prototype produced by NASA/Eagleworks. Credit: NASA Spaceflight Forum

However, recent experiments with the technology have apparently yielded positive results. In July of 2014, at the 50th AIAA/ASME/SAE/ASEE Joint Propulsion Conference in Cleveland, Ohio, researchers from NASA’s advanced propulsion research group claimed that they had successfully tested a new design for an electromagnetic propulsion drive.

This was followed up in April of 2015, when researchers at NASA Eagleworks (part of the Johnson Space Center) claimed that they had successfully tested the drive in a vacuum, an indication that it might actually work in space. In July of that same year, a research team from the Dresden University of Technology’s Space Systems department built their own version of the engine and observed a detectable thrust.

And in 2010, Prof. Juan Yang of the Northwestern Polytechnical University in Xi’an, China, began publishing a series of papers about her research into EM Drive technology. This culminated in her 2012 paper, where she reported higher input power (2.5 kW) and tested thrust (720 mN) levels. In 2014, she further reported extensive tests involving internal temperature measurements with embedded thermocouples, which seemed to confirm that the system worked.

Artist’s concept of an interstellar craft equipped with an EM Drive. Credit: NASA Spaceflight Center

According to calculations based on the NASA prototype (which yielded a thrust-to-power estimate of 0.4 N per kilowatt), a spacecraft equipped with the EM drive could make the trip to Pluto in less than 18 months. That’s one-sixth the time it took for the New Horizons probe to get there, traveling at speeds of close to 58,000 km/h (36,000 mph).
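None of the EM Drive's thrust claims have been independently confirmed, but the arithmetic behind constant-low-thrust trip estimates is straightforward: accelerate to the midpoint, flip, and decelerate, giving a trip time of 2·sqrt(d/a). A sketch with a wholly hypothetical 1,000 kg craft drawing 2 kW at the quoted (and disputed) 0.4 N/kW figure; under these assumptions the Pluto trip takes about five years rather than 18 months, which mostly shows how sensitive such estimates are to the assumed power-to-mass ratio:

```python
import math

def brachistochrone_days(distance_m: float, accel_ms2: float) -> float:
    """Accelerate to the midpoint, flip, decelerate: t = 2 * sqrt(d / a)."""
    return 2 * math.sqrt(distance_m / accel_ms2) / 86400

# Hypothetical craft: 1,000 kg with 2 kW of power at the quoted 0.4 N/kW,
# giving 0.8 N of thrust (all three numbers are illustrative assumptions)
accel = 0.8 / 1000.0  # m/s^2
pluto_m = 5.0e12      # a rough Pluto distance, ~33 AU in metres

print(round(brachistochrone_days(pluto_m, accel)))  # ~1,830 days, about 5 years
```

Raise the assumed power-to-mass ratio tenfold and the trip time drops by a factor of sqrt(10), which is how claims like "18 months to Pluto" can be made to appear or vanish.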

Sounds impressive. But even at that rate, it would take a ship equipped with EM engines over 13,000 years to make it to Proxima Centauri. Getting closer, but not quickly enough! And until such time as the technology can be definitively proven to work, it doesn’t make much sense to put our eggs into this basket.

Nuclear Thermal and Nuclear Electric Propulsion (NTP/NEP): Another possibility for interstellar space flight is to use spacecraft equipped with nuclear engines, a concept which NASA has been exploring for decades. In a Nuclear Thermal Propulsion (NTP) rocket, uranium or deuterium reactions are used to heat liquid hydrogen inside a reactor, turning it into ionized hydrogen gas (plasma), which is then channeled through a rocket nozzle to generate thrust.

A Nuclear Electric Propulsion (NEP) rocket involves the same basic reactor converting its heat and energy into electrical energy, which would then power an electrical engine. In both cases, the rocket would rely on nuclear fission or fusion to generate propulsion rather than the chemical propellants that have been the mainstay of NASA and all other space agencies to date.

Artist’s impression of a Crew Transfer Vehicle (CTV) using its nuclear-thermal rocket engines to slow down and establish orbit around Mars. Credit: NASA

Compared to chemical propulsion, both NTP and NEP offer a number of advantages. The first and most obvious is the virtually unlimited energy density they offer compared to rocket fuel. In addition, a nuclear-powered engine could also provide superior thrust relative to the amount of propellant used. This would cut the total amount of propellant needed, thus cutting launch weight and the cost of individual missions.

Although no nuclear-thermal engines have ever flown, several design concepts have been built and tested over the past few decades, and numerous concepts have been proposed. These have ranged from traditional solid-core designs, such as the Nuclear Engine for Rocket Vehicle Application (NERVA), to more advanced and efficient concepts that rely on either a liquid or a gas core.

However, even with these advantages in fuel efficiency and specific impulse, the most sophisticated NTP concept has a maximum specific impulse of 5,000 seconds (50 kN·s/kg). Using nuclear engines driven by fission or fusion, NASA scientists estimate it could take a spaceship only 90 days to get to Mars when the planet is at opposition, i.e. as close as 55,000,000 km from Earth.

But adjusted for a one-way journey to Proxima Centauri, a nuclear rocket would still take centuries to accelerate to the point where it was flying at a fraction of the speed of light. It would then require several decades of travel time, followed by many more centuries of deceleration before reaching its destination. All told, we're still talking about 1,000 years of transit. Good for interplanetary missions, not so good for interstellar ones.

Using existing technology, the time it would take to send scientists and astronauts on an interstellar mission would be prohibitively long. If we want to make that journey within a single lifetime, or even a generation, something a bit more radical (i.e. highly theoretical) will be needed. And while wormholes and jump engines may still be pure fiction at this point, there are some rather advanced ideas that have been considered over the years.

Nuclear Pulse Propulsion: Nuclear pulse propulsion is a theoretically possible form of fast space travel. The concept was originally proposed in 1946 by Stanislaw Ulam, a Polish-American mathematician who participated in the Manhattan Project, and preliminary calculations were then made by F. Reines and Ulam in 1947. The actual project, known as Project Orion, was initiated in 1958 and lasted until 1963.

The Project Orion concept for a nuclear-powered spacecraft. Credit: silodrome.co

Led by Ted Taylor at General Atomics and physicist Freeman Dyson from the Institute for Advanced Study in Princeton, Orion hoped to harness the power of pulsed nuclear explosions to provide a huge thrust with very high specific impulse (i.e. the amount of thrust generated per unit of propellant consumed, commonly expressed as the number of seconds an engine could fire on a given weight of fuel).

In a nutshell, the Orion design involves a large spacecraft carrying a large supply of thermonuclear warheads, achieving propulsion by releasing a bomb behind it and then riding the detonation wave with the help of a rear-mounted pad called a pusher. After each blast, the explosive force would be absorbed by this pusher pad, which would translate the thrust into forward momentum.

Though hardly elegant by modern standards, the advantage of the design is that it achieves a high specific impulse, meaning it extracts the maximum amount of energy from its fuel source (in this case, nuclear bombs) at minimal cost. In addition, the concept could theoretically achieve very high speeds, with some estimates suggesting a ballpark figure as high as 5% the speed of light (or 5.4 × 10⁷ km/h).
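At that ballpark top speed, the transit time to Proxima Centauri is easy to estimate. This is a simplified constant-speed sketch that ignores acceleration and deceleration:

```python
# Travel time to Proxima Centauri at Orion's optimistic 5%-of-light-speed figure.
proxima_ly = 4.243       # distance to Proxima Centauri in light-years
orion_speed_c = 0.05     # estimated top speed as a fraction of c

# Light covers one light-year per year, so dividing light-years by the
# fraction of c gives the trip time in years directly.
years = proxima_ly / orion_speed_c
print(f"{years:.0f} years")
```

The answer is about 85 years, i.e. a little under a century, which matches the article's closing remarks about generation-scale travel times.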

But of course, there are inevitable downsides to the design. For one, a ship of this size would be incredibly expensive to build. According to estimates produced by Dyson in 1968, an Orion spacecraft that used hydrogen bombs to generate propulsion would weigh 400,000 to 4,000,000 metric tons. And at least three-quarters of that weight would consist of nuclear bombs, with each warhead weighing approximately 1 metric ton.

Artist's concept of an Orion spacecraft leaving Earth. Credit: bisbos.com/Adrian Mann

All told, Dyson's most conservative estimates placed the total cost of building an Orion craft at $367 billion. Adjusted for inflation, that works out to roughly $2.5 trillion, which accounts for over two-thirds of the US government's current annual revenue. Hence, even at its lightest, the craft would be extremely expensive to manufacture.

There's also the slight problem of all the radiation it generates, not to mention nuclear waste. In fact, it is for this reason that the project is believed to have been terminated, owing to the passage of the Partial Test Ban Treaty of 1963, which sought to limit nuclear testing and stop the excessive release of nuclear fallout into the planet's atmosphere.

Fusion Rockets: Another possibility within the realm of harnessed nuclear power involves rockets that rely on thermonuclear reactions to generate thrust. For this concept, energy is released when pellets of a deuterium/helium-3 mix are ignited in a reaction chamber by inertial confinement using electron beams (similar to what is done at the National Ignition Facility in California). This fusion reactor would detonate 250 pellets per second to create high-energy plasma, which would then be directed by a magnetic nozzle to create thrust.

Like a rocket that relies on a nuclear reactor, this concept offers advantages as far as fuel efficiency and specific impulse are concerned. Exhaust velocities of up to 10,600 km/s are estimated, which is far beyond the speed of conventional rockets. What's more, the technology has been studied extensively over the past few decades, and many proposals have been made.
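To put that exhaust velocity in the same terms as the specific-impulse figures quoted earlier, dividing by standard gravity converts it to seconds. This is a standard conversion, shown here as a short sketch:

```python
# Convert the quoted fusion exhaust velocity into a specific impulse
# (Isp = v_e / g0) for comparison with the ~5,000 s ceiling quoted for NTP.
G0 = 9.80665              # standard gravity, m/s^2
v_exhaust = 10_600e3      # 10,600 km/s, expressed in m/s

isp_seconds = v_exhaust / G0
print(f"{isp_seconds:,.0f} s")   # on the order of a million seconds
```

That works out to roughly 1.1 million seconds, some two hundred times the best nuclear-thermal concepts.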

Artist's concept of the Daedalus spacecraft, a two-stage fusion rocket that would achieve up to 12% the speed of light. Credit: Adrian Mann

For example, between 1973 and 1978, the British Interplanetary Society conducted a feasibility study known as Project Daedalus. Relying on current knowledge of fusion technology and existing methods, the study called for the creation of a two-stage unmanned scientific probe that could make the trip to Barnard's Star (5.9 light-years from Earth) in a single lifetime.

The first stage, the larger of the two, would operate for 2.05 years and accelerate the spacecraft to 7.1% the speed of light (0.071 c). This stage would then be jettisoned, at which point the second stage would ignite its engine and accelerate the spacecraft up to about 12% of light speed (0.12 c) over the course of 1.8 years. The second-stage engine would then be shut down, and the ship would enter a 46-year cruise period.

According to the project's estimates, the mission would take 50 years to reach Barnard's Star. Adjusted for Proxima Centauri, the same craft could make the trip in 36 years. But of course, the project also identified numerous stumbling blocks that made it unfeasible using then-current technology, most of which are still unresolved.
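Those mission durations can be roughly reproduced from the cruise speed alone. The sketch below is non-relativistic and ignores the distance already covered during the boost phases, so it slightly overshoots the study's own figures:

```python
# Rough reproduction of the Daedalus timelines from the quoted cruise speed.
cruise_c = 0.12            # cruise speed as a fraction of light speed
boost_years = 2.05 + 1.8   # first- plus second-stage burn durations

barnard_total = boost_years + 5.9 / cruise_c    # ~53 yr (study figure: ~50 yr)
proxima_total = boost_years + 4.243 / cruise_c  # ~39 yr (article figure: 36 yr)
print(round(barnard_total), round(proxima_total))
```

The small gap between these naive totals and the study's numbers is exactly the light-year or so of distance the probe covers while still accelerating.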

For instance, there is the fact that helium-3 is scarce on Earth, which means it would have to be mined elsewhere (most likely on the Moon). Second, the reaction that drives the spacecraft requires that the energy released vastly exceed the energy used to trigger the reaction. And while experiments here on Earth have surpassed the break-even goal, we are still a long way from the kinds of energy needed to power an interstellar spaceship.

Artist's concept of the Project Daedalus spacecraft, with a Saturn V rocket standing next to it for scale. Credit: Adrian Mann

Third, there is the cost factor of constructing such a ship. Even by the modest standard of Project Daedalus' unmanned craft, a fully fueled vessel would weigh as much as 60,000 Mt. To put that in perspective, the gross weight of NASA's SLS is just over 30 Mt, and a single launch comes with a price tag of $5 billion (based on estimates made in 2013).

In short, a fusion rocket would not only be prohibitively expensive to build, it would require a level of fusion reactor technology that is currently beyond our means. Icarus Interstellar, an international organization of volunteer citizen scientists (some of whom worked for NASA or the ESA), has since attempted to revitalize the concept with Project Icarus. Founded in 2009, the group hopes to make fusion propulsion (among other things) feasible in the near future.

Fusion Ramjet: Also known as the Bussard Ramjet, this theoretical form of propulsion was first proposed by physicist Robert W. Bussard in 1960. Basically, it is an improvement over the standard nuclear fusion rocket, which uses magnetic fields to compress hydrogen fuel to the point that fusion occurs. But in the ramjet's case, an enormous electromagnetic funnel scoops hydrogen from the interstellar medium and dumps it into the reactor as fuel.

Artist's concept of the Bussard Ramjet, which would harness hydrogen from the interstellar medium to power its fusion engines. Credit: futurespacetransportation.weebly.com

As the ship picks up speed, the reaction mass is forced into a progressively constricted magnetic field, compressing it until thermonuclear fusion occurs. The magnetic field then directs the energy as rocket exhaust through an engine nozzle, thereby accelerating the vessel. Without any fuel tanks to weigh it down, a fusion ramjet could achieve speeds approaching 4% of the speed of light and travel anywhere in the galaxy.

However, the potential drawbacks of this design are numerous. For instance, there is the problem of drag. The ship relies on increased speed to accumulate fuel, but as it collides with more and more interstellar hydrogen, it may also lose speed, especially in denser regions of the galaxy. Second, deuterium and tritium (used in fusion reactors here on Earth) are rare in space, whereas fusing regular hydrogen (which is plentiful in space) is beyond our current methods.

This concept has been popularized extensively in science fiction. Perhaps the best-known example is in the Star Trek franchise, where Bussard collectors are the glowing nacelles on warp engines. But in reality, our knowledge of fusion reactions needs to progress considerably before a ramjet is possible. We would also have to solve that pesky drag problem before we could begin to consider building such a ship!

Laser Sail: Solar sails have long been considered a cost-effective way of exploring the Solar System. In addition to being relatively easy and cheap to manufacture, there's the added bonus that solar sails require no fuel. Rather than using rockets that require propellant, the sail uses the radiation pressure from stars to push large ultra-thin mirrors to high speeds.

IKAROS space probe with solar sail in flight (artist's depiction), showing a typical square sail configuration. Credit: Wikimedia Commons/Andrzej Mirecki

However, for the sake of interstellar flight, such a sail would need to be driven by focused energy beams (i.e. lasers or microwaves) to push it to a velocity approaching the speed of light. The concept was originally proposed by Robert Forward in 1984, who was a physicist at Hughes Aircraft's research laboratories at the time.

The concept retains the benefits of a solar sail, in that it requires no onboard fuel, and also benefits from the fact that laser energy does not dissipate with distance nearly as much as solar radiation. So while a laser-driven sail would take some time to accelerate to near-luminous speeds, it would be limited only by the speed of light itself.

According to a 2000 study produced by Robert Frisbee, a director of advanced propulsion concept studies at NASA's Jet Propulsion Laboratory, a laser sail could be accelerated to half the speed of light in less than a decade. He also calculated that a sail measuring about 320 km (200 miles) in diameter could reach Proxima Centauri in just over 12 years. Meanwhile, a sail measuring about 965 km (600 miles) in diameter would arrive in just under 9 years.
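A simple way to see why the quoted trips take longer than a constant 0.5 c would suggest: the sail spends years ramping up to speed. The sketch below assumes a linear, non-relativistic ramp; the 7-year ramp time is purely illustrative, not a figure from Frisbee's study:

```python
# Why a half-light-speed sail still takes 9-12 years to reach Proxima Centauri.
proxima_ly = 4.243
v = 0.5                          # cruise speed, as a fraction of c

floor_years = proxima_ly / v     # ~8.5 years at a constant 0.5 c

def trip_years(ramp_years):
    """Linear ramp to v over ramp_years, then coast (non-relativistic sketch)."""
    d_ramp = 0.5 * v * ramp_years            # light-years covered while ramping
    return ramp_years + (proxima_ly - d_ramp) / v

print(round(trip_years(7), 1))   # a ~7-year ramp lands near the quoted 12 years
```

At a constant 0.5 c the floor is about 8.5 years; add a multi-year acceleration phase and the 9- and 12-year figures fall out naturally.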

However, such a sail would have to be built from advanced composites to avoid melting. Combined with its size, this would add up to a pretty penny! Even worse is the sheer expense of building a laser large and powerful enough to drive a sail to half the speed of light. According to Frisbee's own study, the lasers would require a steady flow of 17,000 terawatts of power, far more than the entire world currently consumes.

Antimatter Engine: Fans of science fiction are sure to have heard of antimatter. But in case you haven't, antimatter is essentially material composed of antiparticles, which have the same mass but opposite charge as regular particles. An antimatter engine, meanwhile, is a form of propulsion that uses interactions between matter and antimatter to generate power, or to create thrust.

Artist's concept of an antimatter-powered spacecraft for missions to Mars, as part of the Mars Reference Mission. Credit: NASA

In short, an antimatter engine involves particles of hydrogen and antihydrogen being slammed together. This reaction unleashes as much energy as a thermonuclear bomb, along with a shower of subatomic particles called pions and muons. These particles, which would travel at one-third the speed of light, would then be channeled by a magnetic nozzle to generate thrust.

The advantage to this class of rocket is that a large fraction of the rest mass of a matter/antimatter mixture may be converted to energy, allowing antimatter rockets to have a far higher energy density and specific impulse than any other proposed class of rocket. Whats more, controlling this kind of reaction could conceivably push a rocket up to half the speed of light.

Pound for pound, this class of ship would be the fastest and most fuel-efficient ever conceived. Whereas conventional rockets require tons of chemical fuel to propel a spaceship to its destination, an antimatter engine could do the same job with just a few milligrams of fuel. In fact, the mutual annihilation of a half pound of hydrogen and antihydrogen particles would unleash more energy than a 10-megaton hydrogen bomb.
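The half-pound comparison checks out against E = mc². A rough sketch:

```python
# Checking the half-pound annihilation claim with E = mc^2.
C = 2.998e8                      # speed of light, m/s
mass_kg = 2 * 0.2268             # half a pound each of H and anti-H (1 lb = 0.4536 kg)

energy_j = mass_kg * C**2        # total mass-energy released on annihilation
megatons = energy_j / 4.184e15   # 1 megaton of TNT = 4.184e15 J
print(f"{megatons:.0f} Mt")      # ~10 megatons
```

About 0.45 kg of combined mass converts to roughly 4 × 10¹⁶ joules, right at the yield of a 10-megaton hydrogen bomb.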

It is for this exact reason that NASA's Institute for Advanced Concepts (NIAC) has investigated the technology as a possible means for future Mars missions. Unfortunately, when contemplating missions to nearby star systems, the amount of fuel needed to make the trip multiplies exponentially, and the cost involved in producing it would be astronomical (no pun intended!).

What matter and antimatter might look like annihilating one another. Credit: NASA/CXC/M. Weiss

According to a report prepared for the 39th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit (also by Robert Frisbee), a two-stage antimatter rocket would need over 815,000 metric tons (900,000 US tons) of fuel to make the journey to Proxima Centauri in approximately 40 years. That's not bad, as far as timelines go. But again, the cost…

Whereas a single gram of antimatter would produce an incredible amount of energy, it is estimated that producing just one gram would require approximately 25 million billion kilowatt-hours of energy and cost over a trillion dollars. At present, the total amount of antimatter that has been created by humans is less than 20 nanograms.
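Taking the quoted production figure at face value, we can compare the energy invested in making one gram of antimatter with the energy its annihilation (together with one gram of ordinary matter) would release:

```python
# Energy in vs energy out for one gram of antimatter, using the quoted figures.
C = 2.998e8                      # speed of light, m/s
J_PER_KWH = 3.6e6                # joules per kilowatt-hour

released_kwh = (0.002 * C**2) / J_PER_KWH   # 1 g antimatter + 1 g matter annihilate
production_kwh = 25e15                      # the quoted "25 million billion kWh"

ratio = production_kwh / released_kwh
print(f"{ratio:.1e}")            # energy in exceeds energy out by a huge margin
```

By these numbers, making the fuel costs roughly half a billion times more energy than it would ever give back, which underlines why production cost, not physics, is the showstopper here.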

And even if we could produce antimatter cheaply, you would need a massive ship to hold the amount of fuel needed. According to a report by Dr. Darrel Smith and Jonathan Webby of Embry-Riddle Aeronautical University in Arizona, an interstellar craft equipped with an antimatter engine could reach half the speed of light and arrive at Proxima Centauri in a little over 8 years. However, the ship itself would weigh 400 Mt and would need 170 Mt of antimatter fuel to make the journey.

A possible way around this is to create a vessel that can create antimatter which it could then store as fuel. This concept, known as the Vacuum to Antimatter Rocket Interstellar Explorer System (VARIES), was proposed by Richard Obousy of Icarus Interstellar. Based on the idea of in-situ refueling, a VARIES ship would rely on large lasers (powered by enormous solar arrays) which would create particles of antimatter when fired at empty space.

Artist's concept of the Vacuum to Antimatter Rocket Interstellar Explorer System (VARIES), a concept that would use solar arrays to power lasers that create particles of antimatter to be used as fuel. Credit: Adrian Mann

Much like the ramjet concept, this proposal solves the problem of carrying fuel by harvesting it from space. But once again, the sheer cost of such a ship would be prohibitively expensive using current technology. In addition, the ability to create antimatter in large volumes is not something we currently have the power to do. There's also the matter of radiation, as matter-antimatter annihilation can produce blasts of high-energy gamma rays.

This not only presents a danger to the crew, requiring significant radiation shielding, but requires that the engines be shielded as well, to ensure they don't undergo atomic degradation from all the radiation they are exposed to. So, bottom line: the antimatter engine is completely impractical with our current technology and in the current budget environment.

Alcubierre Warp Drive: Fans of science fiction are also no doubt familiar with the concept of an Alcubierre (or Warp) Drive. Proposed by Mexican physicist Miguel Alcubierre in 1994, this method was an attempt to make FTL travel possible without violating Einstein's theory of Special Relativity. In short, the concept involves stretching the fabric of space-time in a wave, which would theoretically cause the space ahead of an object to contract and the space behind it to expand.

An object inside this wave (i.e. a spaceship) would then be able to ride this wave, known as a warp bubble, beyond relativistic speeds. Since the ship is not moving within this bubble, but is being carried along as it moves, the rules of space-time and relativity would cease to apply. The reason is that this method does not rely on moving faster than light in the local sense.

Artist Mark Rademaker's concept for the IXS Enterprise, a theoretical interstellar warp spacecraft. Credit: Mark Rademaker/flickr.com

It is only faster than light in the sense that the ship could reach its destination faster than a beam of light that was traveling outside the warp bubble. So assuming that a spacecraft could be outfitted with an Alcubierre Drive system, it would be able to make the trip to Proxima Centauri in less than 4 years. So when it comes to theoretical interstellar space travel, this is by far the most promising technology, at least in terms of speed.

Naturally, the concept has received its share of counter-arguments over the years. Chief among them is the fact that it does not take quantum mechanics into account, and could be invalidated by a Theory of Everything (such as loop quantum gravity). Calculations of the amount of energy required have also indicated that a warp drive would demand a prohibitive amount of power to work. Other uncertainties include the safety of such a system, the effects on space-time at the destination, and violations of causality.

However, in 2012, NASA scientist Harold "Sonny" White announced that he and his colleagues had begun researching the possibility of an Alcubierre Drive. In a paper titled "Warp Field Mechanics 101", White claimed that they had constructed an interferometer that would detect the spatial distortions produced by the expanding and contracting space-time of the Alcubierre metric.

In 2013, the Jet Propulsion Laboratory published results of a warp field test which was conducted under vacuum conditions. Unfortunately, the results were reported as inconclusive. In the long term, we may find that the Alcubierre metric violates one or more fundamental laws of nature. And even if the physics should prove to be sound, there is no guarantee it can be harnessed for the sake of FTL flight.

In conclusion, if you were hoping to travel to the nearest star within your lifetime, the outlook isn't very good. However, if mankind felt the incentive to build an interstellar ark filled with a self-sustaining community of space-faring humans, it might be possible to travel there in a little under a century, if we were willing to invest in the requisite technology.

But all the available methods are still very limited when it comes to transit time. And while taking hundreds or thousands of years to reach the nearest star may matter less to us if our very survival were at stake, it is simply not practical as far as space exploration and travel go. By the time a mission reached even the closest stars in our galaxy, the technology employed would be obsolete and humanity might not even exist back home anymore.

So unless we make a major breakthrough in the realms of fusion, antimatter, or laser technology, we will either have to be content with exploring our own Solar System or be forced to accept a very long-term transit strategy…

We have written many interesting articles about space travel here at Universe Today. Here's Will We Ever Reach Another Star?, Warp Drives May Come With a Killer Downside, The Alcubierre Warp Drive, How Far Is A Light Year?, When Light Just Isn't Fast Enough, When Will We Become Interstellar?, and Can We Travel Faster Than the Speed of Light?

For more information, be sure to consult NASA's pages on Propulsion Systems of the Future, and Is Warp Drive Real?


Space exploration – Wikipedia


In the 2000s, the People’s Republic of China initiated a successful manned spaceflight program, while the European Union, Japan, and India have also planned future crewed space missions. China, Russia, Japan, and India have advocated crewed missions to the Moon during the 21st century, while the European Union has advocated manned missions to both the Moon and Mars during the 21st century.

From the 1990s onwards, private interests began promoting space tourism and then public space exploration of the Moon (see Google Lunar X Prize).

The highest known projectiles prior to the rockets of the 1940s were the shells of the Paris Gun, a type of German long-range siege gun, which reached at least 40 kilometers altitude during World War One.[6] Steps towards putting a human-made object into space were taken by German scientists during World War II while testing the V-2 rocket, which became the first human-made object in space on 3 October 1942 with the launching of the A-4. After the war, the U.S. used German scientists and their captured rockets in programs for both military and civilian research. The first scientific exploration from space was the cosmic radiation experiment launched by the U.S. on a V-2 rocket on 10 May 1946.[7] The first images of Earth taken from space followed the same year[8][9] while the first animal experiment saw fruit flies lifted into space in 1947, both also on modified V-2s launched by Americans. Starting in 1947, the Soviets, also with the help of German teams, launched sub-orbital V-2 rockets and their own variant, the R-1, including radiation and animal experiments on some flights. These suborbital experiments only allowed a very short time in space which limited their usefulness.

The first successful orbital launch was of the Soviet uncrewed Sputnik 1 (“Satellite 1”) mission on 4 October 1957. The satellite weighed about 83 kg (183 lb), and is believed to have orbited Earth at a height of about 250 km (160 mi). It had two radio transmitters (20 and 40 MHz), which emitted “beeps” that could be heard by radios around the globe. Analysis of the radio signals was used to gather information about the electron density of the ionosphere, while temperature and pressure data were encoded in the duration of radio beeps. The results indicated that the satellite was not punctured by a meteoroid. Sputnik 1 was launched by an R-7 rocket. It burned up upon re-entry on 3 January 1958.

The second satellite was Sputnik 2. Launched by the USSR on 3 November 1957, it carried the dog Laika, who became the first animal in orbit.

This success led to an escalation of the American space program, which unsuccessfully attempted to launch a Vanguard satellite into orbit two months later. On 31 January 1958, the U.S. successfully orbited Explorer 1 on a Juno rocket.

The first successful human spaceflight was Vostok 1 (“East 1”), carrying 27-year-old Russian cosmonaut Yuri Gagarin on 12 April 1961. The spacecraft completed one orbit around the globe, lasting about 1 hour and 48 minutes. Gagarin’s flight resonated around the world; it was a demonstration of the advanced Soviet space program and it opened an entirely new era in space exploration: human spaceflight.

The U.S. first launched a person into space within a month of Vostok 1 with Alan Shepard’s suborbital flight on Freedom 7. Orbital flight was achieved by the United States when John Glenn’s Friendship 7 orbited Earth on 20 February 1962.

Valentina Tereshkova, the first woman in space, orbited Earth 48 times aboard Vostok 6 on 16 June 1963.

China first launched a person into space 42 years after the launch of Vostok 1, on 15 October 2003, with the flight of Yang Liwei aboard the Shenzhou 5 (Divine Vessel 5) spacecraft.

The first artificial object to reach another celestial body was Luna 2 in 1959.[10] The first automatic landing on another celestial body was performed by Luna 9[11] in 1966. Luna 10 became the first artificial satellite of the Moon.[12]

The first crewed landing on another celestial body was performed by Apollo 11 on 20 July 1969.

The first successful interplanetary flyby was the 1962 Mariner 2 flyby of Venus (closest approach 34,773 kilometers). The other planets were first flown by in 1965 for Mars (Mariner 4), 1973 for Jupiter (Pioneer 10), 1974 for Mercury (Mariner 10), 1979 for Saturn (Pioneer 11), 1986 for Uranus (Voyager 2), and 1989 for Neptune (Voyager 2). In 2015, the dwarf planets Ceres and Pluto were orbited by Dawn and passed by New Horizons, respectively.

The first interplanetary surface mission to return at least limited surface data from another planet was the 1970 landing of Venera 7 on Venus, which returned data to Earth for 23 minutes. In 1975, Venera 9 was the first to return images from the surface of another planet. In 1971, the Mars 3 mission achieved the first soft landing on Mars, returning data for almost 20 seconds. Later, much longer duration surface missions were achieved, including over six years of Mars surface operation by Viking 1 from 1975 to 1982 and over two hours of transmission from the surface of Venus by Venera 13 in 1982, the longest ever Soviet planetary surface mission.

The dream of stepping into the outer reaches of Earth’s atmosphere was driven by the fiction of Jules Verne[13][14][15] and H. G. Wells,[16] and rocket technology was developed to try to realize this vision. The German V-2 was the first rocket to travel into space, overcoming the problems of thrust and material failure. During the final days of World War II this technology was obtained by both the Americans and Soviets, as were its designers. The initial driving force for further development of the technology was a weapons race for intercontinental ballistic missiles (ICBMs) to be used as long-range carriers for fast nuclear weapon delivery, but in 1961, when the Soviet Union launched the first man into space, the United States declared itself to be in a “Space Race” with the Soviets.

Konstantin Tsiolkovsky, Robert Goddard, Hermann Oberth, and Reinhold Tiling laid the groundwork of rocketry in the early years of the 20th century.

Wernher von Braun was the lead rocket engineer for Nazi Germany’s World War II V-2 rocket project. In the last days of the war he led a caravan of workers in the German rocket program to the American lines, where they surrendered and were brought to the United States to work on their rocket development (“Operation Paperclip”). He acquired American citizenship and led the team that developed and launched Explorer 1, the first American satellite. Von Braun later led the team at NASA’s Marshall Space Flight Center which developed the Saturn V moon rocket.

Initially the race for space was often led by Sergei Korolyov, whose legacy includes both the R7 and Soyuz, which remain in service to this day. Korolyov was the mastermind behind the first satellite, the first man (and first woman) in orbit, and the first spacewalk. Until his death his identity was a closely guarded state secret; not even his mother knew that he was responsible for creating the Soviet space program.

Kerim Kerimov was one of the founders of the Soviet space program and was one of the lead architects behind the first human spaceflight (Vostok 1) alongside Sergey Korolyov. After Korolyov’s death in 1966, Kerimov became the lead scientist of the Soviet space program and was responsible for the launch of the first space stations from 1971 to 1991, including the Salyut and Mir series, and their precursors in 1967, the Cosmos 186 and Cosmos 188.[17][18]

Although the Sun will probably not be physically explored at all, the study of the Sun has nevertheless been a major focus of space exploration. Being above the atmosphere in particular, and Earth’s magnetic field, gives access to the solar wind and to infrared and ultraviolet radiation that cannot reach Earth’s surface. The Sun generates most space weather, which can affect power generation and transmission systems on Earth and interfere with, and even damage, satellites and space probes. Numerous spacecraft dedicated to observing the Sun, beginning with the Apollo Telescope Mount, have been launched and still others have had solar observation as a secondary objective. Parker Solar Probe, planned for a 2018 launch, will approach the Sun to within 1/8th the orbit of Mercury.

Mercury remains the least explored of the terrestrial planets. As of May 2013, Mariner 10 and MESSENGER have been the only missions to make close observations of Mercury. MESSENGER entered orbit around Mercury in March 2011 to further investigate the observations made by Mariner 10 in 1975 (Munsell, 2006b).

A third mission to Mercury, BepiColombo, a joint mission between the European Space Agency and Japan, is scheduled to arrive in 2020 and will include two probes. MESSENGER and BepiColombo are intended to gather complementary data to help scientists understand many of the mysteries discovered by Mariner 10’s flybys.

Flights to other planets within the Solar System are accomplished at a cost in energy, which is described by the net change in velocity of the spacecraft, or delta-v. Because of the relatively high delta-v needed to reach Mercury and its proximity to the Sun, the planet is difficult to explore, and orbits around it are rather unstable.
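The delta-v cost mentioned above can be sketched with the vis-viva equation. The following is a rough illustration only, not a mission-design tool: it assumes idealized circular, coplanar orbits and a two-impulse Hohmann transfer, and it ignores Earth escape, gravity assists, and Mercury’s actual eccentric orbit.

```python
import math

MU_SUN = 1.327e20   # Sun's gravitational parameter, m^3/s^2
AU = 1.496e11       # astronomical unit, m

def hohmann_dv(r1: float, r2: float, mu: float = MU_SUN) -> float:
    """Total delta-v (m/s) for a two-impulse Hohmann transfer
    between circular, coplanar orbits of radii r1 and r2."""
    a = (r1 + r2) / 2                       # semi-major axis of transfer ellipse
    v1 = math.sqrt(mu / r1)                 # circular orbital speed at r1
    v2 = math.sqrt(mu / r2)                 # circular orbital speed at r2
    vt1 = math.sqrt(mu * (2 / r1 - 1 / a))  # transfer-orbit speed at r1 (vis-viva)
    vt2 = math.sqrt(mu * (2 / r2 - 1 / a))  # transfer-orbit speed at r2
    return abs(vt1 - v1) + abs(v2 - vt2)

dv_mercury = hohmann_dv(1.0 * AU, 0.387 * AU)  # heliocentric cost, Earth to Mercury
dv_mars = hohmann_dv(1.0 * AU, 1.524 * AU)     # heliocentric cost, Earth to Mars
```

Even though Mercury is the closer planet, this idealized heliocentric budget works out to roughly 17 km/s, about three times the roughly 5.6 km/s for Mars, which is one reason Mercury orbiters lean so heavily on repeated gravity assists.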

Venus was the first target of interplanetary flyby and lander missions and, despite having one of the most hostile surface environments in the Solar System, has had more landers sent to it (nearly all from the Soviet Union) than any other planet. The first successful Venus flyby was the American Mariner 2 spacecraft, which flew past the planet in 1962. Mariner 2 was followed by several other flybys by multiple space agencies, often as part of missions using a Venus flyby to provide a gravitational assist en route to other celestial bodies. In 1967 Venera 4 became the first probe to enter and directly examine the atmosphere of Venus. In 1970, Venera 7 became the first successful lander to reach the surface of Venus, and by 1985 it had been followed by eight additional successful Soviet Venus landers which provided images and other direct surface data. Starting in 1975 with the Soviet orbiter Venera 9, some ten successful orbiter missions have been sent to Venus, including later missions that were able to map the surface using radar to pierce the obscuring atmosphere.

Space exploration has been used as a tool to understand Earth as a celestial object in its own right. Orbital missions can provide data for Earth that can be difficult or impossible to obtain from a purely ground-based point of reference.

For example, the existence of the Van Allen radiation belts was unknown until their discovery by the United States’ first artificial satellite, Explorer 1. These belts contain radiation trapped by Earth’s magnetic field, which currently renders construction of habitable space stations above 1,000 km impractical. Following this early unexpected discovery, a large number of Earth observation satellites have been deployed specifically to explore Earth from a space-based perspective. These satellites have significantly contributed to the understanding of a variety of Earth-based phenomena. For instance, the hole in the ozone layer was found by an artificial satellite that was exploring Earth’s atmosphere, and satellites have allowed for the discovery of archeological sites and geological formations that were difficult or impossible to otherwise identify.

The Moon was the first celestial body to be the object of space exploration. It holds the distinctions of being the first remote celestial object to be flown by, orbited, and landed upon by spacecraft, and the only remote celestial object ever to be visited by humans.

In 1959 the Soviets obtained the first images of the far side of the Moon, never previously visible to humans. The U.S. exploration of the Moon began with the Ranger 4 impactor in 1962. Starting in 1966 the Soviets successfully deployed a number of landers to the Moon which were able to obtain data directly from the Moon’s surface; just four months later, Surveyor 1 marked the debut of a successful series of U.S. landers. The Soviet uncrewed missions culminated in the Lunokhod program in the early 1970s, which included the first uncrewed rovers and also successfully brought lunar soil samples to Earth for study. This marked the first (and to date the only) automated return of extraterrestrial soil samples to Earth. Uncrewed exploration of the Moon continues, with various nations periodically deploying lunar orbiters and, in 2008, the Indian Moon Impact Probe.

Crewed exploration of the Moon began in 1968 with the Apollo 8 mission that successfully orbited the Moon, the first time any extraterrestrial object was orbited by humans. In 1969, the Apollo 11 mission marked the first time humans set foot upon another world. Crewed exploration of the Moon did not continue for long, however. The Apollo 17 mission in 1972 marked the most recent human visit there, and the next, Exploration Mission 2, is due to orbit the Moon in 2021. Robotic missions are still pursued vigorously.

The exploration of Mars has been an important part of the space exploration programs of the Soviet Union (later Russia), the United States, Europe, Japan and India. Dozens of robotic spacecraft, including orbiters, landers, and rovers, have been launched toward Mars since the 1960s. These missions were aimed at gathering data about current conditions and answering questions about the history of Mars. The questions raised by the scientific community are expected to not only give a better appreciation of the red planet but also yield further insight into the past, and possible future, of Earth.

The exploration of Mars has come at considerable financial cost, with roughly two-thirds of all spacecraft destined for Mars failing before completing their missions, and some failing before their observations even began. Such a high failure rate can be attributed to the complexity and large number of variables involved in an interplanetary journey, and has led researchers to jokingly speak of The Great Galactic Ghoul,[20] which subsists on a diet of Mars probes. This phenomenon is also informally known as the “Mars Curse”.[21] In contrast to the overall high failure rate, India became the first country to succeed on its maiden attempt. India’s Mars Orbiter Mission (MOM)[22][23][24] is one of the least expensive interplanetary missions ever undertaken, with an approximate total cost of ₹450 crore (US$73 million).[25][26] The first mission to Mars by any Arab country has been taken up by the United Arab Emirates. Called the Emirates Mars Mission, it is scheduled for launch in 2020. The uncrewed exploratory probe has been named “Hope Probe” and will be sent to Mars to study its atmosphere in detail.[27]

The Russian space mission Fobos-Grunt, launched on 9 November 2011, experienced a failure that left it stranded in low Earth orbit.[28] It was intended to begin exploration of Phobos and of Martian orbit, and to study whether the moons of Mars, or at least Phobos, could serve as a “trans-shipment point” for spaceships traveling to Mars.[29]

The exploration of Jupiter has consisted solely of a number of automated NASA spacecraft visiting the planet since 1973. A large majority of the missions have been “flybys”, in which detailed observations are taken without the probe landing or entering orbit, as in the Pioneer and Voyager programs. The Galileo and Juno spacecraft are the only ones to have entered orbit around the planet. As Jupiter is believed to have only a relatively small rocky core and no real solid surface, a landing mission is nearly impossible.

Reaching Jupiter from Earth requires a delta-v of 9.2 km/s,[30] which is comparable to the 9.7 km/s delta-v needed to reach low Earth orbit.[31] Fortunately, gravity assists through planetary flybys can be used to reduce the energy required at launch to reach Jupiter, albeit at the cost of a significantly longer flight duration.[30]
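Delta-v is so costly because propellant requirements grow exponentially with it, per the Tsiolkovsky rocket equation. As a rough sketch using the two figures above (the 4.4 km/s exhaust velocity is an assumed, hydrolox-class value chosen for illustration, not a figure from the text):

```python
import math

def mass_ratio(delta_v: float, v_exhaust: float) -> float:
    """Tsiolkovsky rocket equation: initial-to-final mass ratio
    m0/mf = exp(delta_v / v_exhaust) required for a given delta-v."""
    return math.exp(delta_v / v_exhaust)

VE = 4400.0  # m/s; assumed chemical-stage exhaust velocity (illustrative)

r_leo = mass_ratio(9700, VE)      # the delta-v quoted to reach low Earth orbit
r_jupiter = mass_ratio(9200, VE)  # the delta-v quoted to reach Jupiter
```

Each leg alone implies a vehicle that is roughly 85–90% propellant, which is why gravity assists pay off so well: every km/s of delta-v they shave reduces launch mass multiplicatively.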

Jupiter has 69 known moons, about many of which relatively little is known.

Saturn has been explored only through uncrewed spacecraft launched by NASA, including one mission (Cassini–Huygens) planned and executed in cooperation with other space agencies. These missions consist of flybys in 1979 by Pioneer 11, in 1980 by Voyager 1, in 1981 by Voyager 2, and an orbital mission by the Cassini spacecraft, which lasted from 2004 until 2017.

Saturn has at least 62 known moons, although the exact number is debatable since Saturn’s rings are made up of vast numbers of independently orbiting objects of varying sizes. The largest of the moons is Titan, which holds the distinction of being the only moon in the Solar System with an atmosphere denser and thicker than that of Earth. Titan is also the only object in the outer Solar System that has been explored with a lander, the Huygens probe deployed by the Cassini spacecraft.

The exploration of Uranus has been entirely through the Voyager 2 spacecraft, with no other visits currently planned. Given its axial tilt of 97.77°, with its polar regions exposed to sunlight or darkness for long periods, scientists were not sure what to expect at Uranus. The closest approach to Uranus occurred on 24 January 1986. Voyager 2 studied the planet’s unique atmosphere and magnetosphere. Voyager 2 also examined its ring system and the moons of Uranus, including all five of the previously known moons, while discovering an additional ten previously unknown moons.

Images of Uranus proved to have a very uniform appearance, with no evidence of the dramatic storms or atmospheric banding evident on Jupiter and Saturn. Great effort was required to even identify a few clouds in the images of the planet. The magnetosphere of Uranus, however, proved to be completely unique and profoundly affected by the planet’s unusual axial tilt. In contrast to the bland appearance of Uranus itself, striking images were obtained of the moons of Uranus, including evidence that Miranda had been unusually geologically active.

The exploration of Neptune began with the 25 August 1989 Voyager 2 flyby, the sole visit to the system as of 2014. The possibility of a Neptune Orbiter has been discussed, but no other missions have been given serious thought.

Although the extremely uniform appearance of Uranus during Voyager 2’s visit in 1986 had led to expectations that Neptune would also have few visible atmospheric phenomena, the spacecraft found that Neptune had obvious banding, visible clouds, auroras, and even a conspicuous anticyclone storm system rivaled in size only by Jupiter’s Great Red Spot. Neptune also proved to have the fastest winds of any planet in the Solar System, measured as high as 2,100 km/h.[32] Voyager 2 also examined Neptune’s rings and moons. It discovered previously unknown complete rings and additional partial ring “arcs” around Neptune. In addition to examining Neptune’s three previously known moons, Voyager 2 also discovered five previously unknown moons, one of which, Proteus, proved to be the second-largest moon in the system. Data from Voyager 2 supported the view that Neptune’s largest moon, Triton, is a captured Kuiper belt object.[33]

The dwarf planet Pluto presents significant challenges for spacecraft because of its great distance from Earth (requiring high velocity for reasonable trip times) and small mass (making capture into orbit very difficult at present). Voyager 1 could have visited Pluto, but controllers opted instead for a close flyby of Saturn’s moon Titan, resulting in a trajectory incompatible with a Pluto flyby. Voyager 2 never had a plausible trajectory for reaching Pluto.[34]

Pluto continues to be of great interest, despite its reclassification as the lead and nearest member of a new and growing class of distant icy bodies of intermediate size (and also the first member of the important subclass, defined by orbit and known as “plutinos”). After an intense political battle, a mission to Pluto dubbed New Horizons was granted funding from the United States government in 2003.[35] New Horizons was launched successfully on 19 January 2006. In early 2007 the craft made use of a gravity assist from Jupiter. Its closest approach to Pluto was on 14 July 2015; scientific observations of Pluto began five months prior to closest approach and continued for 16 days after the encounter.

Until the advent of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes, their shapes and terrain remaining a mystery. Several asteroids have now been visited by probes, the first of which was Galileo, which flew past two: 951 Gaspra in 1991, followed by 243 Ida in 1993. Both of these lay near enough to Galileo’s planned trajectory to Jupiter that they could be visited at acceptable cost. The first landing on an asteroid was performed by the NEAR Shoemaker probe in 2000, following an orbital survey of the object. The dwarf planet Ceres and the asteroid 4 Vesta, two of the three largest asteroids, were visited by NASA’s Dawn spacecraft, launched in 2007.

Although many comets have been studied from Earth, sometimes with centuries’ worth of observations, only a few comets have been closely visited. In 1985, the International Cometary Explorer conducted the first comet flyby (21P/Giacobini–Zinner) before joining the Halley Armada studying the famous comet. The Deep Impact probe smashed into 9P/Tempel to learn more about its structure and composition, and the Stardust mission returned samples of another comet’s tail. The Philae lander successfully landed on Comet Churyumov–Gerasimenko in 2014 as part of the broader Rosetta mission.

Hayabusa was an unmanned spacecraft developed by the Japan Aerospace Exploration Agency to return a sample of material from the small near-Earth asteroid 25143 Itokawa to Earth for further analysis. Hayabusa was launched on 9 May 2003 and rendezvoused with Itokawa in mid-September 2005. After arriving at Itokawa, Hayabusa studied the asteroid’s shape, spin, topography, color, composition, density, and history. In November 2005, it landed on the asteroid to collect samples. The spacecraft returned to Earth on 13 June 2010.

Deep space exploration is the branch of astronomy, astronautics and space technology that is involved with the exploration of distant regions of outer space.[36] Physical exploration of space is conducted both by human spaceflights (deep-space astronautics) and by robotic spacecraft.

Some of the best candidates for future deep space engine technologies include anti-matter, nuclear power and beamed propulsion.[37] The latter, beamed propulsion, appears to be the best candidate for deep space exploration presently available, since it uses known physics and known technology that is being developed for other purposes.[38]

In the 2000s, several plans for space exploration were announced; both government entities and the private sector have space exploration objectives. China has announced plans to have a 60-ton multi-module space station in orbit by 2020.

The NASA Authorization Act of 2010 provided a re-prioritized list of objectives for the American space program, as well as funding for the first priorities. NASA proposes to move forward with the development of the Space Launch System (SLS), which will be designed to carry the Orion Multi-Purpose Crew Vehicle, as well as important cargo, equipment, and science experiments, to Earth orbit and destinations beyond. Additionally, the SLS will serve as a backup for commercial and international partner transportation services to the International Space Station. The SLS rocket will incorporate technological investments from the Space Shuttle and Constellation programs in order to take advantage of proven hardware and reduce development and operations costs. The first developmental flight is targeted for the end of 2017.[39]

The idea of using highly automated systems for space missions has become a desirable goal for space agencies around the world. Such systems are believed to yield benefits such as lower cost, less human oversight, and the ability to explore deeper into space, which is otherwise restricted by long communication delays with human controllers.[40]

Autonomy is defined by three requirements.[40]

Autonomous technologies would be able to perform beyond predetermined actions. They would analyze all possible states and events happening around them and come up with a safe response. In addition, such technologies can reduce launch cost and ground involvement. Performance would increase as well. Autonomy would be able to quickly respond upon encountering an unforeseen event, especially in deep space exploration where communication back to Earth would take too long.[40]

NASA began its Autonomous Science Experiment (ASE) on Earth Observing-1 (EO-1), NASA’s first satellite in the New Millennium Program Earth-observing series, launched on 21 November 2000. The autonomy software on ASE is capable of on-board science analysis, replanning, robust execution, and, added later, model-based diagnosis. Images obtained by EO-1 are analyzed on-board and downlinked when a change or an interesting event occurs. The ASE software has successfully provided over 10,000 science images.[40]

An article in science magazine Nature suggested the use of asteroids as a gateway for space exploration, with the ultimate destination being Mars.[41] In order to make such an approach viable, three requirements need to be fulfilled: first, “a thorough asteroid survey to find thousands of nearby bodies suitable for astronauts to visit”; second, “extending flight duration and distance capability to ever-increasing ranges out to Mars”; and finally, “developing better robotic vehicles and tools to enable astronauts to explore an asteroid regardless of its size, shape or spin.”[41] Furthermore, using asteroids would provide astronauts with protection from galactic cosmic rays, with mission crews being able to land on them in times of greater risk to radiation exposure.[42]

The research that is conducted by national space exploration agencies, such as NASA and Roscosmos, is one of the reasons supporters cite to justify government expenses. Economic analyses of NASA programs often show ongoing economic benefits (such as NASA spin-offs), generating many times the revenue of the cost of the program.[43] It is also argued that space exploration would lead to the extraction of resources on other planets, and especially asteroids, which contain billions of dollars’ worth of minerals and metals. Such expeditions could generate substantial revenue.[44] It has also been argued that space exploration programs help inspire youth to study science and engineering.[45]

Another claim is that space exploration is a necessity for mankind and that staying on Earth will lead to extinction. Among the cited risks are the depletion of natural resources, comet impacts, nuclear war, and worldwide epidemics. Stephen Hawking, the renowned British theoretical physicist, said: “I don’t think the human race will survive the next thousand years, unless we spread into space. There are too many accidents that can befall life on a single planet. But I’m an optimist. We will reach out to the stars.”[46]

NASA has produced a series of public service announcement videos supporting the concept of space exploration.[47]

Overall, the public remains largely supportive of both crewed and uncrewed space exploration. According to an Associated Press Poll conducted in July 2003, 71% of U.S. citizens agreed with the statement that the space program is “a good investment”, compared to 21% who did not.[48]

Arthur C. Clarke (1950) presented a summary of motivations for the human exploration of space in his non-fiction semi-technical monograph Interplanetary Flight.[49] He argued that humanity’s choice is essentially between expansion off Earth into space, versus cultural (and eventually biological) stagnation and death.

Spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space.

Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.

A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of Earth. Once in space, the motion of a spacecraft, both when unpropelled and when under propulsion, is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.

Satellites are used for a large number of purposes. Common types include military (spy) and civilian Earth observation satellites, communication satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites.

Current examples of the commercial use of space include satellite navigation systems, satellite television and satellite radio. Space tourism is the recent phenomenon of space travel by individuals for the purpose of personal pleasure.

Private spaceflight companies such as SpaceX and Blue Origin, and planned commercial space stations such as Axiom Station and the Bigelow Commercial Space Station, have dramatically changed the landscape of space exploration, and will continue to do so in the near future.

Astrobiology is the interdisciplinary study of life in the universe, combining aspects of astronomy, biology and geology.[50] It is focused primarily on the study of the origin, distribution and evolution of life. It is also known as exobiology (from Greek: ἔξω, exo, “outside”).[51][52][53] The term “xenobiology” has been used as well, but this is technically incorrect because its terminology means “biology of the foreigners”.[54] Astrobiologists must also consider the possibility of life that is chemically entirely distinct from any life found on Earth.[55] In the Solar System some of the prime locations for current or past astrobiology are Enceladus, Europa, Mars, and Titan.

Space colonization, also called space settlement and space humanization, would be the permanent autonomous (self-sufficient) human habitation of locations outside Earth, especially of natural satellites or planets such as the Moon or Mars, using significant amounts of in-situ resource utilization.

To date, the longest human occupation of space is aboard the International Space Station, which has been in continuous use for 17 years and 209 days. Valeri Polyakov’s record single spaceflight of almost 438 days aboard the Mir space station has not been surpassed. Long-term stays in space reveal issues with bone and muscle loss in low gravity, immune system suppression, and radiation exposure.

Many past and current concepts for the continued exploration and colonization of space focus on a return to the Moon as a “stepping stone” to the other planets, especially Mars. At the end of 2006 NASA announced they were planning to build a permanent Moon base with continual presence by 2024.[57]

Beyond the technical factors that could make living in space more widespread, it has been suggested that the lack of private property, and the inability or difficulty of establishing property rights in space, has been an impediment to the development of space for human habitation. Since the advent of space technology in the latter half of the twentieth century, the ownership of property in space has been murky, with strong arguments both for and against. In particular, the making of national territorial claims in outer space and on celestial bodies has been specifically proscribed by the Outer Space Treaty, which had been, as of 2012, ratified by all spacefaring nations.[58]

How Long Would It Take To Travel To The Nearest Star …

We’ve all asked this question at some point in our lives: How long would it take to travel to the stars? Could it be within a person’s own lifetime, and could this kind of travel become the norm someday? There are many possible answers to this question, some very simple, others in the realms of science fiction. But coming up with a comprehensive answer means taking a lot of things into consideration.

Unfortunately, any realistic assessment is likely to produce answers that would totally discourage futurists and enthusiasts of interstellar travel. Like it or not, space is very large, and our technology is still very limited. But should we ever contemplate leaving the nest, we will have a range of options for getting to the nearest star systems in our galaxy.

The nearest star to Earth is our Sun, which is a fairly average star in the Hertzsprung–Russell diagram’s main sequence. This means that it is highly stable, providing Earth with just the right type of sunlight for life to evolve on our planet. We know there are planets orbiting other stars near our Solar System, and many of these stars are similar to our own.

In the future, should mankind wish to leave the Solar System, we’ll have a huge choice of stars we could travel to, and many could have the right conditions for life to thrive. But where would we go, and how long would it take for us to get there? Just remember, this is all speculative and there is currently no benchmark for interstellar trips. That being said, here we go!

Over 2000 exoplanets have been identified, many of which are believed to be habitable. Credit: phl.upl.edu

As already noted, the closest star to our Solar System is Proxima Centauri, which is why it makes the most sense to plot an interstellar mission to this system first. As part of a triple star system called Alpha Centauri, Proxima is about 4.24 light years (or 1.3 parsecs) from Earth. Alpha Centauri is actually the brightest star of the three in the system (part of a closely orbiting binary 4.37 light years from Earth), whereas Proxima Centauri (the dimmest of the three) is an isolated red dwarf about 0.13 light years from the binary.

And while interstellar travel conjures up all kinds of visions of faster-than-light (FTL) travel, ranging from warp speed and wormholes to jump drives, such theories are either highly speculative (such as the Alcubierre Drive) or entirely the province of science fiction. In all likelihood, any deep space mission will take generations to get there, rather than a few days or an instantaneous flash.

So, starting with one of the slowest forms of space travel, how long will it take to get to Proxima Centauri?

The question of how long it would take to get somewhere in space is somewhat easier when dealing with existing technology and bodies within our Solar System. For instance, using the technology that powered the New Horizons mission (which consisted of 16 thrusters fueled with hydrazine monopropellant), reaching the Moon would take a mere 8 hours and 35 minutes.

On the other hand, there is the European Space Agency’s (ESA) SMART-1 mission, which took its time traveling to the Moon using the method of ionic propulsion. With this revolutionary technology, a variation of which has since been used by the Dawn spacecraft to reach Vesta, the SMART-1 mission took one year, one month and two weeks to reach the Moon.

So, from the speedy rocket-propelled spacecraft to the economical ion drive, we have a few options for getting around local space, plus we could use Jupiter or Saturn for a hefty gravitational slingshot. However, if we were to contemplate missions to somewhere a little more out of the way, we would have to scale up our technology and look at what’s really possible.

When we say possible methods, we are talking about those that involve existing technology, or those that do not yet exist but are technically feasible. Some, as you will see, are time-honored and proven, while others are emerging or still on the drawing board. In just about all cases, though, they present a possible, but extremely time-consuming or expensive, scenario for getting to even the closest stars.

Ionic Propulsion: Currently, the slowest form of propulsion, and the most fuel-efficient, is the ion engine. A few decades ago, ionic propulsion was considered the subject of science fiction. However, in recent years the technology to support ion engines has moved from theory to practice in a big way. The ESA’s SMART-1 mission, for example, successfully completed its mission to the Moon after taking a 13-month spiral path from the Earth.

SMART-1 used solar powered ion thrusters, where electrical energy was harvested from its solar panels and used to power its Hall-effect thrusters. Only 82 kg of xenon propellant was used to propel SMART-1 to the Moon. 1 kg of xenon propellant provided a delta-v of 45 m/s. This is a highly efficient form of propulsion, but it is by no means fast.
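Taking the article’s two figures at face value, the total delta-v SMART-1’s xenon supply could deliver is a simple estimate (a linear approximation; the rocket equation makes the true figure slightly different as the spacecraft lightens):

```python
XENON_KG = 82       # xenon propellant carried by SMART-1 (from the text)
DV_PER_KG = 45.0    # m/s of delta-v per kilogram of xenon (from the text)

# Linear scaling of the per-kilogram figure over the full propellant load.
total_dv = XENON_KG * DV_PER_KG  # m/s; roughly 3.7 km/s in total
```

A few kilometers per second over a year of thrusting: efficient, but slow.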

Artist’s concept of the Dawn mission above Ceres. Since its arrival, the spacecraft turned around to point the blue glow of its ion engine in the opposite direction. Image credit: NASA/JPL

One of the first missions to use ion drive technology was the Deep Space 1 mission to Comet Borrelly, launched in 1998. DS1 also used a xenon-powered ion drive, consuming 81.5 kg of propellant. Over 20 months of thrusting, DS1 managed to reach a velocity of 56,000 km/hr (35,000 miles/hr) during its flyby of the comet.

Ion thrusters are therefore more economical than rocket technology, as the thrust per unit mass of propellant (a.k.a. specific impulse) is far higher. But it takes a long time for ion thrusters to accelerate a spacecraft to any great speed, and the maximum velocity achievable depends on the fuel supply and how much electrical energy can be generated.
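Specific impulse can be made concrete: it is thrust per unit weight-flow of propellant, so the effective exhaust velocity is Isp multiplied by standard gravity. The Isp values below are typical textbook ranges for ion and chemical engines, assumed for illustration rather than taken from the missions above.

```python
G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_seconds: float) -> float:
    """Effective exhaust velocity from specific impulse: ve = Isp * g0."""
    return isp_seconds * G0

ve_ion = exhaust_velocity(3000)      # m/s; illustrative ion-engine Isp of ~3000 s
ve_chemical = exhaust_velocity(450)  # m/s; illustrative chemical-rocket Isp of ~450 s
```

An ion engine’s exhaust leaves several times faster, so each kilogram of propellant buys far more delta-v; the catch is the tiny thrust, hence months of gentle spiraling instead of minutes of hard burn.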

So if ionic propulsion were to be used for a mission to Proxima Centauri, the thrusters would need a huge source of energy production (i.e. nuclear power) and a large quantity of propellant (although still less than conventional rockets). But based on the assumption that a supply of 81.5 kg of xenon propellant translates into a maximum velocity of 56,000 km/hr (and that there are no other forms of propulsion available, such as a gravitational slingshot to accelerate it further), some calculations can be made.

In short, at a maximum velocity of 56,000 km/h, Deep Space 1 would take over 81,000 years to traverse the 4.24 light years between Earth and Proxima Centauri. To put that time-scale into perspective, that would be over 2,700 human generations. So it is safe to say that an interplanetary ion engine mission would be far too slow to be considered for a manned interstellar mission.
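The 81,000-year figure is just distance divided by speed, assuming the probe somehow held its 56,000 km/h flyby velocity for the entire trip (no acceleration, deceleration, or course changes):

```python
LY_KM = 9.4607e12            # kilometers in one light year
HOURS_PER_YEAR = 24 * 365.25

distance_km = 4.24 * LY_KM   # Earth to Proxima Centauri, from the text
speed_kmh = 56_000           # Deep Space 1's flyby velocity, from the text

years = distance_km / speed_kmh / HOURS_PER_YEAR  # on the order of 81,000 years
```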

Ionic propulsion is currently the slowest, but most fuel-efficient, form of space travel. Credit: NASA/JPL

But should ion thrusters be made larger and more powerful (i.e. the ion exhaust velocity would need to be significantly higher), and enough propellant could be hauled to keep the spacecraft going for the entire 4.243 light-year trip, that travel time could be greatly reduced. Still not enough to happen in someone’s lifetime, though.

Gravity Assist Method: The fastest existing means of space travel is known as the gravity assist method, which involves a spacecraft using the relative movement (i.e. orbit) and gravity of a planet to alter its path and speed. Gravitational assists are a very useful spaceflight technique, especially when using the Earth or another massive planet (like a gas giant) for a boost in velocity.

The Mariner 10 spacecraft was the first to use this method, using Venus’s gravitational pull to slingshot it towards Mercury in February of 1974. In 1979 and 1980, the Voyager 1 probe used Jupiter and Saturn for gravitational slingshots to attain its current velocity of 60,000 km/hr (38,000 miles/hr) and make it into interstellar space.

However, it was the Helios 2 mission, launched in 1976 to study the interplanetary medium from 0.3 AU to 1 AU from the Sun, that holds the record for the highest speed achieved with a gravity assist. At the time, Helios 1 (which launched in 1974) and Helios 2 held the record for the closest approach to the Sun. Helios 2 was launched by a conventional NASA Titan/Centaur launch vehicle and placed in a highly elliptical orbit.

A Helios probe being encapsulated for launch. Credit: Public Domain

Due to the large eccentricity (0.54) of its 190-day solar orbit, at perihelion Helios 2 was able to reach a maximum velocity of over 240,000 km/hr (150,000 miles/hr). This orbital speed was attained by the gravitational pull of the Sun alone. Technically, the Helios 2 perihelion velocity was not a gravitational slingshot but a maximum orbital velocity, yet it still holds the record for being the fastest man-made object regardless.

So, if Voyager 1 were traveling in the direction of the red dwarf Proxima Centauri at a constant velocity of 60,000 km/hr, it would take 76,000 years (or over 2,500 generations) to travel that distance. But if it could attain the record-breaking speed of Helios 2's close approach of the Sun, a constant speed of 240,000 km/hr, it would take 19,000 years (or over 600 generations) to travel 4.243 light years. Significantly better, but still not in the realm of practicality.
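The same distance-over-speed arithmetic reproduces both figures; a quick sketch (the kilometres-per-light-year value is an assumed constant):

```python
LY_KM = 9.4607e12                # kilometres in one light year
dist_km = 4.243 * LY_KM          # Earth to Proxima Centauri

speeds_kmh = {"Voyager 1": 60_000, "Helios 2": 240_000}
years = {name: dist_km / v / (24 * 365.25) for name, v in speeds_kmh.items()}

for name, t in years.items():
    print(f"{name}: {t:,.0f} years")
```

The output lands at roughly 76,000 years for Voyager 1's cruise speed and 19,000 years at Helios 2's perihelion speed, matching the figures above.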

Electromagnetic (EM) Drive:Another proposed method of interstellar travel comes in the form of the Radio Frequency (RF) Resonant Cavity Thruster, also known as the EM Drive. Originally proposed in 2001 by Roger K. Shawyer, a UK scientist who started Satellite Propulsion Research Ltd (SPR) to bring it to fruition, this drive is built around the idea that electromagnetic microwave cavities can allow for the direct conversion of electrical energy to thrust.

Whereas conventional electromagnetic thrusters are designed to propel a certain type of mass (such as ionized particles), this particular drive system relies on no reaction mass and emits no directional radiation. Such a proposal has met with a great deal of skepticism, mainly because it violates the law of Conservation of Momentum which states that within a system, the amount of momentum remains constant and is neither created nor destroyed, but only changes through the action of forces.

The EM Drive prototype produced by NASA/Eagleworks. Credit: NASA Spaceflight Forum

However, recent experiments with the technology have apparently yielded positive results. In July of 2014, at the 50th AIAA/ASME/SAE/ASEE Joint Propulsion Conference in Cleveland, Ohio, researchers from NASA's advanced propulsion research group claimed that they had successfully tested a new design for an electromagnetic propulsion drive.

This was followed up in April of 2015, when researchers at NASA Eagleworks (part of the Johnson Space Center) claimed that they had successfully tested the drive in a vacuum, an indication that it might actually work in space. In July of that same year, a research team from the Dresden University of Technology's Space Systems department built their own version of the engine and observed a detectable thrust.

And as early as 2010, Prof. Juan Yang of the Northwestern Polytechnical University in Xi'an, China, began publishing a series of papers about her research into EM Drive technology. This culminated in her 2012 paper, in which she reported higher input power (2.5 kW) and tested thrust (720 mN) levels. In 2014, she further reported extensive tests involving internal temperature measurements with embedded thermocouples, which seemed to confirm that the system worked.

Artist's concept of an interstellar craft equipped with an EM Drive. Credit: NASA Spaceflight Center

According to calculations based on the NASA prototype (which yielded a thrust-to-power estimate of 0.4 N/kilowatt), a spacecraft equipped with the EM drive could make the trip to Pluto in less than 18 months. That's one-sixth the time it took the New Horizons probe to get there, traveling at speeds close to 58,000 km/h (36,000 mph).

Sounds impressive. But even at that rate, it would take a ship equipped with EM engines over 13,000 years to make it to Proxima Centauri. Getting closer, but not quickly enough! And until such time as the technology can be definitively proven to work, it doesn't make much sense to put our eggs into this basket.

Nuclear Thermal and Nuclear Electric Propulsion (NTP/NEP):Another possibility for interstellar space flight is to use spacecraft equipped with nuclear engines, a concept which NASA has been exploring for decades. In a Nuclear Thermal Propulsion (NTP) rocket, uranium or deuterium reactions are used to heat liquid hydrogen inside a reactor, turning it into ionized hydrogen gas (plasma), which is then channeled through a rocket nozzle to generate thrust.

A Nuclear Electric Propulsion (NEP) rocket involves the same basic reactor converting its heat and energy into electrical energy, which would then power an electrical engine. In both cases, the rocket would rely on nuclear fission or fusion to generate propulsion rather than chemical propellants, which have been the mainstay of NASA and all other space agencies to date.

Artist's impression of a Crew Transfer Vehicle (CTV) using its nuclear-thermal rocket engines to slow down and establish orbit around Mars. Credit: NASA

Compared to chemical propulsion, both NTP and NEP offer a number of advantages. The first and most obvious is the virtually unlimited energy density they offer compared to rocket fuel. In addition, a nuclear-powered engine could also provide superior thrust relative to the amount of propellant used. This would cut the total amount of propellant needed, thus cutting launch weight and the cost of individual missions.

Although no nuclear-thermal engines have ever flown, several design concepts have been built and tested over the past few decades, and numerous concepts have been proposed. These have ranged from the traditional solid-core design such as the Nuclear Engine for Rocket Vehicle Application (NERVA) to more advanced and efficient concepts that rely on either a liquid or a gas core.

However, despite these advantages in fuel efficiency and specific impulse, the most sophisticated NTP concept has a maximum specific impulse of 5000 seconds (50 kN·s/kg). Using nuclear engines driven by fission or fusion, NASA scientists estimate it could take a spaceship only 90 days to get to Mars when the planet was at opposition, i.e. as close as 55,000,000 km from Earth.
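Two quick sanity checks on those numbers: a specific impulse of 5,000 seconds corresponds to an effective exhaust velocity of about 49 km/s (which is where the 50 kN·s/kg figure comes from), and a 90-day transit over 55 million km implies an average speed of roughly 25,000 km/h. A sketch, treating the transit as constant-speed (a deliberate simplification):

```python
G0 = 9.80665                   # standard gravity, m/s^2
isp_s = 5000                   # cited NTP specific impulse, seconds
v_exhaust_ms = isp_s * G0      # effective exhaust velocity, ~49,000 m/s

mars_km = 55_000_000           # Earth-Mars distance at opposition
avg_kmh = mars_km / (90 * 24)  # implied average transit speed, km/h
```

A real transit would accelerate and decelerate rather than cruise, so the average-speed figure is only an order-of-magnitude gauge.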

But adjusted for a one-way journey to Proxima Centauri, a nuclear rocket would still take centuries to accelerate to the point where it was flying at a fraction of the speed of light. It would then require several decades of travel time, followed by many more centuries of deceleration before reaching its destination. All told, we're still talking about 1,000 years before it reaches its destination. Good for interplanetary missions, not so good for interstellar ones.

Using existing technology, the time it would take to send scientists and astronauts on an interstellar mission would be prohibitively slow. If we want to make that journey within a single lifetime, or even a generation, something a bit more radical (aka. highly theoretical) will be needed. And while wormholes and jump engines may still be pure fiction at this point, there are some rather advanced ideas that have been considered over the years.

Nuclear Pulse Propulsion:Nuclear pulse propulsion is a theoretically possible form of fast space travel. The concept was originally proposed in 1946 by Stanislaw Ulam, a Polish-American mathematician who participated in the Manhattan Project, and preliminary calculations were then made by F. Reines and Ulam in 1947. The actual project known as Project Orion was initiated in 1958 and lasted until 1963.

The Project Orion concept for a nuclear-powered spacecraft. Credit: silodrome.co

Led by Ted Taylor at General Atomics and physicist Freeman Dyson of the Institute for Advanced Study in Princeton, Orion hoped to harness the power of pulsed nuclear explosions to provide a huge thrust with very high specific impulse (i.e. the amount of thrust relative to propellant weight, or the number of seconds the rocket can continually fire).

In a nutshell, the Orion design involves a large spacecraft carrying a large supply of thermonuclear warheads, achieving propulsion by releasing a bomb behind it and then riding the detonation wave with the help of a rear-mounted pad called a pusher. After each blast, the explosive force would be absorbed by this pusher pad, which translates the thrust into forward momentum.

Though hardly elegant by modern standards, the advantage of the design is that it achieves a high specific impulse, meaning it extracts the maximum amount of energy from its fuel source (in this case, nuclear bombs) at minimal cost. In addition, the concept could theoretically achieve very high speeds, with some estimates suggesting a ballpark figure as high as 5% the speed of light (or 5.4 x 10^7 km/hr).
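At 5% of light speed, both the km/hr conversion and the resulting Proxima Centauri trip time fall out of a couple of lines of arithmetic:

```python
C_KMH = 299_792.458 * 3600   # speed of light in km/h
v_kmh = 0.05 * C_KMH         # 5% of c, roughly 5.4 x 10^7 km/h

trip_years = 4.243 / 0.05    # distance in light years / speed as fraction of c
print(f"{v_kmh:.2e} km/h, {trip_years:.0f} years to Proxima Centauri")
```

So an Orion-style craft at its most optimistic speed estimate would still need around 85 years for the crossing, before accounting for acceleration and deceleration.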

But of course, there are inevitable downsides to the design. For one, a ship of this size would be incredibly expensive to build. According to estimates produced by Dyson in 1968, an Orion spacecraft that used hydrogen bombs to generate propulsion would weigh 400,000 to 4,000,000 metric tons. And at least three quarters of that weight would consist of nuclear bombs, with each warhead weighing approximately 1 metric ton.

Artist's concept of an Orion spacecraft leaving Earth. Credit: bisbos.com/Adrian Mann

All told, Dyson's most conservative estimates placed the total cost of building an Orion craft at 367 billion dollars. Adjusted for inflation, that works out to roughly $2.5 trillion, which accounts for over two thirds of the US government's current annual revenue. Hence, even at its lightest, the craft would be extremely expensive to manufacture.

There's also the slight problem of all the radiation it generates, not to mention the nuclear waste. In fact, it is for this reason that the project is believed to have been terminated, owing to the passage of the Partial Test Ban Treaty of 1963, which sought to limit nuclear testing and stop the excessive release of nuclear fallout into the planet's atmosphere.

Fusion Rockets:Another possibility within the realm of harnessed nuclear power involves rockets that rely on thermonuclear reactions to generate thrust. For this concept, energy is created when pellets of a deuterium/helium-3 mix are ignited in a reaction chamber by inertial confinement using electron beams (similar to what is done at the National Ignition Facility in California). This fusion reactor would detonate 250 pellets per second to create high-energy plasma, which would then be directed by a magnetic nozzle to create thrust.

Like a rocket that relies on a nuclear reactor, this concept offers advantages as far as fuel efficiency and specific impulse are concerned. Exhaust velocities of up to 10,600 km/s are estimated, which is far beyond the speed of conventional rockets. What's more, the technology has been studied extensively over the past few decades, and many proposals have been made.

Artist's concept of the Daedalus spacecraft, a two-stage fusion rocket that would achieve up to 12% the speed of light. Credit: Adrian Mann

For example, between 1973 and 1978, the British Interplanetary Society conducted a feasibility study known as Project Daedalus. Relying on then-current knowledge of fusion technology and existing methods, the study called for the creation of a two-stage unmanned scientific probe that would make the trip to Barnard's Star (5.9 light years from Earth) in a single lifetime.

The first stage, the larger of the two, would operate for 2.05 years and accelerate the spacecraft to 7.1% the speed of light (0.071 c). This stage would then be jettisoned, at which point the second stage would ignite its engine and accelerate the spacecraft up to about 12% of light speed (0.12 c) over the course of 1.8 years. The second-stage engine would then be shut down, and the ship would enter a 46-year cruise period.

According to the Project's estimates, the mission would take 50 years to reach Barnard's Star. Adjusted for Proxima Centauri, the same craft could make the trip in 36 years. But of course, the project also identified numerous stumbling blocks that made it unfeasible using the technology of the day, most of which are still unresolved.
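Those estimates survive an order-of-magnitude cross-check. Dividing distance by the 0.12 c cruise speed gives roughly the quoted timelines; the simple model below ignores the distance covered during the boost phases, which is why it slightly overshoots the 46-year cruise figure:

```python
boost_years = 2.05 + 1.8       # the two powered stages from the study
cruise_frac_c = 0.12           # final cruise speed as a fraction of c

barnard_cruise = 5.9 / cruise_frac_c    # ~49 years ballistic to Barnard's Star
proxima_total = 4.24 / cruise_frac_c    # ~35 years, close to the quoted 36
```

The boost-phase years plus a cruise in the high 40s lands near the study's 50-year total for Barnard's Star.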

For instance, there is the fact that helium-3 is scarce on Earth, which means it would have to be mined elsewhere (most likely on the Moon). Second, the reaction that drives the spacecraft requires that the energy released vastly exceed the energy used to trigger the reaction. And while experiments here on Earth have surpassed the break-even goal, we are still a long way from the kinds of energy output needed to power an interstellar spaceship.

Artist's concept of the Project Daedalus spacecraft, with a Saturn V rocket standing next to it for scale. Credit: Adrian Mann

Third, there is the cost factor of constructing such a ship. Even by the modest standard of Project Daedalus' unmanned craft, a fully fueled craft would weigh as much as 60,000 Mt. To put that in perspective, the gross weight of NASA's SLS is just over 30 Mt, and a single launch comes with a price tag of $5 billion (based on estimates made in 2013).

In short, a fusion rocket would not only be prohibitively expensive to build, it would require a level of fusion reactor technology that is currently beyond our means. Icarus Interstellar, an international organization of volunteer citizen scientists (some of whom have worked for NASA or the ESA), has since attempted to revitalize the concept with Project Icarus. Founded in 2009, the group hopes to make fusion propulsion (among other things) feasible in the near future.

Fusion Ramjet:Also known as the Bussard Ramjet, this theoretical form of propulsion was first proposed by physicist Robert W. Bussard in 1960. Basically, it is an improvement over the standard nuclear fusion rocket, which uses magnetic fields to compress hydrogen fuel to the point that fusion occurs. But in the Ramjet's case, an enormous electromagnetic funnel scoops hydrogen from the interstellar medium and dumps it into the reactor as fuel.

Artist's concept of the Bussard Ramjet, which would harness hydrogen from the interstellar medium to power its fusion engines. Credit: futurespacetransportation.weebly.com

As the ship picks up speed, the reactive mass is forced into a progressively constricted magnetic field, compressing it until thermonuclear fusion occurs. The magnetic field then directs the energy as rocket exhaust through an engine nozzle, thereby accelerating the vessel. Without any fuel tanks to weigh it down, a fusion ramjet could achieve speeds approaching 4% of the speed of light and travel anywhere in the galaxy.

However, the potential drawbacks of this design are numerous. For instance, there is the problem of drag. The ship relies on increased speed to accumulate fuel, but as it collides with more and more interstellar hydrogen, it may also lose speed especially in denser regions of the galaxy. Second, deuterium and tritium (used in fusion reactors here on Earth) are rare in space, whereas fusing regular hydrogen (which is plentiful in space) is beyond our current methods.

This concept has been popularized extensively in science fiction. Perhaps the best-known example is in the Star Trek franchise, where Bussard collectors are the glowing nacelles on warp engines. But in reality, our knowledge of fusion reactions needs to progress considerably before a ramjet is possible. We would also have to figure out that pesky drag problem before we could begin to consider building such a ship!

Laser Sail:Solar sails have long been considered a cost-effective way of exploring the Solar System. In addition to being relatively easy and cheap to manufacture, there's the added bonus that solar sails require no fuel. Rather than using rockets that require propellant, the sail uses the radiation pressure from stars to push large ultra-thin mirrors to high speeds.

IKAROS space probe with solar sail in flight (artist's depiction), showing a typical square sail configuration. Credit: Wikimedia Commons/Andrzej Mirecki

However, for the sake of interstellar flight, such a sail would need to be driven by focused energy beams (i.e. lasers or microwaves) to push it to a velocity approaching the speed of light. The concept was originally proposed by Robert Forward in 1984, who was at the time a physicist at Hughes Aircraft's research laboratories.

The concept retains the benefits of a solar sail, in that it requires no on-board fuel, but also benefits from the fact that laser energy does not dissipate with distance nearly as much as solar radiation. So while a laser-driven sail would take some time to accelerate to near-luminous speeds, it would be limited only by the speed of light itself.

According to a 2000 study produced by Robert Frisbee, a director of advanced propulsion concept studies at NASA's Jet Propulsion Laboratory, a laser sail could be accelerated to half the speed of light in less than a decade. He also calculated that a sail measuring about 320 km (200 miles) in diameter could reach Proxima Centauri in just over 12 years, while a sail measuring about 965 km (600 miles) in diameter would arrive in just under 9 years.
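Those timelines sit just above the hard lower bound: a craft cruising at a constant half light speed would cover 4.24 light years in just under 8.5 years, so Frisbee's 9- and 12-year figures mostly reflect the time spent accelerating. A one-line check:

```python
# Distance in light years divided by speed as a fraction of c
ideal_years = 4.24 / 0.5
print(f"{ideal_years:.2f} years at a constant 0.5 c")
```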

However, such a sail would have to be built from advanced composites to avoid melting. Combined with its size, this would add up to a pretty penny! Even worse is the sheer expense incurred from building a laser large and powerful enough to drive a sail to half the speed of light. According to Frisbee's own study, the lasers would require a steady flow of 17,000 terawatts of power, close to what the entire world consumes in a single day.

Antimatter Engine:Fans of science fiction are sure to have heard of antimatter. But in case you haven't, antimatter is essentially material composed of antiparticles, which have the same mass but opposite charge as regular particles. An antimatter engine, meanwhile, is a form of propulsion that uses interactions between matter and antimatter to generate power, or to create thrust.

Artist's concept of an antimatter-powered spacecraft for missions to Mars, as part of the Mars Reference Mission. Credit: NASA

In short, an antimatter engine involves particles of hydrogen and antihydrogen being slammed together. This reaction unleashes as much energy as a thermonuclear bomb, along with a shower of subatomic particles called pions and muons. These particles, which would travel at one-third the speed of light, are then channeled by a magnetic nozzle to generate thrust.

The advantage to this class of rocket is that a large fraction of the rest mass of a matter/antimatter mixture may be converted to energy, allowing antimatter rockets to have a far higher energy density and specific impulse than any other proposed class of rocket. Whats more, controlling this kind of reaction could conceivably push a rocket up to half the speed of light.

Pound for pound, this class of ship would be the fastest and most fuel-efficient ever conceived. Whereas conventional rockets require tons of chemical fuel to propel a spaceship to its destination, an antimatter engine could do the same job with just a few milligrams of fuel. In fact, the mutual annihilation of a half pound of hydrogen and antihydrogen particles would unleash more energy than a 10-megaton hydrogen bomb.
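The half-pound figure checks out against E = mc^2: annihilating half a pound of hydrogen with half a pound of antihydrogen converts roughly 0.45 kg of mass entirely to energy, which comes out close to 10 megatons of TNT. A sketch (the joules-per-megaton conversion factor is an assumed constant):

```python
C_MS = 299_792_458               # speed of light, m/s
mass_kg = 2 * 0.2268             # half a pound each of H and anti-H, in kg
energy_j = mass_kg * C_MS**2     # total annihilation, E = mc^2
megatons = energy_j / 4.184e15   # joules per megaton of TNT
print(f"{megatons:.1f} Mt of TNT equivalent")
```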

It is for this exact reason that NASA's Institute for Advanced Concepts (NIAC) has investigated the technology as a possible means for future Mars missions. Unfortunately, when contemplating missions to nearby star systems, the amount of fuel needed to make the trip is multiplied exponentially, and the cost involved in producing it would be astronomical (no pun intended!).

What matter and antimatter might look like annihilating one another. Credit: NASA/CXC/M. Weiss

According to a report prepared for the 39th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit (also by Robert Frisbee), a two-stage antimatter rocket would need over 815,000 metric tons (900,000 US tons) of fuel to make the journey to Proxima Centauri in approximately 40 years. That's not bad, as far as timelines go. But again, there's the cost...

Whereas a single gram of antimatter would produce an incredible amount of energy, it is estimated that producing just one gram would require approximately 25 million billion kilowatt-hours of energy and cost over a trillion dollars. At present, the total amount of antimatter that has been created by humans is less than 20 nanograms.

And even if we could produce antimatter cheaply, we would need a massive ship to hold the amount of fuel required. According to a report by Dr. Darrel Smith and Jonathan Webby of Embry-Riddle Aeronautical University in Arizona, an interstellar craft equipped with an antimatter engine could reach half the speed of light and arrive at Proxima Centauri in a little over 8 years. However, the ship itself would weigh 400 Mt, and would need 170 Mt of antimatter fuel to make the journey.

A possible way around this is to build a vessel that can create antimatter on the way, storing it as fuel. This concept, known as the Vacuum to Antimatter Rocket Interstellar Explorer System (VARIES), was proposed by Richard Obousy of Icarus Interstellar. Based on the idea of in-situ refueling, a VARIES ship would rely on large lasers (powered by enormous solar arrays) that would create particles of antimatter when fired at empty space.

Artist's concept of the Vacuum to Antimatter Rocket Interstellar Explorer System (VARIES), a concept that would use solar arrays to power lasers that create particles of antimatter to be used as fuel. Credit: Adrian Mann

Much like the Ramjet concept, this proposal solves the problem of carrying fuel by harvesting it from space. But once again, the sheer cost of such a ship would be prohibitive using current technology. In addition, the ability to create antimatter in large volumes is not something we currently have the power to do. There's also the matter of radiation, as matter-antimatter annihilation can produce blasts of high-energy gamma rays.

This not only presents a danger to the crew, requiring significant radiation shielding, but requires that the engines be shielded as well, to ensure they don't undergo atomic degradation from all the radiation they are exposed to. So, bottom line, the antimatter engine is completely impractical with our current technology and in the current budget environment.

Alcubierre Warp Drive:Fans of science fiction are also no doubt familiar with the concept of an Alcubierre (or Warp) Drive. Proposed by Mexican physicist Miguel Alcubierre in 1994, this method was an attempt to make FTL travel possible without violating Einstein's theory of Special Relativity. In short, the concept involves stretching the fabric of space-time in a wave, which would theoretically cause the space ahead of an object to contract and the space behind it to expand.

An object inside this wave (i.e. a spaceship) would then be able to ride this wave, known as a warp bubble, beyond relativistic speeds. Since the ship is not moving within the bubble but is being carried along as the bubble itself moves, the usual limits of space-time and relativity would not apply. The reason is that this method does not rely on moving faster than light in the local sense.

Artist Mark Rademaker's concept for the IXS Enterprise, a theoretical interstellar warp spacecraft. Credit: Mark Rademaker/flickr.com

It is only faster than light in the sense that the ship could reach its destination faster than a beam of light that was traveling outside the warp bubble. So assuming that a spacecraft could be outfitted with an Alcubierre Drive system, it would be able to make the trip to Proxima Centauri in less than 4 years. So when it comes to theoretical interstellar space travel, this is by far the most promising technology, at least in terms of speed.

Naturally, the concept has received its share of counter-arguments over the years. Chief amongst them is the fact that it does not take quantum mechanics into account and could be invalidated by a Theory of Everything (such as loop quantum gravity). Calculations of the amount of energy required have also indicated that a warp drive would demand a prohibitive amount of power to work. Other uncertainties include the safety of such a system, the effects on space-time at the destination, and violations of causality.

However, in 2012, NASA scientist Harold Sonny White announced that he and his colleagues had begun researching the possibility of an Alcubierre Drive. In a paper titled Warp Field Mechanics 101, White claimed that they had constructed an interferometer that would detect the spatial distortions produced by the expanding and contracting space-time of the Alcubierre metric.

In 2013, the Jet Propulsion Laboratory published results of a warp field test conducted under vacuum conditions. Unfortunately, the results were reported as inconclusive. In the long term, we may find that the Alcubierre metric violates one or more fundamental laws of nature. And even if the physics should prove sound, there is no guarantee the effect can be harnessed for the sake of FTL flight.

In conclusion, if you were hoping to travel to the nearest star within your lifetime, the outlook isn't very good. However, if mankind felt the incentive to build an interstellar ark filled with a self-sustaining community of space-faring humans, it might be possible to travel there in a little under a century, if we were willing to invest in the requisite technology.

But all the available methods are still very limited when it comes to transit time. And while taking hundreds or thousands of years to reach the nearest star may matter less to us if our very survival was at stake, it is simply not practical as far as space exploration and travel goes. By the time a mission reached even the closest stars in our galaxy, the technology employed would be obsolete and humanity might not even exist back home anymore.

So unless we make a major breakthrough in the realms of fusion, antimatter, or laser technology, we will either have to be content with exploring our own Solar System, or be forced to accept a very long-term transit strategy.

We have written many interesting articles about space travel here at Universe Today. Here's Will We Ever Reach Another Star?, Warp Drives May Come With a Killer Downside, The Alcubierre Warp Drive, How Far Is A Light Year?, When Light Just Isn't Fast Enough, When Will We Become Interstellar?, and Can We Travel Faster Than the Speed of Light?

For more information, be sure to consult NASA's pages on Propulsion Systems of the Future, and Is Warp Drive Real?

More:

How Long Would It Take To Travel To The Nearest Star …

How Long Would It Take To Travel To The Nearest Star …

Weve all asked this question at some point in our lives: How long would it take to travel to the stars? Could it be within a persons own lifetime, and could this kind of travel become the norm someday? There are many possible answers to this question some very simple, others in the realms of science fiction. But coming up with a comprehensive answer means taking a lot of things into consideration.

Unfortunately, any realistic assessment is likely to produce answers that would totally discourage futurists and enthusiasts of interstellar travel. Like it or not, space is very large, and our technology is still very limited. But should we ever contemplate leaving the nest, we will have a range of options for getting to the nearest Solar Systems in our galaxy.

The nearest star to Earth is our Sun, which is a fairly average star in the Hertzsprung Russell Diagrams Main Sequence. This means that it is highly stable, providing Earth with just the right type of sunlight for life to evolve on our planet. We know there are planets orbiting other stars near to our Solar System, and many of these stars are similar to our own.

In the future, should mankind wish to leave the Solar System, well have a huge choice of stars we could travel to, and many could have the right conditions for life to thrive. But where would we go and how long would it take for us to get there? Just remember, this is all speculative and there is currently no benchmark for interstellar trips. That being said, here we go!

Over 2000 exoplanets have been identified, many of which are believed to be habitable. Credit: phl.upl.edu

As already noted, the closest star to our Solar System is Proxima Centauri, which is why it makes the most sense to plot an interstellar mission to this system first. As part of a triple star system called Alpha Centauri, Proxima is about 4.24 light years (or 1.3 parsecs) from Earth. Alpha Centauri is actually the brightest star of the three in the system part of a closely orbiting binary 4.37 light years from Earth whereas Proxima Centauri (the dimmest of the three) is an isolated red dwarf about 0.13 light years from the binary.

And while interstellar travel conjures up all kinds of visions of Faster-Than-Light (FTL) travel, ranging from warp speed and wormholes to jump drives, such theories are either highly speculative (such as the Alcubierre Drive) or entirely the province of science fiction. In all likelihood, any deep space mission will likely take generations to get there, rather than a few days or in an instantaneous flash.

So, starting with one of the slowest forms of space travel, how long will it take to get to Proxima Centauri?

The question of how long would it take to get somewhere in space is somewhat easier when dealing with existing technology and bodies within our Solar System. For instance, using the technology that powered the New Horizons mission which consisted of 16 thrusters fueled with hydrazine monopropellant reaching the Moon would take a mere 8 hours and 35 minutes.

On the other hand, there is the European Space Agencys (ESA) SMART-1 mission, which took its time traveling to the Moon using the method of ionic propulsion. With this revolutionary technology, a variation of which has since been used by the Dawn spacecraft to reach Vesta, the SMART-1 mission took one year, one month and two weeks to reach the Moon.

So, from the speedy rocket-propelled spacecraft to the economical ion drive, we have a few options for getting around local space plus we could use Jupiter or Saturn for a hefty gravitational slingshot. However, if we were to contemplate missions to somewhere a little more out of the way, we would have to scale up our technology and look at whats really possible.

When we say possible methods, we are talking about those that involve existing technology, or those that do not yet exist, but are technically feasible. Some, as you will see, are time-honored and proven, while others are emerging or still on the board. In just about all cases though, they present a possible, but extremely time-consuming or expensive, scenario for getting to even the closest stars

Ionic Propulsion:Currently, the slowest form of propulsion, and the most fuel-efficient, is the ion engine. A few decades ago, ionic propulsion was considered to be the subject of science fiction. However, in recent years, the technology to support ion engines has moved from theory to practice in a big way. The ESAs SMART-1 mission for example successfully completed its mission to the Moon after taking a 13 month spiral path from the Earth.

SMART-1 used solar powered ion thrusters, where electrical energy was harvested from its solar panels and used to power its Hall-effect thrusters. Only 82 kg of xenon propellant was used to propel SMART-1 to the Moon. 1 kg of xenon propellant provided a delta-v of 45 m/s. This is a highly efficient form of propulsion, but it is by no means fast.

Artists concept of Dawn mission above Ceres. Since its arrival, the spacecraft turned around to point the blue glow of its ion engine in the opposite direction. Image credit: NASA/JPL

One of the first missions to use ion drive technology was the Deep Space 1 mission to Comet Borrelly that took place in 1998. DS1 also used a xenon-powered ion drive, consuming 81.5 kg of propellant. Over 20 months of thrusting, DS1 was managed to reach a velocity of 56,000 km/hr (35,000 miles/hr) during its flyby of the comet.

Ion thrusters are therefore more economical than rocket technology, as the thrust per unit mass of propellant (a.k.a. specific impulse) is far higher. But it takes a long time for ion thrusters to accelerate a spacecraft to any great speed, and the maximum velocity they can achieve depends on the fuel supply and how much electrical energy the spacecraft can generate.

So if ionic propulsion were to be used for a mission to Proxima Centauri, the thrusters would need a huge source of energy production (i.e. nuclear power) and a large quantity of propellant (although still less than conventional rockets). But based on the assumption that a supply of 81.5 kg of xenon propellant translates into a maximum velocity of 56,000 km/hr (and that there are no other forms of propulsion available, such as a gravitational slingshot to accelerate it further), some calculations can be made.

In short, at a maximum velocity of 56,000 km/h, Deep Space 1 would take over 81,000 years to traverse the 4.24 light years between Earth and Proxima Centauri. To put that time-scale into perspective, that would be over 2,700 human generations. So it is safe to say that an ion-engine mission would be far too slow to be considered for a manned interstellar mission.
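The arithmetic behind that figure is straightforward. A back-of-the-envelope calculation, using the article's numbers and assuming roughly 30 years per human generation:

```python
# Trip time for an ion-driven craft at a constant 56,000 km/h
# over the 4.24 light years to Proxima Centauri.
LIGHT_YEAR_KM = 9.461e12          # kilometres in one light year
distance_km = 4.24 * LIGHT_YEAR_KM
speed_kmh = 56_000

years = distance_km / speed_kmh / (24 * 365.25)
generations = years / 30          # assumed ~30 years per generation
print(f"{years:,.0f} years, or about {generations:,.0f} generations")
```

This lands just under 82,000 years, consistent with the "over 81,000 years" quoted above.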

Ionic propulsion is currently the slowest, but most fuel-efficient, form of space travel. Credit: NASA/JPL

But should ion thrusters be made larger and more powerful (i.e. the ion exhaust velocity would need to be significantly higher), and enough propellant be hauled to keep the spacecraft going for the entire 4.243 light-year trip, that travel time could be greatly reduced. Still not enough to happen in someone's lifetime, though.

Gravity Assist Method: The fastest existing means of space travel is known as the gravity assist method, which involves a spacecraft using the relative movement (i.e. orbit) and gravity of a planet to alter its path and speed. Gravitational assists are a very useful spaceflight technique, especially when using the Earth or another massive planet (like a gas giant) for a boost in velocity.

The Mariner 10 spacecraft was the first to use this method, using Venus's gravitational pull to slingshot it towards Mercury in February 1974. In 1979 and 1980, the Voyager 1 probe used Jupiter and Saturn for gravitational slingshots to attain its current velocity of 60,000 km/hr (38,000 miles/hr) and make it into interstellar space.

However, it is the Helios 2 mission, launched in 1976 to study the interplanetary medium from 0.3 AU to 1 AU from the Sun, that holds the record for the highest speed achieved with a gravity assist. At the time, Helios 1 (which launched in 1974) and Helios 2 held the record for the closest approach to the Sun. Helios 2 was launched by a conventional NASA Titan/Centaur launch vehicle and placed in a highly elliptical orbit.

A Helios probe being encapsulated for launch. Credit: Public Domain

Due to the large eccentricity (0.54) of its 190-day solar orbit, at perihelion Helios 2 was able to reach a maximum velocity of over 240,000 km/hr (150,000 miles/hr). This orbital speed was attained by the gravitational pull of the Sun alone. Technically, the Helios 2 perihelion velocity was not a gravitational slingshot but a maximum orbital velocity, yet the probe still holds the record for the fastest man-made object.

So if Voyager 1 were traveling in the direction of the red dwarf Proxima Centauri at a constant velocity of 60,000 km/hr, it would take 76,000 years (or over 2,500 generations) to travel that distance. But if it could attain the record-breaking speed of Helios 2's close approach of the Sun, a constant speed of 240,000 km/hr, it would take 19,000 years (or over 600 generations) to travel 4.243 light years. Significantly better, but still not in the realm of practicality.
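Both comparisons follow from the same simple distance-over-speed calculation, again assuming ~30 years per generation:

```python
# Trip times to Proxima Centauri (4.243 light years) at the constant
# speeds quoted for Voyager 1 and Helios 2.
LIGHT_YEAR_KM = 9.461e12
distance_km = 4.243 * LIGHT_YEAR_KM
speeds_kmh = {"Voyager 1": 60_000, "Helios 2": 240_000}

trip_years = {name: distance_km / v / (24 * 365.25)
              for name, v in speeds_kmh.items()}
for name, years in trip_years.items():
    print(f"{name}: {years:,.0f} years (~{years / 30:,.0f} generations)")
```

Quadrupling the speed cuts the trip to a quarter, as expected, but the journey remains tens of millennia either way.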

Electromagnetic (EM) Drive: Another proposed method of interstellar travel comes in the form of the Radio Frequency (RF) Resonant Cavity Thruster, also known as the EM Drive. Originally proposed in 2001 by Roger K. Shawyer, a UK scientist who started Satellite Propulsion Research Ltd (SPR) to bring it to fruition, this drive is built around the idea that electromagnetic microwave cavities can allow for the direct conversion of electrical energy to thrust.

Whereas conventional electromagnetic thrusters are designed to propel a certain type of mass (such as ionized particles), this particular drive system relies on no reaction mass and emits no directional radiation. Such a proposal has met with a great deal of skepticism, mainly because it violates the law of conservation of momentum, which states that within a system the amount of momentum remains constant: it is neither created nor destroyed, but only changes through the action of forces.

The EM Drive prototype produced by NASA/Eagleworks. Credit: NASA Spaceflight Forum

However, recent experiments with the technology have apparently yielded positive results. In July 2014, at the 50th AIAA/ASME/SAE/ASEE Joint Propulsion Conference in Cleveland, Ohio, researchers from NASA's advanced propulsion research group claimed that they had successfully tested a new design for an electromagnetic propulsion drive.

This was followed up in April 2015, when researchers at NASA Eagleworks (part of the Johnson Space Center) claimed that they had successfully tested the drive in a vacuum, an indication that it might actually work in space. In July of that same year, a research team from the Dresden University of Technology's Space Systems department built their own version of the engine and observed a detectable thrust.

And in 2010, Prof. Juan Yang of the Northwestern Polytechnical University in Xi'an, China, began publishing a series of papers about her research into EM Drive technology. This culminated in her 2012 paper, in which she reported higher input power (2.5 kW) and tested thrust (720 mN) levels. In 2014, she further reported extensive tests involving internal temperature measurements with embedded thermocouples, which seemed to confirm that the system worked.

Artist's concept of an interstellar craft equipped with an EM Drive. Credit: NASA Spaceflight Center

According to calculations based on the NASA prototype (which yielded a thrust-to-power estimate of 0.4 N/kilowatt), a spacecraft equipped with the EM drive could make the trip to Pluto in less than 18 months. That's one-sixth the time it took the New Horizons probe, which was traveling at speeds of close to 58,000 km/h (36,000 mph).

Sounds impressive. But even at that rate, a ship equipped with EM engines would take over 13,000 years to make it to Proxima Centauri. Getting closer, but not quickly enough! And until such time as the technology can be definitively proven to work, it doesn't make much sense to put our eggs into this basket.
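To see what a 13,000-year trip implies, we can back out the average cruising speed it corresponds to:

```python
# Average speed implied by a 13,000-year trip over 4.243 light years.
LIGHT_YEAR_KM = 9.461e12
distance_km = 4.243 * LIGHT_YEAR_KM
trip_years = 13_000

speed_kmh = distance_km / (trip_years * 365.25 * 24)
print(f"Implied average speed: {speed_kmh:,.0f} km/h")
```

That works out to roughly 350,000 km/h, about six times New Horizons' cruise speed, yet still a tiny fraction of the speed of light.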

Nuclear Thermal and Nuclear Electric Propulsion (NTP/NEP): Another possibility for interstellar spaceflight is to use spacecraft equipped with nuclear engines, a concept which NASA has been exploring for decades. In a Nuclear Thermal Propulsion (NTP) rocket, uranium or deuterium reactions are used to heat liquid hydrogen inside a reactor, turning it into ionized hydrogen gas (plasma), which is then channeled through a rocket nozzle to generate thrust.

A Nuclear Electric Propulsion (NEP) rocket involves the same basic reactor converting its heat into electrical energy, which would then power an electrical engine. In both cases, the rocket would rely on nuclear fission or fusion to generate propulsion, rather than the chemical propellants that have been the mainstay of NASA and all other space agencies to date.

Artist's impression of a Crew Transfer Vehicle (CTV) using its nuclear-thermal rocket engines to slow down and establish orbit around Mars. Credit: NASA

Compared to chemical propulsion, both NTP and NEP offer a number of advantages. The first and most obvious is the virtually unlimited energy density they offer compared to rocket fuel. In addition, a nuclear-powered engine could also provide superior thrust relative to the amount of propellant used. This would cut the total amount of propellant needed, thus cutting launch weight and the cost of individual missions.

Although no nuclear-thermal engines have ever flown, several design concepts have been built and tested over the past few decades, and numerous concepts have been proposed. These have ranged from the traditional solid-core design such as the Nuclear Engine for Rocket Vehicle Application (NERVA) to more advanced and efficient concepts that rely on either a liquid or a gas core.

However, despite these advantages in fuel efficiency and specific impulse, the most sophisticated NTP concept has a maximum specific impulse of 5,000 seconds (50 kN·s/kg). Using nuclear engines driven by fission or fusion, NASA scientists estimate it could take a spaceship only 90 days to get to Mars when the planet is at opposition, i.e. as close as 55,000,000 km from Earth.
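The two forms of that specific-impulse figure are equivalent: multiplying the value in seconds by standard gravity gives the effective exhaust velocity, which is the same quantity expressed in kN·s/kg (i.e. m/s):

```python
# Converting a 5,000 s specific impulse into effective exhaust
# velocity via v_e = Isp * g0, matching the quoted "50 kN·s/kg".
G0 = 9.80665                      # standard gravity, m/s^2
isp_s = 5000
v_exhaust = isp_s * G0            # m/s
print(f"Effective exhaust velocity: {v_exhaust / 1000:.0f} km/s")
```

At roughly 49 km/s, that is about ten times the exhaust velocity of the best chemical rockets, which is where the fuel-efficiency advantage comes from.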

But adjusted for a one-way journey to Proxima Centauri, a nuclear rocket would still take centuries to accelerate to the point where it was flying at a fraction of the speed of light. It would then require several decades of travel time, followed by many more centuries of deceleration before reaching its destination. All told, we're still talking about 1,000 years before it reaches its destination. Good for interplanetary missions, not so good for interstellar ones.

Using existing technology, the time it would take to send scientists and astronauts on an interstellar mission would be prohibitively long. If we want to make that journey within a single lifetime, or even a generation, something a bit more radical (a.k.a. highly theoretical) will be needed. And while wormholes and jump engines may still be pure fiction at this point, there are some rather advanced ideas that have been considered over the years.

Nuclear Pulse Propulsion: Nuclear pulse propulsion is a theoretically possible form of fast space travel. The concept was originally proposed in 1946 by Stanislaw Ulam, a Polish-American mathematician who participated in the Manhattan Project, and preliminary calculations were then made by F. Reines and Ulam in 1947. The actual project, known as Project Orion, was initiated in 1958 and lasted until 1963.

The Project Orion concept for a nuclear-powered spacecraft. Credit: silodrome.co

Led by Ted Taylor at General Atomics and physicist Freeman Dyson from the Institute for Advanced Study in Princeton, Orion hoped to harness the power of pulsed nuclear explosions to provide a huge thrust with very high specific impulse (i.e. the amount of thrust produced per unit of propellant consumed).

In a nutshell, the Orion design involves a large spacecraft carrying a large supply of thermonuclear warheads, achieving propulsion by releasing a bomb behind it and then riding the detonation wave with the help of a rear-mounted pad called a pusher plate. After each blast, the explosive force is absorbed by this pusher pad, which translates the thrust into forward momentum.

Though hardly elegant by modern standards, the advantage of the design is that it achieves a high specific impulse, meaning it extracts the maximum amount of energy from its fuel source (in this case, nuclear bombs) at minimal cost. In addition, the concept could theoretically achieve very high speeds, with some estimates suggesting a ballpark figure as high as 5% of the speed of light (5.4 × 10⁷ km/hr).
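At that optimistic top speed, the trip to Proxima Centauri finally drops to a human timescale. A quick calculation, ignoring the time spent accelerating and decelerating:

```python
# Cruise time to Proxima Centauri at Orion's ballpark 5% of c.
C_KMH = 1.079e9                   # speed of light in km/h
speed_kmh = 0.05 * C_KMH          # ~5.4e7 km/h
distance_km = 4.243 * 9.461e12    # 4.243 light years in km

years = distance_km / speed_kmh / (24 * 365.25)
print(f"About {years:.0f} years")
```

Roughly 85 years: still more than a single lifetime, but within reach of a generation ship.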

But of course, there are inevitable downsides to the design. For one, a ship of this size would be incredibly expensive to build. According to estimates produced by Dyson in 1968, an Orion spacecraft that used hydrogen bombs to generate propulsion would weigh 400,000 to 4,000,000 metric tons. And at least three quarters of that weight would consist of nuclear bombs, each warhead weighing approximately 1 metric ton.

Artist's concept of an Orion spacecraft leaving Earth. Credit: bisbos.com/Adrian Mann

All told, Dyson's most conservative estimates placed the total cost of building an Orion craft at 367 billion dollars. Adjusted for inflation, that works out to roughly $2.5 trillion, which accounts for over two thirds of the US government's current annual revenue. Hence, even at its lightest, the craft would be extremely expensive to manufacture.

There's also the slight problem of all the radiation it generates, not to mention the nuclear waste. In fact, it is for this reason that the project is believed to have been terminated: the Partial Test Ban Treaty of 1963 sought to limit nuclear testing and stop the excessive release of nuclear fallout into the planet's atmosphere.

Fusion Rockets: Another possibility within the realm of harnessed nuclear power involves rockets that rely on thermonuclear reactions to generate thrust. In this concept, energy is created when pellets of a deuterium/helium-3 mix are ignited in a reaction chamber by inertial confinement using electron beams (similar to what is done at the National Ignition Facility in California). Such a fusion reactor would detonate 250 pellets per second to create high-energy plasma, which would then be directed by a magnetic nozzle to create thrust.

Like a rocket that relies on a nuclear reactor, this concept offers advantages as far as fuel efficiency and specific impulse are concerned. Exhaust velocities of up to 10,600 km/s are estimated, far beyond the speed of conventional rockets. What's more, the technology has been studied extensively over the past few decades, and many proposals have been made.

Artist's concept of the Daedalus spacecraft, a two-stage fusion rocket that would achieve up to 12% the speed of light. Credit: Adrian Mann

For example, between 1973 and 1978, the British Interplanetary Society conducted a feasibility study known as Project Daedalus. Relying on current knowledge of fusion technology and existing methods, the study called for the creation of a two-stage unmanned scientific probe making a trip to Barnard's Star (5.9 light years from Earth) in a single lifetime.

The first stage, the larger of the two, would operate for 2.05 years and accelerate the spacecraft to 7.1% the speed of light (0.071 c). This stage would then be jettisoned, at which point the second stage would ignite its engine and accelerate the spacecraft up to about 12% of light speed (0.12 c) over the course of 1.8 years. The second-stage engine would then be shut down and the ship would enter into a 46-year cruise period.

According to the Project's estimates, the mission would take 50 years to reach Barnard's Star. Adjusted for Proxima Centauri, the same craft could make the trip in 36 years. But of course, the project also identified numerous stumbling blocks that made it unfeasible using then-current technology, most of which are still unresolved.
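The quoted stage durations and speeds can be roughly reconciled with the 50-year, 5.9-light-year mission by approximating each burn as traveling at its average speed (a crude assumption, since the real thrust profile was not constant):

```python
# Rough reconstruction of the Daedalus flight profile. Speeds are in
# units of c and times in years, so speed * time gives light years.
stage1_years, stage1_v0, stage1_v1 = 2.05, 0.0, 0.071
stage2_years, stage2_v0, stage2_v1 = 1.80, 0.071, 0.12
cruise_years, cruise_v = 46.0, 0.12

dist_ly = (stage1_years * (stage1_v0 + stage1_v1) / 2
           + stage2_years * (stage2_v0 + stage2_v1) / 2
           + cruise_years * cruise_v)
total_years = stage1_years + stage2_years + cruise_years
print(f"{total_years:.1f} years, ~{dist_ly:.2f} light years covered")
```

This gives just under 50 years and about 5.8 light years, close to the study's stated figures; the small shortfall reflects the averaging approximation.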

For instance, there is the fact that helium-3 is scarce on Earth, which means it would have to be mined elsewhere (most likely on the Moon). Second, the reaction that drives the spacecraft requires that the energy released vastly exceed the energy used to trigger the reaction. And while experiments here on Earth have approached the break-even goal, we are still a long way from the kinds of energy needed to power an interstellar spaceship.

Artist's concept of the Project Daedalus spacecraft, with a Saturn V rocket standing next to it for scale. Credit: Adrian Mann

Third, there is the cost factor of constructing such a ship. Even by the modest standard of Project Daedalus' unmanned craft, a fully fueled vessel would weigh as much as 60,000 metric tons. To put that in perspective, the gross weight of NASA's SLS is just over 30 metric tons, and a single launch comes with a price tag of $5 billion (based on estimates made in 2013).

In short, a fusion rocket would not only be prohibitively expensive to build, it would require a level of fusion reactor technology that is currently beyond our means. Icarus Interstellar, an international organization of volunteer citizen scientists (some of whom have worked for NASA or the ESA), has since attempted to revitalize the concept with Project Icarus. Founded in 2009, the group hopes to make fusion propulsion (among other things) feasible in the near future.

Fusion Ramjet: Also known as the Bussard Ramjet, this theoretical form of propulsion was first proposed by physicist Robert W. Bussard in 1960. Basically, it is an improvement over the standard nuclear fusion rocket, which uses magnetic fields to compress hydrogen fuel to the point that fusion occurs. But in the Ramjet's case, an enormous electromagnetic funnel scoops hydrogen from the interstellar medium and dumps it into the reactor as fuel.

Artist's concept of the Bussard Ramjet, which would harness hydrogen from the interstellar medium to power its fusion engines. Credit: futurespacetransportation.weebly.com

As the ship picks up speed, the reactive mass is forced into a progressively constricted magnetic field, compressing it until thermonuclear fusion occurs. The magnetic field then directs the energy as rocket exhaust through an engine nozzle, thereby accelerating the vessel. Without any fuel tanks to weigh it down, a fusion ramjet could achieve speeds approaching 4% of the speed of light and travel anywhere in the galaxy.

However, the potential drawbacks of this design are numerous. For instance, there is the problem of drag. The ship relies on increased speed to accumulate fuel, but as it collides with more and more interstellar hydrogen, it may also lose speed, especially in denser regions of the galaxy. Second, deuterium and tritium (used in fusion reactors here on Earth) are rare in space, whereas fusing regular hydrogen (which is plentiful in space) is beyond our current methods.

This concept has been popularized extensively in science fiction. Perhaps the best-known example is in the Star Trek franchise, where Bussard collectors are the glowing nacelles on warp engines. But in reality, our knowledge of fusion reactions needs to progress considerably before a ramjet is possible. We would also have to figure out that pesky drag problem before we could begin to consider building such a ship!

Laser Sail: Solar sails have long been considered a cost-effective way of exploring the Solar System. In addition to being relatively easy and cheap to manufacture, there's the added bonus that solar sails require no fuel. Rather than using rockets that require propellant, the sail uses the radiation pressure from stars to push large ultra-thin mirrors to high speeds.

IKAROS space probe with solar sail in flight (artist's depiction), showing a typical square sail configuration. Credit: Wikimedia Commons/Andrzej Mirecki

However, for the sake of interstellar flight, such a sail would need to be driven by focused energy beams (i.e. lasers or microwaves) to push it to a velocity approaching the speed of light. The concept was originally proposed in 1984 by Robert Forward, then a physicist at the Hughes Aircraft research laboratories.

The concept retains the benefits of a solar sail, in that it requires no on-board fuel, and it also benefits from the fact that laser energy does not dissipate with distance nearly as much as solar radiation. So while a laser-driven sail would take some time to accelerate to near-luminous speeds, it would be limited only by the speed of light itself.

According to a 2000 study by Robert Frisbee, a director of advanced propulsion concept studies at NASA's Jet Propulsion Laboratory, a laser sail could be accelerated to half the speed of light in less than a decade. He also calculated that a sail measuring about 320 km (200 miles) in diameter could reach Proxima Centauri in just over 12 years, while a sail measuring about 965 km (600 miles) in diameter would arrive in just under 9 years.

However, such a sail would have to be built from advanced composites to avoid melting. Combined with its size, this would add up to a pretty penny! Even worse is the sheer expense of building a laser large and powerful enough to drive a sail to half the speed of light. According to Frisbee's own study, the lasers would require a steady flow of 17,000 terawatts of power, vastly more than the entire world currently consumes.
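For a sense of scale, the physics of a photon sail is simple: a perfectly reflecting sail feels a force of F = 2P/c from a beam of power P. The sketch below uses the 17,000 TW figure from Frisbee's study, but the ten-year acceleration window, the perfect-reflection assumption, and the non-relativistic math are all simplifications of our own:

```python
# Idealized photon-sail numbers: perfect reflection gives F = 2P/c.
# The 17,000 TW beam is Frisbee's figure; the 10-year acceleration,
# perfect reflection, and non-relativistic math are assumptions.
C = 2.998e8                        # speed of light, m/s
beam_power_w = 17_000e12           # 17,000 TW
force_n = 2 * beam_power_w / C     # thrust from perfect reflection

target_v = 0.5 * C                 # half the speed of light
accel_time_s = 10 * 365.25 * 24 * 3600
a = target_v / accel_time_s        # required constant acceleration
mass_budget_kg = force_n / a       # heaviest craft the beam could push
print(f"Thrust: {force_n:.2e} N, mass budget: {mass_budget_kg/1000:,.0f} t")
```

Even that colossal beam yields only about a hundred meganewtons of thrust, enough to push a sail-and-payload mass on the order of a couple of hundred thousand tonnes to half of c in a decade under these idealized assumptions.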

Antimatter Engine: Fans of science fiction are sure to have heard of antimatter. But in case you haven't, antimatter is essentially material composed of antiparticles, which have the same mass as regular particles but the opposite charge. An antimatter engine, meanwhile, is a form of propulsion that uses interactions between matter and antimatter to generate power or create thrust.

Artist's concept of an antimatter-powered spacecraft for missions to Mars, as part of the Mars Reference Mission. Credit: NASA

In short, an antimatter engine involves particles of hydrogen and antihydrogen being slammed together. This reaction unleashes as much energy as a thermonuclear bomb, along with a shower of subatomic particles called pions and muons. These particles, which would travel at one-third the speed of light, are then channeled by a magnetic nozzle to generate thrust.

The advantage of this class of rocket is that a large fraction of the rest mass of a matter/antimatter mixture may be converted to energy, allowing antimatter rockets to have a far higher energy density and specific impulse than any other proposed class of rocket. What's more, controlling this kind of reaction could conceivably push a rocket up to half the speed of light.

Pound for pound, this class of ship would be the fastest and most fuel-efficient ever conceived. Whereas conventional rockets require tons of chemical fuel to propel a spaceship to its destination, an antimatter engine could do the same job with just a few milligrams of fuel. In fact, the mutual annihilation of a half pound of hydrogen and antihydrogen particles would unleash more energy than a 10-megaton hydrogen bomb.
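That last claim follows directly from E = mc². Taking "a half pound of hydrogen and antihydrogen" to mean roughly one pound (0.454 kg) of total annihilating mass:

```python
# E = mc^2 for roughly one pound of total annihilating mass
# (a half pound each of hydrogen and antihydrogen).
C = 2.998e8                       # speed of light, m/s
mass_kg = 0.454
energy_j = mass_kg * C**2
megatons = energy_j / 4.184e15    # 1 megaton TNT = 4.184e15 J
print(f"~{megatons:.0f} megatons of TNT equivalent")
```

This lands right at the 10-megaton scale quoted above, from less than half a kilogram of fuel.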

It is for this exact reason that NASA's Institute for Advanced Concepts (NIAC) has investigated the technology as a possible means for future Mars missions. Unfortunately, when contemplating missions to nearby star systems, the amount of fuel needed to make the trip grows enormously, and the cost involved in producing it would be astronomical (no pun intended!).

What matter and antimatter might look like annihilating one another. Credit: NASA/CXC/M. Weiss

According to a report prepared for the 39th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit (also by Robert Frisbee), a two-stage antimatter rocket would need over 815,000 metric tons (900,000 US tons) of fuel to make the journey to Proxima Centauri in approximately 40 years. That's not bad, as far as timelines go. But again, the cost…

Whereas a single gram of antimatter would produce an incredible amount of energy, it is estimated that producing just one gram would require approximately 25 million billion kilowatt-hours of energy and cost over a trillion dollars. At present, the total amount of antimatter that has been created by humans is less than 20 nanograms.
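To put that production figure in perspective, we can compare it against global electricity output. The 2.5 × 10¹³ kWh per year figure for world generation is our own rough ballpark, not from the article:

```python
# Scale of antimatter production: the article's 25 million billion kWh
# per gram, against an assumed ~2.5e13 kWh of annual world
# electricity generation (a rough ballpark, roughly 25,000 TWh/yr).
energy_per_gram_kwh = 25e15
world_annual_kwh = 2.5e13
years_of_output = energy_per_gram_kwh / world_annual_kwh
print(f"~{years_of_output:,.0f} years of global electricity per gram")
```

In other words, by these numbers a single gram would consume on the order of a millennium of the entire planet's electricity production.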

And even if we could produce antimatter cheaply, we would need a massive ship to hold the amount of fuel required. According to a report by Dr. Darrel Smith and Jonathan Webby of Embry-Riddle Aeronautical University in Arizona, an interstellar craft equipped with an antimatter engine could reach half the speed of light and arrive at Proxima Centauri in a little over 8 years. However, the ship itself would weigh 400 metric tons and would need 170 metric tons of antimatter fuel to make the journey.

A possible way around this is to create a vessel that can create antimatter which it could then store as fuel. This concept, known as the Vacuum to Antimatter Rocket Interstellar Explorer System (VARIES), was proposed by Richard Obousy of Icarus Interstellar. Based on the idea of in-situ refueling, a VARIES ship would rely on large lasers (powered by enormous solar arrays) which would create particles of antimatter when fired at empty space.

Artist's concept of the Vacuum to Antimatter Rocket Interstellar Explorer System (VARIES), a concept that would use solar arrays to power lasers that create particles of antimatter to be used as fuel. Credit: Adrian Mann

Much like the Ramjet concept, this proposal solves the problem of carrying fuel by harvesting it from space. But once again, the sheer cost of such a ship would be prohibitive using current technology. In addition, the ability to create antimatter in large volumes is not something we currently have the power to do. There's also the matter of radiation, as matter-antimatter annihilation can produce blasts of high-energy gamma rays.

This not only presents a danger to the crew, requiring significant radiation shielding, but requires that the engines be shielded as well, to ensure they don't undergo atomic degradation from all the radiation they are exposed to. So, bottom line: the antimatter engine is completely impractical with our current technology and in the current budget environment.

Alcubierre Warp Drive: Fans of science fiction are also no doubt familiar with the concept of an Alcubierre (or warp) drive. Proposed by Mexican physicist Miguel Alcubierre in 1994, this method was an attempt to make FTL travel possible without violating Einstein's theory of Special Relativity. In short, the concept involves stretching the fabric of space-time in a wave, which would theoretically cause the space ahead of an object to contract and the space behind it to expand.

An object inside this wave (i.e. a spaceship) would then be able to ride it, in a region known as a warp bubble, beyond relativistic speeds. Since the ship is not moving within the bubble, but is being carried along as the bubble itself moves, the usual limits of relativity would not apply, because this method does not rely on moving faster than light in the local sense.

Artist Mark Rademaker's concept for the IXS Enterprise, a theoretical interstellar warp spacecraft. Credit: Mark Rademaker/flickr.com

It is only faster than light in the sense that the ship could reach its destination faster than a beam of light that was traveling outside the warp bubble. So assuming that a spacecraft could be outfitted with an Alcubierre Drive system, it would be able to make the trip to Proxima Centauri in less than 4 years. So when it comes to theoretical interstellar space travel, this is by far the most promising technology, at least in terms of speed.

Naturally, the concept has received its share of counter-arguments over the years. Chief among them is the fact that it does not take quantum mechanics into account and could be invalidated by a Theory of Everything (such as loop quantum gravity). Calculations of the amount of energy required have also indicated that a warp drive would demand a prohibitive amount of power to work. Other uncertainties include the safety of such a system, the effects on space-time at the destination, and violations of causality.

However, in 2012, NASA scientist Harold Sonny White announced that he and his colleagues had begun researching the possibility of an Alcubierre Drive. In a paper titled "Warp Field Mechanics 101", White claimed that they had constructed an interferometer that could detect the spatial distortions produced by the expanding and contracting space-time of the Alcubierre metric.

In 2013, the Jet Propulsion Laboratory published results of a warp field test conducted under vacuum conditions. Unfortunately, the results were reported as inconclusive. In the long term, we may find that the Alcubierre metric violates one or more fundamental laws of nature. And even if the physics should prove sound, there is no guarantee it can be harnessed for the sake of FTL flight.

In conclusion, if you were hoping to travel to the nearest star within your lifetime, the outlook isn't very good. However, if mankind felt the incentive to build an interstellar ark filled with a self-sustaining community of space-faring humans, it might be possible to travel there in a little under a century, if we were willing to invest in the requisite technology.

But all the available methods are still very limited when it comes to transit time. And while taking hundreds or thousands of years to reach the nearest star may matter less to us if our very survival were at stake, it is simply not practical as far as space exploration and travel go. By the time a mission reached even the closest stars in our galaxy, the technology employed would be obsolete and humanity might not even exist back home anymore.

So unless we make a major breakthrough in the realms of fusion, antimatter, or laser technology, we will either have to be content with exploring our own Solar System or be forced to accept a very long-term transit strategy.

We have written many interesting articles about space travel here at Universe Today. Here's Will We Ever Reach Another Star?, Warp Drives May Come With a Killer Downside, The Alcubierre Warp Drive, How Far Is A Light Year?, When Light Just Isn't Fast Enough, When Will We Become Interstellar?, and Can We Travel Faster Than the Speed of Light?

For more information, be sure to consult NASA's pages on Propulsion Systems of the Future, and Is Warp Drive Real?

Watch Space Travel Full Episode – The Universe | HISTORY

When man finally broke free of the Earth's gravitational pull, the dream of traveling to other planets became a reality. Today scientists are proposing a bizarre array of technologies in the hope of traveling faster through space: from spacecraft sporting sails that catch laser beams, to propulsion engines powered by a bizarre entity known as anti-matter. Finally, explore the science behind the seemingly fanciful notion of warp drive and a theoretical particle that can travel faster than light.

Time travel – Wikipedia

Time travel is the concept of movement between certain points in time, analogous to movement between different points in space by an object or a person, typically using a hypothetical device known as a time machine, in the form of a vehicle or of a portal connecting distant points in spacetime, either to an earlier time or to a later time, without the need for the time-traveling body to experience the intervening period in the usual sense. Time travel is a widely recognized concept in philosophy and fiction. It was popularized by H. G. Wells' 1895 novel The Time Machine, which moved the concept of time travel into the public imagination. However, it is uncertain if time travel to the past is physically possible. Forward time travel, outside the usual sense of the perception of time, is possible according to special relativity and general relativity, although making one body advance or delay more than a few milliseconds compared to another body is not feasible with current technology.[1] As for backwards time travel, it is possible to find solutions in general relativity that allow for it, but the solutions require conditions that may not be physically possible. Traveling to an arbitrary point in spacetime has very limited support in theoretical physics, and is usually connected only with quantum mechanics or wormholes, also known as Einstein-Rosen bridges.

Some ancient myths depict a character skipping forward in time. In Hindu mythology, the Mahabharata mentions the story of King Raivata Kakudmi, who travels to heaven to meet the creator Brahma and is surprised to learn when he returns to Earth that many ages have passed.[2] The Buddhist Pāli Canon mentions the relativity of time. The Payasi Sutta tells of one of the Buddha’s chief disciples, Kumāra Kassapa, who explains to the skeptic Payasi that time in the Heavens passes differently than on Earth.[3] The Japanese tale of “Urashima Tarō”,[4] first described in the Nihongi (720), tells of a young fisherman named Urashima Tarō who visits an undersea palace. After three days, he returns home to his village and finds himself 300 years in the future, where he has been forgotten, his house is in ruins, and his family has died.[5] In Jewish tradition, the 1st-century BC scholar Honi ha-M’agel is said to have fallen asleep and slept for seventy years. When he woke up, he returned home but found none of the people he knew, and no one believed he was who he claimed to be.[6]

Early science fiction stories feature characters who sleep for years and awaken in a changed society, or are transported to the past through supernatural means. Among them are L’An 2440, rêve s’il en fût jamais (1770) by Louis-Sébastien Mercier, Rip Van Winkle (1819) by Washington Irving, Looking Backward (1888) by Edward Bellamy, and When the Sleeper Wakes (1899) by H. G. Wells. Prolonged sleep, like the more familiar time machine, is used as a means of time travel in these stories.[7]

The earliest work about backwards time travel is uncertain. Samuel Madden’s Memoirs of the Twentieth Century (1733) is a series of letters from British ambassadors in 1997 and 1998 to diplomats in the past, conveying the political and religious conditions of the future.[8]:95–96 Because the narrator receives these letters from his guardian angel, Paul Alkon suggests in his book Origins of Futuristic Fiction that “the first time-traveler in English literature is a guardian angel.”[8]:85 Madden does not explain how the angel obtains these documents, but Alkon asserts that Madden “deserves recognition as the first to toy with the rich idea of time-travel in the form of an artifact sent backward from the future to be discovered in the present.”[8]:95–96 In the science fiction anthology Far Boundaries (1951), editor August Derleth claims that an early short story about time travel is “Missing One’s Coach: An Anachronism”, written for the Dublin Literary Magazine[9] by an anonymous author in 1838.[10]:3 While the narrator waits under a tree for a coach to take him out of Newcastle, he is transported back in time over a thousand years. He encounters the Venerable Bede in a monastery and explains to him the developments of the coming centuries. However, the story never makes it clear whether these events are real or a dream.[10]:11–38 Another early work about time travel is The Forebears of Kalimeros: Alexander, son of Philip of Macedon by Alexander Veltman, published in 1836.[11]

Charles Dickens’s A Christmas Carol (1843) has early depictions of time travel in both directions, as the protagonist, Ebenezer Scrooge, is transported to Christmases past and future. Other stories employ the same template, where a character naturally goes to sleep, and upon waking up finds themselves in a different time.[12] A clearer example of backward time travel is found in the popular 1861 book Paris avant les hommes (Paris before Men) by the French botanist and geologist Pierre Boitard, published posthumously. In this story, the protagonist is transported to the prehistoric past by the magic of a “lame demon” (a French pun on Boitard’s name), where he encounters a Plesiosaur and an apelike ancestor and is able to interact with ancient creatures.[13] Edward Everett Hale’s “Hands Off” (1881) tells the story of an unnamed being, possibly the soul of a person who has recently died, who interferes with ancient Egyptian history by preventing Joseph’s enslavement. This may have been the first story to feature an alternate history created as a result of time travel.[14]:54

One of the first stories to feature time travel by means of a machine is “The Clock that Went Backward” by Edward Page Mitchell,[15] which appeared in the New York Sun in 1881. However, the mechanism borders on fantasy. An unusual clock, when wound, runs backwards and transports people nearby back in time. The author does not explain the origin or properties of the clock.[14]:55 Enrique Gaspar y Rimbau’s El Anacronópete (1887) may have been the first story to feature a vessel engineered to travel through time.[16][17] Andrew Sawyer has commented that the story “does seem to be the first literary description of a time machine noted so far”, adding that “Edward Page Mitchell’s story ‘The Clock That Went Backward’ (1881) is usually described as the first time-machine story, but I’m not sure that a clock quite counts.”[18] H. G. Wells’s The Time Machine (1895) popularized the concept of time travel by mechanical means.[19]

Some theories, most notably special and general relativity, suggest that suitable geometries of spacetime or specific types of motion in space might allow time travel into the past and future if these geometries or motions were possible.[20]:499 In technical papers, physicists discuss the possibility of closed timelike curves, which are world lines that form closed loops in spacetime, allowing objects to return to their own past. There are known to be solutions to the equations of general relativity that describe spacetimes which contain closed timelike curves, such as Gödel spacetime, but the physical plausibility of these solutions is uncertain.

Many in the scientific community believe that backward time travel is highly unlikely. Any theory that would allow time travel would introduce potential problems of causality.[21] The classic example of a problem involving causality is the “grandfather paradox”: what if one were to go back in time and kill one’s own grandfather before one’s father was conceived? Some physicists, such as Novikov and Deutsch, have suggested that these sorts of temporal paradoxes can be avoided through the Novikov self-consistency principle or through a variation of the many-worlds interpretation with interacting worlds.[22]

Time travel to the past is theoretically possible in certain general relativity spacetime geometries that permit traveling faster than the speed of light, such as cosmic strings, traversable wormholes, and the Alcubierre drive.[23][24]:33–130 The theory of general relativity does suggest a scientific basis for the possibility of backward time travel in certain unusual scenarios, although arguments from semiclassical gravity suggest that when quantum effects are incorporated into general relativity, these loopholes may be closed.[25] These semiclassical arguments led Hawking to formulate the chronology protection conjecture, suggesting that the fundamental laws of nature prevent time travel,[26] but physicists cannot come to a definite judgment on the issue without a theory of quantum gravity to join quantum mechanics and general relativity into a completely unified theory.[27][28]:150

The theory of general relativity describes the universe under a system of field equations that determine the metric, or distance function, of spacetime. There exist exact solutions to these equations that include closed time-like curves, which are world lines that intersect themselves; some point in the causal future of the world line is also in its causal past, a situation which is akin to time travel. Such a solution was first proposed by Kurt Gödel, a solution known as the Gödel metric, but his (and others’) solution requires the universe to have physical characteristics that it does not appear to have,[20]:499 such as rotation and lack of Hubble expansion. Whether general relativity forbids closed time-like curves for all realistic conditions is still being researched.[29]

Wormholes are hypothetical warped regions of spacetime permitted by the Einstein field equations of general relativity.[30]:100 A proposed time-travel machine using a traversable wormhole would hypothetically work in the following way: one end of the wormhole is accelerated to some significant fraction of the speed of light, perhaps with some advanced propulsion system, and then brought back to the point of origin. Alternatively, one entrance of the wormhole could be moved to within the gravitational field of an object that has higher gravity than the other entrance, and then returned to a position near the other entrance. In both cases, time dilation causes the end of the wormhole that has been moved to have aged less, or become “younger”, than the stationary end as seen by an external observer; however, time connects differently through the wormhole than outside it, so that synchronized clocks at either end of the wormhole will always remain synchronized as seen by an observer passing through the wormhole, no matter how the two ends move around.[20]:502 This means that an observer entering the “younger” end would exit the “older” end at a time when it was the same age as the “younger” end, effectively going back in time as seen by an observer from the outside. One significant limitation of such a time machine is that it is only possible to go as far back in time as the initial creation of the machine;[20]:503 in essence, it is more of a path through time than a device that itself moves through time, and it would not allow the technology itself to be moved backward in time.
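
As a back-of-the-envelope illustration (not a calculation from the cited sources), the clock offset between the two mouths can be estimated with ordinary special-relativistic time dilation; the 0.8 c tow speed and ten-year trip below are purely illustrative assumptions:

```python
import math

def lorentz_gamma(v_frac):
    """Lorentz factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_frac**2)

def mouth_clock_offset(v_frac, trip_years_external):
    """Clock difference (in years) between a wormhole mouth towed at
    v_frac*c and the stationary mouth, after a round trip lasting
    trip_years_external in the external frame."""
    proper_time = trip_years_external / lorentz_gamma(v_frac)  # time on the towed mouth
    return trip_years_external - proper_time  # how far "back" the machine reaches

# Illustrative example: tow one mouth at 0.8 c for 10 external years.
offset = mouth_clock_offset(0.8, 10.0)
```

Towing one mouth at 0.8 c for ten external years leaves it four years “younger” than the stationary mouth, so such a machine could reach at most four years into its own past.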

According to current theories on the nature of wormholes, construction of a traversable wormhole would require the existence of a substance with negative energy, often referred to as “exotic matter”. More technically, the wormhole spacetime requires a distribution of energy that violates various energy conditions, such as the null energy condition along with the weak, strong, and dominant energy conditions. However, it is known that quantum effects can lead to small measurable violations of the null energy condition,[30]:101 and many physicists believe that the required negative energy may actually be possible due to the Casimir effect in quantum physics.[31] Although early calculations suggested a very large amount of negative energy would be required, later calculations showed that the amount of negative energy can be made arbitrarily small.[32]
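
For a rough sense of scale, the idealized parallel-plate Casimir result gives a vacuum energy density of u = −π²ħc/(720 d⁴) between perfectly conducting plates a distance d apart. The sketch below evaluates this textbook formula for a micron-scale gap; it is an illustration of the magnitude of Casimir negative energy, not a wormhole calculation:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def casimir_energy_density(d):
    """Idealized Casimir vacuum energy density (J/m^3) between two perfectly
    conducting parallel plates separated by d metres. The result is negative,
    i.e. the region locally violates the classical energy conditions."""
    return -math.pi**2 * HBAR * C / (720.0 * d**4)

u = casimir_energy_density(1e-6)  # plates 1 micron apart
```

For d = 1 μm this gives only about −4 × 10⁻⁴ J/m³, which hints at why engineering macroscopic amounts of negative energy is considered so difficult.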

In 1993, Matt Visser argued that the two mouths of a wormhole with such an induced clock difference could not be brought together without inducing quantum field and gravitational effects that would either make the wormhole collapse or the two mouths repel each other.[33] Because of this, the two mouths could not be brought close enough for causality violation to take place. However, in a 1997 paper, Visser hypothesized that a complex “Roman ring” (named after Tom Roman) configuration of an N number of wormholes arranged in a symmetric polygon could still act as a time machine, although he concludes that this is more likely a flaw in classical quantum gravity theory rather than proof that causality violation is possible.[34]

Another approach involves a dense spinning cylinder, usually referred to as a Tipler cylinder, a GR solution discovered by Willem Jacob van Stockum[35] in 1936 and Kornél Lánczos[36] in 1924, but not recognized as allowing closed timelike curves[37]:21 until an analysis by Frank Tipler[38] in 1974. If a cylinder is infinitely long and spins fast enough about its long axis, then a spaceship flying around the cylinder on a spiral path could travel back in time (or forward, depending on the direction of its spiral). However, the density and speed required are so great that ordinary matter is not strong enough to construct it. A similar device might be built from a cosmic string, but none are known to exist, and it does not seem to be possible to create a new cosmic string. Physicist Ronald Mallett is attempting to recreate the conditions of a rotating black hole with ring lasers, in order to bend spacetime and allow for time travel.[39]

A more fundamental objection to time travel schemes based on rotating cylinders or cosmic strings has been put forward by Stephen Hawking, who proved a theorem showing that according to general relativity it is impossible to build a time machine of a special type (a “time machine with the compactly generated Cauchy horizon”) in a region where the weak energy condition is satisfied, meaning that the region contains no matter with negative energy density (exotic matter). Solutions such as Tipler’s assume cylinders of infinite length, which are easier to analyze mathematically, and although Tipler suggested that a finite cylinder might produce closed timelike curves if the rotation rate were fast enough,[37]:169 he did not prove this. But Hawking points out that because of his theorem, “it can’t be done with positive energy density everywhere! I can prove that to build a finite time machine, you need negative energy.”[28]:96 This result comes from Hawking’s 1992 paper on the chronology protection conjecture, where he examines “the case that the causality violations appear in a finite region of spacetime without curvature singularities” and proves that “there will be a Cauchy horizon that is compactly generated and that in general contains one or more closed null geodesics which will be incomplete. One can define geometrical quantities that measure the Lorentz boost and area increase on going round these closed null geodesics. If the causality violation developed from a noncompact initial surface, the averaged weak energy condition must be violated on the Cauchy horizon.”[26] This theorem does not rule out the possibility of time travel by means of time machines with the non-compactly generated Cauchy horizons (such as the Deutsch-Politzer time machine) or in regions which contain exotic matter, which would be used for traversable wormholes or the Alcubierre drive.

When a signal is sent from one location and received at another, then as long as the signal moves at the speed of light or slower, the mathematics of simultaneity in the theory of relativity show that all reference frames agree that the transmission event happened before the reception event. If the signal traveled faster than light, however, there would be reference frames in which it was received before it was sent;[40] the signal could then be said to have moved backward in time. This hypothetical scenario is sometimes referred to as a tachyonic antitelephone.[41]
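
A minimal numerical sketch of this argument uses the standard Lorentz transformation, in units where c = 1; the 2c signal speed and the 0.6 c frame velocity below are illustrative choices, not values from the cited references:

```python
import math

def lorentz_transform(t, x, v_frac):
    """Lorentz-transform an event (t, x) into a frame moving at v_frac*c,
    in units where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v_frac**2)
    return gamma * (t - v_frac * x), gamma * (x - v_frac * t)

# A hypothetical signal sent at 2c: emitted at (t=0, x=0), received at (t=1, x=2).
emit_t, _ = lorentz_transform(0.0, 0.0, 0.6)
recv_t, _ = lorentz_transform(1.0, 2.0, 0.6)
# In a frame moving at 0.6 c (so that frame speed times signal speed exceeds c^2),
# the reception event precedes the emission event: recv_t < emit_t.
```

For a subluminal signal the same transformation always preserves the emission-before-reception order, which is why only faster-than-light signals threaten causality.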

Quantum-mechanical phenomena such as quantum teleportation, the EPR paradox, or quantum entanglement might appear to create a mechanism that allows for faster-than-light (FTL) communication or time travel, and in fact some interpretations of quantum mechanics such as the Bohm interpretation presume that some information is being exchanged between particles instantaneously in order to maintain correlations between particles.[42] This effect was referred to as “spooky action at a distance” by Einstein.

Nevertheless, the fact that causality is preserved in quantum mechanics is a rigorous result in modern quantum field theories, and therefore modern theories do not allow for time travel or FTL communication. In any specific instance where FTL has been claimed, more detailed analysis has proven that to get a signal, some form of classical communication must also be used.[43] The no-communication theorem also gives a general proof that quantum entanglement cannot be used to transmit information faster than classical signals.

A variation of Everett’s many-worlds interpretation (MWI) of quantum mechanics provides a resolution to the grandfather paradox that involves the time traveler arriving in a different universe than the one they came from; it has been argued that since the traveler arrives in a different universe’s history and not their own history, this is not “genuine” time travel.[44] The accepted many-worlds interpretation suggests that all possible quantum events can occur in mutually exclusive histories.[45] However, some variations allow different universes to interact. This concept is most often used in science fiction, but some physicists such as David Deutsch have suggested that a time traveler should end up in a different history than the one he started from.[46][47] On the other hand, Stephen Hawking has argued that even if the MWI is correct, we should expect each time traveler to experience a single self-consistent history, so that time travelers remain within their own world rather than traveling to a different one.[48] The physicist Allen Everett argued that Deutsch’s approach “involves modifying fundamental principles of quantum mechanics; it certainly goes beyond simply adopting the MWI”. Everett also argues that even if Deutsch’s approach is correct, it would imply that any macroscopic object composed of multiple particles would be split apart when traveling back in time through a wormhole, with different particles emerging in different worlds.[22]

Daniel Greenberger and Karl Svozil proposed that quantum theory gives a model for time travel without paradoxes.[49][50] In quantum theory, observation causes possible states to ‘collapse’ into one measured state; hence, the past observed from the present is deterministic (it has only one possible state), but the present observed from the past has many possible states until our actions cause it to collapse into one state. Our actions will then be seen to have been inevitable.

Certain experiments carried out give the impression of reversed causality, but fail to show it under closer examination.

The delayed choice quantum eraser experiment performed by Marlan Scully involves pairs of entangled photons that are divided into “signal photons” and “idler photons”, with the signal photons emerging from one of two locations and their position later measured as in the double-slit experiment. Depending on how the idler photon is measured, the experimenter can either learn which of the two locations the signal photon emerged from or “erase” that information. Even though the signal photons can be measured before the choice has been made about the idler photons, the choice seems to retroactively determine whether or not an interference pattern is observed when one correlates measurements of idler photons to the corresponding signal photons. However, since interference can only be observed after the idler photons are measured and they are correlated with the signal photons, there is no way for experimenters to tell what choice will be made in advance just by looking at the signal photons, only by gathering classical information from the entire system; thus causality is preserved.[51]

The experiment of Lijun Wang might also appear to show causality violation, since it made it possible to send wave packets through a bulb of caesium gas in such a way that the packet appeared to exit the bulb 62 nanoseconds before its entry. But a wave packet is not a single well-defined object but rather a sum of multiple waves of different frequencies (see Fourier analysis), and the packet can appear to move faster than light, or even backward in time, even when none of the pure waves in the sum do so. This effect cannot be used to send any matter, energy, or information faster than light,[52] so this experiment is understood not to violate causality either.

The physicists Günter Nimtz and Alfons Stahlhofen, of the University of Koblenz, claim to have violated Einstein’s theory of relativity by transmitting photons faster than the speed of light. They say they have conducted an experiment in which microwave photons traveled “instantaneously” between a pair of prisms that had been moved up to 3 ft (0.91 m) apart, using a phenomenon known as quantum tunneling. Nimtz told New Scientist magazine: “For the time being, this is the only violation of special relativity that I know of.” However, other physicists say that this phenomenon does not allow information to be transmitted faster than light. Aephraim Steinberg, a quantum optics expert at the University of Toronto, Canada, uses the analogy of a train traveling from Chicago to New York, but dropping off train cars at each station along the way, so that the center of the train moves forward at each stop; in this way, the speed of the center of the train exceeds the speed of any of the individual cars.[53]

Shengwang Du claims in a peer-reviewed journal to have observed single photons’ precursors, saying that they travel no faster than c in a vacuum. His experiment involved slow light as well as passing light through a vacuum. He generated two single photons, passing one through rubidium atoms that had been cooled with a laser (thus slowing the light) and passing one through a vacuum. Both times, apparently, the precursors preceded the photons’ main bodies, and the precursor traveled at c in a vacuum. According to Du, this implies that there is no possibility of light traveling faster than c and, thus, no possibility of violating causality.[54]

The absence of time travelers from the future is a variation of the Fermi paradox, and like the absence of extraterrestrial visitors, the absence of time travelers does not prove time travel is physically impossible; it might be that time travel is physically possible but is never developed or is cautiously used. Carl Sagan once suggested the possibility that time travelers could be here but are disguising their existence or are not recognized as time travelers.[27] Some versions of general relativity suggest that time travel might only be possible in a region of spacetime that is warped a certain way, and hence time travelers would not be able to travel back to earlier regions in spacetime, before this region existed. Stephen Hawking stated that this would explain why the world has not already been overrun by “tourists from the future.”[48]

Several experiments have been carried out to try to entice future humans, who might invent time travel technology, to come back and demonstrate it to people of the present time. Events such as Perth’s Destination Day (2005) or MIT’s Time Traveler Convention heavily publicized permanent “advertisements” of a meeting time and place for future time travelers to meet. Back in 1982, a group in Baltimore, Maryland, identifying itself as the Krononauts, hosted an event of this type welcoming visitors from the future.[55][56] Such experiments could only have produced a positive result demonstrating the existence of time travel; they have failed so far, as no time travelers are known to have attended either event. Some versions of the many-worlds interpretation can be used to suggest that future humans have traveled back in time, but have traveled back to the meeting time and place in a parallel universe.[57]

There is a great deal of observable evidence for time dilation in special relativity[58] and gravitational time dilation in general relativity,[59][60][61] for example in the famous and easy-to-replicate observation of atmospheric muon decay.[62][63][64] The theory of relativity states that the speed of light is invariant for all observers in any frame of reference; that is, it is always the same. Time dilation is a direct consequence of the invariance of the speed of light.[64] Time dilation may be regarded in a limited sense as “time travel into the future”: a person may use time dilation so that a small amount of proper time passes for them, while a large amount of proper time passes elsewhere. This can be achieved by traveling at relativistic speeds or through the effects of gravity.[65]
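
The muon observation mentioned above can be sketched numerically: without time dilation essentially no muons created in the upper atmosphere should survive to the ground, while with dilation a sizeable fraction does. The 15 km production altitude and 0.995 c speed below are typical round figures, not measured values from the cited references:

```python
import math

MUON_LIFETIME = 2.197e-6  # mean muon lifetime at rest, seconds
C = 2.998e8               # speed of light, m/s

def muon_survival(distance_m, v_frac, relativistic=True):
    """Fraction of muons surviving a trip of distance_m at speed v_frac*c,
    with or without time dilation applied to the decay clock."""
    travel_time = distance_m / (v_frac * C)  # lab-frame travel time
    gamma = 1.0 / math.sqrt(1.0 - v_frac**2)
    lifetime = MUON_LIFETIME * (gamma if relativistic else 1.0)
    return math.exp(-travel_time / lifetime)

# Muons created roughly 15 km up, moving at about 0.995 c:
with_dilation = muon_survival(15e3, 0.995)
without_dilation = muon_survival(15e3, 0.995, relativistic=False)
```

With dilation roughly a tenth of the muons survive; without it the survival fraction is smaller by many orders of magnitude, which is why ground-level muon counts are treated as direct evidence for time dilation.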

For two identical clocks moving relative to each other without accelerating, each clock measures the other to be ticking slower. This is possible due to the relativity of simultaneity. However, the symmetry is broken if one clock accelerates, allowing for less proper time to pass for one clock than the other. The twin paradox describes this: one twin remains on Earth, while the other undergoes acceleration to relativistic speed as they travel into space, turn around, and travel back to Earth; the traveling twin ages less than the twin who stayed on Earth, because of the time dilation experienced during their acceleration. General relativity treats the effects of acceleration and the effects of gravity as equivalent, and shows that time dilation also occurs in gravity wells, with a clock deeper in the well ticking more slowly; this effect is taken into account when calibrating the clocks on the satellites of the Global Positioning System, and it could lead to significant differences in rates of aging for observers at different distances from a large gravity well such as a black hole.[24]:33–130
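
The GPS calibration mentioned above is easy to estimate to first order. The sketch below uses approximate textbook values for the orbital radius and Earth’s parameters (not official GPS specifications) and combines the gravitational gain with the velocity loss per day:

```python
import math

GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
C = 2.99792458e8           # speed of light, m/s
R_EARTH = 6.371e6          # mean Earth radius, m
R_GPS = 2.6571e7           # approximate GPS orbital radius, m
V_GPS = math.sqrt(GM_EARTH / R_GPS)  # circular orbital speed, ~3.9 km/s
DAY = 86400.0              # seconds per day

# Gravitational term: GPS clocks sit higher in the well, so they run fast.
grav_gain = GM_EARTH * (1.0 / R_EARTH - 1.0 / R_GPS) / C**2 * DAY

# Velocity term (special relativity): orbital motion makes them run slow.
vel_loss = 0.5 * V_GPS**2 / C**2 * DAY

net_per_day = grav_gain - vel_loss  # seconds gained per day
```

The net effect is a gain of roughly 38 microseconds per satellite clock per day, which is why the clocks are deliberately rate-adjusted before launch.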

A time machine that utilizes this principle might be, for instance, a spherical shell with a diameter of 5 meters and the mass of Jupiter. A person at its center will travel forward in time at a rate four times that of distant observers. Squeezing the mass of a large planet into such a small structure is not expected to be within humanity’s technological capabilities in the near future.[24]:76–140 With current technologies, it is only possible to cause a human traveler to age less than companions on Earth by a very small fraction of a second, the current record being about 20 milliseconds for the cosmonaut Sergei Avdeyev.[66]
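
Avdeyev’s record can be checked to first order with the special-relativistic dilation formula, ignoring the smaller opposite-sign gravitational term and using round mission figures (about 747 cumulative days in orbit at roughly 7.7 km/s), which are approximations rather than exact values:

```python
import math

C = 2.99792458e8  # speed of light, m/s

def orbital_dilation_seconds(v_ms, days):
    """Approximate time 'gained' on the future by an orbiting traveler:
    (1 - sqrt(1 - v^2/c^2)) times the elapsed coordinate time."""
    beta_sq = (v_ms / C)**2
    return (1.0 - math.sqrt(1.0 - beta_sq)) * days * 86400.0

# Avdeyev: roughly 747 cumulative days in orbit at ~7.7 km/s.
gained = orbital_dilation_seconds(7700.0, 747)
```

The estimate comes out around 0.02 seconds, consistent with the quoted record of about 20 milliseconds.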

Philosophers have discussed the nature of time since at least the time of ancient Greece; for example, Parmenides presented the view that time is an illusion. Centuries later, Newton supported the idea of absolute time, while his contemporary Leibniz maintained that time is only a relation between events and it cannot be expressed independently. The latter approach eventually gave rise to the spacetime of relativity.[67]

Many philosophers have argued that relativity implies eternalism, the idea that the past and future exist in a real sense, not only as changes that occurred or will occur to the present.[68] Philosopher of science Dean Rickles disagrees with some qualifications, but notes that “the consensus among philosophers seems to be that special and general relativity are incompatible with presentism.”[69] Some philosophers view time as a dimension equal to spatial dimensions, that future events are “already there” in the same sense different places exist, and that there is no objective flow of time; however, this view is disputed.[70]

Presentism is a school of philosophy that holds that the future and the past exist only as changes that occurred or will occur to the present, and they have no real existence of their own. In this view, time travel is impossible because there is no future or past to travel to.[68] Keller and Nelson have argued that even if past and future objects do not exist, there can still be definite truths about past and future events, and thus it is possible that a future truth about a time traveler deciding to travel back to the present date could explain the time traveler’s actual appearance in the present;[71] these views are contested by some authors.[72]

Presentism in classical spacetime deems that only the present exists; this is not reconcilable with special relativity, as shown in the following example: Alice and Bob are simultaneous observers of event O. For Alice, some event E is simultaneous with O, but for Bob, event E is in the past or future. Therefore, Alice and Bob disagree about what exists in the present, which contradicts classical presentism. “Here-now presentism” attempts to reconcile this by only acknowledging the time and space of a single point; this is unsatisfactory because objects coming and going from the “here-now” alternate between real and unreal, in addition to the lack of a privileged “here-now” that would be the “real” present. “Relativized presentism” acknowledges that there are infinite frames of reference, each of which has a different set of simultaneous events, which makes it impossible to distinguish a single “real” present, and hence either all events in time are real, blurring the difference between presentism and eternalism, or each frame of reference exists in its own reality. Options for presentism in special relativity appear to be exhausted, but Gödel and others suspect presentism may be valid for some forms of general relativity.[73] Generally, the idea of absolute time and space is considered incompatible with general relativity; there is no universal truth about the absolute position of events which occur at different times, and thus no way to determine which point in space at one time is at the universal “same position” at another time,[74] and all coordinate systems are on equal footing as given by the principle of diffeomorphism invariance.[75]
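
The Alice-and-Bob disagreement can be made concrete with the Lorentz transformation, in units where c = 1; the event coordinates and the 0.5 c relative velocity below are arbitrary illustrative choices:

```python
import math

def event_time_in_frame(t, x, v_frac):
    """Time coordinate of event (t, x) in a frame moving at v_frac*c
    (units where c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v_frac**2)
    return gamma * (t - v_frac * x)

# Event O is at the shared origin; event E is at t = 0, x = 5
# (spacelike-separated from O).
t_alice = event_time_in_frame(0.0, 5.0, 0.0)  # Alice at rest: E simultaneous with O
t_bob = event_time_in_frame(0.0, 5.0, 0.5)    # Bob at 0.5 c: E lies in O's past
```

Alice assigns E the same time coordinate as O, while Bob assigns it a negative one, so the two observers genuinely disagree about which events share "the present."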

A common objection to the idea of traveling back in time is put forth in the grandfather paradox or the argument of auto-infanticide.[76] If one were able to go back in time, inconsistencies and contradictions would ensue if the time traveler were to change anything; there is a contradiction if the past becomes different from the way it is.[77][78] The paradox is commonly described with a person who travels to the past and kills their own grandfather, thereby preventing the existence of their father or mother, and therefore their own existence.[27] Philosophers question whether these paradoxes make time travel impossible. Some philosophers answer the paradoxes by arguing that it might be the case that backward time travel could be possible but that it would be impossible to actually change the past in any way,[79] an idea similar to the proposed Novikov self-consistency principle in physics.

According to the philosophical theory of compossibility, what can happen, for example in the context of time travel, must be weighed against the context of everything relating to the situation. If the past is a certain way, it’s not possible for it to be any other way. What can happen when a time traveler visits the past is limited to what did happen, in order to prevent logical contradictions.[80]

The Novikov self-consistency principle, named after Igor Dmitrievich Novikov, states that any actions taken by a time traveler or by an object that travels back in time were part of history all along, and therefore it is impossible for the time traveler to “change” history in any way. The time traveler’s actions may be the cause of events in their own past though, which leads to the potential for circular causation, sometimes called a predestination paradox,[81] ontological paradox,[82] or bootstrap paradox.[82][83] The term bootstrap paradox was popularized by Robert A. Heinlein’s story “By His Bootstraps”.[84] The Novikov self-consistency principle proposes that the local laws of physics in a region of spacetime containing time travelers cannot be any different from the local laws of physics in any other region of spacetime.[85]

The philosopher Kelley L. Ross argues in “Time Travel Paradoxes”[86] that in a scenario involving a physical object whose world-line or history forms a closed loop in time, there can be a violation of the second law of thermodynamics. Ross uses the film Somewhere in Time as an example of such an ontological paradox, where a watch is given to a person, and 60 years later the same watch is brought back in time and given to the same character. Ross states that the entropy of the watch will increase, and the watch carried back in time will be more worn with each repetition of its history. However, the second law of thermodynamics is understood by modern physicists to be a statistical law, so decreasing or non-increasing entropy is not impossible, just improbable. Additionally, entropy statistically increases only in isolated systems, so a non-isolated system such as the watch, which interacts with the outside world, can become less worn and decrease in entropy, and it is possible for an object whose world-line forms a closed loop to be always in the same condition at the same point of its history.[24]:23

Time travel themes in science fiction and the media can generally be grouped into three categories: immutable timeline; mutable timeline; and alternate histories, as in the interacting-many-worlds interpretation.[87][88][89] Frequently in fiction, “timeline” is used to refer to all physical events in history, so that in time travel stories where events can be changed, the time traveler is described as creating a new or altered timeline.[90] This usage is distinct from the use of the term to refer to a type of chart that illustrates a particular series of events, and the concept is also distinct from a world line, a term from Einstein’s theory of relativity which refers to the entire history of a single object.

Time travel – Wikipedia

Time travel is the concept of movement between certain points in time, analogous to movement between different points in space by an object or a person, typically using a hypothetical device known as a time machine: a vehicle or a portal connecting distant points in spacetime that carries the traveler to an earlier or a later time without the need to experience the intervening period in the usual sense. Time travel is a widely recognized concept in philosophy and fiction, popularized by H. G. Wells’ 1895 novel The Time Machine, which moved the concept into the public imagination. However, it is uncertain whether time travel to the past is physically possible. Forward time travel, outside the usual sense of the perception of time, is possible according to special relativity and general relativity, although making one body advance or delay more than a few milliseconds compared to another body is not feasible with current technology.[1] As for backwards time travel, it is possible to find solutions in general relativity that allow for it, but these solutions require conditions that may not be physically possible. Traveling to an arbitrary point in spacetime has very limited support in theoretical physics, and is usually connected only with quantum mechanics or wormholes, also known as Einstein–Rosen bridges.

Some ancient myths depict a character skipping forward in time. In Hindu mythology, the Mahabharata mentions the story of King Raivata Kakudmi, who travels to heaven to meet the creator Brahma and is surprised to learn when he returns to Earth that many ages have passed.[2] The Buddhist Pāli Canon mentions the relativity of time. The Payasi Sutta tells of one of the Buddha’s chief disciples, Kumara Kassapa, who explains to the skeptic Payasi that time in the Heavens passes differently than on Earth.[3] The Japanese tale of “Urashima Tarō”,[4] first described in the Nihongi (720), tells of a young fisherman named Urashima Tarō who visits an undersea palace. After three days, he returns home to his village and finds himself 300 years in the future, where he has been forgotten, his house is in ruins, and his family has died.[5] In Jewish tradition, the 1st-century BC scholar Honi ha-M’agel is said to have fallen asleep and slept for seventy years. When he woke, he returned home but found none of the people he had known, and no one believed he was who he claimed to be.[6]

Early science fiction stories feature characters who sleep for years and awaken in a changed society, or are transported to the past through supernatural means. Among them are L’An 2440, rêve s’il en fût jamais (1770) by Louis-Sébastien Mercier, Rip Van Winkle (1819) by Washington Irving, Looking Backward (1888) by Edward Bellamy, and When the Sleeper Awakes (1899) by H.G. Wells. Prolonged sleep, like the more familiar time machine, is used as a means of time travel in these stories.[7]

The earliest work about backwards time travel is uncertain. Samuel Madden’s Memoirs of the Twentieth Century (1733) is a series of letters from British ambassadors in 1997 and 1998 to diplomats in the past, conveying the political and religious conditions of the future.[8]:95–96 Because the narrator receives these letters from his guardian angel, Paul Alkon suggests in his book Origins of Futuristic Fiction that “the first time-traveler in English literature is a guardian angel.”[8]:85 Madden does not explain how the angel obtains these documents, but Alkon asserts that Madden “deserves recognition as the first to toy with the rich idea of time-travel in the form of an artifact sent backward from the future to be discovered in the present.”[8]:95–96 In the science fiction anthology Far Boundaries (1951), editor August Derleth claims that an early short story about time travel is “Missing One’s Coach: An Anachronism”, written for the Dublin Literary Magazine[9] by an anonymous author in 1838.[10]:3 While the narrator waits under a tree for a coach to take him out of Newcastle, he is transported back in time over a thousand years. He encounters the Venerable Bede in a monastery and explains to him the developments of the coming centuries. However, the story never makes it clear whether these events are real or a dream.[10]:11–38 Another early work about time travel is The Forebears of Kalimeros: Alexander, son of Philip of Macedon by Alexander Veltman published in 1836.[11]

Charles Dickens’s A Christmas Carol (1843) has early depictions of time travel in both directions, as the protagonist, Ebenezer Scrooge, is transported to Christmases past and future. Other stories employ the same template, where a character naturally goes to sleep, and upon waking up finds themselves in a different time.[12] A clearer example of backward time travel is found in the popular 1861 book Paris avant les hommes (Paris before Men) by the French botanist and geologist Pierre Boitard, published posthumously. In this story, the protagonist is transported to the prehistoric past by the magic of a “lame demon” (a French pun on Boitard’s name), where he encounters a Plesiosaur and an apelike ancestor and is able to interact with ancient creatures.[13] Edward Everett Hale’s “Hands Off” (1881) tells the story of an unnamed being, possibly the soul of a person who has recently died, who interferes with ancient Egyptian history by preventing Joseph’s enslavement. This may have been the first story to feature an alternate history created as a result of time travel.[14]:54

One of the first stories to feature time travel by means of a machine is “The Clock that Went Backward” by Edward Page Mitchell,[15] which appeared in the New York Sun in 1881. However, the mechanism borders on fantasy. An unusual clock, when wound, runs backwards and transports people nearby back in time. The author does not explain the origin or properties of the clock.[14]:55 Enrique Gaspar y Rimbau’s El Anacronópete (1887) may have been the first story to feature a vessel engineered to travel through time.[16][17] Andrew Sawyer has commented that the story “does seem to be the first literary description of a time machine noted so far”, adding that “Edward Page Mitchell’s story ‘The Clock That Went Backward’ (1881) is usually described as the first time-machine story, but I’m not sure that a clock quite counts.”[18] H. G. Wells’s The Time Machine (1895) popularized the concept of time travel by mechanical means.[19]

Some theories, most notably special and general relativity, suggest that suitable geometries of spacetime or specific types of motion in space might allow time travel into the past and future if these geometries or motions were possible.[20]:499 In technical papers, physicists discuss the possibility of closed timelike curves, which are world lines that form closed loops in spacetime, allowing objects to return to their own past. There are known to be solutions to the equations of general relativity that describe spacetimes which contain closed timelike curves, such as Gödel spacetime, but the physical plausibility of these solutions is uncertain.

Many in the scientific community believe that backward time travel is highly unlikely. Any theory that would allow time travel would introduce potential problems of causality.[21] The classic example of a problem involving causality is the “grandfather paradox”: what if one were to go back in time and kill one’s own grandfather before one’s father was conceived? Some physicists, such as Novikov and Deutsch, suggested that these sorts of temporal paradoxes can be avoided through the Novikov self-consistency principle or through a variation of the many-worlds interpretation with interacting worlds.[22]

Time travel to the past is theoretically possible in certain general relativity spacetime geometries that permit traveling faster than the speed of light, such as cosmic strings, traversable wormholes, and the Alcubierre drive.[23][24]:33–130 The theory of general relativity does suggest a scientific basis for the possibility of backward time travel in certain unusual scenarios, although arguments from semiclassical gravity suggest that when quantum effects are incorporated into general relativity, these loopholes may be closed.[25] These semiclassical arguments led Hawking to formulate the chronology protection conjecture, suggesting that the fundamental laws of nature prevent time travel,[26] but physicists cannot come to a definite judgment on the issue without a theory of quantum gravity to join quantum mechanics and general relativity into a completely unified theory.[27][28]:150

The theory of general relativity describes the universe under a system of field equations that determine the metric, or distance function, of spacetime. There exist exact solutions to these equations that include closed time-like curves, which are world lines that intersect themselves; some point in the causal future of the world line is also in its causal past, a situation which is akin to time travel. Such a solution was first proposed by Kurt Gödel, a solution known as the Gödel metric, but his (and others’) solution requires the universe to have physical characteristics that it does not appear to have,[20]:499 such as rotation and lack of Hubble expansion. Whether general relativity forbids closed time-like curves for all realistic conditions is still being researched.[29]

Wormholes are a hypothetical warped spacetime which are permitted by the Einstein field equations of general relativity.[30]:100 A proposed time-travel machine using a traversable wormhole would hypothetically work in the following way: One end of the wormhole is accelerated to some significant fraction of the speed of light, perhaps with some advanced propulsion system, and then brought back to the point of origin. Alternatively, another way is to take one entrance of the wormhole and move it to within the gravitational field of an object that has higher gravity than the other entrance, and then return it to a position near the other entrance. For both of these methods, time dilation causes the end of the wormhole that has been moved to have aged less, or become “younger”, than the stationary end as seen by an external observer; however, time connects differently through the wormhole than outside it, so that synchronized clocks at either end of the wormhole will always remain synchronized as seen by an observer passing through the wormhole, no matter how the two ends move around.[20]:502 This means that an observer entering the “younger” end would exit the “older” end at a time when it was the same age as the “younger” end, effectively going back in time as seen by an observer from the outside. One significant limitation of such a time machine is that it is only possible to go as far back in time as the initial creation of the machine;[20]:503 in essence, it is more of a path through time than it is a device that itself moves through time, and it would not allow the technology itself to be moved backward in time.
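The age difference induced by transporting one mouth can be estimated with ordinary special-relativistic time dilation; the speed and duration below are illustrative assumptions, not figures from the source.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def proper_time(coord_time: float, v: float) -> float:
    """Proper time elapsed on a clock moving at speed v (m/s)
    during coord_time seconds of external coordinate time."""
    return coord_time * math.sqrt(1.0 - (v / C) ** 2)

# Suppose one wormhole mouth is toured at 0.9c for one year of
# external time and then brought back to rest near the other mouth.
year = 365.25 * 24 * 3600.0
tau = proper_time(year, 0.9 * C)
offset = year - tau  # how much "younger" the moved mouth ends up
print(f"moved mouth aged {tau / year:.3f} yr; offset ~ {offset / year:.3f} yr")
```

Per the argument in the text, an observer entering the younger mouth would exit the older one roughly that far in the external past, but never earlier than the machine's creation.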

According to current theories on the nature of wormholes, construction of a traversable wormhole would require the existence of a substance with negative energy, often referred to as “exotic matter”. More technically, the wormhole spacetime requires a distribution of energy that violates various energy conditions, such as the null energy condition along with the weak, strong, and dominant energy conditions. However, it is known that quantum effects can lead to small measurable violations of the null energy condition,[30]:101 and many physicists believe that the required negative energy may actually be possible due to the Casimir effect in quantum physics.[31] Although early calculations suggested a very large amount of negative energy would be required, later calculations showed that the amount of negative energy can be made arbitrarily small.[32]

In 1993, Matt Visser argued that the two mouths of a wormhole with such an induced clock difference could not be brought together without inducing quantum field and gravitational effects that would either make the wormhole collapse or make the two mouths repel each other.[33] Because of this, the two mouths could not be brought close enough for causality violation to take place. However, in a 1997 paper, Visser hypothesized that a complex “Roman ring” (named after Tom Roman) configuration of N wormholes arranged in a symmetric polygon could still act as a time machine, although he concludes that this is more likely a flaw in classical quantum gravity theory than proof that causality violation is possible.[34]

Another approach involves a dense spinning cylinder, usually referred to as a Tipler cylinder, a GR solution discovered by Willem Jacob van Stockum[35] in 1936 and Kornel Lanczos[36] in 1924, but not recognized as allowing closed timelike curves[37]:21 until an analysis by Frank Tipler[38] in 1974. If a cylinder is infinitely long and spins fast enough about its long axis, then a spaceship flying around the cylinder on a spiral path could travel back in time (or forward, depending on the direction of its spiral). However, the density and speed required are so great that ordinary matter is not strong enough to construct it. A similar device might be built from a cosmic string, but none are known to exist, and it does not seem to be possible to create a new cosmic string. Physicist Ronald Mallett is attempting to recreate the conditions of a rotating black hole with ring lasers, in order to bend spacetime and allow for time travel.[39]

A more fundamental objection to time travel schemes based on rotating cylinders or cosmic strings has been put forward by Stephen Hawking, who proved a theorem showing that according to general relativity it is impossible to build a time machine of a special type (a “time machine with the compactly generated Cauchy horizon”) in a region where the weak energy condition is satisfied, meaning that the region contains no matter with negative energy density (exotic matter). Solutions such as Tipler’s assume cylinders of infinite length, which are easier to analyze mathematically, and although Tipler suggested that a finite cylinder might produce closed timelike curves if the rotation rate were fast enough,[37]:169 he did not prove this. But Hawking points out that because of his theorem, “it can’t be done with positive energy density everywhere! I can prove that to build a finite time machine, you need negative energy.”[28]:96 This result comes from Hawking’s 1992 paper on the chronology protection conjecture, where he examines “the case that the causality violations appear in a finite region of spacetime without curvature singularities” and proves that “[t]here will be a Cauchy horizon that is compactly generated and that in general contains one or more closed null geodesics which will be incomplete. One can define geometrical quantities that measure the Lorentz boost and area increase on going round these closed null geodesics. If the causality violation developed from a noncompact initial surface, the averaged weak energy condition must be violated on the Cauchy horizon.”[26] This theorem does not rule out the possibility of time travel by means of time machines with the non-compactly generated Cauchy horizons (such as the Deutsch-Politzer time machine) or in regions which contain exotic matter, which would be used for traversable wormholes or the Alcubierre drive.

When a signal is sent from one location and received at another location, then as long as the signal is moving at the speed of light or slower, the mathematics of simultaneity in the theory of relativity show that all reference frames agree that the transmission-event happened before the reception-event. When the signal travels faster than light, it is received before it is sent, in all reference frames.[40] The signal could be said to have moved backward in time. This hypothetical scenario is sometimes referred to as a tachyonic antitelephone.[41]
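This frame dependence can be checked directly with the Lorentz transformation; a minimal numerical sketch in units where c = 1 (the event separations and frame speed are arbitrary illustrative values):

```python
import math

def boosted_dt(dt: float, dx: float, v: float) -> float:
    """Time separation between emission and reception as seen from a
    frame moving at speed v (in units of c) along x."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (dt - v * dx)

# Subluminal signal (dx/dt = 0.5): ordering is preserved in every frame.
print(boosted_dt(dt=1.0, dx=0.5, v=0.9))  # positive

# Superluminal signal (dx/dt = 2): a frame moving at v = 0.9 computes a
# negative separation, i.e. the signal is received before it is sent.
print(boosted_dt(dt=1.0, dx=2.0, v=0.9))  # negative
```

The sign flip occurs for any signal speed dx/dt greater than c²/v, which is the mathematical core of the tachyonic-antitelephone scenario.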

Quantum-mechanical phenomena such as quantum teleportation, the EPR paradox, or quantum entanglement might appear to create a mechanism that allows for faster-than-light (FTL) communication or time travel, and in fact some interpretations of quantum mechanics such as the Bohm interpretation presume that some information is being exchanged between particles instantaneously in order to maintain correlations between particles.[42] This effect was referred to as “spooky action at a distance” by Einstein.

Nevertheless, the fact that causality is preserved in quantum mechanics is a rigorous result in modern quantum field theories, and therefore modern theories do not allow for time travel or FTL communication. In any specific instance where FTL has been claimed, more detailed analysis has proven that to get a signal, some form of classical communication must also be used.[43] The no-communication theorem also gives a general proof that quantum entanglement cannot be used to transmit information faster than classical signals.

A variation of Everett’s many-worlds interpretation (MWI) of quantum mechanics provides a resolution to the grandfather paradox that involves the time traveler arriving in a different universe than the one they came from; it has been argued that since the traveler arrives in a different universe’s history and not their own history, this is not “genuine” time travel.[44] The accepted many-worlds interpretation suggests that all possible quantum events can occur in mutually exclusive histories.[45] However, some variations allow different universes to interact. This concept is most often used in science fiction, but some physicists such as David Deutsch have suggested that a time traveler should end up in a different history than the one he started from.[46][47] On the other hand, Stephen Hawking has argued that even if the MWI is correct, we should expect each time traveler to experience a single self-consistent history, so that time travelers remain within their own world rather than traveling to a different one.[48] The physicist Allen Everett argued that Deutsch’s approach “involves modifying fundamental principles of quantum mechanics; it certainly goes beyond simply adopting the MWI”. Everett also argues that even if Deutsch’s approach is correct, it would imply that any macroscopic object composed of multiple particles would be split apart when traveling back in time through a wormhole, with different particles emerging in different worlds.[22]

Daniel Greenberger and Karl Svozil proposed that quantum theory gives a model for time travel without paradoxes.[49][50] In quantum theory, observation causes possible states to “collapse” into one measured state; hence, the past observed from the present is deterministic (it has only one possible state), but the present observed from the past has many possible states until our actions cause it to collapse into one state. Our actions will then be seen to have been inevitable.

Certain experiments carried out give the impression of reversed causality, but fail to show it under closer examination.

The delayed choice quantum eraser experiment performed by Marlan Scully involves pairs of entangled photons that are divided into “signal photons” and “idler photons”, with the signal photons emerging from one of two locations and their position later measured as in the double-slit experiment. Depending on how the idler photon is measured, the experimenter can either learn which of the two locations the signal photon emerged from or “erase” that information. Even though the signal photons can be measured before the choice has been made about the idler photons, the choice seems to retroactively determine whether or not an interference pattern is observed when one correlates measurements of idler photons to the corresponding signal photons. However, since interference can only be observed after the idler photons are measured and they are correlated with the signal photons, there is no way for experimenters to tell what choice will be made in advance just by looking at the signal photons, only by gathering classical information from the entire system; thus causality is preserved.[51]

The experiment of Lijun Wang might also show causality violation, since it made it possible to send packages of waves through a bulb of caesium gas in such a way that the package appeared to exit the bulb 62 nanoseconds before its entry. However, a wave package is not a single well-defined object but rather a sum of multiple waves of different frequencies (see Fourier analysis), and the package can appear to move faster than light, or even backward in time, even if none of the pure waves in the sum do so. This effect cannot be used to send any matter, energy, or information faster than light,[52] so this experiment is understood not to violate causality either.

The physicists Günter Nimtz and Alfons Stahlhofen, of the University of Koblenz, claim to have violated Einstein’s theory of relativity by transmitting photons faster than the speed of light. They say they have conducted an experiment in which microwave photons traveled “instantaneously” between a pair of prisms that had been moved up to 3 ft (0.91 m) apart, using a phenomenon known as quantum tunneling. Nimtz told New Scientist magazine: “For the time being, this is the only violation of special relativity that I know of.” However, other physicists say that this phenomenon does not allow information to be transmitted faster than light. Aephraim Steinberg, a quantum optics expert at the University of Toronto, Canada, uses the analogy of a train traveling from Chicago to New York, but dropping off train cars at each station along the way, so that the center of the train moves forward at each stop; in this way, the speed of the center of the train exceeds the speed of any of the individual cars.[53]

Shengwang Du claims in a peer-reviewed journal to have observed single photons’ precursors, saying that they travel no faster than c in a vacuum. His experiment involved slow light as well as passing light through a vacuum. He generated two single photons, passing one through rubidium atoms that had been cooled with a laser (thus slowing the light) and passing one through a vacuum. Both times, apparently, the precursors preceded the photons’ main bodies, and the precursor traveled at c in a vacuum. According to Du, this implies that there is no possibility of light traveling faster than c and, thus, no possibility of violating causality.[54]

The absence of time travelers from the future is a variation of the Fermi paradox, and like the absence of extraterrestrial visitors, the absence of time travelers does not prove time travel is physically impossible; it might be that time travel is physically possible but is never developed or is cautiously used. Carl Sagan once suggested the possibility that time travelers could be here but are disguising their existence or are not recognized as time travelers.[27] Some versions of general relativity suggest that time travel might only be possible in a region of spacetime that is warped a certain way, and hence time travelers would not be able to travel back to earlier regions in spacetime, before this region existed. Stephen Hawking stated that this would explain why the world has not already been overrun by “tourists from the future.”[48]

Several experiments have been carried out to try to entice future humans, who might invent time travel technology, to come back and demonstrate it to people of the present time. Events such as Perth’s Destination Day (2005) or MIT’s Time Traveler Convention heavily publicized permanent “advertisements” of a meeting time and place for future time travelers to meet. Back in 1982, a group in Baltimore, Maryland, identifying itself as the Krononauts, hosted an event of this type welcoming visitors from the future.[55][56] These experiments could only have demonstrated the existence of time travel by producing a positive result, and they have failed so far: no time travelers are known to have attended either event. Some versions of the many-worlds interpretation can be used to suggest that future humans have traveled back in time, but have traveled back to the meeting time and place in a parallel universe.[57]

There is a great deal of observable evidence for time dilation in special relativity[58] and gravitational time dilation in general relativity,[59][60][61] for example in the famous and easy-to-replicate observation of atmospheric muon decay.[62][63][64] The theory of relativity states that the speed of light is invariant for all observers in any frame of reference; that is, it is always the same. Time dilation is a direct consequence of the invariance of the speed of light.[64] Time dilation may be regarded in a limited sense as “time travel into the future”: a person may use time dilation so that a small amount of proper time passes for them, while a large amount of proper time passes elsewhere. This can be achieved by traveling at relativistic speeds or through the effects of gravity.[65]
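The atmospheric-muon observation can be reproduced with a back-of-the-envelope calculation; the production altitude and muon speed below are typical textbook values assumed for illustration, not figures from the source.

```python
import math

C = 299_792_458.0      # speed of light, m/s
TAU_MUON = 2.197e-6    # muon mean lifetime at rest, s
ALTITUDE = 15_000.0    # assumed production altitude, m
BETA = 0.995           # assumed muon speed, as a fraction of c

gamma = 1.0 / math.sqrt(1.0 - BETA ** 2)
t_lab = ALTITUDE / (BETA * C)  # lab-frame flight time to the ground

# Without time dilation essentially no muons should survive the trip;
# with the lifetime stretched by gamma, a sizeable fraction does, which
# is what detectors at ground level actually observe.
frac_naive = math.exp(-t_lab / TAU_MUON)
frac_relativistic = math.exp(-t_lab / (gamma * TAU_MUON))
print(f"gamma = {gamma:.1f}")
print(f"surviving fraction, no dilation:   {frac_naive:.2e}")
print(f"surviving fraction, with dilation: {frac_relativistic:.2f}")
```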

For two identical clocks moving relative to each other without accelerating, each clock measures the other to be ticking slower. This is possible due to the relativity of simultaneity. However, the symmetry is broken if one clock accelerates, allowing for less proper time to pass for one clock than the other. The twin paradox describes this: one twin remains on Earth, while the other undergoes acceleration to relativistic speed as they travel into space, turn around, and travel back to Earth; the traveling twin ages less than the twin who stayed on Earth, because of the time dilation experienced during their acceleration. General relativity treats the effects of acceleration and the effects of gravity as equivalent, and shows that time dilation also occurs in gravity wells, with a clock deeper in the well ticking more slowly; this effect is taken into account when calibrating the clocks on the satellites of the Global Positioning System, and it could lead to significant differences in rates of aging for observers at different distances from a large gravity well such as a black hole.[24]:33–130
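The GPS calibration mentioned above can be reproduced to first order with the standard kinematic and gravitational clock-rate formulas; the orbital radius below is an approximate published figure, and the expressions are first-order estimates rather than an exact relativistic treatment.

```python
import math

C = 299_792_458.0       # speed of light, m/s
GM = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6       # mean Earth radius, m
R_GPS = 2.656e7         # approximate GPS orbital radius, m

v = math.sqrt(GM / R_GPS)                       # circular orbital speed
rate_velocity = -v ** 2 / (2 * C ** 2)          # satellite clock runs slow
rate_gravity = (GM / C ** 2) * (1 / R_EARTH - 1 / R_GPS)  # and fast, being higher up

day = 86_400.0
print(f"velocity effect: {rate_velocity * day * 1e6:+.1f} microseconds/day")
print(f"gravity effect:  {rate_gravity * day * 1e6:+.1f} microseconds/day")
print(f"net:             {(rate_velocity + rate_gravity) * day * 1e6:+.1f} microseconds/day")
```

The gravitational speed-up dominates, leaving a net drift on the order of +38 microseconds per day, which GPS satellite clocks are adjusted to cancel.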

A time machine that utilizes this principle might be, for instance, a spherical shell with a diameter of 5 meters and the mass of Jupiter. A person at its center will travel forward in time at a rate four times that of distant observers. Squeezing the mass of a large planet into such a small structure is not expected to be within humanity’s technological capabilities in the near future.[24]:76–140 With current technologies, it is only possible to cause a human traveler to age less than companions on Earth by a very small fraction of a second, the current record being about 20 milliseconds for the cosmonaut Sergei Avdeyev.[66]
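Avdeyev’s figure can be roughly recovered from the kinematic term alone; the orbital speed and cumulative mission time below are approximate assumed values, and the smaller, opposing gravitational effect of the station’s altitude is ignored.

```python
C = 299_792_458.0   # speed of light, m/s
V_ORBIT = 7_700.0   # assumed low-Earth-orbit speed, m/s
DAYS = 747          # approximate cumulative days Avdeyev spent in orbit

t = DAYS * 86_400.0
# First-order kinematic time dilation: delta ~ t * v^2 / (2 c^2)
delta = t * V_ORBIT ** 2 / (2 * C ** 2)
print(f"time 'gained' relative to Earth: about {delta * 1000:.0f} ms")
```

Under these assumptions the estimate comes out at roughly 20 milliseconds, consistent with the record cited in the text.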

Philosophers have discussed the nature of time since at least the time of ancient Greece; for example, Parmenides presented the view that time is an illusion. Centuries later, Newton supported the idea of absolute time, while his contemporary Leibniz maintained that time is only a relation between events and it cannot be expressed independently. The latter approach eventually gave rise to the spacetime of relativity.[67]

Many philosophers have argued that relativity implies eternalism, the idea that the past and future exist in a real sense, not only as changes that occurred or will occur to the present.[68] Philosopher of science Dean Rickles disagrees with some qualifications, but notes that “the consensus among philosophers seems to be that special and general relativity are incompatible with presentism.”[69] Some philosophers view time as a dimension equal to spatial dimensions, that future events are “already there” in the same sense different places exist, and that there is no objective flow of time; however, this view is disputed.[70]

Presentism is a school of philosophy that holds that the future and the past exist only as changes that occurred or will occur to the present, and they have no real existence of their own. In this view, time travel is impossible because there is no future or past to travel to.[68] Keller and Nelson have argued that even if past and future objects do not exist, there can still be definite truths about past and future events, and thus it is possible that a future truth about a time traveler deciding to travel back to the present date could explain the time traveler’s actual appearance in the present;[71] these views are contested by some authors.[72]

Presentism in classical spacetime deems that only the present exists; this is not reconcilable with special relativity, as shown in the following example: Alice and Bob are simultaneous observers of event O. For Alice, some event E is simultaneous with O, but for Bob, event E is in the past or future. Therefore, Alice and Bob disagree about what exists in the present, which contradicts classical presentism. “Here-now presentism” attempts to reconcile this by only acknowledging the time and space of a single point; this is unsatisfactory because objects coming and going from the “here-now” alternate between real and unreal, in addition to the lack of a privileged “here-now” that would be the “real” present. “Relativized presentism” acknowledges that there are infinite frames of reference, each of which has a different set of simultaneous events, which makes it impossible to distinguish a single “real” present, and hence either all events in time are real (blurring the difference between presentism and eternalism) or each frame of reference exists in its own reality. Options for presentism in special relativity appear to be exhausted, but Gödel and others suspect presentism may be valid for some forms of general relativity.[73] Generally, the idea of absolute time and space is considered incompatible with general relativity; there is no universal truth about the absolute position of events which occur at different times, and thus no way to determine which point in space at one time is at the universal “same position” at another time,[74] and all coordinate systems are on equal footing as given by the principle of diffeomorphism invariance.[75]
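The Alice-and-Bob disagreement can be made concrete with the Lorentz transformation, in units where c = 1 (the separations and relative speed are arbitrary illustrative values):

```python
import math

def time_in_bobs_frame(dt: float, dx: float, v: float) -> float:
    """Time separation between events O and E in Bob's frame, which
    moves at speed v (in units of c) along x relative to Alice."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (dt - v * dx)

# For Alice, E is simultaneous with O (dt = 0) but 3 light-seconds away.
dt_bob = time_in_bobs_frame(dt=0.0, dx=3.0, v=0.6)
print(dt_bob)  # nonzero: for Bob, E is not in the present at all
```

Because dt_bob is nonzero whenever dx is, the two observers necessarily assign E to different "presents", which is the conflict with classical presentism described above.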

A common objection to the idea of traveling back in time is put forth in the grandfather paradox or the argument of auto-infanticide.[76] If one were able to go back in time, inconsistencies and contradictions would ensue if the time traveler were to change anything; there is a contradiction if the past becomes different from the way it is.[77][78] The paradox is commonly described with a person who travels to the past and kills their own grandfather, preventing the existence of their father or mother, and therefore their own existence.[27] Philosophers question whether these paradoxes make time travel impossible. Some philosophers answer the paradoxes by arguing that it might be the case that backward time travel could be possible but that it would be impossible to actually change the past in any way,[79] an idea similar to the proposed Novikov self-consistency principle in physics.

According to the philosophical theory of compossibility, what can happen, for example in the context of time travel, must be weighed against the context of everything relating to the situation. If the past is a certain way, it is not possible for it to be any other way. What can happen when a time traveler visits the past is limited to what did happen, in order to prevent logical contradictions.[80]

The Novikov self-consistency principle, named after Igor Dmitrievich Novikov, states that any actions taken by a time traveler or by an object that travels back in time were part of history all along, and therefore it is impossible for the time traveler to “change” history in any way. The time traveler’s actions may be the cause of events in their own past though, which leads to the potential for circular causation, sometimes called a predestination paradox,[81] ontological paradox,[82] or bootstrap paradox.[82][83] The term bootstrap paradox was popularized by Robert A. Heinlein’s story “By His Bootstraps”.[84] The Novikov self-consistency principle proposes that the local laws of physics in a region of spacetime containing time travelers cannot be any different from the local laws of physics in any other region of spacetime.[85]

The philosopher Kelley L. Ross argues in “Time Travel Paradoxes”[86] that in a scenario involving a physical object whose world-line or history forms a closed loop in time there can be a violation of the second law of thermodynamics. Ross uses “Somewhere in Time” as an example of such an ontological paradox, where a watch is given to a person, and 60 years later the same watch is brought back in time and given to the same character. Ross states that the entropy of the watch will increase, and the watch carried back in time will be more worn with each repetition of its history. The second law of thermodynamics is understood by modern physicists to be a statistical law, so decreasing entropy or non-increasing entropy are not impossible, just improbable. Additionally, entropy statistically increases only in isolated systems; a non-isolated object that interacts with the outside world can become less worn and decrease in entropy, and it is possible for an object whose world-line forms a closed loop to be always in the same condition at the same point of its history.[24]:23

Time travel themes in science fiction and the media can generally be grouped into three categories: immutable timeline; mutable timeline; and alternate histories, as in the interacting-many-worlds interpretation.[87][88][89] Frequently in fiction, timeline is used to refer to all physical events in history, so that in time travel stories where events can be changed, the time traveler is described as creating a new or altered timeline.[90] This usage is distinct from the use of the term timeline to refer to a type of chart that illustrates a particular series of events, and the concept is also distinct from a world line, a term from Einstein’s theory of relativity which refers to the entire history of a single object.

Time travel – Wikipedia

Time travel is the concept of movement between certain points in time, analogous to movement between different points in space by an object or a person, typically using a hypothetical device known as a time machine, in the form of a vehicle or of a portal connecting distant points in spacetime, either to an earlier time or to a later time, without the need for the time-traveling body to experience the intervening period in the usual sense. Time travel is a widely recognized concept in philosophy and fiction. It was popularized by H. G. Wells’ 1895 novel The Time Machine, which moved the concept of time travel into the public imagination. However, it is uncertain if time travel to the past is physically possible. Forward time travel, outside the usual sense of the perception of time, is possible according to special relativity and general relativity, although making one body advance or delay more than a few milliseconds compared to another body is not feasible with current technology.[1] As for backward time travel, it is possible to find solutions in general relativity that allow for it, but the solutions require conditions that may not be physically possible. Traveling to an arbitrary point in spacetime has very limited support in theoretical physics, and is usually connected only with quantum mechanics or wormholes, also known as Einstein–Rosen bridges.

Some ancient myths depict a character skipping forward in time. In Hindu mythology, the Mahabharata mentions the story of King Raivata Kakudmi, who travels to heaven to meet the creator Brahma and is surprised to learn when he returns to Earth that many ages have passed.[2] The Buddhist Pāli Canon mentions the relativity of time. The Payasi Sutta tells of one of the Buddha’s chief disciples, Kumāra Kassapa, who explains to the skeptic Payasi that time in the Heavens passes differently than on Earth.[3] The Japanese tale of “Urashima Tarō”,[4] first described in the Nihongi (720), tells of a young fisherman named Urashima Tarō who visits an undersea palace. After three days, he returns home to his village and finds himself 300 years in the future, where he has been forgotten, his house is in ruins, and his family has died.[5] In Jewish tradition, the 1st-century BC scholar Honi ha-M’agel is said to have fallen asleep and slept for seventy years. When he awoke, he returned home but found none of the people he knew, and no one believed he was who he claimed to be.[6]

Early science fiction stories feature characters who sleep for years and awaken in a changed society, or are transported to the past through supernatural means. Among them are L’An 2440, rêve s’il en fût jamais (1770) by Louis-Sébastien Mercier, Rip Van Winkle (1819) by Washington Irving, Looking Backward (1888) by Edward Bellamy, and When the Sleeper Awakes (1899) by H. G. Wells. Prolonged sleep, like the more familiar time machine, is used as a means of time travel in these stories.[7]

The earliest work about backwards time travel is uncertain. Samuel Madden’s Memoirs of the Twentieth Century (1733) is a series of letters from British ambassadors in 1997 and 1998 to diplomats in the past, conveying the political and religious conditions of the future.[8]:95–96 Because the narrator receives these letters from his guardian angel, Paul Alkon suggests in his book Origins of Futuristic Fiction that “the first time-traveler in English literature is a guardian angel.”[8]:85 Madden does not explain how the angel obtains these documents, but Alkon asserts that Madden “deserves recognition as the first to toy with the rich idea of time-travel in the form of an artifact sent backward from the future to be discovered in the present.”[8]:95–96 In the science fiction anthology Far Boundaries (1951), editor August Derleth claims that an early short story about time travel is “Missing One’s Coach: An Anachronism”, written for the Dublin Literary Magazine[9] by an anonymous author in 1838.[10]:3 While the narrator waits under a tree for a coach to take him out of Newcastle, he is transported back in time over a thousand years. He encounters the Venerable Bede in a monastery and explains to him the developments of the coming centuries. However, the story never makes it clear whether these events are real or a dream.[10]:11–38 Another early work about time travel is The Forebears of Kalimeros: Alexander, son of Philip of Macedon by Alexander Veltman published in 1836.[11]

Charles Dickens’s A Christmas Carol (1843) has early depictions of time travel in both directions, as the protagonist, Ebenezer Scrooge, is transported to Christmases past and future. Other stories employ the same template, where a character naturally goes to sleep, and upon waking up finds themselves in a different time.[12] A clearer example of backward time travel is found in the popular 1861 book Paris avant les hommes (Paris before Men) by the French botanist and geologist Pierre Boitard, published posthumously. In this story, the protagonist is transported to the prehistoric past by the magic of a “lame demon” (a French pun on Boitard’s name), where he encounters a Plesiosaur and an apelike ancestor and is able to interact with ancient creatures.[13] Edward Everett Hale’s “Hands Off” (1881) tells the story of an unnamed being, possibly the soul of a person who has recently died, who interferes with ancient Egyptian history by preventing Joseph’s enslavement. This may have been the first story to feature an alternate history created as a result of time travel.[14]:54

One of the first stories to feature time travel by means of a machine is “The Clock that Went Backward” by Edward Page Mitchell,[15] which appeared in the New York Sun in 1881. However, the mechanism borders on fantasy. An unusual clock, when wound, runs backwards and transports people nearby back in time. The author does not explain the origin or properties of the clock.[14]:55 Enrique Gaspar y Rimbau’s El Anacronópete (1887) may have been the first story to feature a vessel engineered to travel through time.[16][17] Andrew Sawyer has commented that the story “does seem to be the first literary description of a time machine noted so far”, adding that “Edward Page Mitchell’s story ‘The Clock That Went Backward’ (1881) is usually described as the first time-machine story, but I’m not sure that a clock quite counts.”[18] H. G. Wells’s The Time Machine (1895) popularized the concept of time travel by mechanical means.[19]

Some theories, most notably special and general relativity, suggest that suitable geometries of spacetime or specific types of motion in space might allow time travel into the past and future if these geometries or motions were possible.[20]:499 In technical papers, physicists discuss the possibility of closed timelike curves, which are world lines that form closed loops in spacetime, allowing objects to return to their own past. There are known to be solutions to the equations of general relativity that describe spacetimes which contain closed timelike curves, such as Gödel spacetime, but the physical plausibility of these solutions is uncertain.

Many in the scientific community believe that backward time travel is highly unlikely. Any theory that would allow time travel would introduce potential problems of causality.[21] The classic example of a problem involving causality is the “grandfather paradox”: what if one were to go back in time and kill one’s own grandfather before one’s father was conceived? Some physicists, such as Novikov and Deutsch, suggested that these sorts of temporal paradoxes can be avoided through the Novikov self-consistency principle or a variation of the many-worlds interpretation with interacting worlds.[22]

Time travel to the past is theoretically possible in certain general relativity spacetime geometries that permit traveling faster than the speed of light, such as cosmic strings, traversable wormholes, and the Alcubierre drive.[23][24]:33–130 The theory of general relativity does suggest a scientific basis for the possibility of backward time travel in certain unusual scenarios, although arguments from semiclassical gravity suggest that when quantum effects are incorporated into general relativity, these loopholes may be closed.[25] These semiclassical arguments led Hawking to formulate the chronology protection conjecture, suggesting that the fundamental laws of nature prevent time travel,[26] but physicists cannot come to a definite judgment on the issue without a theory of quantum gravity to join quantum mechanics and general relativity into a completely unified theory.[27][28]:150

The theory of general relativity describes the universe under a system of field equations that determine the metric, or distance function, of spacetime. There exist exact solutions to these equations that include closed timelike curves, which are world lines that intersect themselves; some point in the causal future of the world line is also in its causal past, a situation which is akin to time travel. Such a solution was first proposed by Kurt Gödel, a solution known as the Gödel metric, but his (and others’) solution requires the universe to have physical characteristics that it does not appear to have,[20]:499 such as rotation and lack of Hubble expansion. Whether general relativity forbids closed timelike curves for all realistic conditions is still being researched.[29]

Wormholes are hypothetical warped regions of spacetime permitted by the Einstein field equations of general relativity.[30]:100 A proposed time-travel machine using a traversable wormhole would hypothetically work in the following way: One end of the wormhole is accelerated to some significant fraction of the speed of light, perhaps with some advanced propulsion system, and then brought back to the point of origin. Alternatively, another way is to take one entrance of the wormhole and move it to within the gravitational field of an object that has higher gravity than the other entrance, and then return it to a position near the other entrance. For both of these methods, time dilation causes the end of the wormhole that has been moved to have aged less, or become “younger”, than the stationary end as seen by an external observer; however, time connects differently through the wormhole than outside it, so that synchronized clocks at either end of the wormhole will always remain synchronized as seen by an observer passing through the wormhole, no matter how the two ends move around.[20]:502 This means that an observer entering the “younger” end would exit the “older” end at a time when it was the same age as the “younger” end, effectively going back in time as seen by an observer from the outside. One significant limitation of such a time machine is that it is only possible to go as far back in time as the initial creation of the machine;[20]:503 in essence, it is more of a path through time than it is a device that itself moves through time, and it would not allow the technology itself to be moved backward in time.
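The time-dilation bookkeeping behind this scheme can be sketched with a short calculation. The tow speed and trip duration below are illustrative assumptions, not figures from any concrete proposal:

```python
import math

C = 299_792_458.0  # speed of light, m/s


def lorentz_gamma(v: float) -> float:
    """Lorentz factor for speed v in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)


# Illustrative trip: one mouth is towed at 0.866c (gamma ≈ 2) for
# 10 years of external time, then brought back to rest.
external_years = 10.0
v = 0.866 * C

# Proper time at the moving mouth is reduced by the Lorentz factor.
moving_mouth_years = external_years / lorentz_gamma(v)

# Clocks remain synchronized *through* the wormhole, so as seen from
# outside, the two ends now differ by this fixed offset:
offset_years = external_years - moving_mouth_years
print(f"offset ≈ {offset_years:.1f} years")
```

With these numbers the mouths end up offset by about five years: an observer entering the moved (“younger”) mouth exits the stationary mouth five years earlier as judged from outside, though never earlier than the moment the machine was set up.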

According to current theories on the nature of wormholes, construction of a traversable wormhole would require the existence of a substance with negative energy, often referred to as “exotic matter”. More technically, the wormhole spacetime requires a distribution of energy that violates various energy conditions, such as the null energy condition along with the weak, strong, and dominant energy conditions. However, it is known that quantum effects can lead to small measurable violations of the null energy condition,[30]:101 and many physicists believe that the required negative energy may actually be possible due to the Casimir effect in quantum physics.[31] Although early calculations suggested a very large amount of negative energy would be required, later calculations showed that the amount of negative energy can be made arbitrarily small.[32]

In 1993, Matt Visser argued that the two mouths of a wormhole with such an induced clock difference could not be brought together without inducing quantum field and gravitational effects that would either make the wormhole collapse or the two mouths repel each other.[33] Because of this, the two mouths could not be brought close enough for causality violation to take place. However, in a 1997 paper, Visser hypothesized that a complex “Roman ring” (named after Tom Roman) configuration of N wormholes arranged in a symmetric polygon could still act as a time machine, although he concludes that this is more likely a flaw in classical quantum gravity theory rather than proof that causality violation is possible.[34]

Another approach involves a dense spinning cylinder usually referred to as a Tipler cylinder, a GR solution discovered by Willem Jacob van Stockum[35] in 1936 and Kornel Lanczos[36] in 1924, but not recognized as allowing closed timelike curves[37]:21 until an analysis by Frank Tipler[38] in 1974. If a cylinder is infinitely long and spins fast enough about its long axis, then a spaceship flying around the cylinder on a spiral path could travel back in time (or forward, depending on the direction of its spiral). However, the density and speed required are so great that ordinary matter is not strong enough to construct it. A similar device might be built from a cosmic string, but none are known to exist, and it does not seem to be possible to create a new cosmic string. Physicist Ronald Mallett is attempting to recreate the conditions of a rotating black hole with ring lasers, in order to bend spacetime and allow for time travel.[39]

A more fundamental objection to time travel schemes based on rotating cylinders or cosmic strings has been put forward by Stephen Hawking, who proved a theorem showing that according to general relativity it is impossible to build a time machine of a special type (a “time machine with the compactly generated Cauchy horizon”) in a region where the weak energy condition is satisfied, meaning that the region contains no matter with negative energy density (exotic matter). Solutions such as Tipler’s assume cylinders of infinite length, which are easier to analyze mathematically, and although Tipler suggested that a finite cylinder might produce closed timelike curves if the rotation rate were fast enough,[37]:169 he did not prove this. But Hawking points out that because of his theorem, “it can’t be done with positive energy density everywhere! I can prove that to build a finite time machine, you need negative energy.”[28]:96 This result comes from Hawking’s 1992 paper on the chronology protection conjecture, where he examines “the case that the causality violations appear in a finite region of spacetime without curvature singularities” and proves that “[t]here will be a Cauchy horizon that is compactly generated and that in general contains one or more closed null geodesics which will be incomplete. One can define geometrical quantities that measure the Lorentz boost and area increase on going round these closed null geodesics. If the causality violation developed from a noncompact initial surface, the averaged weak energy condition must be violated on the Cauchy horizon.”[26] This theorem does not rule out the possibility of time travel by means of time machines with the non-compactly generated Cauchy horizons (such as the Deutsch-Politzer time machine) or in regions which contain exotic matter, which would be used for traversable wormholes or the Alcubierre drive.

When a signal is sent from one location and received at another location, then as long as the signal is moving at the speed of light or slower, the mathematics of simultaneity in the theory of relativity show that all reference frames agree that the transmission-event happened before the reception-event. When the signal travels faster than light, it is received before it is sent, in all reference frames.[40] The signal could be said to have moved backward in time. This hypothetical scenario is sometimes referred to as a tachyonic antitelephone.[41]
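The frame dependence of the reception event can be checked directly with a Lorentz transformation. The signal speed (2c) and observer velocity (0.6c) below are arbitrary illustrative choices:

```python
import math

C = 1.0  # work in units where the speed of light is 1


def boosted_time(t: float, x: float, v: float) -> float:
    """Time coordinate of event (t, x) in a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (t - v * x / C**2)


x_recv = 1.0  # emission at the origin, reception one unit of distance away

# A light-speed signal is received at t = 1. For an observer moving at
# 0.6c, reception still happens after emission: the ordering is absolute.
t_light = boosted_time(1.0, x_recv, 0.6)

# A signal at 2c is received at t = 0.5. The same observer assigns the
# reception a negative time coordinate: it arrives before it was sent.
t_ftl = boosted_time(0.5, x_recv, 0.6)

print(t_light, t_ftl)  # the second value is negative
```

The sign flip occurs whenever the product of signal speed and frame velocity exceeds c², which is exactly the condition in the source text: subluminal signals keep their ordering in every frame, while any faster-than-light signal is received before emission in some frame.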

Quantum-mechanical phenomena such as quantum teleportation, the EPR paradox, or quantum entanglement might appear to create a mechanism that allows for faster-than-light (FTL) communication or time travel, and in fact some interpretations of quantum mechanics such as the Bohm interpretation presume that some information is being exchanged between particles instantaneously in order to maintain correlations between particles.[42] This effect was referred to as “spooky action at a distance” by Einstein.

Nevertheless, the fact that causality is preserved in quantum mechanics is a rigorous result in modern quantum field theories, and therefore modern theories do not allow for time travel or FTL communication. In any specific instance where FTL has been claimed, more detailed analysis has proven that to get a signal, some form of classical communication must also be used.[43] The no-communication theorem also gives a general proof that quantum entanglement cannot be used to transmit information faster than classical signals.

A variation of Everett’s many-worlds interpretation (MWI) of quantum mechanics provides a resolution to the grandfather paradox that involves the time traveler arriving in a different universe than the one they came from; it has been argued that since the traveler arrives in a different universe’s history and not their own history, this is not “genuine” time travel.[44] The accepted many-worlds interpretation suggests that all possible quantum events can occur in mutually exclusive histories.[45] However, some variations allow different universes to interact. This concept is most often used in science fiction, but some physicists such as David Deutsch have suggested that a time traveler should end up in a different history than the one they started from.[46][47] On the other hand, Stephen Hawking has argued that even if the MWI is correct, we should expect each time traveler to experience a single self-consistent history, so that time travelers remain within their own world rather than traveling to a different one.[48] The physicist Allen Everett argued that Deutsch’s approach “involves modifying fundamental principles of quantum mechanics; it certainly goes beyond simply adopting the MWI”. Everett also argues that even if Deutsch’s approach is correct, it would imply that any macroscopic object composed of multiple particles would be split apart when traveling back in time through a wormhole, with different particles emerging in different worlds.[22]

Daniel Greenberger and Karl Svozil proposed that quantum theory gives a model for time travel without paradoxes.[49][50] The quantum theory observation causes possible states to ‘collapse’ into one measured state; hence, the past observed from the present is deterministic (it has only one possible state), but the present observed from the past has many possible states until our actions cause it to collapse into one state. Our actions will then be seen to have been inevitable.

Certain experiments give the impression of reversed causality, but fail to show it under closer examination.

The delayed choice quantum eraser experiment performed by Marlan Scully involves pairs of entangled photons that are divided into “signal photons” and “idler photons”, with the signal photons emerging from one of two locations and their position later measured as in the double-slit experiment. Depending on how the idler photon is measured, the experimenter can either learn which of the two locations the signal photon emerged from or “erase” that information. Even though the signal photons can be measured before the choice has been made about the idler photons, the choice seems to retroactively determine whether or not an interference pattern is observed when one correlates measurements of idler photons to the corresponding signal photons. However, since interference can only be observed after the idler photons are measured and they are correlated with the signal photons, there is no way for experimenters to tell what choice will be made in advance just by looking at the signal photons, only by gathering classical information from the entire system; thus causality is preserved.[51]

The experiment of Lijun Wang might also show causality violation since it made it possible to send wave packets through a bulb of caesium gas in such a way that the packet appeared to exit the bulb 62 nanoseconds before its entry, but a wave packet is not a single well-defined object but rather a sum of multiple waves of different frequencies (see Fourier analysis), and the packet can appear to move faster than light or even backward in time even if none of the pure waves in the sum do so. This effect cannot be used to send any matter, energy, or information faster than light,[52] so this experiment is understood not to violate causality either.

The physicists Günter Nimtz and Alfons Stahlhofen, of the University of Koblenz, claim to have violated Einstein’s theory of relativity by transmitting photons faster than the speed of light. They say they have conducted an experiment in which microwave photons traveled “instantaneously” between a pair of prisms that had been moved up to 3 ft (0.91 m) apart, using a phenomenon known as quantum tunneling. Nimtz told New Scientist magazine: “For the time being, this is the only violation of special relativity that I know of.” However, other physicists say that this phenomenon does not allow information to be transmitted faster than light. Aephraim Steinberg, a quantum optics expert at the University of Toronto, Canada, uses the analogy of a train traveling from Chicago to New York, but dropping off train cars at each station along the way, so that the center of the train moves forward at each stop; in this way, the speed of the center of the train exceeds the speed of any of the individual cars.[53]

Shengwang Du claims in a peer-reviewed journal to have observed single photons’ precursors, saying that they travel no faster than c in a vacuum. His experiment involved slow light as well as passing light through a vacuum. He generated two single photons, passing one through rubidium atoms that had been cooled with a laser (thus slowing the light) and passing one through a vacuum. Both times, apparently, the precursors preceded the photons’ main bodies, and the precursor traveled at c in a vacuum. According to Du, this implies that there is no possibility of light traveling faster than c and, thus, no possibility of violating causality.[54]

The absence of time travelers from the future is a variation of the Fermi paradox, and like the absence of extraterrestrial visitors, the absence of time travelers does not prove time travel is physically impossible; it might be that time travel is physically possible but is never developed or is cautiously used. Carl Sagan once suggested the possibility that time travelers could be here but are disguising their existence or are not recognized as time travelers.[27] Some versions of general relativity suggest that time travel might only be possible in a region of spacetime that is warped a certain way, and hence time travelers would not be able to travel back to earlier regions in spacetime, before this region existed. Stephen Hawking stated that this would explain why the world has not already been overrun by “tourists from the future.”[48]

Several experiments have been carried out to try to entice future humans, who might invent time travel technology, to come back and demonstrate it to people of the present time. Events such as Perth’s Destination Day (2005) or MIT’s Time Traveler Convention heavily publicized permanent “advertisements” of a meeting time and place for future time travelers to meet. Back in 1982, a group in Baltimore, Maryland, identifying itself as the Krononauts, hosted an event of this type welcoming visitors from the future.[55][56] These experiments could only have produced a positive result demonstrating the existence of time travel; they have failed so far, as no time travelers are known to have attended either event. Some versions of the many-worlds interpretation can be used to suggest that future humans have traveled back in time, but have traveled back to the meeting time and place in a parallel universe.[57]

There is a great deal of observable evidence for time dilation in special relativity[58] and gravitational time dilation in general relativity,[59][60][61] for example in the famous and easy-to-replicate observation of atmospheric muon decay.[62][63][64] The theory of relativity states that the speed of light is invariant for all observers in any frame of reference; that is, it is always the same. Time dilation is a direct consequence of the invariance of the speed of light.[64] Time dilation may be regarded in a limited sense as “time travel into the future”: a person may use time dilation so that a small amount of proper time passes for them, while a large amount of proper time passes elsewhere. This can be achieved by traveling at relativistic speeds or through the effects of gravity.[65]
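The muon observation mentioned above is easy to quantify with a back-of-the-envelope survival calculation. The production altitude and muon speed below are typical textbook values, chosen for illustration:

```python
import math

C = 3.0e8            # m/s, approximate speed of light
TAU = 2.2e-6         # s, muon mean lifetime at rest
ALTITUDE = 15_000.0  # m, typical production height for cosmic-ray muons
V = 0.98 * C         # typical muon speed

gamma = 1.0 / math.sqrt(1.0 - (V / C) ** 2)  # Lorentz factor, ≈ 5
flight_time = ALTITUDE / V                   # lab-frame time to reach the ground

# Without time dilation, roughly 23 lifetimes elapse in flight and
# essentially no muons survive to the ground.
survive_classical = math.exp(-flight_time / TAU)

# With time dilation, only flight_time / gamma passes on the muon's own
# clock, so a measurable fraction survives, as is actually observed.
survive_relativistic = math.exp(-flight_time / (gamma * TAU))

print(f"{survive_classical:.1e} -> {survive_relativistic:.1e}")
```

With these numbers the relativistic survival fraction is around one percent, many orders of magnitude above the non-relativistic prediction, which is why ground-level muon counts are a standard classroom confirmation of time dilation.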

For two identical clocks moving relative to each other without accelerating, each clock measures the other to be ticking slower. This is possible due to the relativity of simultaneity. However, the symmetry is broken if one clock accelerates, allowing for less proper time to pass for one clock than the other. The twin paradox describes this: one twin remains on Earth, while the other undergoes acceleration to relativistic speed as they travel into space, turn around, and travel back to Earth; the traveling twin ages less than the twin who stayed on Earth, because of the time dilation experienced during their acceleration. General relativity treats the effects of acceleration and the effects of gravity as equivalent, and shows that time dilation also occurs in gravity wells, with a clock deeper in the well ticking more slowly; this effect is taken into account when calibrating the clocks on the satellites of the Global Positioning System, and it could lead to significant differences in rates of aging for observers at different distances from a large gravity well such as a black hole.[24]:33–130
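The GPS calibration mentioned above is a concrete, well-measured instance of both effects at once. A rough weak-field estimate, using approximate published orbital parameters, reproduces the familiar net drift of roughly 38 microseconds per day:

```python
import math

C = 2.998e8         # m/s, speed of light
GM = 3.986e14       # m^3/s^2, Earth's gravitational parameter
R_GROUND = 6.371e6  # m, radius of a clock on the ground
R_SAT = 2.656e7     # m, GPS orbital radius (~20,200 km altitude)
V_SAT = math.sqrt(GM / R_SAT)  # circular orbital speed, ~3.9 km/s
DAY = 86400.0       # s

# Gravitational term: the satellite clock sits higher in the potential
# well, so it runs fast relative to a ground clock.
grav_gain = (GM / C**2) * (1.0 / R_GROUND - 1.0 / R_SAT) * DAY

# Velocity term: the satellite's orbital speed slows its clock.
vel_loss = 0.5 * (V_SAT / C) ** 2 * DAY

net = grav_gain - vel_loss  # seconds gained per satellite clock per day
print(f"satellite clock gains ≈ {net * 1e6:.1f} microseconds per day")
```

The gravitational term (~46 µs/day fast) dominates the velocity term (~7 µs/day slow); GPS satellite clocks are deliberately detuned before launch to compensate for the net offset.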

A time machine that utilizes this principle might be, for instance, a spherical shell with a diameter of 5 meters and the mass of Jupiter. A person at its center will travel forward in time at a rate four times that of distant observers. Squeezing the mass of a large planet into such a small structure is not expected to be within humanity’s technological capabilities in the near future.[24]:76–140 With current technologies, it is only possible to cause a human traveler to age less than companions on Earth by a very small fraction of a second, the current record being about 20 milliseconds for the cosmonaut Sergei Avdeyev.[66]
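The roughly 20 milliseconds quoted for Avdeyev is consistent with a simple special-relativistic estimate. The mission duration and orbital speed below are approximate, and the smaller, opposing gravitational term for low Earth orbit is ignored:

```python
# Approximate figures: ~747 cumulative days in low Earth orbit at ~7.7 km/s.
C = 2.998e8        # m/s, speed of light
v = 7.7e3          # m/s, typical orbital speed in low Earth orbit
t = 747 * 86400.0  # s, cumulative time in orbit

# For v << c, a moving clock lags a resting one by about (v^2 / 2c^2) * t.
lag = 0.5 * (v / C) ** 2 * t
print(f"≈ {lag * 1e3:.0f} milliseconds")  # on the order of 20 ms
```

The estimate lands near 21 ms, in line with the quoted record; a more careful figure would subtract the gravitational blueshift from being slightly higher in Earth's gravity well.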

Philosophers have discussed the nature of time since at least the time of ancient Greece; for example, Parmenides presented the view that time is an illusion. Centuries later, Newton supported the idea of absolute time, while his contemporary Leibniz maintained that time is only a relation between events and it cannot be expressed independently. The latter approach eventually gave rise to the spacetime of relativity.[67]

Many philosophers have argued that relativity implies eternalism, the idea that the past and future exist in a real sense, not only as changes that occurred or will occur to the present.[68] Philosopher of science Dean Rickles disagrees with some qualifications, but notes that “the consensus among philosophers seems to be that special and general relativity are incompatible with presentism.”[69] Some philosophers view time as a dimension equal to spatial dimensions, that future events are “already there” in the same sense different places exist, and that there is no objective flow of time; however, this view is disputed.[70]

Presentism is a school of philosophy that holds that the future and the past exist only as changes that occurred or will occur to the present, and they have no real existence of their own. In this view, time travel is impossible because there is no future or past to travel to.[68] Keller and Nelson have argued that even if past and future objects do not exist, there can still be definite truths about past and future events, and thus it is possible that a future truth about a time traveler deciding to travel back to the present date could explain the time traveler’s actual appearance in the present;[71] these views are contested by some authors.[72]

Presentism in classical spacetime deems that only the present exists; this is not reconcilable with special relativity, as the following example shows: Alice and Bob are simultaneous observers of event O. For Alice, some event E is simultaneous with O, but for Bob, event E is in the past or future. Therefore, Alice and Bob disagree about what exists in the present, which contradicts classical presentism. “Here-now presentism” attempts to reconcile this by only acknowledging the time and space of a single point; this is unsatisfactory because objects coming and going from the “here-now” alternate between real and unreal, in addition to the lack of a privileged “here-now” that would be the “real” present. “Relativized presentism” acknowledges that there are infinitely many frames of reference, each with a different set of simultaneous events, which makes it impossible to distinguish a single “real” present; hence either all events in time are real (blurring the difference between presentism and eternalism) or each frame of reference exists in its own reality. Options for presentism in special relativity appear to be exhausted, but Gödel and others suspect presentism may be valid for some forms of general relativity.[73] Generally, the idea of absolute time and space is considered incompatible with general relativity; there is no universal truth about the absolute position of events which occur at different times, and thus no way to determine which point in space at one time is at the universal “same position” at another time,[74] and all coordinate systems are on equal footing as given by the principle of diffeomorphism invariance.[75]
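
The Alice-and-Bob disagreement above follows directly from the Lorentz transformation. A small sketch with illustrative numbers (the 0.6c relative speed and the event coordinates are arbitrary choices):

```python
import math

def lorentz_t(t, x, v):
    """Time coordinate of an event in a frame moving at speed v (units c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * (t - v * x)

# Event O sits at the origin. Event E occurs at t = 0, x = 1 light-second
# in Alice's frame, so Alice judges E simultaneous with O. Bob moves at
# 0.6c relative to Alice.
v = 0.6
t_E_bob = lorentz_t(t=0.0, x=1.0, v=v)
print(f"E in Bob's frame: t' = {t_E_bob:.3f} s")  # negative: E lies in Bob's past
```

Since t' = -0.75 s rather than zero, Bob places E in his past while Alice places it in her present, which is exactly the disagreement the paragraph describes.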

A common objection to the idea of traveling back in time is put forth in the grandfather paradox or the argument of auto-infanticide.[76] If one were able to go back in time, inconsistencies and contradictions would ensue if the time traveler were to change anything; there is a contradiction if the past becomes different from the way it is.[77][78] The paradox is commonly described with a person who travels to the past and kills their own grandfather, prevents the existence of their father or mother, and therefore their own existence.[27] Philosophers question whether these paradoxes make time travel impossible. Some philosophers answer the paradoxes by arguing that it might be the case that backward time travel could be possible but that it would be impossible to actually change the past in any way,[79] an idea similar to the proposed Novikov self-consistency principle in physics.

According to the philosophical theory of compossibility, what can happen, for example in the context of time travel, must be weighed against the context of everything relating to the situation. If the past is a certain way, it’s not possible for it to be any other way. What can happen when a time traveler visits the past is limited to what did happen, in order to prevent logical contradictions.[80]

The Novikov self-consistency principle, named after Igor Dmitrievich Novikov, states that any actions taken by a time traveler or by an object that travels back in time were part of history all along, and therefore it is impossible for the time traveler to “change” history in any way. The time traveler’s actions may be the cause of events in their own past though, which leads to the potential for circular causation, sometimes called a predestination paradox,[81] ontological paradox,[82] or bootstrap paradox.[82][83] The term bootstrap paradox was popularized by Robert A. Heinlein’s story “By His Bootstraps”.[84] The Novikov self-consistency principle proposes that the local laws of physics in a region of spacetime containing time travelers cannot be any different from the local laws of physics in any other region of spacetime.[85]

The philosopher Kelley L. Ross argues in “Time Travel Paradoxes”[86] that in a scenario involving a physical object whose world-line or history forms a closed loop in time there can be a violation of the second law of thermodynamics. Ross uses “Somewhere in Time” as an example of such an ontological paradox, where a watch is given to a person, and 60 years later the same watch is brought back in time and given to the same character. Ross states that entropy of the watch will increase, and the watch carried back in time will be more worn with each repetition of its history. The second law of thermodynamics is understood by modern physicists to be a statistical law, so decreasing entropy or non-increasing entropy are not impossible, just improbable. Additionally, entropy statistically increases in systems which are isolated, so non-isolated systems, such as an object, that interact with the outside world, can become less worn and decrease in entropy, and it’s possible for an object whose world-line forms a closed loop to be always in the same condition in the same point of its history.[24]:23

Time travel themes in science fiction and the media can generally be grouped into three categories: immutable timeline; mutable timeline; and alternate histories, as in the interacting-many-worlds interpretation.[87][88][89] Frequently in fiction, timeline is used to refer to all physical events in history, so that in time travel stories where events can be changed, the time traveler is described as creating a new or altered timeline.[90] This usage is distinct from the use of the term timeline to refer to a type of chart that illustrates a particular series of events, and the concept is also distinct from a world line, a term from Einstein’s theory of relativity which refers to the entire history of a single object.

Time travel – Wikipedia

Time travel is the concept of movement between certain points in time, analogous to movement between different points in space by an object or a person, typically using a hypothetical device known as a time machine, in the form of a vehicle or of a portal connecting distant points in spacetime, either to an earlier time or to a later time, without the need for the time-traveling body to experience the intervening period in the usual sense. Time travel is a widely recognized concept in philosophy and fiction. It was popularized by H. G. Wells’ 1895 novel The Time Machine, which moved the concept of time travel into the public imagination. However, it is uncertain if time travel to the past is physically possible. Forward time travel, outside the usual sense of the perception of time, is possible according to special relativity and general relativity, although making one body advance or delay more than a few milliseconds compared to another body is not feasible with current technology.[1] As for backwards time travel, it is possible to find solutions in general relativity that allow for it, but the solutions require conditions that may not be physically possible. Traveling to an arbitrary point in spacetime has very limited support in theoretical physics, and is usually connected only with quantum mechanics or wormholes, also known as Einstein–Rosen bridges.

Some ancient myths depict a character skipping forward in time. In Hindu mythology, the Mahabharata mentions the story of King Raivata Kakudmi, who travels to heaven to meet the creator Brahma and is surprised to learn when he returns to Earth that many ages have passed.[2] The Buddhist Pāli Canon mentions the relativity of time. The Payasi Sutta tells of one of the Buddha’s chief disciples, Kumara Kassapa, who explains to the skeptic Payasi that time in the Heavens passes differently than on Earth.[3] The Japanese tale of “Urashima Tarō”,[4] first described in the Nihongi (720), tells of a young fisherman named Urashima Tarō who visits an undersea palace. After three days, he returns home to his village and finds himself 300 years in the future, where he has been forgotten, his house is in ruins, and his family has died.[5] In Jewish tradition, the 1st-century BC scholar Honi ha-M’agel is said to have fallen asleep and slept for seventy years. When he woke, he returned home but found none of the people he knew, and no one believed he was who he claimed to be.[6]

Early science fiction stories feature characters who sleep for years and awaken in a changed society, or are transported to the past through supernatural means. Among them are L’An 2440, rêve s’il en fût jamais (1770) by Louis-Sébastien Mercier, “Rip Van Winkle” (1819) by Washington Irving, Looking Backward (1888) by Edward Bellamy, and When the Sleeper Wakes (1899) by H. G. Wells. Prolonged sleep, like the more familiar time machine, is used as a means of time travel in these stories.[7]

The earliest work about backwards time travel is uncertain. Samuel Madden’s Memoirs of the Twentieth Century (1733) is a series of letters from British ambassadors in 1997 and 1998 to diplomats in the past, conveying the political and religious conditions of the future.[8]:95–96 Because the narrator receives these letters from his guardian angel, Paul Alkon suggests in his book Origins of Futuristic Fiction that “the first time-traveler in English literature is a guardian angel.”[8]:85 Madden does not explain how the angel obtains these documents, but Alkon asserts that Madden “deserves recognition as the first to toy with the rich idea of time-travel in the form of an artifact sent backward from the future to be discovered in the present.”[8]:95–96 In the science fiction anthology Far Boundaries (1951), editor August Derleth claims that an early short story about time travel is “Missing One’s Coach: An Anachronism”, written for the Dublin Literary Magazine[9] by an anonymous author in 1838.[10]:3 While the narrator waits under a tree for a coach to take him out of Newcastle, he is transported back in time over a thousand years. He encounters the Venerable Bede in a monastery and explains to him the developments of the coming centuries. However, the story never makes it clear whether these events are real or a dream.[10]:11–38 Another early work about time travel is The Forebears of Kalimeros: Alexander, son of Philip of Macedon by Alexander Veltman published in 1836.[11]

Charles Dickens’s A Christmas Carol (1843) has early depictions of time travel in both directions, as the protagonist, Ebenezer Scrooge, is transported to Christmases past and future. Other stories employ the same template, where a character naturally goes to sleep, and upon waking up finds themselves in a different time.[12] A clearer example of backward time travel is found in the popular 1861 book Paris avant les hommes (Paris before Men) by the French botanist and geologist Pierre Boitard, published posthumously. In this story, the protagonist is transported to the prehistoric past by the magic of a “lame demon” (a French pun on Boitard’s name), where he encounters a plesiosaur and an apelike ancestor and is able to interact with ancient creatures.[13] Edward Everett Hale’s “Hands Off” (1881) tells the story of an unnamed being, possibly the soul of a person who has recently died, who interferes with ancient Egyptian history by preventing Joseph’s enslavement. This may have been the first story to feature an alternate history created as a result of time travel.[14]:54

One of the first stories to feature time travel by means of a machine is “The Clock that Went Backward” by Edward Page Mitchell,[15] which appeared in the New York Sun in 1881. However, the mechanism borders on fantasy. An unusual clock, when wound, runs backwards and transports people nearby back in time. The author does not explain the origin or properties of the clock.[14]:55 Enrique Gaspar y Rimbau’s El Anacronópete (1887) may have been the first story to feature a vessel engineered to travel through time.[16][17] Andrew Sawyer has commented that the story “does seem to be the first literary description of a time machine noted so far”, adding that “Edward Page Mitchell’s story ‘The Clock That Went Backward’ (1881) is usually described as the first time-machine story, but I’m not sure that a clock quite counts.”[18] H. G. Wells’s The Time Machine (1895) popularized the concept of time travel by mechanical means.[19]

Some theories, most notably special and general relativity, suggest that suitable geometries of spacetime or specific types of motion in space might allow time travel into the past and future if these geometries or motions were possible.[20]:499 In technical papers, physicists discuss the possibility of closed timelike curves, which are world lines that form closed loops in spacetime, allowing objects to return to their own past. There are known to be solutions to the equations of general relativity that describe spacetimes which contain closed timelike curves, such as Gödel spacetime, but the physical plausibility of these solutions is uncertain.

Many in the scientific community believe that backward time travel is highly unlikely. Any theory that would allow time travel would introduce potential problems of causality.[21] The classic example of a problem involving causality is the “grandfather paradox”: what if one were to go back in time and kill one’s own grandfather before one’s father was conceived? Some physicists, such as Novikov and Deutsch, suggested that these sorts of temporal paradoxes can be avoided through the Novikov self-consistency principle or through a variation of the many-worlds interpretation with interacting worlds.[22]

Time travel to the past is theoretically possible in certain general relativity spacetime geometries that permit traveling faster than the speed of light, such as cosmic strings, traversable wormholes, and the Alcubierre drive.[23][24]:33–130 The theory of general relativity does suggest a scientific basis for the possibility of backward time travel in certain unusual scenarios, although arguments from semiclassical gravity suggest that when quantum effects are incorporated into general relativity, these loopholes may be closed.[25] These semiclassical arguments led Hawking to formulate the chronology protection conjecture, suggesting that the fundamental laws of nature prevent time travel,[26] but physicists cannot come to a definite judgment on the issue without a theory of quantum gravity to join quantum mechanics and general relativity into a completely unified theory.[27][28]:150

The theory of general relativity describes the universe under a system of field equations that determine the metric, or distance function, of spacetime. There exist exact solutions to these equations that include closed time-like curves, which are world lines that intersect themselves; some point in the causal future of the world line is also in its causal past, a situation which is akin to time travel. Such a solution was first proposed by Kurt Gödel, a solution known as the Gödel metric, but his (and others’) solution requires the universe to have physical characteristics that it does not appear to have,[20]:499 such as rotation and lack of Hubble expansion. Whether general relativity forbids closed time-like curves for all realistic conditions is still being researched.[29]

Wormholes are hypothetical warped regions of spacetime permitted by the Einstein field equations of general relativity.[30]:100 A proposed time-travel machine using a traversable wormhole would hypothetically work in the following way: One end of the wormhole is accelerated to some significant fraction of the speed of light, perhaps with some advanced propulsion system, and then brought back to the point of origin. Alternatively, one could take one entrance of the wormhole and move it to within the gravitational field of an object that has higher gravity than the other entrance, and then return it to a position near the other entrance. For both of these methods, time dilation causes the end of the wormhole that has been moved to have aged less, or become “younger”, than the stationary end as seen by an external observer; however, time connects differently through the wormhole than outside it, so that synchronized clocks at either end of the wormhole will always remain synchronized as seen by an observer passing through the wormhole, no matter how the two ends move around.[20]:502 This means that an observer entering the “younger” end would exit the “older” end at a time when it was the same age as the “younger” end, effectively going back in time as seen by an observer from the outside. One significant limitation of such a time machine is that it is only possible to go as far back in time as the initial creation of the machine;[20]:503 in essence, it is more of a path through time than it is a device that itself moves through time, and it would not allow the technology itself to be moved backward in time.
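
The clock offset accumulated by the moving wormhole mouth is just the ordinary special-relativistic time dilation of a round trip. A toy calculation (the 0.9c towing speed and the one-year trip duration are made-up illustrative parameters):

```python
import math

def mouth_offset_years(v_over_c: float, trip_years: float) -> float:
    """Difference in elapsed time between the stationary and the traveling
    wormhole mouth, as seen by an external observer, after a round trip
    lasting trip_years of external time at speed v_over_c."""
    gamma = 1.0 / math.sqrt(1.0 - v_over_c**2)
    # The traveling mouth experiences trip_years / gamma of proper time,
    # so the mouths end up out of step by the difference.
    return trip_years * (1.0 - 1.0 / gamma)

# Toy parameters: one mouth towed at 0.9c for 1 year of external time.
offset = mouth_offset_years(0.9, 1.0)
print(f"mouths now differ by {offset:.3f} years")
```

An observer stepping into the “younger” mouth would then emerge from the “older” mouth roughly 0.56 years in the past, but, as noted above, never earlier than the moment the offset began to accumulate.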

According to current theories on the nature of wormholes, construction of a traversable wormhole would require the existence of a substance with negative energy, often referred to as “exotic matter”. More technically, the wormhole spacetime requires a distribution of energy that violates various energy conditions, such as the null energy condition along with the weak, strong, and dominant energy conditions. However, it is known that quantum effects can lead to small measurable violations of the null energy condition,[30]:101 and many physicists believe that the required negative energy may actually be possible due to the Casimir effect in quantum physics.[31] Although early calculations suggested a very large amount of negative energy would be required, later calculations showed that the amount of negative energy can be made arbitrarily small.[32]

In 1993, Matt Visser argued that the two mouths of a wormhole with such an induced clock difference could not be brought together without inducing quantum field and gravitational effects that would either make the wormhole collapse or the two mouths repel each other.[33] Because of this, the two mouths could not be brought close enough for causality violation to take place. However, in a 1997 paper, Visser hypothesized that a complex “Roman ring” (named after Tom Roman) configuration of an N number of wormholes arranged in a symmetric polygon could still act as a time machine, although he concludes that this is more likely a flaw in classical quantum gravity theory rather than proof that causality violation is possible.[34]

Another approach involves a dense spinning cylinder usually referred to as a Tipler cylinder, a GR solution discovered by Willem Jacob van Stockum[35] in 1936 and Kornel Lanczos[36] in 1924, but not recognized as allowing closed timelike curves[37]:21 until an analysis by Frank Tipler[38] in 1974. If a cylinder is infinitely long and spins fast enough about its long axis, then a spaceship flying around the cylinder on a spiral path could travel back in time (or forward, depending on the direction of its spiral). However, the density and speed required are so great that ordinary matter is not strong enough to construct it. A similar device might be built from a cosmic string, but none are known to exist, and it does not seem to be possible to create a new cosmic string. Physicist Ronald Mallett is attempting to recreate the conditions of a rotating black hole with ring lasers, in order to bend spacetime and allow for time travel.[39]

A more fundamental objection to time travel schemes based on rotating cylinders or cosmic strings has been put forward by Stephen Hawking, who proved a theorem showing that according to general relativity it is impossible to build a time machine of a special type (a “time machine with the compactly generated Cauchy horizon”) in a region where the weak energy condition is satisfied, meaning that the region contains no matter with negative energy density (exotic matter). Solutions such as Tipler’s assume cylinders of infinite length, which are easier to analyze mathematically, and although Tipler suggested that a finite cylinder might produce closed timelike curves if the rotation rate were fast enough,[37]:169 he did not prove this. But Hawking points out that because of his theorem, “it can’t be done with positive energy density everywhere! I can prove that to build a finite time machine, you need negative energy.”[28]:96 This result comes from Hawking’s 1992 paper on the chronology protection conjecture, where he examines “the case that the causality violations appear in a finite region of spacetime without curvature singularities” and proves that “[t]here will be a Cauchy horizon that is compactly generated and that in general contains one or more closed null geodesics which will be incomplete. One can define geometrical quantities that measure the Lorentz boost and area increase on going round these closed null geodesics. If the causality violation developed from a noncompact initial surface, the averaged weak energy condition must be violated on the Cauchy horizon.”[26] This theorem does not rule out the possibility of time travel by means of time machines with the non-compactly generated Cauchy horizons (such as the Deutsch-Politzer time machine) or in regions which contain exotic matter, which would be used for traversable wormholes or the Alcubierre drive.

When a signal is sent from one location and received at another location, then as long as the signal is moving at the speed of light or slower, the mathematics of simultaneity in the theory of relativity show that all reference frames agree that the transmission-event happened before the reception-event. When the signal travels faster than light, it is received before it is sent, in all reference frames.[40] The signal could be said to have moved backward in time. This hypothetical scenario is sometimes referred to as a tachyonic antitelephone.[41]
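
The frame-dependence of a superluminal signal’s ordering can be checked with the Lorentz transformation: a signal emitted from the origin at speed u arrives at x = u·t, and in a frame moving at speed v its arrival time is t' = γ(t − vx/c²), which turns negative once u·v > c². A minimal sketch (the signal speed of 2c and observer speed of 0.6c are arbitrary illustrative values):

```python
import math

def arrival_time_moving_frame(u: float, v: float, t: float = 1.0) -> float:
    """Arrival time, in a frame moving at speed v, of a signal emitted at
    the origin at t = 0 and traveling at speed u. Units with c = 1."""
    x = u * t                            # arrival position in the rest frame
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    return gamma * (t - v * x)           # Lorentz-transformed arrival time

print(arrival_time_moving_frame(u=0.5, v=0.6))  # subluminal signal: arrival after emission
print(arrival_time_moving_frame(u=2.0, v=0.6))  # superluminal signal: arrival BEFORE emission
```

For u = 2c and v = 0.6c we have u·v = 1.2c², so the moving observer assigns the reception a negative time coordinate: the signal is received before it is sent in that frame.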

Quantum-mechanical phenomena such as quantum teleportation, the EPR paradox, or quantum entanglement might appear to create a mechanism that allows for faster-than-light (FTL) communication or time travel, and in fact some interpretations of quantum mechanics such as the Bohm interpretation presume that some information is being exchanged between particles instantaneously in order to maintain correlations between particles.[42] This effect was referred to as “spooky action at a distance” by Einstein.

Nevertheless, the fact that causality is preserved in quantum mechanics is a rigorous result in modern quantum field theories, and therefore modern theories do not allow for time travel or FTL communication. In any specific instance where FTL has been claimed, more detailed analysis has proven that to get a signal, some form of classical communication must also be used.[43] The no-communication theorem also gives a general proof that quantum entanglement cannot be used to transmit information faster than classical signals.

A variation of Everett’s many-worlds interpretation (MWI) of quantum mechanics provides a resolution to the grandfather paradox that involves the time traveler arriving in a different universe than the one they came from; it’s been argued that since the traveler arrives in a different universe’s history and not their own history, this is not “genuine” time travel.[44] The accepted many-worlds interpretation suggests that all possible quantum events can occur in mutually exclusive histories.[45] However, some variations allow different universes to interact. This concept is most often used in science-fiction, but some physicists such as David Deutsch have suggested that a time traveler should end up in a different history than the one he started from.[46][47] On the other hand, Stephen Hawking has argued that even if the MWI is correct, we should expect each time traveler to experience a single self-consistent history, so that time travelers remain within their own world rather than traveling to a different one.[48] The physicist Allen Everett argued that Deutsch’s approach “involves modifying fundamental principles of quantum mechanics; it certainly goes beyond simply adopting the MWI”. Everett also argues that even if Deutsch’s approach is correct, it would imply that any macroscopic object composed of multiple particles would be split apart when traveling back in time through a wormhole, with different particles emerging in different worlds.[22]

Daniel Greenberger and Karl Svozil proposed that quantum theory gives a model for time travel without paradoxes.[49][50] The quantum theory observation causes possible states to ‘collapse’ into one measured state; hence, the past observed from the present is deterministic (it has only one possible state), but the present observed from the past has many possible states until our actions cause it to collapse into one state. Our actions will then be seen to have been inevitable.

Certain experiments carried out give the impression of reversed causality, but fail to show it under closer examination.

The delayed choice quantum eraser experiment performed by Marlan Scully involves pairs of entangled photons that are divided into “signal photons” and “idler photons”, with the signal photons emerging from one of two locations and their position later measured as in the double-slit experiment. Depending on how the idler photon is measured, the experimenter can either learn which of the two locations the signal photon emerged from or “erase” that information. Even though the signal photons can be measured before the choice has been made about the idler photons, the choice seems to retroactively determine whether or not an interference pattern is observed when one correlates measurements of idler photons to the corresponding signal photons. However, since interference can only be observed after the idler photons are measured and they are correlated with the signal photons, there is no way for experimenters to tell what choice will be made in advance just by looking at the signal photons, only by gathering classical information from the entire system; thus causality is preserved.[51]

The experiment of Lijun Wang might also show causality violation since it made it possible to send packets of waves through a bulb of caesium gas in such a way that the packet appeared to exit the bulb 62 nanoseconds before its entry, but a wave packet is not a single well-defined object but rather a sum of multiple waves of different frequencies (see Fourier analysis), and the packet can appear to move faster than light or even backward in time even if none of the pure waves in the sum do so. This effect cannot be used to send any matter, energy, or information faster than light,[52] so this experiment is understood not to violate causality either.

The physicists Günter Nimtz and Alfons Stahlhofen, of the University of Koblenz, claim to have violated Einstein’s theory of relativity by transmitting photons faster than the speed of light. They say they have conducted an experiment in which microwave photons traveled “instantaneously” between a pair of prisms that had been moved up to 3 ft (0.91 m) apart, using a phenomenon known as quantum tunneling. Nimtz told New Scientist magazine: “For the time being, this is the only violation of special relativity that I know of.” However, other physicists say that this phenomenon does not allow information to be transmitted faster than light. Aephraim Steinberg, a quantum optics expert at the University of Toronto, Canada, uses the analogy of a train traveling from Chicago to New York, but dropping off train cars at each station along the way, so that the center of the train moves forward at each stop; in this way, the speed of the center of the train exceeds the speed of any of the individual cars.[53]

Shengwang Du claims in a peer-reviewed journal to have observed single photons’ precursors, saying that they travel no faster than c in a vacuum. His experiment involved slow light as well as passing light through a vacuum. He generated two single photons, passing one through rubidium atoms that had been cooled with a laser (thus slowing the light) and passing one through a vacuum. Both times, apparently, the precursors preceded the photons’ main bodies, and the precursor traveled at c in a vacuum. According to Du, this implies that there is no possibility of light traveling faster than c and, thus, no possibility of violating causality.[54]

The absence of time travelers from the future is a variation of the Fermi paradox, and like the absence of extraterrestrial visitors, the absence of time travelers does not prove time travel is physically impossible; it might be that time travel is physically possible but is never developed or is cautiously used. Carl Sagan once suggested the possibility that time travelers could be here but are disguising their existence or are not recognized as time travelers.[27] Some versions of general relativity suggest that time travel might only be possible in a region of spacetime that is warped a certain way, and hence time travelers would not be able to travel back to earlier regions in spacetime, before this region existed. Stephen Hawking stated that this would explain why the world has not already been overrun by “tourists from the future.”[48]

Several experiments have been carried out to try to entice future humans, who might invent time travel technology, to come back and demonstrate it to people of the present time. Events such as Perth’s Destination Day (2005) or MIT’s Time Traveler Convention heavily publicized permanent “advertisements” of a meeting time and place for future time travelers to meet. Back in 1982, a group in Baltimore, Maryland, identifying itself as the Krononauts, hosted an event of this type welcoming visitors from the future.[55][56] Such experiments could only have produced a positive result demonstrating the existence of time travel; they have failed so far, as no time travelers are known to have attended either event. Some versions of the many-worlds interpretation can be used to suggest that future humans have traveled back in time, but have traveled back to the meeting time and place in a parallel universe.[57]

There is a great deal of observable evidence for time dilation in special relativity[58] and gravitational time dilation in general relativity,[59][60][61] for example in the famous and easy-to-replicate observation of atmospheric muon decay.[62][63][64] The theory of relativity states that the speed of light is invariant for all observers in any frame of reference; that is, it is always the same. Time dilation is a direct consequence of the invariance of the speed of light.[64] Time dilation may be regarded in a limited sense as “time travel into the future”: a person may use time dilation so that a small amount of proper time passes for them, while a large amount of proper time passes elsewhere. This can be achieved by traveling at relativistic speeds or through the effects of gravity.[65]
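
The muon observation mentioned above makes a compact worked example: without time dilation essentially no atmospheric muons should survive the trip to the ground, while with it a sizeable fraction does. The altitude, speed, and lifetime below are typical textbook values, not the results of any specific experiment:

```python
import math

C = 2.998e8          # speed of light, m/s
TAU = 2.197e-6       # muon mean lifetime at rest, s
ALTITUDE = 15000.0   # typical production altitude, m
BETA = 0.995         # typical muon speed as a fraction of c

gamma = 1.0 / math.sqrt(1.0 - BETA**2)
flight_time = ALTITUDE / (BETA * C)      # travel time in the ground frame

# Exponential decay survival probability exp(-t / tau),
# with and without the dilated lifetime gamma * tau.
naive = math.exp(-flight_time / TAU)
relativistic = math.exp(-flight_time / (gamma * TAU))

print(f"gamma = {gamma:.1f}")
print(f"survival without dilation: {naive:.2e}")
print(f"survival with dilation:    {relativistic:.2%}")
```

The naive prediction is roughly one muon in ten billion reaching the ground, while the relativistic prediction is on the order of ten percent, which is what is observed.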

For two identical clocks moving relative to each other without accelerating, each clock measures the other to be ticking slower. This is possible due to the relativity of simultaneity. However, the symmetry is broken if one clock accelerates, allowing for less proper time to pass for one clock than the other. The twin paradox describes this: one twin remains on Earth, while the other undergoes acceleration to relativistic speed as they travel into space, turn around, and travel back to Earth; the traveling twin ages less than the twin who stayed on Earth, because of the time dilation experienced during their acceleration. General relativity treats the effects of acceleration and the effects of gravity as equivalent, and shows that time dilation also occurs in gravity wells, with a clock deeper in the well ticking more slowly; this effect is taken into account when calibrating the clocks on the satellites of the Global Positioning System, and it could lead to significant differences in rates of aging for observers at different distances from a large gravity well such as a black hole.[24]:33130

A time machine that utilizes this principle might be, for instance, a spherical shell with a diameter of 5 meters and the mass of Jupiter. A person at its center will travel forward in time at a rate four times that of distant observers. Squeezing the mass of a large planet into such a small structure is not expected to be within humanity’s technological capabilities in the near future.[24]:76–140 With current technologies, it is only possible to cause a human traveler to age less than companions on Earth by a very small fraction of a second, the current record being about 20 milliseconds for the cosmonaut Sergei Avdeyev.[66]
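The shell figure can be sanity-checked with the standard result that clocks inside a static spherical shell tick at the rate of clocks at its surface, dτ/dt = √(1 − 2GM/(Rc²)), relative to distant observers. The sketch below is my illustration (the source only quotes the final scale): it inverts that formula to find the shell size giving a factor-of-four slowdown for Jupiter’s mass, which comes out at a diameter of a few meters, the same order as the figure quoted above.

```python
import math

C = 299_792_458.0     # speed of light, m/s
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M_JUPITER = 1.898e27  # mass of Jupiter, kg

def interior_rate(mass: float, radius: float) -> float:
    """dtau/dt for a clock inside a static shell of the given mass and
    radius, relative to far-away observers (Schwarzschild exterior
    matched at the shell; the interior is flat, so every interior
    clock ticks at the shell-surface rate)."""
    return math.sqrt(1.0 - 2.0 * G * mass / (radius * C**2))

def radius_for_rate(mass: float, rate: float) -> float:
    """Invert interior_rate: the shell radius at which interior clocks
    tick at `rate` times the far-away rate."""
    return 2.0 * G * mass / (C**2 * (1.0 - rate**2))

R = radius_for_rate(M_JUPITER, 0.25)  # factor-of-four slowdown
print(f"required shell diameter: {2 * R:.1f} m")  # a few meters across
```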

Philosophers have discussed the nature of time since at least the time of ancient Greece; for example, Parmenides presented the view that time is an illusion. Centuries later, Newton supported the idea of absolute time, while his contemporary Leibniz maintained that time is only a relation between events and it cannot be expressed independently. The latter approach eventually gave rise to the spacetime of relativity.[67]

Many philosophers have argued that relativity implies eternalism, the idea that the past and future exist in a real sense, not only as changes that occurred or will occur to the present.[68] Philosopher of science Dean Rickles disagrees with some qualifications, but notes that “the consensus among philosophers seems to be that special and general relativity are incompatible with presentism.”[69] Some philosophers view time as a dimension equal to spatial dimensions, that future events are “already there” in the same sense different places exist, and that there is no objective flow of time; however, this view is disputed.[70]

Presentism is a school of philosophy that holds that the future and the past exist only as changes that occurred or will occur to the present, and they have no real existence of their own. In this view, time travel is impossible because there is no future or past to travel to.[68] Keller and Nelson have argued that even if past and future objects do not exist, there can still be definite truths about past and future events, and thus it is possible that a future truth about a time traveler deciding to travel back to the present date could explain the time traveler’s actual appearance in the present;[71] these views are contested by some authors.[72]

Presentism in classical spacetime deems that only the present exists; this is not reconcilable with special relativity, shown in the following example: Alice and Bob are simultaneous observers of event O. For Alice, some event E is simultaneous with O, but for Bob, event E is in the past or future. Therefore, Alice and Bob disagree about what exists in the present, which contradicts classical presentism. “Here-now presentism” attempts to reconcile this by only acknowledging the time and space of a single point; this is unsatisfactory because objects coming and going from the “here-now” alternate between real and unreal, in addition to the lack of a privileged “here-now” that would be the “real” present. “Relativized presentism” acknowledges that there are infinitely many frames of reference, each of which has a different set of simultaneous events, which makes it impossible to distinguish a single “real” present, and hence either all events in time are real, blurring the difference between presentism and eternalism, or each frame of reference exists in its own reality. Options for presentism in special relativity appear to be exhausted, but Gödel and others suspect presentism may be valid for some forms of general relativity.[73] Generally, the idea of absolute time and space is considered incompatible with general relativity; there is no universal truth about the absolute position of events which occur at different times, and thus no way to determine which point in space at one time is at the universal “same position” at another time,[74] and all coordinate systems are on equal footing as given by the principle of diffeomorphism invariance.[75]

A common objection to the idea of traveling back in time is put forth in the grandfather paradox or the argument of auto-infanticide.[76] If one were able to go back in time, inconsistencies and contradictions would ensue if the time traveler were to change anything; there is a contradiction if the past becomes different from the way it is.[77][78] The paradox is commonly described with a person who travels to the past and kills their own grandfather, preventing the existence of their father or mother and therefore their own existence.[27] Philosophers question whether these paradoxes make time travel impossible. Some philosophers answer the paradoxes by arguing that it might be the case that backward time travel could be possible but that it would be impossible to actually change the past in any way,[79] an idea similar to the proposed Novikov self-consistency principle in physics.

According to the philosophical theory of compossibility, what can happen, for example in the context of time travel, must be weighed against the context of everything relating to the situation. If the past is a certain way, it’s not possible for it to be any other way. What can happen when a time traveler visits the past is limited to what did happen, in order to prevent logical contradictions.[80]

The Novikov self-consistency principle, named after Igor Dmitrievich Novikov, states that any actions taken by a time traveler or by an object that travels back in time were part of history all along, and therefore it is impossible for the time traveler to “change” history in any way. The time traveler’s actions may be the cause of events in their own past though, which leads to the potential for circular causation, sometimes called a predestination paradox,[81] ontological paradox,[82] or bootstrap paradox.[82][83] The term bootstrap paradox was popularized by Robert A. Heinlein’s story “By His Bootstraps”.[84] The Novikov self-consistency principle proposes that the local laws of physics in a region of spacetime containing time travelers cannot be any different from the local laws of physics in any other region of spacetime.[85]

The philosopher Kelley L. Ross argues in “Time Travel Paradoxes”[86] that in a scenario involving a physical object whose world-line or history forms a closed loop in time there can be a violation of the second law of thermodynamics. Ross uses “Somewhere in Time” as an example of such an ontological paradox, where a watch is given to a person, and 60 years later the same watch is brought back in time and given to the same character. Ross states that the entropy of the watch will increase, and the watch carried back in time will be more worn with each repetition of its history. However, the second law of thermodynamics is understood by modern physicists to be a statistical law, so decreasing or non-increasing entropy is not impossible, just improbable. Additionally, entropy statistically increases in isolated systems, so a non-isolated system, such as an object that interacts with the outside world, can become less worn and decrease in entropy, and it is possible for an object whose world-line forms a closed loop to be always in the same condition at the same point of its history.[24]:23

Time travel themes in science fiction and the media can generally be grouped into three categories: immutable timeline; mutable timeline; and alternate histories, as in the interacting-many-worlds interpretation.[87][88][89] Frequently in fiction, timeline is used to refer to all physical events in history, so that in time travel stories where events can be changed, the time traveler is described as creating a new or altered timeline.[90] This usage is distinct from the use of the term timeline to refer to a type of chart that illustrates a particular series of events, and the concept is also distinct from a world line, a term from Einstein’s theory of relativity which refers to the entire history of a single object.

Time travel – Wikipedia

Time travel is the concept of movement between certain points in time, analogous to movement between different points in space by an object or a person, typically using a hypothetical device known as a time machine, in the form of a vehicle or of a portal connecting distant points in spacetime, either to an earlier time or to a later time, without the need for the time-traveling body to experience the intervening period in the usual sense. Time travel is a widely recognized concept in philosophy and fiction. It was popularized by H. G. Wells’ 1895 novel The Time Machine, which moved the concept of time travel into the public imagination. However, it is uncertain if time travel to the past is physically possible. Forward time travel, outside the usual sense of the perception of time, is possible according to special relativity and general relativity, although making one body advance or delay more than a few milliseconds compared to another body is not feasible with current technology.[1] As for backwards time travel, it is possible to find solutions in general relativity that allow for it, but the solutions require conditions that may not be physically possible. Traveling to an arbitrary point in spacetime has very limited support in theoretical physics, and is usually connected only with quantum mechanics or wormholes, also known as Einstein-Rosen bridges.

Some ancient myths depict a character skipping forward in time. In Hindu mythology, the Mahabharata mentions the story of King Raivata Kakudmi, who travels to heaven to meet the creator Brahma and is surprised to learn when he returns to Earth that many ages have passed.[2] The Buddhist Pāli Canon mentions the relativity of time. The Payasi Sutta tells of one of the Buddha’s chief disciples, Kumara Kassapa, who explains to the skeptic Payasi that time in the Heavens passes differently than on Earth.[3] The Japanese tale of “Urashima Tarō”,[4] first described in the Nihongi (720), tells of a young fisherman named Urashima Tarō who visits an undersea palace. After three days, he returns home to his village and finds himself 300 years in the future, where he has been forgotten, his house is in ruins, and his family has died.[5] In Jewish tradition, the 1st-century BC scholar Honi ha-M’agel is said to have fallen asleep and slept for seventy years. Upon waking, he returned home but found none of the people he knew, and no one believed he was who he claimed to be.[6]

Early science fiction stories feature characters who sleep for years and awaken in a changed society, or are transported to the past through supernatural means. Among them are L’An 2440, rêve s’il en fût jamais (1770) by Louis-Sébastien Mercier, Rip Van Winkle (1819) by Washington Irving, Looking Backward (1888) by Edward Bellamy, and When the Sleeper Wakes (1899) by H. G. Wells. Prolonged sleep, like the more familiar time machine, is used as a means of time travel in these stories.[7]

The earliest work about backwards time travel is uncertain. Samuel Madden’s Memoirs of the Twentieth Century (1733) is a series of letters from British ambassadors in 1997 and 1998 to diplomats in the past, conveying the political and religious conditions of the future.[8]:95–96 Because the narrator receives these letters from his guardian angel, Paul Alkon suggests in his book Origins of Futuristic Fiction that “the first time-traveler in English literature is a guardian angel.”[8]:85 Madden does not explain how the angel obtains these documents, but Alkon asserts that Madden “deserves recognition as the first to toy with the rich idea of time-travel in the form of an artifact sent backward from the future to be discovered in the present.”[8]:95–96 In the science fiction anthology Far Boundaries (1951), editor August Derleth claims that an early short story about time travel is “Missing One’s Coach: An Anachronism”, written for the Dublin Literary Magazine[9] by an anonymous author in 1838.[10]:3 While the narrator waits under a tree for a coach to take him out of Newcastle, he is transported back in time over a thousand years. He encounters the Venerable Bede in a monastery and explains to him the developments of the coming centuries. However, the story never makes it clear whether these events are real or a dream.[10]:11–38 Another early work about time travel is The Forebears of Kalimeros: Alexander, son of Philip of Macedon by Alexander Veltman, published in 1836.[11]

Charles Dickens’s A Christmas Carol (1843) has early depictions of time travel in both directions, as the protagonist, Ebenezer Scrooge, is transported to Christmases past and future. Other stories employ the same template, where a character naturally goes to sleep and, upon waking up, finds themselves in a different time.[12] A clearer example of backward time travel is found in the popular 1861 book Paris avant les hommes (Paris before Men) by the French botanist and geologist Pierre Boitard, published posthumously. In this story, the protagonist is transported to the prehistoric past by the magic of a “lame demon” (a French pun on Boitard’s name), where he encounters a Plesiosaur and an apelike ancestor and is able to interact with ancient creatures.[13] Edward Everett Hale’s “Hands Off” (1881) tells the story of an unnamed being, possibly the soul of a person who has recently died, who interferes with ancient Egyptian history by preventing Joseph’s enslavement. This may have been the first story to feature an alternate history created as a result of time travel.[14]:54

One of the first stories to feature time travel by means of a machine is “The Clock that Went Backward” by Edward Page Mitchell,[15] which appeared in the New York Sun in 1881. However, the mechanism borders on fantasy: an unusual clock, when wound, runs backwards and transports people nearby back in time, and the author does not explain the origin or properties of the clock.[14]:55 Enrique Gaspar y Rimbau’s El Anacronópete (1887) may have been the first story to feature a vessel engineered to travel through time.[16][17] Andrew Sawyer has commented that the story “does seem to be the first literary description of a time machine noted so far”, adding that “Edward Page Mitchell’s story ‘The Clock That Went Backward’ (1881) is usually described as the first time-machine story, but I’m not sure that a clock quite counts.”[18] H. G. Wells’s The Time Machine (1895) popularized the concept of time travel by mechanical means.[19]

Some theories, most notably special and general relativity, suggest that suitable geometries of spacetime or specific types of motion in space might allow time travel into the past and future if these geometries or motions were possible.[20]:499 In technical papers, physicists discuss the possibility of closed timelike curves, which are world lines that form closed loops in spacetime, allowing objects to return to their own past. There are known to be solutions to the equations of general relativity that describe spacetimes which contain closed timelike curves, such as Gödel spacetime, but the physical plausibility of these solutions is uncertain.

Many in the scientific community believe that backward time travel is highly unlikely. Any theory that would allow time travel would introduce potential problems of causality.[21] The classic example of a problem involving causality is the “grandfather paradox”: what if one were to go back in time and kill one’s own grandfather before one’s father was conceived? Some physicists, such as Novikov and Deutsch, suggested that these sorts of temporal paradoxes can be avoided through the Novikov self-consistency principle or through a variation of the many-worlds interpretation with interacting worlds.[22]

Time travel to the past is theoretically possible in certain general relativity spacetime geometries that permit traveling faster than the speed of light, such as cosmic strings, traversable wormholes, and the Alcubierre drive.[23][24]:33–130 The theory of general relativity does suggest a scientific basis for the possibility of backward time travel in certain unusual scenarios, although arguments from semiclassical gravity suggest that when quantum effects are incorporated into general relativity, these loopholes may be closed.[25] These semiclassical arguments led Hawking to formulate the chronology protection conjecture, suggesting that the fundamental laws of nature prevent time travel,[26] but physicists cannot come to a definite judgment on the issue without a theory of quantum gravity to join quantum mechanics and general relativity into a completely unified theory.[27][28]:150

The theory of general relativity describes the universe under a system of field equations that determine the metric, or distance function, of spacetime. There exist exact solutions to these equations that include closed time-like curves, which are world lines that intersect themselves; some point in the causal future of the world line is also in its causal past, a situation which is akin to time travel. Such a solution was first proposed by Kurt Gödel, a solution known as the Gödel metric, but his (and others’) solution requires the universe to have physical characteristics that it does not appear to have,[20]:499 such as rotation and lack of Hubble expansion. Whether general relativity forbids closed time-like curves for all realistic conditions is still being researched.[29]

Wormholes are hypothetical warped regions of spacetime permitted by the Einstein field equations of general relativity.[30]:100 A proposed time-travel machine using a traversable wormhole would hypothetically work in the following way: One end of the wormhole is accelerated to some significant fraction of the speed of light, perhaps with some advanced propulsion system, and then brought back to the point of origin. Alternatively, another way is to take one entrance of the wormhole and move it to within the gravitational field of an object that has higher gravity than the other entrance, and then return it to a position near the other entrance. For both of these methods, time dilation causes the end of the wormhole that has been moved to have aged less, or become “younger”, than the stationary end as seen by an external observer; however, time connects differently through the wormhole than outside it, so that synchronized clocks at either end of the wormhole will always remain synchronized as seen by an observer passing through the wormhole, no matter how the two ends move around.[20]:502 This means that an observer entering the “younger” end would exit the “older” end at a time when it was the same age as the “younger” end, effectively going back in time as seen by an observer from the outside. One significant limitation of such a time machine is that it is only possible to go as far back in time as the initial creation of the machine;[20]:503 in essence, it is more of a path through time than it is a device that itself moves through time, and it would not allow the technology itself to be moved backward in time.
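The offset between the two mouths is just the twin-paradox arithmetic applied to the towed mouth. A toy calculation (the 0.866c towing speed and 20-year round trip are illustrative assumptions, not from the source):

```python
import math

def proper_time(coordinate_time: float, beta: float) -> float:
    """Proper time elapsed on a clock moving at speed beta (in units
    of c) while `coordinate_time` passes for a stationary observer."""
    return coordinate_time * math.sqrt(1.0 - beta**2)

# One mouth is towed out and back at 0.866c for 20 years of external
# time; the stationary mouth ages the full 20 years.
trip_years = 20.0
moved_mouth_age = proper_time(trip_years, 0.866)  # roughly 10 years
fixed_mouth_age = trip_years

# Clocks at the two ends stay synchronized as seen through the throat,
# so the external offset sets how far back the device could reach:
offset = fixed_mouth_age - moved_mouth_age
print(f"offset between mouths: {offset:.1f} years")
```

Whatever the trip details, the reachable span is capped by this accumulated offset, which is why such a machine can never reach times before its own construction.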

According to current theories on the nature of wormholes, construction of a traversable wormhole would require the existence of a substance with negative energy, often referred to as “exotic matter”. More technically, the wormhole spacetime requires a distribution of energy that violates various energy conditions, such as the null energy condition along with the weak, strong, and dominant energy conditions. However, it is known that quantum effects can lead to small measurable violations of the null energy condition,[30]:101 and many physicists believe that the required negative energy may actually be possible due to the Casimir effect in quantum physics.[31] Although early calculations suggested a very large amount of negative energy would be required, later calculations showed that the amount of negative energy can be made arbitrarily small.[32]

In 1993, Matt Visser argued that the two mouths of a wormhole with such an induced clock difference could not be brought together without inducing quantum field and gravitational effects that would either make the wormhole collapse or the two mouths repel each other.[33] Because of this, the two mouths could not be brought close enough for causality violation to take place. However, in a 1997 paper, Visser hypothesized that a complex “Roman ring” (named after Tom Roman) configuration of an N number of wormholes arranged in a symmetric polygon could still act as a time machine, although he concludes that this is more likely a flaw in classical quantum gravity theory rather than proof that causality violation is possible.[34]

Another approach involves a dense spinning cylinder, usually referred to as a Tipler cylinder, a GR solution discovered by Willem Jacob van Stockum[35] in 1936 and Kornel Lanczos[36] in 1924, but not recognized as allowing closed timelike curves[37]:21 until an analysis by Frank Tipler[38] in 1974. If a cylinder is infinitely long and spins fast enough about its long axis, then a spaceship flying around the cylinder on a spiral path could travel back in time (or forward, depending on the direction of its spiral). However, the density and speed required are so great that ordinary matter is not strong enough to construct it. A similar device might be built from a cosmic string, but none are known to exist, and it does not seem to be possible to create a new cosmic string. Physicist Ronald Mallett is attempting to recreate the conditions of a rotating black hole with ring lasers, in order to bend spacetime and allow for time travel.[39]

A more fundamental objection to time travel schemes based on rotating cylinders or cosmic strings has been put forward by Stephen Hawking, who proved a theorem showing that according to general relativity it is impossible to build a time machine of a special type (a “time machine with the compactly generated Cauchy horizon”) in a region where the weak energy condition is satisfied, meaning that the region contains no matter with negative energy density (exotic matter). Solutions such as Tipler’s assume cylinders of infinite length, which are easier to analyze mathematically, and although Tipler suggested that a finite cylinder might produce closed timelike curves if the rotation rate were fast enough,[37]:169 he did not prove this. But Hawking points out that because of his theorem, “it can’t be done with positive energy density everywhere! I can prove that to build a finite time machine, you need negative energy.”[28]:96 This result comes from Hawking’s 1992 paper on the chronology protection conjecture, where he examines “the case that the causality violations appear in a finite region of spacetime without curvature singularities” and proves that “[t]here will be a Cauchy horizon that is compactly generated and that in general contains one or more closed null geodesics which will be incomplete. One can define geometrical quantities that measure the Lorentz boost and area increase on going round these closed null geodesics. If the causality violation developed from a noncompact initial surface, the averaged weak energy condition must be violated on the Cauchy horizon.”[26] This theorem does not rule out the possibility of time travel by means of time machines with the non-compactly generated Cauchy horizons (such as the Deutsch-Politzer time machine) or in regions which contain exotic matter, which would be used for traversable wormholes or the Alcubierre drive.

When a signal is sent from one location and received at another location, then as long as the signal is moving at the speed of light or slower, the mathematics of simultaneity in the theory of relativity show that all reference frames agree that the transmission-event happened before the reception-event. When the signal travels faster than light, it is received before it is sent, in all reference frames.[40] The signal could be said to have moved backward in time. This hypothetical scenario is sometimes referred to as a tachyonic antitelephone.[41]
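The frame dependence can be verified with the Lorentz transformation of a time interval, Δt′ = γ(Δt − vΔx/c²): for a signal covering Δx = uΔt, the sign of Δt′ flips exactly when uv > c². A small sketch in units with c = 1 (the particular speeds are illustrative):

```python
import math

def dt_prime(dt: float, dx: float, v: float) -> float:
    """Time separation of two events in a frame moving at speed v
    (units with c = 1): the Lorentz transformation of the interval."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (dt - v * dx)

# A signal at exactly light speed (dx = dt): emission still precedes
# reception in every subluminal frame.
assert dt_prime(dt=1.0, dx=1.0, v=0.9) > 0.0

# A superluminal signal at u = 2c: for an observer with u*v > c^2
# (here v = 0.9, so u*v = 1.8 > 1), the event order reverses.
print(dt_prime(dt=1.0, dx=2.0, v=0.9))  # negative: received before sent
```

For the same u = 2c signal viewed from a slower frame with uv < c² (say v = 0.4), the order is preserved; only sufficiently fast observers see the reversal.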

Quantum-mechanical phenomena such as quantum teleportation, the EPR paradox, or quantum entanglement might appear to create a mechanism that allows for faster-than-light (FTL) communication or time travel, and in fact some interpretations of quantum mechanics such as the Bohm interpretation presume that some information is being exchanged between particles instantaneously in order to maintain correlations between particles.[42] This effect was referred to as “spooky action at a distance” by Einstein.

Nevertheless, the fact that causality is preserved in quantum mechanics is a rigorous result in modern quantum field theories, and therefore modern theories do not allow for time travel or FTL communication. In any specific instance where FTL has been claimed, more detailed analysis has proven that to get a signal, some form of classical communication must also be used.[43] The no-communication theorem also gives a general proof that quantum entanglement cannot be used to transmit information faster than classical signals.

A variation of Everett’s many-worlds interpretation (MWI) of quantum mechanics provides a resolution to the grandfather paradox that involves the time traveler arriving in a different universe than the one they came from; it’s been argued that since the traveler arrives in a different universe’s history and not their own history, this is not “genuine” time travel.[44] The accepted many-worlds interpretation suggests that all possible quantum events can occur in mutually exclusive histories.[45] However, some variations allow different universes to interact. This concept is most often used in science fiction, but some physicists such as David Deutsch have suggested that a time traveler should end up in a different history than the one he started from.[46][47] On the other hand, Stephen Hawking has argued that even if the MWI is correct, we should expect each time traveler to experience a single self-consistent history, so that time travelers remain within their own world rather than traveling to a different one.[48] The physicist Allen Everett argued that Deutsch’s approach “involves modifying fundamental principles of quantum mechanics; it certainly goes beyond simply adopting the MWI”. Everett also argues that even if Deutsch’s approach is correct, it would imply that any macroscopic object composed of multiple particles would be split apart when traveling back in time through a wormhole, with different particles emerging in different worlds.[22]

Daniel Greenberger and Karl Svozil proposed that quantum theory gives a model for time travel without paradoxes.[49][50] In quantum theory, observation causes possible states to ‘collapse’ into one measured state; hence, the past observed from the present is deterministic (it has only one possible state), but the present observed from the past has many possible states until our actions cause it to collapse into one state. Our actions will then be seen to have been inevitable.

Certain experiments carried out give the impression of reversed causality, but fail to show it under closer examination.

The delayed choice quantum eraser experiment performed by Marlan Scully involves pairs of entangled photons that are divided into “signal photons” and “idler photons”, with the signal photons emerging from one of two locations and their position later measured as in the double-slit experiment. Depending on how the idler photon is measured, the experimenter can either learn which of the two locations the signal photon emerged from or “erase” that information. Even though the signal photons can be measured before the choice has been made about the idler photons, the choice seems to retroactively determine whether or not an interference pattern is observed when one correlates measurements of idler photons to the corresponding signal photons. However, since interference can only be observed after the idler photons are measured and they are correlated with the signal photons, there is no way for experimenters to tell what choice will be made in advance just by looking at the signal photons, only by gathering classical information from the entire system; thus causality is preserved.[51]

The experiment of Lijun Wang might also show causality violation, since it made it possible to send wave packets through a bulb of caesium gas in such a way that the packet appeared to exit the bulb 62 nanoseconds before its entry. But a wave packet is not a single well-defined object, rather a sum of multiple waves of different frequencies (see Fourier analysis), and the packet can appear to move faster than light, or even backward in time, even if none of the pure waves in the sum do so. This effect cannot be used to send any matter, energy, or information faster than light,[52] so this experiment is understood not to violate causality either.
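The point about wave packets can be illustrated with just two superposed waves: the envelope (beat) moves at the group velocity Δω/Δk, which can exceed c even when each component’s phase velocity ω/k stays below c. A toy calculation in units with c = 1 (the particular frequencies are made up for illustration):

```python
# Two plane waves cos(k*x - w*t) in units where c = 1.
k1, w1 = 1.00, 0.90    # phase velocity w1/k1 = 0.90   (below c)
k2, w2 = 1.10, 1.05    # phase velocity w2/k2 ~ 0.95   (below c)

phase_v1 = w1 / k1
phase_v2 = w2 / k2
group_v = (w2 - w1) / (k2 - k1)  # speed of the beat envelope

print(phase_v1, phase_v2)  # both < 1: no component outruns light
print(group_v)             # 1.5 > 1: but the envelope peak does
```

No individual crest moves faster than light; only the interference pattern does, which is why no matter, energy, or information is transmitted superluminally.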

The physicists Günter Nimtz and Alfons Stahlhofen, of the University of Koblenz, claim to have violated Einstein’s theory of relativity by transmitting photons faster than the speed of light. They say they have conducted an experiment in which microwave photons traveled “instantaneously” between a pair of prisms that had been moved up to 3 ft (0.91 m) apart, using a phenomenon known as quantum tunneling. Nimtz told New Scientist magazine: “For the time being, this is the only violation of special relativity that I know of.” However, other physicists say that this phenomenon does not allow information to be transmitted faster than light. Aephraim Steinberg, a quantum optics expert at the University of Toronto, Canada, uses the analogy of a train traveling from Chicago to New York, but dropping off train cars at each station along the way, so that the center of the train moves forward at each stop; in this way, the speed of the center of the train exceeds the speed of any of the individual cars.[53]

Shengwang Du claims in a peer-reviewed journal to have observed single photons’ precursors, saying that they travel no faster than c in a vacuum. His experiment involved slow light as well as passing light through a vacuum. He generated two single photons, passing one through rubidium atoms that had been cooled with a laser (thus slowing the light) and passing one through a vacuum. Both times, apparently, the precursors preceded the photons’ main bodies, and the precursor traveled at c in a vacuum. According to Du, this implies that there is no possibility of light traveling faster than c and, thus, no possibility of violating causality.[54]

The absence of time travelers from the future is a variation of the Fermi paradox, and like the absence of extraterrestrial visitors, the absence of time travelers does not prove time travel is physically impossible; it might be that time travel is physically possible but is never developed or is cautiously used. Carl Sagan once suggested the possibility that time travelers could be here but are disguising their existence or are not recognized as time travelers.[27] Some versions of general relativity suggest that time travel might only be possible in a region of spacetime that is warped a certain way, and hence time travelers would not be able to travel back to earlier regions in spacetime, before this region existed. Stephen Hawking stated that this would explain why the world has not already been overrun by “tourists from the future.”[48]

Several experiments have been carried out to try to entice future humans, who might invent time travel technology, to come back and demonstrate it to people of the present time. Events such as Perth’s Destination Day (2005) or MIT’s Time Traveler Convention heavily publicized permanent “advertisements” of a meeting time and place for future time travelers to meet. Back in 1982, a group in Baltimore, Maryland, identifying itself as the Krononauts, hosted an event of this type welcoming visitors from the future.[55][56] These experiments only stood the possibility of generating a positive result demonstrating the existence of time travel, but have failed so farno time travelers are known to have attended either event. Some versions of the many-worlds interpretation can be used to suggest that future humans have traveled back in time, but have traveled back to the meeting time and place in a parallel universe.[57]

There is a great deal of observable evidence for time dilation in special relativity[58] and gravitational time dilation in general relativity,[59][60][61] for example in the famous and easy-to-replicate observation of atmospheric muon decay.[62][63][64] The theory of relativity states that the speed of light is invariant for all observers in any frame of reference; that is, it is always the same. Time dilation is a direct consequence of the invariance of the speed of light.[64] Time dilation may be regarded in a limited sense as “time travel into the future”: a person may use time dilation so that a small amount of proper time passes for them, while a large amount of proper time passes elsewhere. This can be achieved by traveling at relativistic speeds or through the effects of gravity.[65]

For two identical clocks moving relative to each other without accelerating, each clock measures the other to be ticking slower. This is possible due to the relativity of simultaneity. However, the symmetry is broken if one clock accelerates, allowing for less proper time to pass for one clock than the other. The twin paradox describes this: one twin remains on Earth, while the other undergoes acceleration to relativistic speed as they travel into space, turn around, and travel back to Earth; the traveling twin ages less than the twin who stayed on Earth, because of the time dilation experienced during their acceleration. General relativity treats the effects of acceleration and the effects of gravity as equivalent, and shows that time dilation also occurs in gravity wells, with a clock deeper in the well ticking more slowly; this effect is taken into account when calibrating the clocks on the satellites of the Global Positioning System, and it could lead to significant differences in rates of aging for observers at different distances from a large gravity well such as a black hole.[24]:33–130
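The size of these two competing effects can be estimated from first principles. The sketch below is a rough check, not taken from the source; the GPS orbital radius, Earth's gravitational parameter, and the other constants are standard textbook values:

```python
# Back-of-the-envelope estimate of the two relativistic clock effects for a
# GPS satellite. All constants are standard assumed values, not from the text.
GM = 3.986e14        # m^3/s^2, Earth's gravitational parameter
c = 2.998e8          # m/s, speed of light
R_earth = 6.371e6    # m, Earth's mean radius
r_gps = 2.6571e7     # m, GPS orbital radius (semi-major axis)
v_gps = (GM / r_gps) ** 0.5   # circular orbital speed, ~3.87 km/s

day = 86400.0  # seconds per day

# Special-relativistic effect: the orbiting clock runs slow by ~v^2/(2c^2)
sr_us_per_day = (v_gps**2 / (2 * c**2)) * day * 1e6            # microseconds lost
# Gravitational effect: the clock higher in the well runs fast
gr_us_per_day = GM * (1/R_earth - 1/r_gps) / c**2 * day * 1e6  # microseconds gained

net_us_per_day = gr_us_per_day - sr_us_per_day
print(f"velocity effect: -{sr_us_per_day:.1f} us/day")
print(f"gravity effect:  +{gr_us_per_day:.1f} us/day")
print(f"net drift:       +{net_us_per_day:.1f} us/day")
```

The net drift works out to roughly +38 microseconds per day, which is why GPS satellite clocks are deliberately offset before launch.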

A time machine that utilizes this principle might be, for instance, a spherical shell with a diameter of 5 meters and the mass of Jupiter. A person at its center will travel forward in time at a rate four times that of distant observers. Squeezing the mass of a large planet into such a small structure is not expected to be within humanity’s technological capabilities in the near future.[24]:76–140 With current technologies, it is only possible to cause a human traveler to age less than companions on Earth by a very small fraction of a second, the current record being about 20 milliseconds for the cosmonaut Sergei Avdeyev.[66]
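Avdeyev's record can be roughly reproduced from special relativity alone. The sketch below assumes a typical low-Earth-orbit speed of about 7.7 km/s and roughly 747 cumulative days in orbit (both assumed round figures, not from the source), and ignores the smaller, partially offsetting gravitational term:

```python
# Rough special-relativity estimate of how far a long-duration cosmonaut
# "travels into the future" relative to people on Earth.
c = 2.998e8          # m/s, speed of light
v = 7.7e3            # m/s, typical low-Earth-orbit speed (assumed)
days_in_orbit = 747  # approximate cumulative time Avdeyev spent on Mir

seconds = days_in_orbit * 86400
# Fractional slowdown of a moving clock, to first order: v^2 / (2 c^2)
delta = v**2 / (2 * c**2) * seconds
print(f"time shift: {delta*1000:.0f} ms")   # ~21 ms, close to the quoted record
```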

Philosophers have discussed the nature of time since at least the time of ancient Greece; for example, Parmenides presented the view that time is an illusion. Centuries later, Newton supported the idea of absolute time, while his contemporary Leibniz maintained that time is only a relation between events and it cannot be expressed independently. The latter approach eventually gave rise to the spacetime of relativity.[67]

Many philosophers have argued that relativity implies eternalism, the idea that the past and future exist in a real sense, not only as changes that occurred or will occur to the present.[68] Philosopher of science Dean Rickles disagrees with some qualifications, but notes that “the consensus among philosophers seems to be that special and general relativity are incompatible with presentism.”[69] Some philosophers view time as a dimension equal to spatial dimensions, that future events are “already there” in the same sense different places exist, and that there is no objective flow of time; however, this view is disputed.[70]

Presentism is a school of philosophy that holds that the future and the past exist only as changes that occurred or will occur to the present, and they have no real existence of their own. In this view, time travel is impossible because there is no future or past to travel to.[68] Keller and Nelson have argued that even if past and future objects do not exist, there can still be definite truths about past and future events, and thus it is possible that a future truth about a time traveler deciding to travel back to the present date could explain the time traveler’s actual appearance in the present;[71] these views are contested by some authors.[72]

Presentism in classical spacetime deems that only the present exists; this is not reconcilable with special relativity, shown in the following example: Alice and Bob are simultaneous observers of event O. For Alice, some event E is simultaneous with O, but for Bob, event E is in the past or future. Therefore, Alice and Bob disagree about what exists in the present, which contradicts classical presentism. “Here-now presentism” attempts to reconcile this by only acknowledging the time and space of a single point; this is unsatisfactory because objects coming and going from the “here-now” alternate between real and unreal, in addition to the lack of a privileged “here-now” that would be the “real” present. “Relativized presentism” acknowledges that there are infinite frames of reference, each of them has a different set of simultaneous events, which makes it impossible to distinguish a single “real” present, and hence either all events in time are real, blurring the difference between presentism and eternalism, or each frame of reference exists in its own reality. Options for presentism in special relativity appear to be exhausted, but Gödel and others suspect presentism may be valid for some forms of general relativity.[73] Generally, the idea of absolute time and space is considered incompatible with general relativity; there is no universal truth about the absolute position of events which occur at different times, and thus no way to determine which point in space at one time is at the universal “same position” at another time,[74] and all coordinate systems are on equal footing as given by the principle of diffeomorphism invariance.[75]

A common objection to the idea of traveling back in time is put forth in the grandfather paradox or the argument of auto-infanticide.[76] If one were able to go back in time, inconsistencies and contradictions would ensue if the time traveler were to change anything; there is a contradiction if the past becomes different from the way it is.[77][78] The paradox is commonly described with a person who travels to the past and kills their own grandfather, prevents the existence of their father or mother, and therefore their own existence.[27] Philosophers question whether these paradoxes make time travel impossible. Some philosophers answer the paradoxes by arguing that it might be the case that backward time travel could be possible but that it would be impossible to actually change the past in any way,[79] an idea similar to the proposed Novikov self-consistency principle in physics.

According to the philosophical theory of compossibility, what can happen, for example in the context of time travel, must be weighed against the context of everything relating to the situation. If the past is a certain way, it’s not possible for it to be any other way. What can happen when a time traveler visits the past is limited to what did happen, in order to prevent logical contradictions.[80]

The Novikov self-consistency principle, named after Igor Dmitrievich Novikov, states that any actions taken by a time traveler or by an object that travels back in time were part of history all along, and therefore it is impossible for the time traveler to “change” history in any way. The time traveler’s actions may be the cause of events in their own past though, which leads to the potential for circular causation, sometimes called a predestination paradox,[81] ontological paradox,[82] or bootstrap paradox.[82][83] The term bootstrap paradox was popularized by Robert A. Heinlein’s story “By His Bootstraps”.[84] The Novikov self-consistency principle proposes that the local laws of physics in a region of spacetime containing time travelers cannot be any different from the local laws of physics in any other region of spacetime.[85]

The philosopher Kelley L. Ross argues in “Time Travel Paradoxes”[86] that in a scenario involving a physical object whose world-line or history forms a closed loop in time there can be a violation of the second law of thermodynamics. Ross uses “Somewhere in Time” as an example of such an ontological paradox, where a watch is given to a person, and 60 years later the same watch is brought back in time and given to the same character. Ross states that entropy of the watch will increase, and the watch carried back in time will be more worn with each repetition of its history. The second law of thermodynamics is understood by modern physicists to be a statistical law, so decreasing entropy or non-increasing entropy are not impossible, just improbable. Additionally, entropy statistically increases in systems which are isolated, so non-isolated systems, such as an object, that interact with the outside world, can become less worn and decrease in entropy, and it’s possible for an object whose world-line forms a closed loop to be always in the same condition in the same point of its history.[24]:23

Time travel themes in science fiction and the media can generally be grouped into three categories: immutable timeline; mutable timeline; and alternate histories, as in the interacting-many-worlds interpretation.[87][88][89] Frequently in fiction, timeline is used to refer to all physical events in history, so that in time travel stories where events can be changed, the time traveler is described as creating a new or altered timeline.[90] This usage is distinct from the use of the term timeline to refer to a type of chart that illustrates a particular series of events, and the concept is also distinct from a world line, a term from Einstein’s theory of relativity which refers to the entire history of a single object.


Time travel – Wikipedia

How Long Would It Take To Travel To The Nearest Star …

We’ve all asked this question at some point in our lives: how long would it take to travel to the stars? Could it be within a person’s own lifetime, and could this kind of travel become the norm someday? There are many possible answers to this question, some very simple, others in the realms of science fiction. But coming up with a comprehensive answer means taking a lot of things into consideration.

Unfortunately, any realistic assessment is likely to produce answers that would totally discourage futurists and enthusiasts of interstellar travel. Like it or not, space is very large, and our technology is still very limited. But should we ever contemplate leaving the nest, we will have a range of options for getting to the nearest star systems in our galaxy.

The nearest star to Earth is our Sun, which is a fairly average star in the Hertzsprung–Russell Diagram’s Main Sequence. This means that it is highly stable, providing Earth with just the right type of sunlight for life to evolve on our planet. We know there are planets orbiting other stars near our Solar System, and many of these stars are similar to our own.

In the future, should mankind wish to leave the Solar System, we’ll have a huge choice of stars we could travel to, and many could have the right conditions for life to thrive. But where would we go, and how long would it take for us to get there? Just remember, this is all speculative and there is currently no benchmark for interstellar trips. That being said, here we go!

Over 2,000 exoplanets have been identified, many of which are believed to be habitable. Credit: phl.upr.edu

As already noted, the closest star to our Solar System is Proxima Centauri, which is why it makes the most sense to plot an interstellar mission to this system first. As part of a triple star system called Alpha Centauri, Proxima is about 4.24 light years (or 1.3 parsecs) from Earth. Alpha Centauri is actually the brightest star of the three in the system (part of a closely orbiting binary 4.37 light years from Earth), whereas Proxima Centauri (the dimmest of the three) is an isolated red dwarf about 0.13 light years from the binary.

And while interstellar travel conjures up all kinds of visions of Faster-Than-Light (FTL) travel, ranging from warp speed and wormholes to jump drives, such theories are either highly speculative (such as the Alcubierre Drive) or entirely the province of science fiction. In all likelihood, any deep space mission will take generations to get there, rather than a few days or an instantaneous flash.

So, starting with one of the slowest forms of space travel, how long will it take to get to Proxima Centauri?

The question of how long it would take to get somewhere in space is somewhat easier when dealing with existing technology and bodies within our Solar System. For instance, using the technology that powered the New Horizons mission (which consisted of 16 thrusters fueled with hydrazine monopropellant), reaching the Moon would take a mere 8 hours and 35 minutes.

On the other hand, there is the European Space Agency’s (ESA) SMART-1 mission, which took its time traveling to the Moon using the method of ionic propulsion. With this revolutionary technology, a variation of which has since been used by the Dawn spacecraft to reach Vesta, the SMART-1 mission took one year, one month and two weeks to reach the Moon.

So, from the speedy rocket-propelled spacecraft to the economical ion drive, we have a few options for getting around local space, plus we could use Jupiter or Saturn for a hefty gravitational slingshot. However, if we were to contemplate missions to somewhere a little more out of the way, we would have to scale up our technology and look at what’s really possible.

When we say possible methods, we are talking about those that involve existing technology, or those that do not yet exist but are technically feasible. Some, as you will see, are time-honored and proven, while others are emerging or still on the drawing board. In just about all cases, though, they present a possible, but extremely time-consuming or expensive, scenario for getting to even the closest stars.

Ionic Propulsion: Currently, the slowest form of propulsion, and the most fuel-efficient, is the ion engine. A few decades ago, ionic propulsion was considered to be the subject of science fiction. However, in recent years, the technology to support ion engines has moved from theory to practice in a big way. The ESA’s SMART-1 mission, for example, successfully completed its mission to the Moon after taking a 13-month spiral path from the Earth.

SMART-1 used solar powered ion thrusters, where electrical energy was harvested from its solar panels and used to power its Hall-effect thrusters. Only 82 kg of xenon propellant was used to propel SMART-1 to the Moon. 1 kg of xenon propellant provided a delta-v of 45 m/s. This is a highly efficient form of propulsion, but it is by no means fast.
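Multiplying the article's two figures together gives the mission's total delta-v budget, a quick sanity check:

```python
# Total delta-v implied by the SMART-1 figures quoted above.
xenon_kg = 82.0      # kg of xenon propellant used
dv_per_kg = 45.0     # m/s of delta-v per kg of xenon

total_dv = xenon_kg * dv_per_kg   # m/s
print(f"total delta-v: {total_dv/1000:.2f} km/s")   # ~3.7 km/s over the mission
```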

Artist’s concept of Dawn mission above Ceres. Since its arrival, the spacecraft turned around to point the blue glow of its ion engine in the opposite direction. Image credit: NASA/JPL

One of the first missions to use ion drive technology was the Deep Space 1 mission to Comet Borrelly, launched in 1998. DS1 also used a xenon-powered ion drive, consuming 81.5 kg of propellant. Over 20 months of thrusting, DS1 managed to reach a velocity of 56,000 km/hr (35,000 miles/hr) during its flyby of the comet.

Ion thrusters are therefore more economical than rocket technology, as the thrust per unit mass of propellant (a.k.a. specific impulse) is far higher. But it takes a long time for ion thrusters to accelerate spacecraft to any great speed, and the maximum velocity they can achieve is dependent on the fuel supply and how much electrical energy the spacecraft can generate.

So if ionic propulsion were to be used for a mission to Proxima Centauri, the thrusters would need a huge source of energy production (i.e. nuclear power) and a large quantity of propellant (although still less than conventional rockets). But based on the assumption that a supply of 81.5 kg of xenon propellant translates into a maximum velocity of 56,000 km/hr (and that there are no other forms of propulsion available, such as a gravitational slingshot to accelerate it further), some calculations can be made.

In short, at a maximum velocity of 56,000 km/h, Deep Space 1 would take over 81,000 years to traverse the 4.24 light years between Earth and Proxima Centauri. To put that time-scale into perspective, that would be over 2,700 human generations. So it is safe to say that an interplanetary ion engine mission would be far too slow to be considered for a manned interstellar mission.
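The 81,000-year figure can be reproduced directly from the distance and speed (the ~30 years per generation is an assumption, not a figure from the article):

```python
# Trip time to Proxima Centauri at Deep Space 1's flyby velocity.
LIGHT_YEAR_KM = 9.461e12                # km per light year
distance_km = 4.24 * LIGHT_YEAR_KM      # Earth to Proxima Centauri
speed_kmh = 56_000                      # DS1's maximum velocity, km/h

hours = distance_km / speed_kmh
years = hours / (24 * 365.25)
generations = years / 30                # assuming ~30 years per generation
print(f"{years:,.0f} years, ~{generations:,.0f} generations")  # over 81,000 years
```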

Ionic propulsion is currently the slowest, but most fuel-efficient, form of space travel. Credit: NASA/JPL

But, should ion thrusters be made larger and more powerful (i.e. ion exhaust velocity would need to be significantly higher), and enough propellant could be hauled to keep the spacecraft going for the entire 4.243 light-year trip, that travel time could be greatly reduced. Still not enough to happen in someone’s lifetime, though.

Gravity Assist Method: The fastest existing means of space travel is known as the gravity assist method, which involves a spacecraft using the relative movement (i.e. orbit) and gravity of a planet to alter its path and speed. Gravitational assists are a very useful spaceflight technique, especially when using the Earth or another massive planet (like a gas giant) for a boost in velocity.

The Mariner 10 spacecraft was the first to use this method, using Venus’ gravitational pull to slingshot it towards Mercury in February of 1974. In the 1980s, the Voyager 1 probe used Saturn and Jupiter for gravitational slingshots to attain its current velocity of 60,000 km/hr (38,000 miles/hr) and make it into interstellar space.

However, it was the Helios 2 mission (launched in 1976 to study the interplanetary medium between 0.3 AU and 1 AU from the Sun) that holds the record for the highest speed achieved with a gravity assist. At the time, Helios 1 (which launched in 1974) and Helios 2 held the record for the closest approach to the Sun. Helios 2 was launched by a conventional NASA Titan/Centaur launch vehicle and placed in a highly elliptical orbit.

A Helios probe being encapsulated for launch. Credit: Public Domain

Due to the large eccentricity (0.54) of its 190-day solar orbit, at perihelion Helios 2 was able to reach a maximum velocity of over 240,000 km/hr (150,000 miles/hr). This orbital speed was attained by the gravitational pull of the Sun alone. Technically, the Helios 2 perihelion velocity was not a gravitational slingshot but a maximum orbital velocity, yet it still holds the record for being the fastest man-made object regardless.

So, if Voyager 1 were traveling in the direction of the red dwarf Proxima Centauri at a constant velocity of 60,000 km/hr, it would take 76,000 years (or over 2,500 generations) to travel that distance. But if it could attain the record-breaking speed of Helios 2’s close approach to the Sun (a constant speed of 240,000 km/hr), it would take 19,000 years (or over 600 generations) to travel 4.243 light years. Significantly better, but still not in the realm of practicality.
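Both trip times follow from the same distance-over-speed arithmetic:

```python
# Trip times to Proxima Centauri at two constant cruise speeds.
LIGHT_YEAR_KM = 9.461e12
distance_km = 4.243 * LIGHT_YEAR_KM

def years_at(speed_kmh):
    return distance_km / speed_kmh / (24 * 365.25)

voyager_years = years_at(60_000)    # Voyager 1's cruise velocity
helios_years = years_at(240_000)    # Helios 2's perihelion speed
print(f"Voyager 1: {voyager_years:,.0f} years")   # ~76,000 years
print(f"Helios 2:  {helios_years:,.0f} years")    # ~19,000 years
```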

Electromagnetic (EM) Drive: Another proposed method of interstellar travel comes in the form of the Radio Frequency (RF) Resonant Cavity Thruster, also known as the EM Drive. Originally proposed in 2001 by Roger K. Shawyer, a UK scientist who started Satellite Propulsion Research Ltd (SPR) to bring it to fruition, this drive is built around the idea that electromagnetic microwave cavities can allow for the direct conversion of electrical energy to thrust.

Whereas conventional electromagnetic thrusters are designed to propel a certain type of mass (such as ionized particles), this particular drive system relies on no reaction mass and emits no directional radiation. Such a proposal has met with a great deal of skepticism, mainly because it violates the law of Conservation of Momentum which states that within a system, the amount of momentum remains constant and is neither created nor destroyed, but only changes through the action of forces.

The EM Drive prototype produced by NASA/Eagleworks. Credit: NASA Spaceflight Forum

However, recent experiments with the technology have apparently yielded positive results. In July of 2014, at the 50th AIAA/ASME/SAE/ASEE Joint Propulsion Conference in Cleveland, Ohio, researchers from NASA’s advanced propulsion research group claimed that they had successfully tested a new design for an electromagnetic propulsion drive.

This was followed up in April of 2015, when researchers at NASA Eagleworks (part of the Johnson Space Center) claimed that they had successfully tested the drive in a vacuum, an indication that it might actually work in space. In July of that same year, a research team from the Dresden University of Technology’s Space Systems department built their own version of the engine and observed a detectable thrust.

And in 2010, Prof. Juan Yang of the Northwestern Polytechnical University in Xi’an, China, began publishing a series of papers about her research into EM Drive technology. This culminated in her 2012 paper, where she reported higher input power (2.5 kW) and tested thrust (720 mN) levels. In 2014, she further reported extensive tests involving internal temperature measurements with embedded thermocouples, which seemed to confirm that the system worked.

Artist’s concept of an interstellar craft equipped with an EM Drive. Credit: NASA Spaceflight Center

According to calculations based on the NASA prototype (which yielded a thrust estimate of 0.4 N per kilowatt), a spacecraft equipped with the EM drive could make the trip to Pluto in less than 18 months. That’s one-sixth the time it took for the New Horizons probe to get there, which was traveling at speeds of close to 58,000 km/h (36,000 mph).
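A rough consistency check (the ~5 billion km Earth-to-Pluto trip distance here is an assumption, not a figure from the article): the 18-month claim implies an average speed around 6.5 times that of New Horizons, in line with the quoted one-sixth trip time:

```python
# Average speed implied by the "Pluto in under 18 months" claim.
distance_km = 5.0e9                 # assumed Earth-Pluto trip distance, km
trip_hours = 18 * 30.44 * 24        # 18 months of average length

avg_speed_kmh = distance_km / trip_hours
ratio = avg_speed_kmh / 58_000      # compare with New Horizons' cruise speed
print(f"implied average speed: {avg_speed_kmh:,.0f} km/h")  # ~380,000 km/h
print(f"vs. New Horizons: {ratio:.1f}x faster")
```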

Sounds impressive. But even at that rate, it would take a ship equipped with EM engines over 13,000 years to make it to Proxima Centauri. Getting closer, but not quickly enough! And until such time as the technology can be definitively proven to work, it doesn’t make much sense to put our eggs into this basket.

Nuclear Thermal and Nuclear Electric Propulsion (NTP/NEP): Another possibility for interstellar space flight is to use spacecraft equipped with nuclear engines, a concept which NASA has been exploring for decades. In a Nuclear Thermal Propulsion (NTP) rocket, uranium or deuterium reactions are used to heat liquid hydrogen inside a reactor, turning it into ionized hydrogen gas (plasma), which is then channeled through a rocket nozzle to generate thrust.

A Nuclear Electric Propulsion (NEP) rocket involves the same basic reactor converting its heat and energy into electrical energy, which would then power an electrical engine. In both cases, the rocket would rely on nuclear fission or fusion to generate propulsion rather than chemical propellants, which have been the mainstay of NASA and all other space agencies to date.

Artist’s impression of a Crew Transfer Vehicle (CTV) using its nuclear-thermal rocket engines to slow down and establish orbit around Mars. Credit: NASA

Compared to chemical propulsion, both NTP and NEP offer a number of advantages. The first and most obvious is the virtually unlimited energy density they offer compared to rocket fuel. In addition, a nuclear-powered engine could also provide superior thrust relative to the amount of propellant used. This would cut the total amount of propellant needed, thus cutting launch weight and the cost of individual missions.

Although no nuclear-thermal engines have ever flown, several design concepts have been built and tested over the past few decades, and numerous concepts have been proposed. These have ranged from the traditional solid-core design such as the Nuclear Engine for Rocket Vehicle Application (NERVA) to more advanced and efficient concepts that rely on either a liquid or a gas core.

However, despite these advantages in fuel efficiency and specific impulse, the most sophisticated NTP concept has a maximum specific impulse of 5,000 seconds (50 kN·s/kg). Using nuclear engines driven by fission or fusion, NASA scientists estimate it could take a spaceship only 90 days to get to Mars when the planet was at opposition, i.e. as close as 55,000,000 km from Earth.
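As a quick check, the quoted 90-day transit implies a surprisingly modest average speed; the hard part is sustaining the thrust, not reaching the raw velocity:

```python
# Average speed needed for a 90-day Mars transit at closest approach.
distance_km = 55_000_000       # Earth-Mars distance at opposition (from the article)
days = 90

avg_speed_kmh = distance_km / (days * 24)
print(f"required average speed: {avg_speed_kmh:,.0f} km/h")   # ~25,500 km/h
```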

But adjusted for a one-way journey to Proxima Centauri, a nuclear rocket would still take centuries to accelerate to the point where it was flying at a fraction of the speed of light. It would then require several decades of travel time, followed by many more centuries of deceleration before reaching its destination. All told, we’re still talking about 1,000 years before it reaches its destination. Good for interplanetary missions, not so good for interstellar ones.

Using existing technology, the time it would take to send scientists and astronauts on an interstellar mission would be prohibitively long. If we want to make that journey within a single lifetime, or even a generation, something a bit more radical (a.k.a. highly theoretical) will be needed. And while wormholes and jump engines may still be pure fiction at this point, there are some rather advanced ideas that have been considered over the years.

Nuclear Pulse Propulsion: Nuclear pulse propulsion is a theoretically possible form of fast space travel. The concept was originally proposed in 1946 by Stanislaw Ulam, a Polish-American mathematician who participated in the Manhattan Project, and preliminary calculations were then made by F. Reines and Ulam in 1947. The actual project, known as Project Orion, was initiated in 1958 and lasted until 1963.

The Project Orion concept for a nuclear-powered spacecraft. Credit: silodrome.co

Led by Ted Taylor at General Atomics and physicist Freeman Dyson from the Institute for Advanced Study in Princeton, Orion hoped to harness the power of pulsed nuclear explosions to provide a huge thrust with very high specific impulse (i.e. the amount of thrust relative to the weight of propellant, or the number of seconds the rocket can continually fire).

In a nutshell, the Orion design involves a large spacecraft with a large supply of thermonuclear warheads achieving propulsion by releasing a bomb behind it and then riding the detonation wave with the help of a rear-mounted pad called a pusher. After each blast, the explosive force would be absorbed by this pusher pad, which then translates the thrust into forward momentum.

Though hardly elegant by modern standards, the advantage of the design is that it achieves a high specific impulse, meaning it extracts the maximum amount of energy from its fuel source (in this case, nuclear bombs) at minimal cost. In addition, the concept could theoretically achieve very high speeds, with some estimates suggesting a ballpark figure as high as 5% the speed of light (or 5.4×10⁷ km/hr).
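Both quoted numbers check out, and they imply a trip time to Proxima Centauri of under a century (ignoring the acceleration and deceleration phases):

```python
# Converting "5% the speed of light" to km/h and to a Proxima trip time.
c_kms = 299_792                 # speed of light, km/s
frac = 0.05                     # 5% of c, the upper Orion estimate

speed_kmh = frac * c_kms * 3600
trip_years = 4.24 / frac        # 4.24 light years at a constant 0.05 c
print(f"speed: {speed_kmh:.2e} km/h")     # ~5.4e7 km/h, as quoted
print(f"trip:  {trip_years:.0f} years")   # ~85 years
```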

But of course, there are inevitable downsides to the design. For one, a ship of this size would be incredibly expensive to build. According to estimates produced by Dyson in 1968, an Orion spacecraft that used hydrogen bombs to generate propulsion would weigh 400,000 to 4,000,000 metric tons. And at least three quarters of that weight consists of nuclear bombs, where each warhead weighs approximately 1 metric ton.

Artist’s concept of Orion spacecraft leaving Earth. Credit: bisbos.com/Adrian Mann

All told, Dyson’s most conservative estimates placed the total cost of building an Orion craft at 367 billion dollars. Adjusted for inflation, that works out to roughly $2.5 trillion, which accounts for over two thirds of the US government’s current annual revenue. Hence, even at its lightest, the craft would be extremely expensive to manufacture.

There’s also the slight problem of all the radiation it generates, not to mention nuclear waste. In fact, it is for this reason that the project is believed to have been terminated, owing to the passage of the Partial Test Ban Treaty of 1963, which sought to limit nuclear testing and stop the excessive release of nuclear fallout into the planet’s atmosphere.

Fusion Rockets: Another possibility within the realm of harnessed nuclear power involves rockets that rely on thermonuclear reactions to generate thrust. For this concept, energy is created when pellets of a deuterium/helium-3 mix are ignited in a reaction chamber by inertial confinement using electron beams (similar to what is done at the National Ignition Facility in California). This fusion reactor would detonate 250 pellets per second to create high-energy plasma, which would then be directed by a magnetic nozzle to create thrust.

Like a rocket that relies on a nuclear reactor, this concept offers advantages as far as fuel efficiency and specific impulse are concerned. Exhaust velocities of up to 10,600 km/s are estimated, which is far beyond the speed of conventional rockets. What’s more, the technology has been studied extensively over the past few decades, and many proposals have been made.

Artist’s concept of the Daedalus spacecraft, a two-stage fusion rocket that would achieve up to 12% the speed of light. Credit: Adrian Mann

For example, between 1973 and 1978, the British Interplanetary Society conducted a feasibility study known as Project Daedalus. Relying on current knowledge of fusion technology and existing methods, the study called for the creation of a two-stage unmanned scientific probe making a trip to Barnard’s Star (5.9 light years from Earth) in a single lifetime.

The first stage, the larger of the two, would operate for 2.05 years and accelerate the spacecraft to 7.1% the speed of light (0.071 c). This stage would then be jettisoned, at which point the second stage would ignite its engine and accelerate the spacecraft up to about 12% of light speed (0.12 c) over the course of 1.8 years. The second-stage engine would then be shut down, and the ship would enter a 46-year cruise period.

According to the Project’s estimates, the mission would take 50 years to reach Barnard’s Star. Adjusted for Proxima Centauri, the same craft could make the trip in 36 years. But of course, the project also identified numerous stumbling blocks that made it unfeasible using then-current technology, most of which are still unresolved.
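These figures can be sanity-checked with a crude mission-profile model. The linear-ramp burn approximation below is an assumption (the article gives only the burn durations and endpoint speeds), so the results land within a year or two of the quoted 50 and 36 years:

```python
# Rough check of the Daedalus mission profile quoted above.
# Each burn stage: (duration in years, start speed, end speed), speeds in fractions of c.
stage1 = (2.05, 0.0, 0.071)
stage2 = (1.8, 0.071, 0.12)
cruise_speed = 0.12   # fraction of c after second-stage burnout

def burn_distance(duration, v0, v1):
    # Distance covered assuming speed ramps linearly: average speed * time,
    # in light years (a fraction of c for one year covers that many light years).
    return duration * (v0 + v1) / 2

accel_ly = burn_distance(*stage1) + burn_distance(*stage2)
accel_years = stage1[0] + stage2[0]

def total_trip_years(target_ly):
    cruise_ly = target_ly - accel_ly
    return accel_years + cruise_ly / cruise_speed

barnard = total_trip_years(5.9)     # Barnard's Star
proxima = total_trip_years(4.24)    # Proxima Centauri
print(f"Barnard's Star: {barnard:.0f} years")
print(f"Proxima:        {proxima:.0f} years")
```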

For instance, there is the fact that helium-3 is scarce on Earth, which means it would have to be mined elsewhere (most likely on the Moon). Second, the reaction that drives the spacecraft requires that the energy released vastly exceed the energy used to trigger the reaction. And while experiments here on Earth have surpassed the break-even goal, we are still a long way from the kinds of energy needed to power an interstellar spaceship.

Artist’s concept of the Project Daedalus spacecraft, with a Saturn V rocket standing next to it for scale. Credit: Adrian Mann

Third, there is the cost factor of constructing such a ship. Even by the modest standard of Project Daedalus’ unmanned craft, a fully fueled craft would weigh as much as 60,000 Mt. To put that in perspective, the gross weight of NASA’s SLS is just over 30 Mt, and a single launch comes with a price tag of $5 billion (based on estimates made in 2013).

In short, a fusion rocket would not only be prohibitively expensive to build; it would also require a level of fusion reactor technology that is currently beyond our means. Icarus Interstellar, an international organization of volunteer citizen scientists (some of whom worked for NASA or the ESA), has since attempted to revitalize the concept with Project Icarus. Founded in 2009, the group hopes to make fusion propulsion (among other things) feasible in the near future.

Fusion Ramjet: Also known as the Bussard Ramjet, this theoretical form of propulsion was first proposed by physicist Robert W. Bussard in 1960. Basically, it is an improvement over the standard nuclear fusion rocket, which uses magnetic fields to compress hydrogen fuel to the point that fusion occurs. But in the Ramjet’s case, an enormous electromagnetic funnel scoops hydrogen from the interstellar medium and dumps it into the reactor as fuel.

Artist's concept of the Bussard Ramjet, which would harness hydrogen from the interstellar medium to power its fusion engines. Credit: futurespacetransportation.weebly.com

As the ship picks up speed, the reactive mass is forced into a progressively constricted magnetic field, compressing it until thermonuclear fusion occurs. The magnetic field then directs the energy as rocket exhaust through an engine nozzle, thereby accelerating the vessel. Without any fuel tanks to weigh it down, a fusion ramjet could achieve speeds approaching 4% of the speed of light and travel anywhere in the galaxy.

However, the potential drawbacks of this design are numerous. For instance, there is the problem of drag. The ship relies on increased speed to accumulate fuel, but as it collides with more and more interstellar hydrogen, it may also lose speed, especially in denser regions of the galaxy. Second, deuterium and tritium (used in fusion reactors here on Earth) are rare in space, whereas fusing regular hydrogen (which is plentiful in space) is beyond our current methods.

This concept has been popularized extensively in science fiction. Perhaps the best known example is in the Star Trek franchise, where Bussard collectors are the glowing nacelles on warp engines. But in reality, our knowledge of fusion reactions needs to progress considerably before a ramjet is possible. We would also have to figure out that pesky drag problem before we could begin to consider building such a ship!

Laser Sail: Solar sails have long been considered a cost-effective way of exploring the Solar System. In addition to being relatively easy and cheap to manufacture, there's the added bonus that solar sails require no fuel. Rather than using rockets that require propellant, the sail uses the radiation pressure from stars to push large ultra-thin mirrors to high speeds.

IKAROS space probe with solar sail in flight (artist's depiction), showing a typical square sail configuration. Credit: Wikimedia Commons/Andrzej Mirecki

However, for the sake of interstellar flight, such a sail would need to be driven by focused energy beams (i.e. lasers or microwaves) to push it to a velocity approaching the speed of light. The concept was originally proposed by Robert Forward in 1984, who was a physicist at Hughes Aircraft's research laboratories at the time.

The concept retains the benefits of a solar sail, in that it requires no on-board fuel, and it also benefits from the fact that laser energy does not dissipate with distance nearly as much as solar radiation. So while a laser-driven sail would take some time to accelerate to near-luminous speeds, it would be limited only by the speed of light itself.

According to a 2000 study produced by Robert Frisbee, a director of advanced propulsion concept studies at NASA's Jet Propulsion Laboratory, a laser sail could be accelerated to half the speed of light in less than a decade. He also calculated that a sail measuring about 320 km (200 miles) in diameter could reach Proxima Centauri in just over 12 years, while a sail measuring about 965 km (600 miles) in diameter would arrive in just under 9 years.
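Figures like these can be roughly reproduced with a simple accelerate-then-coast model. The sketch below assumes a constant-acceleration burn lasting ten years (the "less than a decade" figure above, taken as an assumption) and ignores relativistic corrections, so it is only a ballpark check, not Frisbee's actual calculation:

```python
# Rough, non-relativistic accelerate-then-coast model of a laser sail
# trip to Proxima Centauri. The 10-year burn duration is an assumption.
DIST_LY = 4.24   # distance to Proxima Centauri, light-years
CRUISE = 0.5     # cruise speed as a fraction of c
T_ACCEL = 10.0   # years spent accelerating at constant acceleration

# At constant acceleration, average speed during the burn is CRUISE/2,
# so the burn covers (CRUISE/2) * T_ACCEL light-years.
d_accel = 0.5 * CRUISE * T_ACCEL          # distance covered while accelerating
t_coast = (DIST_LY - d_accel) / CRUISE    # remaining distance at 0.5c
total_years = T_ACCEL + t_coast

print(f"Accel phase covers {d_accel:.2f} ly")
print(f"Total trip time: {total_years:.1f} years")
```

With these assumptions the trip comes out at about 13.5 years; a faster burn (as a bigger laser would allow) brings it down toward the 9-12 year figures quoted above.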

However, such a sail would have to be built from advanced composites to avoid melting. Combined with its size, this would add up to a pretty penny! Even worse is the sheer expense incurred from building a laser large and powerful enough to drive a sail to half the speed of light. According to Frisbee's own study, the lasers would require a steady flow of 17,000 terawatts of power, close to what the entire world consumes in a single day.

Antimatter Engine: Fans of science fiction are sure to have heard of antimatter. But in case you haven't, antimatter is essentially material composed of antiparticles, which have the same mass but opposite charge as regular particles. An antimatter engine, meanwhile, is a form of propulsion that uses interactions between matter and antimatter to generate power or to create thrust.

Artist's concept of an antimatter-powered spacecraft for missions to Mars, as part of the Mars Reference Mission. Credit: NASA

In short, an antimatter engine involves particles of hydrogen and antihydrogen being slammed together. This reaction unleashes as much energy as a thermonuclear bomb, along with a shower of subatomic particles called pions and muons. These particles, which would travel at one-third the speed of light, are then channeled by a magnetic nozzle to generate thrust.

The advantage to this class of rocket is that a large fraction of the rest mass of a matter/antimatter mixture may be converted to energy, allowing antimatter rockets to have a far higher energy density and specific impulse than any other proposed class of rocket. What's more, controlling this kind of reaction could conceivably push a rocket up to half the speed of light.

Pound for pound, this class of ship would be the fastest and most fuel-efficient ever conceived. Whereas conventional rockets require tons of chemical fuel to propel a spaceship to its destination, an antimatter engine could do the same job with just a few milligrams of fuel. In fact, the mutual annihilation of a half pound of hydrogen and antihydrogen particles would unleash more energy than a 10-megaton hydrogen bomb.
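That last claim follows directly from E = mc². A quick check, treating the half pound of hydrogen and the half pound of antihydrogen as one pound of mass converted entirely to energy, lands close to the quoted 10-megaton figure:

```python
# Checking the annihilation claim with E = m * c^2.
C = 2.998e8              # speed of light, m/s
LB_TO_KG = 0.4536        # pounds to kilograms
MEGATON_TNT_J = 4.184e15  # energy of one megaton of TNT, joules

# Both the matter and the antimatter annihilate: 0.5 lb + 0.5 lb = 1 lb of mass.
mass_kg = 2 * 0.5 * LB_TO_KG
energy_j = mass_kg * C**2
yield_mt = energy_j / MEGATON_TNT_J

print(f"Energy released: {energy_j:.2e} J, about {yield_mt:.1f} megatons of TNT")
```

The result is roughly 4 x 10^16 joules, or just under 10 megatons, matching the comparison above.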

It is for this exact reason that NASA's Institute for Advanced Concepts (NIAC) has investigated the technology as a possible means for future Mars missions. Unfortunately, when contemplating missions to nearby star systems, the amount of fuel needed to make the trip is multiplied exponentially, and the cost involved in producing it would be astronomical (no pun intended!).

What matter and antimatter might look like annihilating one another. Credit: NASA/CXC/M. Weiss

According to a report prepared for the 39th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit (also by Robert Frisbee), a two-stage antimatter rocket would need over 815,000 metric tons (900,000 US tons) of fuel to make the journey to Proxima Centauri in approximately 40 years. That's not bad, as far as timelines go. But again, there's the cost…

Whereas a single gram of antimatter would produce an incredible amount of energy, it is estimated that producing just one gram would require approximately 25 million billion kilowatt-hours of energy and cost over a trillion dollars. At present, the total amount of antimatter that has been created by humans is less than 20 nanograms.
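Those two numbers imply a staggeringly lopsided energy balance. The sketch below compares the quoted production energy for one gram of antimatter against the energy that gram would release when annihilated with a gram of ordinary matter:

```python
# Comparing the energy cost of making one gram of antimatter with the
# energy it returns on annihilation (figures from the text above).
C = 2.998e8      # speed of light, m/s
KWH_TO_J = 3.6e6  # joules per kilowatt-hour

production_j = 25e15 * KWH_TO_J   # ~25 million billion kWh to make one gram
annihilation_j = 0.002 * C**2     # 1 g antimatter + 1 g matter = 2 g annihilated

efficiency = annihilation_j / production_j
print(f"Energy in:  {production_j:.1e} J")
print(f"Energy out: {annihilation_j:.1e} J")
print(f"Round-trip efficiency: {efficiency:.1e}")
```

The round-trip efficiency works out to about two parts per billion: for every joule an antimatter engine delivers, roughly half a billion joules would have gone into making the fuel.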

And even if we could produce antimatter cheaply, you would need a massive ship to hold the amount of fuel needed. According to a report by Dr. Darrel Smith and Jonathan Webby of Embry-Riddle Aeronautical University in Arizona, an interstellar craft equipped with an antimatter engine could reach half the speed of light and arrive at Proxima Centauri in a little over 8 years. However, the ship itself would weigh 400 Mt, and would need 170 Mt of antimatter fuel to make the journey.

A possible way around this is to build a vessel that can generate its own antimatter, which it could then store as fuel. This concept, known as the Vacuum to Antimatter Rocket Interstellar Explorer System (VARIES), was proposed by Richard Obousy of Icarus Interstellar. Based on the idea of in-situ refueling, a VARIES ship would rely on large lasers (powered by enormous solar arrays) which would create particles of antimatter when fired at empty space.

Artist's concept of the Vacuum to Antimatter Rocket Interstellar Explorer System (VARIES), a concept that would use solar arrays to power lasers that create particles of antimatter to be used as fuel. Credit: Adrian Mann

Much like the Ramjet concept, this proposal solves the problem of carrying fuel by harvesting it from space. But once again, the sheer cost of such a ship would be prohibitively expensive using current technology. In addition, the ability to create antimatter in large volumes is not something we currently have the means to do. There's also the matter of radiation, as matter-antimatter annihilation can produce blasts of high-energy gamma rays.

This not only presents a danger to the crew, requiring significant radiation shielding, but requires that the engines be shielded as well to ensure they don't undergo atomic degradation from all the radiation they are exposed to. So, bottom line: the antimatter engine is completely impractical with our current technology and in the current budget environment.

Alcubierre Warp Drive: Fans of science fiction are also no doubt familiar with the concept of an Alcubierre (or Warp) Drive. Proposed by Mexican physicist Miguel Alcubierre in 1994, this method was an attempt to make FTL travel possible without violating Einstein's theory of Special Relativity. In short, the concept involves stretching the fabric of space-time in a wave, which would theoretically cause the space ahead of an object to contract and the space behind it to expand.

An object inside this wave (i.e. a spaceship) would then be able to ride this wave, known as a warp bubble, beyond relativistic speeds. Since the ship is not moving within this bubble, but is being carried along as the bubble moves, the usual relativistic speed limit would not apply. The reason being, this method does not rely on moving faster than light in the local sense.

Artist Mark Rademaker's concept for the IXS Enterprise, a theoretical interstellar warp spacecraft. Credit: Mark Rademaker/flickr.com

It is only faster than light in the sense that the ship could reach its destination faster than a beam of light that was traveling outside the warp bubble. So assuming that a spacecraft could be outfitted with an Alcubierre Drive system, it would be able to make the trip to Proxima Centauri in less than 4 years. So when it comes to theoretical interstellar space travel, this is by far the most promising technology, at least in terms of speed.
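The arithmetic behind "faster than light" here is simple: covering roughly 4.24 light-years in under 4 years means an effective speed above c, even though nothing in the bubble ever outruns light locally. A minimal check, assuming a 4-year trip time:

```python
# Effective speed of a warp trip to Proxima Centauri, in multiples of c.
# The 4-year trip time is an assumption taken from the text.
DIST_LY = 4.24    # distance to Proxima Centauri, light-years
TRIP_YEARS = 4.0  # assumed trip duration as seen by outside observers

effective_speed_c = DIST_LY / TRIP_YEARS
print(f"Effective speed: {effective_speed_c:.2f} c")   # 1.06 c
```

Even a modest 6% edge over a light beam would make this the fastest concept on this list by a wide margin.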

Naturally, the concept has received its share of counter-arguments over the years. Chief amongst them is the fact that it does not take quantum mechanics into account, and could be invalidated by a Theory of Everything (such as loop quantum gravity). Calculations of the amount of energy required have also indicated that a warp drive would demand a prohibitive amount of power to work. Other uncertainties include the safety of such a system, the effects on space-time at the destination, and violations of causality.

However, in 2012, NASA scientist Harold "Sonny" White announced that he and his colleagues had begun researching the possibility of an Alcubierre Drive. In a paper titled "Warp Field Mechanics 101", White claimed that they had constructed an interferometer that could detect the spatial distortions produced by the expanding and contracting space-time of the Alcubierre metric.

In 2013, the Jet Propulsion Laboratory published the results of a warp field test conducted under vacuum conditions. Unfortunately, the results were reported as inconclusive. In the long term, we may find that the Alcubierre metric violates one or more fundamental laws of nature. And even if the physics should prove sound, there is no guarantee it can be harnessed for the sake of FTL flight.

In conclusion, if you were hoping to travel to the nearest star within your lifetime, the outlook isn't very good. However, if mankind felt the incentive to build an interstellar ark filled with a self-sustaining community of space-faring humans, it might be possible to travel there in a little under a century, if we were willing to invest in the requisite technology.

But all the available methods are still very limited when it comes to transit time. And while taking hundreds or thousands of years to reach the nearest star may matter less to us if our very survival were at stake, it is simply not practical as far as space exploration and travel go. By the time a mission reached even the closest stars in our galaxy, the technology employed would be obsolete and humanity might not even exist back home anymore.

So unless we make a major breakthrough in the realms of fusion, antimatter, or laser technology, we will either have to be content with exploring our own Solar System, or be forced to accept a very long-term transit strategy…

We have written many interesting articles about space travel here at Universe Today. Here's Will We Ever Reach Another Star?, Warp Drives May Come With a Killer Downside, The Alcubierre Warp Drive, How Far Is A Light Year?, When Light Just Isn't Fast Enough, When Will We Become Interstellar?, and Can We Travel Faster Than the Speed of Light?

For more information, be sure to consult NASA's pages on Propulsion Systems of the Future, and Is Warp Drive Real?

View original post here:

How Long Would It Take To Travel To The Nearest Star …

