

Spaceflight – Wikipedia

Spaceflight (also written space flight) is ballistic flight into or through outer space. Spaceflight can occur with spacecraft with or without humans on board. Examples of human spaceflight include the U.S. Apollo Moon landing and Space Shuttle programs and the Russian Soyuz program, as well as the ongoing International Space Station. Examples of unmanned spaceflight include space probes that leave Earth orbit, as well as satellites in orbit around Earth, such as communications satellites. These operate either under telerobotic control or fully autonomously.

Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications. Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites.

A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of the Earth. Once in space, the motion of a spacecraft both when unpropelled and when under propulsion is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact.

The first theoretical proposal of space travel using rockets was published by Scottish astronomer and mathematician William Leitch in an 1861 essay, “A Journey Through Space”.[1] Better known (though not widely so outside Russia) is Konstantin Tsiolkovsky’s work Исследование мировых пространств реактивными приборами (The Exploration of Cosmic Space by Means of Reaction Devices), published in 1903.

Spaceflight became an engineering possibility with Robert H. Goddard’s publication in 1919 of his paper A Method of Reaching Extreme Altitudes. His application of the de Laval nozzle to liquid-fuel rockets improved efficiency enough for interplanetary travel to become possible. He also proved in the laboratory that rockets would work in the vacuum of space; nonetheless, his work was not taken seriously by the public. His attempt to secure an Army contract for a rocket-propelled weapon in the First World War was defeated by the November 11, 1918 armistice with Germany.

Nonetheless, Goddard’s paper was highly influential on Hermann Oberth, who in turn influenced Wernher von Braun. Von Braun became the first to produce modern rockets as guided weapons, employed by Adolf Hitler. Von Braun’s V-2 was the first rocket to reach space, at an altitude of 189 kilometers (102 nautical miles) on a June 1944 test flight.[2]

Tsiolkovsky’s rocketry work was not fully appreciated in his lifetime, but he influenced Sergey Korolev, who became the Soviet Union’s chief rocket designer under Joseph Stalin, to develop intercontinental ballistic missiles to carry nuclear weapons as a countermeasure to United States bomber planes. Derivatives of Korolev’s R-7 Semyorka missiles were used to launch the world’s first artificial Earth satellite, Sputnik 1, on October 4, 1957, and later the first human to orbit the Earth, Yuri Gagarin in Vostok 1, on April 12, 1961.[3]

At the end of World War II, von Braun and most of his rocket team surrendered to the United States, and were expatriated to work on American missiles at what became the Army Ballistic Missile Agency. This work on missiles such as Juno I and Atlas enabled launch of the first US satellite Explorer 1 on February 1, 1958, and the first American in orbit, John Glenn in Friendship 7 on February 20, 1962. As director of the Marshall Space Flight Center, von Braun oversaw development of a larger class of rocket called Saturn, which allowed the US to send the first two humans, Neil Armstrong and Buzz Aldrin, to the Moon and back on Apollo 11 in July 1969. Over the same period, the Soviet Union secretly tried but failed to develop the N1 rocket to give them the capability to land one person on the Moon.

Rockets are the only means currently capable of reaching orbit or beyond. Other non-rocket spacelaunch technologies have yet to be built, or remain short of orbital speeds. A rocket launch for a spaceflight usually starts from a spaceport (cosmodrome), which may be equipped with launch complexes and launch pads for vertical rocket launches, and runways for takeoff and landing of carrier airplanes and winged spacecraft. Spaceports are situated well away from human habitation for noise and safety reasons. ICBMs have various special launching facilities.

A launch is often restricted to certain launch windows. These windows depend upon the position of celestial bodies and orbits relative to the launch site. The biggest influence is often the rotation of the Earth itself. Once launched, an orbit normally lies within a relatively constant flat plane at a fixed angle to the axis of the Earth, and the Earth rotates within this orbital plane.

A launch pad is a fixed structure designed to dispatch airborne vehicles. It generally consists of a launch tower and flame trench. It is surrounded by equipment used to erect, fuel, and maintain launch vehicles.

The most commonly used definition of outer space is everything beyond the Kármán line, which is 100 kilometers (62 mi) above the Earth’s surface. The United States sometimes defines outer space as everything beyond 50 miles (80 km) in altitude.

Rockets are the only currently practical means of reaching space. Conventional airplane engines cannot reach space due to the lack of oxygen. Rocket engines expel propellant to provide forward thrust that generates enough delta-v (change in velocity) to reach orbit.
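As a rough illustration of how propellant expenditure translates into delta-v, the Python sketch below applies the Tsiolkovsky rocket equation. The exhaust velocity and mass figures are assumed for illustration only and do not describe any particular vehicle.

```python
import math

def delta_v(exhaust_velocity_m_s: float, initial_mass_kg: float, final_mass_kg: float) -> float:
    """Ideal velocity change from the Tsiolkovsky rocket equation: dv = v_e * ln(m0 / m1)."""
    return exhaust_velocity_m_s * math.log(initial_mass_kg / final_mass_kg)

# Assumed, illustrative numbers: an effective exhaust velocity of ~3,000 m/s and a
# 20:1 ratio of fully fueled mass to burnout mass (structure plus payload).
dv = delta_v(3000.0, 500_000.0, 25_000.0)
print(f"Ideal delta-v: {dv:.0f} m/s")  # roughly 9,000 m/s, the order of magnitude needed for low Earth orbit
```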

For manned launch systems, launch escape systems are frequently fitted to allow astronauts to escape in the event of an emergency.

Many ways to reach space other than rockets have been proposed. Ideas such as the space elevator, and momentum exchange tethers like rotovators or skyhooks require new materials much stronger than any currently known. Electromagnetic launchers such as launch loops might be feasible with current technology. Other ideas include rocket assisted aircraft/spaceplanes such as Reaction Engines Skylon (currently in early stage development), scramjet powered spaceplanes, and RBCC powered spaceplanes. Gun launch has been proposed for cargo.

Achieving a closed orbit is not essential to lunar and interplanetary voyages. Early Russian space vehicles successfully achieved very high altitudes without going into orbit. NASA considered launching Apollo missions directly into lunar trajectories but adopted the strategy of first entering a temporary parking orbit and then performing a separate burn several orbits later onto a lunar trajectory. This costs additional propellant because the parking orbit perigee must be high enough to prevent reentry while direct injection can have an arbitrarily low perigee because it will never be reached.

However, the parking orbit approach greatly simplified Apollo mission planning in several important ways. It substantially widened the allowable launch windows, increasing the chance of a successful launch despite minor technical problems during the countdown. The parking orbit was a stable “mission plateau” that gave the crew and controllers several hours to thoroughly check out the spacecraft after the stresses of launch before committing it to a long lunar flight; the crew could quickly return to Earth, if necessary, or an alternate Earth-orbital mission could be conducted. The parking orbit also enabled translunar trajectories that avoided the densest parts of the Van Allen radiation belts.

Apollo missions minimized the performance penalty of the parking orbit by keeping its altitude as low as possible. For example, Apollo 15 used an unusually low parking orbit (even for Apollo) of 92.5 nmi by 91.5 nmi (171 km by 169 km) where there was significant atmospheric drag. But it was partially overcome by continuous venting of hydrogen from the third stage of the Saturn V, and was in any event tolerable for the short stay.

Robotic missions do not require an abort capability or radiation minimization, and because modern launchers routinely meet “instantaneous” launch windows, space probes to the Moon and other planets generally use direct injection to maximize performance. Although some might coast briefly during the launch sequence, they do not complete one or more full parking orbits before the burn that injects them onto an Earth escape trajectory.

Note that the escape velocity from a celestial body decreases with altitude above that body. However, it is more fuel-efficient for a craft to burn its fuel as close to the ground as possible; see the Oberth effect.[5] This is another way to explain the performance penalty associated with establishing the safe perigee of a parking orbit.
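To make the first point concrete, escape velocity follows from v = sqrt(2·mu/r), so it falls as the distance from the body's center grows. The sketch below uses standard values for Earth and a few altitudes chosen purely for illustration.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def escape_velocity(altitude_m: float) -> float:
    """Escape velocity at a given altitude above the mean surface: v = sqrt(2*mu/r)."""
    return math.sqrt(2.0 * MU_EARTH / (R_EARTH + altitude_m))

# Illustrative altitudes: the surface, a low parking orbit, and roughly lunar distance.
for alt_km in (0, 200, 384_400):
    print(f"{alt_km:>7} km: {escape_velocity(alt_km * 1000.0) / 1000.0:.2f} km/s")
# Escape velocity drops from about 11.2 km/s at the surface to about 1.4 km/s at lunar
# distance, yet burning low and fast remains more efficient (the Oberth effect).
```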

Plans for future crewed interplanetary spaceflight missions often include final vehicle assembly in Earth orbit, such as NASA’s Project Orion and Russia’s Kliper/Parom tandem.

Astrodynamics is the study of spacecraft trajectories, particularly as they relate to gravitational and propulsion effects. Astrodynamics allows for a spacecraft to arrive at its destination at the correct time without excessive propellant use. An orbital maneuvering system may be needed to maintain or change orbits.

Non-rocket orbital propulsion methods include solar sails, magnetic sails, plasma-bubble magnetic systems, and using gravitational slingshot effects.

The term “transfer energy” means the total amount of energy imparted by a rocket stage to its payload. This can be the energy imparted by a first stage of a launch vehicle to an upper stage plus payload, or by an upper stage or spacecraft kick motor to a spacecraft.[6][7]

Vehicles in orbit have large amounts of kinetic energy. This energy must be discarded if the vehicle is to land safely without vaporizing in the atmosphere. Typically this process requires special methods to protect against aerodynamic heating. The theory behind reentry was developed by Harry Julian Allen. Based on this theory, reentry vehicles present blunt shapes to the atmosphere for reentry. With blunt shapes, less than 1% of the kinetic energy ends up as heat that reaches the vehicle; the rest is deposited in the atmosphere.
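To give a sense of scale, the sketch below estimates the kinetic energy per kilogram of a vehicle at an assumed nominal low-Earth-orbit speed of about 7.8 km/s and compares it with the approximate heat of combustion of kerosene; both numbers are round figures used only for illustration.

```python
ORBITAL_SPEED_M_S = 7_800.0      # assumed nominal low-Earth-orbit speed
KEROSENE_ENERGY_J_PER_KG = 43e6  # approximate heat of combustion of kerosene, J/kg

kinetic_energy_per_kg = 0.5 * ORBITAL_SPEED_M_S ** 2  # specific kinetic energy, J/kg
print(f"Kinetic energy per kg in orbit: {kinetic_energy_per_kg / 1e6:.1f} MJ")
print(f"Fraction of kerosene's combustion energy: {kinetic_energy_per_kg / KEROSENE_ENERGY_J_PER_KG:.0%}")
# About 30 MJ/kg, a large fraction of the chemical energy stored in a kilogram of fuel,
# which is why reentry heating must be deflected into the atmosphere rather than absorbed.
```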

The Mercury, Gemini, and Apollo capsules all splashed down in the sea. These capsules were designed to land at relatively low speeds with the help of a parachute. Russian Soyuz capsules use a large parachute and braking rockets to touch down on land. The Space Shuttle glided to a touchdown like an airplane.

After a successful landing the spacecraft, its occupants and cargo can be recovered. In some cases, recovery has occurred before landing: while a spacecraft is still descending on its parachute, it can be snagged by a specially designed aircraft. This mid-air retrieval technique was used to recover the film canisters from the Corona spy satellites.

Uncrewed spaceflight (or unmanned spaceflight) is all spaceflight activity without a necessary human presence in space. This includes all space probes, satellites and robotic spacecraft and missions. Uncrewed spaceflight is the opposite of manned spaceflight, which is usually called human spaceflight. Subcategories of uncrewed spaceflight are “robotic spacecraft” (objects) and “robotic space missions” (activities). A robotic spacecraft is an uncrewed spacecraft, usually under telerobotic control. A robotic spacecraft designed to make scientific research measurements is often called a space probe.

Uncrewed space missions use remote-controlled spacecraft. The first uncrewed space mission was Sputnik 1, launched October 4, 1957 to orbit the Earth. Space missions in which animals but no humans are on board are considered uncrewed missions.

Many space missions are more suited to telerobotic rather than crewed operation, due to lower cost and lower risk factors. In addition, some planetary destinations such as Venus or the vicinity of Jupiter are too hostile for human survival, given current technology. Outer planets such as Saturn, Uranus, and Neptune are too distant to reach with current crewed spaceflight technology, so telerobotic probes are the only way to explore them. Telerobotics also allows exploration of regions that are vulnerable to contamination by Earth micro-organisms, since spacecraft can be sterilized. Humans cannot be sterilized in the same way as a spacecraft, as they coexist with numerous micro-organisms, and these micro-organisms are also hard to contain within a spacecraft or spacesuit.

Telerobotics becomes telepresence when the time delay is short enough to permit control of the spacecraft in close to real time by humans. Even the roughly two-second round-trip light delay to the Moon puts it too far away for telepresence exploration from Earth. The Earth-Moon L1 and L2 positions permit 400-millisecond round-trip delays, which is just close enough for telepresence operation. Telepresence has also been suggested as a way to repair satellites in Earth orbit from Earth. The Exploration Telerobotics Symposium in 2012 explored this and other topics.[8]

The first human spaceflight was Vostok 1 on April 12, 1961, on which cosmonaut Yuri Gagarin of the USSR made one orbit around the Earth. In official Soviet documents, there is no mention of the fact that Gagarin parachuted the final seven miles.[9] Currently, the only spacecraft regularly used for human spaceflight are the Russian Soyuz spacecraft and the Chinese Shenzhou spacecraft. The U.S. Space Shuttle fleet operated from April 1981 until July 2011. SpaceShipOne has conducted two human suborbital spaceflights.

On a sub-orbital spaceflight the spacecraft reaches space and then returns to the atmosphere after following a (primarily) ballistic trajectory. This is usually because of insufficient specific orbital energy, in which case a suborbital flight will last only a few minutes, but it is also possible for an object with enough energy for an orbit to have a trajectory that intersects the Earth’s atmosphere, sometimes after many hours. Pioneer 1 was NASA’s first space probe intended to reach the Moon. A partial failure caused it to instead follow a suborbital trajectory to an altitude of 113,854 kilometers (70,746 mi) before reentering the Earth’s atmosphere 43 hours after launch.

The most generally recognized boundary of space is the Kármán line, 100 km above sea level. (NASA alternatively defines an astronaut as someone who has flown more than 50 miles (80 km) above sea level.) It is not generally recognized by the public that the increase in potential energy required to pass the Kármán line is only about 3% of the orbital energy (potential plus kinetic energy) required by the lowest possible Earth orbit (a circular orbit just above the Kármán line). In other words, it is far easier to reach space than to stay there. On May 17, 2004, Civilian Space eXploration Team launched the GoFast Rocket on a suborbital flight, the first amateur spaceflight. On June 21, 2004, SpaceShipOne was used for the first privately funded human spaceflight.
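The roughly 3% figure can be checked with elementary orbital mechanics. The sketch below uses standard constants, takes the Kármán line as 100 km, measures both energies relative to a point at rest on the surface, and neglects Earth's rotation for simplicity.

```python
MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R = 6.371e6          # mean Earth radius, m
H = 100e3            # Kármán line altitude, m

# Potential energy per kilogram gained by climbing from the surface to 100 km.
climb = MU / R - MU / (R + H)

# Total specific energy of a circular orbit at 100 km, relative to rest on the surface:
# the same climb plus the kinetic energy of circular orbital speed, v^2 = mu / r.
orbit = climb + 0.5 * MU / (R + H)

print(f"Climb to 100 km:          {climb / 1e6:.2f} MJ/kg")
print(f"Circular orbit at 100 km: {orbit / 1e6:.2f} MJ/kg")
print(f"Ratio: {climb / orbit:.1%}")  # on the order of 3%
```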

Point-to-point is a category of sub-orbital spaceflight in which a spacecraft provides rapid transport between two terrestrial locations. Consider a conventional airline route between London and Sydney, a flight that normally lasts over twenty hours. With point-to-point suborbital travel the same route could be traversed in less than one hour.[10] While no company offers this type of transportation today, SpaceX has revealed plans to do so as early as the 2020s using its BFR vehicle.[11] Suborbital spaceflight over an intercontinental distance requires a vehicle velocity that is only a little lower than the velocity required to reach low Earth orbit.[12] If rockets are used, the size of the rocket relative to the payload is similar to an Intercontinental Ballistic Missile (ICBM). Any intercontinental spaceflight has to surmount problems of heating during atmospheric re-entry that are nearly as large as those faced by orbital spaceflight.
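The velocity claim can be illustrated with the classic minimum-energy ballistic trajectory result. The sketch below assumes a London-to-Sydney great-circle distance of roughly 17,000 km and ignores Earth's rotation and the atmosphere; it is an order-of-magnitude estimate, not a trajectory design.

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R = 6.371e6          # mean Earth radius, m

def min_ballistic_speed(ground_range_m: float) -> float:
    """Minimum burnout speed (m/s) for a surface-to-surface ballistic arc spanning the
    given great-circle range, from the minimum-energy trajectory relation
    v^2 = (mu/R) * 2*sin(psi/2) / (1 + sin(psi/2)), where psi is the range angle."""
    s = math.sin(ground_range_m / R / 2.0)
    return math.sqrt((MU / R) * 2.0 * s / (1.0 + s))

v_hop = min_ballistic_speed(17_000e3)  # assumed London-Sydney great-circle distance
v_orbit = math.sqrt(MU / R)            # idealized circular orbital speed at the surface
print(f"Suborbital hop:  {v_hop / 1000.0:.2f} km/s")
print(f"Circular orbit:  {v_orbit / 1000.0:.2f} km/s")
# The two speeds come out within a few percent of each other, which is why an
# intercontinental suborbital rocket is sized much like an orbital launcher.
```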

A minimal orbital spaceflight requires much higher velocities than a minimal sub-orbital flight, and so it is technologically much more challenging to achieve. To achieve orbital spaceflight, the tangential velocity around the Earth is as important as altitude. In order to perform a stable and lasting flight in space, the spacecraft must reach the minimal orbital speed required for a closed orbit.

Interplanetary travel is travel between planets within a single planetary system. In practice, the use of the term is confined to travel between the planets of our Solar System.

Five spacecraft are currently leaving the Solar System on escape trajectories, Voyager 1, Voyager 2, Pioneer 10, Pioneer 11, and New Horizons. The one farthest from the Sun is Voyager 1, which is more than 100 AU distant and is moving at 3.6 AU per year.[13] In comparison, Proxima Centauri, the closest star other than the Sun, is 267,000 AU distant. It will take Voyager 1 over 74,000 years to reach this distance. Vehicle designs using other techniques, such as nuclear pulse propulsion are likely to be able to reach the nearest star significantly faster. Another possibility that could allow for human interstellar spaceflight is to make use of time dilation, as this would make it possible for passengers in a fast-moving vehicle to travel further into the future while aging very little, in that their great speed slows down the rate of passage of on-board time. However, attaining such high speeds would still require the use of some new, advanced method of propulsion.
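The 74,000-year figure is straightforward arithmetic on the distance and speed quoted above, as the sketch below shows (the inputs are the rounded values from the text, not precise ephemeris data).

```python
# Rounded figures quoted in the text, not precise ephemeris values.
PROXIMA_DISTANCE_AU = 267_000.0    # distance to Proxima Centauri
VOYAGER_SPEED_AU_PER_YEAR = 3.6    # Voyager 1's recession speed from the Sun

years = PROXIMA_DISTANCE_AU / VOYAGER_SPEED_AU_PER_YEAR
print(f"Travel time at Voyager 1's speed: about {years:,.0f} years")
# Roughly 74,000 years, which is why probes of this kind are not practical interstellar vehicles.
```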

Intergalactic travel involves spaceflight between galaxies, and is considered much more technologically demanding than even interstellar travel and, by current engineering terms, is considered science fiction.

Spacecraft are vehicles capable of controlling their trajectory through space.

The first ‘true spacecraft’ is sometimes said to be the Apollo Lunar Module,[14] since this was the only manned vehicle to have been designed for, and operated only in, space; it is notable for its non-aerodynamic shape.

Spacecraft today predominantly use rockets for propulsion, but other propulsion techniques such as ion drives are becoming more common, particularly for unmanned vehicles, and this can significantly reduce the vehicle’s mass and increase its delta-v.

Launch systems are used to carry a payload from Earth’s surface into outer space.

All launch vehicles contain a huge amount of energy, which is needed for some part of the vehicle to reach orbit. There is therefore some risk that this energy can be released prematurely and suddenly, with significant effects. When a Delta II rocket exploded 13 seconds after launch on January 17, 1997, there were reports of store windows 10 miles (16 km) away being broken by the blast.[16]

Space is a fairly predictable environment, but there are still risks of accidental depressurization and the potential failure of equipment, some of which may be very newly developed.

In 2004 the International Association for the Advancement of Space Safety was established in the Netherlands to further international cooperation and scientific advancement in space systems safety.[17]

In a microgravity environment such as that provided by a spacecraft in orbit around the Earth, humans experience a sense of “weightlessness.” Short-term exposure to microgravity causes space adaptation syndrome, a self-limiting nausea caused by derangement of the vestibular system. Long-term exposure causes multiple health issues. The most significant is bone loss, some of which is permanent, but microgravity also leads to significant deconditioning of muscular and cardiovascular tissues.

Once above the atmosphere, radiation from the Van Allen belts, solar radiation and cosmic radiation becomes an issue and increases. Farther from the Earth, solar flares can give a fatal radiation dose in minutes, and the health threat from cosmic radiation significantly increases the chances of cancer over a decade of exposure or more.[18]

In human spaceflight, the life support system is a group of devices that allow a human being to survive in outer space. NASA often uses the phrase Environmental Control and Life Support System or the acronym ECLSS when describing these systems for its human spaceflight missions.[19] The life support system may supply: air, water and food. It must also maintain the correct body temperature, an acceptable pressure on the body and deal with the body’s waste products. Shielding against harmful external influences such as radiation and micro-meteorites may also be necessary. Components of the life support system are life-critical, and are designed and constructed using safety engineering techniques.

Space weather is the concept of changing environmental conditions in outer space. It is distinct from the concept of weather within a planetary atmosphere, and deals with phenomena involving ambient plasma, magnetic fields, radiation and other matter in space (generally close to Earth but also in interplanetary, and occasionally interstellar medium). “Space weather describes the conditions in space that affect Earth and its technological systems. Our space weather is a consequence of the behavior of the Sun, the nature of Earth’s magnetic field, and our location in the Solar System.”[20]

Space weather exerts a profound influence in several areas related to space exploration and development. Changing geomagnetic conditions can induce changes in atmospheric density, causing the rapid decay of spacecraft altitude in low Earth orbit. Geomagnetic storms due to increased solar activity can potentially blind sensors aboard spacecraft, or interfere with on-board electronics. An understanding of space environmental conditions is also important in designing shielding and life support systems for manned spacecraft.

Rockets as a class are not inherently grossly polluting. However, some rockets use toxic propellants, and most vehicles use propellants that are not carbon neutral. Many solid rockets have chlorine in the form of perchlorate or other chemicals, and this can cause temporary local holes in the ozone layer. Re-entering spacecraft generate nitrates which also can temporarily impact the ozone layer. Most rockets are made of metals that can have an environmental impact during their construction.

In addition to the atmospheric effects there are effects on the near-Earth space environment. There is the possibility that orbit could become inaccessible for generations due to exponentially increasing space debris caused by spalling of satellites and vehicles (Kessler syndrome). Many launched vehicles today are therefore designed to be re-entered after use.

Current and proposed applications of spaceflight range from scientific uses such as Earth observation, space observatories and space exploration to commercial uses such as satellite telecommunications and space tourism.

Most early spaceflight development was paid for by governments. However, today major launch markets such as communications satellites and satellite television are purely commercial, though many of the launchers were originally funded by governments.

Private spaceflight is a rapidly developing area: spaceflight that is not only paid for by corporations or even private individuals, but is often also provided by private spaceflight companies. These companies often assert that much of the previous high cost of access to space was caused by governmental inefficiencies they can avoid. This assertion can be supported by much lower published launch costs for private space launch vehicles such as Falcon 9, developed with private financing. Lower launch costs and excellent safety will be required for applications such as space tourism and especially space colonization to become successful.



Spaceflight Now – The leading source for online space news

NASA’s New Horizons spacecraft made a historic New Year’s encounter with an object nicknamed Ultima Thule in the Kuiper Belt, a billion miles beyond Pluto. The NASA space probe passed Ultima Thule at a distance of around 2,200 miles (3,500 kilometers) at 12:33 a.m. EST (0533 GMT) on Jan. 1, making it the most distant planetary body ever explored up close.


Human spaceflight – Wikipedia

(Image: Inside a space suit on the Canadarm, 1993)

Human spaceflight (also referred to as crewed spaceflight or manned spaceflight) is space travel with a crew or passengers aboard the spacecraft. Spacecraft carrying people may be operated directly by a human crew, remotely operated from ground stations on Earth, or autonomous, able to carry out a specific mission with no human involvement.

The first human spaceflight was launched by the Soviet Union on 12 April 1961 as a part of the Vostok program, with cosmonaut Yuri Gagarin aboard. Humans have been continuously present in space for 18 years and 63 days on the International Space Station. All early human spaceflight was crewed, where at least some of the passengers acted to carry out tasks of piloting or operating the spacecraft. Since 2015, several human-capable spacecraft have been explicitly designed with the ability to operate autonomously.

From the retirement of the US Space Shuttle in 2011 to the first SpaceShipTwo spaceflight in 2018, only Russia and China have maintained human spaceflight capability with the Soyuz program and Shenzhou program. Currently, all expeditions to the International Space Station use Soyuz vehicles, which remain attached to the station to allow quick return if needed. The United States is developing commercial crew transportation to facilitate domestic access to ISS and low Earth orbit, as well as the Orion vehicle for beyond-low Earth orbit applications.

While spaceflight has typically been a government-directed activity, commercial spaceflight has gradually been taking on a greater role. The first private human spaceflight took place on 21 June 2004, when SpaceShipOne conducted a suborbital flight, and a number of non-governmental companies have been working to develop a space tourism industry. NASA has also played a role to stimulate private spaceflight through programs such as Commercial Orbital Transportation Services (COTS) and Commercial Crew Development (CCDev). With its 2011 budget proposals released in 2010,[1] the Obama administration moved towards a model where commercial companies would supply NASA with transportation services of both people and cargo transport to low Earth orbit. The vehicles used for these services could then serve both NASA and potential commercial customers. Commercial resupply of ISS began two years after the retirement of the Shuttle, and commercial crew launches could begin by 2018.[2]

Human spaceflight capability was first developed during the Cold War between the United States and the Soviet Union (USSR), which developed the first intercontinental ballistic missile rockets to deliver nuclear weapons. These rockets were large enough to be adapted to carry the first artificial satellites into low Earth orbit. After the first satellites were launched in 1957 and 1958, the US worked on Project Mercury to launch men singly into orbit, while the USSR pursued the Vostok program to accomplish the same thing. The USSR launched the first human in space, Yuri Gagarin, into a single orbit in Vostok 1 on a Vostok 3KA rocket, on 12 April 1961. The US launched its first astronaut, Alan Shepard, on a suborbital flight aboard Freedom 7 on a Mercury-Redstone rocket, on 5 May 1961. Unlike Gagarin, Shepard manually controlled his spacecraft’s attitude, and landed inside it. The first American in orbit was John Glenn aboard Friendship 7, launched 20 February 1962 on a Mercury-Atlas rocket. The USSR launched five more cosmonauts in Vostok capsules, including the first woman in space, Valentina Tereshkova aboard Vostok 6 on 16 June 1963. The US launched a total of two astronauts in suborbital flight and four into orbit through 1963.

US President John F. Kennedy raised the stakes of the Space Race by setting the goal of landing a man on the Moon and returning him safely by the end of the 1960s.[3] The US started the three-man Apollo program in 1961 to accomplish this, launched by the Saturn family of launch vehicles, and the interim two-man Project Gemini in 1962, which flew 10 missions launched by Titan II rockets in 1965 and 1966. Gemini’s objective was to support Apollo by developing American orbital spaceflight experience and techniques to be used in the Moon mission.[4]

Meanwhile, the USSR proceeded to stretch the limits of their single-pilot Vostok capsule into a two- or three-person Voskhod capsule to compete with Gemini. They were able to launch two orbital flights in 1964 and 1965 and achieved the first spacewalk, made by Alexei Leonov on Voskhod 2 on 8 March 1965. But Voskhod did not have Gemini’s capability to maneuver in orbit, and the program was terminated. The US Gemini flights performed several spacewalks, addressed the problem of astronaut fatigue caused by weightlessness, demonstrated endurance of up to two weeks in a human spaceflight, and achieved the first space rendezvous and dockings of spacecraft.

The US developed the Saturn V rocket necessary to send the Apollo spacecraft to the Moon, and sent Frank Borman, James Lovell, and William Anders into 10 orbits around the Moon in Apollo 8 in December 1968. In July 1969, Apollo 11 accomplished Kennedy’s goal by landing Neil Armstrong and Buzz Aldrin on the Moon on 21 July and returning them safely on 24 July along with Command Module pilot Michael Collins. A total of six Apollo missions landed 12 men on the Moon through 1972, half of whom drove electric-powered vehicles on the surface. Apollo 13 suffered a catastrophic in-flight spacecraft failure. However, the crew of Lovell, Jack Swigert, and Fred Haise survived and returned to Earth safely without landing on the Moon.

Meanwhile, the USSR pursued human lunar orbiting and landing programs. They successfully developed the three-person Soyuz spacecraft for use in the lunar programs, but failed to develop the N1 rocket necessary for a human landing, and discontinued the lunar programs in 1974.[5] Instead they concentrated on the development of space stations, using the Soyuz as a ferry to take cosmonauts to and from the stations. They started with a series of Salyut sortie stations from 1971 to 1986.

After the Apollo program, the US launched the Skylab sortie space station in 1973, manning it for 171 days with three crews aboard Apollo spacecraft. President Richard Nixon and Soviet Premier Leonid Brezhnev negotiated an easing of relations known as détente. As part of this, they negotiated the Apollo-Soyuz Test Project, in which an Apollo spacecraft carrying a special docking adapter module rendezvoused and docked with Soyuz 19 in 1975. The American and Russian crews shook hands in space, achieving a significant diplomatic and symbolic easing of Cold War tensions.

Nixon appointed his Vice President Spiro Agnew to head a Space Task Group in 1969 to recommend follow-on human spaceflight programs after Apollo. The group proposed an ambitious Space Transportation System based on a reusable Space Shuttle which consisted of a winged, internally fueled orbiter stage burning liquid hydrogen, launched by a similar, but larger kerosene-fueled booster stage, each equipped with airbreathing jet engines for powered return to a runway at the Kennedy Space Center launch site. Other components of the system included a permanent modular space station, reusable space tug and nuclear interplanetary ferry, leading to a human expedition to Mars as early as 1986, or as late as 2000, depending on the level of funding allocated. However, Nixon knew the American political climate would not support Congressional funding for such an ambition, and killed proposals for all but the Shuttle, possibly to be followed by the space station. Plans for the Shuttle were scaled back to reduce development risk, cost, and time, replacing the piloted flyback booster with two reusable solid rocket boosters, and the smaller orbiter would use an expendable external propellant tank to feed its hydrogen-fueled main engines. The orbiter would have to make unpowered landings.

The two nations continued to compete rather than cooperate in space, as the US turned to developing the Space Shuttle and planning the space station, dubbed Freedom. The USSR launched three Almaz military sortie stations from 1973 to 1977, disguised as Salyuts. They followed Salyut with the development of Mir, the first modular, semi-permanent space station, the construction of which took place from 1986 to 1996. Mir orbited at an altitude of 354 kilometers (191 nautical miles), at a 51.6° inclination. It was occupied for 4,592 days, and made a controlled reentry in 2001.

The Space Shuttle started flying in 1981, but the US Congress failed to approve sufficient funds to make Freedom a reality. A fleet of four shuttles was built: Columbia, Challenger, Discovery, and Atlantis. A fifth shuttle, Endeavour, was built to replace Challenger, which was destroyed in an accident during launch that killed 7 astronauts on 28 January 1986. Twenty-two Shuttle flights carried a European Space Agency sortie space station called Spacelab in the payload bay from 1983 to 1998.[6]

The USSR copied the reusable Space Shuttle orbiter, which it called Buran. It was designed to be launched into orbit by the expendable Energia rocket, and capable of robotic orbital flight and landing. Unlike the US Shuttle, Buran had no main rocket engines, but like the Shuttle used its orbital maneuvering engines to perform its final orbital insertion. A single unmanned orbital test flight was successfully made in November 1988. A second test flight was planned by 1993, but the program was cancelled due to lack of funding and the dissolution of the Soviet Union in 1991. Two more orbiters were never completed, and the first one was destroyed in a hangar roof collapse in May 2002.

The dissolution of the Soviet Union in 1991 brought an end to the Cold War and opened the door to true cooperation between the US and Russia. The Soviet Soyuz and Mir programs were taken over by the Russian Federal Space Agency, now known as the Roscosmos State Corporation. The Shuttle-Mir Program included American Space Shuttles visiting the Mir space station, Russian cosmonauts flying on the Shuttle, and an American astronaut flying aboard a Soyuz spacecraft for long-duration expeditions aboard Mir.

In 1993, President Bill Clinton secured Russia’s cooperation in converting the planned Space Station Freedom into the International Space Station (ISS). Construction of the station began in 1998. The station orbits at an altitude of 409 kilometers (221 nmi) and an inclination of 51.65°.

The Space Shuttle was retired in 2011 after 135 orbital flights, several of which helped assemble, supply, and crew the ISS. Columbia was destroyed in another accident during reentry, which killed 7 astronauts on 1 February 2003.

After Russia’s launch of Sputnik 1 in 1957, Chairman Mao Zedong intended to place a Chinese satellite in orbit by 1959 to celebrate the 10th anniversary of the founding of the People’s Republic of China (PRC).[7] However, China did not successfully launch its first satellite until 24 April 1970. Mao and Premier Zhou Enlai decided on 14 July 1967 that the PRC should not be left behind, and started China’s own human spaceflight program.[8] The first attempt, the Shuguang spacecraft copied from the US Gemini, was cancelled on 13 May 1972.

China later designed the Shenzhou spacecraft resembling the Russian Soyuz, and became the third nation to achieve independent human spaceflight capability by launching Yang Liwei on a 21-hour flight aboard Shenzhou 5 on 15 October 2003. China launched the Tiangong-1 space station on 29 September 2011, and two sortie missions to it: Shenzhou 9, 16–29 June 2012, with China’s first female astronaut Liu Yang; and Shenzhou 10, 13–26 June 2013. The station was retired on 21 March 2016 and remains in a 363-kilometer (196-nautical-mile), 42.77° inclination orbit.

The European Space Agency began development in 1987 of the Hermes spaceplane, to be launched on the Ariane 5 expendable launch vehicle. The project was cancelled in 1992, when it became clear that neither cost nor performance goals could be achieved. No Hermes shuttles were ever built.

Japan began development in the 1980s of the HOPE-X experimental spaceplane, to be launched on its H-IIA expendable launch vehicle. A string of failures in 1998 led to funding reduction, and the project’s cancellation in 2003.

Under the Bush administration, the Constellation Program included plans for retiring the Shuttle program and replacing it with the capability for spaceflight beyond low Earth orbit. In the 2011 United States federal budget, the Obama administration cancelled Constellation for being over budget and behind schedule while not innovating and investing in critical new technologies.[9] For beyond-low-Earth-orbit human spaceflight, NASA is developing the Orion spacecraft to be launched by the Space Launch System. Under the Commercial Crew Development plan, NASA will rely on transportation services provided by the private sector to reach low Earth orbit, such as SpaceX’s Falcon 9/Dragon V2, Sierra Nevada Corporation’s Dream Chaser, or Boeing’s CST-100. The period between the retirement of the Shuttle in 2011 and the first launch to space of SpaceShipTwo on December 13, 2018, similar to the gap between the end of Apollo in 1975 and the first Space Shuttle flight in 1981, is referred to by a presidential Blue Ribbon Committee as the U.S. human spaceflight gap.[10]

Since the early 2000s, a variety of private spaceflight ventures have been undertaken. Several of the companies, including Blue Origin, SpaceX, Virgin Galactic, and Sierra Nevada have explicit plans to advance human spaceflight. As of 2016, all four of those companies have development programs underway to fly commercial passengers.

A commercial suborbital spacecraft aimed at the space tourism market, SpaceShipTwo, is being developed by Virgin Galactic; it reached space in December 2018.[11][12] Blue Origin has begun a multi-year test program of their New Shepard vehicle and carried out six successful uncrewed test flights in 2015–2016. Blue Origin plans to fly “test passengers” in Q2 2017, and to initiate commercial flights in 2018.[13][14]

SpaceX and Boeing are both developing passenger-capable orbital space capsules as of 2015, planning to fly NASA astronauts to the International Space Station by 2018. SpaceX will be carrying passengers on Dragon 2 launched on a Falcon 9 launch vehicle. Boeing will be doing it with their CST-100 launched on a United Launch Alliance Atlas V launch vehicle.[15] Development funding for these orbital-capable technologies has been provided by a mix of government and private funds, with SpaceX providing a greater portion of total development funding for this human-carrying capability from private investment.[16][17] There have been no public announcements of commercial offerings for orbital flights from either company, although both companies are planning some flights with their own private, not NASA, astronauts on board.

Yuri Gagarin became the first human in space and the first to orbit the Earth, on Vostok 1 on April 12, 1961.

Alan Shepard became the first American to reach space on Freedom 7 on May 5, 1961.

John Glenn became the first American to orbit the Earth on February 20, 1962.

Valentina Tereshkova became the first woman to orbit the Earth on June 16, 1963.

Either Robert M. White or Joseph A. Walker (depending on the definition of the space border) became the first human to pilot a spaceplane, the North American X-15, on July 17, 1962 (White) or July 19, 1963 (Walker).

Alexey Leonov became the first human to walk in space on March 18, 1965.

Frank Borman, Jim Lovell, and William Anders became the first humans to travel beyond low Earth orbit (LEO) on Dec 21–27, 1968, when the Apollo 8 mission took them to 10 orbits around the Moon and back.

Neil Armstrong and Buzz Aldrin became the first humans to land on the Moon on July 20, 1969.

Svetlana Savitskaya became the first woman to walk in space on July 25, 1984.

Sally Ride became the first American woman in space in 1983. Eileen Collins was the first female shuttle pilot, and with shuttle mission STS-93 in 1999 she became the first woman to command a U.S. spacecraft.

The longest single human spaceflight is that of Valeri Polyakov, who left Earth on 8 January 1994, and did not return until 22 March 1995 (a total of 437 days 17 h 58 min 16 s). Sergei Krikalyov has spent the most time of anyone in space, 803 days, 9 hours, and 39 minutes altogether. The longest period of continuous human presence in space is 18 years and 63 days on the International Space Station, exceeding the previous record of almost 10 years (or 3,634 days) held by Mir, spanning the launch of Soyuz TM-8 on 5 September 1989 to the landing of Soyuz TM-29 on 28 August 1999.

Yang Liwei became the first Chinese citizen in space and in Earth orbit, on Shenzhou 5 on October 15, 2003.

For many years, only the USSR (later Russia) and the United States had their own astronauts. Citizens of other nations flew in space, beginning with the flight of Vladimir Remek, a Czech, on a Soviet spacecraft on 2 March 1978, in the Interkosmos programme. As of 2010, citizens from 38 nations (including space tourists) have flown in space aboard Soviet, American, Russian, and Chinese spacecraft.

Human spaceflight programs have been conducted by the former Soviet Union and current Russian Federation, the United States, the People’s Republic of China and by private spaceflight company Scaled Composites.

(Map legend: countries that currently have human spaceflight programs; countries with confirmed and dated plans for human spaceflight programs; countries with plans for human spaceflight in its simplest form (suborbital spaceflight, etc.); countries with plans for human spaceflight in its extreme form (space stations, etc.); and countries that once had official plans for human spaceflight programs but have since abandoned them.)

Space vehicles are spacecraft used for transportation between the Earth’s surface and outer space, or between locations in outer space. The following space vehicles and spaceports are currently used for launching human spaceflights:

The following space stations are currently maintained in Earth orbit for human occupation:

Numerous private companies attempted human spaceflight programs in an effort to win the $10 million Ansari X Prize. The first private human spaceflight took place on 21 June 2004, when SpaceShipOne conducted a suborbital flight. SpaceShipOne captured the prize on 4 October 2004, when it accomplished two consecutive flights within one week. SpaceShipTwo, launching from the carrier aircraft White Knight Two, is planned to conduct regular suborbital space tourism.[18]

Most of the time, the only humans in space are those aboard the ISS, whose crew of six spends up to six months at a time in low Earth orbit.

NASA and ESA use the term “human spaceflight” to refer to their programs of launching people into space. These endeavors have also been referred to as “manned space missions,” though because of gender specificity this is no longer official parlance according to NASA style guides.[19]

India has declared it will send humans to space on its orbital vehicle Gaganyaan by 2022. The Indian Space Research Organisation (ISRO) began work on this project in 2006.[20] The objective is to carry a crew of two to low Earth orbit (LEO) and return them safely for a water-landing at a predefined landing zone. The program is proposed to be implemented in defined phases. Currently, the activities are progressing with a focus on the development of critical technologies for subsystems such as the Crew Module (CM), Environmental Control and Life Support System (ECLSS), Crew Escape System, etc. The department has initiated activities to study technical and managerial issues related to crewed missions. The program envisages the development of a fully autonomous orbital vehicle carrying 2 or 3 crew members to about 300 km low Earth orbit and their safe return.

NASA is developing a plan to land humans on Mars by the 2030s. The first step in this mission begins sometime during 2020, when NASA plans to send an uncrewed craft into deep space to retrieve an asteroid.[21] The asteroid will be pushed into the Moon’s orbit, and studied by astronauts aboard Orion, NASA’s first human spacecraft in a generation.[22] Orion’s crew will return to Earth with samples of the asteroid and their collected data. In addition to broadening America’s space capabilities, this mission will test newly developed technology, such as solar electric propulsion, which uses solar arrays for energy and requires ten times less propellant than the conventional chemical counterpart used for powering space shuttles to orbit.[23]

Several other countries and space agencies have announced and begun human spaceflight programs using their own technology, including Japan (JAXA), Iran (ISA) and Malaysia (MNSA).

A number of spacecraft have been proposed over the decades that might facilitate spaceliner passenger travel. Somewhat analogous to travel by airliner after the middle of the 20th century, these vehicles are proposed to transport a large number of passengers to destinations in space, or to destinations on Earth which travel through space. To date, none of these concepts have been built, although a few vehicles that carry fewer than 10 persons are currently in the flight testing phase of their development process.

One large spaceliner concept currently in early development is the SpaceX BFR which, in addition to replacing the Falcon 9 and Falcon Heavy launch vehicles in the legacy Earth-orbit market after 2020, has been proposed by SpaceX for long-distance commercial travel on Earth. This is to transport people on point-to-point suborbital flights between two points on Earth in under one hour, also known as “Earth-to-Earth,” and carrying 100+ passengers.[24][25][26]

Small spaceplane or small capsule suborbital spacecraft have been under development for the past decade or so and, as of 2017, at least one of each type is under development. Both Virgin Galactic and Blue Origin are in active development, with the SpaceShipTwo spaceplane and the New Shepard capsule, respectively. Both would carry approximately a half-dozen passengers up to space for a brief time of zero gravity before returning to the same location from where the trip began. XCOR Aerospace had been developing the Lynx single-passenger spaceplane since the 2000s[27][28][29] but development was halted in 2017.[30]

There are two main sources of hazard in space flight: those due to the environment of space which make it hostile to the human body, and the potential for mechanical malfunctions of the equipment required to accomplish space flight.

Planners of human spaceflight missions face a number of safety concerns.

The immediate needs for breathable air and drinkable water are addressed by the life support system of the spacecraft.

Medical consequences such as possible blindness and bone loss have been associated with human space flight.[39][40]

On 31 December 2012, a NASA-supported study reported that spaceflight may harm the brain of astronauts and accelerate the onset of Alzheimer’s disease.[41][42][43]

In October 2015, the NASA Office of Inspector General issued a health hazards report related to space exploration, including a human mission to Mars.[44][45]

On 2 November 2017, scientists reported that significant changes in the position and structure of the brain have been found in astronauts who have taken trips in space, based on MRI studies. Astronauts who took longer space trips showed greater brain changes.[46][47]

Researchers in 2018 reported, after detecting the presence on the International Space Station (ISS) of five Enterobacter bugandensis bacterial strains, none pathogenic to humans, that microorganisms on ISS should be carefully monitored to continue assuring a medically healthy environment for astronauts.[48][49]

Medical data from astronauts in low Earth orbits for long periods, dating back to the 1970s, show several adverse effects of a microgravity environment: loss of bone density, decreased muscle strength and endurance, postural instability, and reductions in aerobic capacity. Over time these deconditioning effects can impair astronauts’ performance or increase their risk of injury.[50]

In a weightless environment, astronauts put almost no weight on the back muscles or leg muscles used for standing up, which causes them to weaken and get smaller. Astronauts can lose up to twenty per cent of their muscle mass on spaceflights lasting five to eleven days. The consequent loss of strength could be a serious problem in case of a landing emergency.[51] Upon return to Earth from long-duration flights, astronauts are considerably weakened, and are not allowed to drive a car for twenty-one days.[52]

Astronauts experiencing weightlessness will often lose their orientation, get motion sickness, and lose their sense of direction as their bodies try to get used to a weightless environment. When they get back to Earth, or any other mass with gravity, they have to readjust to the gravity and may have problems standing up, focusing their gaze, walking and turning. Importantly, these motor disturbances after a change in gravity level only worsen the longer the exposure to reduced gravity. These changes will affect operational activities including approach and landing, docking, remote manipulation, and emergencies that may happen while landing. This can be a major roadblock to mission success.

In addition, after long space flight missions, male astronauts may experience severe eyesight problems.[54][55][56][57][58] Such eyesight problems may be a major concern for future deep space flight missions, including a crewed mission to the planet Mars.[54][55][56][57][59]

Without proper shielding, the crews of missions beyond low Earth orbit (LEO) might be at risk from high-energy protons emitted by solar flares and associated solar particle events (SPEs). Lawrence Townsend of the University of Tennessee and others have studied the overall most powerful solar storm ever recorded. The flare was seen by the British astronomer Richard Carrington in September 1859. Radiation doses astronauts would receive from a Carrington-type storm could cause acute radiation sickness and possibly even death.[61] Another storm that could have incurred a lethal radiation dose if astronauts were outside the Earth’s protective magnetosphere occurred during the Space Age, in fact, shortly after Apollo 16 landed and before Apollo 17 launched.[62] This solar storm of August 1972 would likely at least have caused acute illness.[63]

Another type of radiation, galactic cosmic rays, presents further challenges to human spaceflight beyond low Earth orbit.[64]

There is also some scientific concern that extended spaceflight might slow down the body’s ability to protect itself against diseases.[65] Some of the problems are a weakened immune system and the activation of dormant viruses in the body. Radiation can cause both short and long term consequences to the bone marrow stem cells which create the blood and immune systems. Because the interior of a spacecraft is so small, a weakened immune system and more active viruses in the body can lead to a fast spread of infection.

During long missions, astronauts are isolated and confined into small spaces. Depression, cabin fever and other psychological problems may impact the crew’s safety and mission success.[66]

Astronauts may not be able to quickly return to Earth or receive medical supplies, equipment or personnel if a medical emergency occurs. The astronauts may have to rely for long periods on their limited existing resources and medical advice from the ground.

Space flight requires much higher velocities than ground or air transportation, which in turn requires the use of high energy density propellants for launch, and the dissipation of large amounts of energy, usually as heat, for safe reentry through the Earth’s atmosphere.

Since rockets carry the potential for fire or explosive destruction, space capsules generally employ some sort of launch escape system, consisting either of a tower-mounted solid fuel rocket to quickly carry the capsule away from the launch vehicle (employed on Mercury, Apollo, and Soyuz), or else ejection seats (employed on Vostok and Gemini) to carry astronauts out of the capsule and away for individual parachute landing. The escape tower is discarded at some point before the launch is complete, at a point where an abort can be performed using the spacecraft’s engines.

Such a system is not always practical for multiple crew member vehicles (particularly spaceplanes), depending on location of egress hatch(es). When the single-hatch Vostok capsule was modified to become the two- or three-person Voskhod, the single-cosmonaut ejection seat could not be used, and no escape tower system was added. The two Voskhod flights in 1964 and 1965 avoided launch mishaps. The Space Shuttle carried ejection seats and escape hatches for its pilot and copilot in early flights, but these could not be used for passengers who sat below the flight deck on later flights, and so were discontinued.

There have only been two in-flight launch aborts of a crewed flight. The first occurred on Soyuz 18a on 5 April 1975. The abort occurred after the launch escape system had been jettisoned, when the launch vehicle’s spent second stage failed to separate before the third stage ignited. The vehicle strayed off course, and the crew separated the spacecraft and fired its engines to pull it away from the errant rocket. Both cosmonauts landed safely. The second occurred on 11 October 2018 with the launch of Soyuz MS-10. Again, both crew members survived.

In the only use of a launch escape system on a crewed flight, the planned Soyuz T-10a launch on 26 September 1983 was aborted by a launch vehicle fire 90 seconds before liftoff. Both cosmonauts aboard landed safely.

The only crew fatality during launch occurred on 28 January 1986, when the Space Shuttle Challenger broke apart 73 seconds after liftoff, due to failure of a solid rocket booster seal which caused separation of the booster and failure of the external fuel tank, resulting in explosion of the fuel. All seven crew members were killed.

The single pilot of Soyuz 1, Vladimir Komarov was killed when his capsule’s parachutes failed during an emergency landing on 24 April 1967, causing the capsule to crash.

The crew of seven aboard the Space Shuttle Columbia were killed on reentry after completing a successful mission in space on 1 February 2003. A wing leading edge reinforced carbon-carbon heat shield had been damaged by a piece of frozen external tank foam insulation which broke off and struck the wing during launch. Hot reentry gasses entered and destroyed the wing structure, leading to breakup of the orbiter vehicle.

There are two basic choices for an artificial atmosphere: either an Earth-like mixture of oxygen in an inert gas such as nitrogen or helium, or pure oxygen, which can be used at lower than standard atmospheric pressure. A nitrogen-oxygen mixture is used in the International Space Station and Soyuz spacecraft, while low-pressure pure oxygen is commonly used in space suits for extravehicular activity.

Use of a gas mixture carries risk of decompression sickness (commonly known as “the bends”) when transitioning to or from the pure oxygen space suit environment. There have also been instances of injury and fatalities caused by suffocation in the presence of too much nitrogen and not enough oxygen.

A pure oxygen atmosphere carries risk of fire. The original design of the Apollo spacecraft used pure oxygen at greater than atmospheric pressure prior to launch. An electrical fire started in the cabin of Apollo 1 during a ground test at Cape Kennedy Air Force Station Launch Complex 34 on 27 January 1967, and spread rapidly. The high pressure (increased even higher by the fire) prevented removal of the plug door hatch cover in time to rescue the crew. All three, Gus Grissom, Ed White, and Roger Chaffee, were killed.[70] This led NASA to use a nitrogen/oxygen atmosphere before launch, and low pressure pure oxygen only in space.

The March 1966 Gemini 8 mission was aborted in orbit when an attitude control system thruster stuck in the on position, sending the craft into a dangerous spin which threatened the lives of Neil Armstrong and David Scott. Armstrong had to shut the control system off and use the reentry control system to stop the spin. The craft made an emergency reentry and the astronauts landed safely. The most probable cause was determined to be an electrical short due to a static electricity discharge, which caused the thruster to remain powered even when switched off. The control system was modified to put each thruster on its own isolated circuit.

The third lunar landing expedition, Apollo 13 in April 1970, was aborted and the lives of the crew, James Lovell, Jack Swigert and Fred Haise, were threatened by failure of a cryogenic liquid oxygen tank en route to the Moon. The tank burst when electrical power was applied to internal stirring fans in the tank, causing the immediate loss of all of its contents, and also damaging the second tank, causing the loss of its remaining oxygen in a span of 130 minutes. This in turn caused loss of electrical power provided by fuel cells to the command spacecraft. The crew managed to return to Earth safely by using the lunar landing craft as a “life boat”. The tank failure was determined to be caused by two mistakes. The tank’s drain fitting had been damaged when it was dropped during factory testing. This necessitated use of its internal heaters to boil out the oxygen after a pre-launch test, which in turn damaged the fan wiring’s electrical insulation, because the thermostats on the heaters did not meet the required voltage rating due to a vendor miscommunication.

The crew of Soyuz 11 were killed on June 30, 1971, by a combination of mechanical malfunctions: they were asphyxiated due to cabin decompression following separation of their descent capsule from the service module. A cabin ventilation valve had been jolted open at an altitude of 168 kilometres (551,000 ft) by the stronger than expected shock of explosive separation bolts which were designed to fire sequentially, but in fact had fired simultaneously. The loss of pressure became fatal within about 30 seconds.[71]

As of December 2015, 22 crew members have died in accidents aboard spacecraft. Over 100 others have died in accidents during activity directly related to spaceflight or testing.


Artificial intelligence – Wikipedia

Intelligence demonstrated by machines

In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Computer science defines AI research as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] In more detail, Kaplan and Haenlein define AI as a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation.[2] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

The scope of AI is disputed: as machines become increasingly capable, tasks considered to require “intelligence” are often removed from the definition, a phenomenon known as the AI effect, leading to the quip in Tesler’s Theorem, “AI is whatever hasn’t been done yet.”[4] For instance, optical character recognition is frequently excluded from “artificial intelligence”, having become a routine technology.[5] Modern machine capabilities generally classified as AI include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go),[7] autonomously operating cars, and intelligent routing in content delivery networks and military simulations.

Borrowing from the management literature, Kaplan and Haenlein classify artificial intelligence into three different types of AI systems: analytical, human-inspired, and humanized artificial intelligence.[8] Analytical AI has only characteristics consistent with cognitive intelligence: it generates a cognitive representation of the world and uses learning based on past experience to inform future decisions. Human-inspired AI has elements from cognitive as well as emotional intelligence: in addition to cognitive elements, it understands human emotions and considers them in its decision making. Humanized AI shows characteristics of all types of competencies (i.e., cognitive, emotional, and social intelligence) and is able to be self-conscious and self-aware in interactions with others.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[9][10] followed by disappointment and the loss of funding (known as an “AI winter”),[11][12] followed by new approaches, success and renewed funding.[10][13] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[14] These sub-fields are based on technical considerations, such as particular goals (e.g. “robotics” or “machine learning”),[15] the use of particular tools (“logic” or artificial neural networks), or deep philosophical differences.[16][17][18] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[14]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[15] General intelligence is among the field’s long-term goals.[19] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, information engineering, mathematics, psychology, linguistics, philosophy, and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[20] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence which are issues that have been explored by myth, fiction and philosophy since antiquity.[21] Some people also consider AI to be a danger to humanity if it progresses unabated.[22] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[23]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science, software engineering and operations research.[24][13]

Thought-capable artificial beings appeared as storytelling devices in antiquity,[25] and have been common in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots).[26] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[21]

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church-Turing thesis.[27] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed that if a human could not distinguish between responses from a machine and a human, the machine could be considered “intelligent”.[28] The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

The field of AI research was born at a workshop at Dartmouth College in 1956.[30] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[31] They and their students produced programs that the press described as “astonishing”: computers were learning checkers strategies (c. 1954)[33] (and by 1959 were reportedly playing better than the average human),[34] solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.[35] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[36] and laboratories had been established around the world.[37] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved”.[9]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,[11] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[39] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[10] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[12]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[24] The success was due to increasing computational power (see Moore’s law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[40] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

In 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system Watson defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[43] The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[44] as do intelligent personal assistants in smartphones.[45] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[7][46] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[47] who at the time had held the world No. 1 ranking for two years.[48][49] This marked the completion of a significant milestone in the development of artificial intelligence, as Go is an extremely complex game, more so than chess.

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents factual data indicating that error rates in image processing tasks have fallen significantly since 2011.[50] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[13] Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.[50] In a 2017 survey, one in five companies reported they had “incorporated AI in some offerings or processes”.[51][52] Around 2016, China greatly accelerated its government funding; given its large supply of data and its rapidly increasing research output, some observers believe it may be on track to becoming an “AI superpower”.[53][54]

A typical AI perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] An AI’s intended goal function can be simple (“1 if the AI wins a game of Go, 0 otherwise”) or complex (“Do actions mathematically similar to the actions that got you rewards in the past”). Goals can be explicitly defined, or can be induced. If the AI is programmed for “reinforcement learning”, goals can be implicitly induced by rewarding some types of behavior and punishing others.[a] Alternatively, an evolutionary system can induce goals by using a “fitness function” to mutate and preferentially replicate high-scoring AI systems; this is similar to how animals evolved to innately desire certain goals such as finding food, or how dogs can be bred via artificial selection to possess desired traits. Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are somehow implicit in their training data. Such systems can still be benchmarked if the non-goal system is framed as a system whose “goal” is to successfully accomplish its narrow classification task.[57]

AI often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.[b] A complex algorithm is often built on top of other, simpler, algorithms. A simple example of an algorithm is the following recipe for optimal play at tic-tac-toe:

1. If someone has a “threat” (that is, two in a row), take the remaining square. Otherwise,
2. if a move “forks” to create two threats at once, play that move. Otherwise,
3. take the center square if it is free. Otherwise,
4. if your opponent has played in a corner, take the opposite corner. Otherwise,
5. take an empty corner if one exists. Otherwise,
6. take any empty square.
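
A minimal sketch of such a rule-based player is shown below. It is purely illustrative: the board encoding, the helper names, and the simplified corner preference (which folds the opposite-corner rule into a general preference for corners) are assumptions made for this example, not material from the article.

```python
# Rule-based tic-tac-toe move chooser, following the recipe above in simplified form.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winning_or_blocking_move(board, player):
    """Return a square that completes two-in-a-row for `player`, if any."""
    for a, b, c in LINES:
        squares = [board[a], board[b], board[c]]
        if squares.count(player) == 2 and squares.count(" ") == 1:
            return (a, b, c)[squares.index(" ")]
    return None

def creates_fork(board, player, square):
    """True if playing `square` gives `player` two simultaneous threats."""
    trial = board[:]
    trial[square] = player
    threats = sum(
        1 for a, b, c in LINES
        if [trial[a], trial[b], trial[c]].count(player) == 2
        and [trial[a], trial[b], trial[c]].count(" ") == 1
    )
    return threats >= 2

def choose_move(board, me="X", opponent="O"):
    empty = [i for i, s in enumerate(board) if s == " "]
    # 1. Win if possible, otherwise block the opponent's threat.
    for player in (me, opponent):
        square = winning_or_blocking_move(board, player)
        if square is not None:
            return square
    # 2. Create a fork (two threats at once) if possible.
    for square in empty:
        if creates_fork(board, me, square):
            return square
    # 3.-6. Prefer the center, then corners, then any empty square.
    for square in [4, 0, 2, 6, 8] + empty:
        if square in empty:
            return square

board = list("X O  O   ")          # a partially played position
print(choose_move(board))          # 8: blocks the two O's in the right column
```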

Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or “rules of thumb”, that have worked well in the past), or can themselves write other algorithms. Some of the “learners” described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically, if given infinite data, time, and memory, learn to approximate any function, including whatever combination of mathematical functions would best describe the entire world. These learners could therefore, in theory, derive all possible knowledge, by considering every possible hypothesis and matching it against the data. In practice, it is almost never possible to consider every possibility, because of the phenomenon of “combinatorial explosion”, where the amount of time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering broad swaths of possibilities that are unlikely to be fruitful.[59] For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered in turn.[61]
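
As an illustration of how a heuristic prunes the search, the sketch below runs A* on a small grid with a Manhattan-distance heuristic, so paths that head away from the goal are rarely expanded. The grid, coordinates, and helper names are assumptions made for this example.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Generic A* search: returns a list of states from start to goal, or None."""
    frontier = [(heuristic(start, goal), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt, step_cost in neighbors(state):
            new_cost = cost + step_cost
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                priority = new_cost + heuristic(nxt, goal)
                heapq.heappush(frontier, (priority, new_cost, nxt, path + [nxt]))
    return None

# Toy 5x5 grid: every move costs 1; the Manhattan-distance heuristic steers the
# search toward the goal and avoids expanding detours in the opposite direction.
def grid_neighbors(pos, size=5):
    x, y = pos
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < size and 0 <= ny < size:
            yield (nx, ny), 1

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(a_star((0, 0), (4, 4), grid_neighbors, manhattan))
```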

The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): “If an otherwise healthy adult has a fever, then they may have influenza”. A second, more general, approach is Bayesian inference: “If the current patient has a fever, adjust the probability they have influenza in such-and-such way”. The third major approach, extremely popular in routine business AI applications, is analogizers such as SVM and nearest-neighbor: “After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza”. A fourth approach is harder to intuitively understand, but is inspired by how the brain’s machinery works: the artificial neural network approach uses artificial “neurons” that learn by comparing the network’s output to the desired output and altering the strengths of the connections between its internal neurons to “reinforce” connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use multiple of these approaches, alongside many other AI and non-AI algorithms;[62] the best approach is often different depending on the problem.[64]

Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as “since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well”. They can be nuanced, such as “X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist”. Learners also work on the basis of “Occam’s razor”: The simplest theory that explains the data is the likeliest. Therefore, to be successful, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better. Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is. Besides classic overfitting, learners can also disappoint by “learning the wrong lesson”. A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers don’t determine the spatial relationship between components of the picture; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects. Faintly superimposing such a pattern on a legitimate image results in an “adversarial” image that the system misclassifies.[c][67][68][69]

Compared with humans, existing AI lacks several features of human “commonsense reasoning”; most notably, humans have powerful mechanisms for reasoning about “naïve physics” such as space, time, and physical interactions. This enables even young children to easily make inferences like “If I roll this pen off a table, it will fall on the floor”. Humans also have a powerful mechanism of “folk psychology” that helps them to interpret natural-language sentences such as “The city councilmen refused the demonstrators a permit because they advocated violence”. (A generic AI has difficulty inferring whether the councilmen or the demonstrators are the ones alleged to be advocating violence.)[72][73][74] This lack of “common knowledge” means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location or the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[75][76][77]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[15]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[78] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[79]

These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a “combinatorial explosion”: they became exponentially slower as the problems grew larger.[59] In fact, even humans rarely use the step-by-step deduction that early AI research was able to model. They solve most of their problems using fast, intuitive judgements.[80]

Knowledge representation[81] and knowledge engineering[82] are central to classical AI research. Some “expert systems” attempt to gather together explicit knowledge possessed by experts in some narrow domain. In addition, some projects attempt to gather the “commonsense knowledge” known to the average person into a database containing extensive knowledge about the world. Among the things a comprehensive commonsense knowledge base would contain are: objects, properties, categories and relations between objects;[83] situations, events, states and time;[84] causes and effects;[85] knowledge about knowledge (what we know about what other people know);[86] and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[87] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge[88] by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations can be used in content-based indexing and retrieval,[89] scene interpretation,[90] clinical decision support,[91] knowledge discovery (mining “interesting” and actionable inferences from large databases),[92] and other areas.[93]
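
As a toy illustration of the idea (the concepts, the “is-a” encoding, and the helper function below are assumptions made for this example), a tiny ontology can be represented as a mapping from each concept to its more general parent, which already lets software answer simple category questions:

```python
# A toy ontology: each concept maps to its more general parent concept ("is-a").
IS_A = {
    "poodle": "dog",
    "dog": "mammal",
    "mammal": "animal",
    "sparrow": "bird",
    "bird": "animal",
}

def is_a(concept, category):
    """Follow 'is-a' links upward to decide whether concept falls under category."""
    while concept is not None:
        if concept == category:
            return True
        concept = IS_A.get(concept)
    return False

print(is_a("poodle", "animal"))   # True: poodle -> dog -> mammal -> animal
print(is_a("sparrow", "mammal"))  # False
```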

Among the most difficult problems in knowledge representation are default reasoning and the qualification problem, the sheer breadth of commonsense knowledge, and the subsymbolic form of some commonsense knowledge.

Intelligent agents must be able to set goals and achieve them.[100] They need a way to visualize the future (a representation of the state of the world, with the ability to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of available choices.[101]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[102] However, if the agent is not the only actor, then it requires that the agent can reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[103]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[104]

Machine learning, a fundamental concept of AI research since the field’s inception,[105] is the study of computer algorithms that improve automatically through experience.[106][107]

Unsupervised learning is the ability to find patterns in a stream of input, without requiring a human to label the inputs first.[108] Supervised learning includes both classification and numerical regression, which requires a human to label the input data first. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change.[107] Both classifiers and regression learners can be viewed as “function approximators” trying to learn an unknown (possibly implicit) function; for example, a spam classifier can be viewed as learning a function that maps from the text of an email to one of two categories, “spam” or “not spam”. Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[109] In reinforcement learning[110] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
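
A minimal sketch of supervised regression as function approximation is shown below: it fits a straight line y ≈ a*x + b to labeled (input, output) pairs by ordinary least squares. The data and variable names are illustrative assumptions, not drawn from the article.

```python
# Supervised regression as function approximation: fit y = a*x + b to labeled
# examples by ordinary least squares (data and variable names are illustrative).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 8.1, 9.9]          # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x

print(f"learned function: y = {a:.2f}x + {b:.2f}")
print("prediction for x = 6:", a * 6 + b)   # generalizes beyond the labeled data
```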

Natural language processing[111] (NLP) gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural-language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[112] and machine translation.[113] Many current approaches use word co-occurrence frequencies to construct syntactic representations of text. “Keyword spotting” strategies for search are popular and scalable but dumb; a search query for “dog” might only match documents with the literal word “dog” and miss a document with the word “poodle”. “Lexical affinity” strategies use the occurrence of words such as “accident” to assess the sentiment of a document. Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well. Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications. Beyond semantic NLP, the ultimate goal of “narrative” NLP is to embody a full understanding of commonsense reasoning.[114]

Machine perception[115] is the ability to use input from sensors (such as cameras (visible spectrum or infrared), microphones, wireless signals, and active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Applications include speech recognition,[116] facial recognition, and object recognition.[117] Computer vision is the ability to analyze visual input. Such input is usually ambiguous; a giant, fifty-meter-tall pedestrian far away may produce exactly the same pixels as a nearby normal-sized pedestrian, requiring the AI to judge the relative likelihood and reasonableness of different interpretations, for example by using its “object model” to assess that fifty-meter pedestrians do not exist.[118]

AI is heavily used in robotics.[119] Advanced robotic arms and other industrial robots, widely used in modern factories, can learn from experience how to move efficiently despite the presence of friction and gear slippage.[120] A modern mobile robot, when given a small, static, and visible environment, can easily determine its location and map its environment; however, dynamic environments, such as (in endoscopy) the interior of a patient’s breathing body, pose a greater challenge. Motion planning is the process of breaking down a movement task into “primitives” such as individual joint movements. Such movement often involves compliant motion, a process where movement requires maintaining physical contact with an object.[122][123] Moravec’s paradox generalizes that low-level sensorimotor skills that humans take for granted are, counterintuitively, difficult to program into a robot; the paradox is named after Hans Moravec, who stated in 1988 that “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility”.[124][125] This is attributed to the fact that, unlike checkers, physical dexterity has been a direct target of natural selection for millions of years.[126]

Moravec’s paradox can be extended to many forms of social intelligence.[128][129] Distributed multi-agent coordination of autonomous vehicles remains a difficult problem.[130] Affective computing is an interdisciplinary umbrella that comprises systems which recognize, interpret, process, or simulate human affects. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal affect analysis (see multimodal sentiment analysis), wherein AI classifies the affects displayed by a videotaped subject.[134]

In the long run, social skills and an understanding of human emotion and game theory would be valuable to a social agent. Being able to predict the actions of others by understanding their motives and emotional states would allow an agent to make better decisions. Some computer systems mimic human emotion and expressions to appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human-computer interaction.[135] Similarly, some virtual assistants are programmed to speak conversationally or even to banter humorously; this tends to give naïve users an unrealistic conception of how intelligent existing computer agents actually are.[136]

Historically, projects such as the Cyc knowledge base (1984) and the massive Japanese Fifth Generation Computer Systems initiative (1982-1992) attempted to cover the breadth of human cognition. These early projects failed to escape the limitations of non-quantitative symbolic logic models and, in retrospect, greatly underestimated the difficulty of cross-domain AI. Nowadays, the vast majority of current AI researchers work instead on tractable “narrow AI” applications (such as medical diagnosis or automobile navigation).[137] Many researchers predict that such “narrow AI” work in different individual domains will eventually be incorporated into a machine with artificial general intelligence (AGI), combining most of the narrow skills mentioned in this article and at some point even exceeding human ability in most or all these areas.[19][138] Many advances have general, cross-domain significance. One high-profile example is that DeepMind in the 2010s developed a “generalized artificial intelligence” that could learn many diverse Atari games on its own, and later developed a variant of the system which succeeds at sequential learning.[139][140][141] Besides transfer learning,[142] hypothetical AGI breakthroughs could include the development of reflective architectures that can engage in decision-theoretic metareasoning, and figuring out how to “slurp up” a comprehensive knowledge base from the entire unstructured Web. Some argue that some kind of (currently-undiscovered) conceptually straightforward, but mathematically difficult, “Master Algorithm” could lead to AGI. Finally, a few “emergent” approaches look to simulating human intelligence extremely closely, and believe that anthropomorphic features like an artificial brain or simulated child development may someday reach a critical point where general intelligence emerges.[144][145]

Many of the problems in this article may also require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). A problem like machine translation is considered “AI-complete”, because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[146] A few of the most longstanding questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[16] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[17]

In the 1940s and 1950s, a number of researchers explored the connection between neurobiology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[147] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and as described below, each one developed its own style of research. John Haugeland named these symbolic approaches to AI “good old fashioned AI” or “GOFAI”.[148] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or artificial neural networks were abandoned or pushed into the background.[149] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 1980s.[150][151]

Unlike Simon and Newell, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem-solving, regardless of whether people used the same algorithms.[16] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[152] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[153]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[154] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford).[17] Commonsense knowledge bases (such as Doug Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.[155]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[156] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[39] A key component of the system architecture for all expert systems is the knowledge base, which stores facts and rules that illustrate AI.[157] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[18] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[158] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[159][160]

Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle of the 1980s.[163] Artificial neural networks are an example of soft computing: they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[164]

Much of traditional GOFAI got bogged down on ad hoc patches to symbolic computation that worked on their own toy models but failed to generalize to real-world results. However, around the 1990s, AI researchers adopted sophisticated mathematical tools, such as hidden Markov models (HMM), information theory, and normative Bayesian decision theory to compare or to unify competing architectures. The shared mathematical language permitted a high level of collaboration with more established fields (like mathematics, economics or operations research).[d] Compared with GOFAI, new “statistical learning” techniques such as HMM and neural networks were gaining higher levels of accuracy in many practical domains such as data mining, without necessarily acquiring semantic understanding of the datasets. The increased successes with real-world data led to increasing emphasis on comparing different approaches against shared test data to see which approach performed best in a broader context than that provided by idiosyncratic toy models; AI research was becoming more scientific. Nowadays results of experiments are often rigorously measurable, and are sometimes (with difficulty) reproducible.[40][165] Different statistical learning techniques have different limitations; for example, basic HMM cannot model the infinite possible combinations of natural language. Critics note that the shift from GOFAI to statistical learning is often also a shift away from Explainable AI. In AGI research, some scholars caution against over-reliance on statistical learning, and argue that continuing research into GOFAI will still be necessary to attain general intelligence.

AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[174] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[175] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[176] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[120] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[177] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that prioritize choices in favor of those that are more likely to reach a goal and to do so in a shorter number of steps. In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies.[178] Heuristics limit the search for solutions into a smaller sample size.

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[179]
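
The following sketch shows blind hill climbing on a one-dimensional landscape with a single peak; the function, step size, and iteration limit are illustrative assumptions made for the example.

```python
import random

# Blind hill climbing: start from a random guess and keep taking small uphill
# steps until no neighboring point is better (a local optimum).
def f(x):
    return -(x - 3.0) ** 2 + 9.0          # single peak at x = 3

def hill_climb(step=0.01, iterations=10_000):
    x = random.uniform(-10, 10)           # random starting point on the landscape
    for _ in range(iterations):
        candidates = (x + step, x - step)
        best = max(candidates, key=f)
        if f(best) <= f(x):               # no uphill neighbor: stop climbing
            break
        x = best
    return x

print(round(hill_climb(), 2))             # approximately 3.0
```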

Evolutionary computation uses a form of optimization search. For example, evolutionary algorithms may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Classic evolutionary algorithms include genetic algorithms, gene expression programming, and genetic programming.[180] Alternatively, distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[181][182]
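
A minimal genetic-algorithm sketch in this spirit is shown below, evolving 8-bit strings toward the maximum number of 1s; the population size, mutation rate, and fitness function are illustrative assumptions.

```python
import random

# A minimal genetic algorithm: the population members are the "guesses",
# and fitness-based selection plus mutation and crossover refine them.
def fitness(bits):
    return sum(bits)                      # count of 1s ("OneMax" toy problem)

def mutate(bits, rate=0.1):
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                         # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best), best)                            # typically eight 1s
```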

Logic[183] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[184] and inductive logic programming is a method for learning.[185]

Several different forms of logic are used in AI research. Propositional logic[186] involves truth functions such as “or” and “not”. First-order logic[187] adds quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy set theory assigns a “degree of truth” (between 0 and 1) to vague statements such as “Alice is old” (or rich, or tall, or hungry) that are too linguistically imprecise to be completely true or false. Fuzzy logic is successfully used in control systems to allow experts to contribute vague rules such as “if you are close to the destination station and moving fast, increase the train’s brake pressure”; these vague rules can then be numerically refined within the system. Fuzzy logic fails to scale well in knowledge bases; many AI researchers question the validity of chaining fuzzy-logic inferences.[e][189][190]
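
The sketch below illustrates a single fuzzy rule in the spirit of the train-braking example above, with fuzzy AND taken as the minimum of the two degrees of truth; the membership functions and numbers are assumptions made for this example.

```python
# A toy fuzzy rule: "if close to the station and moving fast, increase brake pressure".
def close_to_station(distance_m):
    # degree of truth falls from 1 at 0 m to 0 at 500 m
    return max(0.0, min(1.0, 1.0 - distance_m / 500.0))

def moving_fast(speed_kmh):
    # degree of truth rises from 0 at 20 km/h to 1 at 80 km/h
    return max(0.0, min(1.0, (speed_kmh - 20.0) / 60.0))

def brake_pressure(distance_m, speed_kmh):
    # fuzzy AND is taken here as the minimum of the two degrees of truth
    rule_strength = min(close_to_station(distance_m), moving_fast(speed_kmh))
    return 100.0 * rule_strength          # percent of maximum brake pressure

print(brake_pressure(100, 70))            # 80.0: close and fast, brake hard
print(brake_pressure(450, 30))            # 10.0: far and slow, brake gently
```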

Default logics, non-monotonic logics and circumscription[95] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[83] situation calculus, event calculus and fluent calculus (for representing events and time);[84] causal calculus;[85] belief calculus;[191] and modal logics.[86]

Overall, qualitative symbolic logic is brittle and scales poorly in the presence of noise or other uncertainty. Exceptions to rules are numerous, and it is difficult for logical systems to function in the presence of contradictory rules.[193]

Many problems in AI (in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[194]

Bayesian networks[195] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[196] learning (using the expectation-maximization algorithm),[f][198] planning (using decision networks)[199] and perception (using dynamic Bayesian networks).[200] Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[200] Compared with symbolic logic, formal Bayesian inference is computationally expensive. For inference to be tractable, most observations must be conditionally independent of one another. Complicated graphs with diamonds or other “loops” (undirected cycles) can require a sophisticated method such as Markov Chain Monte Carlo, which spreads an ensemble of random walkers throughout the Bayesian network and attempts to converge to an assessment of the conditional probabilities. Bayesian networks are used on Xbox Live to rate and match players; wins and losses are “evidence” of how good a player is. AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve.
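
As a minimal illustration of Bayesian inference on a two-node network, the posterior probability of a cause given observed evidence follows directly from Bayes’ rule; the probabilities below are made up for the example, not taken from the article.

```python
# A two-node Bayesian network (Flu -> Fever) evaluated by exact enumeration.
p_flu = 0.05                                  # prior P(flu)
p_fever_given_flu = 0.90                      # P(fever | flu)
p_fever_given_no_flu = 0.10                   # P(fever | no flu)

# Bayes' rule: P(flu | fever) = P(fever | flu) * P(flu) / P(fever)
p_fever = (p_fever_given_flu * p_flu
           + p_fever_given_no_flu * (1 - p_flu))
p_flu_given_fever = p_fever_given_flu * p_flu / p_fever

print(round(p_flu_given_fever, 3))            # about 0.321
```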

A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[201] and information value theory.[101] These tools include models such as Markov decision processes,[202] dynamic decision networks,[200] game theory and mechanism design.[203]
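
A small sketch of one such model, a Markov decision process solved by value iteration, is shown below; the states, rewards, and discount factor are illustrative assumptions made for the example.

```python
# Value iteration for a tiny Markov decision process: three states on a line,
# the agent can move left or right, and only reaching the rightmost state pays.
states = [0, 1, 2]
actions = {"left": -1, "right": +1}
gamma = 0.9                                   # discount factor

def step(state, action):
    nxt = min(max(state + actions[action], 0), 2)
    reward = 1.0 if nxt == 2 else 0.0
    return nxt, reward

V = {s: 0.0 for s in states}
for _ in range(100):                          # repeat Bellman backups until stable
    V = {s: max(r + gamma * V[nxt]
                for nxt, r in (step(s, a) for a in actions))
         for s in states}

print({s: round(v, 2) for s, v in V.items()})  # states nearer the reward have higher utility
```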

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[204]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree[205] is perhaps the most widely used machine learning algorithm. Other widely used classifiers are the neural network,[207] k-nearest neighbor algorithm,[g][209] kernel methods such as the support vector machine (SVM),[h][211] Gaussian mixture model,[212] and the extremely popular naive Bayes classifier.[i][214] Classifier performance depends greatly on the characteristics of the data to be classified, such as the dataset size, distribution of samples across classes, the dimensionality, and the level of noise. Model-based classifiers perform well if the assumed model is an extremely good fit for the actual data. Otherwise, if no matching model is available, and if accuracy (rather than speed or scalability) is the sole concern, conventional wisdom is that discriminative classifiers (especially SVM) tend to be more accurate than model-based classifiers such as “naive Bayes” on most practical data sets.[215]
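
A minimal k-nearest-neighbor classifier, one of the methods named above, can be sketched as follows; the two-dimensional data points and labels are illustrative assumptions.

```python
from collections import Counter

# k-nearest-neighbor classification: label a new point by majority vote of
# the k closest labeled training examples.
training = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"), ((0.9, 1.1), "cat"),
            ((5.0, 5.0), "dog"), ((5.2, 4.8), "dog"), ((4.9, 5.3), "dog")]

def classify(point, k=3):
    def distance(example):
        (x, y), _ = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    nearest = sorted(training, key=distance)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(classify((1.1, 0.9)))   # "cat": all three nearest neighbors are cats
print(classify((4.5, 5.0)))   # "dog"
```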

Neural networks, or neural nets, were inspired by the architecture of neurons in the human brain. A simple “neuron” N accepts input from multiple other neurons, each of which, when activated (or “fired”), casts a weighted “vote” for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed “fire together, wire together”) is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. The net forms “concepts” that are distributed among a subnetwork of shared[j] neurons that tend to fire together; a concept meaning “leg” might be coupled with a subnetwork meaning “foot” that includes the sound for “foot”. Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural nets can learn both continuous functions and, surprisingly, digital logical operations. Neural networks’ early successes included predicting the stock market and (in 1995) a mostly self-driving car.[k] In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending; for example, AI-related M&A in 2017 was over 25 times as large as in 2015.[218][219]
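
A single artificial “neuron” with a Hebbian-style update can be sketched as follows. The weights, threshold, learning rate, and input pattern are illustrative assumptions, and real networks adjust their weights against training data rather than one repeated pattern.

```python
# A single artificial "neuron": inputs cast weighted votes, a threshold decides
# whether it fires, and a Hebbian "fire together, wire together" rule strengthens
# the connections from inputs that were active when the neuron fired.
weights = [0.3, 0.3, 0.3]
threshold = 0.5
learning_rate = 0.1

def activate(inputs):
    vote = sum(w * x for w, x in zip(weights, inputs))
    return 1 if vote > threshold else 0

def hebbian_update(inputs, output):
    for i, x in enumerate(inputs):
        weights[i] += learning_rate * x * output

for _ in range(5):                       # repeatedly present the same pattern
    pattern = [1, 1, 0]
    out = activate(pattern)
    hebbian_update(pattern, out)

print(weights)                           # weights on the two active inputs have grown
print(activate([1, 1, 0]))               # 1: the neuron fires on the learned pattern
```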

The study of non-learning artificial neural networks[207] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[220] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning (“fire together, wire together”), GMDH or competitive learning.[221]

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[222][223] and was introduced to neural networks by Paul Werbos.[224][225][226]
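
A compact sketch of backpropagation is shown below: a small two-layer network trained on the XOR function by gradient descent. The architecture, learning rate, and iteration count are illustrative assumptions, not a reference implementation.

```python
import numpy as np

# Backpropagation sketch: a 2-4-1 sigmoid network trained on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the error derivative layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient descent step on the weights and biases
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())   # typically close to [0, 1, 1, 0] after training
```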

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[227]

To summarize, most neural networks use some form of gradient descent on a hand-created neural topology. However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in “dead ends”.[228]

Deep learning is any artificial neural network that can learn a long chain of causal links. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a “credit assignment path” (CAP) depth of seven. Many deep learning systems need to be able to learn chains ten or more causal links in length.[229] Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[230][231][229]

According to one overview,[232] the expression “Deep Learning” was introduced to the Machine Learning community by Rina Dechter in 1986[233] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[234] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[235][page needed] These networks are trained one layer at a time. Ivakhnenko’s 1971 paper[236] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[238]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[239] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application, CNNs already processed an estimated 10% to 20% of all the checks written in the US.[240] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[229]

CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by Deepmind’s “AlphaGo Lee”, the program that beat a top Go champion in 2016.[241]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[242] which are in theory Turing complete[243] and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence; thus, an RNN is an example of deep learning.[229] RNNs can be trained by gradient descent[244][245][246] but suffer from the vanishing gradient problem.[230][247] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[248]
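
A minimal forward pass of such a recurrent network is sketched below: the same weights are applied at every step of the sequence, and the hidden state carries memory of earlier inputs. The layer sizes and the random, untrained weights are illustrative assumptions.

```python
import numpy as np

# Vanilla recurrent network forward pass over a sequence of input vectors.
rng = np.random.default_rng(1)
input_size, hidden_size = 3, 4
W_xh = rng.normal(scale=0.5, size=(input_size, hidden_size))
W_hh = rng.normal(scale=0.5, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_forward(sequence):
    h = np.zeros(hidden_size)                 # initial hidden state
    for x in sequence:                        # one step per element of the sequence
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)
    return h                                  # a summary of the whole sequence

sequence = rng.normal(size=(5, input_size))   # a sequence of 5 input vectors
print(rnn_forward(sequence).round(3))
```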

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[249] LSTM is often trained by Connectionist Temporal Classification (CTC).[250] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[251][252][253] For example, in 2015, Google’s speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[254] Google also used LSTM to improve machine translation,[255] Language Modeling[256] and Multilingual Language Processing.[257] LSTM combined with CNNs also improved automatic image captioning[258] and a plethora of other applications.

AI, like electricity or the steam engine, is a general purpose technology. There is no consensus on how to characterize which tasks AI tends to excel at.[259] While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets.[260][261] Researcher Andrew Ng has suggested, as a “highly imperfect rule of thumb”, that “almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI.”[262] Moravec’s paradox suggests that AI lags humans at many tasks that the human brain has specifically evolved to perform well.[126]

Games provide a well-publicized benchmark for assessing rates of progress. AlphaGo around 2016 brought the era of classical board-game benchmarks to a close. Games of imperfect knowledge provide new challenges to AI in the area of game theory.[263][264] E-sports such as StarCraft continue to provide additional public benchmarks.[265][266] There are many competitions and prizes, such as the Imagenet Challenge, to promote research in artificial intelligence. The most common areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer as well as conventional games.[267]

The “imitation game” (an interpretation of the 1950 Turing test that assesses whether a computer can imitate a human) is nowadays considered too exploitable to be a meaningful benchmark.[268] A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, CAPTCHA is administered by a machine and targeted to a human as opposed to being administered by a human and targeted to a machine. A computer asks a user to complete a simple test then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.
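
A bare-bones sketch of that flow (purely illustrative; render_distorted is a hypothetical helper standing in for the image-distortion step, not a real library call):

```python
# The server generates a random challenge, renders it as a distorted image,
# and grades the user's reply; a correct reply is taken as evidence of a human.
import secrets
import string

def new_challenge(length: int = 6) -> str:
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def grade(expected: str, answer: str) -> bool:
    return answer.strip().upper() == expected

challenge = new_challenge()
# image = render_distorted(challenge)   # hypothetical: warp the text into an image
print(grade(challenge, challenge.lower()))   # True: this reply would pass
```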

Proposed “universal intelligence” tests aim to compare how well machines, humans, and even non-human animals perform on problem sets that are as generic as possible. At an extreme, the test suite can contain every possible problem, weighted by Kolmogorov complexity; unfortunately, these problem sets tend to be dominated by impoverished pattern-matching exercises where a tuned AI can easily exceed human performance levels.[270][271]
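
One well-known formalization along these lines is Legg and Hutter’s universal intelligence measure, which scores an agent (policy) π by its expected value V across all computable environments μ, each weighted by its Kolmogorov complexity K(μ); it is shown here only for orientation, since the tests cited above differ in detail:

$$ \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu} $$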

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays,[274] prediction of judicial decisions[275] and targeting online advertisements.[276][277]

With social media sites overtaking TV as a source of news for young people, and news organisations increasingly reliant on social media platforms for distribution,[278] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[279]

AI is being applied to the high-cost problem of drug dosing, where findings have suggested that AI could save $16 billion. In 2016, a groundbreaking study in California found that a mathematical formula developed with the help of AI correctly determined the accurate dose of immunosuppressant drugs to give to organ transplant patients.[280]

Artificial intelligence is breaking into the healthcare industry by assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[281] A great deal of research and drug development relates to cancer: there are more than 800 medicines and vaccines to treat it, and the sheer number of options makes it harder for doctors to choose the right drugs for their patients. Microsoft is working on a project to develop a machine called “Hanover”, whose goal is to absorb all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient; one current effort targets myeloid leukemia, a fatal cancer whose treatment has not improved in decades. Another study reported that artificial intelligence was as good as trained doctors at identifying skin cancers.[282] A further study uses artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-patient interactions.[283] One study using transfer learning found that the machine performed diagnosis similarly to a well-trained ophthalmologist and could decide within 30 seconds whether a patient should be referred for treatment, with more than 95% accuracy.[284]

According to CNN, a recent study by surgeons at the Children’s National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig’s bowel during open surgery, and doing so better than a human surgeon, the team claimed.[285] IBM has created its own artificial intelligence computer, the IBM Watson, which has beaten human intelligence (at some levels). Watson not only won at the game show Jeopardy! against former champions,[286] but was declared a hero after successfully diagnosing a woman who was suffering from leukemia.[287]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, there were over 30 companies utilizing AI in the creation of driverless cars. A few companies involved with AI include Tesla, Google, and Apple.[288]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, as well as high performance computers, are integrated into one complex vehicle.[289]

Recent developments in autonomous automobiles have made the innovation of self-driving trucks possible, though they are still in the testing phase. The UK government has passed legislation to begin testing of self-driving truck platoons in 2018.[290] Self-driving truck platoons are a fleet of self-driving trucks following the lead of one non-self-driving truck, so the platoons are not yet entirely autonomous. Meanwhile, Daimler, a German automobile corporation, is testing the Freightliner Inspiration, a semi-autonomous truck that will only be used on the highway.[291]

One main factor that influences the ability of a driverless automobile to function is mapping. In general, the vehicle is pre-programmed with a map of the area being driven. This map includes data on approximate street light and curb heights so that the vehicle is aware of its surroundings. However, Google has been working on an algorithm intended to eliminate the need for pre-programmed maps and instead create a device able to adjust to a variety of new surroundings.[292] Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm that is capable of maintaining a safe environment for the passengers in the vehicle through awareness of speed and driving conditions.[293]

Another factor influencing driverless automobiles is passenger safety. To make a driverless automobile, engineers must program it to handle high-risk situations, such as a potential head-on collision with pedestrians. The car’s main goal should be to avoid hitting pedestrians while protecting the passengers in the car, but there is a possibility the car would need to make a decision that puts someone in danger; in other words, it may need to choose between saving the pedestrians or the passengers.[294] The programming of the car in these situations is crucial to a successful driverless automobile.

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the US set up a fraud prevention task force to counter the unauthorised use of debit cards. Programs like Kasisto and Moneystream are using AI in financial services.

Banks use artificial intelligence systems today to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[295] In August 2001, robots beat humans in a simulated financial trading competition.[296] AI has also reduced fraud and financial crimes by monitoring behavioral patterns of users for any abnormal changes or anomalies.[297]
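
A toy sketch of the kind of behavioral anomaly monitoring described above (not any bank’s actual system; the features, data and thresholds are made-up placeholders):

```python
# Flag unusual transactions for human review with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: [amount, hour_of_day, merchant_category] -- placeholder features
normal = np.column_stack([
    rng.normal(50, 20, 1000),        # typical purchase amounts
    rng.integers(8, 22, 1000),       # daytime hours
    rng.integers(0, 10, 1000),       # familiar merchant categories
])
suspicious = np.array([[5000, 3, 42], [1, 4, 42]])   # large odd-hour charges

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))   # -1 marks transactions flagged for review
```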

The use of AI machines in the market in applications such as online trading and decision making has changed major economic theories.[298] For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves, and thus individualized pricing. Furthermore, AI machines reduce information asymmetry in the market, making markets more efficient while reducing the volume of trades, and they limit the consequences of behavior in the markets, again making markets more efficient. Other theories where AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking.

In video games, artificial intelligence is routinely used to generate dynamic purposeful behavior in non-player characters (NPCs). In addition, well-understood AI techniques are routinely used for pathfinding. Some researchers consider NPC AI in games to be a “solved problem” for most production tasks. Games with more atypical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010).[299][300]
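
As an example of one such well-understood technique, here is a minimal A* grid pathfinder (a sketch under simple assumptions, not any particular engine’s implementation):

```python
import heapq

def astar(grid, start, goal):
    """grid: list of equal-length strings, '#' = wall; returns shortest step count."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start)]
    best = {start: 0}
    while frontier:
        _, g, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != "#":
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

level = ["....#....",
         "..#.#.#..",
         "..#...#.."]
print(astar(level, (0, 0), (2, 8)))   # 14 steps on this toy map
```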

Worldwide annual military spending on robotics rose from US$5.1 billion in 2010 to US$7.5 billion in 2015.[301][302] Military drones capable of autonomous action are widely considered a useful asset. In 2017, Vladimir Putin stated that “Whoever becomes the leader in (artificial intelligence) will become the ruler of the world”.[303][304] Many artificial intelligence researchers seek to distance themselves from military applications of AI.[305]

For financial statement audits, AI makes continuous auditing possible: AI tools can analyze many different sets of information immediately. The potential benefits are that overall audit risk is reduced, the level of assurance is increased, and the duration of the audit is shortened.[306]

It is possible to use AI to predict or generalize the behavior of customers from their digital footprints in order to target them with personalized promotions or build customer personas automatically.[307] A documented case reports that online gambling companies were using AI to improve customer targeting.[308]

Moreover, the application of personality computing AI models can help reduce the cost of advertising campaigns by adding psychological targeting to more traditional sociodemographic or behavioral targeting.[309]

What is Artificial Intelligence (AI)? – Definition from …

Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry.

Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits such as knowledge, reasoning, problem solving, perception, learning, planning, and the ability to manipulate and move objects.

Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information relating to the world. Artificial intelligence must have access to objects, categories, properties and relations between all of them to implement knowledge engineering. Initiating common sense, reasoning and problem-solving power in machines is a difficult and tedious task.

Machine learning is also a core part of AI. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regression. Classification determines the category an object belongs to, while regression deals with obtaining a set of numerical input-output examples and thereby discovering functions that generate suitable outputs from the corresponding inputs. Mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory.
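
The distinction can be made concrete with a short scikit-learn sketch on synthetic placeholder data: the classifier assigns a category, while the regressor predicts a numeric value.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))

# Classification: the label is a category (here, whether the input exceeds 5).
y_class = (X[:, 0] > 5).astype(int)
clf = LogisticRegression().fit(X, y_class)
print("category for x=7:", clf.predict([[7.0]])[0])        # expected: 1

# Regression: the target is a numeric value (here, a noisy linear function).
y_reg = 3.0 * X[:, 0] + rng.normal(0, 1, 200)
reg = LinearRegression().fit(X, y_reg)
print("value for x=7:", round(float(reg.predict([[7.0]])[0]), 2))  # roughly 21
```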

Machine perception deals with the capability to use sensory inputs to deduce the different aspects of the world, while computer vision is the power to analyze visual inputs with a few sub-problems such as facial, object and gesture recognition.

Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as object manipulation and navigation, along with sub-problems of localization, motion planning and mapping.

Artificial Intelligence: The Robots Are Now Hiring – WSJ

Some Fortune 500 companies are using tools that deploy artificial intelligence to weed out job applicants. But is this practice fair? In this episode of Moving Upstream, WSJ’s Jason Bellini investigates.

Hiring is undergoing a profound revolution.

Nearly all Fortune 500 companies now use some form of automation — from robot avatars interviewing job candidates to computers weeding out potential employees by scanning keywords in resumes. And more and more companies are using artificial intelligence and machine learning tools to assess possible employees.

DeepSense, based in San Francisco and India, helps hiring managers scan people’s social media accounts to surface underlying personality traits. The company says it uses a scientifically based personality test, and it can be done with or without a potential candidate’s knowledge.

The practice is part of a general trend among some hiring companies to move away from assessing candidates based on their resumes and skills, towards making hiring decisions based on people’s personalities.

Cornell sociology and law professor Ifeoma Ajunwa said she’s concerned about these tools’ potential for bias. Given the large scale of these automatic assessments, she believes potentially faulty algorithms could do more damage than one biased human manager. And she wants scientists to test whether the algorithms are fair, transparent and accurate.

In the episode of Moving Upstream, correspondent Jason Bellini visits South Jordan, Utah-based HireVue, which is delivering AI-based assessments of digital interviews to over 50 companies. HireVue says its algorithm compares candidates’ tone of voice, word clusters and micro facial expressions with people who have previously been identified as high performers on the job.

What is AI (artificial intelligence)? – Definition from …

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.

AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple’s Siri, are a form of weak AI. Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities. When presented with an unfamiliar task, a strong AI system is able to find a solution without human intervention.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings, as well as access to Artificial Intelligence as a Service (AIaaS) platforms. AI as a Service allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment. Popular AI cloud offerings include Amazon AI services, IBM Watson Assistant, Microsoft Cognitive Services and Google AI services.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence raises ethical questions. This is because deep learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human selects what data should be used for training an AI program, the potential for human bias is inherent and must be monitored closely.

Some industry experts believe that the term artificial intelligence is too closely linked to popular culture, causing the general public to have unrealistic fears about artificial intelligence and improbable expectations about how it will change the workplace and life in general. Researchers and marketers hope the label augmented intelligence, which has a more neutral connotation, will help people understand that AI will simply improve products and services, not replace the humans that use them.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, categorizes AI into four types, from the kind of AI systems that exist today to sentient systems, which do not yet exist. His categories, in order of increasing sophistication, are reactive machines, limited memory systems, machines with a theory of mind, and self-aware AI.

AI is incorporated into a variety of different types of technology and has made its way into a number of application areas.

The application of AI in the realm of self-driving cars raises security as well as ethical concerns. Cars can be hacked, and when an autonomous vehicle is involved in an accident, liability is unclear. Autonomous vehicles may also be put in a position where an accident is unavoidable, forcing the programming to make an ethical decision about how to minimize damage.

Another major concern is the potential for abuse of AI tools. Hackers are starting to use sophisticated machine learning tools to gain access to sensitive systems, complicating the issue of security beyond its current state.

Deep learning-based video and audio generation tools also present bad actors with the tools necessary to create so-called deepfakes, convincingly fabricated videos of public figures saying or doing things that never took place.

Despite these potential risks, there are few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI only indirectly. For example, federal Fair Lending regulations require financial institutions to explain credit decisions to potential customers, which limits the extent to which lenders can use deep learning algorithms, which are by their nature typically opaque. Europe’s GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered. Since that time the issue has received little attention from lawmakers.

Artificial Intelligence – Journal – Elsevier

This journal has partnered with Heliyon, an open access journal from Elsevier publishing quality peer reviewed research across all disciplines. Heliyon’s team of experts provides editorial excellence, fast publication, and high visibility for your paper. Authors can quickly and easily transfer their research from a Partner Journal to Heliyon without the need to edit, reformat or resubmit. Learn more at Heliyon.com.

Benefits & Risks of Artificial Intelligence – Future of Life …

Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” And many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are, so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim: “I’m not worried, because machines can’t have goals!”

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection: this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.

A.I. Artificial Intelligence (2001) – IMDb

Nominated for 2 Oscars. Another 17 wins & 68 nominations.

In the not-so-far future the polar ice caps have melted and the resulting rise of the ocean waters has drowned all the coastal cities of the world. Withdrawn to the interior of the continents, the human race keeps advancing, reaching the point of creating realistic robots (called mechas) to serve them. One of the mecha-producing companies builds David, an artificial kid who is the first to have real feelings, especially a never-ending love for his “mother”, Monica. Monica is the woman who adopted him as a substitute for her real son, who remains in cryo-stasis, stricken by an incurable disease. David is living happily with Monica and her husband, but when their real son returns home after a cure is discovered, his life changes dramatically. Written by Chris Makrozahopoulos

Budget:$100,000,000 (estimated)

Opening Weekend USA: $29,352,630, 1 July 2001, Wide Release

Gross USA: $78,616,689, 23 September 2001

Cumulative Worldwide Gross: $235,927,000

Runtime: 146 min

Aspect Ratio: 1.85 : 1

Online Artificial Intelligence Courses | Microsoft …

The Microsoft Professional Program (MPP) is a collection of courses that teach skills in several core technology tracks that help you excel in the industry’s newest job roles.

These courses are created and taught by experts and feature quizzes, hands-on labs, and engaging communities. For each track you complete, you earn a certificate of completion from Microsoft proving that you mastered those skills.

A.I. Artificial Intelligence – Wikipedia

A.I. Artificial Intelligence, also known as A.I., is a 2001 American science fiction drama film directed by Steven Spielberg. The screenplay by Spielberg and screen story by Ian Watson were based on the 1969 short story “Supertoys Last All Summer Long” by Brian Aldiss. The film was produced by Kathleen Kennedy, Spielberg and Bonnie Curtis. It stars Haley Joel Osment, Jude Law, Frances O’Connor, Brendan Gleeson and William Hurt. Set in a futuristic post-climate change society, A.I. tells the story of David (Osment), a childlike android uniquely programmed with the ability to love.

Development of A.I. originally began with producer-director Stanley Kubrick, after he acquired the rights to Aldiss’ story in the early 1970s. Kubrick hired a series of writers until the mid-1990s, including Brian Aldiss, Bob Shaw, Ian Watson, and Sara Maitland. The film languished in protracted development for years, partly because Kubrick felt computer-generated imagery was not advanced enough to create the David character, who he believed no child actor would convincingly portray. In 1995, Kubrick handed A.I. to Spielberg, but the film did not gain momentum until Kubrick’s death in 1999. Spielberg remained close to Watson’s film treatment for the screenplay.

The film divided critics, with the overall balance being positive, and grossed approximately $235 million. The film was nominated for two Academy Awards at the 74th Academy Awards, for Best Visual Effects and Best Original Score (by John Williams).

In a 2016 BBC poll of 177 critics around the world, Steven Spielberg’s A.I. Artificial Intelligence was voted the eighty-third greatest film since 2000.[3] A.I. is dedicated to Stanley Kubrick.

In the late 22nd century, rising sea levels from global warming have wiped out coastal cities such as Amsterdam, Venice, and New York and drastically reduced the world’s population. A new type of robot called Mecha, advanced humanoids capable of thought and emotion, has been created.

David, a Mecha that resembles a human child and is programmed to display love for his owners, is given to Henry Swinton and his wife Monica, whose son Martin, after contracting a rare disease, has been placed in suspended animation and not expected to recover. Monica feels uneasy with David, but eventually warms to him and activates his imprinting protocol, causing him to have an enduring childlike love for her. David is befriended by Teddy, a robotic teddy bear that belonged to Martin.

Martin is cured of his disease and brought home. As he recovers, he grows jealous of David. He tricks David into entering the parents’ bedroom at night and cutting off a lock of Monica’s hair. This upsets the parents, particularly Henry, who fears David intended to injure them. At a pool party, one of Martin’s friends pokes David with a knife, activating David’s self-protection programming. David grabs Martin and they fall into the pool. Martin is saved from drowning, but Henry persuades Monica to return David to his creators for destruction. Instead, she abandons David and Teddy in the forest. She warns David to avoid all humans, and tells him to find other unregistered Mecha who can protect him.

David is captured for an anti-Mecha “Flesh Fair”, where obsolete, unlicensed Mecha are destroyed before cheering crowds. David is placed on a platform with Gigolo Joe, a male prostitute Mecha who is on the run after being framed for murder. Before the pair can be destroyed with acid, the crowd, thinking David is a real boy, begins booing and throwing things at the show’s emcee. In the chaos, David and Joe escape. Since Joe survived thanks to David, he agrees to help him find Blue Fairy, whom David remembers from The Adventures of Pinocchio, and believes can turn him into a real boy, allowing Monica to love him and take him home.

Joe and David make their way to the decadent resort town of Rouge City, where “Dr. Know”, a holographic answer engine, directs them to the top of Rockefeller Center in the flooded ruins of Manhattan. There, David meets a copy of himself and destroys it. He then meets Professor Hobby, his creator, who tells David he was built in the image of the professor’s dead son David. The engineers are thrilled by his ability to have a will without being programmed. He reveals they have been monitoring him to see how he progresses and altered Dr. Know to guide him to Manhattan, back to the lab he was created in. David finds more copies of him, as well as female versions called Darlene, that have been made there.

Disheartened, David lets himself fall from a ledge of the building. He is rescued by Joe, flying an amphibicopter he has stolen from the police who were pursuing him. David tells Joe he saw the Blue Fairy underwater, and wants to go down to meet her. Joe is captured by the authorities, who snare him with an electromagnet. Before he is pulled up, he activates the amphibicopter’s dive function for David, telling him to remember him for he declares “I am, I was.” David and Teddy dive to see the Fairy, which turns out to be a statue at the now-sunken Coney Island. The two become trapped when the Wonder Wheel falls on their vehicle. David repeatedly asks the Fairy to turn him into a real boy. Eventually the ocean freezes and David’s power source is depleted.

Two thousand years later, humans are extinct, and Manhattan is buried under glacial ice. The Mecha have evolved into an advanced silicon-based form called Specialists. They find David and Teddy, and discover they are original Mecha who knew living humans, making them special. The Specialists revive David and Teddy. David walks to the frozen Fairy statue, which collapses when he touches it. The Mecha use David’s memories to reconstruct the Swinton home. David asks the Specialists if they can make him human, but they cannot. However, he insists they recreate Monica from DNA from the lock of her hair, which Teddy has kept. The Mecha warn David that the clone can live for only a day, and that the process cannot be repeated. David spends the next day with Monica and Teddy. Before she drifts off to sleep, Monica tells David she has always loved him. Teddy climbs onto the bed and watches the two lie peacefully together.

Kubrick began development on an adaptation of “Super-Toys Last All Summer Long” in the late 1970s, hiring the story’s author, Brian Aldiss, to write a film treatment. In 1985, Kubrick asked Steven Spielberg to direct the film, with Kubrick producing.[6] Warner Bros. agreed to co-finance A.I. and cover distribution duties.[7] The film labored in development hell, and Aldiss was fired by Kubrick over creative differences in 1989.[8] Bob Shaw briefly served as writer, leaving after six weeks due to Kubrick’s demanding work schedule, and Ian Watson was hired as the new writer in March 1990. Aldiss later remarked, “Not only did the bastard fire me, he hired my enemy [Watson] instead.” Kubrick handed Watson The Adventures of Pinocchio for inspiration, calling A.I. “a picaresque robot version of Pinocchio”.[7][9]

Three weeks later, Watson gave Kubrick his first story treatment, and concluded his work on A.I. in May 1991 with another treatment of 90 pages. Gigolo Joe was originally conceived as a G.I. Mecha, but Watson suggested changing him to a male prostitute. Kubrick joked, “I guess we lost the kiddie market.”[7] Meanwhile, Kubrick dropped A.I. to work on a film adaptation of Wartime Lies, feeling computer animation was not advanced enough to create the David character. However, after the release of Spielberg’s Jurassic Park, with its innovative computer-generated imagery, it was announced in November 1993 that production of A.I. would begin in 1994.[10] Dennis Muren and Ned Gorman, who worked on Jurassic Park, became visual effects supervisors,[8] but Kubrick was displeased with their previsualization, and with the expense of hiring Industrial Light & Magic.[11]

“Stanley [Kubrick] showed Steven [Spielberg] 650 drawings which he had, and the script and the story, everything. Stanley said, ‘Look, why don’t you direct it and I’ll produce it.’ Steven was almost in shock.”

Producer Jan Harlan, on Spielberg’s first meeting with Kubrick about A.I.[12]

In early 1994, the film was in pre-production with Christopher “Fangorn” Baker as concept artist, and Sara Maitland assisting on the story, which gave it “a feminist fairy-tale focus”.[7] Maitland said that Kubrick never referred to the film as A.I., but as Pinocchio.[11] Chris Cunningham became the new visual effects supervisor. Some of his unproduced work for A.I. can be seen on the DVD, The Work of Director Chris Cunningham.[13] Aside from considering computer animation, Kubrick also had Joseph Mazzello do a screen test for the lead role.[11] Cunningham helped assemble a series of “little robot-type humans” for the David character. “We tried to construct a little boy with a movable rubber face to see whether we could make it look appealing,” producer Jan Harlan reflected. “But it was a total failure, it looked awful.” Hans Moravec was brought in as a technical consultant.[11] Meanwhile, Kubrick and Harlan thought A.I. would be closer to Steven Spielberg’s sensibilities as director.[14][15] Kubrick handed the position to Spielberg in 1995, but Spielberg chose to direct other projects, and convinced Kubrick to remain as director.[12][16] The film was put on hold due to Kubrick’s commitment to Eyes Wide Shut (1999).[17] After the filmmaker’s death in March 1999, Harlan and Christiane Kubrick approached Spielberg to take over the director’s position.[18][19] By November 1999, Spielberg was writing the screenplay based on Watson’s 90-page story treatment. It was his first solo screenplay credit since Close Encounters of the Third Kind (1977).[20] Spielberg remained close to Watson’s treatment, but removed various sex scenes with Gigolo Joe. Pre-production was briefly halted during February 2000, because Spielberg pondered directing other projects, which were Harry Potter and the Philosopher’s Stone, Minority Report and Memoirs of a Geisha.[17][21] The following month Spielberg announced that A.I. would be his next project, with Minority Report as a follow-up.[22] When he decided to fast track A.I., Spielberg brought Chris Baker back as concept artist.[16]

The original start date was July 10, 2000,[15] but filming was delayed until August.[23] Aside from a couple of weeks shooting on location in Oxbow Regional Park in Oregon, A.I. was shot entirely using sound stages at Warner Bros. Studios and the Spruce Goose Dome in Long Beach, California.[24] The Swinton house was constructed on Stage 16, while Stage 20 was used for Rouge City and other sets.[25][26] Spielberg copied Kubrick’s obsessively secretive approach to filmmaking by refusing to give the complete script to cast and crew, banning press from the set, and making actors sign confidentiality agreements. Social robotics expert Cynthia Breazeal served as technical consultant during production.[15][27] Haley Joel Osment and Jude Law applied prosthetic makeup daily in an attempt to look shinier and robotic.[4] Costume designer Bob Ringwood (Batman, Troy) studied pedestrians on the Las Vegas Strip for his influence on the Rouge City extras.[28] Spielberg found post-production on A.I. difficult because he was simultaneously preparing to shoot Minority Report.[29]

The film’s soundtrack was released by Warner Sunset Records in 2001. The original score was composed and conducted by John Williams and featured singers Lara Fabian on two songs and Josh Groban on one. The film’s score also had a limited release as an official “For your consideration Academy Promo”, as well as a complete score issue by La-La Land Records in 2015.[30] The band Ministry appears in the film playing the song “What About Us?” (but the song does not appear on the official soundtrack album).

Warner Bros. used an alternate reality game titled The Beast to promote the film. Over forty websites were created by Atomic Pictures in New York City (kept online at Cloudmakers.org) including the website for Cybertronics Corp. There were to be a series of video games for the Xbox video game console that followed the storyline of The Beast, but they went undeveloped. To avoid audiences mistaking A.I. for a family film, no action figures were created, although Hasbro released a talking Teddy following the film’s release in June 2001.[15]

A.I. had its premiere at the Venice Film Festival in 2001.[31]

A.I. Artificial Intelligence was released on VHS and DVD by Warner Home Video on March 5, 2002 in both a standard full-screen release with no bonus features, and as a 2-Disc Special Edition featuring the film in its original 1.85:1 anamorphic widescreen format as well as an eight-part documentary detailing the film’s development, production, music and visual effects. The bonus features also included interviews with Haley Joel Osment, Jude Law, Frances O’Connor, Steven Spielberg and John Williams, two teaser trailers for the film’s original theatrical release and an extensive photo gallery featuring production stills and Stanley Kubrick’s original storyboards.[32]

The film was released on Blu-ray Disc on April 5, 2011 by Paramount Home Media Distribution for the U.S. and by Warner Home Video for international markets. This release featured the film in a newly restored high-definition print and incorporated all the bonus features previously included on the 2-Disc Special Edition DVD.[33]

The film opened in 3,242 theaters in the United States on June 29, 2001, earning $29,352,630 during its opening weekend. A.I. went on to gross $78.62 million in US totals as well as $157.31 million in foreign countries, coming to a worldwide total of $235.93 million.[34]

Based on 192 reviews collected by Rotten Tomatoes, 73% of critics gave the film positive notices with a score of 6.6/10. The website’s critical consensus reads, “A curious, not always seamless, amalgamation of Kubrick’s chilly bleakness and Spielberg’s warm-hearted optimism. A.I. is, in a word, fascinating.”[35] By comparison, Metacritic collected an average score of 65, based on 32 reviews, which is considered favorable.[36]

Producer Jan Harlan stated that Kubrick “would have applauded” the final film, while Kubrick’s widow Christiane also enjoyed A.I.[37] Brian Aldiss admired the film as well: “I thought what an inventive, intriguing, ingenious, involving film this was. There are flaws in it and I suppose I might have a personal quibble but it’s so long since I wrote it.” Of the film’s ending, he wondered how it might have been had Kubrick directed the film: “That is one of the ‘ifs’ of film history; at least the ending indicates Spielberg adding some sugar to Kubrick’s wine. The actual ending is overly sympathetic and moreover rather overtly engineered by a plot device that does not really bear credence. But it’s a brilliant piece of film and of course it’s a phenomenon because it contains the energies and talents of two brilliant filmmakers.”[38] Richard Corliss heavily praised Spielberg’s direction, as well as the cast and visual effects.[39] Roger Ebert gave the film three stars, saying that it was “wonderful and maddening.”[40] Leonard Maltin, on the other hand, gives the film two stars out of four in his Movie Guide, writing: “[The] intriguing story draws us in, thanks in part to Osment’s exceptional performance, but takes several wrong turns; ultimately, it just doesn’t work. Spielberg rewrote the adaptation Stanley Kubrick commissioned of the Brian Aldiss short story ‘Super Toys Last All Summer Long’; [the] result is a curious and uncomfortable hybrid of Kubrick and Spielberg sensibilities.” However, he calls John Williams’ music score “striking”. Jonathan Rosenbaum compared A.I. to Solaris (1972), and praised both “Kubrick for proposing that Spielberg direct the project and Spielberg for doing his utmost to respect Kubrick’s intentions while making it a profoundly personal work.”[41] Film critic Armond White, of the New York Press, praised the film noting that “each part of David’s journey through carnal and sexual universes into the final eschatological devastation becomes as profoundly philosophical and contemplative as anything by cinema’s most thoughtful, speculative artists: Borzage, Ozu, Demy, Tarkovsky.”[42] Filmmaker Billy Wilder hailed A.I. as “the most underrated film of the past few years.”[43] When British filmmaker Ken Russell saw the film, he wept during the ending.[44]

Mick LaSalle gave a largely negative review. “A.I. exhibits all its creators’ bad traits and none of the good. So we end up with the structureless, meandering, slow-motion endlessness of Kubrick combined with the fuzzy, cuddly mindlessness of Spielberg.” Dubbing it Spielberg’s “first boring movie”, LaSalle also believed the robots at the end of the film were aliens, and compared Gigolo Joe to the “useless” Jar Jar Binks, yet praised Robin Williams for his portrayal of a futuristic Albert Einstein.[45][not in citation given] Peter Travers gave a mixed review, concluding “Spielberg cannot live up to Kubrick’s darker side of the future.” But he still put the film on his top ten list that year for best movies.[46] David Denby in The New Yorker criticized A.I. for not adhering closely to his concept of the Pinocchio character. Spielberg responded to some of the criticisms of the film, stating that many of the “so called sentimental” elements of A.I., including the ending, were in fact Kubrick’s and the darker elements were his own.[47] However, Sara Maitland, who worked on the project with Kubrick in the 1990s, claimed that one of the reasons Kubrick never started production on A.I. was because he had a hard time making the ending work.[48] James Berardinelli found the film “consistently involving, with moments of near-brilliance, but far from a masterpiece. In fact, as the long-awaited ‘collaboration’ of Kubrick and Spielberg, it ranks as something of a disappointment.” Of the film’s highly debated finale, he claimed, “There is no doubt that the concluding 30 minutes are all Spielberg; the outstanding question is where Kubrick’s vision left off and Spielberg’s began.”[49]

Screenwriter Ian Watson has speculated, “Worldwide, A.I. was very successful (and the 4th highest earner of the year) but it didn’t do quite so well in America, because the film, so I’m told, was too poetical and intellectual in general for American tastes. Plus, quite a few critics in America misunderstood the film, thinking for instance that the Giacometti-style beings in the final 20 minutes were aliens (whereas they were robots of the future who had evolved themselves from the robots in the earlier part of the film) and also thinking that the final 20 minutes were a sentimental addition by Spielberg, whereas those scenes were exactly what I wrote for Stanley and exactly what he wanted, filmed faithfully by Spielberg.”[50]

In 2002, Spielberg told film critic Joe Leydon that “People pretend to think they know Stanley Kubrick, and think they know me, when most of them don’t know either of us”. “And what’s really funny about that is, all the parts of A.I. that people assume were Stanley’s were mine. And all the parts of A.I. that people accuse me of sweetening and softening and sentimentalizing were all Stanley’s. The teddy bear was Stanley’s. The whole last 20 minutes of the movie was completely Stanley’s. The whole first 35, 40 minutes of the film, all the stuff in the house, was word for word from Stanley’s screenplay. This was Stanley’s vision.” “Eighty percent of the critics got it all mixed up. But I could see why. Because, obviously, I’ve done a lot of movies where people have cried and have been sentimental. And I’ve been accused of sentimentalizing hard-core material. But in fact it was Stanley who did the sweetest parts of A.I., not me. I’m the guy who did the dark center of the movie, with the Flesh Fair and everything else. That’s why he wanted me to make the movie in the first place. He said, ‘This is much closer to your sensibilities than my own.’”[51]

Upon rewatching the film many years after its release, BBC film critic Mark Kermode apologized to Spielberg in an interview in January 2013 for “getting it wrong” on the film when he first viewed it in 2001. He now believes the film to be Spielberg’s “enduring masterpiece”.[52]

Visual effects supervisors Dennis Muren, Stan Winston, Michael Lantieri and Scott Farrar were nominated for the Academy Award for Best Visual Effects, while John Williams was nominated for Best Original Music Score.[53] Steven Spielberg, Jude Law and Williams received nominations at the 59th Golden Globe Awards.[54] A.I. was successful at the Saturn Awards, winning five awards, including Best Science Fiction Film along with Best Writing for Spielberg and Best Performance by a Younger Actor for Osment.[55]

Artificial Intelligence: The Pros, Cons, and What to Really Fear

For the last several years, Russia has been steadily improving its ground combat robots. Just last year, Kalashnikov, the maker of the famous AK-47 rifle, announced it would build a range of products based on neural networks, including a fully automated combat module that promises to identify and shoot at targets.

According to Bendett, Russia delivered a white paper to the UN saying that from Moscow’s perspective, it would be inadmissible to leave UAS without any human oversight. In other words, Russia always wants a human in the loop and to be the one to push the final button to fire that weapon.

Worth noting: “A lot of these are still kind of far-out applications,” Bendett said.

The same can be said for China’s more military-focused applications of AI, largely in surveillance and UAV operations for the PLA, said Elsa Kania, Technology Fellow at the Center for a New American Security. Speaking beside Bendett at the Genius Machines event in March, Kania said China’s military applications appear to be at a fairly nascent stage of development.

That is to say: there’s nothing to fear about lethal AI applications yet, unless you’re an alleged terrorist in the Middle East. For the rest of us, we have our Siris, Alexas, Cortanas and more, helping us shop, search, listen to music, and tag friends in images on social media.

Until the robot uprising comes, let us hope there will always be clips of the swearing Atlas Robot from Boston Dynamics available online whenever we need a laugh. It may be better to laugh before these robots start helping each other through doorways entirely independent of humans. (Too late.)

Spaceflight Now – The leading source for online space news

A Russian Soyuz rocket lifted off from the Vostochny Cosmodrome in Russia’s Far East on Thursday carrying 28 satellites, including a pair of Russian mapping satellites; secondary payloads from Germany, Japan, Spain and South Africa; and a dozen Earth-observing CubeSats and eight commercial weather payloads for Planet and Spire.

Spaceflight – Wikipedia

Rockets are the only means currently capable of reaching orbit or beyond. Other non-rocket space launch technologies have yet to be built, or remain short of orbital speeds. A rocket launch for a spaceflight usually starts from a spaceport (cosmodrome), which may be equipped with launch complexes and launch pads for vertical rocket launches, and runways for takeoff and landing of carrier airplanes and winged spacecraft. Spaceports are situated well away from human habitation for noise and safety reasons. ICBMs have various special launching facilities.

A launch is often restricted to certain launch windows. These windows depend upon the position of celestial bodies and orbits relative to the launch site. The biggest influence is often the rotation of the Earth itself. Once launched, orbits are normally located within relatively constant flat planes at a fixed angle to the axis of the Earth, and the Earth rotates within this orbital plane.

A launch pad is a fixed structure designed to dispatch airborne vehicles. It generally consists of a launch tower and flame trench. It is surrounded by equipment used to erect, fuel, and maintain launch vehicles.

The most commonly used definition of outer space is everything beyond the Kármán line, which is 100 kilometers (62 mi) above the Earth’s surface. The United States sometimes defines outer space as everything beyond 50 miles (80 km) in altitude.

Rockets are the only currently practical means of reaching space. Conventional airplane engines cannot reach space due to the lack of oxygen. Rocket engines expel propellant to provide forward thrust that generates enough delta-v (change in velocity) to reach orbit.
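
The delta-v a rocket can deliver is tied to how much of its mass is propellant by the Tsiolkovsky rocket equation, shown here for reference, where v_e is the effective exhaust velocity (equivalently the specific impulse I_sp times standard gravity g_0), m_0 the initial mass and m_f the final mass:

$$ \Delta v = v_e \ln\frac{m_0}{m_f} = I_{sp}\, g_0 \ln\frac{m_0}{m_f} $$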

Crewed launch systems are frequently fitted with a launch escape system to allow astronauts to escape in the event of an emergency.

Many ways to reach space other than rockets have been proposed. Ideas such as the space elevator and momentum-exchange tethers like rotovators or skyhooks require new materials much stronger than any currently known. Electromagnetic launchers such as launch loops might be feasible with current technology. Other ideas include rocket-assisted aircraft/spaceplanes such as Reaction Engines’ Skylon (currently in early-stage development), scramjet-powered spaceplanes, and RBCC-powered spaceplanes. Gun launch has been proposed for cargo.

Achieving a closed orbit is not essential to lunar and interplanetary voyages. Early Russian space vehicles successfully achieved very high altitudes without going into orbit. NASA considered launching Apollo missions directly into lunar trajectories but adopted the strategy of first entering a temporary parking orbit and then performing a separate burn several orbits later onto a lunar trajectory. This costs additional propellant because the parking orbit perigee must be high enough to prevent reentry while direct injection can have an arbitrarily low perigee because it will never be reached.

However, the parking orbit approach greatly simplified Apollo mission planning in several important ways. It substantially widened the allowable launch windows, increasing the chance of a successful launch despite minor technical problems during the countdown. The parking orbit was a stable “mission plateau” that gave the crew and controllers several hours to thoroughly check out the spacecraft after the stresses of launch before committing it to a long lunar flight; the crew could quickly return to Earth, if necessary, or an alternate Earth-orbital mission could be conducted. The parking orbit also enabled translunar trajectories that avoided the densest parts of the Van Allen radiation belts.

Apollo missions minimized the performance penalty of the parking orbit by keeping its altitude as low as possible. For example, Apollo 15 used an unusually low parking orbit (even for Apollo) of 92.5 nmi by 91.5 nmi (171 km by 169 km), where there was significant atmospheric drag. This was partially overcome by continuous venting of hydrogen from the third stage of the Saturn V, and it was in any event tolerable for the short stay.

Robotic missions do not require an abort capability or radiation minimization, and because modern launchers routinely meet “instantaneous” launch windows, space probes to the Moon and other planets generally use direct injection to maximize performance. Although some might coast briefly during the launch sequence, they do not complete one or more full parking orbits before the burn that injects them onto an Earth escape trajectory.

Note that the escape velocity from a celestial body decreases with altitude above that body. However, it is more fuel-efficient for a craft to burn its fuel as close to the ground as possible; see the Oberth effect and reference [5]. This is another way to explain the performance penalty associated with establishing the safe perigee of a parking orbit.
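
A minimal numerical sketch of this effect (per-kilogram figures; the speeds and burn size are illustrative assumptions): because kinetic energy grows with the square of speed, the same delta-v adds far more orbital energy when applied while the craft is already moving fast, deep in the gravity well.

```python
# Minimal sketch of the Oberth effect. Per kilogram, a burn of size dv applied
# at speed v changes the specific kinetic energy by
#   dE = v*dv + dv**2 / 2
# so the same burn buys more energy at higher speed.
def energy_gain_per_kg(speed_ms, dv_ms):
    return speed_ms * dv_ms + 0.5 * dv_ms**2

burn = 1000.0                              # m/s, illustrative burn size
print(energy_gain_per_kg(7800.0, burn))    # ~8.3 MJ/kg near low-orbit speed
print(energy_gain_per_kg(1000.0, burn))    # ~1.5 MJ/kg at a much lower speed
```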

Plans for future crewed interplanetary spaceflight missions often include final vehicle assembly in Earth orbit, such as NASA’s Project Orion and Russia’s Kliper/Parom tandem.

Astrodynamics is the study of spacecraft trajectories, particularly as they relate to gravitational and propulsion effects. Astrodynamics allows for a spacecraft to arrive at its destination at the correct time without excessive propellant use. An orbital maneuvering system may be needed to maintain or change orbits.
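
A textbook astrodynamics calculation is the Hohmann transfer between two circular, coplanar orbits. The sketch below is illustrative only; the orbital radii are example values and all perturbations are ignored.

```python
import math

# Minimal sketch of a Hohmann transfer between two circular, coplanar orbits
# around the Earth. MU is the Earth's gravitational parameter (GM).
MU = 3.986004418e14  # m^3/s^2

def hohmann_delta_v(r1_m, r2_m):
    """Departure and arrival burns (m/s) for a transfer from radius r1 to r2 (r2 > r1)."""
    dv1 = math.sqrt(MU / r1_m) * (math.sqrt(2 * r2_m / (r1_m + r2_m)) - 1)
    dv2 = math.sqrt(MU / r2_m) * (1 - math.sqrt(2 * r1_m / (r1_m + r2_m)))
    return dv1, dv2

# Illustrative only: low Earth orbit (~6,678 km radius) to geostationary radius (~42,164 km)
dv1, dv2 = hohmann_delta_v(6.678e6, 4.2164e7)
print(round(dv1), round(dv2))  # roughly 2,400 m/s and 1,500 m/s
```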

Non-rocket orbital propulsion methods include solar sails, magnetic sails, plasma-bubble magnetic systems, and using gravitational slingshot effects.

The term “transfer energy” means the total amount of energy imparted by a rocket stage to its payload. This can be the energy imparted by a first stage of a launch vehicle to an upper stage plus payload, or by an upper stage or spacecraft kick motor to a spacecraft.[6][7]

Vehicles in orbit have large amounts of kinetic energy. This energy must be discarded if the vehicle is to land safely without vaporizing in the atmosphere. Typically this process requires special methods to protect against aerodynamic heating. The theory behind reentry was developed by Harry Julian Allen. Based on this theory, reentry vehicles present blunt shapes to the atmosphere. A blunt shape means that less than 1% of the kinetic energy ends up as heat reaching the vehicle; the rest is deposited in the atmosphere.
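
A back-of-the-envelope sketch of the scale of the problem (assuming an approximate low-Earth-orbit speed and ignoring potential energy):

```python
# Minimal sketch: specific kinetic energy that has to be shed on reentry
# from low Earth orbit (gravitational potential energy is ignored here).
v_leo = 7_800.0               # m/s, approximate low-orbit speed
ke_per_kg = 0.5 * v_leo**2    # J per kilogram of vehicle
print(ke_per_kg / 1e6)        # ~30 MJ/kg, several times the specific energy of TNT
```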

The Mercury, Gemini, and Apollo capsules all splashed down in the sea. These capsules were designed to land at relatively low speeds with the help of a parachute. Russian Soyuz capsules use a large parachute and braking rockets to touch down on land. The Space Shuttle glided to a touchdown like a plane.

After a successful landing the spacecraft, its occupants and cargo can be recovered. In some cases, recovery has occurred before landing: while a spacecraft is still descending on its parachute, it can be snagged by a specially designed aircraft. This mid-air retrieval technique was used to recover the film canisters from the Corona spy satellites.

Uncrewed spaceflight (or unmanned spaceflight) is all spaceflight activity without a necessary human presence in space. This includes all space probes, satellites and robotic spacecraft and missions. Uncrewed spaceflight is the opposite of manned spaceflight, which is usually called human spaceflight. Subcategories of uncrewed spaceflight are “robotic spacecraft” (objects) and “robotic space missions” (activities). A robotic spacecraft is an uncrewed spacecraft, usually under telerobotic control. A robotic spacecraft designed to make scientific research measurements is often called a space probe.

Uncrewed space missions use remote-controlled spacecraft. The first uncrewed space mission was Sputnik I, launched October 4, 1957 to orbit the Earth. Space missions where animals but no humans are on-board are considered uncrewed missions.

Many space missions are more suited to telerobotic rather than crewed operation, due to lower cost and lower risk. In addition, some planetary destinations such as Venus or the vicinity of Jupiter are too hostile for human survival, given current technology. Outer planets such as Saturn, Uranus, and Neptune are too distant to reach with current crewed spaceflight technology, so telerobotic probes are the only way to explore them. Telerobotics also allows exploration of regions that are vulnerable to contamination by Earth micro-organisms, since spacecraft can be sterilized. Humans cannot be sterilized in the same way as a spacecraft, as they coexist with numerous micro-organisms, and these micro-organisms are also hard to contain within a spaceship or spacesuit.

Telerobotics becomes telepresence when the time delay is short enough to permit control of the spacecraft in close to real time by humans. Even the more than two-second round-trip light delay to the Moon makes it too far for telepresence exploration from Earth. The L1 and L2 positions permit round-trip delays of about 400 milliseconds, which is just short enough for telepresence operation. Telepresence has also been suggested as a way to repair satellites in Earth orbit from Earth. The Exploration Telerobotics Symposium in 2012 explored this and other topics.[8]
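
The controlling quantity is simply the round-trip signal time at the speed of light. A minimal sketch follows; the 60,000 km figure is an assumed control distance of the same order as the separation between the Moon and its L1/L2 points.

```python
# Minimal sketch: round-trip signal delay at the speed of light, the figure
# that decides whether telepresence is practical.
C = 299_792_458.0  # speed of light, m/s

def round_trip_delay_s(one_way_distance_m):
    return 2 * one_way_distance_m / C

print(round_trip_delay_s(3.844e8))   # Earth-Moon distance: ~2.6 s round trip
print(round_trip_delay_s(6.0e7))     # assumed ~60,000 km control distance: ~0.4 s round trip
```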

The first human spaceflight was Vostok 1 on April 12, 1961, on which cosmonaut Yuri Gagarin of the USSR made one orbit around the Earth. In official Soviet documents, there is no mention of the fact that Gagarin parachuted the final seven miles.[9] Currently, the only spacecraft regularly used for human spaceflight are the Russian Soyuz spacecraft and the Chinese Shenzhou spacecraft. The U.S. Space Shuttle fleet operated from April 1981 until July 2011. SpaceShipOne has conducted two human suborbital spaceflights.

On a sub-orbital spaceflight the spacecraft reaches space and then returns to the atmosphere after following a (primarily) ballistic trajectory. This is usually because of insufficient specific orbital energy, in which case a suborbital flight will last only a few minutes, but it is also possible for an object with enough energy for an orbit to have a trajectory that intersects the Earth’s atmosphere, sometimes after many hours. Pioneer 1 was NASA’s first space probe intended to reach the Moon. A partial failure caused it to instead follow a suborbital trajectory to an altitude of 113,854 kilometers (70,746 mi) before reentering the Earth’s atmosphere 43 hours after launch.

The most generally recognized boundary of space is the Kármán line, 100 km above sea level. (NASA alternatively defines an astronaut as someone who has flown more than 50 miles (80 km) above sea level.) It is not generally recognized by the public that the increase in potential energy required to pass the Kármán line is only about 3% of the orbital energy (potential plus kinetic energy) required by the lowest possible Earth orbit (a circular orbit just above the Kármán line). In other words, it is far easier to reach space than to stay there. On May 17, 2004, the Civilian Space eXploration Team launched the GoFast Rocket on a suborbital flight, the first amateur spaceflight. On June 21, 2004, SpaceShipOne was used for the first privately funded human spaceflight.
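
The 3% figure can be checked with a short calculation (a minimal sketch that neglects the Earth’s rotation and atmospheric drag, and idealizes the lowest orbit as circular at exactly 100 km):

```python
# Minimal sketch checking the "about 3%" claim above (rotation and drag neglected).
MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m
H = 100e3             # Karman line altitude, m

r = R_EARTH + H
pe_gain = MU * (1 / R_EARTH - 1 / r)         # lifting 1 kg to 100 km: ~0.97 MJ
orbit_energy = MU / R_EARTH - MU / (2 * r)   # 1 kg from rest at the surface to a
                                             # circular orbit at 100 km: ~31.8 MJ
print(pe_gain / orbit_energy)                # ~0.03, i.e. about 3%
print((MU / r) ** 0.5)                       # circular orbital speed at 100 km: ~7,850 m/s
```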

Point-to-point is a category of sub-orbital spaceflight in which a spacecraft provides rapid transport between two terrestrial locations. Consider a conventional airline route between London and Sydney, a flight that normally lasts over twenty hours. With point-to-point suborbital travel the same route could be traversed in less than one hour.[10] While no company offers this type of transportation today, SpaceX has revealed plans to do so as early as the 2020s using its BFR vehicle.[11] Suborbital spaceflight over an intercontinental distance requires a vehicle velocity that is only a little lower than the velocity required to reach low Earth orbit.[12] If rockets are used, the size of the rocket relative to the payload is similar to that of an intercontinental ballistic missile (ICBM). Any intercontinental spaceflight has to surmount problems of heating during atmospheric re-entry that are nearly as large as those faced by orbital spaceflight.

A minimal orbital spaceflight requires much higher velocities than a minimal sub-orbital flight, and so it is technologically much more challenging to achieve. To achieve orbital spaceflight, the tangential velocity around the Earth is as important as altitude. In order to perform a stable and lasting flight in space, the spacecraft must reach the minimal orbital speed required for a closed orbit.

Interplanetary travel is travel between planets within a single planetary system. In practice, the use of the term is confined to travel between the planets of our Solar System.

Five spacecraft are currently leaving the Solar System on escape trajectories: Voyager 1, Voyager 2, Pioneer 10, Pioneer 11, and New Horizons. The one farthest from the Sun is Voyager 1, which is more than 100 AU distant and is moving at 3.6 AU per year.[13] In comparison, Proxima Centauri, the closest star other than the Sun, is 267,000 AU distant. It will take Voyager 1 over 74,000 years to reach this distance. Vehicle designs using other techniques, such as nuclear pulse propulsion, are likely to be able to reach the nearest star significantly faster. Another possibility that could allow for human interstellar spaceflight is to make use of time dilation, as this would make it possible for passengers in a fast-moving vehicle to travel further into the future while aging very little, in that their great speed slows down the rate of passage of on-board time. However, attaining such high speeds would still require the use of some new, advanced method of propulsion.
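
The quoted travel time follows directly from the numbers above (a minimal sketch using the article’s own figures):

```python
# Minimal sketch: time for Voyager 1 to cover the distance to Proxima Centauri
# at its current recession speed, using the figures quoted in the text.
distance_au = 267_000.0    # distance to Proxima Centauri, in AU
speed_au_per_year = 3.6    # Voyager 1's current speed, in AU per year
print(distance_au / speed_au_per_year)   # ~74,000 years
```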

Intergalactic travel involves spaceflight between galaxies, and is considered much more technologically demanding than even interstellar travel and, by current engineering terms, is considered science fiction.

Spacecraft are vehicles capable of controlling their trajectory through space.

The first ‘true spacecraft’ is sometimes said to be the Apollo Lunar Module,[14] since this was the only manned vehicle designed for, and operated only in, space; it is also notable for its non-aerodynamic shape.

Spacecraft today predominantly use rockets for propulsion, but other propulsion techniques such as ion drives are becoming more common, particularly for unmanned vehicles, and this can significantly reduce the vehicle’s mass and increase its delta-v.

Launch systems are used to carry a payload from Earth’s surface into outer space.

All launch vehicles contain a huge amount of energy that is needed for some part of them to reach orbit. There is therefore some risk that this energy can be released prematurely and suddenly, with significant effects. When a Delta II rocket exploded 13 seconds after launch on January 17, 1997, there were reports of store windows 10 miles (16 km) away being broken by the blast.[16]

Space is a fairly predictable environment, but there are still risks of accidental depressurization and the potential failure of equipment, some of which may be very newly developed.

In 2004 the International Association for the Advancement of Space Safety was established in the Netherlands to further international cooperation and scientific advancement in space systems safety.[17]

In a microgravity environment such as that provided by a spacecraft in orbit around the Earth, humans experience a sense of “weightlessness.” Short-term exposure to microgravity causes space adaptation syndrome, a self-limiting nausea caused by derangement of the vestibular system. Long-term exposure causes multiple health issues. The most significant is bone loss, some of which is permanent, but microgravity also leads to significant deconditioning of muscular and cardiovascular tissues.

Once above the atmosphere, radiation from the Van Allen belts, solar radiation and cosmic radiation becomes an issue and increases. Further away from the Earth, solar flares can give a fatal radiation dose in minutes, and the health threat from cosmic radiation significantly increases the chances of cancer over a decade of exposure or more.[18]

In human spaceflight, the life support system is a group of devices that allow a human being to survive in outer space. NASA often uses the phrase Environmental Control and Life Support System or the acronym ECLSS when describing these systems for its human spaceflight missions.[19] The life support system may supply air, water and food. It must also maintain the correct body temperature, an acceptable pressure on the body and deal with the body’s waste products. Shielding against harmful external influences such as radiation and micro-meteorites may also be necessary. Components of the life support system are life-critical, and are designed and constructed using safety engineering techniques.

Space weather is the concept of changing environmental conditions in outer space. It is distinct from the concept of weather within a planetary atmosphere, and deals with phenomena involving ambient plasma, magnetic fields, radiation and other matter in space (generally close to Earth but also in interplanetary, and occasionally interstellar medium). “Space weather describes the conditions in space that affect Earth and its technological systems. Our space weather is a consequence of the behavior of the Sun, the nature of Earth’s magnetic field, and our location in the Solar System.”[20]

Space weather exerts a profound influence in several areas related to space exploration and development. Changing geomagnetic conditions can induce changes in atmospheric density, causing the rapid decay of spacecraft altitude in low Earth orbit. Geomagnetic storms due to increased solar activity can potentially blind sensors aboard spacecraft or interfere with on-board electronics. An understanding of space environmental conditions is also important in designing shielding and life support systems for manned spacecraft.

Rockets as a class are not inherently grossly polluting. However, some rockets use toxic propellants, and most vehicles use propellants that are not carbon neutral. Many solid rockets have chlorine in the form of perchlorate or other chemicals, and this can cause temporary local holes in the ozone layer. Re-entering spacecraft generate nitrates which also can temporarily impact the ozone layer. Most rockets are made of metals that can have an environmental impact during their construction.

In addition to the atmospheric effects there are effects on the near-Earth space environment. There is the possibility that orbit could become inaccessible for generations due to exponentially increasing space debris caused by spalling of satellites and vehicles (Kessler syndrome). Many launched vehicles today are therefore designed to be re-entered after use.

Spaceflight has a wide range of current and proposed applications.

Most early spaceflight development was paid for by governments. However, today major launch markets such as communications satellites and satellite television are purely commercial, though many of the launchers were originally funded by governments.

Private spaceflight is a rapidly developing area: spaceflight that is not only paid for by corporations or even private individuals, but is often provided by private spaceflight companies. These companies often assert that much of the previous high cost of access to space was caused by governmental inefficiencies they can avoid. This assertion can be supported by the much lower published launch costs for privately developed launch vehicles such as the Falcon 9. Lower launch costs and excellent safety will be required for applications such as space tourism and especially space colonization to become successful.


SpaceFlight Insider – Official Site

December 31st – The 20th year of International Space Station operations showcased the international partner agencies’ ability to handle large unexpected events.

December 31st – Robotic spacecraft launched by the world’s space agencies spent 2018 studying the Sun, planets and asteroids as well as the cosmos beyond the Solar System.

December 30th – The United States’ 2019 orbital launch manifest looks to be just as busy as 2018, with the first two missions of the year poised to be spillover from 2018.

December 30th – CAPE CANAVERAL, Fla. — One of the things you have to get accustomed to when covering the space program is when things go wrong. Weather, idiotic boat captains, “out of family” sensor data, bad mouse food – you never know what, or who, might creep in to complicate what should be a simple process.

December 27th – At 11:07 a.m. local time (02:07 GMT) Dec. 27, 2018, the Kanopus-V No. 5 & No. 6 satellites took to the skies above Russia atop a Soyuz 2.1a rocket.

December 25th – New Horizons is clear to fly an optimal path for observation after hazard searches near Ultima Thule found no debris that could pose a danger to the probe.

December 25th – On Dec. 25, 1968, astronauts Frank Borman, Jim Lovell and Bill Anders circled the Moon in their Apollo 8 capsule. This was a dark period in U.S. history and, as one person stated via a telegraph, Apollo 8 had “saved 1968.” It was a time when anything seemed possible. It now serves as a reminder of a bygone age.

December 22nd – Using tools ranging from rakes and shovels to augmented reality headsets, engineers with NASA’s Mars InSight mission have built a Martian rock garden which recreates the lander’s new home on Mars. This allows engineers to practice placing science instruments on the surface using InSight’s Earth-bound twin – ForeSight.

December 21st – Lifting off from Baikonur Cosmodrome in Kazakhstan, a Russian Proton-M rocket soared into the sky to orbit a military satellite called Blagovest-13L.

December 20th – PROMONTORY, Utah — Each component of a rocket is individual, yet it must function as one on the day of launch. The GEM 63 solid rocket motor has begun taking its first steps toward being integrated into one of the most successful rockets ever flown. What were those first steps like?

December 15th – One of the questions I’m often asked is: What’s it like attending a rocket launch? Perhaps the best way to answer the question is by detailing some of our recent experiences during the launch of Northrop Grumman’s S.S. John Young to the International Space Station.

December 14th – The surface of the dwarf planet Ceres holds high levels of organic material, according to a new study of images returned by NASA’s Dawn spacecraft. This provides a tantalizing glimpse into the possibility of life throughout our solar system.

December 14th – NASA’s Voyager 2 spacecraft has become the second probe to enter interstellar space, as confirmed by data returned by several of the science instruments on board the spacecraft.

December 13th – Sending crews to destinations such as the Moon and Mars is not easy. As many within the space industry will tell you, one of the most harrowing times during these missions is the first few minutes of the flight. A test carried out today worked to make this tense period a little less stressful for []

December 12th – NASA’s Mars InSight lander has been getting to know its new home at Elysium Planitia and preparing for what it will be doing there. As it turns out, one of those things is listening.


Spaceflight – About

Traditionally, access to space has been limited to government entities due to high cost. Sending satellites into orbit once required purchasing an entire rocket; however, with the growing industry of smallsats, the demand for routine, cost-effective access to space has increased exponentially. Demand, coupled with the growing number of launch vehicle providers, created an opportunity for Spaceflight to assist in identifying, booking and managing rideshare launches.

With a straightforward and cost-effective suite of products and services including state-of-the-art satellite infrastructure, rideshare launch offerings, payload integration and global communications networks, Spaceflight enables commercial, non-profit organizations and government entities to achieve their mission goals on time and on budget.


Apollo Astronaut: It Would Be “Stupid” to Send People to Mars

According to Apollo 8 astronaut Bill Anders, crewed missions to Mars and hyped-up chatter of settling the planet are all a waste of time and money.

Fool’s Errand

According to one of the astronauts aboard NASA’s 1968 Apollo 8 mission, it would be “stupid” and “almost ridiculous” to pursue a crewed mission to Mars.

“What’s the imperative? What’s pushing us to go to Mars? I don’t think the public is that interested,” said Bill Anders, who orbited the Moon before returning to Earth 50 years ago, in a new documentary by BBC Radio 5 Live.

Anders argued that there are plenty of things that NASA could be doing that would be a better use of time and money, like the unmanned InSight lander that recently touched down to study Mars’ interior. The comments, by one of the most accomplished space explorers in human history, illustrate a deep and public philosophical rift about whether the future of spaceflight will be characterized by splashy crewed missions or less expensive automated ones.

Mars Bars

The crux of Anders’ argument on the BBC boils down to his perception that NASA is fueling a vicious cycle of highly-publicized missions that bolster its image, improve its funding, and attract top talent so that it can launch more highly-publicized missions. Sending an astronaut to Mars would dominate the news cycle, but wouldn’t push the frontier of practical scientific knowledge, Anders argued — a mismatch, essentially, between the priorities of NASA and those of the public.

That skepticism places Anders among the ranks of other high-profile critics of NASA, Elon Musk’s SpaceX, and Jeff Bezos’ Blue Origin — all three of which have set their sights on the Red Planet.

For instance, science communicator and advocate Bill Nye predicted last year that no layperson would want to settle Mars. Nye also doubled down last month to say that anyone planning on terraforming Mars must be high on drugs.

Robust Explanation

But Anders’ own Apollo 8 crewmate Frank Borman disagreed, arguing in the documentary that crewed exploration is important.

“I’m not as critical of NASA as Bill is,” Borman told BBC. “I firmly believe that we need robust exploration of our Solar System and I think man is part of that.”

However, even Borman draws the line somewhere between exploration and settlement.

“I do think there’s a lot of hype about Mars that is nonsense,” Borman said. “Musk and Bezos, they’re talking about putting colonies on Mars. That’s nonsense.”

READ MORE: Sending astronauts to Mars would be stupid, astronaut says [BBC]

More on reaching Mars: Four Legal Challenges to Resolve Before Settling on Mars


Elon Musk Tweets Image of SpaceX’s Stainless Steel Starship

Big Picture

Christmas came early for Elon Musk’s Twitter followers.

The SpaceX CEO took to the social media platform on Christmas Eve to share a new image of a prototype version of the Starship spacecraft at the company’s Texas testing facilities.

The massive rocket with the ever-changing name — it was previously known as the “Mars Colonial Transporter,” the “Interplanetary Transport System,” and the “Big Falcon Rocket” — could one day ferry passengers to Mars. And Musk’s new photo reveals that the key to making that possible might be a material you’ve got in your kitchen right now.

Stainless Steel Starship

According to the tweet, the new Starship is made out of stainless steel, a material which handles extreme heat very well. Polish it up, and its mirror-like finish will reflect thermal energy far better than the carbon-based materials used for many rockets.

That could help Starship withstand the strain of long-term spaceflight, but stainless steel is heavier than carbon fiber, and keeping weight down is extremely important in space travel.

From an impromptu Twitter Q&A following the reveal of the Starship prototype, we learned that by exposing the stainless steel to extremely cold temperatures — that is, giving it a cryogenic treatment — SpaceX was able to get around the issue of the material weighing more than carbon fiber. According to a Musk tweet, “Usable strength/weight of full hard stainless at cryo is slightly better than carbon fiber, room temp is worse, high temp is vastly better.”

Stainless Steel Starship pic.twitter.com/rRoiEKKrYc

— Elon Musk (@elonmusk) December 24, 2018

Countdown to Liftoff

Perhaps the most exciting Starship revelation of the past week, though, is Musk’s assertion that the prototype could be ready for liftoff in just a few months’ time.

On December 22, he tweeted that he would “do a full technical presentation of Starship” after the prototype’s test flight, which could happen in March or April. If all goes well with that test flight, SpaceX could be one step closer to achieving Musk’s vision of making humanity a multiplanetary species.

READ MORE: SpaceX CEO Elon Musk: Starship Prototype to Have 3 Raptors and “Mirror Finish” [Teslarati]

More on Starship: Elon Musk Just Changed the BFR’s Name for a Fourth Time


