Quantum computing – Wikipedia

Computation based on quantum mechanics

A quantum computer is a computer that exploits quantum mechanical phenomena. At small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior using specialized hardware. Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster than any modern "classical" computer. In particular, a large-scale quantum computer could break widely used encryption schemes and aid physicists in performing physical simulations; however, the current state of the art is still largely experimental and impractical.

The basic unit of information in quantum computing is the qubit, similar to the bit in traditional digital electronics. Unlike a classical bit, a qubit can exist in a superposition of its two "basis" states, which loosely means that it is in both states simultaneously. When measuring a qubit, the result is a probabilistic output of a classical bit. If a quantum computer manipulates the qubit in a particular way, wave interference effects can amplify the desired measurement results. The design of quantum algorithms involves creating procedures that allow a quantum computer to perform calculations efficiently.
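
To make this concrete, the following is a minimal NumPy sketch, not tied to any particular quantum SDK, of what the probabilistic measurement of a single qubit looks like; the amplitudes and the sample count are arbitrary choices for illustration.

```python
import numpy as np

# A qubit state is a 2-component complex vector (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1.  Here: an unequal superposition of |0> and |1>.
state = np.array([np.sqrt(0.2), np.sqrt(0.8)], dtype=complex)

# Measuring returns a classical bit: 0 with probability |alpha|^2,
# 1 with probability |beta|^2.
probs = np.abs(state) ** 2
rng = np.random.default_rng(seed=1)
samples = rng.choice([0, 1], size=10_000, p=probs)

print("P(0) =", probs[0], " P(1) =", probs[1])     # 0.2 and 0.8
print("observed frequency of 1:", samples.mean())  # close to 0.8
```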

Physically engineering high-quality qubits has proven challenging. If a physical qubit is not sufficiently isolated from its environment, it suffers from quantum decoherence, introducing noise into calculations. National governments have invested heavily in experimental research that aims to develop scalable qubits with longer coherence times and lower error rates. Two of the most promising technologies are superconductors (which isolate an electrical current by eliminating electrical resistance) and ion traps (which confine a single atomic particle using electromagnetic fields).

Any computational problem that can be solved by a classical computer can also be solved by a quantum computer. Conversely, any problem that can be solved by a quantum computer can also be solved by a classical computer, at least in principle given enough time. In other words, quantum computers obey the Church–Turing thesis. This means that while quantum computers provide no additional advantages over classical computers in terms of computability, quantum algorithms for certain problems have significantly lower time complexities than corresponding known classical algorithms. Notably, quantum computers are believed to be able to quickly solve certain problems that no classical computer could solve in any feasible amount of time, a feat known as "quantum supremacy." The study of the computational complexity of problems with respect to quantum computers is known as quantum complexity theory.

For many years, the fields of quantum mechanics and computer science formed distinct academic communities. Modern quantum theory developed in the 1920s to explain the wave–particle duality observed at atomic scales,[4] and digital computers emerged in the following decades to replace human computers for tedious calculations.[5] Both disciplines had practical applications during World War II; computers played a major role in wartime cryptography,[6] and quantum physics was essential for the nuclear physics used in the Manhattan Project.[7]

As physicists applied quantum mechanical models to computational problems and swapped digital bits for quantum bits (qubits), the fields of quantum mechanics and computer science began to converge. In 1980, Paul Benioff introduced the quantum Turing machine, which uses quantum theory to describe a simplified computer.[8] When digital computers became faster, physicists faced an exponential increase in overhead when simulating quantum dynamics,[9] prompting Yuri Manin and Richard Feynman to independently suggest that hardware based on quantum phenomena might be more efficient for computer simulation.[10][11] In a 1984 paper, Charles Bennett and Gilles Brassard applied quantum theory to cryptography protocols and demonstrated that quantum key distribution could enhance information security.[13][14]

Quantum algorithms then emerged for solving oracle problems, such as Deutsch's algorithm in 1985,[15] the Bernstein–Vazirani algorithm in 1993,[16] and Simon's algorithm in 1994.[17] These algorithms did not solve practical problems, but demonstrated mathematically that one could gain more information by querying a black box in superposition, sometimes referred to as quantum parallelism. Peter Shor built on these results with his 1994 algorithms for breaking the widely used RSA and Diffie–Hellman encryption protocols, which drew significant attention to the field of quantum computing.[20] In 1996, Grover's algorithm established a quantum speedup for the widely applicable unstructured search problem.[21] The same year, Seth Lloyd proved that quantum computers could simulate quantum systems without the exponential overhead present in classical simulations,[23] validating Feynman's 1982 conjecture.[24]

Over the years, experimentalists have constructed small-scale quantum computers using trapped ions and superconductors.[25] In 1998, a two-qubit quantum computer demonstrated the feasibility of the technology,[26][27] and subsequent experiments have increased the number of qubits and reduced error rates.[25] In 2019, Google AI and NASA announced that they had achieved quantum supremacy with a 54-qubit machine, performing a computation that is impossible for any classical computer.[28][29][30] However, the validity of this claim is still being actively researched.[31][32]

According to some researchers, noisy intermediate-scale quantum (NISQ) machines may have specialized uses in the near future, but noise in quantum gates limits their reliability.[33] The threshold theorem shows how increasing the number of qubits can mitigate errors, but fully fault-tolerant quantum computing remains "a rather distant dream".[33] Estimates suggest that a quantum computer with nearly 3 million fault-tolerant qubits could factor a 2,048-bit integer in five months.[35][36]

In recent years, investment in quantum computing research has increased in the public and private sectors.[37][38] As one consulting firm summarized,[39]

...investment dollars are pouring in, and quantum-computing start-ups are proliferating.... While quantum computing promises to help businesses solve problems that are beyond the reach and speed of conventional high-performance computers, use cases are largely experimental and hypothetical at this early stage.

Computer engineers typically describe a modern computer's operation in terms of classical electrodynamics. Within these "classical" computers, some components (such as semiconductors and random number generators) may rely on quantum behavior, but these components are not isolated from their environment, so any quantum information quickly decoheres. While programmers may depend on probability theory when designing a randomized algorithm, quantum mechanical notions like superposition and interference are largely irrelevant for program analysis.

Quantum programs, in contrast, rely on precise control of coherent quantum systems. Physicists describe these systems mathematically using linear algebra. Complex numbers model probability amplitudes, vectors model quantum states, and matrices model the operations that can be performed on these states. Programming a quantum computer is then a matter of composing operations in such a way that the resulting program computes a useful result in theory and is implementable in practice.

The prevailing model of quantum computation describes the computation in terms of a network of quantum logic gates. This model is a complex linear-algebraic generalization of Boolean circuits.[a]

A memory consisting of n bits of information has 2^n possible states. A vector representing all memory states thus has 2^n entries (one for each state). This vector is viewed as a probability vector and represents the fact that the memory is found in a particular state with a certain probability.

The bits of classical computers are not capable of being in superposition, so one entry must have a value of 1 (i.e., a 100% probability of being in this state) and all other entries must be zero.

In quantum mechanics, probability vectors can be generalized to density operators. The quantum state vector formalism is usually introduced first because it is conceptually simpler, and because it can be used instead of the density matrix formalism for pure states, where the whole quantum system is known.
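
As a small numerical illustration of this distinction (a sketch with arbitrarily chosen numbers, using NumPy), the classical description of one bit is a probability vector, while a pure one-qubit state is a complex amplitude vector from which a density matrix can be formed:

```python
import numpy as np

# Classical 1-bit memory: a probability vector over the states 0 and 1.
p = np.array([0.25, 0.75])               # entries are probabilities and sum to 1
assert np.isclose(p.sum(), 1.0)

# Quantum 1-qubit memory (pure state): a complex amplitude vector.
psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)
assert np.isclose(np.vdot(psi, psi).real, 1.0)   # normalized

# Density operator |psi><psi| -- the more general formalism mentioned above;
# for a pure state it carries the same information as the state vector.
rho = np.outer(psi, psi.conj())
print(np.round(rho, 3))
print("measurement probabilities:", np.abs(psi) ** 2)   # the diagonal of rho
```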

Consider a simple memory consisting of only one quantum bit. When measured, this memory may be found in one of two states: the zero state or the one state. We may represent the state of this memory using Dirac notation, writing the zero state as |0⟩ and the one state as |1⟩; a general quantum state of the memory is then a superposition α|0⟩ + β|1⟩ of these two basis states, with complex amplitudes satisfying |α|² + |β|² = 1.

The state of this one-qubit quantum memory can be manipulated by applying quantum logic gates, analogous to how classical memory can be manipulated with classical logic gates. One important gate for both classical and quantum computation is the NOT gate, which can be represented by the 2×2 matrix X with first row (0, 1) and second row (1, 0).

Applying this gate to a quantum state vector is modelled by matrix multiplication, so that X|0⟩ = |1⟩ and X|1⟩ = |0⟩.
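
These relations are easy to verify numerically; the sketch below uses the standard matrix form of the NOT (Pauli-X) gate and is purely illustrative.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # |0>
ket1 = np.array([0, 1], dtype=complex)   # |1>

X = np.array([[0, 1],
              [1, 0]], dtype=complex)    # the quantum NOT gate

assert np.allclose(X @ ket0, ket1)       # X|0> = |1>
assert np.allclose(X @ ket1, ket0)       # X|1> = |0>

# Applied to a superposition, X acts linearly on the amplitudes.
psi = (ket0 + ket1) / np.sqrt(2)
print(X @ psi)                           # the amplitudes are simply swapped
```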

The mathematics of single-qubit gates can be extended to operate on multi-qubit quantum memories in two important ways. One way is simply to select a qubit and apply that gate to the target qubit while leaving the remainder of the memory unaffected. Another way is to apply the gate to its target only if another part of the memory is in a desired state; the controlled NOT (CNOT) gate is the standard example. These two choices can be illustrated using another example. The possible states of a two-qubit quantum memory are |00⟩, |01⟩, |10⟩, and |11⟩; the CNOT gate flips the second qubit exactly when the first qubit is |1⟩, mapping |10⟩ to |11⟩ and |11⟩ to |10⟩ while leaving |00⟩ and |01⟩ unchanged.
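
Both extension mechanisms can be written out with Kronecker products. The following generic linear-algebra sketch (not any particular framework's API) uses the basis ordering |00⟩, |01⟩, |10⟩, |11⟩ assumed above.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Basis states of a two-qubit memory: |00>, |01>, |10>, |11>.
labels = ("00", "01", "10", "11")
ket = {b: np.eye(4, dtype=complex)[i] for i, b in enumerate(labels)}

# 1) Apply X to the first qubit only, leaving the second untouched: X (x) I.
X_on_first = np.kron(X, I)
assert np.allclose(X_on_first @ ket["00"], ket["10"])

# 2) Apply X to the second qubit only if the first qubit is |1>: the CNOT gate.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
assert np.allclose(CNOT @ ket["10"], ket["11"])   # control set: target flips
assert np.allclose(CNOT @ ket["00"], ket["00"])   # control clear: nothing happens
```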

In summary, a quantum computation can be described as a network of quantum logic gates and measurements. However, any measurement can be deferred to the end of quantum computation, though this deferment may come at a computational cost, so most quantum circuits depict a network consisting only of quantum logic gates and no measurements.

Quantum parallelism refers to the ability of quantum computers to evaluate a function for multiple input values simultaneously. This can be achieved by preparing a quantum system in a superposition of input states, and applying a unitary transformation that encodes the function to be evaluated. The resulting state encodes the function's output values for all input values in the superposition, allowing for the computation of multiple outputs simultaneously. This property is key to the speedup of many quantum algorithms.
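
A toy numerical sketch of this idea follows; the function f, the qubit count and the index convention are arbitrary choices for illustration. A reversible "oracle" U_f maps |x⟩|y⟩ to |x⟩|y ⊕ f(x)⟩, so applying it once to a uniform superposition of inputs produces a state whose amplitudes encode f at every input simultaneously.

```python
import numpy as np

def oracle(f, n):
    """Permutation matrix for U_f on n input qubits plus 1 output qubit:
    |x>|y>  ->  |x>|y XOR f(x)>, with basis index (x << 1) | y."""
    dim = 2 ** (n + 1)
    U = np.zeros((dim, dim))
    for x in range(2 ** n):
        for y in (0, 1):
            U[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1
    return U

n = 3
f = lambda x: x % 2                     # an arbitrary example function

# Uniform superposition over all 2^n inputs, output qubit initialized to |0>.
inputs = np.full(2 ** n, 1 / np.sqrt(2 ** n))
state = np.kron(inputs, np.array([1.0, 0.0]))

state = oracle(f, n) @ state            # one application evaluates f on every input

# Every nonzero amplitude now sits at an index encoding the pair (x, f(x)).
for idx in np.flatnonzero(np.abs(state) > 1e-12):
    print(f"x = {idx >> 1}, output qubit = {idx & 1}")
```

Reading these values out still requires measurement, which collapses the superposition to a single (x, f(x)) pair; the art of quantum algorithm design is arranging interference so that useful global information survives measurement.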

There are a number of models of computation for quantum computing, distinguished by the basic elements into which the computation is decomposed.

A quantum gate array decomposes computation into a sequence of few-qubit quantum gates.

Any quantum computation (which is, in the above formalism, any unitary matrix of size 2^n × 2^n over n qubits) can be represented as a network of quantum logic gates from a fairly small family of gates. A choice of gate family that enables this construction is known as a universal gate set, since a computer that can run such circuits is a universal quantum computer. One common such set includes all single-qubit gates as well as the CNOT gate from above. This means any quantum computation can be performed by executing a sequence of single-qubit gates together with CNOT gates. Though this gate set is infinite, it can be replaced with a finite gate set by appealing to the Solovay–Kitaev theorem.
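
As a small illustration of composing gates from such a set (a numerical sketch only), a Hadamard gate on the first qubit followed by a CNOT already produces an entangled Bell state:

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # a single-qubit gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket00 = np.array([1.0, 0, 0, 0])

# Circuit: H on qubit 0, then CNOT with qubit 0 as control and qubit 1 as target.
bell = CNOT @ np.kron(H, I) @ ket00
print(np.round(bell, 3))   # (|00> + |11>)/sqrt(2): amplitudes [0.707, 0, 0, 0.707]
```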

A measurement-based quantum computer decomposes computation into a sequence of Bell state measurements and single-qubit quantum gates applied to a highly entangled initial state (a cluster state), using a technique called quantum gate teleportation.

An adiabatic quantum computer, based on quantum annealing, decomposes computation into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian, whose ground states contain the solution.[42]

A topological quantum computer decomposes computation into the braiding of anyons in a 2D lattice.[43]

The quantum Turing machine is theoretically important, but the physical implementation of this model is not feasible. All of these models of computation (quantum circuits,[44] one-way quantum computation,[45] adiabatic quantum computation,[46] and topological quantum computation[47]) have been shown to be equivalent to the quantum Turing machine; given a perfect implementation of one such quantum computer, it can simulate all the others with no more than polynomial overhead. This equivalence need not hold for practical quantum computers, since the overhead of simulation may be too large to be practical.

Quantum cryptography could potentially fulfill some of the functions of public key cryptography. Quantum-based cryptographic systems could, therefore, be more secure than traditional systems against quantum hacking.[48]

Progress in finding quantum algorithms typically focuses on this quantum circuit model, though exceptions like the quantum adiabatic algorithm exist. Quantum algorithms can be roughly categorized by the type of speedup achieved over corresponding classical algorithms.[49]

Quantum algorithms that offer more than a polynomial speedup over the best-known classical algorithm include Shor's algorithm for factoring and the related quantum algorithms for computing discrete logarithms, solving Pell's equation, and more generally solving the hidden subgroup problem for abelian finite groups.[49] These algorithms depend on the primitive of the quantum Fourier transform. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely.[50][self-published source?] Certain oracle problems like Simon's problem and the Bernstein–Vazirani problem do give provable speedups, though this is in the quantum query model, a restricted model where lower bounds are much easier to prove; such query speedups do not necessarily translate to speedups for practical problems.

Other problems, including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of certain Jones polynomials, and the quantum algorithm for linear systems of equations have quantum algorithms appearing to give super-polynomial speedups and are BQP-complete. Because these problems are BQP-complete, an equally fast classical algorithm for them would imply that no quantum algorithm gives a super-polynomial speedup, which is believed to be unlikely.

Some quantum algorithms, like Grover's algorithm and amplitude amplification, give polynomial speedups over corresponding classical algorithms.[49] Though these algorithms give comparably modest quadratic speedup, they are widely applicable and thus give speedups for a wide range of problems. Many examples of provable quantum speedups for query problems are related to Grover's algorithm, including Brassard, Høyer, and Tapp's algorithm for finding collisions in two-to-one functions,[52] which uses Grover's algorithm, and Farhi, Goldstone, and Gutmann's algorithm for evaluating NAND trees,[53] which is a variant of the search problem.

A notable application of quantum computation is for attacks on cryptographic systems that are currently in use. Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes).[54] By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. In particular, the RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
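
Only the period-finding step of Shor's algorithm requires a quantum computer; the factors are then recovered with classical number theory. The sketch below is illustrative only: it brute-forces the period of a^x mod N for a deliberately tiny N, standing in for the quantum subroutine, and then applies the standard post-processing.

```python
from math import gcd

def period(a, N):
    """Smallest r > 0 with a^r = 1 (mod N).  A quantum computer finds this
    efficiently; here we brute-force it for a tiny example."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7                    # toy modulus; a must be coprime to N
assert gcd(a, N) == 1

r = period(a, N)                # r = 4 for a = 7, N = 15
if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
    p = gcd(pow(a, r // 2) - 1, N)
    q = gcd(pow(a, r // 2) + 1, N)
    print(r, p, q)              # 4 3 5 -> the nontrivial factors of 15
```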

Identifying cryptographic systems that may be secure against quantum algorithms is an actively researched topic under the field of post-quantum cryptography.[55][56] Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory.[55][57] Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice-based cryptosystems, is a well-studied open problem.[58] It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case,[59] meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size).

The most well-known example of a problem that allows for a polynomial quantum speedup is unstructured search, which involves finding a marked item out of a list of n items in a database. This can be solved by Grover's algorithm using O(√n) queries to the database, quadratically fewer than the Ω(n) queries required for classical algorithms. In this case, the advantage is not only provable but also optimal: it has been shown that Grover's algorithm gives the maximal possible probability of finding the desired element for any number of oracle lookups.

Problems that can be efficiently addressed with Grover's algorithm have the following properties:[60][61] there is no exploitable structure in the set of possible answers, the number of possible answers to check is the same as the number of inputs to the algorithm, and there exists a Boolean function that evaluates each input and determines whether it is the correct answer.

For problems with all these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs (or elements in the database), as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied[62] is the Boolean satisfiability problem, where the database through which the algorithm iterates is that of all possible answers. An example and possible application of this is a password cracker that attempts to guess a password. Breaking symmetric ciphers with this algorithm is of interest to government agencies.[63]
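
The quadratic behavior is easy to see in a small state-vector simulation. The sketch below is illustrative (the marked item and qubit count are arbitrary, and it is not drawn from any of the cited sources): roughly (π/4)·√N Grover iterations concentrate nearly all of the amplitude on the marked entry.

```python
import numpy as np

n_qubits = 6
N = 2 ** n_qubits
marked = 42                                   # index of the item we are searching for

state = np.full(N, 1 / np.sqrt(N))            # uniform superposition over N items

iterations = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1                       # oracle: flip the sign of the marked amplitude
    state = 2 * state.mean() - state          # diffusion: reflect every amplitude about the mean

print("iterations:", iterations)              # 6 for N = 64, versus ~N/2 classical guesses
print("P(marked):", abs(state[marked]) ** 2)  # close to 1
```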

Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many[who?] believe quantum simulation will be one of the most important applications of quantum computing.[64] Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside a collider.[65]

Quantum simulations might be used to predict future paths of particles and protons under superposition in the double-slit experiment.[66]

About 2% of the annual global energy output is used for nitrogen fixation to produce ammonia for the Haber process in the agricultural fertilizer industry (even though naturally occurring organisms also produce ammonia). Quantum simulations might be used to understand this process and increase the energy efficiency of production.[67]

Quantum annealing relies on the adiabatic theorem to undertake calculations. A system is placed in the ground state for a simple Hamiltonian, which slowly evolves to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough the system will stay in its ground state at all times through the process.
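
A toy numerical illustration of this interpolation, with an arbitrarily chosen two-qubit "cost" Hamiltonian (this is a sketch of the mathematics, not a simulation of real annealing hardware): track the spectrum of H(s) = (1 − s)·H0 + s·H1 as s sweeps from 0 to 1, since the minimum spectral gap along the sweep dictates how slowly the evolution must proceed.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
I2 = np.eye(2)

# Initial Hamiltonian: transverse field with an easy-to-prepare ground state.
H0 = -(np.kron(sx, I2) + np.kron(I2, sx))

# Final Hamiltonian: a diagonal cost function whose ground state encodes the answer.
H1 = np.diag([3.0, 1.0, 2.0, 0.0])            # minimum at basis state |11>

min_gap = np.inf
for s in np.linspace(0, 1, 101):
    H = (1 - s) * H0 + s * H1
    evals = np.linalg.eigvalsh(H)             # eigenvalues in ascending order
    min_gap = min(min_gap, evals[1] - evals[0])

ground = np.linalg.eigh(H1)[1][:, 0]          # ground state of the final Hamiltonian
print("minimum spectral gap along the sweep:", round(min_gap, 3))
print("final ground state (basis order 00, 01, 10, 11):", np.round(ground, 3))
```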

Since quantum computers can produce outputs that classical computers cannot produce efficiently, and since quantum computation is fundamentally linear algebraic, some express hope in developing quantum algorithms that can speed up machine learning tasks.[68][69]

For example, the quantum algorithm for linear systems of equations, or "HHL Algorithm", named after its discoverers Harrow, Hassidim, and Lloyd, is believed to provide speedup over classical counterparts.[70][69] Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks.[71][72][73]

In the field of computational biology, quantum computing has the potential to play a big role in solving many biological problems. Given that computational biology already relies heavily on generic data modeling and storage, applications of quantum computing to computational biology are expected to arise as well.[74]

Deep generative chemistry models emerge as powerful tools to expedite drug discovery. However, the immense size and complexity of the structural space of all possible drug-like molecules pose significant obstacles, which could be overcome in the future by quantum computers. Quantum computers are naturally good for solving complex quantum many-body problems[75] and thus may be instrumental in applications involving quantum chemistry. Therefore, one can expect that quantum-enhanced generative models[76] including quantum GANs[77] may eventually be developed into ultimate generative chemistry algorithms.

There are a number of technical challenges in building a large-scale quantum computer.[78] Physicist David DiVincenzo has listed these requirements for a practical quantum computer:[79] a scalable physical system with well-characterized qubits; the ability to initialize the qubits to a simple fiducial state; decoherence times much longer than the gate-operation time; a universal set of quantum gates; and the ability to reliably measure individual qubits.

Sourcing parts for quantum computers is also very difficult. Superconducting quantum computers, like those constructed by Google and IBM, need helium-3, a nuclear research byproduct, and special superconducting cables made only by the Japanese company Coax Co.[80]

The control of multi-qubit systems requires the generation and coordination of a large number of electrical signals with tight and deterministic timing resolution. This has led to the development of quantum controllers which enable interfacing with the qubits. Scaling these systems to support a growing number of qubits is an additional challenge.[81]

One of the greatest challenges involved with constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature.[82] Currently, some quantum computers require their qubits to be cooled to 20 millikelvin (usually using a dilution refrigerator[83]) in order to prevent significant decoherence.[84] A 2020 study argues that ionizing radiation such as cosmic rays can nevertheless cause certain systems to decohere within milliseconds.[85]

As a result, time-consuming tasks may render some quantum algorithms inoperable, as attempting to maintain the state of qubits for a long enough duration will eventually corrupt the superpositions.[86]

These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.

As described in the threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often-cited figure for the required error rate in each gate for fault-tolerant computation is 10^−3, assuming the noise is depolarizing.

Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of digits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 bits without error correction.[87] With error correction, the figure would rise to about 10^7 bits. Computation time is about L^2 or about 10^7 steps and at 1 MHz, about 10 seconds. However, other careful estimates[35][36] lower the qubit count to 3 million for factoring a 2,048-bit integer in five months on a trapped-ion quantum computer.

Another approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads, and relying on braid theory to form stable logic gates.[88][89]

Quantum supremacy is a term coined by John Preskill referring to the engineering feat of demonstrating that a programmable quantum device can solve a problem beyond the capabilities of state-of-the-art classical computers.[90][91][92] The problem need not be useful, so some view the quantum supremacy test only as a potential future benchmark.[93]

In October 2019, Google AI Quantum, with the help of NASA, became the first to claim to have achieved quantum supremacy by performing calculations on the Sycamore quantum computer more than 3,000,000 times faster than they could be done on Summit, generally considered the world's fastest computer.[94][95][96] This claim has been subsequently challenged: IBM has stated that Summit can perform samples much faster than claimed,[97][98] and researchers have since developed better algorithms for the sampling problem used to claim quantum supremacy, giving substantial reductions to the gap between Sycamore and classical supercomputers[99][100][101] and even beating it.[102][103][104]

In December 2020, a group at USTC implemented a type of Boson sampling on 76 photons with a photonic quantum computer, Jiuzhang, to demonstrate quantum supremacy.[105][106][107] The authors claim that a classical contemporary supercomputer would require a computational time of 600 million years to generate the number of samples their quantum processor can generate in 20 seconds.[108]

On November 16, 2021, at the quantum computing summit, IBM presented a 127-qubit microprocessor named IBM Eagle.[109]

Some researchers have expressed skepticism that scalable quantum computers could ever be built, typically because of the issue of maintaining coherence at large scales, but also for other reasons.

Bill Unruh doubted the practicality of quantum computers in a paper published in 1994.[110] Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle.[111] Skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved.[112][113][114] Physicist Mikhail Dyakonov has likewise expressed skepticism about the feasibility of quantum computing.

For physically implementing a quantum computer, many different candidates are being pursued, distinguished by the physical system used to realize the qubits; these include superconducting circuits, trapped ions, neutral atoms, photonic systems, semiconductor spin qubits, and topological qubits, among others.

The large number of candidates demonstrates that quantum computing, despite rapid progress, is still in its infancy.[143]

Any computational problem solvable by a classical computer is also solvable by a quantum computer. Intuitively, this is because it is believed that all physical phenomena, including the operation of classical computers, can be described using quantum mechanics, which underlies the operation of quantum computers.

Conversely, any problem solvable by a quantum computer is also solvable by a classical computer. It is possible to simulate both quantum and classical computers manually with just some paper and a pen, if given enough time. More formally, any quantum computer can be simulated by a Turing machine. In other words, quantum computers provide no additional power over classical computers in terms of computability. This means that quantum computers cannot solve undecidable problems like the halting problem, and the existence of quantum computers does not disprove the Church–Turing thesis.

While quantum computers cannot solve any problems that classical computers cannot already solve, it is suspected that they can solve certain problems faster than classical computers. For instance, it is known that quantum computers can efficiently factor integers, while this is not believed to be the case for classical computers.

The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP, for "bounded error, quantum, polynomial time". More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with an error probability of at most 1/3. As a class of probabilistic problems, BQP is the quantum counterpart to BPP ("bounded error, probabilistic, polynomial time"), the class of problems that can be solved by polynomial-time probabilistic Turing machines with bounded error. It is known that BPP ⊆ BQP, and it is widely suspected that BPP ⊊ BQP, which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity.

The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P ⊆ BQP ⊆ PSPACE; that is, all problems that can be efficiently solved by a deterministic classical computer can also be efficiently solved by a quantum computer, and all problems that can be efficiently solved by a quantum computer can also be solved by a deterministic classical computer with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems that are believed not to be in P are also in BQP (integer factorization and the discrete logarithm problem are both in NP, for example). It is suspected that NP ⊄ BQP; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems (if an NP-complete problem were in BQP, then it would follow from NP-hardness that all problems in NP are in BQP).[147]

The relationship of BQP to the basic classical complexity classes can be summarized as follows: P ⊆ BPP ⊆ BQP ⊆ PP ⊆ PSPACE.

It is also known that BQP is contained in the complexity class #P (or more precisely in the associated class of decision problems P^#P),[147] which is a subclass of PSPACE.

It has been speculated that further advances in physics could lead to even faster computers. For instance, it has been shown that a non-local hidden variable quantum computer based on Bohmian mechanics could implement a search of an N-item database in at most O(∛N) steps, a slight speedup over Grover's algorithm, which runs in O(√N) steps. Note, however, that neither search method would allow quantum computers to solve NP-complete problems in polynomial time.[148] Theories of quantum gravity, such as M-theory and loop quantum gravity, may allow even faster computers to be built. However, defining computation in these theories is an open problem due to the problem of time; that is, within these physical theories there is currently no obvious way to describe what it means for an observer to submit input to a computer at one point in time and then receive output at a later point in time.[149][150]


Quantum Computing – Intel

Quantum Computing Research

Quantum computing employs the properties of quantum physics like superposition and entanglement to perform computation. Traditional transistors use binary encoding of data represented electrically as on or off states. Quantum bits or qubits can simultaneously operate in multiple states enabling unprecedented levels of parallelism and computing efficiency.

Today's quantum systems only include tens or hundreds of entangled qubits, limiting them from solving real-world problems. To achieve quantum practicality, commercial quantum systems need to scale to over a million qubits and overcome daunting challenges like qubit fragility and software programmability. Intel Labs is working to overcome these challenges with the help of industry and academic partners and has made significant progress.

First, Intel is leveraging its expertise in high-volume transistor manufacturing to develop hot silicon spin qubits, much smaller computing devices that operate at higher temperatures. Second, the Horse Ridge II cryogenic quantum control chip provides tighter integration. And third, the cryoprober enables high-volume testing that is helping to accelerate commercialization.

Even though we may be years away from large-scale implementation, quantum computing promises to enable breakthroughs in materials, chemicals and drug design, financial and climate modeling, and cryptography.


What is Quantum Computing? | IBM

Quantum computers are elegant machines, smaller and requiring less energy than supercomputers. An IBM Quantum processor is a wafer not much bigger than the one found in a laptop. And a quantum hardware system is about the size of a car, made up mostly of cooling systems to keep the superconducting processor at its ultra-cold operational temperature.

A classical processor uses bits to perform its operations. A quantum computer uses qubits (CUE-bits) to run multidimensional quantum algorithms.

Superfluids: Your desktop computer likely uses a fan to get cold enough to work. Our quantum processors need to be very cold, about a hundredth of a degree above absolute zero. To achieve this, we use super-cooled superfluids to create superconductors.

Superconductors: At those ultra-low temperatures certain materials in our processors exhibit another important quantum mechanical effect: electrons move through them without resistance. This makes them "superconductors."

When electrons pass through superconductors they match up, forming "Cooper pairs." These pairs can carry a charge across barriers, or insulators, through a process known as quantum tunneling. Two superconductors placed on either side of an insulator form a Josephson junction.

Control: Our quantum computers use Josephson junctions as superconducting qubits. By firing microwave photons at these qubits, we can control their behavior and get them to hold, change, and read out individual units of quantum information.

Superposition: A qubit itself isn't very useful. But it can perform an important trick: placing the quantum information it holds into a state of superposition, which represents a combination of all possible configurations of the qubit. Groups of qubits in superposition can create complex, multidimensional computational spaces. Complex problems can be represented in new ways in these spaces.

Entanglement: Entanglement is a quantum mechanical effect that correlates the behavior of two separate things. When two qubits are entangled, changes to one qubit directly impact the other. Quantum algorithms leverage those relationships to find solutions to complex problems.


What Is Quantum Computing? | NVIDIA Blog

Twenty-seven years before Steve Jobs unveiled a computer you could put in your pocket, physicist Paul Benioff published a paper showing it was theoretically possible to build a much more powerful system you could hide in a thimble: a quantum computer.

Named for the subatomic physics it aimed to harness, the concept Benioff described in 1980 still fuels research today, including efforts to build the next big thing in computing: a system that could make a PC look in some ways as quaint as an abacus.

Richard Feynman, a Nobel Prize winner whose wit-laced lectures brought physics to a broad audience, helped establish the field, sketching out how such systems could simulate quirky quantum phenomena more efficiently than traditional computers.

Quantum computing is a sophisticated approach to making parallel calculations, using the physics that governs subatomic particles to replace the more simplistic transistors in today's computers.

Quantum computers calculate using qubits, computing units that can be on, off or any value in between, instead of the bits in traditional computers that are either on or off, one or zero. The qubit's ability to live in the in-between state, called superposition, adds a powerful capability to the computing equation, making quantum computers superior for some kinds of math.

Using qubits, quantum computers could buzz through calculations that would take classical computers a loooong time, if they could even finish them.

For example, today's computers use eight bits to represent any number between 0 and 255. Thanks to features like superposition, a quantum computer can use eight qubits to represent every number between 0 and 255, simultaneously.

It's a feature like parallelism in computing: All possibilities are computed at once rather than sequentially, providing tremendous speedups.

So, while a classical computer steps through long division calculations one at a time to factor a humongous number, a quantum computer can get the answer in a single step. Boom!

That means quantum computers could reshape whole fields, like cryptography, that are based on factoring what are today impossibly large numbers.

That could be just the start. Some experts believe quantum computers will bust through limits that now hinder simulations in chemistry, materials science and anything involving worlds built on the nano-sized bricks of quantum mechanics.

Quantum computers could even extend the life of semiconductors by helping engineers create more refined simulations of the quantum effects they're starting to find in today's smallest transistors.

Indeed, experts say quantum computers ultimately won't replace classical computers, they'll complement them. And some predict quantum computers will be used as accelerators much as GPUs accelerate today's computers.

Don't expect to build your own quantum computer like a DIY PC with parts scavenged from discount bins at the local electronics shop.

The handful of systems operating today typically require refrigeration that creates working environments just north of absolute zero. They need that computing arctic to handle the fragile quantum states that power these systems.

In a sign of how hard constructing a quantum computer can be, one prototype suspends an atom between two lasers to create a qubit. Try that in your home workshop!

Quantum computing takes nano-Herculean muscles to create something called entanglement. That's when two or more qubits exist in a single quantum state, a condition sometimes measured by electromagnetic waves just a millimeter wide.

Crank up that wave with a hair too much energy and you lose entanglement or superposition, or both. The result is a noisy state called decoherence, the equivalent in quantum computing of the blue screen of death.

A handful of companies such as Alibaba, Google, Honeywell, IBM, IonQ and Xanadu operate early versions of quantum computers today.

Today they provide tens of qubits. But qubits can be noisy, making them sometimes unreliable. To tackle real-world problems reliably, systems need tens or hundreds of thousands of qubits.

Experts believe it could be a couple decades before we get to a high-fidelity era when quantum computers are truly useful.

Predictions of when we'll reach so-called quantum computing supremacy, the time when quantum computers execute tasks classical ones can't, are a matter of lively debate in the industry.

The good news is the world of AI and machine learning put a spotlight on accelerators like GPUs, which can perform many of the types of operations quantum computers would calculate with qubits.

So, classical computers are already finding ways to host quantum simulations with GPUs today. For example, NVIDIA ran a leading-edge quantum simulation on Selene, our in-house AI supercomputer.

In its GTC keynote, NVIDIA announced the cuQuantum SDK to speed up quantum circuit simulations running on GPUs. Early work suggests cuQuantum will be able to deliver orders-of-magnitude speedups.

The SDK takes an agnostic approach, providing a choice of tools users can pick to best fit their approach. For example, the state vector method provides high-fidelity results, but its memory requirements grow exponentially with the number of qubits.

That creates a practical limit of roughly 50 qubits on today's largest classical supercomputers. Nevertheless, we've seen great results using cuQuantum to accelerate quantum circuit simulations that use this method.
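
The memory wall behind that limit is simple arithmetic (a back-of-the-envelope sketch, assuming one complex double, i.e. 16 bytes, per amplitude):

```python
# Memory needed to hold a full n-qubit state vector at 16 bytes per complex amplitude.
for n in (30, 40, 50):
    gib = 2 ** n * 16 / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 30 qubits: 16 GiB, 40 qubits: 16,384 GiB (~16 TiB), 50 qubits: 16,777,216 GiB (~16 PiB)
```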

Researchers from the Jülich Supercomputing Centre will provide a deep dive on their work with the state vector method in session E31941 at GTC (free with registration).

A newer approach, tensor network simulation, uses less memory and more computation to perform similar work.

Using this method, NVIDIA and Caltech accelerated a state-of-the-art quantum circuit simulator with cuQuantum running on NVIDIA A100 Tensor Core GPUs. It generated a sample from a full-circuit simulation of the Google Sycamore circuit in 9.3 minutes on Selene, a task that 18 months ago experts thought would take days using millions of CPU cores.

"Using the Cotengra/Quimb packages, NVIDIA's newly announced cuQuantum SDK, and the Selene supercomputer, we've generated a sample of the Sycamore quantum circuit at depth m=20 in record time: less than 10 minutes," said Johnnie Gray, a research scientist at Caltech.

"This sets the benchmark for quantum circuit simulation performance and will help advance the field of quantum computing by improving our ability to verify the behavior of quantum circuits," said Garnet Chan, a chemistry professor at Caltech whose lab hosted the work.

NVIDIA expects the performance gains and ease of use of cuQuantum will make it a foundational element in every quantum computing framework and simulator at the cutting edge of this research.

Sign up to show early interest in cuQuantum.

Learn more about quantum computing on the NVIDIA Technical Blog.


The AI of digitalization – Bits&Chips

Jan Bosch is a research center director, professor, consultant and angel investor in start-ups. You can contact him at jan@janbosch.com.

This article is the last of four where I explore different dimensions of digital transformation. Earlier, I discussed business models, product upgrades and data exploitation. The fourth dimension is concerned with artificial intelligence. Similar to the other dimensions, our research showed that there's a clear evolution path that companies go through as they transition from being traditional companies to becoming digital ones.

In the first stage, the company is still focused on data analytics. All data is processed for the sole purpose of human consumption and interpretation. At this point, things are all about dashboards, visualizations and stakeholder views.

In the second stage, the first machine learning (ML) or deep learning (DL) models are starting to be developed and deployed. The training of the models is based on static data sets that have been assembled at one point in time and that don't evolve unless there's an explicit decision taken. When that happens, a new data set is assembled and used for training.

In the third stage, DevOps and MLOps are merged in the sense that there's a continuous retraining of models based on the most recent data. This data is no longer a data set, but rather a window over a data stream that's used for training and continuous re-training. Depending on the domain and the rate of change in the underlying data, the MLOps loop is either aligned with the DevOps loop or is executed more or less frequently. For instance, when using ML/DL for house price prediction in a real-estate market, it's important to frequently retrain the model based on the most recent sales data as house prices change continuously.
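
As a minimal sketch of this third stage (the model, window size, retraining cadence and synthetic data stream below are all invented for illustration), the training set becomes a sliding window over the stream and the model is periodically refitted:

```python
import numpy as np
from collections import deque

WINDOW = 500                          # retrain on the most recent 500 observations
window = deque(maxlen=WINDOW)         # older samples fall out automatically

def retrain(samples):
    """Refit a simple linear model y = a*x + b on the current window."""
    x, y = np.array(list(samples)).T
    return np.polyfit(x, y, deg=1)    # returns (slope, intercept)

rng = np.random.default_rng(0)
model = None
for t in range(5_000):                # stand-in for the live data stream
    x = rng.uniform(0, 10)
    drift = 0.001 * t                 # the underlying relation slowly drifts over time
    y = (2 + drift) * x + rng.normal(scale=0.5)
    window.append((x, y))

    if t % 100 == 0 and len(window) >= 50:   # the periodic MLOps retraining loop
        model = retrain(window)

print("latest fitted slope:", round(model[0], 2))   # tracks the drift instead of staying at 2
```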

Especially in the software-intensive embedded systems industry, as ML/DL models are deployed in each product instance, the next step tends to be the adoption of federated approaches. Rather than conducting all training centrally, the company adopts federated learning approaches where all product instances are involved in training and model updates are shared between product instances. This allows for localization and customization, as specific regions and users may want the system to behave differently. Depending on the approach to federated learning, it's feasible to allow for this. For example, different drivers want their adaptive cruise control system to behave in different ways. Some want the system to take a more careful approach, whereas others would like to see a more aggressive way of braking and accelerating. Each product instance can, over time, adjust itself in response to driver feedback.

Finally, we reach the automated experimentation stage, where the system fully autonomously experiments with its own behavior with the intent of improving certain success metrics. Whereas in earlier stages humans conduct A/B experiments or similar, and humans are the ones coming up with the A and B alternatives, here it's the system itself that generates alternatives, deploys them, measures the effect and decides on next steps. Although the examples in this category are few and far between, we've been involved in, among others, cases where we use a system of this type to explore configuration parameter settings (most systems have thousands) in order to optimize the system's performance automatically.

Using AI is not a binary step, but a process that evolves over time

Concluding, digital transformation is a complex, multi-dimensional challenge. One of the dimensions is the adoption of AI/ML/DL. Using AI is not a binary step, but rather a process that evolves over time and proceeds through predefined steps. Deploying AI allows for automation of tasks that couldn't be automated earlier and for improving the outcomes of automated processes through smart, automated decisions. Once you have software, you can generate data. Once you have data, you can employ AI. Once you have AI, you can truly capitalize on the potential of digitalization.

In his course Speed, data and ecosystems, Jan Bosch provides you with a holistic framework that offers strategic guidance on how you can successfully identify and address the key challenges to excel in a software-driven world.


What Is So Special about Nano? – National Nanotechnology Initiative

Nanotechnology is not simply working at ever-smaller dimensions; rather, working at the nanoscale enables scientists to understand and utilize the unique physical, chemical, mechanical, and optical properties of materials that occur at this scale.

When particles are created with dimensions of about 1–100 nanometers, the material's properties can change significantly from those at larger scales. This is the size scale where quantum effects can rule the behavior and properties of particles. A fascinating and powerful result of the quantum effects of the nanoscale is the concept of tunability of properties. That is, by changing the size of the particle, a scientist can literally fine-tune a material property of interest. At the nanoscale, properties such as melting point, fluorescence, electrical conductivity, magnetic permeability, and chemical reactivity can change as a function of the size of the particle.

Nanoscale gold illustrates the unique properties that occur at the nanoscale. Nanoscale gold can appear red or purple depending on the size of the particle. Gold nanoparticles interact differently with light compared to larger-scale gold particles due to quantum effects.

Nanoscale materials have far larger surface area-to-volume ratio than bulk materials. As surface area per volume increases, materials can become more reactive.

A simple thought experiment shows why nanoparticles have phenomenally high surface areas. A solid cube of a material 1 cm on a side has 6 square centimeters of surface area, about equal to one side of half a stick of gum. But if that volume of 1 cubic centimeter were filled with cubes 1 mm on a side, that would be 1,000 millimeter-sized cubes (10 x 10 x 10), each one of which has a surface area of 6 square millimeters, for a total surface area of 60 square centimeters, slightly larger than a credit card. When the 1 cubic centimeter is filled with micrometer-sized cubes, a trillion (10^12) of them, each with a surface area of 6 square micrometers, the total surface area amounts to 6 square meters, or somewhat smaller than the footprint of a small car. And when that single cubic centimeter of volume is filled with 1-nanometer-sized cubes, 10^21 of them, each with an area of 6 square nanometers, their total surface area comes to 6,000 square meters. In other words, a single cubic centimeter of cubic nanoparticles has a total surface area that is even bigger than the area of a football field! Because of this higher surface area, more of the material is exposed to the surrounding environment, which can greatly speed chemical reactions of these materials, or reactivity. One benefit of greater surface area, and improved reactivity, in nanostructured materials is that they have helped create better catalysts. An everyday example of catalysis is the catalytic converter in a car, which cleans the exhaust and reduces air pollution. The higher surface area of nanoscale catalysts has enabled modern catalytic converters to use far less precious metal than was previously needed to achieve the same reductions in polluting gases. Engineers are taking advantage of the increased reactivity at the nanoscale to design better batteries, fuel cells, and catalysts for cleaner and safer energy generation and storage systems.
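
The arithmetic in that thought experiment is easy to reproduce (a quick sketch; the cube sizes match those in the text):

```python
# Total surface area when a 1 cm^3 volume is filled with cubes of side s (in meters).
for label, s in [("1 cm", 1e-2), ("1 mm", 1e-3), ("1 µm", 1e-6), ("1 nm", 1e-9)]:
    n_cubes = (1e-2 / s) ** 3              # how many little cubes fill 1 cm^3
    total_area_m2 = n_cubes * 6 * s ** 2   # each cube contributes 6 faces of area s^2
    print(f"{label:>5} cubes: {total_area_m2:.4g} m^2 of surface area")
# 1 cm: 6e-04 m^2 (6 cm^2), 1 mm: 0.006 m^2 (60 cm^2), 1 µm: 6 m^2, 1 nm: 6000 m^2
```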

Over millennia, nature has perfected the art of biology at the nanoscale. Many of the inner workings of cells naturally occur at the nanoscale. For example, hemoglobin, the protein that carries oxygen through the body, is 5.5 nanometers in diameter. A strand of DNA, one of the building blocks of life, is only about 2 nanometers in diameter.

Drawing on the natural nanoscale of biology, many medical researchers are working on designing tools, treatments, and therapies that are more precise and personalized than conventional ones. Nanomedicine formulations can be designed to deliver therapeutics directly to a specific site within the body, which can lower the dose required to achieve therapeutic effect and reduce adverse side effects. Nanomaterials also are being used to develop affordable and easy-to-use diagnostics and monitoring devices for a broad range of applications that includes glucose monitoring, pregnancy tests, and viral detection. Advanced nanomaterials are used to improve chemical, physical, and mechanical performance of prosthetics materials, with benefits that can include better biocompatibility, strength-to-weight ratios, and antimicrobial properties to reduce risk of infection.

Other fields are also benefiting from an understanding of natural nanotechnology. Some scientists are exploring the use of molecular self-assembly, self-organization, and quantum mechanics to create novel computing platforms. Other researchers are using nanomaterials to develop nature-inspired systems for artificial photosynthesis to harness solar energy.


What Is Nanotechnology? | National Nanotechnology Initiative

Nanotechnology is science, engineering, and technology conducted at the nanoscale, which is about 1 to 100 nanometers.

Physicist Richard Feynman, the father of nanotechnology.

Nanoscience and nanotechnology are the study and application of extremely small things and can be used across all the other science fields, such as chemistry, biology, physics, materials science, and engineering.

The ideas and concepts behind nanoscience and nanotechnology started with a talk entitled "There's Plenty of Room at the Bottom" by physicist Richard Feynman at an American Physical Society meeting at the California Institute of Technology (CalTech) on December 29, 1959, long before the term nanotechnology was used. In his talk, Feynman described a process in which scientists would be able to manipulate and control individual atoms and molecules. Over a decade later, in his explorations of ultraprecision machining, Professor Norio Taniguchi coined the term nanotechnology. It wasn't until 1981, with the development of the scanning tunneling microscope that could "see" individual atoms, that modern nanotechnology began.

It's hard to imagine just how small nanotechnology is. One nanometer is a billionth of a meter, or 10^−9 of a meter. Here are a few illustrative examples: there are 25,400,000 nanometers in an inch; a sheet of newspaper is about 100,000 nanometers thick; and a human hair is roughly 80,000 to 100,000 nanometers wide.

Nanoscience and nanotechnology involve the ability to see and to control individual atoms and molecules. Everything on Earth is made up of atoms: the food we eat, the clothes we wear, the buildings and houses we live in, and our own bodies.

But something as small as an atom is impossible to see with the naked eye. In fact, it's impossible to see with the microscopes typically used in high school science classes. The microscopes needed to see things at the nanoscale were invented in the early 1980s.

Once scientists had the right tools, such as the scanning tunneling microscope (STM) and the atomic force microscope (AFM), the age of nanotechnology was born.

Although modern nanoscience and nanotechnology are quite new, nanoscale materials were used for centuries. Alternate-sized gold and silver particles created colors in the stained glass windows of medieval churches hundreds of years ago. The artists back then just didn't know that the process they used to create these beautiful works of art actually led to changes in the composition of the materials they were working with.

Today's scientists and engineers are finding a wide variety of ways to deliberately make materials at the nanoscale to take advantage of their enhanced properties, such as higher strength, lighter weight, increased control of the light spectrum, and greater chemical reactivity than their larger-scale counterparts.


Nano (cryptocurrency) – Wikipedia

Cryptocurrency

Nano (Abbreviation: XNO; sign: ) is a cryptocurrency. The currency is based on a directed acyclic graph data structure and distributed ledger, making it possible for Nano to work without intermediaries. To agree on what transactions to commit (i.e. achieving consensus), it uses a voting system with weight based on the amount of currency accounts hold.[2][3]

Nano was launched in October 2015 by Colin LeMahieu to address the Bitcoin scalability problem and created with the intention to reduce confirmation times and fees.[4] The currency implements no-fee transactions and achieves confirmation in under one second.[5]

Colin LeMahieu started development of Nano in 2014 under its original name "RaiBlocks".[1][6] A year later, RaiBlocks was distributed for free through a captcha-secured faucet.[7] In 2017, after 126,248,289 RaiBlocks were distributed, the faucet shut down. This fixed the total supply to 133,248,297 RaiBlocks, after an addition of a 7,000,000 RaiBlocks developer fund.

On 31 January 2018,[citation needed] RaiBlocks was rebranded to Nano.[8]

On 9 February 2018, the Italian cryptocurrency exchange BitGrail announced its hack and eventual shutdown.[9] Users were prevented from accessing assets stored on the platform, which were collectively worth 17 million Nano.[10] The victims launched a class-action lawsuit for recoupment against BitGrail owner Francesco Firano in the Florence Courthouse. In January 2019, the exchange was found liable, having failed to implement safeguards and to report losses.[11] The Italian police branch Network Operations Command (Italy) alleged the BitGrail founder had conducted fraud.[12] Nano prices had been around $10 prior to the hack and fell to $0.10 afterwards.[13]

Nano uses a block-lattice data structure, where every account has its own blockchain for storing transactions.[14][15] It is the first cryptocurrency to use a directed acyclic graph data structure,[16] by having a "block" consisting of only one transaction and the account's current balance.[17][14]
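
The block-lattice idea can be sketched in a few lines of code. The snippet below is a conceptual illustration only (invented class and field names, simplified hashing), not Nano's actual data structures: each account owns its own chain, every block records a single send or receive, and every block stores the account's resulting balance.

```python
# Toy illustration of a block-lattice: one chain per account, one
# transaction per block, each block storing the resulting balance.
# Conceptual sketch only, not Nano's real implementation.
from dataclasses import dataclass, field
from typing import List, Optional
import hashlib


@dataclass
class Block:
    account: str             # account this block belongs to
    previous: Optional[str]  # hash of the previous block on this account's chain
    kind: str                # "send" or "receive"
    amount: int
    counterparty: str        # destination (send) or source (receive) account
    balance: int             # account balance after applying this block

    def hash(self) -> str:
        payload = f"{self.account}|{self.previous}|{self.kind}|{self.amount}|{self.counterparty}|{self.balance}"
        return hashlib.sha256(payload.encode()).hexdigest()


@dataclass
class AccountChain:
    account: str
    blocks: List[Block] = field(default_factory=list)

    @property
    def balance(self) -> int:
        return self.blocks[-1].balance if self.blocks else 0

    def append(self, kind: str, amount: int, counterparty: str) -> Block:
        new_balance = self.balance - amount if kind == "send" else self.balance + amount
        assert new_balance >= 0, "cannot send more than the current balance"
        prev = self.blocks[-1].hash() if self.blocks else None
        block = Block(self.account, prev, kind, amount, counterparty, new_balance)
        self.blocks.append(block)
        return block


# A transfer touches two chains: a send block on the sender's chain
# and a matching receive block on the recipient's chain.
alice, bob = AccountChain("alice"), AccountChain("bob")
alice.append("receive", 100, "faucet")   # seed alice with funds
send = alice.append("send", 40, "bob")
bob.append("receive", send.amount, "alice")
print(alice.balance, bob.balance)        # 60 40
```

Because a transfer only touches the two chains involved, no single global chain of all transactions is needed, which is the property the paragraph above describes.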

Consensus is reached through an algorithm similar to proof of stake.[18] In this system, the voting weight is distributed to accounts based on the amount of Nano they hold; accounts then freely delegate this weight to a peer (node) of their choice. No mining of cryptocurrency is needed.[19]

If two contradictory transactions are broadcast to the network, indicating a double-spend attempt, nodes will then vote for one of the transactions. Afterwards, they broadcast their vote to the other nodes for strictly informational purposes. The first transaction to reach 67% of the total voting weight is confirmed, while the other is discarded.[20][non-primary source needed]
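
A minimal sketch of that fork-resolution rule follows. It is illustrative only (the function, the vote format, and applying a fixed 67% quorum to a known total weight are simplifying assumptions, not Nano's actual node logic): representatives vote with the weight delegated to them, and the first of the two conflicting transactions to accumulate enough weight wins.

```python
# Toy sketch of weight-based fork resolution between two conflicting
# transactions (a double-spend attempt). Illustrative only.

def resolve_fork(votes, total_weight, quorum=0.67):
    """votes: list of (representative_weight, chosen_tx) in arrival order.

    Returns the first transaction whose accumulated weight reaches the
    quorum, or None if neither side gets there.
    """
    tally = {}
    for weight, tx in votes:
        tally[tx] = tally.get(tx, 0) + weight
        if tally[tx] >= quorum * total_weight:
            return tx   # confirmed; the conflicting transaction is discarded
    return None


# Three representatives holding 50, 30 and 20 units of delegated weight
# vote on conflicting transactions A and B.
votes = [(50, "A"), (30, "B"), (20, "A")]
print(resolve_fork(votes, total_weight=100))   # "A" (70% >= 67%)
```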

More:

Nano (cryptocurrency) - Wikipedia

Nano- – Wikipedia

From Wikipedia, the free encyclopedia

Nano (symbol n) is a unit prefix meaning "one billionth". Used primarily with the metric system, this prefix denotes a factor of 10⁻⁹, or 0.000000001. It is frequently encountered in science and electronics for prefixing units of time and length.

The prefix derives from the Greek νᾶνος (Latin nanus), meaning "dwarf". The General Conference on Weights and Measures (CGPM) officially endorsed the usage of nano as a standard prefix in 1960.

When used as a prefix for something other than a unit of measure (as for example in words like "nanoscience"), nano refers to nanotechnology, or means "on a scale of nanometres" (nanoscale).

A nanosecond (ns) is a unit of time in the International System of Units (SI) equal to one billionth of a second, that is, 1/1,000,000,000 of a second, or 10⁻⁹ seconds.

The term combines the SI prefix nano-, indicating a one-billionth (10⁻⁹) submultiple of an SI unit (e.g. nanogram, nanometre), with second, the base unit of time in the SI.

A nanosecond is equal to 1000 picoseconds or 1/1000 of a microsecond. Time intervals between 10⁻⁸ and 10⁻⁷ seconds are typically expressed as tens or hundreds of nanoseconds.
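
The prefix arithmetic above can be written out directly. This is a small illustrative sketch (the helper function and dictionary are invented for the example) that converts between prefixed units using the standard SI factors:

```python
# Nano- and neighbouring SI prefixes as powers of ten.
PREFIX = {"milli": 1e-3, "micro": 1e-6, "nano": 1e-9, "pico": 1e-12}

def convert(value, from_prefix, to_prefix):
    """Convert a value between prefixed units of the same base quantity."""
    return value * PREFIX[from_prefix] / PREFIX[to_prefix]

print(convert(1, "nano", "pico"))    # ≈ 1000   (1 ns = 1000 ps)
print(convert(1, "nano", "micro"))   # ≈ 0.001  (1 ns = 1/1000 µs)
print(convert(50, "nano", "nano"))   # 50       (50 ns stays 50 ns)
```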

Read this article:

Nano- - Wikipedia

Immortality | philosophy and religion | Britannica

immortality, in philosophy and religion, the indefinite continuation of the mental, spiritual, or physical existence of individual human beings. In many philosophical and religious traditions, immortality is specifically conceived as the continued existence of an immaterial soul or mind beyond the physical death of the body.

The earlier anthropologists, such as Sir Edward Burnett Tylor and Sir James George Frazer, assembled convincing evidence that the belief in a future life was widespread in the regions of primitive culture. Among most peoples the belief has continued through the centuries. But the nature of future existence has been conceived in very different ways. As Tylor showed, in the earliest known times there was little, often no, ethical relation between conduct on earth and the life beyond. Morris Jastrow wrote of the almost complete absence of all ethical considerations in connection with the dead in ancient Babylonia and Assyria.

In some regions and early religious traditions, it came to be declared that warriors who died in battle went to a place of happiness. Later there was a general development of the ethical idea that the afterlife would be one of rewards and punishments for conduct on earth. So in ancient Egypt at death the individual was represented as coming before judges as to that conduct. The Persian followers of Zoroaster accepted the notion of Chinvat peretu, or the Bridge of the Requiter, which was to be crossed after death and which was broad for the righteous and narrow for the wicked, who fell from it into hell. In Indian philosophy and religion, the steps upward or downward in the series of future incarnated lives have been (and still are) regarded as consequences of conduct and attitudes in the present life (see karma). The idea of future rewards and punishments was pervasive among Christians in the Middle Ages and is held today by many Christians of all denominations. In contrast, many secular thinkers maintain that the morally good is to be sought for itself and evil shunned on its own account, irrespective of any belief in a future life.

That the belief in immortality has been widespread through history is no proof of its truth. It may be a superstition that arose from dreams or other natural experiences. Thus, the question of its validity has been raised philosophically from the earliest times that people began to engage in intelligent reflection. In the Hindu Katha Upanishad, Naciketas says: "This doubt there is about a man departed; some say: He is; some: He does not exist. Of this would I know." The Upanishads, the basis of most traditional philosophy in India, are predominantly a discussion of the nature of humanity and its ultimate destiny.

Immortality was also one of the chief problems of Plato's thought. With the contention that reality, as such, is fundamentally spiritual, he tried to prove immortality, maintaining that nothing could destroy the soul. Aristotle conceived of reason as eternal but did not defend personal immortality, as he thought the soul could not exist in a disembodied state. The Epicureans, from a materialistic standpoint, held that there is no consciousness after death, and it is thus not to be feared. The Stoics believed that it is the rational universe as a whole that persists. Individual humans, as the Roman emperor Marcus Aurelius wrote, simply have their allotted periods in the drama of existence. The Roman orator Cicero, however, finally accepted personal immortality. St. Augustine of Hippo, following Neoplatonism, regarded human beings' souls as being in essence eternal.

The Islamic philosopher Avicenna declared the soul immortal, but his coreligionist Averroës, keeping closer to Aristotle, accepted the eternity only of universal reason. St. Albertus Magnus defended immortality on the ground that the soul, in itself a cause, is an independent reality. John Scotus Erigena contended that personal immortality cannot be proved or disproved by reason. Benedict de Spinoza, taking God as ultimate reality, as a whole maintained his eternity but not the immortality of individual persons within him. The German philosopher Gottfried Wilhelm Leibniz contended that reality is constituted of spiritual monads. Human beings, as finite monads, not capable of origination by composition, are created by God, who could also annihilate them. However, because God has planted in humans a striving for spiritual perfection, there may be faith that he will ensure their continued existence, thus giving them the possibility to achieve it.

The French mathematician and philosopher Blaise Pascal argued that belief in the God of Christianity, and accordingly in the immortality of the soul, is justified on practical grounds by the fact that one who believes has everything to gain if he is right and nothing to lose if he is wrong, while one who does not believe has everything to lose if he is wrong and nothing to gain if he is right. The German Enlightenment philosopher Immanuel Kant held that immortality cannot be demonstrated by pure reason but must be accepted as an essential condition of morality. Holiness, the perfect accordance of the will with the moral law, demands endless progress only possible on the supposition of an endless duration of the existence and personality of the same rational being (which is called the immortality of the soul). Considerably less-sophisticated arguments both before and after Kant attempted to demonstrate the reality of an immortal soul by asserting that human beings would have no motivation to behave morally unless they believed in an eternal afterlife in which the good are rewarded and the evil are punished. A related argument held that denying an eternal afterlife of reward and punishment would lead to the repugnant conclusion that the universe is unjust.

In the late 19th century, the concept of immortality waned as a philosophical preoccupation, in part because of the secularization of philosophy under the growing influence of science.

Read the original:

Immortality | philosophy and religion | Britannica

Immortality – Wikipedia

Eternal life

Immortality is the concept of eternal life.[2] Some modern species may possess biological immortality.[citation needed]

Some scientists, futurists, and philosophers have theorized about the immortality of the human body, with some suggesting that human immortality may be achievable in the first few decades of the 21st century with the help of certain technologies such as mind uploading (digital immortality).[3] Other advocates believe that life extension is a more achievable goal in the short term, with immortality awaiting further research breakthroughs. The absence of aging would provide humans with biological immortality, but not invulnerability to death by disease or injury. In the former view, whether such immortality arrives within the coming years depends chiefly on research (including neuron research, in the case of immortality achieved through an immortalized cell line); in the latter view, it remains a more distant, awaited goal.[4]

What form an unending human life would take, or whether an immaterial soul exists and possesses immortality, has been a major point of focus of religion,[citation needed] as well as the subject of speculation and debate. In religious contexts, immortality is often stated to be one of the promises of divinities to human beings who perform virtue or follow divine law.[citation needed]

Life extension technologies promise a path to complete rejuvenation. Cryonics holds out the hope that the dead can be revived in the future, following sufficient medical advancements. While, as shown with creatures such as hydra and planarian worms, it is indeed possible for a creature to be biologically immortal, it is not known whether this will be possible for humans in the near future.

Immortality in religion refers usually to either the belief in physical immortality or a more spiritual afterlife. In traditions such as ancient Egyptian beliefs, Mesopotamian beliefs and ancient Greek beliefs, the immortal gods consequently were considered to have physical bodies. In Mesopotamian and Greek religion, the gods also made certain men and women physically immortal, whereas in Christianity, many believe that all true believers will be resurrected to physical immortality. Similar beliefs that physical immortality is possible are held by Rastafarians or Rebirthers.

Physical immortality is a state of life that allows a person to avoid death and maintain conscious thought. It can mean the unending existence of a person from a physical source other than organic life, such as a computer.

Pursuit of physical immortality before the advent of modern science included alchemists seeking to create the Philosopher's Stone,[5] and various cultures' legends such as the Fountain of Youth or the Peaches of Immortality inspiring attempts at discovering elixirs of life.

Modern scientific approaches to genuine human physical immortality, such as cryonics, digital immortality, breakthroughs in rejuvenation, or predictions of an impending technological singularity, must still overcome all causes of death to succeed:

There are three main causes of death: aging, disease, and injury.[6] Each would need its own solution, and the research directions addressing them have yet to be unified into a single approach.

Aubrey de Grey, a leading researcher in the field,[7] defines aging as "a collection of cumulative changes to the molecular and cellular structure of an adult organism, which result from essential metabolic processes, but which also, once they progress far enough, increasingly disrupt metabolism, resulting in pathology and death." The current causes of aging in humans are cell loss (without replacement), DNA damage, oncogenic nuclear mutations and epimutations, cell senescence, mitochondrial mutations, lysosomal aggregates, extracellular aggregates, random extracellular cross-linking, immune system decline, and endocrine changes. Eliminating aging would require finding a solution to each of these causes, a program de Grey calls engineered negligible senescence. There is also a huge body of knowledge indicating that aging is characterized by the loss of molecular fidelity.[8]

Disease is theoretically surmountable by technology. In short, it is an abnormal condition affecting the body of an organism, something the body does not typically have to deal with given its natural makeup.[9] Human understanding of genetics is leading to cures and treatments for a myriad of previously incurable diseases. The mechanisms by which other diseases do damage are becoming better understood. Sophisticated methods of detecting diseases early are being developed. Preventative medicine is becoming better understood. Neurodegenerative diseases like Parkinson's and Alzheimer's may soon be curable with the use of stem cells. Breakthroughs in cell biology and telomere research are leading to treatments for cancer. Vaccines are being researched for AIDS and tuberculosis. Genes associated with type 1 diabetes and certain types of cancer have been discovered, allowing for new therapies to be developed. Artificial devices attached directly to the nervous system may restore sight to the blind. Drugs are being developed to treat a myriad of other diseases and ailments.

Physical trauma would remain a threat to perpetual physical life, as an otherwise immortal person would still be subject to unforeseen accidents or catastrophes. The speed and quality of paramedic response remains a determining factor in surviving severe trauma.[10] A body that could automatically repair itself from severe trauma, such as speculated uses for nanotechnology, would mitigate this factor. The brain cannot be exposed to trauma if a continuous physical life is to be maintained. This aversion to trauma risk to the brain would naturally result in significant behavioral changes that would render physical immortality undesirable for some people.

Organisms otherwise unaffected by these causes of death would still face the problem of obtaining sustenance (whether from currently available agricultural processes or from hypothetical future technological processes) in the face of changing availability of suitable resources as environmental conditions change. After avoiding aging, disease, and trauma, death through resource limitation is still possible, such as hypoxia or starvation.

If there is no limitation on the degree of gradual mitigation of risk then it is possible that the cumulative probability of death over an infinite horizon is less than certainty, even when the risk of fatal trauma in any finite period is greater than zero. Mathematically, this is an aspect of achieving actuarial escape velocity.
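
To make that claim concrete, here is a short worked sketch under an assumed risk schedule (the numbers are illustrative, not taken from the cited literature): if the per-period risk of death keeps shrinking fast enough, the chance of surviving forever stays strictly positive.

```latex
% Let p_n be the probability of dying in year n, with 0 < p_n < 1.
% The probability of surviving every year is the infinite product
\[
  P(\text{survive forever}) = \prod_{n=1}^{\infty} (1 - p_n),
\]
% which converges to a positive value whenever \sum_{n=1}^{\infty} p_n < \infty.
% Example: if risk mitigation halves the annual risk each year,
% p_n = 0.1 \cdot 2^{-(n-1)}, then \sum_n p_n = 0.2 and
\[
  \prod_{n=1}^{\infty} \bigl(1 - 0.1 \cdot 2^{-(n-1)}\bigr) \approx 0.81,
\]
% so the cumulative probability of ever dying is only about 19\%,
% even though the risk in every single year is greater than zero.
```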

Biological immortality is an absence of aging. Specifically it is the absence of a sustained increase in rate of mortality as a function of chronological age. A cell or organism that does not experience aging, or ceases to age at some point, is biologically immortal.[11]

Biologists have chosen the word "immortal" to designate cells that are not limited by the Hayflick limit, where cells no longer divide because of DNA damage or shortened telomeres. The first and still most widely used immortal cell line is HeLa, developed from cells taken from the malignant cervical tumor of Henrietta Lacks without her consent in 1951. Prior to the 1961 work of Leonard Hayflick, there was the erroneous belief fostered by Alexis Carrel that all normal somatic cells are immortal. By preventing cells from reaching senescence one can achieve biological immortality; telomeres, a "cap" at the end of DNA, are thought to be the cause of cell aging. Every time a cell divides the telomere becomes a bit shorter; when it is finally worn down, the cell is unable to split and dies. Telomerase is an enzyme which rebuilds the telomeres in stem cells and cancer cells, allowing them to replicate an infinite number of times.[12] No definitive work has yet demonstrated that telomerase can be used in human somatic cells to prevent healthy tissues from aging. On the other hand, scientists hope to be able to grow organs with the help of stem cells, allowing organ transplants without the risk of rejection, another step in extending human life expectancy. These technologies are the subject of ongoing research, and are not yet realized.[13]

Life defined as biologically immortal is still susceptible to causes of death besides aging, including disease and trauma, as defined above. Notable immortal species include:

As the existence of biologically immortal species demonstrates, there is no thermodynamic necessity for senescence: a defining feature of life is that it takes in free energy from the environment and unloads its entropy as waste. Living systems can even build themselves up from seed, and routinely repair themselves. Aging is therefore presumed to be a byproduct of evolution, but why mortality should be selected for remains a subject of research and debate. Programmed cell death and the telomere "end replication problem" are found even in the earliest and simplest of organisms.[21] This may be a tradeoff between selecting for cancer and selecting for aging.[22]

Modern theories on the evolution of aging include the following:

Individual organisms ordinarily age and die, while the germlines which connect successive generations are potentially immortal. The basis for this difference is a fundamental problem in biology. The Russian biologist and historian Zhores A. Medvedev[25] considered that the accuracy of genome replicative and other synthetic systems alone cannot explain the immortality of germ lines. Rather Medvedev thought that known features of the biochemistry and genetics of sexual reproduction indicate the presence of unique information maintenance and restoration processes at the different stages of gametogenesis. In particular, Medvedev considered that the most important opportunities for information maintenance of germ cells are created by recombination during meiosis and DNA repair; he saw these as processes within the germ cells that were capable of restoring the integrity of DNA and chromosomes from the types of damage that cause irreversible aging in somatic cells.

Some[who?] scientists believe that boosting the amount or proportion of telomerase in the body, a naturally forming enzyme that helps maintain the protective caps at the ends of chromosomes, could prevent cells from dying and so may ultimately lead to extended, healthier lifespans. A team of researchers at the Spanish National Cancer Centre (Madrid) tested the hypothesis on mice. It was found that those mice which were "genetically engineered to produce 10 times the normal levels of telomerase lived 50% longer than normal mice".[26]

In normal circumstances, without the presence of telomerase, if a cell divides repeatedly, at some point all the progeny will reach their Hayflick limit. With the presence of telomerase, each dividing cell can replace the lost bit of DNA, and any single cell can then divide unbounded. While this unbounded growth property has excited many researchers, caution is warranted in exploiting this property, as exactly this same unbounded growth is a crucial step in enabling cancerous growth. If an organism can replicate its body cells faster, then it would theoretically stop aging.
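
As a toy illustration of the mechanism just described (an invented counter model, not a biological simulation from the cited research), the sketch below treats the telomere as a budget of divisions that telomerase can keep topping up:

```python
# Toy model of telomere shortening and the Hayflick limit.
# Purely illustrative: real telomere dynamics are far more complex.

def divisions_until_senescence(telomere_units, loss_per_division=1,
                               telomerase=False, max_steps=1_000):
    """Count divisions before the telomere 'cap' is used up."""
    divisions = 0
    while telomere_units > 0 and divisions < max_steps:
        telomere_units -= loss_per_division      # each division shortens the cap
        if telomerase:
            telomere_units += loss_per_division  # telomerase restores what was lost
        divisions += 1
    return divisions

print(divisions_until_senescence(50))                   # 50 divisions, then senescence
print(divisions_until_senescence(50, telomerase=True))  # 1000: only the safety cap stops it
```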

Embryonic stem cells express telomerase, which allows them to divide repeatedly and form the individual. In adults, telomerase is highly expressed in cells that need to divide regularly (e.g., in the immune system), whereas most somatic cells express it only at very low levels in a cell-cycle dependent manner.

Technological immortality is the prospect for much longer life spans made possible by scientific advances in a variety of fields: nanotechnology, emergency room procedures, genetics, biological engineering, regenerative medicine, microbiology, and others. Contemporary life spans in the advanced industrial societies are already markedly longer than those of the past because of better nutrition, availability of health care, standard of living and bio-medical scientific advances.[citation needed] Technological immortality predicts further progress for the same reasons over the near term.

An important aspect of current scientific thinking about immortality is that some combination of human cloning, cryonics or nanotechnology will play an essential role in extreme life extension. Robert Freitas, a nanorobotics theorist, suggests tiny medical nanorobots could be created to go through human bloodstreams, find dangerous things like cancer cells and bacteria, and destroy them.[27] Freitas anticipates that gene-therapies and nanotechnology will eventually make the human body effectively self-sustainable and capable of living indefinitely in empty space, short of severe brain trauma. This supports the theory that we will be able to continually create biological or synthetic replacement parts to replace damaged or dying ones. Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging.

K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and using as yet hypothetical biological machines, in his 1986 book Engines of Creation. Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030.[28] According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines (see biological machine). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.[29]

Cryonics, the practice of preserving organisms (either intact specimens or only their brains) for possible future revival by storing them at cryogenic temperatures where metabolism and decay are almost completely stopped, can be used as a way to 'pause' the dying process for those who believe that life extension technologies will not develop sufficiently within their lifetime. Ideally, cryonics would allow clinically dead people to be brought back in the future after cures to the patients' diseases have been discovered and aging is reversible. Modern cryonics procedures use a process called vitrification, which creates a glass-like state rather than freezing as the body is brought to low temperatures. This process reduces the risk of ice crystals damaging the cell structure, which would be especially detrimental to cell structures in the brain, whose fine arrangement encodes the individual's mind.

One idea that has been advanced involves uploading an individual's habits and memories via direct mind-computer interface. The individual's memory may be loaded to a computer or to a new organic body. Extropian futurists like Moravec and Kurzweil have proposed that, thanks to exponentially growing computing power, it will someday be possible to upload human consciousness onto a computer system, and exist indefinitely in a virtual environment.

This could be accomplished via advanced cybernetics, where computer hardware would initially be installed in the brain to help sort memory or accelerate thought processes. Components would be added gradually until the person's entire brain functions were handled by artificial devices, avoiding sharp transitions that would lead to issues of identity and run the risk of the person being declared dead and thus no longer being the legitimate owner of his or her property. After this point, the human body could be treated as an optional accessory, and the program implementing the person could be transferred to any sufficiently powerful computer.

Another possible mechanism for mind upload is to perform a detailed scan of an individual's original, organic brain and simulate the entire structure in a computer. What level of detail such scans and simulations would need to achieve to emulate awareness, and whether the scanning process would destroy the brain, is still to be determined.[a]

It is suggested that achieving immortality through this mechanism would require specific consideration to be given to the role of consciousness in the functions of the mind. An uploaded mind would only be a copy of the original mind, and not the conscious mind of the living entity associated in such a transfer. Without a simultaneous upload of consciousness, the original living entity remains mortal, thus not achieving true immortality.[31] Research on the neural correlates of consciousness is as yet inconclusive on this issue. Whatever the route to mind upload, persons in this state could then be considered essentially immortal, short of loss or traumatic destruction of the machines that maintained them.[clarification needed][citation needed]

Transforming a human into a cyborg can include brain implants or extracting a human processing unit and placing it in a robotic life-support system.[citation needed] Even replacing biological organs with robotic ones could increase life span (e.g. pacemakers), and depending on the definition, many technological upgrades to the body, like genetic modifications or the addition of nanobots, would qualify an individual as a cyborg. Some people believe that such modifications would make one impervious to aging and disease and theoretically immortal unless killed or destroyed.[citation needed]

As late as 1952, the editorial staff of the Syntopicon found in their compilation of the Great Books of the Western World, that "The philosophical issue concerning immortality cannot be separated from issues concerning the existence and nature of man's soul."[32] Thus, the vast majority of speculation on immortality before the 21st century was regarding the nature of the afterlife.

Immortality in ancient Greek religion originally always included an eternal union of body and soul as can be seen in Homer, Hesiod, and various other ancient texts. The soul was considered to have an eternal existence in Hades, but without the body the soul was considered dead. Although almost everybody had nothing to look forward to but an eternal existence as a disembodied dead soul, a number of men and women were considered to have gained physical immortality and been brought to live forever in either Elysium, the Islands of the Blessed, heaven, the ocean or literally right under the ground. Among those humans made immortal were Amphiaraus, Ganymede, Ino, Iphigenia, Menelaus, Peleus, and a great number of those who fought in the Trojan and Theban wars.

Some were considered to have died and been resurrected before they achieved physical immortality. Asclepius was killed by Zeus only to be resurrected and transformed into a major deity. In some versions of the Trojan War myth, Achilles, after being killed, was snatched from his funeral pyre by his divine mother Thetis, resurrected, and brought to an immortal existence in either Leuce, the Elysian plains, or the Islands of the Blessed. Memnon, who was killed by Achilles, seems to have received a similar fate. Alcmene, Castor, Heracles, and Melicertes were also among the figures sometimes considered to have been resurrected to physical immortality. According to Herodotus' Histories, the 7th-century-BCE sage Aristeas of Proconnesus was first found dead, after which his body disappeared from a locked room. Later he was found not only to have been resurrected but to have gained immortality.

The parallel between these traditional beliefs and the later resurrection of Jesus was not lost on early Christians, as Justin Martyr argued:

The philosophical idea of an immortal soul was a belief first appearing with either Pherecydes or the Orphics, and most importantly advocated by Plato and his followers. This, however, never became the general norm in Hellenistic thought. As may be witnessed even into the Christian era, not least by the complaints of various philosophers over popular beliefs, many or perhaps most traditional Greeks maintained the conviction that certain individuals were resurrected from the dead and made physically immortal and that others could only look forward to an existence as disembodied and dead, though everlasting, souls.

One of the three marks of existence in Buddhism is anattā, "non-self". This teaching states that the body does not have an eternal soul but is composed of five skandhas or aggregates. Additionally, another mark of existence is impermanence, also called anicca, which runs directly counter to concepts of immortality or permanence. According to one Tibetan Buddhist teaching, Dzogchen, individuals can transform the physical body into an immortal body of light called the rainbow body.

Christian theology holds that Adam and Eve lost physical immortality for themselves and all their descendants through the Fall, although this initial "imperishability of the bodily frame of man" was "a preternatural condition".[37]

Christians who profess the Nicene Creed believe that every dead person (whether they believed in Christ or not) will be resurrected from the dead at the Second Coming; this belief is known as universal resurrection. Paul the Apostle, in following his past life as a Pharisee (a Jewish social movement that held to a future physical resurrection[39]), proclaims an amalgamated view of resurrected believers where both the physical and the spiritual are rebuilt in the likeness of post-resurrection Christ, who "will transform our lowly body to be like his glorious body" (ESV).[40] This thought mirrors Paul's depiction of believers having been "buried therefore with him [that is, Christ] by baptism into death" (ESV).[41]

N.T. Wright, a theologian and former Bishop of Durham, has said many people forget the physical aspect of what Jesus promised. He told Time: "Jesus' resurrection marks the beginning of a restoration that he will complete upon his return. Part of this will be the resurrection of all the dead, who will 'awake', be embodied and participate in the renewal. Wright says John Polkinghorne, a physicist and a priest, has put it this way: 'God will download our software onto his hardware until the time he gives us new hardware to run the software again for ourselves.' That gets to two things nicely: that the period after death (the Intermediate state) is a period when we are in God's presence but not active in our own bodies, and also that the more important transformation will be when we are again embodied and administering Christ's kingdom."[42] This kingdom will consist of Heaven and Earth "joined together in a new creation", he said.

Christian apocrypha include immortal human figures such as Cartaphilus[43] and Longinus[44] who were cursed with physical immortality for various transgressions against Christ during the Passion. Leaders of sects such as John Asgill and John Wroe taught followers that physical immortality was possible.[45][46]

Hindus believe in an immortal soul which is reincarnated after death. According to Hinduism, people repeat a process of life, death, and rebirth in a cycle called samsara. If they live their life well, their karma improves and their station in the next life will be higher, and conversely lower if they live their life poorly. After many life times of perfecting its karma, the soul is freed from the cycle and lives in perpetual bliss. There is no place of eternal torment in Hinduism, although if a soul consistently lives very evil lives, it could work its way down to the very bottom of the cycle.[citation needed]

There are explicit renderings in the Upanishads alluding to a physically immortal state brought about by purification, and sublimation of the 5 elements that make up the body. For example, in the Shvetashvatara Upanishad (Chapter 2, Verse 12), it is stated "When earth, water, fire, air and sky arise, that is to say, when the five attributes of the elements, mentioned in the books on yoga, become manifest then the yogi's body becomes purified by the fire of yoga and he is free from illness, old age and death."

Another view of immortality is traced to the Vedic tradition by the interpretation of Maharishi Mahesh Yogi:

That man indeed whom these (contacts) do not disturb, who is even-minded in pleasure and pain, steadfast, he is fit for immortality, O best of men.[47]

To Maharishi Mahesh Yogi, the verse means, "Once a man has become established in the understanding of the permanent reality of life, his mind rises above the influence of pleasure and pain. Such an unshakable man passes beyond the influence of death and in the permanent phase of life: he attains eternal life... A man established in the understanding of the unlimited abundance of absolute existence is naturally free from existence of the relative order. This is what gives him the status of immortal life."[47]

An Indian Tamil saint known as Vallalar claimed to have achieved immortality before disappearing forever from a locked room in 1874.[48][unreliable source?][49]

The traditional concept of an immaterial and immortal soul distinct from the body was not found in Judaism before the Babylonian exile, but developed as a result of interaction with Persian and Hellenistic philosophies. Accordingly, the Hebrew word nephesh, although translated as "soul" in some older English-language Bibles, actually has a meaning closer to "living being".[50][need quotation to verify] Nephesh was rendered in the Septuagint as ψυχή (psychē), the Greek word for soul.[citation needed]

The only Hebrew word traditionally translated "soul" (nephesh) in English-language Bibles refers to a living, breathing conscious body, rather than to an immortal soul.[b] In the New Testament, the Greek word traditionally translated "soul" (ψυχή) has substantially the same meaning as the Hebrew, without reference to an immortal soul.[c] "Soul" may also refer to the whole person, the self, as in the "three thousand souls" converted in Acts 2:41 (see Acts 3:23).

The Hebrew Bible speaks about Sheol (שאול), originally a synonym of the grave: the repository of the dead, or the cessation of existence until the resurrection of the dead. This doctrine of resurrection is mentioned explicitly only in Daniel 12:1–4, although it may be implied in several other texts. New theories arose concerning Sheol during the intertestamental period.

The views about immortality in Judaism are perhaps best exemplified by the various references to this in the Second Temple period. The concept of resurrection of the physical body is found in 2 Maccabees, according to which it will happen through recreation of the flesh.[52] Resurrection of the dead is specified in detail in the extra-canonical books of Enoch,[53] and in the Apocalypse of Baruch.[54] According to the British scholar in ancient Judaism P.R. Davies, there is "little or no clear reference ... either to immortality or to resurrection from the dead" in the Dead Sea scrolls texts.[55] Both Josephus and the New Testament record that the Sadducees did not believe in an afterlife,[56] but the sources vary on the beliefs of the Pharisees. The New Testament claims that the Pharisees believed in the resurrection, but does not specify whether this included the flesh or not.[57] According to Josephus, who himself was a Pharisee, the Pharisees held that only the soul was immortal and the souls of good people will be reincarnated and "pass into other bodies," while "the souls of the wicked will suffer eternal punishment."[58] The Book of Jubilees seems to refer to the resurrection of the soul only, or to a more general idea of an immortal soul.[59]

Rabbinic Judaism claims that the righteous dead will be resurrected in the Messianic Age, with the coming of the messiah. They will then be granted immortality in a perfect world. The wicked dead, on the other hand, will not be resurrected at all. This is not the only Jewish belief about the afterlife. The Tanakh is not specific about the afterlife, so there are wide differences in views and explanations among believers.[citation needed]

It is repeatedly stated in the Lüshi Chunqiu that death is unavoidable.[60] Henri Maspero noted that many scholarly works frame Taoism as a school of thought focused on the quest for immortality.[61] Isabelle Robinet asserts that Taoism is better understood as a way of life than as a religion, and that its adherents do not approach or view Taoism the way non-Taoist historians have done.[62] In the Tractate of Actions and their Retributions, a traditional teaching, spiritual immortality can be rewarded to people who do a certain amount of good deeds and live a simple, pure life. A list of good deeds and sins is tallied to determine whether or not a mortal is worthy. Spiritual immortality in this definition allows the soul to leave the earthly realms of afterlife and go to pure realms in the Taoist cosmology.[63]

Zoroastrians believe that on the fourth day after death, the human soul leaves the body and the body remains as an empty shell. Souls would go to either heaven or hell; these concepts of the afterlife in Zoroastrianism may have influenced Abrahamic religions. The Persian word for "immortal" is associated with the month "Amurdad", meaning "deathless" in Persian, in the Iranian calendar (near the end of July). The month of Amurdad or Ameretat is celebrated in Persian culture as ancient Persians believed the "Angel of Immortality" won over the "Angel of Death" in this month.[64]

Alcmaeon of Croton argued that the soul is continuously and ceaselessly in motion. The exact form of his argument is unclear, but it appears to have influenced Plato, Aristotle, and other later writers.[65]

Plato's Phaedo advances four arguments for the soul's immortality:[66]

Plotinus offers a version of the argument that Kant calls "The Achilles of Rationalist Psychology". Plotinus first argues that the soul is simple, then notes that a simple being cannot decompose. Many subsequent philosophers have argued both that the soul is simple and that it must be immortal. The tradition arguably culminates with Moses Mendelssohn's Phaedon.[67]

Theodore Metochites argues that part of the soul's nature is to move itself, but that a given movement will cease only if what causes the movement is separated from the thing moved, an impossibility if they are one and the same.[68]

Avicenna argued for the distinctness of the soul and the body, and the incorruptibility of the former.[d]

The full argument for the immortality of the soul and Thomas Aquinas' elaboration of Aristotelian theory is found in Question 75 of the First Part of the Summa Theologica.[74]

René Descartes endorses the claim that the soul is simple, and also that this entails that it cannot decompose. Descartes does not address the possibility that the soul might suddenly disappear.[75]

In early work, Gottfried Wilhelm Leibniz endorses a version of the argument from the simplicity of the soul to its immortality, but like his predecessors, he does not address the possibility that the soul might suddenly disappear. In his monadology he advances a sophisticated novel argument for the immortality of monads.[76]

Moses Mendelssohn's Phaedon is a defense of the simplicity and immortality of the soul. It is a series of three dialogues, revisiting the Platonic dialogue Phaedo, in which Socrates argues for the immortality of the soul, in preparation for his own death. Many philosophers, including Plotinus, Descartes, and Leibniz, argue that the soul is simple, and that because simples cannot decompose they must be immortal. In the Phaedon, Mendelssohn addresses gaps in earlier versions of this argument (an argument that Kant calls the Achilles of Rationalist Psychology). The Phaedon contains an original argument for the simplicity of the soul, and also an original argument that simples cannot suddenly disappear. It contains further original arguments that the soul must retain its rational capacities as long as it exists.[77]

The possibility of clinical immortality raises a host of medical, philosophical, and religious issues and ethical questions. These include persistent vegetative states, the nature of personality over time, technology to mimic or copy the mind or its processes, social and economic disparities created by longevity, and survival of the heat death of the universe.

Physical immortality has also been imagined as a form of eternal torment, as in the myth of Tithonus, or in Mary Shelley's short story The Mortal Immortal, where the protagonist lives to witness everyone he cares about die around him. For additional examples in fiction, see Immortality in fiction.

Kagan (2012)[78] argues that any form of human immortality would be undesirable. Kagan's argument takes the form of a dilemma. Either our characters remain essentially the same in an immortal afterlife, or they do not:

Either way, Kagan argues, immortality is unattractive. The best outcome, Kagan argues, would be for humans to live as long as they desired and then to accept death gratefully as rescuing us from the unbearable tedium of immortality.[78]

If human beings were to achieve immortality, there would most likely be a change in the world's social structures. Sociologists argue that human beings' awareness of their own mortality shapes their behavior.[80] With the advancements in medical technology in extending human life, there may need to be serious considerations made about future social structures. The world is already experiencing a global demographic shift of increasingly ageing populations with lower replacement rates.[81] The social changes that are made to accommodate this new population shift may be able to offer insight on the possibility of an immortal society.

Sociology has a growing body of literature on the sociology of immortality, which details the different attempts at reaching immortality (whether actual or symbolic) and their prominence in the 21st century. These attempts include renewed attention to the dead in the West,[82] practices of online memorialization,[83] and biomedical attempts to increase longevity.[84] These attempts at reaching immortality and their effects in societal structures have led some to argue that we are becoming a "Postmortal Society".[85][86] Foreseen changes to societies derived from the pursuit of immortality would encompass societal paradigms and worldviews, as well as the institutional landscape. Similarly, different forms of reaching immortality might entail a significant reconfiguration of societies, from becoming more technologically-oriented to becoming more aligned with nature.[87]

Immortality would increase population growth,[88] bringing with it many consequences, such as the impact of population growth on the environment and on planetary boundaries.

Although some scientists state that radical life extension, delaying and stopping aging are achievable,[89] there are no international or national programs focused on stopping aging or on radical life extension. In 2012 in Russia, and then in the United States, Israel and the Netherlands, pro-immortality political parties were launched. They aimed to provide political support to anti-aging and radical life extension research and technologies, to support a transition to the next steps of radical life extension, life without aging, and finally immortality, and to make access to such technologies possible for most currently living people.[90]

There are numerous symbols representing immortality. The ankh is an Egyptian symbol of life that holds connotations of immortality when depicted in the hands of the gods and pharaohs, who were seen as having control over the journey of life. The Möbius strip in the shape of a trefoil knot is another symbol of immortality. Most symbolic representations of infinity or the life cycle are often used to represent immortality depending on the context they are placed in. Other examples include the Ouroboros, the Chinese fungus of longevity, the ten kanji, the phoenix, the peacock in Christianity,[91] and the colors amaranth (in Western culture) and peach (in Chinese culture).

Link:

Immortality - Wikipedia

Will humans ever be immortal? | Live Science

If you are human, you are going to die. This isn't the most comforting thought, but death is the inevitable price we must pay for being alive. Humans are, however, getting better at pushing back our expiration date, as our medicines and technologies advance.

If the human life span continues to stretch, could we one day become immortal? The answer depends on what you think it means to be an immortal human.

"I don't think when people are even asking about immortality they really mean true immortality, unless they believe in something like a soul," Susan Schneider, a philosopher and founding director of the Center for the Future Mind at Florida Atlantic University, told Live Science. "If someone was, say, to upgrade their brain and body to live a really long time, they would still not be able to live beyond the end of the universe."

Scientists expect the universe will end, which puts an immediate damper on the prospect of true human immortality. Some scientists have speculated about surviving the death of the universe, as science journalist John Horgan reported for Scientific American, but it's unlikely that any humans alive today will experience the universe's demise anyway.

Related: What happens when you die?

Many humans grow old and die. To live indefinitely, we would need to stop the body from aging. A group of animals may have already solved this problem, so it isn't as far-fetched as it sounds.

Hydra are small, jellyfish-like invertebrates with a remarkable approach to aging. They are largely made up of stem cells that constantly divide to make new cells, as their older cells are discarded. The constant influx of new cells allows hydra to rejuvenate themselves and stay forever young, Live Science previously reported.

"They don't seem to age, so, potentially they are immortal," Daniel Martnez, a biology professor at Pomona College in Claremont, California, who discovered the hydra's lack of aging, told Live Science. Hydra show that animals do not have to grow old, but that doesn't mean humans could replicate their rejuvenating habits. At 0.4 inches (10 millimeters) long, hydra are small and don't have organs. "It's impossible for us because our bodies are super complex," Martnez said.

Humans have stem cells that can repair and even regrow parts of the body, such as in the liver, but the human body is not made almost entirely of these cells, like hydra are. That's because humans need cells to do things other than just divide and make new cells. For example, our red blood cells transport oxygen around the body. "We make cells commit to a function, and in doing that, they have to lose the ability to divide," Martínez said. As the cells age, so do we.

We can't simply discard our old cells like hydra do, because we need them. For example, the neurons in the brain transmit information. "We don't want those to be replaced," Martínez said. "Because otherwise, we won't remember anything." Hydra could inspire research that allows humans to live healthier lives, for example, by finding ways for our cells to function better as they age, according to Martínez. However, his gut feeling is that humans will never achieve such biological immortality.

Though Martínez personally doesn't want to live forever, he thinks humans are already capable of a form of immortality. "I always say, 'I think we are immortal,'" he said. "Poets to me are immortal because they're still with us after so many years and they still influence us. And so I think that people survive through their legacy."

The oldest-living human on record is Jeanne Calment from France, who died at the age of 122 in 1997, according to Guinness World Records. In a 2021 study published in the journal Nature Communications, researchers reported that humans may be able to live up to a maximum of between 120 and 150 years, after which the researchers anticipate a complete loss of resilience, the body's ability to recover from things like illness or injury. To live beyond this limit, humans would need to stop cells from aging and prevent disease.

Related: What's the oldest living thing alive today?

Humans may be able to live beyond their biological limits with future technological advancements involving nanotechnology. This is the manipulation of materials on a nanoscale, less than 100 nanometers (a nanometer is one-billionth of a meter, or about 40-billionths of an inch). Machines this small could travel in the blood and possibly prevent aging by repairing the damage cells experience over time. Nanotech could also cure certain diseases, including some types of cancer, by removing cancerous cells from the body, according to the University of Melbourne in Australia.

Preventing the human body from aging still isn't enough to achieve immortality; just ask the hydra. Even though hydra don't show signs of aging, the creatures still die. They are eaten by predators, such as fish, and perish if their environment changes too much, such as if their ponds freeze in winter, Martínez said.

Humans don't have many predators to contend with, but we are prone to fatal accidents and vulnerable to extreme environmental events, such as those intensified by climate change. We'll need a sturdier vessel than our current bodies to ensure our survival long into the future. Technology may provide the solution for this, too.

As technology advances, futurists anticipate two defining milestones. The first is the singularity, in which we will design artificial intelligence (A.I.) smart enough to redesign itself, and it will get progressively smarter until it is vastly superior to our own intelligence, Live Science previously reported. The second milestone is virtual immortality, where we will be able to scan our brains and transfer ourselves to a non-biological medium, like a computer.

Researchers have already mapped the neural connections of a roundworm (Caenorhabditis elegans). As part of the so-called OpenWorm project, they then simulated the roundworm's brain in software replicating the neural connections, and programmed that software to direct a Lego robot, according to Smithsonian Magazine. The robot then appeared to start behaving like a roundworm. Scientists aren't close to mapping the connections between the 86 billion neurons of the human brain (roundworms have only 302 neurons), but advances in artificial intelligence may help us get there.
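
A heavily simplified sketch of the idea behind such projects follows; it is invented toy code (made-up neuron names and weights), not the OpenWorm software: the wiring is stored as a weighted graph, activation is propagated step by step from a sensory neuron, and the resulting motor-neuron activity is mapped onto wheel commands.

```python
# Toy connectome simulation: a tiny weighted graph of neurons whose
# motor-neuron activity drives two wheels. Invented for illustration;
# the real C. elegans connectome has 302 neurons and thousands of connections.

CONNECTOME = {            # weights from neuron -> neuron (made-up values)
    "touch_sensor": {"inter_1": 1.0},
    "inter_1":      {"motor_left": 0.8, "motor_right": -0.5},
    "motor_left":   {},
    "motor_right":  {},
}

def step(activity):
    """Propagate one timestep of activation through the graph."""
    nxt = {neuron: 0.0 for neuron in CONNECTOME}
    for src, targets in CONNECTOME.items():
        for dst, weight in targets.items():
            nxt[dst] += weight * activity.get(src, 0.0)
    return nxt

def wheel_commands(activity):
    """Map motor-neuron activity to (left, right) wheel speeds."""
    return activity["motor_left"], activity["motor_right"]

# Stimulate the touch sensor and watch the signal reach the "wheels".
state = {"touch_sensor": 1.0}
for _ in range(2):            # two hops: sensor -> interneuron -> motor neurons
    state = step(state)
print(wheel_commands(state))  # (0.8, -0.5): the robot turns away from the touch
```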

Once the human mind is in a computer and can be uploaded to the internet, we won't have to worry about the human body perishing. Moving the human mind out of the body would be a significant step on the road to immortality but, according to Schneider, there's a catch. "I don't think that will achieve immortality for you, and that's because I think you'd be creating a digital double," she said.

Schneider, who is also the author of "Artificial You: AI and the Future of Your Mind" (Princeton University Press, 2019), describes a thought experiment in which the brain either does or doesn't survive the upload process. If the brain does survive, then the digital copy can't be you, as you're still alive; conversely, if your brain doesn't survive the upload process, the digital copy still can't be you, because it would not have been you even had your brain survived. Either way, the copy can only be your digital double.

Related: What is consciousness?

According to Schneider, a better route to extreme longevity, while also preserving the person, would be through biological enhancements compatible with the survival of the human brain. Another, more controversial route would be through brain chips.

"There's been a lot of talk about gradually replacing parts of the brain with chips. So, eventually, one becomes like an artificial intelligence," Schneider said. In other words, slowly transitioning into a cyborg and thinking in chips rather than neurons. But if the human brain is intimately connected to you, then replacing it could mean suicide, she added.

The human body appears to have an expiration date, regardless of how it is upgraded or uploaded. Whether humans are still human without their bodies is an open question.

"To me, it's not even really an issue about whether you're technically a human being or not," Schneider said. "The real issue is whether you're the same self of a person. So, what really matters here is, what is it to be a conscious being? And when is it that changes in the brain change which conscious being you are?" In other words, at what point does changing what we can do with our brains change who we are?

Schneider is excited by the potential brain and body enhancements of the future and likes the idea of ridding ourselves of death by old age, despite some of her reservations. "I would love that, absolutely," she said. "And I would love to see science and technology cure ailments, make us smarter. I would love to see people have the option of upgrading their brains with chips. I just want them to understand what's at stake."

Originally published on Live Science.

Excerpt from:

Will humans ever be immortal? | Live Science

NATO – Wikipedia

The North Atlantic Treaty Organization (NATO; French: Organisation du traité de l'Atlantique nord, OTAN), also called the North Atlantic Alliance, is an intergovernmental military alliance between 30 member states: 28 European and two North American. Established in the aftermath of World War II, the organization implemented the North Atlantic Treaty, signed in Washington, D.C., on 4 April 1949.[3][4] NATO is a collective security system: its independent member states agree to defend each other against attacks by third parties. During the Cold War, NATO operated as a check on the perceived threat posed by the Soviet Union. The alliance remained in place after the dissolution of the Soviet Union and has been involved in military operations in the Balkans, the Middle East, South Asia, and Africa. The organization's motto is animus in consulendo liber[5] (Latin for "a mind unfettered in deliberation").

NATO's main headquarters are located in Brussels, Belgium, while NATO's military headquarters are near Mons, Belgium. The alliance has targeted its NATO Response Force deployments in Eastern Europe, and the combined militaries of all NATO members include around 3.5 million soldiers and personnel.[6] Their combined military spending as of 2020 constituted over 57 percent of the global nominal total.[7] Moreover, members have agreed to reach or maintain the target defence spending of at least two percent of their GDP by 2024.[8][9]

NATO formed with twelve founding members and has added new members eight times, most recently when North Macedonia joined the alliance in March 2020. Following the acceptance of their applications for membership in June 2022, Finland and Sweden are anticipated to become the 31st and 32nd members, with their Accession Protocols to the North Atlantic Treaty now in the process of being ratified by the existing members.[10] The other remaining European Economic Area or Schengen Area countries not part of NATO are Ireland, Austria and Switzerland (in addition to a few other European island countries and microstates). In addition, NATO currently recognizes Bosnia and Herzegovina, Georgia, and Ukraine as aspiring members.[3] Enlargement has led to tensions with non-member Russia, one of the twenty additional countries participating in NATO's Partnership for Peace programme. Another nineteen countries are involved in institutionalized dialogue programmes with NATO.

The Treaty of Dunkirk was signed by France and the United Kingdom on 4 March 1947, during the aftermath of World War II and the start of the Cold War, as a Treaty of Alliance and Mutual Assistance in the event of possible attacks by Germany or the Soviet Union. In March 1948, this alliance was expanded in the Treaty of Brussels to include the Benelux countries, forming the Brussels Treaty Organization, commonly known as the Western Union.[11] Talks for a wider military alliance, which could include North America, also began that month in the United States, whose foreign policy under the Truman Doctrine promoted international solidarity against actions it saw as communist aggression, such as the February 1948 coup d'état in Czechoslovakia. These talks resulted in the signature of the North Atlantic Treaty on 4 April 1949 by the member states of the Western Union plus the United States, Canada, Portugal, Italy, Norway, Denmark, and Iceland.[12] Canadian diplomat Lester B. Pearson was a key author and drafter of the treaty.[13][14][15]

The North Atlantic Treaty was largely dormant until the Korean War initiated the establishment of NATO to implement it with an integrated military structure. This included the formation of Supreme Headquarters Allied Powers Europe (SHAPE) in 1951, which adopted many of the Western Union's military structures and plans,[16] including their agreements on standardizing equipment and on stationing foreign military forces in European countries. In 1952, the post of Secretary General of NATO was established as the organization's chief civilian. That year also saw the first major NATO maritime exercises, Exercise Mainbrace, and the accession of Greece and Turkey to the organization.[17][18] Following the London and Paris Conferences, West Germany was permitted to rearm militarily and joined NATO in May 1955, which was, in turn, a major factor in the creation of the Soviet-dominated Warsaw Pact, delineating the two opposing sides of the Cold War.[19]

The building of the Berlin Wall in 1961 marked a height in Cold War tensions, when 400,000 US troops were stationed in Europe.[20] Doubts over the strength of the relationship between the European states and the United States ebbed and flowed, along with doubts over the credibility of the NATO defence against a prospective Soviet invasion – doubts that led to the development of the independent French nuclear deterrent and the withdrawal of France from NATO's military structure in 1966.[22] In 1982, the newly democratic Spain joined the alliance.[23]

The Revolutions of 1989 in Europe led to a strategic re-evaluation of NATO's purpose, nature, tasks, and focus on the continent. In October 1990, East Germany became part of the Federal Republic of Germany and the alliance, and in November 1990, the alliance signed the Treaty on Conventional Armed Forces in Europe (CFE) in Paris with the Soviet Union. It mandated specific military reductions across the continent, which continued after the collapse of the Warsaw Pact in February 1991 and the dissolution of the Soviet Union that December, which removed the de facto main adversaries of NATO.[24] This began a draw-down of military spending and equipment in Europe. The CFE treaty allowed signatories to remove 52,000 pieces of conventional armaments in the following sixteen years,[25] and allowed military spending by NATO's European members to decline by 28 percent from 1990 to 2015.[26] In 1990, assurances were given by several Western leaders to Mikhail Gorbachev that NATO would not expand further east, as revealed by memoranda of private conversations.[27][28][29][30] However, the final text of the Treaty on the Final Settlement with Respect to Germany, signed later that year, contained no mention of the issue of eastward expansion.

In the 1990s, the organization extended its activities into political and humanitarian situations that had not formerly been NATO concerns.[31] During the Breakup of Yugoslavia, the organization conducted its first military interventions in Bosnia from 1992 to 1995 and later Yugoslavia in 1999.[32] These conflicts motivated a major post-Cold War military restructuring. NATO's military structure was cut back and reorganized, with new forces such as the Headquarters Allied Command Europe Rapid Reaction Corps established.

Politically, the organization sought better relations with the newly autonomous Central and Eastern European states, and diplomatic forums for regional cooperation between NATO and its neighbours were set up during this post-Cold War period, including the Partnership for Peace and the Mediterranean Dialogue initiative in 1994, the Euro-Atlantic Partnership Council in 1997, and the NATO–Russia Permanent Joint Council in 1998. At the 1999 Washington summit, Hungary, Poland, and the Czech Republic officially joined NATO, and the organization also issued new guidelines for membership with individualized "Membership Action Plans". These plans governed the addition of new alliance members: Bulgaria, Estonia, Latvia, Lithuania, Romania, Slovakia, and Slovenia in 2004, Albania and Croatia in 2009, Montenegro in 2017, and North Macedonia in 2020.[33] The election of French President Nicolas Sarkozy in 2007 led to a major reform of France's military position, culminating with the return to full membership on 4 April 2009, which also included France rejoining the NATO Military Command Structure, while maintaining an independent nuclear deterrent.[22][34][35]

Article 5 of the North Atlantic Treaty, requiring member states to come to the aid of any member state subject to an armed attack, was invoked for the first and only time after the September 11 attacks,[36] after which troops were deployed to Afghanistan under the NATO-led ISAF. The organization has operated a range of additional roles since then, including sending trainers to Iraq, assisting in counter-piracy operations,[37] and in 2011 enforcing a no-fly zone over Libya in accordance with UN Security Council Resolution 1973.

Russia's annexation of Crimea led to strong condemnation by all NATO members,[38] and was one of the seven times that Article 4, which calls for consultation among NATO members, has been invoked. Prior times included during the Iraq War and Syrian Civil War.[39] At the 2014 Wales summit, the leaders of NATO's member states formally committed for the first time to spend the equivalent of at least two percent of their gross domestic products on defence by 2024, which had previously been only an informal guideline.[40] At the 2016 Warsaw summit, NATO countries agreed on the creation of NATO Enhanced Forward Presence, which deployed four multinational battalion-sized battlegroups in Estonia, Latvia, Lithuania, and Poland.[41] Before and during the 2022 Russian invasion of Ukraine, several NATO countries sent ground troops, warships and fighter aircraft to reinforce the alliance's eastern flank, and multiple countries again invoked Article 4.[42][43][44] In March 2022, NATO leaders met at Brussels for an extraordinary summit which also involved Group of Seven and European Union leaders.[45] NATO member states agreed to establish four additional battlegroups in Bulgaria, Hungary, Romania, and Slovakia,[41] and elements of the NATO Response Force were activated for the first time in NATO's history.[46]

As of June 2022, NATO had deployed 40,000 troops along its 2,500-kilometre-long eastern flank to deter Russian aggression. More than half of this number have been deployed in Bulgaria, Romania, Hungary, Slovakia, and Poland; these five countries also muster a considerable combined national force of 259,000 troops of their own. To supplement Bulgaria's Air Force, Spain sent Eurofighter Typhoons, the Netherlands sent eight F-35 attack aircraft, and additional French and US attack aircraft were due to arrive as well.[47]

NATO enjoys public support across its member states.[48]

No military operations were conducted by NATO during the Cold War. Following the end of the Cold War, the first operations, Anchor Guard in 1990 and Ace Guard in 1991, were prompted by the Iraqi invasion of Kuwait. Airborne early warning aircraft were sent to provide coverage of southeastern Turkey, and later a quick-reaction force was deployed to the area.[49]

The Bosnian War began in 1992, as a result of the Breakup of Yugoslavia. The deteriorating situation led to United Nations Security Council Resolution 816 on 9 October 1992, ordering a no-fly zone over central Bosnia and Herzegovina, which NATO began enforcing on 12 April 1993 with Operation Deny Flight. From June 1993 until October 1996, Operation Sharp Guard added maritime enforcement of the arms embargo and economic sanctions against the Federal Republic of Yugoslavia. On 28 February 1994, NATO took its first wartime action by shooting down four Bosnian Serb aircraft violating the no-fly zone.

On 10 and 11 April 1994, the United Nations Protection Force called in air strikes to protect the Goražde safe area, resulting in the bombing of a Bosnian Serb military command outpost near Goražde by two US F-16 jets acting under NATO direction. In retaliation, Serbs took 150 U.N. personnel hostage on 14 April.[52][53] On 16 April a British Sea Harrier was shot down over Goražde by Serb forces.

In August 1995, a two-week NATO bombing campaign, Operation Deliberate Force, began against the Army of the Republika Srpska, after the Srebrenica genocide.[55] Further NATO air strikes helped bring the Yugoslav Wars to an end, resulting in the Dayton Agreement in November 1995.[55] As part of this agreement, NATO deployed a UN-mandated peacekeeping force, under Operation Joint Endeavor, named IFOR. Almost 60,000 NATO troops were joined by forces from non-NATO countries in this peacekeeping mission. This transitioned into the smaller SFOR, which started with 32,000 troops initially and ran from December 1996 until December 2004, when operations were then passed onto the European Union Force Althea. Following the lead of its member states, NATO began to award a service medal, the NATO Medal, for these operations.[57]

In an effort to stop Slobodan Milošević's Serbian-led crackdown on KLA separatists and Albanian civilians in Kosovo, the United Nations Security Council passed Resolution 1199 on 23 September 1998 to demand a ceasefire. Negotiations under US Special Envoy Richard Holbrooke broke down on 23 March 1999, and he handed the matter to NATO,[58] which started a 78-day bombing campaign on 24 March 1999.[59] Operation Allied Force targeted the military capabilities of what was then the Federal Republic of Yugoslavia. During the crisis, NATO also deployed one of its international reaction forces, the ACE Mobile Force (Land), to Albania as the Albania Force (AFOR), to deliver humanitarian aid to refugees from Kosovo.[citation needed]

The campaign was criticized over whether it had legitimacy and for the civilian casualties, including the bombing of the Chinese embassy in Belgrade. Milošević finally accepted the terms of an international peace plan on 3 June 1999, ending the Kosovo War. On 11 June, Milošević further accepted UN Resolution 1244, under the mandate of which NATO then helped establish the KFOR peacekeeping force. Nearly one million refugees had fled Kosovo, and part of KFOR's mandate was to protect the humanitarian missions, in addition to deterring violence.[60] In August–September 2001, the alliance also mounted Operation Essential Harvest, a mission disarming ethnic Albanian militias in the Republic of Macedonia.[61] As of 1 December 2013, 4,882 KFOR soldiers, representing 31 countries, continue to operate in the area.[62][non-primary source needed]

The US, the UK, and most other NATO countries opposed efforts to require the UN Security Council to approve NATO military strikes, such as the action against Serbia in 1999, while France and some others claimed that the alliance needed UN approval.[63] The US/UK side claimed that this would undermine the authority of the alliance, and they noted that Russia and China would have exercised their Security Council vetoes to block the strike on Yugoslavia, and could do the same in future conflicts where NATO intervention was required, thus nullifying the entire potency and purpose of the organization. Recognizing the post-Cold War military environment, NATO adopted the Alliance Strategic Concept during its Washington summit in April 1999 that emphasized conflict prevention and crisis management.[64][non-primary source needed]

The September 11 attacks in the United States caused NATO to invoke Article 5 of the NATO Charter for the first time in the organization's history.[65] The Article states that an attack on any member shall be considered to be an attack on all. The invocation was confirmed on 4 October 2001 when NATO determined that the attacks were indeed eligible under the terms of the North Atlantic Treaty.[66][non-primary source needed] The eight official actions taken by NATO in response to the attacks included Operation Eagle Assist and Operation Active Endeavour, a naval operation in the Mediterranean Sea designed to prevent the movement of terrorists or weapons of mass destruction, and to enhance the security of shipping in general, which began on 4 October 2001.[67][non-primary source needed]

The alliance showed unity: on 16 April 2003, NATO agreed to take command of the International Security Assistance Force (ISAF), which included troops from 42 countries. The decision came at the request of Germany and the Netherlands, the two countries leading ISAF at the time of the agreement, and all nineteen NATO ambassadors approved it unanimously. The handover of control to NATO took place on 11 August, and marked the first time in NATO's history that it took charge of a mission outside the north Atlantic area.[68][page needed]

ISAF was initially charged with securing Kabul and surrounding areas from the Taliban, al Qaeda and factional warlords, so as to allow for the establishment of the Afghan Transitional Administration headed by Hamid Karzai. In October 2003, the UN Security Council authorized the expansion of the ISAF mission throughout Afghanistan,[69][non-primary source needed] and ISAF subsequently expanded the mission in four main stages over the whole of the country.[70][non-primary source needed]

On 31 July 2006, the ISAF additionally took over military operations in the south of Afghanistan from a US-led anti-terrorism coalition.[71] Due to the intensity of the fighting in the south, in 2011 France allowed a squadron of Mirage 2000 fighter/attack aircraft to be moved into the area, to Kandahar, in order to reinforce the alliance's efforts.[72] During its 2012 Chicago Summit, NATO endorsed a plan to end the Afghanistan war and to remove the NATO-led ISAF Forces by the end of December 2014.[73] ISAF was disestablished in December 2014 and replaced by the follow-on training Resolute Support Mission.[74]

On 14 April 2021, NATO Secretary General Jens Stoltenberg said the alliance had agreed to start withdrawing its troops from Afghanistan by May 1.[75] Soon after the withdrawal of NATO troops started, the Taliban launched an offensive against the Afghan government, quickly advancing in front of collapsing Afghan Armed Forces.[76] By 15 August 2021, Taliban militants controlled the vast majority of Afghanistan and had encircled the capital city of Kabul.[77] Some politicians in NATO member states have described the chaotic withdrawal of Western troops from Afghanistan and the collapse of the Afghan government as the greatest debacle that NATO has suffered since its founding.[78][79]

In August 2004, during the Iraq War, NATO formed the NATO Training Mission Iraq, a training mission to assist the Iraqi security forces in conjunction with the US-led MNF-I.[80] The NATO Training Mission-Iraq (NTM-I) was established at the request of the Iraqi Interim Government under the provisions of United Nations Security Council Resolution 1546. The aim of NTM-I was to assist in the development of Iraqi security forces training structures and institutions so that Iraq could build an effective and sustainable capability that addressed the needs of the country. NTM-I was not a combat mission but a distinct mission, under the political control of the North Atlantic Council. Its operational emphasis was on training and mentoring. The activities of the mission were coordinated with Iraqi authorities and the US-led Deputy Commanding General Advising and Training, who was also dual-hatted as the Commander of NTM-I. The mission officially concluded on 17 December 2011.[81]

Turkey invoked the first Article 4 meetings in 2003 at the start of the Iraq War. Turkey also invoked this article twice in 2012 during the Syrian Civil War, after the downing of an unarmed Turkish F-4 reconnaissance jet, and after a mortar was fired at Turkey from Syria,[82] and again in 2015 after threats by Islamic State of Iraq and the Levant to its territorial integrity.[83]

Beginning on 17 August 2009, NATO deployed warships in an operation to protect maritime traffic in the Gulf of Aden and the Indian Ocean from Somali pirates, and to help strengthen the navies and coast guards of regional states. The operation was approved by the North Atlantic Council and involved warships primarily from the United States, though vessels from many other countries were also included. Operation Ocean Shield focused on protecting the ships of Operation Allied Provider, which were distributing aid as part of the World Food Programme mission in Somalia. Russia, China and South Korea sent warships to participate in the activities as well.[84][non-primary source needed][85][non-primary source needed] The operation sought to dissuade and interrupt pirate attacks, protect vessels, and increase the general level of security in the region.[86][non-primary source needed]

During the Libyan Civil War, violence between protesters and the Libyan government under Colonel Muammar Gaddafi escalated, and on 17 March 2011 led to the passage of United Nations Security Council Resolution 1973, which called for a ceasefire, and authorized military action to protect civilians. A coalition that included several NATO members began enforcing a no-fly zone over Libya shortly afterwards, beginning with Opération Harmattan by the French Air Force on 19 March.

On 20 March 2011, NATO states agreed on enforcing an arms embargo against Libya with Operation Unified Protector using ships from NATO Standing Maritime Group 1 and Standing Mine Countermeasures Group 1,[87] and additional ships and submarines from NATO members.[88] They would "monitor, report and, if needed, interdict vessels suspected of carrying illegal arms or mercenaries".[87]

On 24 March, NATO agreed to take control of the no-fly zone from the initial coalition, while command of targeting ground units remained with the coalition's forces.[89][90] NATO began officially enforcing the UN resolution on 27 March 2011 with assistance from Qatar and the United Arab Emirates.[91] By June, reports of divisions within the alliance surfaced as only eight of the 28 member states were participating in combat operations,[92] resulting in a confrontation between US Defense Secretary Robert Gates and countries such as Poland, Spain, the Netherlands, Turkey, and Germany: Gates called on these countries to contribute more, while they believed the organization had overstepped its mandate in the conflict.[93][94][95] In his final policy speech in Brussels on 10 June, Gates further criticized allied countries, suggesting their actions could cause the demise of NATO.[96] The German foreign ministry pointed to "a considerable [German] contribution to NATO and NATO-led operations" and to the fact that this engagement was highly valued by President Obama.[97]

While the mission was extended into September, Norway that day (10 June) announced it would begin scaling down contributions and complete withdrawal by 1 August.[98] Earlier that week it was reported Danish air fighters were running out of bombs.[99][100] The following week, the head of the Royal Navy said the country's operations in the conflict were not sustainable.[101] By the end of the mission in October 2011, after the death of Colonel Gaddafi, NATO planes had flown about 9,500 strike sorties against pro-Gaddafi targets.[102][103] A report from the organization Human Rights Watch in May 2012 identified at least 72 civilians killed in the campaign.[104]

Following a coup d'état attempt in October 2013, Libyan Prime Minister Ali Zeidan requested technical advice and trainers from NATO to assist with ongoing security issues.[105]

Use of Article 5 has been threatened multiple times, and four out of seven official Article 4 consultations have been called due to spillover in Turkey from the Syrian Civil War. In April 2012, Turkish Prime Minister Recep Tayyip Erdoğan considered invoking Article 5 of the NATO treaty to protect Turkish national security in a dispute over the Syrian Civil War.[106][107] The alliance responded quickly, and a spokesperson said the alliance was "monitoring the situation very closely and will continue to do so" and "takes it very seriously protecting its members."[108]

After the shooting down of a Turkish military jet by Syria in June 2012 and Syrian forces shelling Turkish cities in October 2012,[109] resulting in two Article 4 consultations, NATO approved Operation Active Fence. In the past decade the conflict has only escalated. In response to the 2015 Suruç bombing, which Turkey attributed to ISIS, and other security issues along its southern border,[110][111][112][113] Turkey called for an emergency meeting. The latest consultation happened in February 2020, as part of increasing tensions due to the Northwestern Syria offensive, which involved[114] Syrian and suspected Russian airstrikes on Turkish troops and risked direct confrontation between Russia and a NATO member.[115] Each escalation and attack has been met with an extension of the initial Operation Active Fence mission.

NATO has thirty members, mainly in Europe and North America. Some of these countries also have territory on multiple continents, which can be covered only as far south as the Tropic of Cancer in the Atlantic Ocean, which defines NATO's "area of responsibility" under Article 6 of the North Atlantic Treaty. During the original treaty negotiations, the United States insisted that colonies such as the Belgian Congo be excluded from the treaty.[117] French Algeria was, however, covered until its independence on 3 July 1962.[118] Twelve of these thirty are original members who joined in 1949, while the other eighteen joined in one of eight enlargement rounds.[citation needed]

Few members spend more than two percent of their gross domestic product on defence,[119] with the United States accounting for three quarters of NATO defence spending.[120]

The three Nordic countries which joined NATO as founding members, Denmark, Iceland, and Norway, chose to limit their participation in three areas: there would be no permanent peacetime bases, no nuclear warheads and no Allied military activity (unless invited) permitted on their territory. However, Denmark allowed the U.S. Air Force to maintain an existing base, Thule Air Base, in Greenland.[121]

From the mid-1960s to the mid-1990s, France pursued a military strategy of independence from NATO under a policy dubbed "Gaullo-Mitterrandism".[122] Nicolas Sarkozy negotiated the return of France to the integrated military command and the Defence Planning Committee in 2009, the latter being disbanded the following year. France remains the only NATO member outside the Nuclear Planning Group and unlike the United States and the United Kingdom, will not commit its nuclear-armed submarines to the alliance.[22][34]

Accession to the alliance is governed by individual Membership Action Plans, and requires approval by each current member. NATO currently has three candidate countries that are in the process of joining the alliance: Bosnia and Herzegovina, Finland, and Sweden. North Macedonia is the most recent state to sign an accession protocol to become a NATO member state, which it did in February 2019, and it became a member state on 27 March 2020.[123][124] Its accession had been blocked by Greece for many years due to the Macedonia naming dispute, which was resolved in 2018 by the Prespa agreement.[125] In order to support each other in the process, new and potential members in the region formed the Adriatic Charter in 2003.[126] Georgia was also named as an aspiring member, and was promised "future membership" during the 2008 summit in Bucharest,[127] though in 2014, US President Barack Obama said the country was not "currently on a path" to membership.[128]

Ukraine's relationship with NATO and Europe has been politically controversial, and improvement of these relations was one of the goals of the "Euromaidan" protests that saw the ousting of pro-Russian President Viktor Yanukovych in 2014. Ukraine is one of eight countries in Eastern Europe with an Individual Partnership Action Plan. IPAPs began in 2002, and are open to countries that have the political will and ability to deepen their relationship with NATO.[129] On 21 February 2019, the Constitution of Ukraine was amended to enshrine the strategic course of Ukraine towards membership in the European Union and NATO in the preamble of the Basic Law, three articles, and the transitional provisions.[130] At the June 2021 Brussels Summit, NATO leaders reiterated the decision taken at the 2008 Bucharest Summit that Ukraine would become a member of the Alliance, with the Membership Action Plan (MAP) as an integral part of the process, and affirmed Ukraine's right to determine its own future and foreign policy course without outside interference.[131] On 30 November 2021, Russian President Vladimir Putin stated that an expansion of NATO's presence in Ukraine, especially the deployment of any long-range missiles capable of striking Russian cities or missile defence systems similar to those in Romania and Poland, would be a "red line" issue for Russia.[132][133][134] Putin asked U.S. President Joe Biden for legal guarantees that NATO would not expand eastward or put "weapons systems that threaten us in close vicinity to Russian territory."[135] NATO Secretary-General Jens Stoltenberg replied that "It's only Ukraine and 30 NATO allies that decide when Ukraine is ready to join NATO. Russia has no veto, Russia has no say, and Russia has no right to establish a sphere of influence to try to control their neighbors."[136][137]

Russia continued to politically oppose further expansion, seeing it as inconsistent with informal understandings between Soviet leader Mikhail Gorbachev and European and US negotiators that allowed for a peaceful German reunification.[138] NATO's expansion efforts are often seen by Moscow leaders as a continuation of a Cold War attempt to surround and isolate Russia,[139] though they have also been criticized in the West.[140] A June 2016 Levada poll found that 68 percent of Russians think that deploying NATO troops in the Baltic states and Poland – former Eastern bloc countries bordering Russia – is a threat to Russia.[141] In contrast, 65 percent of Poles surveyed in a 2017 Pew Research Center report identified Russia as a "major threat", with an average of 31 percent saying so across all NATO countries,[142] and 67 percent of Poles surveyed in 2018 favour US forces being based in Poland.[143] Of non-CIS Eastern European countries surveyed by Gallup in 2016, all but Serbia and Montenegro were more likely than not to view NATO as a protective alliance rather than a threat.[144] A 2006 study in the journal Security Studies argued that NATO enlargement contributed to democratic consolidation in Central and Eastern Europe.[145] China also opposes further expansion.[146]

Following the 2022 Russian invasion of Ukraine, public opinion in Finland and in Sweden swung sharply in favor of joining NATO, with more citizens supporting NATO membership than those who were opposed to it for the first time. A poll on 30 March 2022 revealed that about 61% of Finns were in favor of NATO membership, as opposed to 16% against and 23% uncertain. A poll on 1 April revealed that about 51% of Swedes were in favor of NATO membership, as opposed to 27% against.[147][148] In mid-April, the governments of Finland and Sweden began exploring NATO membership, with their governments commissioning security reports on this subject.[149][150] The addition of the two Nordic countries would significantly expand NATO's capabilities in the Arctic, Nordic, and Baltic regions.[151]

On 15 May 2022, the Finnish government announced that it would apply for NATO membership, subject to approval by the country's parliament,[152] which voted 188–8 in favor of the move on 17 May.[153] Swedish Prime Minister Magdalena Andersson announced her country would apply for NATO membership on 17 May,[154] and both Finland and Sweden formally submitted applications for NATO membership on 18 May.[155] Turkey voiced opposition to Finland and Sweden joining NATO, accusing the two countries of providing support to the Kurdistan Workers' Party (PKK) and the People's Defense Units (YPG), two Kurdish groups which Turkey has designated as terrorist organizations. On 28 June, at a NATO summit in Madrid, Turkey agreed to support the membership bids of Finland and Sweden.[156][157] On 5 July, the 30 NATO ambassadors signed off on the accession protocols for Sweden and Finland and formally approved the decisions of the NATO summit of 28 June.[158] The protocols must be ratified by all member governments; every country except Hungary and Turkey had done so by the end of 2022.[159][160][161]

The increase in the number of NATO members over the years has not been matched by an increase in defense expenditure.[162] Concerned about declining defense budgets, and aiming to share financial commitments more equitably and to spend more effectively, NATO members met at the 2014 Wales Summit to establish the Defense Investment Pledge.[163][164] Members considered it necessary to contribute at least 2% of their gross domestic product (GDP) to defense, and at least 20% of their defense budgets to major equipment (which includes allocations to defense research and development), by 2024.[165]

Implementation of the Defense Investment Pledge is hindered by the lack of a legally binding obligation on members, European Union fiscal rules, member states' domestic public-expenditure priorities, and limited political willingness.[163][162] In 2021, eight member states met the goal of spending 2% of GDP on defense.[166] In 2020, 18 NATO members reached the target of a 20% contribution to major equipment.[citation needed] Improvements in compliance with the Wales commitments have been facilitated by the increased security risk posed to members by the Russian Federation.[citation needed]
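The pledge thus reduces to two ratio checks per member: defense spending relative to GDP, and major-equipment spending relative to the defense budget. The sketch below is a minimal, illustrative Python snippet; the function name and all figures are hypothetical, not official NATO data or tooling.

```python
# Minimal sketch of the two Wales Defense Investment Pledge thresholds described
# above: at least 2% of GDP on defense, and at least 20% of the defense budget
# on major equipment (including research and development). The figures used
# below are hypothetical placeholders, not real national statistics.

def meets_wales_pledge(gdp: float, defence_spending: float,
                       equipment_spending: float) -> dict:
    """Report which of the two 2014 Wales targets a member meets."""
    gdp_share = defence_spending / gdp
    equipment_share = equipment_spending / defence_spending
    return {
        "defence_at_least_2pct_gdp": gdp_share >= 0.02,
        "equipment_at_least_20pct_budget": equipment_share >= 0.20,
        "gdp_share": round(gdp_share, 4),
        "equipment_share": round(equipment_share, 4),
    }

if __name__ == "__main__":
    # Hypothetical member: GDP 500 bn, defense budget 9 bn, equipment 2.1 bn
    # (all in the same currency units).
    print(meets_wales_pledge(gdp=500e9, defence_spending=9e9,
                             equipment_spending=2.1e9))
    # 1.8% of GDP (misses the 2% target); 23.3% on equipment (meets the 20% target).
```

Under these made-up figures the member meets the equipment criterion but not the GDP criterion, mirroring the mixed compliance picture described above.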

The Partnership for Peace (PfP) programme was established in 1994 and is based on individual bilateral relations between each partner country and NATO: each country may choose the extent of its participation.[168] Members include all current and former members of the Commonwealth of Independent States.[169] The Euro-Atlantic Partnership Council (EAPC) was first established on 29 May 1997, and is a forum for regular coordination, consultation and dialogue between all fifty participants.[170] The PfP programme is considered the operational wing of the Euro-Atlantic Partnership.[168] Other third countries also have been contacted for participation in some activities of the PfP framework such as Afghanistan.[171]

The European Union (EU) signed a comprehensive package of arrangements with NATO under the Berlin Plus agreement on 16 December 2002. With this agreement, the EU was given the possibility of using NATO assets in case it wanted to act independently in an international crisis, on the condition that NATO itself did not want to act – the so-called "right of first refusal".[172] For example, Article 42(7) of the Treaty of Lisbon specifies that "If a Member State is the victim of armed aggression on its territory, the other Member States shall have towards it an obligation of aid and assistance by all the means in their power". The treaty applies globally to specified territories, whereas NATO is restricted under its Article 6 to operations north of the Tropic of Cancer. It provides a "double framework" for the EU countries that are also linked with the PfP programme.[citation needed]

Additionally, NATO cooperates and discusses its activities with numerous other non-NATO members. The Mediterranean Dialogue was established in 1994 to coordinate in a similar way with Israel and countries in North Africa. The Istanbul Cooperation Initiative was announced in 2004 as a dialogue forum for the Middle East along the same lines as the Mediterranean Dialogue. The four participants are also linked through the Gulf Cooperation Council.[173] In June 2018, Qatar expressed its wish to join NATO.[174] However, NATO declined membership, stating that only additional European countries could join according to Article 10 of NATO's founding treaty.[175] Qatar and NATO had previously signed a security agreement in January 2018.[176]

Political dialogue with Japan began in 1990, and since then, the Alliance has gradually increased its contact with countries that do not form part of any of these cooperation initiatives.[177] In 1998, NATO established a set of general guidelines that do not allow for a formal institutionalization of relations, but reflect the Allies' desire to increase cooperation. Following extensive debate, the term "Contact Countries" was agreed by the Allies in 2000. By 2012, the Alliance had broadened this group, which meets to discuss issues such as counter-piracy and technology exchange, under the names "partners across the globe" or "global partners".[178][179] Australia and New Zealand, both contact countries, are also members of the AUSCANNZUKUS strategic alliance, and similar regional or bilateral agreements between contact countries and NATO members also aid cooperation. NATO Secretary General Jens Stoltenberg stated that NATO needs to "address the rise of China" by closely cooperating with Australia, New Zealand, Japan and South Korea.[180] Colombia is NATO's latest partner and has access to the full range of cooperative activities NATO offers to partners; it is the first and only Latin American country to cooperate with NATO.[181]

All agencies and organizations of NATO are integrated into either the civilian administrative or military executive roles. For the most part they perform roles and functions that directly or indirectly support the security role of the alliance as a whole.

The civilian structure includes:

The military structure includes:

The organizations and agencies of NATO include:

The NATO Parliamentary Assembly (NATO PA) is a body that sets broad strategic goals for NATO, and it meets in two sessions per year. NATO PA interacts directly with the parliamentary structures of the national governments of the member states, which appoint Permanent Members, or ambassadors to NATO. The NATO Parliamentary Assembly is made up of legislators from the member countries of the North Atlantic Alliance as well as thirteen associate members. It is, however, officially a structure distinct from NATO, and aims to bring together deputies of NATO countries to discuss security policies considered by the NATO Council.[citation needed]

NATO is an alliance of 30 sovereign states, and their individual sovereignty is unaffected by participation in the alliance. NATO has no parliaments, no laws, no enforcement, and no power to punish individual citizens. As a consequence of this lack of sovereignty, the power and authority of a NATO commander are limited. NATO commanders cannot punish offences such as failure to obey a lawful order, dereliction of duty, or disrespect to a senior officer.[193] NATO commanders expect obedience but sometimes need to subordinate their desires or plans to the operators, who are themselves subject to sovereign codes of conduct such as the UCMJ. A case in point was the clash between General Sir Mike Jackson and General Wesley Clark over KFOR actions at Pristina Airport.[194]

NATO commanders can issue orders to their subordinate commanders in the form of operational plans (OPLANs), operational orders (OPORDERs), tactical direction, fragmentary orders (FRAGOs), and others. The joint rules of engagement must be followed, and the Law of Armed Conflict must be obeyed at all times. Operational resources "remain under national command but have been transferred temporarily to NATO. Although these national units, through the formal process of transfer of authority, have been placed under the operational command and control of a NATO commander, they never lose their national character." Senior national representatives, like the CDS, "are designated as so-called red-cardholders". Caveats are restrictions listed "nation by nation... that NATO Commanders... must take into account."[193]

NATO | Founders, History, Purpose, Countries, Map, & Facts

Top Questions

What is NATO?

The North Atlantic Treaty Organization (NATO) is a military alliance originally established in 1949 to create a counterweight to Soviet armies stationed in central and eastern Europe after World War II. When the Cold War ended, NATO was reconceived as a cooperative-security organization.

What does NATO stand for?

NATO is an acronym for North Atlantic Treaty Organization.

How many countries are in NATO?

Currently 30 countries are members of NATO.

Which countries are in NATO?

The current member states of NATO are Albania, Belgium, Bulgaria, Canada, Croatia, the Czech Republic, Denmark, Estonia, France, Germany, Greece, Hungary, Iceland, Italy, Latvia, Lithuania, Luxembourg, Montenegro, the Netherlands, North Macedonia, Norway, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Turkey, the United Kingdom, and the United States.

What are the newest members of NATO?

North Macedonia is the newest member of NATO; it joined the alliance in 2020. In May 2022 Sweden and Finland announced their intention to join NATO in response to Russia's unprovoked invasion of Ukraine.

Summary

North Atlantic Treaty Organization (NATO), military alliance established by the North Atlantic Treaty (also called the Washington Treaty) of April 4, 1949, which sought to create a counterweight to Soviet armies stationed in central and eastern Europe after World War II. Its original members were Belgium, Canada, Denmark, France, Iceland, Italy, Luxembourg, the Netherlands, Norway, Portugal, the United Kingdom, and the United States. Joining the original signatories were Greece and Turkey (1952); West Germany (1955; from 1990 as Germany); Spain (1982); the Czech Republic, Hungary, and Poland (1999); Bulgaria, Estonia, Latvia, Lithuania, Romania, Slovakia, and Slovenia (2004); Albania and Croatia (2009); Montenegro (2017); and North Macedonia (2020). France withdrew from the integrated military command of NATO in 1966 but remained a member of the organization; it resumed its position in NATO's military command in 2009. Finland and Sweden, two long-neutral countries, were formally invited to join NATO in 2022.

The heart of NATO is expressed in Article 5 of the North Atlantic Treaty, in which the signatory members agree that

an armed attack against one or more of them in Europe or North America shall be considered an attack against them all; and consequently they agree that, if such an armed attack occurs, each of them, in exercise of the right of individual or collective self-defense recognized by Article 51 of the Charter of the United Nations, will assist the Party or Parties so attacked by taking forthwith, individually and in concert with the other Parties, such action as it deems necessary, including the use of armed force, to restore and maintain the security of the North Atlantic area.

NATO invoked Article 5 for the first time in 2001, after the September 11 attacks organized by exiled Saudi Arabian millionaire Osama bin Laden destroyed the World Trade Center in New York City and part of the Pentagon outside Washington, D.C., killing some 3,000 people.

Article 6 defines the geographic scope of the treaty as covering an armed attack on the territory of any of the Parties in Europe or North America. Other articles commit the allies to strengthening their democratic institutions, to building their collective military capability, to consulting each other, and to remaining open to inviting other European states to join.

After World War II in 1945, western Europe was economically exhausted and militarily weak (the western Allies had rapidly and drastically reduced their armies at the end of the war), and newly powerful communist parties had arisen in France and Italy. By contrast, the Soviet Union had emerged from the war with its armies dominating all the states of central and eastern Europe, and by 1948 communists under Moscow's sponsorship had consolidated their control of the governments of those countries and suppressed all noncommunist political activity. What became known as the Iron Curtain, a term popularized by Winston Churchill, had descended over central and eastern Europe. Further, wartime cooperation between the western Allies and the Soviets had completely broken down. Each side was organizing its own sector of occupied Germany, so that two German states would emerge, a democratic one in the west and a communist one in the east.

In 1948 the United States launched the Marshall Plan, which infused massive amounts of economic aid to the countries of western and southern Europe on the condition that they cooperate with each other and engage in joint planning to hasten their mutual recovery. As for military recovery, under the Brussels Treaty of 1948, the United Kingdom, France, and the Low Countries (Belgium, the Netherlands, and Luxembourg) concluded a collective-defense agreement called the Western European Union. It was soon recognized, however, that a more formidable alliance would be required to provide an adequate military counterweight to the Soviets.

By this time Britain, Canada, and the United States had already engaged in secret exploratory talks on security arrangements that would serve as an alternative to the United Nations (UN), which was becoming paralyzed by the rapidly emerging Cold War. In March 1948, following a virtual communist coup d'état in Czechoslovakia in February, the three governments began discussions on a multilateral collective-defense scheme that would enhance Western security and promote democratic values. These discussions were eventually joined by France, the Low Countries, and Norway and in April 1949 resulted in the North Atlantic Treaty.

Spurred by the North Korean invasion of South Korea in June 1950 (see Korean War), the United States took steps to demonstrate that it would resist any Soviet military expansion or pressures in Europe. Gen. Dwight D. Eisenhower, the leader of the Allied forces in western Europe in World War II, was named Supreme Allied Commander Europe (SACEUR) by the North Atlantic Council (NATO's governing body) in December 1950. He was followed as SACEUR by a succession of American generals.

The North Atlantic Council, which was established soon after the treaty came into effect, is composed of ministerial representatives of the member states, who meet at least twice a year. At other times the council, chaired by the NATO secretary-general, remains in permanent session at the ambassadorial level. Just as the position of SACEUR has always been held by an American, the secretary-generalship has always been held by a European.

NATO's military organization encompasses a complete system of commands for possible wartime use. The Military Committee, consisting of representatives of the military chiefs of staff of the member states, subsumes two strategic commands: Allied Command Operations (ACO) and Allied Command Transformation (ACT). ACO is headed by the SACEUR and located at Supreme Headquarters Allied Powers Europe (SHAPE) in Casteau, Belgium. ACT is headquartered in Norfolk, Virginia, U.S. During the alliance's first 20 years, more than $3 billion worth of infrastructure for NATO forces (bases, airfields, pipelines, communications networks, depots) was jointly planned, financed, and built, with about one-third of the funding from the United States. NATO funding generally is not used for the procurement of military equipment, which is provided by the member states, though the NATO Airborne Early Warning Force, a fleet of radar-bearing aircraft designed to protect against a surprise low-flying attack, was funded jointly.

Member states of NATO – Wikipedia

NATO (North Atlantic Treaty Organization) is an international military alliance that consists of 30 member states from Europe and North America. It was established at the signing of the North Atlantic Treaty on 4 April 1949. Article 5 of the treaty states that if an armed attack occurs against one of the member states, it shall be considered an attack against all members, and other members shall assist the attacked member, with armed forces if necessary.[1] Article 6 of the treaty limits the scope of Article 5 to the islands north of the Tropic of Cancer, the North American and European mainlands, the entirety of Turkey, and French Algeria. As such, an attack on Hawaii, Puerto Rico, French Guiana, Ceuta, or Melilla, among other places, would not trigger an Article 5 response.

Of the 30 member countries, 28 are in Europe and two in North America. Between 1994 and 1997, wider forums for regional cooperation between NATO and its neighbours were set up, including the Partnership for Peace, the Mediterranean Dialogue initiative and the Euro-Atlantic Partnership Council.

All members have militaries, except for Iceland, which does not have a typical army (but it does have a coast guard and a small unit of civilian specialists for NATO operations). Three of NATO's members are nuclear weapons states: France, the United Kingdom, and the United States. NATO has 12 original founding member states. Three more members joined between 1952 and 1955, and a fourth new member joined in 1982. After the end of the Cold War, NATO added 14 more members from 1999 to 2020.

NATO currently recognizes Bosnia and Herzegovina, Finland, Georgia, Sweden, and Ukraine as aspiring members as part of their Open Doors enlargement policy.[2]

NATO was established on 4 April 1949 via the signing of the North Atlantic Treaty (Washington Treaty). The 12 founding members of the Alliance were: Belgium, Canada, Denmark, France, Iceland, Italy, Luxembourg, the Netherlands, Norway, Portugal, the United Kingdom and the United States.[3]

The various allies have all signed the Ottawa Agreement,[4] a 1951 document that embodies civilian oversight of the Alliance.[5][4]

Current membership consists of 30 countries. In addition to the 12 founding countries, four new members joined during the Cold War: Greece (1952), Turkey (1952), West Germany (1955) and Spain (1982). In 1990, the territory of the former East Germany was added with the reunification of Germany. NATO further expanded after the Cold War, adding the Czech Republic, Hungary and Poland (1999), Bulgaria, Estonia, Latvia, Lithuania, Romania, Slovakia and Slovenia (2004), Albania and Croatia (2009), Montenegro (2017) and North Macedonia (2020).[3] Of the territories and members added between 1990 and 2020, all were either formerly part of the Warsaw Pact (including the formerly Soviet Baltic states) or territories of the former Yugoslavia (which was not a Warsaw Pact member). No countries have left NATO since its founding.

As of June 2022, five additional states have formally informed NATO of their membership aspirations: Bosnia and Herzegovina, Finland, Georgia, Sweden and Ukraine.[2]

The current members and their dates of admission are listed below.

The three Nordic countries which joined NATO as founding members, Denmark, Iceland and Norway, chose to limit their participation in three areas: there would be no permanent peacetime bases, no nuclear warheads and no Allied military activity (unless invited) permitted on their territory. However, Denmark allowed the U.S. Air Force to maintain an existing base, Thule Air Base, in Greenland.[16]

From the mid-1960s to the mid-1990s, France pursued a military strategy of independence from NATO under a policy dubbed "Gaullo-Mitterrandism".[17] Nicolas Sarkozy negotiated the return of France to the integrated military command and the Defence Planning Committee in 2009, the latter being disbanded the following year. France remains the only NATO member outside the Nuclear Planning Group and unlike the United States and the United Kingdom, will not commit its nuclear-armed submarines to the alliance.[18][19]

The following list is constructed from The Military Balance, published annually by the International Institute for Strategic Studies.

Military spending of the US compared to the total of all 29 other NATO member countries (US$ millions):[i] United States 70.46%; all other NATO countries combined 29.53%.

Total military spending of NATO member countries except the United States (US$ millions), by share:[i][j] United Kingdom 19.72%, Germany 17.68%, France 16.55%, Italy 7.99%, Canada 7.15%, Turkey 4.54%, Spain 4.29%, Netherlands 4.05%, Poland 3.91%, Norway 2.34%, Romania 1.64%, Greece 1.58%, Denmark 1.55%, Portugal 1.09%, Hungary 0.67%, Slovakia 0.62%, Lithuania 0.35%, Croatia 0.35%, Bulgaria 0.35%, Latvia 0.23%, Estonia 0.21%, Slovenia 0.18%, North Macedonia 0.035%, Montenegro 0.03%, other members 2.895% (United States omitted; see above).

The defence spending of the United States is more than double the defence spending of all other NATO members combined.[21] Criticism by then US President Donald Trump that many member states were not contributing their fair share in accordance with the international agreement caused various reactions from American and European political figures, ranging from ridicule to panic.[22][23][24]
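As a quick arithmetic check of that claim, the shares listed above (roughly 70.46 percent for the United States versus 29.53 percent for all other members combined) imply a ratio of about 2.4. The trivial snippet below is only illustrative and uses the two percentages from the chart data above:

```python
# Sanity-check the "more than double" claim using only the percentage shares
# quoted above: US 70.46% of NATO military spending vs 29.53% for all other
# members combined.
us_share = 70.46
others_share = 29.53
ratio = us_share / others_share
print(f"US spending is about {ratio:.2f}x the rest of NATO combined")  # ~2.39x
```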

Pew Research Center's 2016 survey among its member states showed that while most countries viewed NATO positively, most NATO members preferred keeping their military spending the same. The response to whether their country should militarily aid another NATO country if it were to get into a serious military conflict with Russia was also mixed. Roughly half or fewer in six of the eight countries surveyed say their country should use military force if Russia attacks a neighboring country that is a NATO ally. And at least half in three of the eight NATO countries say that their government should not use military force in such circumstances. The strongest opposition to responding with armed force is in Germany (58%), followed by France (53%) and Italy (51%). More than half of Americans (56%) and Canadians (53%) are willing to respond to Russian military aggression against a fellow NATO country. A plurality of the British (49%) and Poles (48%) would also live up to their Article 5 commitment. The Spanish are divided on the issue: 48% support it, 47% oppose.[26][27]
