Island Maps: Caribbean Islands, Greek Islands, Pacific …

Arctic Ocean
Atlantic Ocean (North) – north of the equator
Atlantic Ocean (South) – south of the equator
Assorted (A–Z) – found in a variety of bays, channels, lakes, rivers, seas, straits, etc.
Caribbean Sea
Greek Isles
Indian Ocean
Mediterranean Sea
Pacific Ocean (North) – north of the equator
Pacific Ocean (South) – south of the equator
Oceania and the South Pacific Islands

This page was last updated on August 26, 2015.


Island – Wikipedia

Manhattan, in the United States, is home to over 1.6 million people.

An island or isle is any piece of sub-continental land that is surrounded by water.[2] Very small islands, such as emergent land features on atolls, can be called islets, skerries, cays or keys. An island in a river or a lake may be called an eyot or ait, and a small island off the coast may be called a holm. A grouping of geographically or geologically related islands is called an archipelago, such as the Philippines.

An island may be described as such, despite the presence of an artificial land bridge; examples are Singapore and its causeway, and the various Dutch delta islands, such as IJsselmonde. Some places may even retain “island” in their names for historical reasons after being connected to a larger landmass by a land bridge or landfill, such as Coney Island and Coronado Island, though these are, strictly speaking, tied islands. Conversely, when a piece of land is separated from the mainland by a man-made canal, for example the Peloponnese by the Corinth Canal or Marble Hill in northern Manhattan during the time between the building of the United States Ship Canal and the filling-in of the Harlem River which surrounded the area, it is generally not considered an island.

There are two main types of islands in the sea: continental and oceanic. There are also artificial islands.

Greenland is the world’s largest island, with an area of over 2.1 million km², while Australia, the world’s smallest continent, has an area of 7.6 million km², but there is no standard of size which distinguishes islands from continents,[5] or from islets.[6] There is a difference between islands and continents in terms of geology. Continents sit on continental lithosphere which is part of tectonic plates floating high on Earth’s mantle. Oceanic crust is also part of tectonic plates, but it is denser than continental lithosphere, so it floats low on the mantle. Islands are either extensions of the oceanic crust (e.g. volcanic islands) or geologically they are part of some continent sitting on continental lithosphere (e.g. Greenland). This holds true for Australia, which sits on its own continental lithosphere and tectonic plate.

Continental islands are bodies of land that lie on the continental shelf of a continent.[7] Examples are Borneo, Java, Sumatra, Sakhalin, Taiwan and Hainan off Asia; New Guinea, Tasmania, and Kangaroo Island off Australia; Great Britain, Ireland, and Sicily off Europe; Greenland, Newfoundland, Long Island, and Sable Island off North America; and Barbados, the Falkland Islands, and Trinidad off South America.

A special type of continental island is the microcontinental island, which is created when a continent is rifted. Examples are Madagascar and Socotra off Africa, the Kerguelen Islands, New Caledonia, New Zealand, and some of the Seychelles.

Another subtype is an island or bar formed by the deposition of sediment where a water current loses some of its carrying capacity.

Islets are very small islands.

Oceanic islands are islands that do not sit on continental shelves. The vast majority are volcanic in origin, such as Saint Helena in the South Atlantic Ocean.[8] The few oceanic islands that are not volcanic are tectonic in origin and arise where plate movements have lifted up the ocean floor above the surface. Examples are Saint Peter and Paul Rocks in the Atlantic Ocean and Macquarie Island in the Pacific.

One type of volcanic oceanic island is found in a volcanic island arc. These islands arise from volcanoes where the subduction of one plate under another is occurring. Examples are the Aleutian Islands, the Mariana Islands, and most of Tonga in the Pacific Ocean. The only examples in the Atlantic Ocean are some of the Lesser Antilles and the South Sandwich Islands.

Another type of volcanic oceanic island occurs where an oceanic rift reaches the surface. There are two examples: Iceland, which is the world’s second largest volcanic island, and Jan Mayen. Both are in the Atlantic.

A third type of volcanic oceanic island is formed over volcanic hotspots. A hotspot is more or less stationary relative to the moving tectonic plate above it, so a chain of islands results as the plate drifts. Over long periods of time, this type of island is eventually “drowned” by isostatic adjustment and eroded, becoming a seamount. Plate movement across a hotspot produces a line of islands oriented in the direction of the plate movement. An example is the Hawaiian Islands, from Hawaii to Kure, which continue beneath the sea surface in a more northerly direction as the Emperor Seamounts. Another chain with similar orientation is the Tuamotu Archipelago; its older, northerly trend is the Line Islands. The southernmost chain is the Austral Islands, with its northerly trending part the atolls in the nation of Tuvalu. Tristan da Cunha is an example of a hotspot volcano in the Atlantic Ocean. Another hotspot in the Atlantic is the island of Surtsey, which was formed in 1963.

An atoll is an island formed from a coral reef that has grown on an eroded and submerged volcanic island. The reef rises to the surface of the water and forms a new island. Atolls are typically ring-shaped with a central lagoon. Examples are the Line Islands in the Pacific and the Maldives in the Indian Ocean.

Approximately 45,000 tropical islands with an area of at least 5 hectares (12 acres) exist.[9] Examples formed from coral reefs include the Maldives, Tonga, Samoa, Nauru, and the islands of Polynesia.[9] Granite islands include the Seychelles and Tioman, while volcanic islands include Saint Helena.

The socio-economic diversity of tropical islands ranges from the Stone Age societies in the interior of Madagascar, Borneo, and Papua New Guinea to the high-tech lifestyles of the city-islands of Singapore and Hong Kong.[10]

International tourism is a significant factor in the economy of many tropical islands, including the Seychelles, Sri Lanka, Mauritius, Réunion, Hawaii, and the Maldives.

Almost all of the Earth’s islands are natural and have been formed by tectonic forces or volcanic eruptions. However, artificial (man-made) islands also exist, such as the island in Osaka Bay off the Japanese island of Honshu, on which Kansai International Airport is located. Artificial islands can be built using natural materials (e.g., earth, rock, or sand) or artificial ones (e.g., concrete slabs or recycled waste).[11][12] Sometimes natural islands are artificially enlarged, such as Vasilyevsky Island in the Russian city of St. Petersburg, which had its western shore extended westward by some 0.5 km in the construction of the Passenger Port of St. Petersburg.[13]

Artificial islands are sometimes built on pre-existing “low-tide elevation,” a naturally formed area of land which is surrounded by and above water at low tide but submerged at high tide. Legally these are not islands and have no territorial sea of their own.[14]


National Speakers Association (NSA) | Where professional …

NSA provides professional speakers with the comprehensive resources, mentoring and professional connections they need to become more efficient and more effective in all aspects of their trade. Our 3,400+ members reach audiences as thought leaders, authors, consultants, coaches, trainers, educators, humorists, and motivators. Anyone who uses the spoken word to impact listeners can benefit from NSA membership.


Ascension – Show News, Reviews, Recaps and Photos –



Coffee & Wine Bar | Ascension Coffee Roasters Dallas, Texas

We work hard to be good at what we do. We are committed to incredible, high-quality, farmer-produced coffee from around the world. We are fanatics. We spend much of our time sourcing the world’s best coffee, roasting it to coax out the beauty inside each bean, and then brewing it in one of our cafes with care for you, our fellow coffee lover.

Our journey is one of love for the bean and love for the people who care for them.

We are Ascension.


Church of the Ascension

Mass schedule – April 21–29
Apr. 21 – 5:00 PM Saturday Vigil Mass (Confessions at 4:00 PM)
Apr. 22 – 8:30 AM, 10:30 AM, and 12:15 PM Masses
Apr. 23 – 9 AM Daily Mass
Apr. 24 – 9 AM Communion Service
Apr. 24 – 11 AM Memorial Mass for Barbara Mulvihill
Apr. 25 – 9 AM Daily Mass
Apr. 26 – 9 AM Daily Mass
Apr. 27 – 9 AM Daily Mass


Freedom of speech

Freedom of speech is the right, as stated in the 1st and 14th Amendments to the Constitution of the United States, to express information, ideas, and opinions free of government restrictions based on content. A modern legal test of the legitimacy of proposed restrictions on freedom of speech was stated in the opinion by Oliver Wendell Holmes, Jr. in Schenck v. U.S. (1919): a restriction is legitimate only if the speech in question poses a “clear and present danger”, i.e., a risk or threat to safety or to other public interests that is serious and imminent. Many cases involving freedom of speech and of the press have also concerned defamation, obscenity, and prior restraint (see Pentagon Papers). See also censorship.


Beaches (1988) – IMDb

Nominated for 1 Oscar. Another 1 win & 5 nominations.


When the New York child performer CC Bloom and San Francisco rich kid Hillary meet in a holiday resort in Atlantic City, it marks the start of a lifetime friendship between them. The two keep in touch through letters for a number of years until Hillary, now a successful lawyer, moves to New York to stay with struggling singer CC. The movie shows the various stages of their friendship and their romances, including their love for the same man. Written by Sami Al-Taher

Budget:$20,000,000 (estimated)

Opening Weekend USA: $198,361 (26 December 1988, limited release)

Gross USA: $57,041,866

Runtime: 123 min

Aspect Ratio: 1.85 : 1


Best Beaches in the World – TripAdvisor



Beaches (film) – Wikipedia

Beaches (also known as Forever Friends) is a 1988 American comedy-drama film adapted by Mary Agnes Donoghue from the Iris Rainer Dart novel of the same name. It was directed by Garry Marshall, and stars Bette Midler, Barbara Hershey, Mayim Bialik, John Heard, James Read, Spalding Gray, and Lainie Kazan.

Despite generally negative reviews from critics, the film was a commercial success, grossing $59 million at the box office, and gained a cult following.

A sequel, based on the novel Beaches II: I’ll Be There, was planned with Barbara Eden but never filmed.

The story of two friends from different backgrounds, whose friendship spans more than 35 years through childhood, love, and tragedy: Cecilia Carol “C.C.” Bloom, a New York actress and singer, and Hillary Whitney, a San Francisco heiress and lawyer. The film begins with middle-aged C.C. receiving a note during a rehearsal for her upcoming Los Angeles concert. She leaves the rehearsal in a panic and tries frantically to travel to her friend’s side. Unable to get a flight to San Francisco because of fog, she rents a car and drives overnight, reflecting on her life with Hillary.

It is 1958; a rich little girl, Hillary, meets child performer C.C., under the boardwalk on the beach in Atlantic City, New Jersey. Hillary is lost and C.C. is hiding from her overbearing stage mother. They become fast friends, growing up and bonding through letters of support to each other. A grown-up Hillary goes on to become a human rights lawyer, while C.C.’s singing career is not exactly taking off. They write to each other regularly and give updates on their lives. Hillary shows up at the New York City dive bar where C.C. is performing, their first meeting since Atlantic City. She moves in with C.C. and gets a job with the ACLU. C.C. is now performing singing telegrams, leading to a job offer from John, the artistic director of the Falcon Players, after she sings his birthday telegram.

A love triangle ensues as Hillary and John are instantly attracted to one another, leaving C.C. in the cold and feeling resentment toward her best friend. Matters are made worse when Hillary and John sleep together on the opening night of C.C.’s first lead in an off-Broadway production. When Hillary returns home to care for her ailing father, the two friends resolve their issues about John, as John does not have romantic feelings for C.C. After her father passes away, Hillary spends time at her family beach house with lawyer Michael Essex, eventually marrying him. C.C. and John spend a lot of time together, start dating and eventually marry. Hillary and Michael travel to New York to see C.C. perform on Broadway, where she has become a star. When C.C. finds out that Hillary has stopped working as a lawyer, she accuses Hillary of giving up on her dreams. Hillary responds that C.C. has become no more than a “pretentious social climber” who is obsessed with her career. After the argument, Hillary ignores C.C.’s letters, throwing herself into being a dutiful, but unchallenged, wife.

John tells C.C. that her self-centeredness and obsession with her career has him feeling left behind and he asks for a divorce. Despite the separation, John tells her, ‘I love you, I’ll always love you. I just want to let go of us before us gets bad.’ Upset at the thought of her marriage failing, C.C. turns to her mother, who lives in Miami Beach. Her mother tells her that she has given up a lot for her daughter, and C.C. starts to understand when her mother tells her the effect that her selfishness has had on those closest to her. Meanwhile, Hillary returns home from a trip earlier than expected to find her husband having breakfast with another woman, both wearing pajamas. When Hillary learns that C.C. is performing in San Francisco, she makes contact for the first time in years. They learn of each other’s divorces, then discover that they have been secretly jealous of each other for years: Hillary is upset that she has none of the talent or charisma that C.C. is noted for, while C.C. admits she has always been envious of Hillary’s beauty and intelligence. The two then realize that their feud could have been avoided by honest communication.

Hillary tells C.C. that she is pregnant and that she has already decided to keep the baby and raise the child as a single parent, a decision that wins her much admiration from the feisty and independent C.C., who promises she will stay and help her out. C.C. even starts talking of settling down and having a family of her own, having become engaged to Hillary’s obstetrician. However, when C.C.’s agent calls with the perfect comeback gig for her, C.C. quickly abandons her fiancé and any notions of the domestic life and races back to New York City, discovering that the comeback gig is at her ex-husband John’s theater, bringing her full circle to where she began her theatrical career. Hillary eventually gives birth to a daughter, whom she names Victoria Cecilia. When Victoria is a young girl, Hillary finds herself easily exhausted and breathless, a state she attributes to her busy schedule as a mother and a lawyer. When she collapses while at court, she is diagnosed with viral cardiomyopathy requiring a heart transplant if she is to live. Having a rare tissue type, she realizes she will most likely die before a heart is found.

In the meantime C.C. has become a big star, having won a Tony award and completed her latest hit album. When she learns of Hillary’s illness she agrees to accompany Hillary and Victoria to the beach house for the summer. Hillary becomes depressed due to her debilitated state and inadvertently takes her frustration out on C.C. who she sees having fun with and connecting with Victoria. Hillary eventually begins to accept her prognosis bravely, appreciating her time with Victoria and C.C. Hillary and Victoria return to San Francisco, while C.C. heads to Los Angeles for her concert. While Victoria is packing to travel to the concert, Hillary collapses, leading to the note C.C. receives at the start of the movie which prompts her overnight drive to San Francisco. C.C. takes Hillary and Victoria to the beach house. The two friends watch the sun setting over the beach, transitioning directly to a scene of C.C. and Victoria at a cemetery (all with C.C. singing “Wind Beneath My Wings” in the background).

After the funeral, C.C. tells Victoria that her mother wanted her to live with her. C.C. admits that she is very selfish and has no idea what kind of a mother she will make, but also tells her: “there’s nothing in the world that I want more than to be with you”. She then takes Victoria into her arms and the two console each other in their grief. C.C. goes forward with her concert, and after the show, she leaves hand-in-hand with Victoria, and begins telling stories of when she first met her mother. C.C.’s and Victoria’s voices fade as we hear the younger C.C. and Hillary from 1958: “Be sure to keep in touch, C.C., OK?” “Well sure, we’re friends aren’t we?” The film ends with a young C.C. and Hillary taking pictures together, in a photo booth, on the day they first met.

The film’s theme song, “Wind Beneath My Wings”, hit number one on the Billboard Hot 100 charts and won Grammy Awards for Record of the Year and Song of the Year in 1990.

The film took in $5,160,258 during its opening weekend beginning January 21, 1989. It grossed $57,041,866 domestically.[3]

The film was released on VHS by Touchstone Home Video in August 1989, with a DVD release on August 13, 2002, followed by a special-edition DVD on April 26, 2005.

On review aggregator website Rotten Tomatoes, the film holds an approval rating of 36% based on 39 reviews, and an average rating of 4.4/10.[4]

Included on the soundtrack was Midler’s performance of “Wind Beneath My Wings”, which became an immediate smash hit.

It was nominated for the Academy Award for Best Art Direction (Albert Brenner and Garrett Lewis).[5]

The film is recognized by the American Film Institute in several of its lists.

Lifetime announced a remake of the film, which aired on January 22, 2017. The updated version was directed by Allison Anders with the script by Bart Barker and Nikole Beckwith, and Idina Menzel plays the role of C.C.[7][8] Nia Long plays the role of Hillary alongside Menzel. The film includes the songs “Wind Beneath My Wings” and “The Glory of Love”.[9] [10]

A musical stage adaptation has been written, based on the novel by Iris Rainer Dart, with a book by Dart and Thom Thomas, lyrics by Dart, and music by David Austin. The musical premiered at the Signature Theatre, Arlington, Virginia in February 2014. The musical was directed by Eric D. Schaeffer, with Alysha Umphress as Cee Cee Bloom and Mara Davi as Bertie White.[11][12]

The musical next opened at the Drury Lane Theatre, Oakbrook, Illinois in June 2015 (previews). Again directed by Schaeffer, Shoshana Bean plays Cee Cee and Whitney Bashor plays Bertie.[13] The choreographer is Lorin Latarro, with scenic design by Derek McLane, lighting design by Howell Binkley, costume design by Alejo Vietti and sound design by Kai Harada.[14]


NATO – Wikipedia

The North Atlantic Treaty Organization (NATO; French: Organisation du Traité de l’Atlantique Nord; OTAN), also called the North Atlantic Alliance, is an intergovernmental military alliance between several North American and European countries based on the North Atlantic Treaty that was signed on 4 April 1949.[3][4]

NATO constitutes a system of collective defence whereby its member states agree to mutual defence in response to an attack by any external party. Three NATO members (the United States, France and the United Kingdom) are permanent members of the United Nations Security Council with the power to veto and are officially nuclear-weapon states. NATO Headquarters are located in Haren, Brussels, Belgium, while the headquarters of Allied Command Operations is near Mons, Belgium.

NATO is an alliance that consists of 29 independent member countries across North America and Europe. An additional 21 countries participate in NATO’s Partnership for Peace program, with 15 other countries involved in institutionalized dialogue programs. The combined military spending of all NATO members constitutes over 70% of the global total.[5] Members’ defense spending is supposed to amount to at least 2% of GDP by 2024.[6]

NATO was little more than a political association until the Korean War galvanized the organization’s member states, and an integrated military structure was built up under the direction of two US Supreme Commanders. The course of the Cold War led to a rivalry with the nations of the Warsaw Pact, which formed in 1955. Doubts over the strength of the relationship between the European states and the United States ebbed and flowed, along with doubts over the credibility of the NATO defense against a prospective Soviet invasion, doubts that led to the development of the independent French nuclear deterrent and the withdrawal of France from NATO’s military structure in 1966 for 30 years. After the fall of the Berlin Wall in Germany in 1989, the organization became involved in the breakup of Yugoslavia, and conducted its first military interventions in Bosnia from 1992 to 1995 and later Yugoslavia in 1999. Politically, the organization sought better relations with former Warsaw Pact countries, several of which joined the alliance in 1999 and 2004.

Article 5 of the North Atlantic treaty, requiring member states to come to the aid of any member state subject to an armed attack, was invoked for the first and only time after the September 11 attacks,[7] after which troops were deployed to Afghanistan under the NATO-led ISAF. The organization has operated a range of additional roles since then, including sending trainers to Iraq, assisting in counter-piracy operations[8] and in 2011 enforcing a no-fly zone over Libya in accordance with U.N. Security Council Resolution 1973. The less potent Article 4, which merely invokes consultation among NATO members, has been invoked five times: by Turkey in 2003 over the Iraq War; twice in 2012 by Turkey over the Syrian Civil War, after the downing of an unarmed Turkish F-4 reconnaissance jet, and after a mortar was fired at Turkey from Syria;[9] in 2014 by Poland, following the Russian intervention in Crimea;[10] and again by Turkey in 2015 after threats by Islamic State of Iraq and the Levant to its territorial integrity.[11]

Since its founding, the admission of new member states has increased the alliance from the original 12 countries to 29. The most recent member state to be added to NATO is Montenegro on 5 June 2017. NATO currently recognizes Bosnia and Herzegovina, Georgia, Macedonia and Ukraine as aspiring members.[12]


The Treaty of Brussels was a mutual defence treaty against the Soviet threat at the start of the Cold War. It was signed on 17 March 1948 by Belgium, the Netherlands, Luxembourg, France, and the United Kingdom. It was the precursor to NATO. The Soviet threat became immediate with the Berlin Blockade in 1948, leading to the creation of a joint defence organization in September 1948. However, the parties were too weak militarily to counter the military power of the USSR. In addition, the 1948 Czechoslovak coup d’état by the Communists had overthrown a democratic government and British Foreign Minister Ernest Bevin reiterated that the best way to prevent another Czechoslovakia was to evolve a joint Western military strategy. He got a receptive hearing in the United States, especially considering American anxiety over Italy (and the Italian Communist Party).

In 1948, European leaders met with US defence, military and diplomatic officials at the Pentagon, under US Secretary of State George C. Marshall’s orders, exploring a framework for a new and unprecedented association. Talks for a new military alliance resulted in the North Atlantic Treaty, which was signed by US President Harry S. Truman in Washington on 4 April 1949. It included the five Treaty of Brussels states plus the United States, Canada, Portugal, Italy, Norway, Denmark and Iceland.[16] The first NATO Secretary General, Lord Ismay, stated in 1949 that the organization’s goal was “to keep the Russians out, the Americans in, and the Germans down”. Popular support for the Treaty was not unanimous, and some Icelanders participated in a pro-neutrality, anti-membership riot in March 1949. The creation of NATO can be seen as the primary institutional consequence of a school of thought called Atlanticism which stressed the importance of trans-Atlantic cooperation.[18]

The members agreed that an armed attack against any one of them in Europe or North America would be considered an attack against them all. Consequently, they agreed that, if an armed attack occurred, each of them, in exercise of the right of individual or collective self-defence, would assist the member being attacked, taking such action as it deemed necessary, including the use of armed force, to restore and maintain the security of the North Atlantic area. The treaty does not require members to respond with military action against an aggressor. Although obliged to respond, they maintain the freedom to choose the method by which they do so. This differs from Article IV of the Treaty of Brussels, which clearly states that the response will be military in nature. It is nonetheless assumed that NATO members will aid the attacked member militarily. The treaty was later clarified to include both the member’s territory and their “vessels, forces or aircraft” above the Tropic of Cancer, including some overseas departments of France.[19]

The creation of NATO brought about some standardization of allied military terminology, procedures, and technology, which in many cases meant European countries adopting US practices. The roughly 1,300 Standardization Agreements (STANAGs) codified many of the common practices that NATO has achieved. Hence, the 7.62×51mm NATO rifle cartridge was introduced in the 1950s as a standard firearm cartridge among many NATO countries.[20] Fabrique Nationale de Herstal’s FAL, which used the 7.62mm NATO cartridge, was adopted by 75 countries, including many outside of NATO. Also, aircraft marshalling signals were standardized, so that any NATO aircraft could land at any NATO base. Other standards such as the NATO phonetic alphabet have made their way beyond NATO into civilian use.[22]

The outbreak of the Korean War in June 1950 was crucial for NATO as it raised the apparent threat of all Communist countries working together and forced the alliance to develop concrete military plans. Supreme Headquarters Allied Powers Europe (SHAPE) was formed to direct forces in Europe, and began work under Supreme Allied Commander Dwight D. Eisenhower in January 1951.[24] In September 1950, the NATO Military Committee called for an ambitious buildup of conventional forces to meet the Soviets, subsequently reaffirming this position at the February 1952 meeting of the North Atlantic Council in Lisbon. The Lisbon conference, seeking to provide the forces necessary for NATO’s Long-Term Defence Plan, called for an expansion to ninety-six divisions. However this requirement was dropped the following year to roughly thirty-five divisions with heavier use to be made of nuclear weapons. At this time, NATO could call on about fifteen ready divisions in Central Europe, and another ten in Italy and Scandinavia. Also at Lisbon, the post of Secretary General of NATO as the organization’s chief civilian was created, and Lord Ismay was eventually appointed to the post.[27]

In September 1952, the first major NATO maritime exercises began; Exercise Mainbrace brought together 200 ships and over 50,000 personnel to practice the defence of Denmark and Norway.[28] Other major exercises that followed included Exercise Grand Slam and Exercise Longstep, naval and amphibious exercises in the Mediterranean Sea, Italic Weld, a combined air-naval-ground exercise in northern Italy, Grand Repulse, involving the British Army on the Rhine (BAOR), the Netherlands Corps and Allied Air Forces Central Europe (AAFCE), Monte Carlo, a simulated atomic air-ground exercise involving the Central Army Group, and Weldfast, a combined amphibious landing exercise in the Mediterranean Sea involving American, British, Greek, Italian and Turkish naval forces.[29]

Greece and Turkey also joined the alliance in 1952, forcing a series of controversial negotiations, in which the United States and Britain were the primary disputants, over how to bring the two countries into the military command structure.[24] While this overt military preparation was going on, covert stay-behind arrangements initially made by the Western European Union to continue resistance after a successful Soviet invasion, including Operation Gladio, were transferred to NATO control. Ultimately unofficial bonds began to grow between NATO’s armed forces, such as the NATO Tiger Association and competitions such as the Canadian Army Trophy for tank gunnery.[30][31]

In 1954, the Soviet Union suggested that it should join NATO to preserve peace in Europe.[32] The NATO countries, fearing that the Soviet Union’s motive was to weaken the alliance, ultimately rejected this proposal.

On 17 December 1954, the North Atlantic Council approved MC 48, a key document in the evolution of NATO nuclear thought. MC 48 emphasized that NATO would have to use atomic weapons from the outset of a war with the Soviet Union whether or not the Soviets chose to use them first. This gave SACEUR the same prerogatives for automatic use of nuclear weapons as existed for the commander-in-chief of the US Strategic Air Command.

The incorporation of West Germany into the organization on 9 May 1955 was described as “a decisive turning point in the history of our continent” by Halvard Lange, Foreign Affairs Minister of Norway at the time.[33] A major reason for Germany’s entry into the alliance was that without German manpower, it would have been impossible to field enough conventional forces to resist a Soviet invasion. One of its immediate results was the creation of the Warsaw Pact, which was signed on 14 May 1955 by the Soviet Union, Hungary, Czechoslovakia, Poland, Bulgaria, Romania, Albania, and East Germany, as a formal response to this event, thereby delineating the two opposing sides of the Cold War.

Three major exercises were held concurrently in the northern autumn of 1957. Operation Counter Punch, Operation Strikeback, and Operation Deep Water were the most ambitious military undertaking for the alliance to date, involving more than 250,000 men, 300 ships, and 1,500 aircraft operating from Norway to Turkey.[35]

NATO’s unity was breached early in its history with a crisis occurring during Charles de Gaulle’s presidency of France.[36] De Gaulle protested against the USA’s strong role in the organization and what he perceived as a special relationship between it and the United Kingdom. In a memorandum sent to President Dwight D. Eisenhower and Prime Minister Harold Macmillan on 17 September 1958, he argued for the creation of a tripartite directorate that would put France on an equal footing with the US and the UK.

Considering the response to be unsatisfactory, de Gaulle began constructing an independent defence force for his country. He wanted to give France, in the event of an East German incursion into West Germany, the option of coming to a separate peace with the Eastern bloc instead of being drawn into a larger NATO–Warsaw Pact war.[38] In February 1959, France withdrew its Mediterranean Fleet from NATO command, and later banned the stationing of foreign nuclear weapons on French soil. This caused the United States to transfer two hundred military aircraft out of France and return control of the air force bases that it had operated in France since 1950 to the French by 1967.

Though France showed solidarity with the rest of NATO during the Cuban Missile Crisis in 1962, de Gaulle continued his pursuit of an independent defence by removing France’s Atlantic and Channel fleets from NATO command. In 1966, all French armed forces were removed from NATO’s integrated military command, and all non-French NATO troops were asked to leave France. US Secretary of State Dean Rusk was later quoted as asking de Gaulle whether his order included “the bodies of American soldiers in France’s cemeteries?” This withdrawal forced the relocation of SHAPE from Rocquencourt, near Paris, to Casteau, north of Mons, Belgium, by 16 October 1967.[42] France remained a member of the alliance, and committed to the defence of Europe from possible Warsaw Pact attack with its own forces stationed in the Federal Republic of Germany throughout the Cold War. A series of secret accords between US and French officials, the Lemnitzer–Ailleret Agreements, detailed how French forces would dovetail back into NATO’s command structure should East–West hostilities break out.[43]

When de Gaulle announced his decision to withdraw from the integrated NATO command, President Lyndon Johnson suggested that when de Gaulle “comes rushing down like a locomotive on the track, why the Germans and ourselves, we just stand aside and let him go on by, then we are back together again.”[44] The vision came true. France announced its return to full participation at the 2009 Strasbourg–Kehl summit.[45]

During most of the Cold War, NATO’s watch against the Soviet Union and Warsaw Pact did not actually lead to direct military action. On 1 July 1968, the Treaty on the Non-Proliferation of Nuclear Weapons opened for signature: NATO argued that its nuclear sharing arrangements did not breach the treaty as US forces controlled the weapons until a decision was made to go to war, at which point the treaty would no longer be controlling. Few states knew of the NATO nuclear sharing arrangements at that time, and they were not challenged. In May 1978, NATO countries officially defined two complementary aims of the Alliance, to maintain security and pursue détente. This was supposed to mean matching defences at the level rendered necessary by the Warsaw Pact’s offensive capabilities without spurring a further arms race.

On 12 December 1979, in light of a build-up of Warsaw Pact nuclear capabilities in Europe, ministers approved the deployment of US GLCM cruise missiles and Pershing II theatre nuclear weapons in Europe. The new warheads were also meant to strengthen the western negotiating position regarding nuclear disarmament. This policy was called the Dual Track policy. Similarly, in 1983–84, responding to the stationing of Warsaw Pact SS-20 medium-range missiles in Europe, NATO deployed modern Pershing II missiles tasked to hit military targets such as tank formations in the event of war. This action led to peace movement protests throughout Western Europe, and support for the deployment wavered as many doubted whether the push for deployment could be sustained.

The membership of the organization at this time remained largely static. In 1974, as a consequence of the Turkish invasion of Cyprus, Greece withdrew its forces from NATO’s military command structure but, with Turkish cooperation, was readmitted in 1980[citation needed]. The Falklands War between the United Kingdom and Argentina did not result in NATO involvement because article 6 of the North Atlantic Treaty specifies that collective self-defence is only applicable to attacks on member state territories north of the Tropic of Cancer. On 30 May 1982, NATO gained a new member when the newly democratic Spain joined the alliance; Spain’s membership was confirmed by referendum in 1986. At the peak of the Cold War, 16 member nations maintained an approximate strength of 5,252,800 active military, including as many as 435,000 forward deployed US forces, under a command structure that reached a peak of 78 headquarters, organized into four echelons.[50]

The Revolutions of 1989 and the dissolution of the Warsaw Pact in 1991 removed the de facto main adversary of NATO and caused a strategic re-evaluation of NATO’s purpose, nature, tasks, and their focus on the continent of Europe. This shift started with the 1990 signing in Paris of the Treaty on Conventional Armed Forces in Europe between NATO and the Soviet Union, which mandated specific military reductions across the continent that continued after the dissolution of the Soviet Union in December 1991.[51] At that time, European countries accounted for 34 percent of NATO’s military spending; by 2012, this had fallen to 21 percent.[52] NATO also began a gradual expansion to include newly autonomous Central and Eastern European nations, and extended its activities into political and humanitarian situations that had not formerly been NATO concerns.

The first post-Cold War expansion of NATO came with German reunification on 3 October 1990, when the former East Germany became part of the Federal Republic of Germany and the alliance. This had been agreed in the Two Plus Four Treaty earlier in the year. To secure Soviet approval of a united Germany remaining in NATO, it was agreed that foreign troops and nuclear weapons would not be stationed in the east, and there are diverging views on whether negotiators gave commitments regarding further NATO expansion east.[53] Jack Matlock, American ambassador to the Soviet Union during its final years, said that the West gave a “clear commitment” not to expand, and declassified documents indicate that Soviet negotiators were given the impression that NATO membership was off the table for countries such as Czechoslovakia, Hungary, or Poland.[54] Hans-Dietrich Genscher, the West German foreign minister at that time, said in a conversation with Eduard Shevardnadze that “[f]or us, however, one thing is certain: NATO will not expand to the east.”[54] In 1996, Gorbachev wrote in his Memoirs, that “during the negotiations on the unification of Germany they gave assurances that NATO would not extend its zone of operation to the east,” and repeated this view in an interview in 2008.[56] According to Robert Zoellick, a State Department official involved in the Two Plus Four negotiating process, this appears to be a misperception, and no formal commitment regarding enlargement was made.[57]

As part of post-Cold War restructuring, NATO’s military structure was cut back and reorganized, with new forces such as the Headquarters Allied Command Europe Rapid Reaction Corps established. The changes brought about by the collapse of the Soviet Union on the military balance in Europe were recognized in the Adapted Conventional Armed Forces in Europe Treaty, which was signed in 1999. The policies of French President Nicolas Sarkozy resulted in a major reform of France’s military position, culminating with the return to full membership on 4 April 2009, which also included France rejoining the NATO Military Command Structure, while maintaining an independent nuclear deterrent.[43][58]

Between 1994 and 1997, wider forums for regional cooperation between NATO and its neighbors were set up, like the Partnership for Peace, the Mediterranean Dialogue initiative and the Euro-Atlantic Partnership Council. In 1998, the NATO–Russia Permanent Joint Council was established. On 8 July 1997, three former communist countries, Hungary, the Czech Republic, and Poland, were invited to join NATO, which each did in 1999. Membership went on expanding with the accession of seven more Central and Eastern European countries to NATO: Estonia, Latvia, Lithuania, Slovenia, Slovakia, Bulgaria, and Romania. They were first invited to start talks of membership during the 2002 Prague summit, and joined NATO on 29 March 2004, shortly before the 2004 Istanbul summit. At that time, the decision was criticised in the US by many military, political and academic leaders as “a policy error of historic proportions.”[59] According to George F. Kennan, an American diplomat and an advocate of the containment policy, this decision “may be expected to have an adverse effect on the development of Russian democracy; to restore the atmosphere of the cold war to East-West relations, to impel Russian foreign policy in directions decidedly not to our liking.”[60]

New NATO structures were also formed while old ones were abolished. In 1997, NATO reached agreement on a significant downsizing of its command structure from 65 headquarters to just 20.[61] The NATO Response Force (NRF) was launched at the 2002 Prague summit on 21 November, the first summit in a former Comecon country. On 19 June 2003, a further restructuring of the NATO military commands began as the Headquarters of the Supreme Allied Commander, Atlantic were abolished and a new command, Allied Command Transformation (ACT), was established in Norfolk, United States, and the Supreme Headquarters Allied Powers Europe (SHAPE) became the Headquarters of Allied Command Operations (ACO). ACT is responsible for driving transformation (future capabilities) in NATO, whilst ACO is responsible for current operations.[62] In March 2004, NATO’s Baltic Air Policing began, which supported the sovereignty of Latvia, Lithuania and Estonia by providing jet fighters to react to any unwanted aerial intrusions. Eight multinational jet fighters are based in Lithuania, the number of which was increased from four in 2014.[63] Also at the 2004 Istanbul summit, NATO launched the Istanbul Cooperation Initiative with four Persian Gulf nations.[64]

The 2006 Riga summit was held in Riga, Latvia, and highlighted the issue of energy security. It was the first NATO summit to be held in a country that had been part of the Soviet Union. At the April 2008 summit in Bucharest, Romania, NATO agreed to the accession of Croatia and Albania and both countries joined NATO in April 2009. Ukraine and Georgia were also told that they could eventually become members.[65] The issue of Georgian and Ukrainian membership in NATO prompted harsh criticism from Russia, as did NATO plans for a missile defence system. Studies for this system began in 2002, with negotiations centered on anti-ballistic missiles being stationed in Poland and the Czech Republic. Though NATO leaders gave assurances that the system was not targeting Russia, both presidents Vladimir Putin and Dmitry Medvedev criticized it as a threat.[66]

In 2009, US President Barack Obama proposed using the ship-based Aegis Combat System, though this plan still includes stations being built in Turkey, Spain, Portugal, Romania, and Poland.[67] NATO will also maintain the “status quo” in its nuclear deterrent in Europe by upgrading the targeting capabilities of the “tactical” B61 nuclear bombs stationed there and deploying them on the stealthier Lockheed Martin F-35 Lightning II.[68][69] Following the 2014 annexation of Crimea by Russia, NATO committed to forming a new “spearhead” force of 5,000 troops at bases in Estonia, Lithuania, Latvia, Poland, Romania, and Bulgaria.[70][71]

At the 2014 Wales summit, the leaders of NATO’s member states reaffirmed their pledge to spend the equivalent of at least 2% of their gross domestic products on defence by 2024.[72] In 2015, five of its 28 members met that goal.[73][74][75] On 15 June 2016, NATO officially recognized cyberwarfare as an operational domain of war, just like land, sea and aerial warfare. This means that any cyber attack on NATO members can trigger Article 5 of the North Atlantic Treaty.[76] Montenegro became the 29th and newest member of NATO on 5 June 2017, amid strong objections from Russia.[77][78]

No military operations were conducted by NATO during the Cold War. Following the end of the Cold War, the first operations, Anchor Guard in 1990 and Ace Guard in 1991, were prompted by the Iraqi invasion of Kuwait. Airborne early warning aircraft were sent to provide coverage of southeastern Turkey, and later a quick-reaction force was deployed to the area.[79]

The Bosnian War began in 1992, as a result of the breakup of Yugoslavia. The deteriorating situation led to United Nations Security Council Resolution 816 on 9 October 1992, ordering a no-fly zone over central Bosnia and Herzegovina, which NATO began enforcing on 12 April 1993 with Operation Deny Flight. From June 1993 until October 1996, Operation Sharp Guard added maritime enforcement of the arms embargo and economic sanctions against the Federal Republic of Yugoslavia. On 28 February 1994, NATO took its first wartime action by shooting down four Bosnian Serb aircraft violating the no-fly zone.

On 10 and 11 April 1994, during the Bosnian War, the United Nations Protection Force called in air strikes to protect the Goražde safe area, resulting in the bombing of a Bosnian Serb military command outpost near Goražde by two US F-16 jets acting under NATO direction. This resulted in the taking of 150 UN personnel hostage on 14 April.[82][83] On 16 April a British Sea Harrier was shot down over Goražde by Serb forces. A two-week NATO bombing campaign, Operation Deliberate Force, began in August 1995 against the Army of the Republika Srpska, after the Srebrenica massacre.[85]

NATO air strikes that year helped bring the Yugoslav wars to an end, resulting in the Dayton Agreement in November 1995.[85] As part of this agreement, NATO deployed a UN-mandated peacekeeping force, under Operation Joint Endeavor, named IFOR. Almost 60,000 NATO troops were joined by forces from non-NATO nations in this peacekeeping mission. This transitioned into the smaller SFOR, which started with 32,000 troops initially and ran from December 1996 until December 2004, when operations were then passed onto European Union Force Althea. Following the lead of its member nations, NATO began to award a service medal, the NATO Medal, for these operations.[87]

In an effort to stop Slobodan Milošević’s Serbian-led crackdown on KLA separatists and Albanian civilians in Kosovo, the United Nations Security Council passed Resolution 1199 on 23 September 1998 to demand a ceasefire. Negotiations under US Special Envoy Richard Holbrooke broke down on 23 March 1999, and he handed the matter to NATO,[88] which started a 78-day bombing campaign on 24 March 1999.[89] Operation Allied Force targeted the military capabilities of what was then the Federal Republic of Yugoslavia. During the crisis, NATO also deployed one of its international reaction forces, the ACE Mobile Force (Land), to Albania as the Albania Force (AFOR), to deliver humanitarian aid to refugees from Kosovo.[90]

Though the campaign was criticized for high civilian casualties, including bombing of the Chinese embassy in Belgrade, Milošević finally accepted the terms of an international peace plan on 3 June 1999, ending the Kosovo War. On 11 June, Milošević further accepted UN resolution 1244, under the mandate of which NATO then helped establish the KFOR peacekeeping force. Nearly one million refugees had fled Kosovo, and part of KFOR’s mandate was to protect the humanitarian missions, in addition to deterring violence.[90][91] In August–September 2001, the alliance also mounted Operation Essential Harvest, a mission disarming ethnic Albanian militias in the Republic of Macedonia.[92] As of 1 December 2013, 4,882 KFOR soldiers, representing 31 countries, continue to operate in the area.[93]

The US, the UK, and most other NATO countries opposed efforts to require the UN Security Council to approve NATO military strikes, such as the action against Serbia in 1999, while France and some others claimed that the alliance needed UN approval.[94] The US/UK side claimed that this would undermine the authority of the alliance, and they noted that Russia and China would have exercised their Security Council vetoes to block the strike on Yugoslavia, and could do the same in future conflicts where NATO intervention was required, thus nullifying the entire potency and purpose of the organization. Recognizing the post-Cold War military environment, NATO adopted the Alliance Strategic Concept during its Washington summit in April 1999 that emphasized conflict prevention and crisis management.[95]

The September 11 attacks in the United States caused NATO to invoke Article 5 of the North Atlantic Treaty for the first time in the organization’s history. The Article says that an attack on any member shall be considered to be an attack on all. The invocation was confirmed on 4 October 2001 when NATO determined that the attacks were indeed eligible under the terms of the North Atlantic Treaty.[96] The eight official actions taken by NATO in response to the attacks included Operation Eagle Assist and Operation Active Endeavour, a naval operation in the Mediterranean Sea that began on 4 October 2001 and was designed to prevent the movement of terrorists or weapons of mass destruction and to enhance the security of shipping in general.[97]

The alliance showed unity: On 16 April 2003, NATO agreed to take command of the International Security Assistance Force (ISAF), which includes troops from 42 countries. The decision came at the request of Germany and the Netherlands, the two nations leading ISAF at the time of the agreement, and all nineteen NATO ambassadors approved it unanimously. The handover of control to NATO took place on 11 August, and marked the first time in NATO’s history that it took charge of a mission outside the north Atlantic area.[98]

ISAF was initially charged with securing Kabul and surrounding areas from the Taliban, al Qaeda and factional warlords, so as to allow for the establishment of the Afghan Transitional Administration headed by Hamid Karzai. In October 2003, the UN Security Council authorized the expansion of the ISAF mission throughout Afghanistan,[99] and ISAF subsequently expanded the mission in four main stages over the whole of the country.[100]

On 31 July 2006, the ISAF additionally took over military operations in the south of Afghanistan from a US-led anti-terrorism coalition.[101] Due to the intensity of the fighting in the south, in 2011 France allowed a squadron of Mirage 2000 fighter/attack aircraft to be moved into the area, to Kandahar, in order to reinforce the alliance’s efforts.[102] During its 2012 Chicago Summit, NATO endorsed a plan to end the Afghanistan war and to remove the NATO-led ISAF Forces by the end of December 2014.[103] ISAF was disestablished in December 2014 and replaced by the follow-on training mission, the Resolute Support Mission.

In August 2004, during the Iraq War, NATO formed the NATO Training Mission – Iraq, a training mission to assist the Iraqi security forces in conjunction with the US-led MNF-I.[104] The NATO Training Mission-Iraq (NTM-I) was established at the request of the Iraqi Interim Government under the provisions of United Nations Security Council Resolution 1546. The aim of NTM-I was to assist in the development of Iraqi security forces training structures and institutions so that Iraq could build an effective and sustainable capability that addressed the needs of the nation. NTM-I was not a combat mission but a distinct mission, under the political control of NATO’s North Atlantic Council. Its operational emphasis was on training and mentoring. The activities of the mission were coordinated with Iraqi authorities and the US-led Deputy Commanding General Advising and Training, who was also dual-hatted as the Commander of NTM-I. The mission officially concluded on 17 December 2011.[105]

Beginning on 17 August 2009, NATO deployed warships in an operation to protect maritime traffic in the Gulf of Aden and the Indian Ocean from Somali pirates, and to help strengthen the navies and coast guards of regional states. The operation was approved by the North Atlantic Council and involves warships primarily from the United States, though vessels from many other nations are also included. Operation Ocean Shield focuses on protecting the ships of Operation Allied Provider, which are distributing aid as part of the World Food Programme mission in Somalia. Russia, China and South Korea have sent warships to participate in the activities as well.[106][107] The operation seeks to dissuade and interrupt pirate attacks, protect vessels, and help increase the general level of security in the region.[108]

During the Libyan Civil War, violence between protestors and the Libyan government under Colonel Muammar Gaddafi escalated, and on 17 March 2011 led to the passage of United Nations Security Council Resolution 1973, which called for a ceasefire, and authorized military action to protect civilians. A coalition that included several NATO members began enforcing a no-fly zone over Libya shortly afterwards. On 20 March 2011, NATO states agreed on enforcing an arms embargo against Libya with Operation Unified Protector using ships from NATO Standing Maritime Group 1 and Standing Mine Countermeasures Group 1,[109] and additional ships and submarines from NATO members.[110] They would “monitor, report and, if needed, interdict vessels suspected of carrying illegal arms or mercenaries”.[109]

On 24 March, NATO agreed to take control of the no-fly zone from the initial coalition, while command of targeting ground units remained with the coalition’s forces.[111][112] NATO began officially enforcing the UN resolution on 27 March 2011 with assistance from Qatar and the United Arab Emirates.[113] By June, reports of divisions within the alliance surfaced as only eight of the 28 member nations were participating in combat operations,[114] resulting in a confrontation in which US Defense Secretary Robert Gates urged countries such as Poland, Spain, the Netherlands, Turkey, and Germany to contribute more; the latter believed the organization had overstepped its mandate in the conflict.[115][116][117] In his final policy speech in Brussels on 10 June, Gates further criticized allied countries, suggesting their actions could cause the demise of NATO.[118] The German foreign ministry pointed to “a considerable [German] contribution to NATO and NATO-led operations” and to the fact that this engagement was highly valued by President Obama.[119]

While the mission was extended into September, Norway announced that day that it would begin scaling down its contribution and complete its withdrawal by 1 August.[120] Earlier that week it was reported that Danish fighter aircraft were running out of bombs.[121][122] The following week, the head of the Royal Navy said the country’s operations in the conflict were not sustainable.[123] By the end of the mission in October 2011, after the death of Colonel Gaddafi, NATO planes had flown about 9,500 strike sorties against pro-Gaddafi targets.[124][125] A report from the organization Human Rights Watch in May 2012 identified at least 72 civilians killed in the campaign.[126] Following a coup d’état attempt in October 2013, Libyan Prime Minister Ali Zeidan requested technical advice and trainers from NATO to assist with ongoing security issues.[127]

NATO has twenty-nine members, mainly in Europe and North America. Some of these countries also have territory on multiple continents, which can be covered only as far south as the Tropic of Cancer in the Atlantic Ocean, which defines NATO’s “area of responsibility” under Article 6 of the North Atlantic Treaty. During the original treaty negotiations, the United States insisted that colonies such as the Belgian Congo be excluded from the treaty.[129] French Algeria was, however, covered until its independence on 3 July 1962.[130] Twelve of these twenty-nine are original members who joined in 1949, while the other seventeen joined in one of seven enlargement rounds.

From the mid-1960s to the mid-1990s, France pursued a military strategy of independence from NATO under a policy dubbed “Gaullo-Mitterrandism”.[citation needed] Nicolas Sarkozy negotiated the return of France to the integrated military command and the Defence Planning Committee in 2009, the latter being disbanded the following year. France remains the only NATO member outside the Nuclear Planning Group and unlike the United States and the United Kingdom, will not commit its nuclear-armed submarines to the alliance.[43][58] Few members spend more than two percent of their gross domestic product on defence,[131] with the United States accounting for three quarters of NATO defense spending.[132]

New membership in the alliance has been largely from Central and Eastern Europe, including former members of the Warsaw Pact. Accession to the alliance is governed by individual Membership Action Plans, and requires approval by each current member. NATO currently has two candidate countries that are in the process of joining the alliance: Bosnia and Herzegovina and the Republic of Macedonia. In NATO official statements, the Republic of Macedonia is always referred to as the “former Yugoslav Republic of Macedonia”, with a footnote stating that “Turkey recognizes the Republic of Macedonia under its constitutional name”. Though Macedonia completed its requirements for membership at the same time as Croatia and Albania, who joined NATO in 2009, its accession was blocked by Greece pending a resolution of the Macedonia naming dispute.[133] In order to support each other in the process, new and potential members in the region formed the Adriatic Charter in 2003.[134] Georgia was also named as an aspiring member, and was promised “future membership” during the 2008 summit in Bucharest,[135] though in 2014, US President Barack Obama said the country was not “currently on a path” to membership.[136]

Russia continues to oppose further expansion, seeing it as inconsistent with understandings between Soviet leader Mikhail Gorbachev and European and American negotiators that allowed for a peaceful German reunification.[54] NATO’s expansion efforts are often seen by Moscow leaders as a continuation of a Cold War attempt to surround and isolate Russia,[137] though they have also been criticised in the West.[138] A June 2016 Levada poll found that 68% of Russians think that deploying NATO troops in the Baltic states and Poland (former Eastern bloc countries bordering Russia) is a threat to Russia.[139] Ukraine’s relationship with NATO and Europe has been politically divisive, and contributed to “Euromaidan” protests that saw the ousting of pro-Russian President Viktor Yanukovych in 2014. In March 2014, Prime Minister Arseniy Yatsenyuk reiterated the government’s stance that Ukraine is not seeking NATO membership.[140] Ukraine’s president subsequently signed a bill dropping his nation’s nonaligned status in order to pursue NATO membership, but signaled that it would hold a referendum before seeking to join.[141] Ukraine is one of eight countries in Eastern Europe with an Individual Partnership Action Plan. IPAPs began in 2002, and are open to countries that have the political will and ability to deepen their relationship with NATO.[142]

A 2006 study in the journal Security Studies argued that NATO enlargement contributed to democratic consolidation in Central and Eastern Europe.[143]

The Partnership for Peace (PfP) programme was established in 1994 and is based on individual bilateral relations between each partner country and NATO: each country may choose the extent of its participation.[145] Members include all current and former members of the Commonwealth of Independent States.[146] The Euro-Atlantic Partnership Council (EAPC) was first established on 29 May 1997, and is a forum for regular coordination, consultation and dialogue between all fifty participants.[147] The PfP programme is considered the operational wing of the Euro-Atlantic Partnership.[145] Other third countries also have been contacted for participation in some activities of the PfP framework such as Afghanistan.[148]

The European Union (EU) signed a comprehensive package of arrangements with NATO under the Berlin Plus agreement on 16 December 2002. With this agreement, the EU was given the possibility to use NATO assets in case it wanted to act independently in an international crisis, on the condition that NATO itself did not want to act (the so-called "right of first refusal").[149] For example, Article 42(7) of the 2007 Treaty of Lisbon specifies that "If a Member State is the victim of armed aggression on its territory, the other Member States shall have towards it an obligation of aid and assistance by all the means in their power". The treaty applies globally to specified territories, whereas NATO is restricted under its Article 6 to operations north of the Tropic of Cancer. It provides a "double framework" for the EU countries that are also linked with the PfP programme.

Additionally, NATO cooperates and discusses its activities with numerous other non-NATO members. The Mediterranean Dialogue was established in 1994 to coordinate in a similar way with Israel and countries in North Africa. The Istanbul Cooperation Initiative was announced in 2004 as a dialogue forum for the Middle East along the same lines as the Mediterranean Dialogue. The four participants are also linked through the Gulf Cooperation Council.[150]

Political dialogue with Japan began in 1990, and since then, the Alliance has gradually increased its contact with countries that do not form part of any of these cooperation initiatives.[151] In 1998, NATO established a set of general guidelines that do not allow for a formal institutionalisation of relations, but reflect the Allies' desire to increase cooperation. Following extensive debate, the term "Contact Countries" was agreed by the Allies in 2000. By 2012, the Alliance had broadened this group, which meets to discuss issues such as counter-piracy and technology exchange, under the names "partners across the globe" or "global partners".[152][153] Australia and New Zealand, both contact countries, are also members of the AUSCANNZUKUS strategic alliance, and similar regional or bilateral agreements between contact countries and NATO members also aid cooperation. Colombia is NATO's latest partner, with access to the full range of cooperative activities NATO offers to partners; it is the first and only Latin American country to cooperate with NATO.[154]

The main headquarters of NATO is located on Boulevard Léopold III/Leopold III-laan, B-1110 Brussels, which is in Haren, part of the City of Brussels municipality.[155] A new €750 million headquarters building began construction in 2010, was completed in summer 2016,[156] and was dedicated on 25 May 2017.[157] The 250,000 square metres (2,700,000 sq ft) complex was designed by Jo Palma and is home to a staff of 3,800.[158] Problems in the original building stemmed from its hurried construction in 1967, when NATO was forced to move its headquarters from Porte Dauphine in Paris, France, following the French withdrawal.[42]

The staff at the Headquarters is composed of national delegations of member countries and includes civilian and military liaison offices and officers or diplomatic missions and diplomats of partner countries, as well as the International Staff and International Military Staff filled from serving members of the armed forces of member states.[160] Non-governmental citizens’ groups have also grown up in support of NATO, broadly under the banner of the Atlantic Council/Atlantic Treaty Association movement.

The cost of the new headquarters building escalated to about €1.1 billion[161] (about $1.23 billion).[162]

Like any alliance, NATO is ultimately governed by its 29 member states. However, the North Atlantic Treaty and other agreements outline how decisions are to be made within NATO. Each of the 29 members sends a delegation or mission to NATO’s headquarters in Brussels, Belgium.[163] The senior permanent member of each delegation is known as the Permanent Representative and is generally a senior civil servant or an experienced ambassador (and holding that diplomatic rank). Several countries have diplomatic missions to NATO through embassies in Belgium.

Together, the Permanent Representatives form the North Atlantic Council (NAC), a body which meets at least once a week and has effective governance authority and powers of decision in NATO. From time to time the Council also meets at higher-level meetings involving foreign ministers, defence ministers or heads of state or government (HOSG), and it is at these meetings that major decisions regarding NATO's policies are generally taken. However, the Council has the same authority and powers of decision-making, and its decisions have the same status and validity, at whatever level it meets. France, Germany, Italy, the United Kingdom and the United States are together referred to as the Quint, an informal discussion group within NATO. NATO summits provide a further venue for decisions on complex issues, such as enlargement.[164]

The meetings of the North Atlantic Council are chaired by the Secretary General of NATO and, when decisions have to be made, action is agreed upon on the basis of unanimity and common accord. There is no voting or decision by majority. Each nation represented at the Council table or on any of its subordinate committees retains complete sovereignty and responsibility for its own decisions.

The body that sets broad strategic goals for NATO is the NATO Parliamentary Assembly (NATO-PA), which meets at the Annual Session and one other time during the year, and is the organ that directly interacts with the parliamentary structures of the national governments of the member states which appoint Permanent Representatives, or ambassadors, to NATO. The NATO Parliamentary Assembly is made up of legislators from the member countries of the North Atlantic Alliance as well as thirteen associate members. Karl A. Lamers, German Deputy Chairman of the Defence Committee of the Bundestag and a member of the Christian Democratic Union, became president of the assembly in 2010.[167] It is, however, officially a separate structure from NATO, and its aim is to bring together deputies of NATO countries to discuss security policies with the North Atlantic Council.

The Assembly is the political integration body of NATO that generates political policy agenda setting for the NATO Council via reports of its five committees.

These reports provide impetus and direction as agreed upon by the national governments of the member states through their own national political processes and influencers to the NATO administrative and executive organizational entities.

NATO’s military operations are directed by the Chairman of the NATO Military Committee with the Deputy Chairman, and split into two Strategic Commands commanded by a senior US officer and (currently) a senior French officer[168] assisted by a staff drawn from across NATO. The Strategic Commanders are responsible to the Military Committee for the overall direction and conduct of all Alliance military matters within their areas of command.[62]

Each country's delegation includes a Military Representative, a senior officer from each country's armed forces, supported by the International Military Staff. Together the Military Representatives form the Military Committee, a body responsible for recommending to NATO's political authorities those measures considered necessary for the common defence of the NATO area. Its principal role is to provide direction and advice on military policy and strategy. It provides guidance on military matters to the NATO Strategic Commanders, whose representatives attend its meetings, and is responsible for the overall conduct of the military affairs of the Alliance under the authority of the Council.[169] Since 2015 the Chairman of the NATO Military Committee has been Petr Pavel of the Czech Republic, and since 2016 the Deputy Chairman has been Steven Shepro of the United States.

Like the Council, from time to time the Military Committee also meets at a higher level, namely at the level of Chiefs of Defence, the most senior military officer in each nation’s armed forces. Until 2008 the Military Committee excluded France, due to that country’s 1966 decision to remove itself from the NATO Military Command Structure, which it rejoined in 1995. Until France rejoined NATO, it was not represented on the Defence Planning Committee, and this led to conflicts between it and NATO members.[170] Such was the case in the lead up to Operation Iraqi Freedom.[171] The operational work of the Committee is supported by the International Military Staff.

The structure of NATO evolved throughout the Cold War and its aftermath. An integrated military structure for NATO was first established in 1950 as it became clear that NATO would need to enhance its defences for the longer term against a potential Soviet attack. In April 1951, Allied Command Europe and its headquarters (SHAPE) were established; later, four subordinate headquarters were added in Northern and Central Europe, the Southern Region, and the Mediterranean.[172]

From the 1950s to 2003, the Strategic Commanders were the Supreme Allied Commander Europe (SACEUR) and the Supreme Allied Commander Atlantic (SACLANT). The current arrangement is to separate responsibility between Allied Command Transformation (ACT), responsible for transformation and training of NATO forces, and Allied Command Operations (ACO), responsible for NATO operations worldwide.[173] Starting in late 2003 NATO has restructured how it commands and deploys its troops by creating several NATO Rapid Deployable Corps, including Eurocorps, I. German/Dutch Corps, Multinational Corps Northeast, and NATO Rapid Deployable Italian Corps among others, as well as naval High Readiness Forces (HRFs), which all report to Allied Command Operations.[174]

In early 2015, in the wake of the War in Donbass, meetings of NATO ministers decided that Multinational Corps Northeast would be augmented so as to develop greater capabilities, to, if thought necessary, prepare to defend the Baltic States, and that a new Multinational Division Southeast would be established in Romania. Six NATO Force Integration Units would also be established to coordinate preparations for defence of new Eastern members of NATO.[175]

Multinational Division Southeast was activated on 1 December 2015.[176] Headquarters Multinational Division South East (HQ MND-SE) is a North Atlantic Council (NAC)-activated NATO military body under the operational command (OPCOM) of the Supreme Allied Commander Europe (SACEUR). It may be employed and deployed in peacetime, crisis and operations by NATO, on the authority of the appropriate NATO Military Authorities, by means of an exercise or operational tasking issued in accordance with the Command and Control Technical Arrangement (C2 TA) and standard NATO procedures.

During August 2016, it was announced that 650 soldiers of the British Army would be deployed on an enduring basis in Eastern Europe, mainly in Estonia, with some also being deployed to Poland. This British deployment forms part of a four-battle-group (four-battalion) deployment by various allies, NATO Enhanced Forward Presence, with one battle group in each of Poland, Lithuania, Latvia and Estonia; the Poland-based battle group is mostly led by the US.

NATO – Homepage

NATO constantly reviews and transforms its policies, capabilities and structures to ensure that it can continue to address current and future challenges to the freedom and security of its members. Presently, Allied forces are required to carry out a wide range of missions across several continents; the Alliance needs to ensure that its armed forces remain modern, deployable, and capable of sustained operations.

Peterson Institute for International Economics

Policy Brief

How to Solve the Greek Debt Problem

Jeromin Zettelmeyer (PIIE), Emilios Avgouleas (University of Edinburgh), Barry Eichengreen (University of California, Berkeley), Miguel Poiares Maduro (European University Institute, Florence), Ugo Panizza (Graduate Institute, Geneva), Richard Portes (London Business School), Beatrice Weder di Mauro (INSEAD, Singapore) and Charles Wyplosz (Graduate Institute, Geneva)

Liberal Democrats

Published and promoted by Nick Harvey on behalf of the Liberal Democrats, 8-10 Great George Street, London, SW1P 3AE. Hosted by NationBuilder.

The Liberal Democrats and their elected representatives may use the information you've given to contact you. By providing your data to us, you are consenting to us making contact with you in the future by mail, email, telephone, text, website and apps, even though you may be registered with the Telephone Preference Service. Your data may be stored or otherwise processed in the US, governed by European Commission model contract clauses. You can always opt out of communications at any time by contacting us.

What is supercomputer? – Definition from

A supercomputer is a computer that performs at or near the currently highest operational rate for computers. Traditionally, supercomputers have been used for scientific and engineering applications that must handle very large databases or do a great amount of computation (or both). Although advances like multi-core processors and GPGPUs (general-purpose graphics processing units) have enabled powerful machines for personal use (see: desktop supercomputer, GPU supercomputer), by definition, a supercomputer is exceptional in terms of performance.

At any given time, there are a few well-publicized supercomputers that operate at extremely high speeds relative to all other computers. The term is also sometimes applied to far slower (but still impressively fast) computers. The largest, most powerful supercomputers are really multiple computers that perform parallel processing. In general, there are two parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP).
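In miniature, the two approaches differ in how workers coordinate. The sketch below is an illustrative Python toy, not from the source (`smp_sum`, `mpp_sum` and `sum_squares` are invented names): it contrasts shared-memory workers (SMP-style, threads updating one total under a lock) with independent workers that exchange only messages (MPP-style, simulated here with a queue).

```python
import threading
import queue

def sum_squares(lo, hi):
    """Stand-in numeric kernel: sum of i*i for lo <= i < hi."""
    return sum(i * i for i in range(lo, hi))

def smp_sum(n, workers=4):
    """SMP-style: threads share one address space and update a common
    total, so access must be coordinated with a lock."""
    total = 0
    lock = threading.Lock()

    def worker(lo, hi):
        nonlocal total
        part = sum_squares(lo, hi)
        with lock:                      # shared memory needs coordination
            total += part

    step = n // workers
    threads = []
    for k in range(workers):
        lo = k * step
        hi = n if k == workers - 1 else (k + 1) * step
        threads.append(threading.Thread(target=worker, args=(lo, hi)))
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total

def mpp_sum(n, nodes=4):
    """MPP-style (simulated): each 'node' owns its slice of the data
    and sends back only a message carrying its partial result."""
    inbox = queue.Queue()
    step = n // nodes
    for k in range(nodes):
        lo = k * step
        hi = n if k == nodes - 1 else (k + 1) * step
        inbox.put(sum_squares(lo, hi))  # the "message" from node k
    return sum(inbox.get() for _ in range(nodes))

assert smp_sum(1_000) == mpp_sum(1_000) == 332_833_500
```

Both styles compute the same answer; the difference, as in real SMP versus MPP machines, is whether workers synchronise through shared state or through explicit messages.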

As of June 2016, the fastest supercomputer in the world was the Sunway TaihuLight, in the city of Wuxi in China.

The first commercially successful supercomputer, the CDC (Control Data Corporation) 6600, was designed by Seymour Cray. Released in 1964, the CDC 6600 had a single CPU and cost $8 million, the equivalent of about $60 million today. It could handle three million floating point operations per second (flops).

Cray went on to found a supercomputer company under his name in 1972. Although the company has changed hands a number of times, it is still in operation. In September 2008, Cray and Microsoft launched CX1, a $25,000 personal supercomputer aimed at markets such as aerospace, automotive, academic, financial services and life sciences.

IBM has been a keen competitor. The company's Roadrunner, once the top-ranked supercomputer, was twice as fast as IBM's Blue Gene and six times as fast as any other supercomputer at that time. IBM's Watson is famous for having adopted cognitive computing to beat champion Ken Jennings on Jeopardy!, a popular quiz show.



[Table: top-ranked supercomputers by peak speed (Rmax); the system names and speed figures for most rows were lost in this copy. Surviving entries: Sunway TaihuLight (Wuxi, China); a system in Guangzhou, China; systems in Oak Ridge and Livermore, U.S.; the Fujitsu K computer (Kobe, Japan); a system in Tianjin, China; and systems in Oak Ridge and Los Alamos, U.S.]

In the United States, some supercomputer centers are interconnected on an Internet backbone known as vBNS or NSFNet. This network is the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2 is a university-led project that is part of this initiative.

At the lower end of supercomputing, clustering takes more of a build-it-yourself approach to supercomputing. The Beowulf Project offers guidance on how to put together a number of off-the-shelf personal computer processors, using Linux operating systems, and interconnecting the processors with Fast Ethernet. Applications must be written to manage the parallel processing.

Common Sense Atheism: Atheism is just the beginning. Now …

by Luke Muehlhauser on June 2, 2014 in News

I'm blogging again.

Its RSS feed lives here.

The most substantial post there so far is The Riddle of Being or Nothingness.

by Luke Muehlhauser on July 12, 2012 in News

This is probably the last post on this site, which is now merely an archive of posts.

In July 2012 I launched a new site. It's a small, simple site and an ideal place to send your friends when you want to introduce them to naturalism.

You can track my writings around the web via my personal website's news page (RSS). I mostly write on Less Wrong and the MIRI blog.

Common Sense Atheism has closed its doors. Comments are turned off and there will be no new posts.

I will keep the debates page updated, so feel free to notify me of new debates.

The site will remain online as an archive. See the Contents page for a quick view of the site's main attractions.

You can keep up with my work on a variety of websites via my personal site, which has an RSS feed that will alert you to my new works when they are published. If nothing else, you'll want to subscribe to that feed so you are notified when the new site goes live. You can also follow my Twitter page.

by Luke Muehlhauser on January 28, 2012 in News

Common Sense Atheism is closing its doors.

It's been a great ride, and my interests have now turned elsewhere.

I'll keep comments open for about a week, and then comments on the site will be closed, but this site will remain online as an archive. I also plan to keep the debates page updated.

by Luke Muehlhauser on January 23, 2012 in News

Allow me to indulge in some anticipation

What are you most eagerly anticipating?

What are quantum computers and how do they work? WIRED …

Google, IBM and a handful of startups are racing to create the next generation of supercomputers. Quantum computers, if they ever get started, will help us solve problems, like modelling complex chemical processes, that our existing computers can’t even scratch the surface of.

But the quantum future isn’t going to come easily, and there’s no knowing what it’ll look like when it does arrive. At the moment, companies and researchers are using a handful of different approaches to try and build the most powerful computers the world has ever seen. Here’s everything you need to know about the coming quantum revolution.

Quantum computing takes advantage of the strange ability of subatomic particles to exist in more than one state at any time. Due to the way the tiniest of particles behave, operations can be done much more quickly and use less energy than classical computers.

In classical computing, a bit is a single piece of information that can exist in two states: 1 or 0. Quantum computing uses quantum bits, or 'qubits', instead. These are quantum systems with two states. However, unlike a usual bit, they can store much more information than just 1 or 0, because they can exist in any superposition of these values.


“The difference between classical bits and qubits is that we can also prepare qubits in a quantum superposition of 0 and 1 and create nontrivial correlated states of a number of qubits, so-called ‘entangled states’,” says Alexey Fedorov, a physicist at the Moscow Institute of Physics and Technology.

A qubit can be thought of like an imaginary sphere. Whereas a classical bit can be in only two states, at either of the two poles of the sphere, a qubit can be any point on the sphere. This means a computer using these bits can store a huge amount more information using less energy than a classical computer.
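As a rough sketch of this picture, a single qubit can be simulated classically as a pair of complex amplitudes. The toy Python below is not from the article (the `probabilities` and `hadamard` helpers are invented names); it prepares an equal superposition and recovers the measurement probabilities via the Born rule.

```python
import math

def probabilities(a, b):
    """Born rule: probabilities of reading 0 or 1 from amplitudes (a, b)."""
    return abs(a) ** 2, abs(b) ** 2

def hadamard(a, b):
    """The Hadamard gate sends |0> to an equal superposition of |0> and |1>."""
    s = 1 / math.sqrt(2)
    return s * (a + b), s * (a - b)

# Start in the classical state |0>: amplitude 1 on |0>, 0 on |1> ...
a, b = 1 + 0j, 0 + 0j
# ... then rotate into superposition: no longer a "pole", but a point
# partway around the sphere described above.
a, b = hadamard(a, b)
p0, p1 = probabilities(a, b)   # roughly 0.5 each: equally likely 0 or 1
```

A measurement would collapse the state to 0 or 1 with these probabilities; the extra information lives in the amplitudes, not in any single readout.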

Until recently, it seemed like Google was leading the pack when it came to creating a quantum computer that could surpass the abilities of conventional computers. In a Nature article published in March 2017, the search giant set out ambitious plans to commercialise quantum technology in the next five years. Shortly after that, Google said it intended to achieve something it's calling "quantum supremacy" with a 49-qubit computer by the end of 2017.

Now, quantum supremacy, which roughly refers to the point where a quantum computer can crunch sums that a conventional computer couldn't hope to simulate, isn't exactly a widely accepted term within the quantum community. Those sceptical of Google's quantum project (or at least the way it talks about quantum computing) argue that supremacy is essentially an arbitrary goal set by Google to make it look like it's making strides in quantum when really it's just meeting self-imposed targets.

Whether it's an arbitrary goal or not, Google was pipped to the supremacy post by IBM in November 2017, when the company announced it had built a 50-qubit quantum computer. Even that, however, was far from stable, as the system could only hold its quantum microstate for 90 microseconds, a record, but far from the times needed to make quantum computing practically viable. Just because IBM has built a 50-qubit system, however, doesn't necessarily mean they've cracked supremacy, and it definitely doesn't mean that they've created a quantum computer that is anywhere near ready for practical use.

Where IBM has gone further than Google, however, is in making quantum computers commercially available. Since 2016, it has offered researchers the chance to run experiments on a five-qubit quantum computer via the cloud, and at the end of 2017 it started making its 20-qubit system available online too.

But quantum computing is by no means a two-horse race. Californian startup Rigetti is focusing on the stability of its own systems rather than just the number of qubits, and it could be the first to build a quantum computer that people can actually use. D-Wave, a company based in Vancouver, Canada, has already created what it is calling a 2,000-qubit system, although many researchers don't consider the D-Wave systems to be true quantum computers. Intel, too, has skin in the game. In February 2018 the company announced that it had found a way of fabricating quantum chips from silicon, which would make it much easier to produce chips using existing manufacturing methods.

Quantum computers operate on completely different principles to existing computers, which makes them really well suited to solving particular mathematical problems, like finding very large prime numbers. Since prime numbers are so important in cryptography, it's likely that quantum computers would quickly be able to crack many of the systems that keep our online information secure. Because of these risks, researchers are already trying to develop technology that is resistant to quantum hacking, and on the flipside of that, it's possible that quantum-based cryptographic systems would be much more secure than their conventional analogues.
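The link between factoring and online security can be seen in a toy example (illustrative only, and not from the article; real RSA uses primes hundreds of digits long, and the `trial_factor` helper is invented). Once the public modulus is factored, the private key falls out, and factoring is exactly the step that a large quantum computer running Shor's algorithm could accelerate:

```python
# Toy RSA: security rests on the difficulty of factoring n = p * q.
# With tiny numbers, trial division breaks it instantly; with
# 2048-bit moduli, no known classical method finishes in practice.

def trial_factor(n):
    """Classical attack: cost grows roughly exponentially in the
    number of digits of n."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

# Toy key: n and e are published, p, q and d are secret.
p, q = 61, 53
n = p * q                       # 3233, public modulus
e = 17                          # public exponent
phi = (p - 1) * (q - 1)         # requires knowing the factors
d = pow(e, -1, phi)             # private exponent (Python 3.8+)

msg = 65
cipher = pow(msg, e, n)         # anyone can encrypt
plain = pow(cipher, d, n)       # only the key holder can decrypt

# An eavesdropper who can factor n recovers the private key:
fp, fq = trial_factor(n)
d_cracked = pow(e, -1, (fp - 1) * (fq - 1))
assert plain == msg and d_cracked == d
```

Everything here except the factoring step is cheap for an attacker, which is why a machine that factors large numbers quickly would undermine RSA-style cryptography.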

Researchers are also excited about the prospect of using quantum computers to model complicated chemical reactions, a task that conventional supercomputers are not well suited to at all. In July 2016, Google engineers used a quantum device to simulate a hydrogen molecule for the first time, and since then IBM has managed to model the behaviour of even more complex molecules. Eventually, researchers hope they'll be able to use quantum simulations to design entirely new molecules for use in medicine. But the holy grail for quantum chemists is to be able to model the Haber-Bosch process, a way of artificially producing ammonia that is still relatively inefficient. Researchers are hoping that if they can use quantum mechanics to work out what's going on inside that reaction, they could discover new ways to make the process much more efficient.

Molecular nanotechnology – Wikipedia

Molecular nanotechnology (MNT) is a technology based on the ability to build structures to complex, atomic specifications by means of mechanosynthesis.[1] This is distinct from nanoscale materials. Based on Richard Feynman’s vision of miniature factories using nanomachines to build complex products (including additional nanomachines), this advanced form of nanotechnology (or molecular manufacturing[2]) would make use of positionally-controlled mechanosynthesis guided by molecular machine systems. MNT would involve combining physical principles demonstrated by biophysics, chemistry, other nanotechnologies, and the molecular machinery of life with the systems engineering principles found in modern macroscale factories.

While conventional chemistry uses inexact processes obtaining inexact results, and biology exploits inexact processes to obtain definitive results, molecular nanotechnology would employ original definitive processes to obtain definitive results. The desire in molecular nanotechnology would be to balance molecular reactions in positionally-controlled locations and orientations to obtain desired chemical reactions, and then to build systems by further assembling the products of these reactions.

A roadmap for the development of MNT is an objective of a broadly based technology project led by Battelle (the manager of several U.S. National Laboratories) and the Foresight Institute.[3] The roadmap was originally scheduled for completion by late 2006, but was released in January 2008.[4] The Nanofactory Collaboration[5] is a more focused ongoing effort involving 23 researchers from 10 organizations and 4 countries that is developing a practical research agenda[6] specifically aimed at positionally-controlled diamond mechanosynthesis and diamondoid nanofactory development. In August 2005, a task force consisting of 50+ international experts from various fields was organized by the Center for Responsible Nanotechnology to study the societal implications of molecular nanotechnology.[7]

One proposed application of MNT is so-called smart materials. This term refers to any sort of material designed and engineered at the nanometer scale for a specific task. It encompasses a wide variety of possible commercial applications. One example would be materials designed to respond differently to various molecules; such a capability could lead, for example, to artificial drugs which would recognize and render inert specific viruses. Another is the idea of self-healing structures, which would repair small tears in a surface naturally in the same way as self-sealing tires or human skin.

A MNT nanosensor would resemble a smart material, involving a small component within a larger machine that would react to its environment and change in some fundamental, intentional way. A very simple example: a photosensor might passively measure the incident light and discharge its absorbed energy as electricity when the light passes above or below a specified threshold, sending a signal to a larger machine. Such a sensor would supposedly cost less and use less power than a conventional sensor, and yet function usefully in all the same applications for example, turning on parking lot lights when it gets dark.
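The thresholding behaviour in the parking-lot example is simple to model. The following is a hypothetical Python sketch, not from the source (the `crossings` helper and the readings are invented for illustration): the sensor stays quiet while readings sit on one side of the threshold and signals only on a crossing.

```python
def crossings(readings, threshold):
    """Emit an (index, 'dark'/'light') event each time the light level
    crosses the threshold; stay silent otherwise."""
    events = []
    prev_dark = None
    for i, level in enumerate(readings):
        dark = level < threshold
        if prev_dark is not None and dark != prev_dark:
            events.append((i, "dark" if dark else "light"))
        prev_dark = dark
    return events

# Simulated light levels through an evening and the following dawn.
light_levels = [90, 80, 60, 40, 20, 15, 30, 70, 95]
events = crossings(light_levels, threshold=50)
# One "dark" event as evening falls, one "light" event at dawn,
# and no traffic at all in between: [(3, 'dark'), (7, 'light')]
```

Reporting only on crossings, rather than streaming every reading, is what would let such a sensor cost less and use less power than a conventional one.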

While smart materials and nanosensors both exemplify useful applications of MNT, they pale in comparison with the complexity of the technology most popularly associated with the term: the replicating nanorobot.

MNT nanomanufacturing is popularly linked with the idea of swarms of coordinated nanoscale robots working together, a popularization of an early proposal by K. Eric Drexler in his 1986 discussions of MNT, but superseded in 1992. In this early proposal, sufficiently capable nanorobots would construct more nanorobots in an artificial environment containing special molecular building blocks.

Critics have doubted both the feasibility of self-replicating nanorobots and the feasibility of control if self-replicating nanorobots could be achieved: they cite the possibility of mutations removing any control and favoring reproduction of mutant pathogenic variations. Advocates address the first doubt by pointing out that the first macroscale autonomous machine replicator, made of Lego blocks, was built and operated experimentally in 2002.[8] While there are sensory advantages present at the macroscale compared to the limited sensorium available at the nanoscale, proposals for positionally controlled nanoscale mechanosynthetic fabrication systems employ dead reckoning of tooltips combined with reliable reaction sequence design to ensure reliable results, hence a limited sensorium is no handicap; similar considerations apply to the positional assembly of small nanoparts. Advocates address the second doubt by arguing that bacteria are (of necessity) evolved to evolve, while nanorobot mutation could be actively prevented by common error-correcting techniques. Similar ideas are advocated in the Foresight Guidelines on Molecular Nanotechnology,[9] and a map of the 137-dimensional replicator design space[10] recently published by Freitas and Merkle provides numerous proposed methods by which replicators could, in principle, be safely controlled by good design.

However, the concept of suppressing mutation raises the question: How can design evolution occur at the nanoscale without a process of random mutation and deterministic selection? Critics argue that MNT advocates have not provided a substitute for such a process of evolution in this nanoscale arena where conventional sensory-based selection processes are lacking. The limits of the sensorium available at the nanoscale could make it difficult or impossible to winnow successes from failures. Advocates argue that design evolution should occur deterministically and strictly under human control, using the conventional engineering paradigm of modeling, design, prototyping, testing, analysis, and redesign.

In any event, since 1992 technical proposals for MNT do not include self-replicating nanorobots, and recent ethical guidelines put forth by MNT advocates prohibit unconstrained self-replication.[9][11]

One of the most important applications of MNT would be medical nanorobotics or nanomedicine, an area pioneered by Robert Freitas in numerous books[12] and papers.[13] The ability to design, build, and deploy large numbers of medical nanorobots would, at a minimum, make possible the rapid elimination of disease and the reliable and relatively painless recovery from physical trauma. Medical nanorobots might also make possible the convenient correction of genetic defects, and help to ensure a greatly expanded lifespan. More controversially, medical nanorobots might be used to augment natural human capabilities. One study has reported that conditions like tumors, arteriosclerosis, blood clots leading to stroke, accumulation of scar tissue, and localized pockets of infection could possibly be addressed by employing medical nanorobots.[14][15]

Another proposed application of molecular nanotechnology is “utility fog”[16] in which a cloud of networked microscopic robots (simpler than assemblers) would change its shape and properties to form macroscopic objects and tools in accordance with software commands. Rather than modify the current practices of consuming material goods in different forms, utility fog would simply replace many physical objects.

Yet another proposed application of MNT would be phased-array optics (PAO).[17] However, this appears to be a problem addressable by ordinary nanoscale technology. PAO would use the principle of phased-array millimeter technology but at optical wavelengths. This would permit the duplication of any sort of optical effect but virtually. Users could request holograms, sunrises and sunsets, or floating lasers as the mood strikes. PAO systems were described in BC Crandall’s Nanotechnology: Molecular Speculations on Global Abundance in the Brian Wowk article “Phased-Array Optics.”[18]

Molecular manufacturing is a potential future subfield of nanotechnology that would make it possible to build complex structures at atomic precision.[19] Molecular manufacturing requires significant advances in nanotechnology, but once achieved it could produce highly advanced products at low cost and in large quantities in nanofactories weighing a kilogram or more.[19][20] When nanofactories gain the ability to produce other nanofactories, production may be limited only by relatively abundant factors such as input materials, energy and software.[20]
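The replication dynamic described here, nanofactories building further nanofactories until inputs bind, can be sketched as a toy doubling model. The cycle counts and feedstock quantities are purely illustrative assumptions, not figures from the sources cited.

```python
def nanofactory_count(cycles, feedstock_units, cost_per_factory=1):
    """Toy model: each cycle, every nanofactory builds one copy of
    itself, so the population doubles until the feedstock (the
    'relatively abundant' but finite input) runs out."""
    factories, used = 1, 0
    for _ in range(cycles):
        buildable = min(factories,
                        (feedstock_units - used) // cost_per_factory)
        factories += buildable
        used += buildable * cost_per_factory
        if buildable == 0:
            break  # feedstock exhausted
    return factories

# Ten cycles with ample feedstock: pure doubling, 2**10 factories.
unconstrained = nanofactory_count(10, 10_000)
# The same ten cycles with only 7 units of feedstock stall early.
constrained = nanofactory_count(10, 7)
```

The contrast between the two runs is the point of the model: growth is exponential until the first input constraint binds, after which output is fixed by that input rather than by the number of factories.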

The products of molecular manufacturing could range from cheaper, mass-produced versions of known high-tech products to novel products with added capabilities in many areas of application. Some applications that have been suggested are advanced smart materials, nanosensors, medical nanorobots and space travel.[19] Additionally, molecular manufacturing could be used to cheaply produce highly advanced, durable weapons, which is an area of special concern regarding the impact of nanotechnology.[20] Equipped with compact computers and motors, such weapons could be increasingly autonomous and have a wide range of capabilities.[20]

According to Chris Phoenix and Mike Treder of the Center for Responsible Nanotechnology, as well as Anders Sandberg of the Future of Humanity Institute, molecular manufacturing is the application of nanotechnology that poses the most significant global catastrophic risk.[20][21] Several nanotechnology researchers state that the bulk of the risk from nanotechnology comes from its potential to lead to war, arms races and destructive global government.[20][21][22] Several reasons have been suggested why the availability of nanotech weaponry may well lead to unstable arms races (compared to, e.g., nuclear arms races): (1) a large number of players may be tempted to enter the race, since the threshold for doing so is low;[20] (2) the ability to make weapons with molecular manufacturing will be cheap and easy to hide;[20] (3) the resulting lack of insight into other parties’ capabilities can tempt players to arm out of caution or to launch preemptive strikes;[20][23] (4) molecular manufacturing may reduce dependency on international trade,[20] a potential peace-promoting factor;[24] and (5) wars of aggression may pose a smaller economic threat to the aggressor, since manufacturing is cheap and humans may not be needed on the battlefield.[20]

Since self-regulation by all state and non-state actors seems hard to achieve,[25] measures to mitigate war-related risks have mainly been proposed in the area of international cooperation.[20][26] International infrastructure may be expanded, giving more sovereignty to the international level. This could help coordinate efforts for arms control.[27] International institutions dedicated specifically to nanotechnology (perhaps analogous to the International Atomic Energy Agency, IAEA) or to general arms control may also be designed.[26] Parties may also jointly pursue differential technological progress on defensive technologies, a policy that players should usually favour.[20] The Center for Responsible Nanotechnology also suggests some technical restrictions.[28] Improved transparency regarding technological capabilities may be another important facilitator for arms control.[29]

Grey goo is another catastrophic scenario. It was proposed by Eric Drexler in his 1986 book Engines of Creation,[30] has been analyzed by Freitas in “Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations”,[31] and has been a theme in mainstream media and fiction.[32][33] The scenario involves tiny self-replicating robots that consume the entire biosphere, using it as a source of energy and building blocks. Nanotech experts, including Drexler, now discredit the scenario. According to Chris Phoenix, “So-called grey goo could only be the product of a deliberate and difficult engineering process, not an accident”.[34] With the advent of nano-biotech, a different scenario called green goo has been put forward. Here, the malignant substance is not nanobots but rather self-replicating biological organisms engineered through nanotechnology.

Nanotechnology (or molecular nanotechnology, to refer more specifically to the goals discussed here) will let us continue the historical trends in manufacturing right up to the fundamental limits imposed by physical law. It will let us make remarkably powerful molecular computers. It will let us make materials over fifty times lighter than steel or aluminium alloy but with the same strength. We’ll be able to make jets, rockets, cars or even chairs that, by today’s standards, would be remarkably light, strong, and inexpensive. Molecular surgical tools, guided by molecular computers and injected into the bloodstream, could find and destroy cancer cells or invading bacteria, unclog arteries, or provide oxygen when the circulation is impaired.

Nanotechnology will replace our entire manufacturing base with a new, radically more precise, radically less expensive, and radically more flexible way of making products. The aim is not simply to replace today’s computer chip making plants, but also to replace the assembly lines for cars, televisions, telephones, books, surgical tools, missiles, bookcases, airplanes, tractors, and all the rest. The objective is a pervasive change in manufacturing, a change that will leave virtually no product untouched. Economic progress and military readiness in the 21st Century will depend fundamentally on maintaining a competitive position in nanotechnology.


Despite the current early developmental status of nanotechnology and molecular nanotechnology, much concern surrounds MNT’s anticipated impact on economics[36][37] and on law. Whatever the exact effects, MNT, if achieved, would tend to reduce the scarcity of manufactured goods and make many more goods (such as food and health aids) manufacturable.

MNT should make possible nanomedical capabilities able to cure any medical condition not already cured by advances in other areas. Good health would be common, and poor health of any form would be as rare as smallpox and scurvy are today. Even cryonics would be feasible, as cryopreserved tissue could be fully repaired.

Molecular nanotechnology is one of the technologies that some analysts believe could lead to a technological singularity. Some feel that molecular nanotechnology would have daunting risks.[38] It conceivably could enable cheaper and more destructive conventional weapons. Also, molecular nanotechnology might permit weapons of mass destruction that could self-replicate, as viruses and cancer cells do when attacking the human body. Commentators generally agree that, in the event molecular nanotechnology were developed, its self-replication should be permitted only under very controlled or “inherently safe” conditions.

A fear exists that nanomechanical robots, if achieved, and if designed to self-replicate using naturally occurring materials (a difficult task), could consume the entire planet in their hunger for raw materials,[39] or simply crowd out natural life, out-competing it for energy (as happened historically when blue-green algae appeared and outcompeted earlier life forms). Some commentators have referred to this situation as the “grey goo” or “ecophagy” scenario. K. Eric Drexler considers an accidental “grey goo” scenario extremely unlikely and says so in later editions of Engines of Creation.

In light of this perception of potential danger, the Foresight Institute, founded by Drexler, has prepared a set of guidelines[40] for the ethical development of nanotechnology. These include the banning of free-foraging self-replicating pseudo-organisms on the Earth’s surface, at least, and possibly in other places.

The feasibility of the basic technologies analyzed in Nanosystems has been the subject of a formal scientific review by the U.S. National Academy of Sciences, and has also been the focus of extensive debate on the internet and in the popular press.

In 2006, the U.S. National Academy of Sciences released the report of a study of molecular manufacturing as part of a longer report, A Matter of Size: Triennial Review of the National Nanotechnology Initiative.[41] The study committee reviewed the technical content of Nanosystems, and in its conclusion states that no current theoretical analysis can be considered definitive regarding several questions of potential system performance, and that optimal paths for implementing high-performance systems cannot be predicted with confidence. It recommends experimental research to advance knowledge in this area.

A section heading in Drexler’s Engines of Creation reads[42] “Universal Assemblers”, and the following text speaks of multiple types of assemblers which, collectively, could hypothetically “build almost anything that the laws of nature allow to exist.” Drexler’s colleague Ralph Merkle has noted that, contrary to widespread legend,[43] Drexler never claimed that assembler systems could build absolutely any molecular structure. The endnotes in Drexler’s book explain the qualification “almost”: “For example, a delicate structure might be designed that, like a stone arch, would self-destruct unless all its pieces were already in place. If there were no room in the design for the placement and removal of a scaffolding, then the structure might be impossible to build. Few structures of practical interest seem likely to exhibit such a problem, however.”

In 1992, Drexler published Nanosystems: Molecular Machinery, Manufacturing, and Computation,[44] a detailed proposal for synthesizing stiff covalent structures using a table-top factory. Diamondoid structures and other stiff covalent structures, if achieved, would have a wide range of possible applications, going far beyond current MEMS technology. An outline of a path for building a table-top factory in the absence of an assembler was put forward in 1992. In the years since Nanosystems was published, other researchers have begun advancing tentative alternative paths[5] toward this goal.

In 2004 Richard Jones wrote Soft Machines (nanotechnology and life), a book for lay audiences published by Oxford University Press. In this book he describes radical nanotechnology (as advocated by Drexler) as a deterministic/mechanistic idea of nano-engineered machines that does not take into account nanoscale challenges such as wetness, stickiness, Brownian motion, and high viscosity. He also explains soft nanotechnology, or more appropriately biomimetic nanotechnology, which he argues is a promising way forward, if not the best way, to design functional nanodevices that can cope with all the problems at the nanoscale. One can think of soft nanotechnology as the development of nanomachines that use the lessons learned from biology on how things work, chemistry to precisely engineer such devices, and stochastic physics to model the system and its natural processes in detail.

Several researchers, including Nobel Prize winner Dr. Richard Smalley (1943–2005),[45] attacked the notion of universal assemblers, leading to a rebuttal from Drexler and colleagues,[46] and eventually to an exchange of letters.[47] Smalley argued that chemistry is extremely complicated, reactions are hard to control, and that a universal assembler is science fiction. Drexler and colleagues, however, noted that Drexler never proposed universal assemblers able to make absolutely anything, but instead proposed more limited assemblers able to make a very wide variety of things. They challenged the relevance of Smalley’s arguments to the more specific proposals advanced in Nanosystems. Also, Smalley argued that nearly all of modern chemistry involves reactions that take place in a solvent (usually water), because the small molecules of a solvent contribute many things, such as lowering binding energies for transition states. Since nearly all known chemistry requires a solvent, Smalley felt that Drexler’s proposal to use a high-vacuum environment was not feasible. However, Drexler addresses this in Nanosystems by showing mathematically that well-designed catalysts can provide the effects of a solvent and can fundamentally be made even more efficient than a solvent/enzyme reaction could ever be. It is noteworthy that, contrary to Smalley’s opinion that enzymes require water, “Not only do enzymes work vigorously in anhydrous organic media, but in this unnatural milieu they acquire remarkable properties such as greatly enhanced stability, radically altered substrate and enantiomeric specificities, molecular memory, and the ability to catalyse unusual reactions.”[48]

For the future, some means have to be found for MNT design evolution at the nanoscale which mimics the process of biological evolution at the molecular scale. Biological evolution proceeds by random variation in ensemble averages of organisms combined with culling of the less-successful variants and reproduction of the more-successful variants, and macroscale engineering design also proceeds by a process of design evolution from simplicity to complexity, as set forth somewhat satirically by John Gall: “A complex system that works is invariably found to have evolved from a simple system that worked… A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a system that works.”[49] A breakthrough in MNT is needed which proceeds from the simple atomic ensembles that can be built with, e.g., an STM to complex MNT systems via a process of design evolution. A handicap in this process is the difficulty of seeing and manipulating at the nanoscale compared to the macroscale, which makes deterministic selection of successful trials difficult; in contrast, biological evolution proceeds via the action of what Richard Dawkins has called the “blind watchmaker”,[50] comprising random molecular variation and deterministic reproduction/extinction.
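The “blind watchmaker” process described above, random variation plus deterministic culling and reproduction, can be sketched as a toy evolutionary loop. A bit-string stands in for a design, and the fitness function, population size, and mutation scheme are all illustrative assumptions; nothing here is specific to MNT.

```python
import random

def evolve(design_len=20, pop_size=30, generations=300, seed=1):
    """Toy design evolution: start from the simplest design (all zero
    bits) and grow complexity (count of one-bits) by random mutation
    plus deterministic selection of the more successful variants."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)  # number of 1-bits
    pop = [[0] * design_len for _ in range(pop_size)]  # simple systems that work
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # cull the less-successful variants
        children = []
        for parent in survivors:           # reproduce with random variation
            child = parent[:]
            child[rng.randrange(design_len)] ^= 1  # flip one random bit
            children.append(child)
        pop = survivors + children
        if fitness(pop[0]) == design_len:
            break
    return max(pop, key=fitness)

best = evolve()
```

Because the less-fit half is culled each generation while the best variant always survives, complexity ratchets upward from the simple starting design, which is exactly the Gall/Dawkins point the paragraph makes.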

At present (2007) the practice of nanotechnology embraces both stochastic approaches (in which, for example, supramolecular chemistry creates waterproof pants) and deterministic approaches, wherein single molecules (created by stochastic chemistry) are manipulated on substrate surfaces (created by stochastic deposition methods) by deterministic methods comprising nudging them with STM or AFM probes and causing simple binding or cleavage reactions to occur. The dream of a complex, deterministic molecular nanotechnology remains elusive. Since the mid-1990s, thousands of surface scientists and thin-film technocrats have latched onto the nanotechnology bandwagon and redefined their disciplines as nanotechnology. This has caused much confusion in the field and has spawned thousands of “nano”-papers in the peer-reviewed literature. Most of these reports are extensions of the more ordinary research done in the parent fields.

The feasibility of Drexler’s proposals largely depends, therefore, on whether designs like those in Nanosystems could be built in the absence of a universal assembler to build them and would work as described. Supporters of molecular nanotechnology frequently claim that no significant errors have been discovered in Nanosystems since 1992. Even some critics concede[51] that “Drexler has carefully considered a number of physical principles underlying the ‘high level’ aspects of the nanosystems he proposes and, indeed, has thought in some detail” about some issues.

Other critics claim, however, that Nanosystems omits important chemical details about the low-level ‘machine language’ of molecular nanotechnology.[52][53][54][55] They also claim that much of the other low-level chemistry in Nanosystems requires extensive further work, and that Drexler’s higher-level designs therefore rest on speculative foundations. Recent such further work by Freitas and Merkle [56] is aimed at strengthening these foundations by filling the existing gaps in the low-level chemistry.

Drexler argues that we may need to wait until our conventional nanotechnology improves before solving these issues: “Molecular manufacturing will result from a series of advances in molecular machine systems, much as the first Moon landing resulted from a series of advances in liquid-fuel rocket systems. We are now in a position like that of the British Interplanetary Society of the 1930s which described how multistage liquid-fueled rockets could reach the Moon and pointed to early rockets as illustrations of the basic principle.”[57] However, Freitas and Merkle argue [58] that a focused effort to achieve diamond mechanosynthesis (DMS) can begin now, using existing technology, and might achieve success in less than a decade if their “direct-to-DMS approach is pursued rather than a more circuitous development approach that seeks to implement less efficacious nondiamondoid molecular manufacturing technologies before progressing to diamondoid”.

To summarize the arguments against feasibility: First, critics argue that a primary barrier to achieving molecular nanotechnology is the lack of an efficient way to create machines on a molecular/atomic scale, especially in the absence of a well-defined path toward a self-replicating assembler or diamondoid nanofactory. Advocates respond that a preliminary research path leading to a diamondoid nanofactory is being developed.[6]

A second difficulty in reaching molecular nanotechnology is design. Hand design of a gear or bearing at the level of atoms might take a few to several weeks. While Drexler, Merkle and others have created designs of simple parts, no comprehensive design effort for anything approaching the complexity of a Model T Ford has been attempted. Advocates respond that it is difficult to undertake a comprehensive design effort in the absence of significant funding for such efforts, and that despite this handicap much useful design-ahead has nevertheless been accomplished with new software tools that have been developed, e.g., at Nanorex.[59]

In the latest report A Matter of Size: Triennial Review of the National Nanotechnology Initiative[41] put out by the National Academies Press in December 2006 (roughly twenty years after Engines of Creation was published), no clear way forward toward molecular nanotechnology could yet be seen, as per the conclusion on page 108 of that report: “Although theoretical calculations can be made today, the eventually attainable range of chemical reaction cycles, error rates, speed of operation, and thermodynamic efficiencies of such bottom-up manufacturing systems cannot be reliably predicted at this time. Thus, the eventually attainable perfection and complexity of manufactured products, while they can be calculated in theory, cannot be predicted with confidence. Finally, the optimum research paths that might lead to systems which greatly exceed the thermodynamic efficiencies and other capabilities of biological systems cannot be reliably predicted at this time. Research funding that is based on the ability of investigators to produce experimental demonstrations that link to abstract models and guide long-term vision is most appropriate to achieve this goal.” This call for research leading to demonstrations is welcomed by groups such as the Nanofactory Collaboration who are specifically seeking experimental successes in diamond mechanosynthesis.[60] The “Technology Roadmap for Productive Nanosystems”[61] aims to offer additional constructive insights.

It is perhaps interesting to ask whether or not most structures consistent with physical law can in fact be manufactured. Advocates assert that to achieve most of the vision of molecular manufacturing it is not necessary to be able to build “any structure that is compatible with natural law.” Rather, it is necessary to be able to build only a sufficient (possibly modest) subset of such structures, as is true, in fact, of any practical manufacturing process used in the world today, and is true even in biology. In any event, as Richard Feynman once said, “It is scientific only to say what’s more likely or less likely, and not to be proving all the time what’s possible or impossible.”[62]

There is a growing body of peer-reviewed theoretical work on synthesizing diamond by mechanically removing/adding hydrogen atoms[63] and depositing carbon atoms[64][65][66][67][68][69] (a process known as mechanosynthesis). This work is slowly permeating the broader nanoscience community and is being critiqued. For instance, Peng et al. (2006)[70] (in the continuing research effort by Freitas, Merkle and their collaborators) reports that the most-studied mechanosynthesis tooltip motif (DCB6Ge) successfully places a C2 carbon dimer on a C(110) diamond surface at both 300 K (room temperature) and 80 K (liquid nitrogen temperature), and that the silicon variant (DCB6Si) also works at 80 K but not at 300 K. Over 100,000 CPU hours were invested in this latest study. The DCB6 tooltip motif, initially described by Merkle and Freitas at a Foresight Conference in 2002, was the first complete tooltip ever proposed for diamond mechanosynthesis and remains the only tooltip motif that has been successfully simulated for its intended function on a full 200-atom diamond surface.

The tooltips modeled in this work are intended to be used only in carefully controlled environments (e.g., vacuum). Maximum acceptable limits for tooltip translational and rotational misplacement errors are reported in Peng et al. (2006) — tooltips must be positioned with great accuracy to avoid bonding the dimer incorrectly. Peng et al. (2006) reports that increasing the handle thickness from 4 support planes of C atoms above the tooltip to 5 planes decreases the resonance frequency of the entire structure from 2.0 THz to 1.8 THz. More importantly, the vibrational footprints of a DCB6Ge tooltip mounted on a 384-atom handle and of the same tooltip mounted on a similarly constrained but much larger 636-atom “crossbar” handle are virtually identical in the non-crossbar directions. Additional computational studies modeling still bigger handle structures are welcome, but the ability to precisely position SPM tips to the requisite atomic accuracy has been repeatedly demonstrated experimentally at low temperature,[71][72] and even at room temperature,[73][74] constituting a basic existence proof for this capability.
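One way to see why the low-temperature results matter for positional accuracy is the equipartition estimate of thermal jitter: a harmonic mode of stiffness k at temperature T has x_rms = sqrt(kB·T/k). The 100 N/m stiffness used below is an assumed illustrative value, not a figure from Peng et al. (2006).

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def rms_thermal_displacement(stiffness_n_per_m, temp_k):
    """RMS displacement of a harmonic mode from equipartition:
    (1/2) k <x^2> = (1/2) kB T, hence x_rms = sqrt(kB T / k)."""
    return math.sqrt(K_B * temp_k / stiffness_n_per_m)

# Assumed handle stiffness of 100 N/m (illustrative only):
x_300 = rms_thermal_displacement(100.0, 300.0)  # jitter at room temperature
x_80 = rms_thermal_displacement(100.0, 80.0)    # jitter at liquid-N2 temperature
```

Cooling from 300 K to 80 K shrinks the jitter by a factor of sqrt(300/80), roughly 1.9, which illustrates, in this simplified picture, why a tooltip variant can stay within its misplacement tolerance at 80 K yet exceed it at 300 K.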

Further research[75] to consider additional tooltips will require time-consuming computational chemistry and difficult laboratory work.

A working nanofactory would require a variety of well-designed tips for different reactions, and detailed analyses of placing atoms on more complicated surfaces. Although this appears a challenging problem given current resources, many tools will be available to help future researchers: Moore’s law predicts further increases in computer power, semiconductor fabrication techniques continue to approach the nanoscale, and researchers grow ever more skilled at using proteins, ribosomes and DNA to perform novel chemistry.

Read more:

Molecular nanotechnology – Wikipedia

Home | Sealand Aviation Ltd., Campbell River aircraft …

Sealand Aviation Ltd. – Campbell River (CYBL), Vancouver Island, British Columbia, Canada. Aircraft Maintenance, Repairs and Modifications.

Sealand Aviation overhauls, rebuilds, salvages and repairs aircraft. Sealand Aviation also manufactures aircraft modification kits and components. The company is Transport Canada approved for structures, maintenance, welding and manufacturing.

The experienced aviation mechanics at Sealand Aviation provide excellent maintenance and service on both floatplanes and wheel-equipped light aircraft.


Sealand Aviation was originally started by Bill Alder in Campbell River, BC to provide aircraft maintenance services for the commercial floatplanes servicing the forestry and fishing industries on the West Coast of British Columbia, Canada. Sealand Aviation quickly established a reputation in the aircraft maintenance field for its large inventory of aircraft parts and reliable, efficient customer service.

Aircraft maintenance and aircraft repairs are still the mainstay of the company, but Sealand Aviation now also designs, certifies and manufactures modifications, primarily for the de Havilland Beaver. These aircraft modification kits include the Cabin Extension Kit, the Alaska Door, the Jump Door, and the Westcoast Windows, among others.

Change is good, and Sealand Aviation is not a company that rests on its laurels. New and innovative aviation products are developed and manufactured onsite in response to ever-changing customer needs. Sealand maintains a large inventory of aircraft parts and products and can ship aircraft modification kits all over the globe.

Sealand Aviation works with some of North America’s best aviation engineers for certification and the existing customer base is worldwide.

We welcome your comments and inquiries. Please complete our survey or contact us; your feedback is extremely valuable to us.

Read this article:

Home | Sealand Aviation Ltd., Campbell River aircraft …