

Nineteen Eighty-Four – Wikipedia

Nineteen Eighty-Four, often published as 1984, is a dystopian novel published in 1949 by English author George Orwell.[2][3] The novel is set in the year 1984 when most of the world population have become victims of perpetual war, omnipresent government surveillance and public manipulation.

In the novel, Great Britain (“Airstrip One”) has become a province of a superstate named Oceania. Oceania is ruled by the “Party”, who employ the “Thought Police” to persecute individualism and independent thinking.[4] The Party’s leader is Big Brother, who enjoys an intense cult of personality but may not even exist. The protagonist of the novel, Winston Smith, is a rank-and-file Party member. Smith is an outwardly diligent and skillful worker, but he secretly hates the Party and dreams of rebellion against Big Brother. Smith rebels by entering a forbidden relationship with fellow employee Julia.

As literary political fiction and dystopian science fiction, Nineteen Eighty-Four is a classic novel in content, plot, and style. Many of its terms and concepts, such as Big Brother, doublethink, thoughtcrime, Newspeak, Room 101, telescreen, 2 + 2 = 5, and memory hole, have entered common usage since its publication in 1949. Nineteen Eighty-Four popularised the adjective Orwellian, which describes official deception, secret surveillance, brazenly misleading terminology, and manipulation of recorded history by a totalitarian or authoritarian state.[5] In 2005, the novel was chosen by Time magazine as one of the 100 best English-language novels from 1923 to 2005.[6] It was awarded a place on both Modern Library 100 Best Novels lists, reaching number 13 on the editors’ list and number 6 on the readers’ list.[7] In 2003, the novel was listed at number 8 on the BBC’s survey The Big Read.[8]

Orwell “encapsulate[d] the thesis at the heart of his unforgiving novel” in 1944, reflecting on the implications of dividing the world into zones of influence, an idea conjured by the Tehran Conference. Three years later, he wrote most of the novel on the Scottish island of Jura, between 1947 and 1948, despite being seriously ill with tuberculosis.[9][10] On 4 December 1948, he sent the final manuscript to the publisher Secker and Warburg, and Nineteen Eighty-Four was published on 8 June 1949.[11][12] By 1989, it had been translated into 65 languages, more than any other novel in English until then.[13] The title of the novel, its themes, the Newspeak language and the author’s surname are often invoked against control and intrusion by the state, and the adjective Orwellian describes a totalitarian dystopia that is characterised by government control and subjugation of the people.

Orwell’s invented language, Newspeak, satirises hypocrisy and evasion by the state: the Ministry of Love (Miniluv) oversees torture and brainwashing, the Ministry of Plenty (Miniplenty) oversees shortage and rationing, the Ministry of Peace (Minipax) oversees war and atrocity and the Ministry of Truth (Minitrue) oversees propaganda and historical revisionism.

The Last Man in Europe was an early title for the novel, but in a letter dated 22 October 1948 to his publisher Fredric Warburg, eight months before publication, Orwell wrote about hesitating between that title and Nineteen Eighty-Four.[14] Warburg suggested the latter as the main title, considering it more commercial.[15]

In the novel 1985 (1978), Anthony Burgess suggests that Orwell, disillusioned by the onset of the Cold War (1945–91), intended to call the book 1948. The introduction to the Penguin Books Modern Classics edition of Nineteen Eighty-Four reports that Orwell originally set the novel in 1980 but that he later shifted the date to 1982 and then to 1984. The introduction to the Houghton Mifflin Harcourt edition of Animal Farm and 1984 (2003) reports that the title 1984 was chosen simply as an inversion of the year 1948, the year in which it was being completed, and that the date was meant to give an immediacy and urgency to the menace of totalitarian rule.[16]

Throughout its publication history, Nineteen Eighty-Four has been either banned or legally challenged, as subversive or ideologically corrupting, like Aldous Huxley’s Brave New World (1932), We (1924) by Yevgeny Zamyatin, Darkness at Noon (1940) by Arthur Koestler, Kallocain (1940) by Karin Boye and Fahrenheit 451 (1953) by Ray Bradbury.[17] Some writers consider the Russian dystopian novel We by Zamyatin to have influenced Nineteen Eighty-Four,[18][19] and the novel bears significant similarities in its plot and characters to Darkness at Noon, written years before by Arthur Koestler, who was a personal friend of Orwell.[20]

The novel is in the public domain in Canada,[21] South Africa,[22] Argentina,[23] Australia,[24] and Oman.[25] It will be in the public domain in the United Kingdom, the EU,[26] and Brazil in 2021[27] (70 years after the author’s death), and in the United States in 2044.[28]

Nineteen Eighty-Four is set in Oceania, one of three inter-continental superstates that divided the world after a global war.

Smith’s memories and his reading of the proscribed book, The Theory and Practice of Oligarchical Collectivism by Emmanuel Goldstein, reveal that after the Second World War, the United Kingdom became involved in a war fought in Europe, western Russia, and North America during the early 1950s. Nuclear weapons were used during the war, leading to the destruction of Colchester. London would also suffer widespread aerial raids, leading Winston’s family to take refuge in a London Underground station. Britain fell into civil war, with street fighting in London, before the English Socialist Party, abbreviated as Ingsoc, emerged victorious and formed a totalitarian government in Britain. The British Commonwealth was then absorbed by the United States to become Oceania.

Simultaneously, the Soviet Union conquered continental Europe and established the second superstate of Eurasia. The third superstate of Eastasia would emerge in the Far East after several decades of fighting. The three superstates wage perpetual war for the remaining unconquered lands of the world in “a rough quadrilateral with its corners at Tangier, Brazzaville, Darwin, and Hong Kong” through constantly shifting alliances. Although each of the three states is said to have sufficient natural resources, the war continues in order to maintain ideological control over the people.

However, because Winston barely remembers these events and because of the Party’s manipulation of history, their continuity and accuracy are unclear. Winston himself notes that the Party has claimed credit for inventing helicopters, airplanes and trains, while Julia theorizes that the perpetual bombing of London is merely a false-flag operation designed to convince the populace that a war is occurring. If the official account were accurate, Smith’s strengthening memories and the story of his family’s dissolution suggest that the atomic bombings occurred first, followed by civil war featuring “confused street fighting in London itself” and the societal postwar reorganisation, which the Party retrospectively calls “the Revolution”.

Most of the plot takes place in London, the “chief city of Airstrip One”, the Oceanic province that “had once been called England or Britain”.[29][30] Posters of the Party leader, Big Brother, bearing the caption “BIG BROTHER IS WATCHING YOU”, dominate the city (Winston states it can be found on nearly every house), while the ubiquitous telescreen (transceiving television set) monitors the private and public lives of the populace. Military parades, propaganda films, and public executions are said to be commonplace.

The class hierarchy of Oceania has three levels:

As the government, the Party controls the population with four ministries:

The protagonist Winston Smith, a member of the Outer Party, works in the Records Department of the Ministry of Truth as an editor, revising historical records to make the past conform to the ever-changing party line and deleting references to unpersons, people who have been “vaporised”, i.e., not only killed by the state but denied existence even in history or memory.

The story of Winston Smith begins on 4 April 1984: “It was a bright cold day in April, and the clocks were striking thirteen.” Yet he is uncertain of the true date, given the regime’s continual rewriting and manipulation of history.[31]

In the year 1984, civilization has been damaged by war, civil conflict, and revolution. Airstrip One (formerly Britain) is a province of Oceania, one of the three totalitarian super-states that rule the world. It is ruled by the “Party” under the ideology of “Ingsoc” and the mysterious leader Big Brother, who has an intense cult of personality. The Party stamps out anyone who does not fully conform to its regime using the Thought Police and constant surveillance, through devices such as Telescreens (two-way televisions).

Winston Smith is a member of the middle class Outer Party. He works at the Ministry of Truth, where he rewrites historical records to conform to the state’s ever-changing version of history. Those who fall out of favour with the Party become “unpersons”, disappearing with all evidence of their existence removed. Winston revises past editions of The Times, while the original documents are destroyed by fire in a “memory hole”. He secretly opposes the Party’s rule and dreams of rebellion. He realizes that he is already a “thoughtcriminal” and likely to be caught one day.

While in a proletarian neighbourhood, he meets an antique shop owner called Mr. Charrington and buys a diary. He uses an alcove to hide it from the Telescreen in his room, and writes thoughts criticising the Party and Big Brother. In the journal, he records his sexual frustration over Julia, a young woman who maintains the novel-writing machines at the ministry; Winston is attracted to her but suspects that she is an informant. He also suspects that his superior, an Inner Party official named O’Brien, is a secret agent for an enigmatic underground resistance movement known as the Brotherhood, a group formed by Big Brother’s reviled political rival Emmanuel Goldstein.

The next day, Julia secretly hands Winston a note confessing her love for him. Winston and Julia begin an affair, an act of rebellion, as the Party insists that sex may only be used for reproduction. Winston realizes that she shares his loathing of the Party. They first meet in the country, and later in a rented room above Mr. Charrington’s shop. During his affair with Julia, Winston remembers the disappearance of his family during the civil war of the 1950s and his strained relationship with his ex-wife Katharine. Winston also interacts with his colleague Syme, who is writing a dictionary for a revised version of the English language called Newspeak. After Syme admits that the true purpose of Newspeak is to reduce the capacity of human thought, Winston speculates that Syme will disappear. Not long after, Syme disappears and no one acknowledges his absence.

Weeks later, Winston is approached by O’Brien, who offers Winston a chance to join the Brotherhood. They arrange a meeting at O’Brien’s luxurious flat where both Winston and Julia swear allegiance to the Brotherhood. O’Brien sends Winston a copy of The Theory and Practice of Oligarchical Collectivism by Emmanuel Goldstein. Winston and Julia read parts of the book, which explains more about how the Party maintains power, the true meanings of its slogans and the concept of perpetual war. It argues that the Party can be overthrown if the proles (proletarians) rise up against it.

Mr. Charrington is revealed to be an agent of the Thought Police. Winston and Julia are captured in the shop and imprisoned in the Ministry of Love. O’Brien reveals that he is loyal to the Party and part of a special sting operation to catch “thoughtcriminals”. Over many months, Winston is tortured and forced to “cure” himself of his “insanity” by changing his own perception to fit the Party line, even if it requires saying that “2 + 2 = 5”. O’Brien openly admits that the Party “is not interested in the good of others; it is interested solely in power.” He says that once Winston is brainwashed into loyalty, he will be released back into society for a period of time, before they execute him. Winston points out that the Party has not managed to make him betray Julia.

O’Brien then takes Winston to Room 101 for the final stage of re-education. The room contains each prisoner’s worst fear, in Winston’s case rats. As a wire cage holding hungry rats is fitted onto his face, Winston shouts “Do it to Julia!”, thus betraying her. After being released, Winston meets Julia in a park. She says that she was also tortured, and each reveals having betrayed the other. Later, Winston sits alone in a café as Oceania celebrates a supposed victory over Eurasian armies in Africa, and realizes that “He loved Big Brother.”

Ingsoc (English Socialism) is the predominant ideology and pseudophilosophy of Oceania, and Newspeak is the official language of official documents.

In London, the capital city of Airstrip One, Oceania’s four government ministries are in pyramids (300 m high), the façades of which display the Party’s three slogans. The ministries’ names are the opposite (doublethink) of their true functions: “The Ministry of Peace concerns itself with war, the Ministry of Truth with lies, the Ministry of Love with torture and the Ministry of Plenty with starvation.” (Part II, Chapter IX The Theory and Practice of Oligarchical Collectivism)

The Ministry of Peace supports Oceania’s perpetual war against either of the two other superstates:

The primary aim of modern warfare (in accordance with the principles of doublethink, this aim is simultaneously recognized and not recognized by the directing brains of the Inner Party) is to use up the products of the machine without raising the general standard of living. Ever since the end of the nineteenth century, the problem of what to do with the surplus of consumption goods has been latent in industrial society. At present, when few human beings even have enough to eat, this problem is obviously not urgent, and it might not have become so, even if no artificial processes of destruction had been at work.

The Ministry of Plenty rations and controls food, goods, and domestic production; every fiscal quarter, it publishes false claims of having raised the standard of living, when it has, in fact, reduced rations, availability, and production. The Ministry of Truth substantiates the Ministry of Plenty’s claims by revising historical records to report numbers supporting the current “increased rations”.

The Ministry of Truth controls information: news, entertainment, education, and the arts. Winston Smith works in the Minitrue RecDep (Records Department), “rectifying” historical records to concord with Big Brother’s current pronouncements so that everything the Party says is true.

The Ministry of Love identifies, monitors, arrests, and converts real and imagined dissidents. In Winston’s experience, the dissident is beaten and tortured, and, when near-broken, he is sent to Room 101 to face “the worst thing in the world”, until love for Big Brother and the Party replaces dissension.

The keyword here is blackwhite. Like so many Newspeak words, this word has two mutually contradictory meanings. Applied to an opponent, it means the habit of impudently claiming that black is white, in contradiction of the plain facts. Applied to a Party member, it means a loyal willingness to say that black is white when Party discipline demands this. But it means also the ability to believe that black is white, and more, to know that black is white, and to forget that one has ever believed the contrary. This demands a continuous alteration of the past, made possible by the system of thought which really embraces all the rest, and which is known in Newspeak as doublethink. Doublethink is basically the power of holding two contradictory beliefs in one’s mind simultaneously, and accepting both of them.

Three perpetually warring totalitarian super-states control the world:[34]

The perpetual war is fought for control of the “disputed area” lying “between the frontiers of the super-states”, which forms “a rough quadrilateral with its corners at Tangier, Brazzaville, Darwin and Hong Kong”,[34] and Northern Africa, the Middle East, India and Indonesia are where the superstates capture and use slave labour. Fighting also takes place between Eurasia and Eastasia in Manchuria, Mongolia and Central Asia, and all three powers battle one another over various Atlantic and Pacific islands.

Goldstein’s book, The Theory and Practice of Oligarchical Collectivism, explains that the superstates’ ideologies are alike and that the public’s ignorance of this fact is imperative so that they might continue believing in the detestability of the opposing ideologies. The only references to the exterior world for the Oceanian citizenry (the Outer Party and the Proles) are Ministry of Truth maps and propaganda to ensure their belief in “the war”.

Winston Smith’s memory and Emmanuel Goldstein’s book communicate some of the history that precipitated the Revolution. Eurasia was formed when the Soviet Union conquered Continental Europe, creating a single state stretching from Portugal to the Bering Strait. Eurasia does not include the British Isles because the United States annexed them along with the rest of the British Empire and Latin America, thus establishing Oceania and gaining control over a quarter of the planet. Eastasia, the last superstate established, emerged only after “a decade of confused fighting”. It includes the Asian lands conquered by China and Japan. Although Eastasia is prevented from matching Eurasia’s size, its larger populace compensates for that handicap.

The annexation of Britain occurred about the same time as the atomic war that provoked civil war, but who fought whom in the war is left unclear. Nuclear weapons fell on Britain; an atomic bombing of Colchester is referenced in the text. Exactly how Ingsoc and its rival systems (Neo-Bolshevism and Death Worship) gained power in their respective countries is also unclear.

While the precise chronology cannot be traced, most of the global societal reorganization occurred between 1945 and the early 1960s. Winston and Julia once meet in the ruins of a church that was destroyed in a nuclear attack “thirty years” earlier, which suggests 1954 as the year of the atomic war that destabilised society and allowed the Party to seize power. It is stated in the novel that the “fourth quarter of 1983” was “also the sixth quarter of the Ninth Three-Year Plan”, which implies that the first quarter of the first three-year plan began in July 1958. By then, the Party was apparently in control of Oceania.

In 1984, there is a perpetual war between Oceania, Eurasia and Eastasia, the superstates that emerged from the global atomic war. The Theory and Practice of Oligarchical Collectivism, by Emmanuel Goldstein, explains that each state is so strong it cannot be defeated, even by the combined forces of two superstates, despite changing alliances. To hide such contradictions, history is rewritten to explain that the (new) alliance always was so; the populaces are accustomed to doublethink and accept it. The war is not fought in Oceanian, Eurasian or Eastasian territory but in the Arctic wastes and in a disputed zone comprising the sea and land from Tangier (Northern Africa) to Darwin (Australia). At the start, Oceania and Eastasia are allies fighting Eurasia in northern Africa and the Malabar Coast.

That alliance ends and Oceania, allied with Eurasia, fights Eastasia, a change occurring on Hate Week, dedicated to creating patriotic fervour for the Party’s perpetual war. The public are blind to the change; in mid-sentence, an orator changes the name of the enemy from “Eurasia” to “Eastasia” without pause. When the public are enraged at noticing that the wrong flags and posters are displayed, they tear them down; the Party later claims to have captured Africa.

Goldstein’s book explains that the purpose of the unwinnable, perpetual war is to consume human labour and commodities so that the economy of a superstate cannot support economic equality, with a high standard of life for every citizen. By using up most of the produced objects like boots and rations, the proles are kept poor and uneducated and will neither realise what the government is doing nor rebel. Goldstein also details an Oceanian strategy of attacking enemy cities with atomic rockets before invasion but dismisses it as unfeasible and contrary to the war’s purpose; despite the atomic bombing of cities in the 1950s, the superstates stopped it for fear that it would imbalance the powers. The military technology in the novel differs little from that of World War II, but strategic bomber aeroplanes have been replaced with rocket bombs, helicopters are heavily used as weapons of war (they did not figure in World War II in any form but prototypes), and surface combat units have been all but replaced by immense and unsinkable Floating Fortresses: island-like contraptions concentrating the firepower of a whole naval task force in a single, semi-mobile platform. (In the novel, one is said to be anchored between Iceland and the Faroe Islands, suggesting a preference for sea lane interdiction and denial.)

The society of Airstrip One and, according to “The Book”, almost the whole world, lives in poverty: hunger, disease and filth are the norms. Ruined cities and towns are common: the consequence of the civil war, the atomic wars and the purported enemy (but possibly false-flag) rockets. Social decay and wrecked buildings surround Winston; aside from the ministerial pyramids, little of London was rebuilt. Members of the Outer Party consume synthetic foodstuffs and poor-quality “luxuries” such as oily gin and loosely-packed cigarettes, distributed under the “Victory” brand. (That is a parody of the low-quality Indian-made “Victory” cigarettes, widely smoked in Britain and by British soldiers during World War II. They were smoked because it was easier to import them from India than to ship American cigarettes across the Atlantic during the Battle of the Atlantic.)

Winston describes something as simple as the repair of a broken pane of glass as requiring committee approval that can take several years; most of those living in the blocks therefore do the repairs themselves (Winston himself is called in by Mrs. Parsons to repair her blocked sink). All Outer Party residences include telescreens that serve both as outlets for propaganda and as a means to monitor Party members; they can be turned down, but they cannot be turned off.

In contrast to their subordinates, the Inner Party upper class of Oceanian society reside in clean and comfortable flats in their own quarter of the city, with pantries well-stocked with foodstuffs such as wine, coffee and sugar, all denied to the general populace.[35] Winston is astonished that the lifts in O’Brien’s building work, the telescreens can be switched off and O’Brien has an Asian manservant, Martin. All members of the Inner Party are attended to by slaves captured in the disputed zone, and “The Book” suggests that many have their own motorcars or even helicopters. Nonetheless, “The Book” makes clear that even the conditions enjoyed by the Inner Party are only “relatively” comfortable, and standards would be regarded as austere by those of the pre-revolutionary élite.[36]

The proles live in poverty and are kept sedated with alcohol, pornography and a national lottery whose winnings are never actually paid out; that is obscured by propaganda and the lack of communication within Oceania. At the same time, the proles are freer and less intimidated than the middle-class Outer Party: they are subject to certain levels of monitoring but are not expected to be particularly patriotic. They lack telescreens in their own homes and often jeer at the telescreens that they see. “The Book” indicates that this is because the middle class, not the lower class, traditionally starts revolutions. The model demands tight control of the middle class, with ambitious Outer-Party members neutralised via promotion to the Inner Party or “reintegration” by the Ministry of Love, while the proles can be allowed intellectual freedom because they are deemed to lack intellect. Winston nonetheless believes that “the future belonged to the proles”.[37]

The standard of living of the populace is low overall. Consumer goods are scarce, and all those available through official channels are of low quality; for instance, despite the Party regularly reporting increased boot production, more than half of the Oceanian populace goes barefoot. The Party claims that poverty is a necessary sacrifice for the war effort, and “The Book” confirms this to be partially correct, since the purpose of perpetual war is to consume surplus industrial production. Outer Party members and proles occasionally gain access to better items in the market, which deals in goods that were pilfered from the residences of the Inner Party.

Nineteen Eighty-Four expands upon the subjects summarised in Orwell’s essay “Notes on Nationalism”[38] about the lack of vocabulary needed to explain the unrecognised phenomena behind certain political forces. In Nineteen Eighty-Four, the Party’s artificial, minimalist language ‘Newspeak’ addresses the matter.

O’Brien concludes: “The object of persecution is persecution. The object of torture is torture. The object of power is power.”

In the book, Inner Party member O’Brien describes the Party’s vision of the future:

There will be no curiosity, no enjoyment of the process of life. All competing pleasures will be destroyed. But always—do not forget this, Winston—always there will be the intoxication of power, constantly increasing and constantly growing subtler. Always, at every moment, there will be the thrill of victory, the sensation of trampling on an enemy who is helpless. If you want a picture of the future, imagine a boot stamping on a human face—forever.

Part III, Chapter III, Nineteen Eighty-Four

A major theme of Nineteen Eighty-Four is censorship, especially in the Ministry of Truth, where photographs are modified and public archives rewritten to rid them of “unpersons” (persons who are erased from history by the Party). On the telescreens, figures for all types of production are grossly exaggerated or simply invented to indicate an ever-growing economy, when the reality is the opposite. One small example of the endless censorship is Winston being charged with the task of eliminating a reference to an unperson in a newspaper article. He proceeds to write an article about Comrade Ogilvy, a made-up party member who displayed great heroism by leaping into the sea from a helicopter so that the dispatches he was carrying would not fall into enemy hands.

The inhabitants of Oceania, particularly the Outer Party members, have no real privacy. Many of them live in apartments equipped with two-way telescreens so that they may be watched or listened to at any time. Similar telescreens are found at workstations and in public places, along with hidden microphones. Written correspondence is routinely opened and read by the government before it is delivered. The Thought Police employ undercover agents, who pose as normal citizens and report any person with subversive tendencies. Children are encouraged to report suspicious persons to the government, and some denounce their parents. Citizens are controlled, and the smallest sign of rebellion, even something so small as a facial expression, can result in immediate arrest and imprisonment. Thus, citizens, particularly party members, are compelled to obedience.

“The Principles of Newspeak” is an academic essay appended to the novel. It describes the development of Newspeak, the Party’s minimalist artificial language meant to ideologically align thought and action with the principles of Ingsoc by making “all other modes of thought impossible”. (A linguistic theory about how language may direct thought is the Sapir–Whorf hypothesis.)

Whether or not the Newspeak appendix implies a hopeful end to Nineteen Eighty-Four remains a critical debate, as it is in Standard English and refers to Newspeak, Ingsoc, the Party, etc. in the past tense: “Relative to our own, the Newspeak vocabulary was tiny, and new ways of reducing it were constantly being devised” (p. 422). Some critics (Atwood,[39] Benstead,[40] Milner,[41] Pynchon[42]) claim that for the essay’s author, both Newspeak and the totalitarian government are in the past.

Nineteen Eighty-Four uses themes from life in the Soviet Union and wartime life in Great Britain as sources for many of its motifs. At an unspecified date after the first American publication of the book, producer Sidney Sheldon wrote to Orwell expressing interest in adapting the novel for the Broadway stage. Orwell sold the American stage rights to Sheldon, explaining that his basic goal with Nineteen Eighty-Four was imagining the consequences of Stalinist government ruling British society:

[Nineteen Eighty-Four] was based chiefly on communism, because that is the dominant form of totalitarianism, but I was trying chiefly to imagine what communism would be like if it were firmly rooted in the English speaking countries, and was no longer a mere extension of the Russian Foreign Office.[43]

The statement “2 + 2 = 5”, used to torment Winston Smith during his interrogation, was a communist party slogan from the second five-year plan, which encouraged fulfillment of the five-year plan in four years. The slogan was seen in electric lights on Moscow house-fronts, billboards and elsewhere.[44]

The switch of Oceania’s allegiance from Eastasia to Eurasia and the subsequent rewriting of history (“Oceania was at war with Eastasia: Oceania had always been at war with Eastasia. A large part of the political literature of five years was now completely obsolete”; ch 9) is evocative of the Soviet Union’s changing relations with Nazi Germany. The two nations were open and frequently vehement critics of each other until the signing of the 1939 Treaty of Non-Aggression. Thereafter, and continuing until the Nazi invasion of the Soviet Union in 1941, no criticism of Germany was allowed in the Soviet press, and all references to prior party lines stopped, including in the majority of non-Russian communist parties who tended to follow the Russian line. Orwell had criticised the Communist Party of Great Britain for supporting the Treaty in his essays for Betrayal of the Left (1941). “The Hitler-Stalin pact of August 1939 reversed the Soviet Union’s stated foreign policy. It was too much for many of the fellow-travellers like Gollancz [Orwell’s sometime publisher] who had put their faith in a strategy of constructing Popular Front governments and the peace bloc between Russia, Britain and France.”[45]

The description of Emmanuel Goldstein, with a “small, goatee beard”, evokes the image of Leon Trotsky. The film of Goldstein during the Two Minutes Hate is described as showing him being transformed into a bleating sheep. This image was used in a propaganda film during the Kino-eye period of Soviet film, which showed Trotsky transforming into a goat.[46] Goldstein’s book is similar to Trotsky’s highly critical analysis of the USSR, The Revolution Betrayed, published in 1936.

The omnipresent images of Big Brother, a man described as having a moustache, bear resemblance to the cult of personality built up around Joseph Stalin.

The news in Oceania emphasised production figures, just as it did in the Soviet Union, where record-setting in factories (by “Heroes of Socialist Labor”) was especially glorified. The best known of these was Alexey Stakhanov, who purportedly set a record for coal mining in 1935.

The tortures of the Ministry of Love evoke the procedures used by the NKVD in their interrogations,[47] including the use of rubber truncheons, being forbidden to put one’s hands in one’s pockets, remaining in brightly lit rooms for days, torture through the use of provoked rodents, and the victim being shown a mirror after their physical collapse.

The random bombing of Airstrip One is based on the V-1 flying bombs (“buzz bombs”) and the V-2 rocket, which struck England at random in 1944–1945.

The Thought Police is based on the NKVD, which arrested people for random “anti-Soviet” remarks.[48] The Thought Crime motif is drawn from the Kempeitai, the Japanese wartime secret police, who arrested people for “unpatriotic” thoughts.

The confessions of the “Thought Criminals” Rutherford, Aaronson and Jones are based on the show trials of the 1930s, which included fabricated confessions by prominent Bolsheviks Nikolai Bukharin, Grigory Zinoviev and Lev Kamenev to the effect that they were being paid by the Nazi government to undermine the Soviet regime under Leon Trotsky’s direction.

The song “Under the Spreading Chestnut Tree” (“Under the spreading chestnut tree, I sold you, and you sold me”) was based on an old English song called “Go no more a-rushing” (“Under the spreading chestnut tree, Where I knelt upon my knee, We were as happy as could be, ‘Neath the spreading chestnut tree.”). The song was published as early as 1891. The song was a popular camp song in the 1920s, sung with corresponding movements (like touching your chest when you sing “chest”, and touching your head when you sing “nut”). Glenn Miller recorded the song in 1939.[49]

The “Hates” (Two Minutes Hate and Hate Week) were inspired by the constant rallies sponsored by party organs throughout the Stalinist period. These were often short pep-talks given to workers before their shifts began (Two Minutes Hate), but could also last for days, as in the annual celebrations of the anniversary of the October revolution (Hate Week).

Orwell modelled “newspeak”, “doublethink”, and the “Ministry of Truth” on practices evinced by both the Soviet press and that of Nazi Germany.[50] In particular, he adapted Soviet ideological discourse, which was constructed to ensure that public statements could not be questioned.[51]

Winston Smith’s job, “revising history” (and the “unperson” motif) are based on the Stalinist habit of airbrushing images of ‘fallen’ people from group photographs and removing references to them in books and newspapers.[53] In one well-known example, the Soviet encyclopaedia had an article about Lavrentiy Beria. When he fell in 1953, and was subsequently executed, institutes that had the encyclopaedia were sent an article about the Bering Strait, with instructions to paste it over the article about Beria.[54]

Big Brother’s “Orders of the Day” were inspired by Stalin’s regular wartime orders of the same name. A small collection of the more political of these has been published (together with his wartime speeches) in English as On the Great Patriotic War of the Soviet Union by Joseph Stalin.[55][56] Like Big Brother’s Orders of the Day, Stalin’s frequently lauded heroic individuals,[57] as Big Brother’s do with Comrade Ogilvy, the fictitious hero Winston Smith invented to ‘rectify’ (fabricate) a Big Brother Order of the Day.

The Ingsoc slogan “Our new, happy life”, repeated from telescreens, evokes Stalin’s 1935 statement, which became a CPSU slogan, “Life has become better, Comrades; life has become more cheerful.”[48]

In 1940, the Argentine writer Jorge Luis Borges published “Tlön, Uqbar, Orbis Tertius”, which described the invention by a “benevolent secret society” of a world that would seek to remake human language and reality along human-invented lines. The story concludes with an appendix describing the success of the project. Borges’s story addresses similar themes of epistemology, language and history to 1984.[58]

During World War II, Orwell believed that British democracy as it existed before 1939 would not survive the war; the question was whether it would end via a Fascist coup d’état from above or via a Socialist revolution from below.[citation needed] Later, he admitted that events proved him wrong: “What really matters is that I fell into the trap of assuming that ‘the war and the revolution are inseparable’.”[59]

Nineteen Eighty-Four (1949) and Animal Farm (1945) share themes of the betrayed revolution, the person’s subordination to the collective, rigorously enforced class distinctions (Inner Party, Outer Party, Proles), the cult of personality, concentration camps, Thought Police, compulsory regimented daily exercise, and youth leagues. Oceania resulted from the US annexation of the British Empire to counter the Asian peril to Australia and New Zealand. It is a naval power whose militarism venerates the sailors of the floating fortresses, from which battle is given to recapturing India, the “Jewel in the Crown” of the British Empire. Much of Oceanic society is based upon the USSR under Joseph Stalin (Big Brother). The televised Two Minutes Hate is ritual demonisation of the enemies of the State, especially Emmanuel Goldstein (viz Leon Trotsky). Altered photographs and newspaper articles create unpersons deleted from the national historical record, including even founding members of the regime (Jones, Aaronson and Rutherford) in the 1960s purges (viz the Soviet Purges of the 1930s, in which leaders of the Bolshevik Revolution were similarly treated). A similar thing happened during the French Revolution, in which many of the original leaders of the Revolution were later put to death, for example Danton, who was put to death by Robespierre, before Robespierre himself met the same fate.

In his 1946 essay “Why I Write”, Orwell explains that the serious works he wrote since the Spanish Civil War (1936–39) were “written, directly or indirectly, against totalitarianism and for democratic socialism”.[3][60] Nineteen Eighty-Four is a cautionary tale about revolution betrayed by totalitarian defenders previously proposed in Homage to Catalonia (1938) and Animal Farm (1945), while Coming Up for Air (1939) celebrates the personal and political freedoms lost in Nineteen Eighty-Four (1949). Biographer Michael Shelden notes Orwell’s Edwardian childhood at Henley-on-Thames as the golden country; being bullied at St Cyprian’s School as his empathy with victims; his life in the Indian Imperial Police in Burma and the techniques of violence and censorship in the BBC as capricious authority.[61]

Other influences include Darkness at Noon (1940) and The Yogi and the Commissar (1945) by Arthur Koestler; The Iron Heel (1908) by Jack London; 1920: Dips into the Near Future[62] by John A. Hobson; Brave New World (1932) by Aldous Huxley; We (1921) by Yevgeny Zamyatin, which he reviewed in 1946;[63] and The Managerial Revolution (1940) by James Burnham, which predicted perpetual war among three totalitarian superstates. Orwell told Jacintha Buddicom that he would write a novel stylistically like A Modern Utopia (1905) by H. G. Wells.[citation needed]

Extrapolating from World War II, the novel’s pastiche parallels the politics and rhetoric at war’s end: the changed alliances at the beginning of the Cold War (1945–91); the Ministry of Truth derives from the BBC’s overseas service, controlled by the Ministry of Information; Room 101 derives from a conference room at BBC Broadcasting House;[64] the Senate House of the University of London, which contained the Ministry of Information, is the architectural inspiration for the Minitrue; the post-war decrepitude derives from the socio-political life of the UK and the US, i.e., the impoverished Britain of 1948 losing its Empire despite newspaper-reported imperial triumph; and the war ally but peace-time foe, Soviet Russia, became Eurasia.

The term “English Socialism” has precedents in his wartime writings; in the essay “The Lion and the Unicorn: Socialism and the English Genius” (1941), he said that “the war and the revolution are inseparable … the fact that we are at war has turned Socialism from a textbook word into a realisable policy”, because Britain’s superannuated social class system hindered the war effort and only a socialist economy would defeat Adolf Hitler. He argued that once the middle class grasped this, they too would abide the socialist revolution, and that only reactionary Britons would oppose it, thus limiting the force revolutionaries would need to take power. An English Socialism would come about which “will never lose touch with the tradition of compromise and the belief in a law that is above the State. It will shoot traitors, but it will give them a solemn trial beforehand and occasionally it will acquit them. It will crush any open revolt promptly and cruelly, but it will interfere very little with the spoken and written word.”[65]

In the world of Nineteen Eighty-Four, “English Socialism” (or “Ingsoc” in Newspeak) is a totalitarian ideology unlike the English revolution he foresaw. Comparison of the wartime essay “The Lion and the Unicorn” with Nineteen Eighty-Four shows that he perceived a Big Brother regime as a perversion of his cherished socialist ideals and English Socialism. Thus Oceania is a corruption of the British Empire he believed would evolve “into a federation of Socialist states, like a looser and freer version of the Union of Soviet Republics”.[66][verification needed]

When first published, Nineteen Eighty-Four was generally well received by reviewers. V. S. Pritchett, reviewing the novel for the New Statesman stated: “I do not think I have ever read a novel more frightening and depressing; and yet, such are the originality, the suspense, the speed of writing and withering indignation that it is impossible to put the book down.”[67] P. H. Newby, reviewing Nineteen Eighty-Four for The Listener magazine, described it as “the most arresting political novel written by an Englishman since Rex Warner’s The Aerodrome.”[68] Nineteen Eighty-Four was also praised by Bertrand Russell, E. M. Forster and Harold Nicolson.[68] On the other hand, Edward Shanks, reviewing Nineteen Eighty-Four for The Sunday Times, was dismissive; Shanks claimed Nineteen Eighty-Four “breaks all records for gloomy vaticination”.[68] C. S. Lewis was also critical of the novel, claiming that the relationship of Julia and Winston, and especially the Party’s view on sex, lacked credibility, and that the setting was “odious rather than tragic”.[69]

Nineteen Eighty-Four has been adapted for the cinema, radio, television and theatre at least twice each, as well as for other art media, such as ballet and opera.

The effect of Nineteen Eighty-Four on the English language is extensive; the concepts of Big Brother, Room 101, the Thought Police, thoughtcrime, unperson, memory hole (oblivion), doublethink (simultaneously holding and believing contradictory beliefs) and Newspeak (ideological language) have become common phrases for denoting totalitarian authority. Doublespeak and groupthink are both deliberate elaborations of doublethink, and the adjective “Orwellian” means similar to Orwell’s writings, especially Nineteen Eighty-Four. The practice of ending words with “-speak” (such as mediaspeak) is drawn from the novel.[70] Orwell is perpetually associated with 1984; in July 1984, an asteroid was discovered by Antonín Mrkos and named after Orwell.

References to the themes, concepts and plot of Nineteen Eighty-Four have appeared frequently in other works, especially in popular music and video entertainment. An example is the worldwide hit reality television show Big Brother, in which a group of people live together in a large house, isolated from the outside world but continuously watched by television cameras.

The book touches on the invasion of privacy and ubiquitous surveillance. From mid-2013 it was publicized that the NSA has been secretly monitoring and storing global internet traffic, including the bulk data collection of email and phone call data. Sales of Nineteen Eighty-Four increased by up to seven times within the first week of the 2013 mass surveillance leaks.[79][80][81] The book again topped the Amazon.com sales charts in 2017 after a controversy involving Kellyanne Conway using the phrase “alternative facts” to explain discrepancies with the media.[82][83][84][85]

The book also shows mass media as a catalyst for the intensification of destructive emotions and violence. Since the 20th century, news and other forms of media have been publicizing violence more often.[86][87] In 2013, the Almeida Theatre and Headlong staged a successful new adaptation (by Robert Icke and Duncan Macmillan), which twice toured the UK and played an extended run in London’s West End. The play opened on Broadway in 2017.

In the decades since the publication of Nineteen Eighty-Four, there have been numerous comparisons to Aldous Huxley’s novel Brave New World, which had been published 17 years earlier, in 1932.[88][89][90][91] They are both predictions of societies dominated by a central government and are both based on extensions of the trends of their times. However, members of the ruling class of Nineteen Eighty-Four use brutal force, torture and mind control to keep individuals in line, whereas rulers in Brave New World keep the citizens in line through addictive drugs and pleasurable distractions.

In October 1949, after reading Nineteen Eighty-Four, Huxley sent Orwell a letter arguing that it would be more efficient for rulers to stay in power with a softer touch, controlling citizens by allowing them to seek pleasure and a false sense of freedom rather than through brute force:

Within the next generation I believe that the world’s rulers will discover that infant conditioning and narco-hypnosis are more efficient, as instruments of government, than clubs and prisons, and that the lust for power can be just as completely satisfied by suggesting people into loving their servitude as by flogging and kicking them into obedience.[92]

Elements of both novels can be seen in modern-day societies, with Huxley’s vision being more dominant in the West and Orwell’s vision more prevalent with dictators in ex-communist countries, as is pointed out in essays that compare the two novels, including Huxley’s own Brave New World Revisited.[93][94][95][85]

Comparisons with other dystopian novels like The Handmaid’s Tale, Virtual Light, The Private Eye and Children of Men have also been drawn.[96][97]


War on drugs – Wikipedia

War on Drugs is an American term[6][7] usually applied to the U.S. federal government’s campaign of prohibition of drugs, military aid, and military intervention, with the stated aim being to reduce the illegal drug trade.[8][9] The initiative includes a set of drug policies that are intended to discourage the production, distribution, and consumption of psychoactive drugs that the participating governments and the UN have made illegal. The term was popularized by the media shortly after a press conference given on June 18, 1971, by President Richard Nixon, the day after publication of a special message from President Nixon to the Congress on Drug Abuse Prevention and Control, during which he declared drug abuse “public enemy number one”. That message to the Congress included text about devoting more federal resources to the “prevention of new addicts, and the rehabilitation of those who are addicted”, but that part did not receive the same public attention as the term “war on drugs”.[10][11][12] However, two years prior to this, Nixon had formally declared a “war on drugs” that would be directed toward eradication, interdiction, and incarceration.[13] Today, the Drug Policy Alliance, which advocates for an end to the War on Drugs, estimates that the United States spends $51 billion annually on these initiatives.[14]

On May 13, 2009, Gil Kerlikowske, the Director of the Office of National Drug Control Policy (ONDCP), signaled that the Obama administration did not plan to significantly alter drug enforcement policy, but also that the administration would not use the term “War on Drugs”, because Kerlikowske considers the term to be “counter-productive”.[15] ONDCP’s view is that “drug addiction is a disease that can be successfully prevented and treated… making drugs more available will make it harder to keep our communities healthy and safe”.[16] One of the alternatives that Kerlikowske has showcased is the drug policy of Sweden, which seeks to balance public health concerns with opposition to drug legalization. The prevalence rates for cocaine use in Sweden are barely one-fifth of those in Spain, the biggest consumer of the drug.[17]

In June 2011, the Global Commission on Drug Policy released a critical report on the War on Drugs, declaring: “The global war on drugs has failed, with devastating consequences for individuals and societies around the world. Fifty years after the initiation of the UN Single Convention on Narcotic Drugs, and years after President Nixon launched the US government’s war on drugs, fundamental reforms in national and global drug control policies are urgently needed.”[18] The report was criticized by organizations that oppose a general legalization of drugs.[16]

The first U.S. law that restricted the distribution and use of certain drugs was the Harrison Narcotics Tax Act of 1914. The first local laws came as early as 1860.[19] In 1919, the United States passed the 18th Amendment, prohibiting the sale, manufacture, and transportation of alcohol, with exceptions for religious and medical use. In 1920, the United States passed the National Prohibition Act (Volstead Act), enacted to carry out the provisions in law of the 18th Amendment.

The Federal Bureau of Narcotics was established in the United States Department of the Treasury by an act of June 14, 1930 (46 Stat. 585).[20] In 1933, the federal prohibition for alcohol was repealed by passage of the 21st Amendment. In 1935, President Franklin D. Roosevelt publicly supported the adoption of the Uniform State Narcotic Drug Act. The New York Times used the headline “Roosevelt Asks Narcotic War Aid”.[21][22]

In 1937, the Marihuana Tax Act of 1937 was passed. Several scholars have claimed that the goal was to destroy the hemp industry,[23][24][25] largely as an effort of businessmen Andrew Mellon, Randolph Hearst, and the Du Pont family.[23][25] These scholars argue that with the invention of the decorticator, hemp became a very cheap substitute for the paper pulp that was used in the newspaper industry.[23][26] These scholars believe that Hearst felt[dubious] that this was a threat to his extensive timber holdings. Mellon, United States Secretary of the Treasury and the wealthiest man in America, had invested heavily in DuPont’s new synthetic fiber, nylon, and considered[dubious] its success to depend on its replacement of the traditional resource, hemp.[23][27][28][29][30][31][32][33] However, there were circumstances that contradict these claims. One reason for doubt is that the new decorticators did not perform fully satisfactorily in commercial production.[34] Producing fiber from hemp was a labor-intensive process, including harvest, transport and processing, and technological developments decreased this labor but not sufficiently to eliminate the disadvantage.[35][36]

On October 27, 1970, Congress passed the Comprehensive Drug Abuse Prevention and Control Act of 1970, which, among other things, categorized controlled substances based on their medicinal use and potential for addiction.[37] In 1971, two congressmen released an explosive report on the growing heroin epidemic among U.S. servicemen in Vietnam; ten to fifteen percent of the servicemen were addicted to heroin, and President Nixon declared drug abuse to be “public enemy number one”.[37][38]

Although Nixon declared “drug abuse” to be public enemy number one in 1971,[39] the policies that his administration implemented as part of the Comprehensive Drug Abuse Prevention and Control Act of 1970 were a continuation of drug prohibition policies in the U.S., which started in 1914.[37][40]

“The Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the antiwar left and black people. You understand what I’m saying? We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities. We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did.” – John Ehrlichman, to Dan Baum[41][42][43] for Harper’s Magazine[44] in 1994, about President Richard Nixon’s war on drugs, declared in 1971.[45][46]

In 1973, the Drug Enforcement Administration was created to replace the Bureau of Narcotics and Dangerous Drugs.[37]

The Nixon Administration also repealed the federal 2–10-year mandatory minimum sentences for possession of marijuana and started federal demand reduction programs and drug-treatment programs. Robert DuPont, the “Drug czar” in the Nixon Administration, stated it would be more accurate to say that Nixon ended, rather than launched, the “war on drugs”. DuPont also argued that it was the proponents of drug legalization that popularized the term “war on drugs”.[16][unreliable source?]

In 1982, Vice President George H. W. Bush and his aides began pushing for the involvement of the CIA and U.S. military in drug interdiction efforts.[47]

The Office of National Drug Control Policy (ONDCP) was originally established by the National Narcotics Leadership Act of 1988,[48][49] which mandated a national anti-drug media campaign for youth, which would later become the National Youth Anti-Drug Media Campaign.[50] The director of ONDCP is commonly known as the Drug czar,[37] a position first filled in 1989 under President George H. W. Bush[51] and raised to cabinet-level status by Bill Clinton in 1993.[52] These activities were subsequently funded by the Treasury and General Government Appropriations Act of 1998.[53][54] The Drug-Free Media Campaign Act of 1998 codified the campaign at 21 U.S.C. § 1708.[55]

The Global Commission on Drug Policy released a report on June 2, 2011, alleging that “The War On Drugs Has Failed.” The commission was made up of 22 self-appointed members, including a number of prominent international politicians and writers. U.S. Surgeon General Regina Benjamin also released the first-ever National Prevention Strategy.[56]

On May 21, 2012, the U.S. Government published an updated version of its Drug Policy.[57] The director of ONDCP stated at the same time that this policy is something different from the “War on Drugs”.

At the same meeting, a declaration was signed by the representatives of Italy, the Russian Federation, Sweden, the United Kingdom and the United States along these lines: “Our approach must be a balanced one, combining effective enforcement to restrict the supply of drugs, with efforts to reduce demand and build recovery; supporting people to live a life free of addiction.”[59]

In March 2016 the International Narcotics Control Board stated that the International Drug Control treaties do not mandate a “war on drugs.”[60]

According to Human Rights Watch, the War on Drugs caused soaring arrest rates that disproportionately targeted African Americans due to various factors.[62] John Ehrlichman, an aide to Nixon, said that Nixon used the war on drugs to criminalize and disrupt black and hippie communities and their leaders.[63]

The present state of incarceration in the U.S. as a result of the war on drugs arrived in several stages. By 1971, various drug prohibitions had been in place for more than 50 years (e.g., since 1914 and 1937) with only a very small increase in inmates per 100,000 citizens. During the first nine years after Nixon coined the expression “War on Drugs”, statistics showed only a minor increase in the total number imprisoned.

After 1980, the situation began to change. In the 1980s, while the number of arrests for all crimes had risen by 28%, the number of arrests for drug offenses rose 126%.[64] The result of increased demand was the development of privatization and the for-profit prison industry.[65] The US Department of Justice, reporting on the effects of state initiatives, has stated that, from 1990 through 2000, “the increasing number of drug offenses accounted for 27% of the total growth among black inmates, 7% of the total growth among Hispanic inmates, and 15% of the growth among white inmates.” In addition to prison or jail, the United States provides for the deportation of many non-citizens convicted of drug offenses.[66]

In 1994, the New England Journal of Medicine reported that the “War on Drugs” resulted in the incarceration of one million Americans each year.[67] In 2008, the Washington Post reported that of 1.5 million Americans arrested each year for drug offenses, half a million would be incarcerated.[68] In addition, one in five black Americans would spend time behind bars due to drug laws.[68]

Federal and state policies also impose collateral consequences on those convicted of drug offenses, such as denial of public benefits or licenses, that are not applicable to those convicted of other types of crime.[69] In particular, the passage of the 1990 Solomon–Lautenberg amendment led many states to impose mandatory driver’s license suspensions (of at least 6 months) for persons committing a drug offense, regardless of whether any motor vehicle was involved.[70][71] Approximately 191,000 licenses were suspended in this manner in 2016, according to a Prison Policy Initiative report.[72]

In 1986, the U.S. Congress passed laws that created a 100 to 1 sentencing disparity for the trafficking or possession of crack when compared to penalties for trafficking of powder cocaine,[73][74][75][76] a disparity that has been widely criticized as discriminatory against minorities, mostly blacks, who were more likely to use crack than powder cocaine.[77] This 100:1 ratio had been required under federal law since 1986.[78] Persons convicted in federal court of possession of 5 grams of crack cocaine received a minimum mandatory sentence of 5 years in federal prison, while it took possession of 500 grams of powder cocaine to trigger the same sentence.[74][75] In 2010, the Fair Sentencing Act cut the sentencing disparity to 18:1.[77]

According to Human Rights Watch, crime statistics show that, in the United States in 1999, compared to non-minorities, African Americans were far more likely to be arrested for drug crimes, and received much stiffer penalties and sentences.[79]

Statistics from 1998 show that there were wide racial disparities in arrests, prosecutions, sentencing and deaths. African-American drug users made up 35% of drug arrests, 55% of convictions, and 74% of people sent to prison for drug possession crimes.[74] Nationwide, African Americans were sent to state prisons for drug offenses 13 times more often than members of other races,[80] even though they reportedly comprised only 13% of regular drug users.[74]

Anti-drug legislation over time has also displayed an apparent racial bias. University of Minnesota Professor and social justice author Michael Tonry writes, “The War on Drugs foreseeably and unnecessarily blighted the lives of hundreds of thousands of young disadvantaged black Americans and undermined decades of effort to improve the life chances of members of the urban black underclass.”[81]

In 1968, President Lyndon B. Johnson decided that the government needed to make an effort to curtail the social unrest that blanketed the country at the time. He decided to focus his efforts on illegal drug use, an approach which was in line with expert opinion on the subject at the time. In the 1960s, it was believed that at least half of the crime in the U.S. was drug related, and this number grew as high as 90 percent in the next decade.[82] He created the Reorganization Plan of 1968 which merged the Bureau of Narcotics and the Bureau of Drug Abuse to form the Bureau of Narcotics and Dangerous Drugs within the Department of Justice.[83] The belief during this time about drug use was summarized by journalist Max Lerner in his celebrated[citation needed] work America as a Civilization (1957):

As a case in point we may take the known fact of the prevalence of reefer and dope addiction in Negro areas. This is essentially explained in terms of poverty, slum living, and broken families, yet it would be easy to show the lack of drug addiction among other ethnic groups where the same conditions apply.[84]

Richard Nixon became president in 1969, and did not back away from the anti-drug precedent set by Johnson. Nixon began orchestrating drug raids nationwide to improve his “watchdog” reputation. Lois B. Defleur, a social historian who studied drug arrests during this period in Chicago, stated that “police administrators indicated they were making the kind of arrests the public wanted”. Additionally, some of Nixon’s newly created drug enforcement agencies would resort to illegal practices to make arrests as they tried to meet public demand for arrest numbers. From 1972 to 1973, the Office of Drug Abuse and Law Enforcement performed 6,000 drug arrests in 18 months, the majority of those arrested being black.[85]

The next two Presidents, Gerald Ford and Jimmy Carter, responded with programs that were essentially a continuation of their predecessors’. Shortly after Ronald Reagan became President in 1981, he delivered a speech on the topic. Reagan announced, “We’re taking down the surrender flag that has flown over so many drug efforts; we’re running up a battle flag.”[86] For his first five years in office, Reagan slowly strengthened drug enforcement by creating mandatory minimum sentencing and forfeiture of cash and real estate for drug offenses, policies far more detrimental to poor blacks than any other sector affected by the new laws.[citation needed]

Then, driven by the 1986 cocaine overdose of black basketball star Len Bias,[dubious] Reagan was able to pass the Anti-Drug Abuse Act through Congress. This legislation appropriated an additional $1.7 billion to fund the War on Drugs. More importantly, it established 29 new mandatory minimum sentences for drug offenses. In the entire history of the country up until that point, the legal system had only seen 55 minimum sentences in total.[87] A major stipulation of the new sentencing rules included different mandatory minimums for powder and crack cocaine. At the time of the bill, there was public debate as to the difference in potency and effect of powder cocaine, generally used by whites, and crack cocaine, generally used by blacks, with many believing that “crack” was substantially more powerful and addictive. Crack and powder cocaine are closely related chemicals, crack being a smokeable, freebase form of powdered cocaine hydrochloride which produces a shorter, more intense high while using less of the drug. This method is more cost-effective, and therefore more prevalent on the inner-city streets, while powder cocaine remains more popular in white suburbia. The Reagan administration began shoring up public opinion against “crack”, encouraging DEA official Robert Putnam to play up the harmful effects of the drug. Stories of “crack whores” and “crack babies” became commonplace; by 1986, Time had declared “crack” the issue of the year.[88] Riding the wave of public fervor, Reagan established much harsher sentencing for crack cocaine, handing down stiffer felony penalties for much smaller amounts of the drug.[89]

Reagan protégé and former Vice President George H. W. Bush was next to occupy the Oval Office, and the drug policy under his watch held true to his political background. Bush maintained the hard line drawn by his predecessor and former boss, increasing narcotics regulation when the First National Drug Control Strategy was issued by the Office of National Drug Control Policy in 1989.[90]

The next three presidents, Clinton, Bush and Obama, continued this trend, maintaining the War on Drugs as they inherited it upon taking office.[91] During this period of federal passivity, it was the states that initiated controversial legislation in the War on Drugs. Racial bias manifested itself in the states through such controversial policies as the “stop and frisk” police practices in New York City and the “three strikes” felony laws begun in California in 1994.[92]

In August 2010, President Obama signed the Fair Sentencing Act into law that dramatically reduced the 100-to-1 sentencing disparity between powder and crack cocaine, which disproportionately affected minorities.[93]

Commonly used illegal drugs include heroin, cocaine, methamphetamine, and marijuana.

Heroin is a highly addictive opiate. If caught selling or possessing heroin, a perpetrator can be charged with a felony, face two to four years in prison, and be fined up to $20,000.[94]

Crystal meth is composed of methamphetamine hydrochloride. It is marketed as either a white powder or in a solid (rock) form. The possession of crystal meth can result in a punishment varying from a fine to a jail sentence. As with other drug crimes, sentencing length may increase depending on the amount of the drug found in the possession of the defendant.[95]

Cocaine possession is illegal across the U.S., with the cheaper crack cocaine incurring even greater penalties. Possession means the accused knowingly has the drug on their person or in a bag such as a backpack or purse. For a first offense with no prior conviction, the penalty is a maximum of one year in prison, a $1,000 fine, or both. With a prior conviction, whether for a narcotic or cocaine, the penalty is two years in prison, a $2,500 fine, or both. With two or more prior possession convictions, the offender can be sentenced to 90 days in prison along with a $5,000 fine.[96]

Marijuana is the most popular illegal drug worldwide, and the punishment for possessing it is less than for possession of cocaine or heroin. In some U.S. states, the drug is legal. Over 80 million Americans have tried marijuana. The Criminal Defense Lawyer article claims that, depending on the person’s age and the amount in their possession, offenders may be fined and can sometimes plea bargain into a treatment program rather than going to prison. Penalties vary from state to state and with the amount of marijuana possessed.[97]

Some scholars have claimed that the phrase “War on Drugs” is propaganda cloaking an extension of earlier military or paramilitary operations.[9] Others have argued that large amounts of “drug war” foreign aid money, training, and equipment actually goes to fighting leftist insurgencies and is often provided to groups who themselves are involved in large-scale narco-trafficking, such as corrupt members of the Colombian military.[8]

From 1963 to the end of the Vietnam War in 1975, marijuana usage became common among U.S. soldiers in non-combat situations. Some servicemen also used heroin; many came home addicted but ended their heroin use after returning to the United States. In 1971, the U.S. military conducted a study of drug use among American servicemen and women. It found that daily usage rates for drugs on a worldwide basis were as low as two percent.[98] However, in the spring of 1971, two congressmen released an alarming report alleging that 15% of the servicemen in Vietnam were addicted to heroin. Marijuana use was also common in Vietnam. Soldiers who used drugs had more disciplinary problems. Frequent drug use had become an issue for the commanders in Vietnam; in 1971 it was estimated that 30,000 servicemen were addicted to drugs, most of them to heroin.[11]

From 1971 on, therefore, returning servicemen were required to take a mandatory heroin test. Servicemen who tested positive were not allowed to return home until they had passed the test with a negative result. The program also offered treatment for heroin addicts.[99]

Elliot Borin’s article “The U.S. Military Needs its Speed”, published in Wired on February 10, 2003, reports:

But the Defense Department, which distributed millions of amphetamine tablets to troops during World War II, Vietnam and the Gulf War, soldiers on, insisting that they are not only harmless but beneficial.

In a news conference held in connection with Schmidt and Umbach’s Article 32 hearing, Dr. Pete Demitry, an Air Force physician and a pilot, claimed that the “Air Force has used (Dexedrine) safely for 60 years” with “no known speed-related mishaps.”

The need for speed, Demitry added, “is a life-and-death issue for our military.”[100]

One of the first anti-drug efforts in the realm of foreign policy was President Nixon’s Operation Intercept, announced in September 1969, targeted at reducing the amount of cannabis entering the United States from Mexico. The effort began with an intense inspection crackdown that resulted in a near shutdown of cross-border traffic.[101] Because the burden on border crossings was controversial in border states, the effort only lasted twenty days.[102]

On December 20, 1989, the United States invaded Panama as part of Operation Just Cause, which involved 25,000 American troops. Gen. Manuel Noriega, head of the government of Panama, had been giving military assistance to Contra groups in Nicaragua at the request of the U.S., which, in exchange, tolerated his drug trafficking activities, which it had known about since the 1960s.[103][104] When the Drug Enforcement Administration (DEA) tried to indict Noriega in 1971, the CIA prevented it from doing so.[103] The CIA, which was then directed by future president George H. W. Bush, provided Noriega with hundreds of thousands of dollars per year as payment for his work in Latin America.[103] When CIA pilot Eugene Hasenfus was shot down over Nicaragua by the Sandinistas, documents aboard the plane revealed many of the CIA’s activities in Latin America, and the CIA’s connections with Noriega became a public relations “liability” for the U.S. government, which finally allowed the DEA to indict him for drug trafficking, after decades of tolerating his drug operations.[103] The purpose of Operation Just Cause was to capture Noriega and overthrow his government; Noriega found temporary asylum in the Papal Nunciature and surrendered to U.S. soldiers on January 3, 1990.[105] He was sentenced by a court in Miami to 45 years in prison.[103]

As part of its Plan Colombia program, the United States government currently provides hundreds of millions of dollars per year of military aid, training, and equipment to Colombia,[106] to fight left-wing guerrillas such as the Revolutionary Armed Forces of Colombia (FARC-EP), which has been accused of being involved in drug trafficking.[107]

Private U.S. corporations have signed contracts to carry out anti-drug activities as part of Plan Colombia. DynCorp, the largest private company involved, was among those contracted by the State Department, while others signed contracts with the Defense Department.[108]

Colombian military personnel have received extensive counterinsurgency training from U.S. military and law enforcement agencies, including the School of the Americas (SOA). Author Grace Livingstone has stated that more Colombian SOA graduates have been implicated in human rights abuses than known SOA graduates from any other country. All of the commanders of the brigades highlighted in a 2001 Human Rights Watch report on Colombia were graduates of the SOA, including the III brigade in Valle del Cauca, where the 2001 Alto Naya Massacre occurred. U.S.-trained officers have been accused of being directly or indirectly involved in many atrocities during the 1990s, including the Massacre of Trujillo and the 1997 Mapiripán Massacre.

In 2000, the Clinton administration initially waived all but one of the human rights conditions attached to Plan Colombia, considering such aid as crucial to national security at the time.[109]

The efforts of the U.S. and Colombian governments have been criticized for focusing on fighting leftist guerrillas in southern regions without applying enough pressure on the right-wing paramilitaries who continue drug smuggling operations in the north of the country.[110][111] Human Rights Watch, congressional committees and other entities have documented the existence of connections between members of the Colombian military and the AUC, which the U.S. government has listed as a terrorist group, and have documented that Colombian military personnel have committed human rights abuses which would make them ineligible for U.S. aid under current laws.[citation needed]

In 2010, the Washington Office on Latin America concluded that both Plan Colombia and the Colombian government’s security strategy “came at a high cost in lives and resources, only did part of the job, are yielding diminishing returns and have left important institutions weaker.”[112]

A 2014 report by the RAND Corporation, issued to analyze viable strategies for the Mexican drug war in light of successes experienced in Colombia, noted:

Between 1999 and 2002, the United States gave Colombia $2.04 billion in aid, 81 percent of which was for military purposes, placing Colombia just below Israel and Egypt among the largest recipients of U.S. military assistance. Colombia increased its defense spending from 3.2 percent of gross domestic product (GDP) in 2000 to 4.19 percent in 2005. Overall, the results were extremely positive. Greater spending on infrastructure and social programs helped the Colombian government increase its political legitimacy, while improved security forces were better able to consolidate control over large swaths of the country previously overrun by insurgents and drug cartels.

It also notes that, “Plan Colombia has been widely hailed as a success, and some analysts believe that, by 2010, Colombian security forces had finally gained the upper hand once and for all.”[113]

The Mérida Initiative is a security cooperation agreement between the United States, the government of Mexico, and the countries of Central America. It was approved on June 30, 2008, and its stated aim is combating the threats of drug trafficking and transnational crime. The Mérida Initiative appropriated $1.4 billion in a three-year commitment (2008–2010) to the Mexican government for military and law enforcement training and equipment, as well as technical advice and training to strengthen the national justice systems. The initiative targeted many high-ranking government officials, but it failed to address the thousands of Central Americans who had to flee their countries because of the danger they faced every day as a result of the war on drugs. There is still no plan that addresses these people. No weapons are included in the plan.[114][115]

The United States regularly sponsors the spraying of large amounts of herbicides such as glyphosate over the jungles of Central and South America as part of its drug eradication programs. Environmental consequences resulting from aerial fumigation have been criticized as detrimental to some of the world’s most fragile ecosystems;[116] the same aerial fumigation practices are further credited with causing health problems in local populations.[117]

In 2012, the U.S. sent DEA agents to Honduras to assist security forces in counternarcotics operations. Honduras has been a major stop for drug traffickers, who use small planes and landing strips hidden throughout the country to transport drugs. The U.S. government made agreements with several Latin American countries to share intelligence and resources to counter the drug trade. DEA agents, working with other U.S. agencies such as the State Department, the CBP, and Joint Task Force-Bravo, assisted Honduras troops in conducting raids on traffickers’ sites of operation.[118]

The War on Drugs has been a highly contentious issue since its inception. A poll on October 2, 2008, found that three in four Americans believed that the War on Drugs was failing.[119]

At a meeting in Guatemala in 2012, three former presidents from Guatemala, Mexico and Colombia said that the war on drugs had failed and that they would propose a discussion on alternatives, including decriminalization, at the Summit of the Americas in April of that year.[120] Guatemalan President Otto Pérez Molina said that the war on drugs was exacting too high a price on the lives of Central Americans and that it was time to “end the taboo on discussing decriminalization”.[121] At the summit, the government of Colombia pushed for the most far-reaching change to drugs policy since the war on narcotics was declared by Nixon four decades prior, citing the catastrophic effects it had had in Colombia.[122]

Several critics have compared the wholesale incarceration of the dissenting minority of drug users to the wholesale incarceration of other minorities in history. Psychiatrist Thomas Szasz, for example, wrote in 1997: “Over the past thirty years, we have replaced the medical-political persecution of illegal sex users (‘perverts’ and ‘psychopaths’) with the even more ferocious medical-political persecution of illegal drug users.”[123]

Penalties for drug crimes among American youth almost always involve permanent or semi-permanent removal from educational opportunities, stripping of voting rights, and the creation of criminal records that make employment more difficult.[124] Thus, some authors maintain that the War on Drugs has resulted in the creation of a permanent underclass of people who have few educational or job opportunities, often as a result of being punished for drug offenses committed in attempts to earn a living despite having no education or job opportunities.[124]

According to a 2008 study published by Harvard economist Jeffrey A. Miron, the annual savings on enforcement and incarceration costs from the legalization of drugs would amount to roughly $41.3 billion, with $25.7 billion saved by the states and over $15.6 billion accruing to the federal government. Miron further estimated at least $46.7 billion in tax revenue based on rates comparable to those on tobacco and alcohol ($8.7 billion from marijuana, $32.6 billion from cocaine and heroin, remainder from other drugs).[125]
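As a rough consistency check, the component figures quoted above do sum to the study’s totals. The variable names below are illustrative, not taken from Miron’s paper:

```python
# Figures from the 2008 Miron study as quoted above, in billions of USD.
state_savings = 25.7     # enforcement/incarceration savings to the states
federal_savings = 15.6   # savings to the federal government
total_savings = state_savings + federal_savings
print(round(total_savings, 1))  # 41.3, matching the quoted total

marijuana_tax = 8.7
cocaine_heroin_tax = 32.6
total_tax = 46.7         # quoted overall tax-revenue estimate
other_drugs_tax = total_tax - marijuana_tax - cocaine_heroin_tax
print(round(other_drugs_tax, 1))  # 5.4 billion attributed to other drugs
```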

Low taxation in Central American countries has been credited with weakening the region’s response to drug traffickers. Many cartels, especially Los Zetas, have taken advantage of the limited resources of these nations. In 2010, tax revenue in El Salvador, Guatemala, and Honduras composed just 13.53% of GDP; by comparison, taxes were 18.6% of GDP in Chile and 26.9% in the U.S. However, direct taxes on income are very hard to enforce, and in some cases tax evasion is seen as a national pastime.[126]

The status of coca and coca growers has become an intense political issue in several countries, including Colombia and particularly Bolivia, where the president, Evo Morales, a former coca growers’ union leader, has promised to legalise the traditional cultivation and use of coca.[127] Indeed, legalization efforts have yielded some successes under the Morales administration when combined with aggressive and targeted eradication efforts. The country saw a 12–13% decline in coca cultivation[127] in 2011 under Morales, who has used coca growers’ federations to ensure compliance with the law rather than providing a primary role for security forces.[127]

The coca eradication policy has been criticised for its negative impact on the livelihood of coca growers in South America. In many areas of South America the coca leaf has traditionally been chewed and used in tea and for religious, medicinal and nutritional purposes by locals.[128] For this reason many insist that the illegality of traditional coca cultivation is unjust. In many areas the U.S. government and military have forced the eradication of coca without providing any meaningful alternative crop for farmers, and have additionally destroyed many of their food or market crops, leaving them starving and destitute.[128]

The CIA, DEA, State Department, and several other U.S. government agencies have been alleged to have relations with various groups which are involved in drug trafficking.

Senator John Kerry’s 1988 U.S. Senate Committee on Foreign Relations report on Contra drug links concludes that members of the U.S. State Department “who provided support for the Contras are involved in drug trafficking… and elements of the Contras themselves knowingly receive financial and material assistance from drug traffickers.”[129] The report further states that “the Contra drug links include… payments to drug traffickers by the U.S. State Department of funds authorized by the Congress for humanitarian assistance to the Contras, in some cases after the traffickers had been indicted by federal law enforcement agencies on drug charges, in others while traffickers were under active investigation by these same agencies.”

In 1996, journalist Gary Webb published reports in the San Jose Mercury News, and later in his book Dark Alliance, detailing how Contras had been involved in distributing crack cocaine into Los Angeles while receiving money from the CIA.[citation needed] The Contras used money from drug trafficking to buy weapons.[citation needed]

Webb’s premise regarding the U.S. government connection was initially attacked by the media. It is now widely accepted that Webb’s main assertion of government “knowledge of drug operations, and collaboration with and protection of known drug traffickers” was correct.[130][not in citation given] In 1998, CIA Inspector General Frederick Hitz published a two-volume report[131] that, while seemingly refuting Webb’s claims of knowledge and collaboration in its conclusions, did not deny them in its body.[citation needed] Hitz went on to admit CIA improprieties in the affair in testimony to a House congressional committee. Mainstream media have since reversed their position on Webb’s work, acknowledging his contribution to exposing a scandal they had ignored.

According to Rodney Campbell, an editorial assistant to Nelson Rockefeller, during World War II, the United States Navy, concerned that strikes and labor disputes in U.S. eastern shipping ports would disrupt wartime logistics, released the mobster Lucky Luciano from prison, and collaborated with him to help the mafia take control of those ports. Labor union members were terrorized and murdered by mafia members as a means of preventing labor unrest and ensuring smooth shipping of supplies to Europe.[132]

According to Alexander Cockburn and Jeffrey St. Clair, in order to prevent Communist party members from being elected in Italy following World War II, the CIA worked closely with the Sicilian Mafia, protecting them and assisting in their worldwide heroin smuggling operations. The mafia was in conflict with leftist groups and was involved in assassinating, torturing, and beating leftist political organizers.[133]

In 1986, the US Defense Department funded a two-year study by the RAND Corporation, which found that the use of the armed forces to interdict drugs coming into the United States would have little or no effect on cocaine traffic and might, in fact, raise the profits of cocaine cartels and manufacturers. The 175-page study, “Sealing the Borders: The Effects of Increased Military Participation in Drug Interdiction”, was prepared by seven researchers, mathematicians and economists at the National Defense Research Institute, a branch of RAND, and was released in 1988. The study noted that seven prior studies in the preceding nine years, including one by the Center for Naval Research and the Office of Technology Assessment, had come to similar conclusions. Interdiction efforts, using current armed forces resources, would have almost no effect on cocaine importation into the United States, the report concluded.[135]

During the early-to-mid-1990s, the Clinton administration ordered and funded a major cocaine policy study, again by RAND. The RAND Drug Policy Research Center study concluded that $3 billion should be switched from federal and local law enforcement to treatment. The report said that treatment is the cheapest way to cut drug use, stating that drug treatment is twenty-three times more effective than the supply-side “war on drugs”.[136]

The National Research Council Committee on Data and Research for Policy on Illegal Drugs published its findings in 2001 on the efficacy of the drug war. The NRC Committee found that existing studies on efforts to address drug usage and smuggling, from U.S. military operations to eradicate coca fields in Colombia, to domestic drug treatment centers, have all been inconclusive, if the programs have been evaluated at all: “The existing drug-use monitoring systems are strikingly inadequate to support the full range of policy decisions that the nation must make…. It is unconscionable for this country to continue to carry out a public policy of this magnitude and cost without any way of knowing whether and to what extent it is having the desired effect.”[137] The study, though not ignored by the press, was ignored by top-level policymakers, leading Committee Chair Charles Manski to conclude, as one observer notes, that “the drug war has no interest in its own results”.[138]

In mid-1995, the US government tried to reduce the supply of methamphetamine precursors to disrupt the market of this drug. According to a 2009 study, this effort was successful, but its effects were largely temporary.[139]

During alcohol prohibition, the period from 1920 to 1933, alcohol use initially fell but began to increase as early as 1922. It has been extrapolated that even if prohibition had not been repealed in 1933, alcohol consumption would have quickly surpassed pre-prohibition levels.[140] One argument against the War on Drugs is that it uses similar measures as Prohibition and is no more effective.

In the six years from 2000 to 2006, the U.S. spent $4.7 billion on Plan Colombia, an effort to eradicate coca production in Colombia. The main result of this effort was to shift coca production into more remote areas and force other forms of adaptation. The overall acreage cultivated for coca in Colombia at the end of the six years was found to be the same, after the U.S. Drug Czar’s office announced a change in measuring methodology in 2005 and included new areas in its surveys.[141] Cultivation in the neighboring countries of Peru and Bolivia increased, an outcome often described as like squeezing a balloon.[142]

Richard Davenport-Hines, in his book The Pursuit of Oblivion,[143] criticized the efficacy of the War on Drugs by pointing out that

10–15% of illicit heroin and 30% of illicit cocaine is intercepted. Drug traffickers have gross profit margins of up to 300%. At least 75% of illicit drug shipments would have to be intercepted before the traffickers’ profits were hurt.
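The 75% figure follows from simple arithmetic: a gross margin of 300% means a shipment that gets through sells for roughly four times its cost, so profits are wiped out only once about three of every four shipments are seized. A minimal sketch of that reasoning, under the simplifying assumption that traffickers pay the full cost of every shipment but earn revenue only on those that get through (the function name is illustrative):

```python
def breakeven_interception(gross_margin):
    """Fraction of shipments that must be seized before expected
    revenue falls to cost. A gross margin of 3.0 (i.e. 300%) means
    each delivered shipment sells for 4x its cost."""
    revenue_multiple = 1 + gross_margin
    return 1 - 1 / revenue_multiple

print(breakeven_interception(3.0))  # 0.75, matching the quoted 75%
```

By the same logic, the observed 10–30% interception rates quoted above fall far short of the break-even threshold.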

Alberto Fujimori, president of Peru from 1990 to 2000, described U.S. foreign drug policy as “failed” on grounds that “for 10 years, there has been a considerable sum invested by the Peruvian government and another sum on the part of the American government, and this has not led to a reduction in the supply of coca leaf offered for sale. Rather, in the 10 years from 1980 to 1990, it grew 10-fold.”[144]

At least 500 economists, including Nobel Laureates Milton Friedman,[145] George Akerlof and Vernon L. Smith, have noted that reducing the supply of marijuana without reducing the demand causes the price, and hence the profits of marijuana sellers, to go up, according to the laws of supply and demand.[146] The increased profits encourage the producers to produce more drugs despite the risks, providing a theoretical explanation for why attacks on drug supply have failed to have any lasting effect. The aforementioned economists published an open letter to President George W. Bush stating “We urge…the country to commence an open and honest debate about marijuana prohibition… At a minimum, this debate will force advocates of current policy to show that prohibition has benefits sufficient to justify the cost to taxpayers, foregone tax revenues and numerous ancillary consequences that result from marijuana prohibition.”
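The economists’ supply-and-demand argument can be illustrated with a toy constant-elasticity demand curve: when demand is inelastic (elasticity below 1), a supply crackdown that raises the price increases sellers’ total revenue rather than reducing it. The parameters here are illustrative and not drawn from any cited study:

```python
def seller_revenue(price, k=100.0, elasticity=0.5):
    """Total revenue under the demand curve Q = k * P**(-elasticity).
    With elasticity < 1 (inelastic demand), quantity falls more slowly
    than price rises, so revenue P * Q increases with price."""
    quantity = k * price ** (-elasticity)
    return price * quantity

# A crackdown that doubles the street price raises total revenue:
print(seller_revenue(10.0) < seller_revenue(20.0))  # True
```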

The 2008 declaration from the World Forum Against Drugs states that a balanced policy of drug abuse prevention, education, treatment, law enforcement, research, and supply reduction provides the most effective platform for reducing drug abuse and its associated harms, and calls on governments to consider demand reduction one of their first priorities in the fight against drug abuse.[147]

Despite over $7 billion spent annually on arresting[148] and prosecuting nearly 800,000 people across the country for marijuana offenses in 2005[citation needed] (FBI Uniform Crime Reports), the federally funded Monitoring the Future Survey reports that about 85% of high school seniors find marijuana “easy to obtain”. That figure has remained virtually unchanged since 1975, never dropping below 82.7% in three decades of national surveys.[149] The Drug Enforcement Administration states that the number of users of marijuana in the U.S. declined between 2000 and 2005 even with many states passing new medical marijuana laws making access easier,[150] though usage rates remain higher than they were in the 1990s according to the National Survey on Drug Use and Health.[151]

ONDCP stated in April 2011 that there has been a 46 percent drop in cocaine use among young adults over the past five years, and a 65 percent drop in the rate of people testing positive for cocaine in the workplace since 2006.[152] At the same time, a 2007 study found that up to 35% of college undergraduates used stimulants not prescribed to them.[153]

A 2013 study found that prices of heroin, cocaine and cannabis had decreased from 1990 to 2007, but the purity of these drugs had increased during the same time.[154]

The War on Drugs is often called a policy failure.[155][156][157][158][159]

The legality of the War on Drugs has been challenged on four main grounds in the U.S.

Several authors believe that the United States’ federal and state governments have chosen the wrong methods for combating the distribution of illicit substances. Aggressive, heavy-handed enforcement funnels individuals through courts and prisons; instead of treating the cause of the addiction, the focus of government efforts has been on punishment. By making drugs illegal rather than regulating them, the War on Drugs creates a highly profitable black market. Jefferson Fish has edited scholarly collections of articles offering a wide variety of public-health-based and rights-based alternative drug policies.[160][161][162]

In 2000, the United States drug-control budget reached $18.4 billion,[163] nearly half of which was spent on law enforcement while only one sixth was spent on treatment. In 2003, 53 percent of the requested drug control budget was for enforcement, 29 percent for treatment, and 18 percent for prevention.[164] The state of New York, in particular, designated 17 percent of its budget for substance-abuse-related spending. Of that, a mere one percent was put towards prevention, treatment, and research.

In a survey taken by the Substance Abuse and Mental Health Services Administration (SAMHSA), it was found that substance abusers who remain in treatment longer are less likely to resume their former drug habits. Of the people studied, 66 percent were cocaine users. After long-term in-patient treatment, only 22 percent returned to the use of cocaine. Treatment had reduced the number of cocaine abusers by two-thirds.[163] By spending the majority of its money on law enforcement, the federal government had underestimated the true value of drug-treatment facilities and their benefit in reducing the number of addicts in the U.S.

In 2004 the federal government issued the National Drug Control Strategy. It supported programs designed to expand treatment options, enhance treatment delivery, and improve treatment outcomes. For example, the Strategy provided SAMHSA with a $100.6 million grant to put towards their Access to Recovery (ATR) initiative. ATR is a program that provides vouchers to addicts to provide them with the means to acquire clinical treatment or recovery support. The project’s goals are to expand capacity, support client choice, and increase the array of faith-based and community based providers for clinical treatment and recovery support services.[165] The ATR program will also provide a more flexible array of services based on the individual’s treatment needs.

The 2004 Strategy additionally declared a significant $32 million increase in the Drug Courts Program, which provides drug offenders with alternatives to incarceration. As a substitute for imprisonment, drug courts identify substance-abusing offenders and place them under strict court monitoring and community supervision, as well as provide them with long-term treatment services.[166] According to a report issued by the National Drug Court Institute, drug courts have a wide array of benefits, with only 16.4 percent of the nation’s drug court graduates rearrested and charged with a felony within one year of completing the program (versus the 44.1 percent of released prisoners who end up back in prison within one year). Additionally, enrolling an addict in a drug court program costs much less than incarcerating one in prison.[167] According to the Bureau of Prisons, the fee to cover the average cost of incarceration for federal inmates in 2006 was $24,440.[168] The annual cost of receiving treatment in a drug court program ranges from $900 to $3,500. Drug courts in New York State alone saved $2.54 million in incarceration costs.[167]
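Taking the figures above at face value, the per-person arithmetic is straightforward. The variable names are illustrative:

```python
# Cost figures quoted above, in USD per person per year.
incarceration_cost = 24440       # BOP average cost per federal inmate, 2006
drug_court_cost = (900, 3500)    # reported annual range per drug-court participant

# Even at the top of the drug-court cost range, diverting one offender
# from prison to a drug court saves over $20,000 per year:
min_saving = incarceration_cost - drug_court_cost[1]
max_saving = incarceration_cost - drug_court_cost[0]
print(min_saving, max_saving)  # 20940 23540
```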

Describing the failure of the War on Drugs, New York Times columnist Eduardo Porter noted:

Jeffrey Miron, an economist at Harvard who studies drug policy closely, has suggested that legalizing all illicit drugs would produce net benefits to the United States of some $65 billion a year, mostly by cutting public spending on enforcement as well as through reduced crime and corruption. A study by analysts at the RAND Corporation, a California research organization, suggested that if marijuana were legalized in California and the drug spilled from there to other states, Mexican drug cartels would lose about a fifth of their annual income of some $6.5 billion from illegal exports to the United States.[169]

Many believe that the War on Drugs has been costly and ineffective largely because inadequate emphasis is placed on treatment of addiction. The United States leads the world in both recreational drug usage and incarceration rates. 70% of men arrested in metropolitan areas test positive for an illicit substance,[170] and 54% of all men incarcerated will be repeat offenders.[171]

There are also programs in the United States to combat the public health risks faced by injecting drug users, such as needle exchange programmes, which provide users with new needles in exchange for used ones in order to prevent needle sharing.


Go here to see the original:

War on drugs – Wikipedia

Philippines War on Drugs | Human Rights Watch

Tilted election playing field in Turkey; European Court of Justice confirms rights of same-sex couples; Philippine policepromoting abusers; Vietnam’s cyber security law; Nigerian military trying to smear Amnesty International; Paris names imprisoned Bahrainrights activist Nabeel Rajaban honorary citizen; Intimidation ofjournalists in the US; Brutal US treatment of refugees; and Russia’s World Cup amid Syria slaughter.



War on Drugs | United States history | Britannica.com

War on Drugs, the effort in the United States since the 1970s to combat illegal drug use by greatly increasing penalties, enforcement, and incarceration for drug offenders.

The War on Drugs began in June 1971 when U.S. Pres. Richard Nixon declared drug abuse to be "public enemy number one" and increased federal funding for drug-control agencies and drug-treatment efforts. In 1973 the Drug Enforcement Administration was created out of the merger of the Office for Drug Abuse Law Enforcement, the Bureau of Narcotics and Dangerous Drugs, and the Office of Narcotics Intelligence to consolidate federal efforts to control drug abuse.

The War on Drugs was a relatively small component of federal law-enforcement efforts until the presidency of Ronald Reagan, which began in 1981. Reagan greatly expanded the reach of the drug war, and his focus on criminal punishment over treatment led to a massive increase in incarcerations for nonviolent drug offenses, from 50,000 in 1980 to 400,000 in 1997. In 1984 his wife, Nancy, spearheaded another facet of the War on Drugs with her Just Say No campaign, a privately funded effort to educate schoolchildren on the dangers of drug use. The expansion of the War on Drugs was in many ways driven by increased media coverage of, and resulting public nervousness over, the crack epidemic that arose in the early 1980s. This heightened concern over illicit drug use helped drive political support for Reagan's hard-line stance on drugs. The U.S. Congress passed the Anti-Drug Abuse Act of 1986, which allocated $1.7 billion to the War on Drugs and established a series of mandatory minimum prison sentences for various drug offenses. A notable feature of mandatory minimums was the massive gap between the amounts of crack and of powder cocaine that resulted in the same minimum sentence: possession of five grams of crack led to an automatic five-year sentence, while it took possession of 500 grams of powder cocaine to trigger that sentence. Since approximately 80% of crack users were African American, mandatory minimums led to an unequal increase in incarceration rates for nonviolent black drug offenders, as well as to claims that the War on Drugs was a racist institution.

Concerns over the effectiveness of the War on Drugs and increased awareness of the racial disparity of the punishments meted out by it led to decreased public support for the most draconian aspects of the drug war during the early 21st century. Consequently, reforms were enacted during that time, such as the legalization of recreational marijuana in a number of states and the passage of the Fair Sentencing Act of 2010, which reduced the disparity in crack-to-powder possession thresholds for minimum sentences from 100-to-1 to 18-to-1. While the War on Drugs is still technically being waged, it is waged at a much less intense level than it was during its peak in the 1980s.

Read the original:

War on Drugs | United States history | Britannica.com

A Brief History of the Drug War | Drug Policy Alliance

This video from hip hop legend Jay Z and acclaimed artist Molly Crabapple depicts the drug war's devastating impact on the Black community from decades of biased law enforcement.

The video traces the drug war from President Nixon to the draconian Rockefeller Drug Laws to the emerging aboveground marijuana market that is poised to make legal millions for wealthy investors doing the same thing that generations of people of color have been arrested and locked up for. After you watch the video, read on to learn more about the discriminatory history of the war on drugs.

Many currently illegal drugs, such as marijuana, opium, coca, and psychedelics, have been used for thousands of years for both medical and spiritual purposes. So why are some drugs legal and other drugs illegal today? It's not based on any scientific assessment of the relative risks of these drugs, but has everything to do with who is associated with them.

The first anti-opium laws in the 1870s were directed at Chinese immigrants. The first anti-cocaine laws in the early 1900s were directed at black men in the South. The first anti-marijuana laws, in the Midwest and the Southwest in the 1910s and 20s, were directed at Mexican migrants and Mexican Americans. Today, Latino and especially black communities are still subject to wildly disproportionate drug enforcement and sentencing practices.

In the 1960s, as drugs became symbols of youthful rebellion, social upheaval, and political dissent, the government halted scientific research to evaluate their medical safety and efficacy.

In June 1971, President Nixon declared a war on drugs. He dramatically increased the size and presence of federal drug control agencies, and pushed through measures such as mandatory sentencing and no-knock warrants.

A top Nixon aide, John Ehrlichman, later admitted: "You want to know what this was really all about. The Nixon campaign in 1968, and the Nixon White House after that, had two enemies: the antiwar left and black people. You understand what I'm saying. We knew we couldn't make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities. We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did."

Nixon temporarily placed marijuana in Schedule One, the most restrictive category of drugs, pending review by a commission he appointed, led by Republican Pennsylvania Governor Raymond Shafer.

In 1972, the commission unanimously recommended decriminalizing the possession and distribution of marijuana for personal use. Nixon ignored the report and rejected its recommendations.

Between 1973 and 1977, however, eleven states decriminalized marijuana possession. In January 1977, President Jimmy Carter was inaugurated on a campaign platform that included marijuana decriminalization. In October 1977, the Senate Judiciary Committee voted to decriminalize possession of up to an ounce of marijuana for personal use.

Within just a few years, though, the tide had shifted. Proposals to decriminalize marijuana were abandoned as parents became increasingly concerned about high rates of teen marijuana use. Marijuana was ultimately caught up in a broader cultural backlash against the perceived permissiveness of the 1970s.

The presidency of Ronald Reagan marked the start of a long period of skyrocketing rates of incarceration, largely thanks to his unprecedented expansion of the drug war. The number of people behind bars for nonviolent drug law offenses increased from 50,000 in 1980 to over 400,000 by 1997.

Public concern about illicit drug use built throughout the 1980s, largely due to media portrayals of people addicted to the smokeable form of cocaine dubbed crack. Soon after Ronald Reagan took office in 1981, his wife, Nancy Reagan, began a highly-publicized anti-drug campaign, coining the slogan “Just Say No.”

This set the stage for the zero tolerance policies implemented in the mid-to-late 1980s. Los Angeles Police Chief Daryl Gates, who believed that casual drug users should be taken out and shot, founded the DARE drug education program, which was quickly adopted nationwide despite the lack of evidence of its effectiveness. The increasingly harsh drug policies also blocked the expansion of syringe access programs and other harm reduction policies to reduce the rapid spread of HIV/AIDS.

In the late 1980s, a political hysteria about drugs led to the passage of draconian penalties in Congress and state legislatures that rapidly increased the prison population. In 1985, the proportion of Americans polled who saw drug abuse as the nation's "number one problem" was just 2-6 percent. The figure grew through the remainder of the 1980s until, in September 1989, it reached a remarkable 64 percent, one of the most intense fixations by the American public on any issue in polling history. Within less than a year, however, the figure plummeted to less than 10 percent, as the media lost interest. The draconian policies enacted during the hysteria remained, however, and continued to result in escalating levels of arrests and incarceration.

Although Bill Clinton advocated for treatment instead of incarceration during his 1992 presidential campaign, after his first few months in the White House he reverted to the drug war strategies of his Republican predecessors by continuing to escalate the drug war. Notoriously, Clinton rejected a U.S. Sentencing Commission recommendation to eliminate the disparity between crack and powder cocaine sentences.

He also rejected, with the encouragement of drug czar General Barry McCaffrey, Health Secretary Donna Shalala's advice to end the federal ban on funding for syringe access programs. Yet, a month before leaving office, Clinton asserted in a Rolling Stone interview that "we really need a re-examination of our entire policy on imprisonment" of people who use drugs, and said that marijuana use "should be decriminalized."

At the height of the drug war hysteria in the late 1980s and early 1990s, a movement emerged seeking a new approach to drug policy. In 1987, Arnold Trebach and Kevin Zeese founded the Drug Policy Foundation, describing it as "the loyal opposition to the war on drugs." Prominent conservatives such as William Buckley and Milton Friedman had long advocated for ending drug prohibition, as had civil libertarians such as longtime ACLU Executive Director Ira Glasser. In the late 1980s they were joined by Baltimore Mayor Kurt Schmoke, Federal Judge Robert Sweet, Princeton professor Ethan Nadelmann, and other activists, scholars and policymakers.

In 1994, Nadelmann founded The Lindesmith Center as the first U.S. project of George Soros's Open Society Institute. In 2000, the growing Center merged with the Drug Policy Foundation to create the Drug Policy Alliance.

George W. Bush arrived in the White House as the drug war was running out of steam, yet he allocated more money than ever to it. His drug czar, John Walters, zealously focused on marijuana and launched a major campaign to promote student drug testing. While rates of illicit drug use remained constant, overdose fatalities rose rapidly.

The era of George W. Bush also witnessed the rapid escalation of the militarization of domestic drug law enforcement. By the end of Bush's term, there were about 40,000 paramilitary-style SWAT raids on Americans every year, mostly for nonviolent drug law offenses, often misdemeanors. While federal reform mostly stalled under Bush, state-level reforms finally began to slow the growth of the drug war.

Politicians now routinely admit to having used marijuana, and even cocaine, when they were younger. When Michael Bloomberg was questioned during his 2001 mayoral campaign about whether he had ever used marijuana, he said, "You bet I did and I enjoyed it." Barack Obama also candidly discussed his prior cocaine and marijuana use: "When I was a kid, I inhaled frequently. That was the point."

Public opinion has shifted dramatically in favor of sensible reforms that expand health-based approaches while reducing the role of criminalization in drug policy.

Marijuana reform has gained unprecedented momentum throughout the Americas. Alaska, California, Colorado, Nevada, Oregon, Maine, Massachusetts, Washington State, and Washington D.C. have legalized marijuana for adults. In December 2013, Uruguay became the first country in the world to legally regulate marijuana. In Canada, Prime Minister Justin Trudeau plans to legalize marijuana for adults by 2018.

In response to a worsening overdose epidemic, dozens of U.S. states passed laws to increase access to the overdose antidote, naloxone, as well as 911 Good Samaritan laws to encourage people to seek medical help in the event of an overdose.

Yet the assault on American citizens and others continues, with 700,000 people still arrested for marijuana offenses each year and almost 500,000 people still behind bars for nothing more than a drug law violation.

President Obama, despite supporting several successful policy changes (such as reducing the crack/powder sentencing disparity, ending the ban on federal funding for syringe access programs, and ending federal interference with state medical marijuana laws), did not shift the majority of drug policy funding to a health-based approach.

Now, the new administration is threatening to take us backward toward a 1980s-style drug war. President Trump is calling for a wall to keep drugs out of the country, and Attorney General Jeff Sessions has made it clear that he does not support the sovereignty of states to legalize marijuana, and believes "good people don't smoke marijuana."

Progress is inevitably slow, and even with an administration hostile to reform there is still unprecedented momentum behind drug policy reform in states and localities across the country. The Drug Policy Alliance and its allies will continue to advocate for health-based reforms such as marijuana legalization, drug decriminalization, safe consumption sites, naloxone access, bail reform, and more.

We look forward to a future where drug policies are shaped by science and compassion rather than political hysteria.


The War on Drugs (band) – Wikipedia

The War on Drugs is an American indie rock band from Philadelphia, Pennsylvania, formed in 2005. The band consists of Adam Granduciel (lyrics, vocals, guitar), David Hartley (bass), Robbie Bennett (keyboards), Charlie Hall (drums), Jon Natchez (saxophone, keyboards) and Anthony LaMarca (guitar).

Founded by close collaborators Granduciel and Kurt Vile, The War on Drugs released their debut studio album, Wagonwheel Blues, in 2008. Vile departed shortly after its release to focus on his solo career. The band’s second studio album Slave Ambient was released in 2011 to favorable reviews and extensive touring.

The band’s third album, Lost in the Dream, was released in 2014 following extensive touring and a period of loneliness and depression for primary songwriter Granduciel. The album was released to widespread critical acclaim and increased exposure. Previous collaborator Hall joined the band as its full-time drummer during the recording process, with saxophonist Natchez and additional guitarist LaMarca accompanying the band for its world tour. Signing to Atlantic Records, the six-piece band released their fourth album, A Deeper Understanding, in 2017, which won the Grammy Award for Best Rock Album at the 60th Annual Grammy Awards.

In 2003, frontman Adam Granduciel moved from Oakland, California to Philadelphia, where he met Kurt Vile, who had also recently moved back to Philadelphia after living in Boston for two years.[2] The duo subsequently began writing, recording and performing music together.[3] Vile stated, “Adam was the first dude I met when I moved back to Philadelphia in 2003. We saw eye-to-eye on a lot of things. I was obsessed with Bob Dylan at the time, and we totally geeked-out on that. We started playing together in the early days and he would be in my band, The Violators. Then, eventually I played in The War On Drugs.”[4]

Granduciel and Vile began playing together as The War on Drugs in 2005. Regarding the band's name, Granduciel noted, "My friend Julian and I came up with it a few years ago over a couple bottles of red wine and a few typewriters when we were living in Oakland. We were writing a lot back then, working on a dictionary, and it just came out and we were like, 'hey, good band name', so eventually when I moved to Philadelphia and got a band together I used it. It was either that or The Rigatoni Danzas. I think we made the right choice. I always felt though that it was the kind of name I could record all sorts of different music under without any sort of predictability inherent in the name."[5]

While Vile and Granduciel formed the backbone of the band, they had a number of accompanists early in the group’s career, before finally settling on a lineup that added Charlie Hall as drummer/organist, Kyle Lloyd as drummer and Dave Hartley on bass.[6] Granduciel had previously toured and recorded with The Capitol Years, and Vile has several solo albums.[7] The group gave away its Barrel of Batteries EP for free early in 2008.[8] Their debut LP for Secretly Canadian, Wagonwheel Blues, was released in 2008.[9]

Following the album’s release, and subsequent European tour, Vile departed from the band to focus on his solo career, stating, “I only went on the first European tour when their album came out, and then I basically left the band. I knew if I stuck with that, it would be all my time and my goal was to have my own musical career.”[4] Fellow Kurt Vile & the Violators bandmate Mike Zanghi joined the band at this time, with Vile noting, “Mike was my drummer first and then when The War On Drugs’ first record came out I thought I was lending Mike to Adam for the European tour but then he just played with them all the time so I kind of had to like, while they were touring a lot, figure out my own thing.”[10]

The lineup underwent several changes, and by the end of 2008, Kurt Vile, Charlie Hall, and Kyle Lloyd had all exited the group. At that time Granduciel and Hartley were joined by drummer Mike Zanghi, whom Granduciel also played with in Kurt Vile’s backing band, the Violators.

After recording much of the band’s forthcoming studio album, Slave Ambient, Zanghi departed from the band in 2010. Drummer Steven Urgo subsequently joined the band, with keyboardist Robbie Bennett also joining at around this time. Regarding Zanghi’s exit, Granduciel noted: “I loved Mike, and I loved the sound of The Violators, but then he wasn’t really the sound of my band. But you have things like friendship, and he’s down to tour and he’s a great guy, but it wasn’t the sound of what this band was.”[11]

Slave Ambient was released to favorable reviews in 2011.[citation needed]

In 2012, Patrick Berkery replaced Urgo as the band’s drummer.[12]

On December 4, 2013 the band announced the upcoming release of its third studio album, Lost in the Dream (March 18, 2014). The band streamed the album in its entirety on NPR’s First Listen site for a week before its release.[13]

Lost in the Dream was featured as the Vinyl Me, Please record of the month in August 2014, in a limited edition pressing on mint-green vinyl.

In June 2015, The War on Drugs signed with Atlantic Records for a two-album deal.[14]

On Record Store Day, April 22, 2017, The War on Drugs released their new single "Thinking of a Place."[15] The single was produced by frontman Granduciel and Shawn Everett.[16] On April 28, 2017, The War on Drugs announced a fall 2017 tour in North America and Europe and that a new album was imminent.[17] On June 1, 2017, a new song, "Holding On", was released, and it was announced that the album would be titled A Deeper Understanding and would be released on August 25, 2017.[18]

The 2017 tour began in September, opening in the band's hometown of Philadelphia, and concluded in November in Sweden.[19]

A Deeper Understanding was nominated for the International Album of the Year award at the 2018 UK Americana Awards.[20]

At the 60th Annual Grammy Awards on January 28, 2018, A Deeper Understanding won the Grammy for Best Rock Album.[21]

Granduciel and Zanghi are both former members of founding guitarist Vile’s backing band The Violators, with Granduciel noting, “There was never, despite what lazy journalists have assumed, any sort of falling out, or resentment”[22] following Vile’s departure from The War on Drugs. In 2011, Vile stated, “When my record came out, I assumed Adam would want to focus on The War On Drugs but he came with us in The Violators when we toured the States. The Violators became a unit, and although the cast does rotate, we’ve developed an even tighter unity and sound. Adam is an incredible guitar player these days and there is a certain feeling [between us] that nobody else can tap into. We don’t really have to tell each other what to play, it just happens.”

Both Hartley and Granduciel contributed to singer-songwriter Sharon Van Etten’s fourth studio album, Are We There (2014). Hartley performs bass guitar on the entire album, with Granduciel contributing guitar on two tracks.

Granduciel is currently[when?] producing the new Sore Eros album. They have been recording it in Philadelphia and Los Angeles on and off for the past several years.[4]

In 2016, The War on Drugs contributed a cover of “Touch of Grey” for a Grateful Dead tribute album called Day of the Dead. The album was curated by The National’s Aaron and Bryce Dessner.[19]

Current members

Former members

Go here to see the original:

The War on Drugs (band) – Wikipedia

Mind uploading – Wikipedia

Whole brain emulation (WBE), mind upload or brain upload (sometimes called “mind copying” or “mind transfer”) is the hypothetical futuristic process of scanning the mental state (including long-term memory and “self”) of a particular brain substrate and copying it to a computer. The computer could then run a simulation model of the brain’s information processing, such that it responds in essentially the same way as the original brain (i.e., indistinguishable from the brain for all relevant purposes) and experiences having a conscious mind.[1][2][3]

Mind uploading may potentially be accomplished by either of two methods: copy-and-transfer, or the gradual replacement of neurons. In the case of the former method, mind uploading would be achieved by scanning and mapping the salient features of a biological brain, and then by copying, transferring, and storing that information state in a computer system or another computational device. The simulated mind could be within a virtual reality or simulated world, supported by an anatomic 3D body simulation model. Alternatively, the simulated mind could reside in a computer that is inside (or connected to) a (not necessarily humanoid) robot or a biological body in real life.[4]

Among some futurists and within the transhumanist movement, mind uploading is treated as an important proposed life extension technology. Some believe mind uploading is humanity's current best option for preserving the identity of the species, as opposed to cryonics. Another aim of mind uploading is to provide a permanent backup to our "mind-file", and a means for functional copies of human minds to survive a global disaster or interstellar space travel. Whole brain emulation is discussed by some futurists as a "logical endpoint"[4] of the topical computational neuroscience and neuroinformatics fields, both concerned with brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI. Computer-based intelligence such as an upload could think much faster than a biological human even if it were no more intelligent. A large-scale society of uploads might, according to futurists, give rise to a technological singularity, meaning a sudden decrease in the time constant of technology's exponential development.[5] Mind uploading is a central conceptual feature of numerous science fiction novels and films.

Substantial mainstream research in related areas is being conducted in animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain-computer interfaces, connectomics and information extraction from dynamically functioning brains.[6] According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are currently under active development; however, they will admit that others are, as yet, very speculative, but still in the realm of engineering possibility. Neuroscientist Randal Koene has formed a nonprofit organization called Carbon Copies to promote mind uploading research.

The human brain contains, on average, about 86 billion nerve cells called neurons, each individually linked to other neurons by way of connectors called axons and dendrites. Signals at the junctures (synapses) of these connections are transmitted by the release and detection of chemicals known as neurotransmitters. The established neuroscientific consensus is that the human mind is largely an emergent property of the information processing of this neural network.[citation needed]

Neuroscientists have stated that important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:

“Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality.”[7]

The concept of mind uploading is based on this mechanistic view of the mind, and denies the vitalist view of human life and consciousness.[citation needed]

Eminent computer scientists and neuroscientists have predicted that specially programmed computers will be capable of thought and even attain consciousness, including Koch and Tononi,[7] Douglas Hofstadter,[8] Jeff Hawkins,[8] Marvin Minsky,[9] Randal A. Koene, and Rodolfo Llinas.[10]

Such an artificial intelligence capability might provide a computational substrate necessary for uploading.

However, even though uploading is dependent upon such a general capability, it is conceptually distinct from general forms of AI in that it results from dynamic reanimation of information derived from a specific human mind, so that the mind retains a sense of historical identity (other forms are possible but would compromise or eliminate the life-extension feature generally associated with uploading). The transferred and reanimated information would become a form of artificial intelligence, sometimes called an infomorph or "noömorph".[citation needed]

Many theorists have presented models of the brain and have established a range of estimates of the amount of computing power needed for partial and complete simulations.[4][citation needed] Using these models, some have estimated that uploading may become possible within decades if trends such as Moore’s law continue.[11]

In theory, if the information and processes of the mind can be disassociated from the biological body, they are no longer tied to the individual limits and lifespan of that body. Furthermore, information within a brain could be partly or wholly copied or transferred to one or more other substrates (including digital storage or another brain), thereby from a purely mechanistic perspective reducing or eliminating “mortality risk” of such information. This general proposal was discussed in 1971 by biogerontologist George M. Martin of the University of Washington.[12]

An uploaded astronaut would be the application of mind uploading to human spaceflight. This would eliminate the harms caused by a zero gravity environment, the vacuum of space, and cosmic radiation to the human body. It would allow for the use of smaller spacecraft, such as the proposed StarChip, and it would enable virtually unlimited interstellar travel distances.[13][14]

The focus of mind uploading, in the case of copy-and-transfer, is on data acquisition, rather than on data maintenance of the brain. A set of approaches known as loosely coupled off-loading (LCOL) may be used in the attempt to characterize and copy the mental contents of a brain.[15] The LCOL approach may take advantage of self-reports, life-logs and video recordings that can be analyzed by artificial intelligence. A bottom-up approach may instead focus on the specific resolution and morphology of neurons and on their spike times, the times at which neurons produce action potential responses.

Advocates of mind uploading point to Moore’s law to support the notion that the necessary computing power is expected to become available within a few decades. However, the actual computational requirements for running an uploaded human mind are very difficult to quantify, potentially rendering such an argument specious.
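The Moore's-law argument can be made concrete with a hypothetical sketch: if available computing power doubles at a fixed rate, the time until it reaches some assumed requirement follows from the capacity ratio. Both capacity figures and the doubling period below are illustrative assumptions, not estimates from the text.

```python
import math

# Sketch of the Moore's-law extrapolation: years until an assumed required
# capacity is reached, given an assumed current capacity and doubling period.

required_flops = 1e18   # assumed brain-emulation requirement (placeholder)
available_flops = 1e15  # assumed currently available capacity (placeholder)
doubling_years = 1.5    # assumed doubling period, taken to hold indefinitely

years = math.log2(required_flops / available_flops) * doubling_years
print(f"~{years:.0f} years until the assumed requirement is met")
```

The result is "decades" only under the assumption that the exponential trend continues, which is exactly the contested premise.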

Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands are likely to be immense, due to the large number of neurons in the human brain along with the considerable complexity of each neuron.
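An order-of-magnitude illustration of those demands can be obtained by multiplying the brain's neuron count (roughly 86 billion, as stated earlier in the text) by assumed round figures for synapses per neuron, mean firing rate, and operations per synaptic event; the last three numbers are our assumptions, not figures from the source.

```python
# Naive throughput estimate for simulating every synaptic event in real time.

neurons = 86e9             # approximate neuron count, from the text
synapses_per_neuron = 1e4  # assumed average connectivity
firing_rate_hz = 10        # assumed mean spike rate
ops_per_event = 10         # assumed floating-point ops per synaptic update

flops = neurons * synapses_per_neuron * firing_rate_hz * ops_per_event
print(f"~{flops:.1e} operations per second")
```

Even this crude model, which ignores intracellular chemistry entirely, lands near 10^17 operations per second, hinting at why Markram's molecule-level objection below is so severe.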

In 2004, Henry Markram, lead researcher of the “Blue Brain Project”, stated that “it is not [their] goal to build an intelligent neural network”, based solely on the computational demands such a project would have.[17]

It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today.[18]

Five years later, after successful simulation of part of a rat brain, Markram was much bolder and more optimistic. In 2009, as director of the Blue Brain Project, he claimed that "a detailed, functional artificial human brain can be built within the next 10 years".[19]

The required computational capacity depends strongly on the chosen scale of the simulation model.[4]

Since the function of the human mind, and how it might arise from the workings of the brain’s neural network, are poorly understood, mind uploading relies on the idea of neural network emulation. Rather than having to understand the high-level psychological processes and large-scale structures of the brain and model them using classical artificial intelligence methods and cognitive psychology models, the low-level structure of the underlying neural network is captured, mapped and emulated with a computer system. In computer science terms, rather than analyzing and reverse-engineering the behavior of the algorithms and data structures that reside in the brain, a blueprint of its source code is translated to another programming language. The human mind and personal identity would then, theoretically, be generated by the emulated neural network in an identical fashion to their generation by the biological neural network.

On the other hand, a molecule-scale simulation of the brain is not expected to be required, provided that the functioning of the neurons is not affected by quantum mechanical processes. The neural network emulation approach only requires that the functioning and interaction of neurons and synapses be understood. A black-box signal-processing model of how the neurons respond to nerve impulses (electrical as well as chemical synaptic transmission) is expected to suffice.

A sufficiently complex and accurate model of the neurons is required. A traditional artificial neural network model, for example a multi-layer perceptron, is not considered sufficient. A dynamic spiking neural network model is required, reflecting the fact that a neuron fires only when its membrane potential reaches a certain level. The model will likely need to include delays, non-linear functions and differential equations describing the relations between electrophysiological parameters such as electrical currents, voltages, membrane states (ion channel states) and neuromodulators.
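As a sketch of what a “dynamic spiking model” means in practice, the following is a minimal leaky integrate-and-fire neuron, far simpler than the conductance-based models with delays and ion-channel states described above; all constants are illustrative, not physiological fits:

```python
def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_thresh=-0.050, r_m=1e7):
    """Leaky integrate-and-fire neuron; all constants illustrative."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(current):
        # Membrane potential decays toward rest and integrates input current.
        v += (-(v - v_rest) + r_m * i_in) * dt / tau
        if v >= v_thresh:              # threshold crossing -> spike
            spike_times.append(step * dt)
            v = v_reset                # reset after firing
    return spike_times

# A constant 2 nA input drives the membrane 20 mV above rest,
# which is past threshold, so the neuron fires regularly.
spikes = simulate_lif([2e-9] * 5000)   # 0.5 s of simulated time
print(len(spikes))
```

Emulation-grade candidates such as Hodgkin-Huxley-type models replace the single leak term with coupled differential equations per ion channel, which is where the computational cost discussed above comes from.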

Since learning and long-term memory are believed to result from strengthening or weakening the synapses via a mechanism known as synaptic plasticity or synaptic adaptation, the model should include this mechanism. The response of sensory receptors to various stimuli must also be modelled.
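A toy version of such a plasticity rule, a bounded Hebbian update rather than any specific published model, might look like:

```python
# Toy Hebbian plasticity: a synapse strengthens when pre- and post-synaptic
# neurons are active together, and decays slightly otherwise. The learning
# and decay rates are illustrative, not fitted to biology.
def update_weight(w, pre_active, post_active,
                  lr=0.05, decay=0.01, w_max=1.0):
    if pre_active and post_active:
        w += lr * (w_max - w)          # bounded strengthening
    else:
        w -= decay * w                 # slow weakening
    return w

w = 0.2
for _ in range(50):                    # repeated paired activity
    w = update_weight(w, True, True)
print(round(w, 3))                     # approaches w_max
```

Spike-timing-dependent plasticity (STDP) refines this picture by making the sign and size of the change depend on the relative timing of pre- and post-synaptic spikes.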

Furthermore, the model may have to include metabolism, i.e. how the neurons are affected by hormones and other chemical substances that may cross the blood-brain barrier. It is considered likely that the model must include currently unknown neuromodulators, neurotransmitters and ion channels. It is considered unlikely that the simulation model has to include protein interaction, which would make it computationally complex.[4]

A digital computer simulation model of an analog system such as the brain is an approximation that introduces random quantization errors and distortion. However, the biological neurons also suffer from randomness and limited precision, for example due to background noise. The errors of the discrete model can be made smaller than the randomness of the biological brain by choosing a sufficiently high variable resolution and sample rate, and sufficiently accurate models of non-linearities. The computational power and computer memory must however be sufficient to run such large simulations, preferably in real time.
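The claim that quantization error can be pushed below the biological noise floor is easy to illustrate numerically; the noise level and bit depths below are arbitrary choices, not estimates for real neurons:

```python
import random

def quantize(x, bits, full_scale=1.0):
    """Round x to the nearest level of a uniform quantizer with 2**bits levels."""
    step = 2 * full_scale / (2 ** bits)
    return round(x / step) * step

random.seed(0)
signal = [random.uniform(-1, 1) for _ in range(1000)]
noise_rms = 1e-3                     # assumed biological noise floor (arbitrary)

# RMS quantization error falls exponentially with resolution,
# while the biological noise level stays fixed.
for bits in (8, 16, 24):
    errors = [quantize(s, bits) - s for s in signal]
    rms = (sum(e * e for e in errors) / len(errors)) ** 0.5
    print(bits, "below noise floor:", rms < noise_rms)
```

Each added bit halves the quantization step, so once the resolution clears the noise floor, the discrete model's error is dominated by the brain's own randomness rather than by the digitization.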

When modelling and simulating the brain of a specific individual, a brain map or connectivity database showing the connections between the neurons must be extracted from an anatomic model of the brain. For whole brain simulation, this network map should show the connectivity of the whole nervous system, including the spinal cord, sensory receptors, and muscle cells. Destructive scanning of a small sample of tissue from a mouse brain including synaptic details is possible as of 2010.[20]
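A connectivity database of the kind just described could, at its simplest, be an adjacency structure keyed by neuron; all identifiers and values here are invented for illustration:

```python
# Hypothetical in-memory shape of a connectivity database: for each neuron,
# the outgoing synapses with target, type and weight. IDs and values are
# made up for illustration only.
connectome = {
    "n001": [("n042", "excitatory", 0.83),
             ("n107", "inhibitory", 0.41)],
    "n042": [("n107", "excitatory", 0.12)],
}

def out_degree(neuron_id):
    """Number of outgoing synapses recorded for a neuron."""
    return len(connectome.get(neuron_id, []))

print(out_degree("n001"))   # 2
```

Real connectomics datasets use graph databases or compressed adjacency formats, since on the order of 10^15 entries cannot live in an in-memory dictionary.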

However, if short-term memory and working memory include prolonged or repeated firing of neurons, as well as intra-neural dynamic processes, the electrical and chemical signal state of the synapses and neurons may be hard to extract. The uploaded mind may then perceive a memory loss of the events and mental processes immediately before the time of brain scanning.[4]

A full brain map has been estimated to occupy less than 2 × 10^16 bytes (20,000 TB); it would store the addresses of the connected neurons, the synapse type and the synapse “weight” for each of the brain’s 10^15 synapses.[4] However, the biological complexities of true brain function (e.g. the epigenetic states of neurons, protein components with multiple functional states, etc.) may preclude an accurate prediction of the volume of binary data required to faithfully represent a functioning human mind.
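The storage figure can be sanity-checked with simple arithmetic; the 20-bytes-per-synapse value is an assumption chosen to be consistent with the quoted total, not a number from the cited estimate:

```python
# Back-of-envelope check of the figure in the text: 10**15 synapses, each
# storing a target-neuron address, a synapse type and a weight, at an
# assumed ~20 bytes per synapse (consistent with the 2e16-byte total).
synapses = 10 ** 15
bytes_per_synapse = 20
total_bytes = synapses * bytes_per_synapse
print(total_bytes)             # 2e16 bytes
print(total_bytes / 1e12)      # 20,000 TB (decimal terabytes)
```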

A possible method for mind uploading is serial sectioning, in which brain tissue and perhaps other parts of the nervous system are frozen and then scanned and analyzed layer by layer (which, for frozen samples at nano-scale, requires a cryo-ultramicrotome), thus capturing the structure of the neurons and their interconnections.[21] The exposed surface of frozen nerve tissue would be scanned and recorded, and then the surface layer of tissue removed. While this would be a very slow and labor-intensive process, research is currently underway to automate the collection and microscopy of serial sections.[22] The scans would then be analyzed, and a model of the neural net recreated in the system that the mind was being uploaded into.

There are uncertainties with this approach using current microscopy techniques. If it is possible to replicate neuron function from its visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique.[22] However, as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron’s cell membrane), this may not suffice for capturing and simulating neuron functions. It may be possible to extend the techniques of serial sectioning and to capture the internal molecular makeup of neurons, through the use of sophisticated immunohistochemistry staining methods that could then be read via confocal laser scanning microscopy. However, as the physiological genesis of ‘mind’ is not currently known, this method may not be able to access all of the necessary biochemical information to recreate a human brain with sufficient fidelity.

It may be possible to create functional 3D maps of brain activity using advanced neuroimaging technology, such as functional MRI (fMRI, for mapping changes in blood flow) and magnetoencephalography (MEG, for mapping electrical currents), or combinations of multiple methods, to build a detailed three-dimensional model of the brain by non-invasive and non-destructive means. Today, fMRI is often combined with MEG to create functional maps of the human cortex during more complex cognitive tasks, as the methods complement each other. Even though current imaging technology lacks the spatial resolution needed for such a scan, important recent and future developments are predicted to substantially improve both the spatial and temporal resolution of existing technologies.[24]

There is ongoing work in the field of brain simulation, including partial and whole simulations of some animals. For example, the C. elegans roundworm, Drosophila fruit fly, and mouse have all been simulated to various degrees.[citation needed]

The Blue Brain Project by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne, Switzerland, is an attempt to create a synthetic brain by reverse-engineering mammalian brain circuitry.

Underlying the concept of “mind uploading” (more accurately “mind transferring”) is the broad philosophy that consciousness lies within the brain’s information processing and is in essence an emergent feature that arises from large neural network high-level patterns of organization, and that the same patterns of organization can be realized in other processing devices. Mind uploading also relies on the idea that the human mind (the “self” and the long-term memory), just like non-human minds, is represented by the current neural network paths and the weights of the brain synapses rather than by a dualistic and mystic soul and spirit. The mind or “soul” can be defined as the information state of the brain, and is immaterial only in the same sense as the information content of a data file or the state of computer software currently residing in the working memory of the computer. Data specifying the information state of the neural network can be captured and copied as a “computer file” from the brain and re-implemented into a different physical form.[25] This is not to deny that minds are richly adapted to their substrates.[26] An analogy to the idea of mind uploading is copying the temporary information state (the variable values) of a computer program from the memory of one computer to another and continuing its execution there. The other computer may have a different hardware architecture but emulate the hardware of the first.
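The program-state analogy is easy to make concrete in Python: serialize the variable values, move the bytes, and resume. The toy dictionary obviously stands in for a vastly richer brain state:

```python
import pickle

# The analogy from the text, made concrete: snapshot a program's variable
# state, move the bytes elsewhere, and resume execution from the snapshot.
state = {"counter": 41, "history": [1, 2, 3]}
snapshot = pickle.dumps(state)           # capture the information state

restored = pickle.loads(snapshot)        # "re-implement" it elsewhere
restored["counter"] += 1                 # execution continues from the copy
print(restored["counter"])               # 42
print(state["counter"])                  # 41; the original is untouched
```

Note that the snapshot produces an independent copy whose later history diverges from the original, which is exactly the point the personal-identity debates below turn on.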

These issues have a long history. In 1775 Thomas Reid wrote:[27] “I would be glad to know… whether when my brain has lost its original structure, and when some hundred years after the same materials are fabricated so curiously as to become an intelligent being, whether, I say that being will be me; or, if, two or three such beings should be formed out of my brain; whether they will all be me, and consequently one and the same intelligent being.”

A considerable portion of transhumanists and singularitarians place great hope in the belief that they may become immortal by creating one or many non-biological functional copies of their brains, thereby leaving their “biological shell”. However, the philosopher and transhumanist Susan Schneider claims that at best, uploading would create a copy of the original person’s mind.[28] Schneider agrees that consciousness has a computational basis, but this does not mean we can upload and survive. In her view, “uploading” would probably result in the death of the original person’s brain, while only outside observers could maintain the illusion of the original person still being alive. For it is implausible to think that one’s consciousness would leave one’s brain and travel to a remote location; ordinary physical objects do not behave this way. Ordinary objects (rocks, tables, etc.) are not simultaneously here and somewhere else. At best, a copy of the original mind is created.[28] Others have argued against such conclusions. For example, Buddhist transhumanist James Hughes has pointed out that this consideration only goes so far: if one believes the self is an illusion, worries about survival are not reasons to avoid uploading,[29] and Keith Wiley has presented an argument wherein all resulting minds of an uploading procedure are granted equal primacy in their claim to the original identity, such that survival of the self is determined retroactively from a strictly subjective position.[30][31]

Another potential consequence of mind uploading is that the decision to “upload” may create a mindless symbol manipulator instead of a conscious mind (see philosophical zombie).[32][33] Are we to assume that an upload is conscious if it displays behaviors that are highly indicative of consciousness? Are we to assume that an upload is conscious if it verbally insists that it is conscious?[34] Could there be an absolute upper limit on processing speed above which consciousness cannot be sustained? The mystery of consciousness precludes a definitive answer to this question.[35] Numerous scientists, including Kurzweil, strongly believe that determining whether a separate entity is conscious (with 100% confidence) is fundamentally unknowable, since consciousness is inherently subjective (see solipsism). Regardless, some scientists strongly believe consciousness is the consequence of computational processes which are substrate-neutral. By contrast, numerous scientists believe consciousness may be the result of some form of quantum computation dependent on substrate (see quantum mind).[36][37][38]

In light of uncertainty on whether to regard uploads as conscious, Sandberg proposes a cautious approach:[39]

Principle of assuming the most (PAM): Assume that any emulated system could have the same mental properties as the original system and treat it correspondingly.

It is argued that if a computational copy of one’s mind did exist, it would be impossible for one to recognize it as their own mind.[40] The argument for this stance is the following: for a computational mind to recognize an emulation of itself, it must be capable of deciding whether two Turing machines (namely, itself and the proposed emulation) are functionally equivalent. This task is uncomputable due to the undecidability of equivalence, thus there cannot exist a computational procedure in the mind that is capable of recognizing an emulation of itself.

The process of developing emulation technology raises ethical issues related to animal welfare and artificial consciousness.[39] The neuroscience required to develop brain emulation would require animal experimentation, first on invertebrates and then on small mammals before moving on to humans. Sometimes the animals would just need to be euthanized in order to extract, slice, and scan their brains, but sometimes behavioral and in vivo measures would be required, which might cause pain to living animals.[39]

In addition, the resulting animal emulations themselves might suffer, depending on one’s views about consciousness.[39] Bancroft argues for the plausibility of consciousness in brain simulations on the basis of the “fading qualia” thought experiment of David Chalmers. He then concludes:[41] “If, as I argue above, a sufficiently detailed computational simulation of the brain is potentially operationally equivalent to an organic brain, it follows that we must consider extending protections against suffering to simulations.”

It might help reduce emulation suffering to develop virtual equivalents of anaesthesia, as well as to omit processing related to pain and/or consciousness. However, some experiments might require a fully functioning and suffering animal emulation. Animals might also suffer by accident due to flaws and lack of insight into what parts of their brains are suffering.[39] Questions also arise regarding the moral status of partial brain emulations, as well as creating neuromorphic emulations that draw inspiration from biological brains but are built somewhat differently.[41]

Brain emulations could be erased by computer viruses or malware, without the need to destroy the underlying hardware. This may make assassination easier than it is for physical humans. The attacker might also take the emulation’s computing power for their own use.[42]

Many questions arise regarding the legal personhood of emulations.[43] Would they be given the rights of biological humans? If a person makes an emulated copy of himself and then dies, does the emulation inherit his property and official positions? Could the emulation ask to “pull the plug” when its biological version was terminally ill or in a coma? Would it help to treat emulations as adolescents for a few years so that the biological creator would maintain temporary control? Would criminal emulations receive the death penalty, or would they be given forced data modification as a form of “rehabilitation”? Could an upload have marriage and child-care rights?[43]

If simulated minds became reality and were assigned rights of their own, it might be difficult to ensure the protection of “digital human rights”. For example, social science researchers might be tempted to secretly expose simulated minds, or whole isolated societies of simulated minds, to controlled experiments in which many copies of the same mind are exposed (serially or simultaneously) to different test conditions.[citation needed]

Emulations could create a number of conditions that might increase risk of war, including inequality, changes of power dynamics, a possible technological arms race to build emulations first, first-strike advantages, strong loyalty and willingness to “die” among emulations, and triggers for racist, xenophobic, and religious prejudice.[42] If emulations run much faster than humans, there might not be enough time for human leaders to make wise decisions or negotiate. It is possible that humans would react violently against growing power of emulations, especially if they depress human wages. Emulations may not trust each other, and even well-intentioned defensive measures might be interpreted as offense.[42]

There are very few feasible technologies that humans have refrained from developing. The neuroscience and computer-hardware technologies that may make brain emulation possible are widely desired for other reasons, so their development will likely continue. Assuming that emulation technology will arrive, the question becomes whether we should accelerate or slow its advance.[42]

Arguments for speeding up brain-emulation research:

Arguments for slowing down brain-emulation research:

Emulation research would also speed up neuroscience as a whole, which might accelerate medical advances, cognitive enhancement, lie detectors, and capability for psychological manipulation.[48]

Emulations might be easier to control than de novo AI because

As counterpoint to these considerations, Bostrom notes some downsides:

Ray Kurzweil, director of engineering at Google, claims that people will be able to “upload” their entire brains to computers and become “digitally immortal” by 2045. Kurzweil has made this claim for many years, e.g. during his 2013 speech at the Global Futures 2045 International Congress in New York, which espouses a similar set of beliefs.[49] Mind uploading was also advocated by a number of researchers in neuroscience and artificial intelligence, such as the late Marvin Minsky.[citation needed] In 1993, Joe Strout created a small website called the Mind Uploading Home Page and began advocating the idea in cryonics circles and elsewhere on the net. That site has not been actively updated in recent years, but it has spawned other sites, including MindUploading.org, run by Randal A. Koene, who also moderates a mailing list on the topic. These advocates see mind uploading as a medical procedure that could eventually save countless lives.

Many transhumanists look forward to the development and deployment of mind uploading technology, with transhumanists such as Nick Bostrom predicting that it will become possible within the 21st century due to technological trends such as Moore’s law.[4]

Michio Kaku, in collaboration with Science, hosted a documentary, Sci Fi Science: Physics of the Impossible, based on his book Physics of the Impossible. Episode four, titled “How to Teleport”, mentions that mind uploading, via techniques such as quantum entanglement and whole brain emulation using an advanced MRI machine, may enable people to be transported across vast distances at near light-speed.

The book Beyond Humanity: CyberEvolution and Future Minds by Gregory S. Paul & Earl D. Cox, is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but also deals with human mind transfer. Richard Doyle’s Wetwares: Experiments in PostVital Living deals extensively with uploading from the perspective of distributed embodiment, arguing for example that humans are currently part of the “artificial life phenotype”. Doyle’s vision reverses the polarity on uploading, with artificial life forms such as uploads actively seeking out biological embodiment as part of their reproductive strategy.

Kenneth D. Miller, a professor of neuroscience at Columbia and a co-director of the Center for Theoretical Neuroscience, has raised doubts about the practicality of mind uploading. His major argument is that reconstructing neurons and their connections is in itself a formidable task, but far from sufficient. Operation of the brain depends on the dynamics of electrical and biochemical signal exchange between neurons; therefore, capturing them in a single “frozen” state may prove insufficient. In addition, the nature of these signals may require modeling down to the molecular level and beyond. Therefore, while not rejecting the idea in principle, Miller believes that the complexity of “absolute” duplication of an individual mind will remain insurmountable for the next several hundred years.[50]

Mind uploading, whole brain emulation or substrate-independent minds refers to the use of a computer or another substrate as an emulated human brain, and to the view of thoughts and memories as software information states. The term mind transfer also refers to a hypothetical transfer of a mind from one biological brain to another. Uploaded minds and societies of minds, often in simulated realities, have been recurring themes in science fiction novels and films since the 1950s.

An early story featuring something like mind uploading is the novella Izzard and the Membrane by Walter M. Miller, Jr., first published in May 1951.[1] In this story, an American cyberneticist named Scott MacDonney is captured by Russians and made to work on an advanced computer, Izzard, which they plan to use to coordinate an attack on the United States. He has conversations with Izzard as he works on it, and when he asks it if it is self-aware, it says “answer indeterminate” and then asks “can human individual’s self-awareness transor be mechanically duplicated?” MacDonney is unfamiliar with the concept of a self-awareness transor (it is later revealed that this information was loaded into Izzard by a mysterious entity who may or may not be God[2]), and Izzard defines it by saying “A self-awareness transor is the mathematical function which describes the specific consciousness pattern of one human individual.”[3] It is later found that this mathematical function can indeed be duplicated, although not by a detailed scan of the individual’s brain as in later notions of mind uploading; instead, MacDonney just has to describe the individual verbally in sufficient detail, and Izzard uses this information to locate the transor in the appropriate “mathematical region”. In Izzard’s words, “to duplicate consciousness of deceased, it will be necessary for you to furnish anthropometric and psychic characteristics of the individual. These characteristics will not determine transor, but will only give its general form. Knowing its form, will enable me to sweep my circuit pattern through its mathematical region until the proper transor is reached. At that point, the consciousness will appear among the circuits.”[4] Using this method, MacDonney is able to recreate the mind of his dead wife in Izzard’s memory, as well as create a virtual duplicate of himself, which seems to have a shared awareness with the biological MacDonney.

In The Altered Ego by Jerry Sohl (1954), a person’s mind can be “recorded” and used to create a “restoration” in the event of their death. In a restoration, the person’s biological body is repaired and brought back to life, and their memories are restored to the last time that they had their minds recorded (what the story calls a ‘brain record'[5]), an early example of a story in which a person can create periodic backups of their own mind. The recording process is not described in great detail, but it is mentioned that the recording is used to create a duplicate or “dupe” which is stored in the “restoration bank”,[6] and at one point a lecturer says that “The experience of the years, the neurograms, simple memory circuits (neurons, if you wish) stored among these nerve cells, are transferred to the dupe, a group of more than ten billion molecules in colloidal suspension. They are charged much as you would charge the plates of a battery, the small neuroelectrical impulses emanating from your brain during the recording session being duplicated on the molecular structure in the solution.”[7] During restoration, they take the dupe and “infuse it into an empty brain”,[7] and the plot turns on the fact that it is possible to install one person’s dupe in the body of a completely different person.[8]

An early example featuring uploaded minds in robotic bodies can be found in Frederik Pohl’s story “The Tunnel Under the World” from 1955.[9] In this story, the protagonist Guy Burckhardt continually wakes up on the same date from a dream of dying in an explosion. Burckhardt is already familiar with the idea of putting human minds in robotic bodies, since this is what is done with the robot workers at the nearby Contro Chemical factory. As someone has once explained it to him, “each machine was controlled by a sort of computer which reproduced, in its electronic snarl, the actual memory and mind of a human being … It was only a matter, he said, of transferring a man’s habit patterns from brain cells to vacuum-tube cells.” Later in the story, Pohl gives some additional description of the procedure: “Take a master petroleum chemist, infinitely skilled in the separation of crude oil into its fractions. Strap him down, probe into his brain with searching electronic needles. The machine scans the patterns of the mind, translates what it sees into charts and sine waves. Impress these same waves on a robot computer and you have your chemist. Or a thousand copies of your chemist, if you wish, with all of his knowledge and skill, and no human limitations at all.” After some investigation, Burckhardt learns that his entire town had been killed in a chemical explosion, and the brains of the dead townspeople had been scanned and placed into miniature robotic bodies in a miniature replica of the town (as a character explains to him, ‘It’s as easy to transfer a pattern from a dead brain as a living one’), so that a businessman named Mr. Dorchin could charge companies to use the townspeople as test subjects for new products and advertisements.

Something close to the notion of mind uploading is very briefly mentioned in Isaac Asimov’s 1956 short story The Last Question: “One by one Man fused with AC, each physical body losing its mental identity in a manner that was somehow not a loss but a gain.” A more detailed exploration of the idea (and one in which individual identity is preserved, unlike in Asimov’s story) can be found in Arthur C. Clarke’s novel The City and the Stars, also from 1956 (this novel was a revised and expanded version of Clarke’s earlier story Against the Fall of Night, but the earlier version did not contain the elements relating to mind uploading). The story is set in a city named Diaspar one billion years in the future, where the minds of inhabitants are stored as patterns of information in the city’s Central Computer in between a series of 1000-year lives in cloned bodies. Various commentators identify this story as one of the first (if not the first) to deal with mind uploading, human-machine synthesis, and computerized immortality.[10][11][12][13]

Another of the “firsts” is the novel Detta är verkligheten (This is reality), 1968, by the renowned philosopher and logician Bertil Mårtensson, a novel in which he describes people living in an uploaded state as a means to control overpopulation. The uploaded people believe that they are “alive”, but in reality they are playing elaborate and advanced fantasy games. In a twist at the end, the author changes everything into one of the best “multiverse” ideas of science fiction.

In Robert Silverberg’s To Live Again (1969), an entire worldwide economy is built up around the buying and selling of “souls” (personas that have been tape-recorded at six-month intervals), allowing well-heeled consumers the opportunity to spend tens of millions of dollars on a medical treatment that uploads the most recent recordings of archived personalities into the minds of the buyers. Federal law prevents people from buying a “personality recording” unless the possessor has first died; similarly, two or more buyers are not allowed to own a “share” of the persona. In this novel, the personality recording always goes to the highest bidder. However, when one attempts to buy (and therefore possess) too many personalities, there is the risk that one of the personas will wrest control of the body from the possessor.

In the 1982 novel Software, part of the Ware Tetralogy by Rudy Rucker, one of the main characters, Cobb Anderson, has his mind downloaded and his body replaced with an extremely human-like android body. The robots who persuade Anderson into doing this sell the process to him as a way to become immortal.

In William Gibson’s award-winning Neuromancer (1984), which popularized the concept of “cyberspace”, a hacking tool used by the main character is an artificial infomorph of a notorious cyber-criminal, Dixie Flatline. The infomorph only assists in exchange for the promise that he be deleted after the mission is complete.

The fiction of Greg Egan has explored many of the philosophical, ethical, legal, and identity aspects of mind transfer, as well as the financial and computing aspects (i.e. hardware, software, processing power) of maintaining “copies.” In Egan’s Permutation City (1994), Diaspora (1997) and Zendegi (2010), “copies” are made by computer simulation of scanned brain physiology. See also Egan’s “jewelhead” stories, where the mind is transferred from the organic brain to a small, immortal backup computer at the base of the skull, the organic brain then being surgically removed.

The movie The Matrix is commonly mistaken for a mind uploading movie, but, apart from suggestions in the later films, it is only about virtual reality and simulated reality: the main character Neo’s physical brain is still required to host his mind. The mind (the information content of the brain) is not copied into an emulated brain in a computer; Neo’s physical brain is connected to the Matrix via a brain-machine interface, and only the rest of the physical body is simulated. Neo is disconnected from and reconnected to this dreamworld.

James Cameron’s 2009 movie Avatar has so far been the most commercially successful work of fiction featuring a form of mind uploading. Throughout most of the movie, the hero’s mind has not actually been uploaded and transferred to another body; he is simply controlling the body from a distance, a form of telepresence. However, at the end of the movie the hero’s mind is uploaded into Eywa, the mind of the planet, and then back into his Avatar body.

Mind transfer is a theme in many other works of science fiction in a wide range of media. Specific examples include the following:

Whole brain emulation (WBE), mind upload or brain upload (sometimes called “mind copying” or “mind transfer”) is the hypothetical futuristic process of scanning the mental state (including long-term memory and “self”) of a particular brain substrate and copying it to a computer. The computer could then run a simulation model of the brain’s information processing, such that it responds in essentially the same way as the original brain (i.e., indistinguishable from the brain for all relevant purposes) and experiences having a conscious mind.[1][2][3]

Mind uploading may potentially be accomplished by either of two methods: copy-and-transfer, or gradual replacement of neurons. With the former method, mind uploading would be achieved by scanning and mapping the salient features of a biological brain, and then copying, transferring, and storing that information state into a computer system or another computational device. The simulated mind could live within a virtual reality or simulated world, supported by an anatomic 3D body simulation model. Alternatively, the simulated mind could reside in a computer that is inside (or connected to) a robot (not necessarily humanoid) or a biological body, in real life.[4]

Among some futurists and within the transhumanist movement, mind uploading is treated as an important proposed life-extension technology. Some believe mind uploading is humanity’s current best option for preserving the identity of the species, as opposed to cryonics. Another aim of mind uploading is to provide a permanent backup to our “mind-file”, and a means for functional copies of human minds to survive a global disaster or interstellar space travel. Whole brain emulation is discussed by some futurists as a “logical endpoint”[4] of the topical computational neuroscience and neuroinformatics fields, both concerned with brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI. Computer-based intelligence such as an upload could think much faster than a biological human even if it were no more intelligent. A large-scale society of uploads might, according to futurists, give rise to a technological singularity, meaning a sudden decrease in the time constant of technology’s exponential development.[5] Mind uploading is a central conceptual feature of numerous science fiction novels and films.

Substantial mainstream research in related areas is being conducted in animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains.[6] According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are currently under active development; however, they admit that others are, as yet, very speculative, though still within the realm of engineering possibility. Neuroscientist Randal Koene has formed a nonprofit organization called Carbon Copies to promote mind uploading research.

The human brain contains, on average, about 86 billion nerve cells called neurons, each individually linked to other neurons by way of connectors called axons and dendrites. Signals at the junctures (synapses) of these connections are transmitted by the release and detection of chemicals known as neurotransmitters. The established neuroscientific consensus is that the human mind is largely an emergent property of the information processing of this neural network.[citation needed]

Neuroscientists have stated that important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:

“Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality.”[7]

The concept of mind uploading is based on this mechanistic view of the mind, and denies the vitalist view of human life and consciousness.[citation needed]

Eminent computer scientists and neuroscientists have predicted that specially programmed computers will be capable of thought and even attain consciousness, including Koch and Tononi,[7] Douglas Hofstadter,[8] Jeff Hawkins,[8] Marvin Minsky,[9] Randal A. Koene, and Rodolfo Llinas.[10]

Such an artificial intelligence capability might provide a computational substrate necessary for uploading.

However, even though uploading is dependent upon such a general capability, it is conceptually distinct from general forms of AI in that it results from the dynamic reanimation of information derived from a specific human mind, so that the mind retains a sense of historical identity (other forms are possible but would compromise or eliminate the life-extension feature generally associated with uploading). The transferred and reanimated information would become a form of artificial intelligence, sometimes called an infomorph or “noömorph”.[citation needed]

Many theorists have presented models of the brain and have established a range of estimates of the amount of computing power needed for partial and complete simulations.[4][citation needed] Using these models, some have estimated that uploading may become possible within decades if trends such as Moore’s law continue.[11]

In theory, if the information and processes of the mind can be disassociated from the biological body, they are no longer tied to the individual limits and lifespan of that body. Furthermore, information within a brain could be partly or wholly copied or transferred to one or more other substrates (including digital storage or another brain), thereby from a purely mechanistic perspective reducing or eliminating “mortality risk” of such information. This general proposal was discussed in 1971 by biogerontologist George M. Martin of the University of Washington.[12]

An uploaded astronaut would be the application of mind uploading to human spaceflight. This would eliminate the harms caused by a zero gravity environment, the vacuum of space, and cosmic radiation to the human body. It would allow for the use of smaller spacecraft, such as the proposed StarChip, and it would enable virtually unlimited interstellar travel distances.[13][14]

The focus of mind uploading, in the case of copy-and-transfer, is on data acquisition rather than data maintenance of the brain. A set of approaches known as loosely coupled off-loading (LCOL) may be used in the attempt to characterize and copy the mental contents of a brain.[15] The LCOL approach may take advantage of self-reports, life-logs and video recordings that can be analyzed by artificial intelligence. A bottom-up approach may instead focus on the specific resolution and morphology of neurons, and on the spike times of neurons (the times at which neurons produce action-potential responses).

Advocates of mind uploading point to Moore’s law to support the notion that the necessary computing power is expected to become available within a few decades. However, the actual computational requirements for running an uploaded human mind are very difficult to quantify, potentially rendering such an argument specious.
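The Moore’s-law argument above can be made concrete with a small back-of-envelope sketch. All the numbers here are illustrative assumptions (a hypothetical 10^18 FLOPS requirement, a 10^15 FLOPS starting point, and a 1.5-year doubling time), not established figures:

```python
import math

def years_until_feasible(required_flops, current_flops, doubling_years=1.5):
    """Years until compute reaches the requirement, if doubling continues."""
    if current_flops >= required_flops:
        return 0.0
    doublings = math.log2(required_flops / current_flops)
    return doublings * doubling_years

# e.g. needing 1e18 FLOPS when 1e15 FLOPS is available:
# log2(1000) ~ 9.97 doublings, about 15 years at 1.5 years per doubling.
eta = years_until_feasible(1e18, 1e15)
```

The sketch also shows why the text calls such arguments potentially specious: the answer scales only logarithmically with the requirement, so even a thousandfold error in the requirement shifts the estimate by a fixed number of doubling periods.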

Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands are likely to be immense, due to the large number of neurons in the human brain along with the considerable complexity of each neuron.

In 2004, Henry Markram, lead researcher of the “Blue Brain Project”, stated that “it is not [their] goal to build an intelligent neural network”, based solely on the computational demands such a project would have.[17]

It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today.[18]

Five years later, after successful simulation of part of a rat brain, Markram was much bolder and more optimistic. In 2009, as director of the Blue Brain Project, he claimed: “A detailed, functional artificial human brain can be built within the next 10 years.”[19]

The required computational capacity depends strongly on the chosen scale of the simulation model:[4]

Since the function of the human mind, and how it might arise from the workings of the brain’s neural network, are poorly understood issues, mind uploading relies on the idea of neural network emulation. Rather than having to understand the high-level psychological processes and large-scale structures of the brain, and model them using classical artificial intelligence methods and cognitive psychology models, the low-level structure of the underlying neural network is captured, mapped and emulated with a computer system. In computer science terminology,[dubious – discuss] rather than analyzing and reverse-engineering the behavior of the algorithms and data structures that reside in the brain, a blueprint of its source code is translated to another programming language. The human mind and the personal identity are then, theoretically, generated by the emulated neural network in an identical fashion to their generation by the biological neural network.

On the other hand, a molecule-scale simulation of the brain is not expected to be required, provided that the functioning of the neurons is not affected by quantum-mechanical processes. The neural network emulation approach only requires that the functioning and interaction of neurons and synapses be understood. A black-box signal-processing model of how neurons respond to nerve impulses (electrical as well as chemical synaptic transmission) is expected to be sufficient.

A sufficiently complex and accurate model of the neurons is required. A traditional artificial neural network model, for example a multi-layer perceptron model, is not considered sufficient. A dynamic spiking neural network model is required, reflecting the fact that a neuron fires only when its membrane potential reaches a certain level. The model is likely to require delays, non-linear functions, and differential equations describing the relation between electrophysiological parameters such as electrical currents, voltages, membrane states (ion channel states), and neuromodulators.
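As an illustration of the kind of dynamic spiking model described above, here is a minimal leaky integrate-and-fire neuron sketch. The parameters (membrane time constant, resistance, thresholds) are textbook-style placeholder values, not figures drawn from any particular simulation project:

```python
def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire neuron: the membrane voltage decays
    toward rest, integrates the input current, and emits a spike when
    it crosses the threshold (then resets). Returns spike times."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        v += (-(v - v_rest) + r_m * i_in) / tau * dt
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant 2 nA input drives repeated firing over half a second:
spike_times = simulate_lif([2e-9] * 5000)
```

Unlike a multi-layer perceptron, the output here is a sequence of spike times rather than a static activation value, which is the distinction the paragraph above draws.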

Since learning and long-term memory are believed to result from strengthening or weakening the synapses via a mechanism known as synaptic plasticity or synaptic adaptation, the model should include this mechanism. The response of sensory receptors to various stimuli must also be modelled.
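A rate-based Hebbian update is perhaps the simplest sketch of the synaptic-plasticity mechanism mentioned above; real models (e.g. spike-timing-dependent plasticity) are considerably richer. The learning rate and weight bound here are arbitrary illustrative values:

```python
def hebbian_update(weight, pre_rate, post_rate, lr=0.01, w_max=1.0):
    """Simple rate-based Hebbian rule: synapses whose pre- and
    post-synaptic neurons are active together strengthen; the weight
    is clipped so it stays bounded."""
    weight += lr * pre_rate * post_rate
    return min(weight, w_max)

w = 0.2
for _ in range(10):          # correlated activity strengthens the synapse
    w = hebbian_update(w, pre_rate=1.0, post_rate=1.0)
```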

Furthermore, the model may have to include metabolism, i.e. how the neurons are affected by hormones and other chemical substances that may cross the blood–brain barrier. It is considered likely that the model must include currently unknown neuromodulators, neurotransmitters, and ion channels. It is considered unlikely that the simulation model would need to include protein interactions, which would make it computationally complex.[4]

A digital computer simulation model of an analog system such as the brain is an approximation that introduces random quantization errors and distortion. However, the biological neurons also suffer from randomness and limited precision, for example due to background noise. The errors of the discrete model can be made smaller than the randomness of the biological brain by choosing a sufficiently high variable resolution and sample rate, and sufficiently accurate models of non-linearities. The computational power and computer memory must however be sufficient to run such large simulations, preferably in real time.
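The quantization argument above can be illustrated numerically: with a sufficiently fine quantization step, the worst-case discretization error stays far below an assumed biological noise level. Both figures below are arbitrary, chosen only for illustration:

```python
import random

def quantize(x, step):
    """Round x to the nearest multiple of the quantization step."""
    return round(x / step) * step

random.seed(0)
signal = [random.gauss(0.0, 1.0) for _ in range(10_000)]

noise_std = 0.1      # assumed biological noise level (illustrative)
step = 0.001         # quantization step; worst-case error is step / 2
errors = [abs(quantize(x, step) - x) for x in signal]
# The worst-case quantization error (0.0005) sits far below the assumed
# noise level (0.1), which is the article's point about discrete models.
```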

When modelling and simulating the brain of a specific individual, a brain map or connectivity database showing the connections between the neurons must be extracted from an anatomic model of the brain. For whole brain simulation, this network map should show the connectivity of the whole nervous system, including the spinal cord, sensory receptors, and muscle cells. Destructive scanning of a small sample of tissue from a mouse brain including synaptic details is possible as of 2010.[20]
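As a data-structure sketch, a connectivity database of the kind described above could be stored as an adjacency list mapping each neuron to its outgoing synapses. The schema below (target id, synapse type, weight) is hypothetical, chosen only to mirror the fields mentioned elsewhere in the article:

```python
# Hypothetical in-memory schema for a connectivity map: for each neuron
# id, a list of outgoing synapses as (target id, synapse type, weight).
connectome = {
    0: [(1, "excitatory", 0.8), (2, "inhibitory", -0.3)],
    1: [(2, "excitatory", 0.5)],
    2: [],
}

def outgoing(neuron_id):
    """Outgoing synapses of a neuron; unknown ids have no connections."""
    return connectome.get(neuron_id, [])
```

A real whole-brain map would hold such records for on the order of 10^15 synapses, which is what drives the storage estimates discussed below.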

However, if short-term memory and working memory include prolonged or repeated firing of neurons, as well as intra-neural dynamic processes, the electrical and chemical signal state of the synapses and neurons may be hard to extract. The uploaded mind may then perceive a memory loss of the events and mental processes immediately before the time of brain scanning.[4]

A full brain map has been estimated to occupy less than 2 × 10^16 bytes (20,000 TB) and would store the addresses of the connected neurons, the synapse type and the synapse “weight” for each of the brain’s 10^15 synapses.[4][not in citation given] However, the biological complexities of true brain function (e.g. the epigenetic states of neurons, protein components with multiple functional states, etc.) may preclude an accurate prediction of the volume of binary data required to faithfully represent a functioning human mind.
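The storage figure quoted above can be checked with simple arithmetic, under the assumption of roughly 20 bytes per synapse record:

```python
# Back-of-envelope check of the figure above: 10^15 synapses at an
# assumed ~20 bytes per record (neuron address, synapse type, weight)
# gives 2 x 10^16 bytes, i.e. 20,000 TB.
synapses = 10**15
bytes_per_synapse = 20           # assumed record size, for illustration
total_bytes = synapses * bytes_per_synapse
terabytes = total_bytes / 10**12
```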

A possible method for mind uploading is serial sectioning, in which the brain tissue and perhaps other parts of the nervous system are frozen and then scanned and analyzed layer by layer, which for frozen samples at nano-scale requires a cryo-ultramicrotome, thus capturing the structure of the neurons and their interconnections.[21] The exposed surface of frozen nerve tissue would be scanned and recorded, and then the surface layer of tissue removed. While this would be a very slow and labor-intensive process, research is currently underway to automate the collection and microscopy of serial sections.[22] The scans would then be analyzed, and a model of the neural net recreated in the system that the mind was being uploaded into.

There are uncertainties with this approach using current microscopy techniques. If it is possible to replicate neuron function from its visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique.[22] However, as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron’s cell membrane), this may not suffice for capturing and simulating neuron functions. It may be possible to extend the techniques of serial sectioning and to capture the internal molecular makeup of neurons, through the use of sophisticated immunohistochemistry staining methods that could then be read via confocal laser scanning microscopy. However, as the physiological genesis of ‘mind’ is not currently known, this method may not be able to access all of the necessary biochemical information to recreate a human brain with sufficient fidelity.

It may be possible to create functional 3D maps of the brain activity, using advanced neuroimaging technology, such as functional MRI (fMRI, for mapping change in blood flow), magnetoencephalography (MEG, for mapping of electrical currents), or combinations of multiple methods, to build a detailed three-dimensional model of the brain using non-invasive and non-destructive methods. Today, fMRI is often combined with MEG for creating functional maps of human cortex during more complex cognitive tasks, as the methods complement each other. Even though current imaging technology lacks the spatial resolution needed to gather the information needed for such a scan, important recent and future developments are predicted to substantially improve both spatial and temporal resolutions of existing technologies.[24]

There is ongoing work in the field of brain simulation, including partial and whole simulations of some animals. For example, the C. elegans roundworm, Drosophila fruit fly, and mouse have all been simulated to various degrees.[citation needed]

The Blue Brain Project by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne, Switzerland is an attempt to create a synthetic brain by reverse-engineering mammalian brain circuitry.

Underlying the concept of “mind uploading” (more accurately “mind transferring”) is the broad philosophy that consciousness lies within the brain’s information processing and is in essence an emergent feature that arises from large neural network high-level patterns of organization, and that the same patterns of organization can be realized in other processing devices. Mind uploading also relies on the idea that the human mind (the “self” and the long-term memory), just like non-human minds, is represented by the current neural network paths and the weights of the brain synapses rather than by a dualistic and mystic soul and spirit. The mind or “soul” can be defined as the information state of the brain, and is immaterial only in the same sense as the information content of a data file or the state of computer software currently residing in a computer’s working memory. Data specifying the information state of the neural network can be captured and copied as a “computer file” from the brain and re-implemented into a different physical form.[25] This is not to deny that minds are richly adapted to their substrates.[26] An analogy to the idea of mind uploading is to copy the temporary information state (the variable values) of a computer program from the computer memory to another computer and continue its execution. The other computer may perhaps have different hardware architecture but emulates the hardware of the first computer.
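The program-state analogy in the paragraph above can be demonstrated directly in Python: serialize a program’s variable state, “transfer” it, and resume execution on the copy. This is an analogy only, not a claim that brains admit a similar procedure:

```python
import pickle

# "Scan" a program's variable state, copy it, and resume execution on
# the copy; the original state is untouched.
state = {"counter": 41, "history": [1, 2, 3]}
snapshot = pickle.dumps(state)        # capture the information state

restored = pickle.loads(snapshot)     # re-implement it "elsewhere"
restored["counter"] += 1              # execution continues on the copy
```

Note that after the copy there are two independent states, which is exactly the situation driving the identity debates discussed below.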

These issues have a long history. In 1775 Thomas Reid wrote:[27] “I would be glad to know… whether when my brain has lost its original structure, and when some hundred years after the same materials are fabricated so curiously as to become an intelligent being, whether, I say that being will be me; or, if, two or three such beings should be formed out of my brain; whether they will all be me, and consequently one and the same intelligent being.”

A considerable portion of transhumanists and singularitarians place great hope in the belief that they may become immortal by creating one or many non-biological functional copies of their brains, thereby leaving their “biological shell”. However, the philosopher and transhumanist Susan Schneider claims that at best, uploading would create a copy of the original person’s mind.[28] Schneider agrees that consciousness has a computational basis, but argues that this does not mean we can upload and survive. In her view, “uploading” would probably result in the death of the original person’s brain, while only outside observers could maintain the illusion of the original person still being alive: it is implausible to think that one’s consciousness would leave one’s brain and travel to a remote location, since ordinary physical objects do not behave this way. Ordinary objects (rocks, tables, etc.) are not simultaneously here and somewhere else. At best, a copy of the original mind is created.[28] Others have argued against such conclusions. For example, Buddhist transhumanist James Hughes has pointed out that this consideration only goes so far: if one believes the self is an illusion, worries about survival are not reasons to avoid uploading,[29] and Keith Wiley has presented an argument wherein all resulting minds of an uploading procedure are granted equal primacy in their claim to the original identity, such that survival of the self is determined retroactively from a strictly subjective position.[30][31]

Another potential consequence of mind uploading is that the decision to “upload” may then create a mindless symbol manipulator instead of a conscious mind (see philosophical zombie).[32][33] Are we to assume that an upload is conscious if it displays behaviors that are highly indicative of consciousness? Are we to assume that an upload is conscious if it verbally insists that it is conscious?[34] Could there be an absolute upper limit in processing speed above which consciousness cannot be sustained? The mystery of consciousness precludes a definitive answer to this question.[35] Numerous scientists, including Kurzweil, strongly believe that determining whether a separate entity is conscious (with 100% confidence) is fundamentally unknowable, since consciousness is inherently subjective (see solipsism). Regardless, some scientists strongly believe consciousness is the consequence of computational processes which are substrate-neutral. By contrast, numerous scientists believe consciousness may be the result of some form of quantum computation dependent on substrate (see quantum mind).[36][37][38]

In light of uncertainty on whether to regard uploads as conscious, Sandberg proposes a cautious approach:[39]

Principle of assuming the most (PAM): Assume that any emulated system could have the same mental properties as the original system and treat it correspondingly.

It is argued that if a computational copy of one’s mind did exist, it would be impossible for one to recognize it as their own mind.[40] The argument for this stance is the following: for a computational mind to recognize an emulation of itself, it must be capable of deciding whether two Turing machines (namely, itself and the proposed emulation) are functionally equivalent. This task is uncomputable due to the undecidability of equivalence, thus there cannot exist a computational procedure in the mind that is capable of recognizing an emulation of itself.
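The undecidability argument sketched above follows a standard reduction: any decider for functional equivalence of machines could be turned into a decider for the halting problem, which cannot exist. A Python sketch, with machines modeled as functions and a toy stand-in for the (impossible) equivalence decider:

```python
def make_run_then_accept(machine, x):
    """Machine that ignores its input, runs `machine` on x, then accepts."""
    def m1(_):
        machine(x)        # may never return if `machine` loops on x
        return True
    return m1

def always_accept(_):
    return True

def halting_decider(machine, x, eq):
    """If `eq` decided functional equivalence of two machines, this
    would decide the halting problem -- impossible, so no total `eq`
    can exist. The two machines agree on every input exactly when
    `machine` halts on x."""
    return eq(make_run_then_accept(machine, x), always_accept)

# Toy stand-in for `eq` that merely samples a few inputs (a real, total
# equivalence decider cannot exist); used here only on a machine that
# clearly halts, to exercise the construction:
def toy_eq(a, b):
    return all(a(i) == b(i) for i in range(3))

halts = halting_decider(lambda n: n * n, 5, toy_eq)
```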

The process of developing emulation technology raises ethical issues related to animal welfare and artificial consciousness.[39] The neuroscience required to develop brain emulation would require animal experimentation, first on invertebrates and then on small mammals before moving on to humans. Sometimes the animals would just need to be euthanized in order to extract, slice, and scan their brains, but sometimes behavioral and in vivo measures would be required, which might cause pain to living animals.[39]

In addition, the resulting animal emulations themselves might suffer, depending on one’s views about consciousness.[39] Bancroft argues for the plausibility of consciousness in brain simulations on the basis of the “fading qualia” thought experiment of David Chalmers. He then concludes:[41] If, as I argue above, a sufficiently detailed computational simulation of the brain is potentially operationally equivalent to an organic brain, it follows that we must consider extending protections against suffering to simulations.

It might help reduce emulation suffering to develop virtual equivalents of anaesthesia, as well as to omit processing related to pain and/or consciousness. However, some experiments might require a fully functioning and suffering animal emulation. Animals might also suffer by accident due to flaws and lack of insight into what parts of their brains are suffering.[39] Questions also arise regarding the moral status of partial brain emulations, as well as creating neuromorphic emulations that draw inspiration from biological brains but are built somewhat differently.[41]

Brain emulations could be erased by computer viruses or malware, without need to destroy the underlying hardware. This may make assassination easier than for physical humans. The attacker might take the computing power for its own use.[42]

Many questions arise regarding the legal personhood of emulations.[43] Would they be given the rights of biological humans? If a person makes an emulated copy of himself and then dies, does the emulation inherit his property and official positions? Could the emulation ask to “pull the plug” when its biological version was terminally ill or in a coma? Would it help to treat emulations as adolescents for a few years so that the biological creator would maintain temporary control? Would criminal emulations receive the death penalty, or would they be given forced data modification as a form of “rehabilitation”? Could an upload have marriage and child-care rights?[43]

If simulated minds became reality and were assigned rights of their own, it might be difficult to ensure the protection of “digital human rights”. For example, social science researchers might be tempted to secretly subject simulated minds, or whole isolated societies of simulated minds, to controlled experiments in which many copies of the same minds are exposed (serially or simultaneously) to different test conditions.[citation needed]

Emulations could create a number of conditions that might increase risk of war, including inequality, changes of power dynamics, a possible technological arms race to build emulations first, first-strike advantages, strong loyalty and willingness to “die” among emulations, and triggers for racist, xenophobic, and religious prejudice.[42] If emulations run much faster than humans, there might not be enough time for human leaders to make wise decisions or negotiate. It is possible that humans would react violently against growing power of emulations, especially if they depress human wages. Emulations may not trust each other, and even well-intentioned defensive measures might be interpreted as offense.[42]

There are very few feasible technologies that humans have refrained from developing. The neuroscience and computer-hardware technologies that may make brain emulation possible are widely desired for other reasons, and logically their development will continue into the future. Assuming that emulation technology will arrive, a question becomes whether we should accelerate or slow its advance.[42]

Arguments for speeding up brain-emulation research:

Arguments for slowing down brain-emulation research:

Emulation research would also speed up neuroscience as a whole, which might accelerate medical advances, cognitive enhancement, lie detectors, and capability for psychological manipulation.[48]

Emulations might be easier to control than de novo AI because

As counterpoint to these considerations, Bostrom notes some downsides:

Ray Kurzweil, director of engineering at Google, has long predicted that people will be able to “upload” their entire brains to computers and become “digitally immortal” by 2045. He has made this claim for many years, e.g. during his 2013 speech at the Global Futures 2045 International Congress in New York, an event organized around a similar set of beliefs.[49] Mind uploading was also advocated by a number of researchers in neuroscience and artificial intelligence, such as the late Marvin Minsky.[citation needed] In 1993, Joe Strout created a small website called the Mind Uploading Home Page and began advocating the idea in cryonics circles and elsewhere on the net. That site has not been actively updated in recent years, but it has spawned other sites, including MindUploading.org, run by Randal A. Koene, who also moderates a mailing list on the topic. These advocates see mind uploading as a medical procedure which could eventually save countless lives.

Many transhumanists look forward to the development and deployment of mind uploading technology, with transhumanists such as Nick Bostrom predicting that it will become possible within the 21st century due to technological trends such as Moore’s law.[4]

Michio Kaku, in collaboration with Science, hosted a documentary, Sci Fi Science: Physics of the Impossible, based on his book Physics of the Impossible. Episode four, titled “How to Teleport”, mentions that mind uploading via techniques such as quantum entanglement and whole brain emulation using an advanced MRI machine may enable people to be transported across vast distances at near light speed.

The book Beyond Humanity: CyberEvolution and Future Minds by Gregory S. Paul & Earl D. Cox, is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but also deals with human mind transfer. Richard Doyle’s Wetwares: Experiments in PostVital Living deals extensively with uploading from the perspective of distributed embodiment, arguing for example that humans are currently part of the “artificial life phenotype”. Doyle’s vision reverses the polarity on uploading, with artificial life forms such as uploads actively seeking out biological embodiment as part of their reproductive strategy.

Kenneth D. Miller, a professor of neuroscience at Columbia and a co-director of the Center for Theoretical Neuroscience, has raised doubts about the practicality of mind uploading. His major argument is that reconstructing neurons and their connections is in itself a formidable task, but far from sufficient. Operation of the brain depends on the dynamics of electrical and biochemical signal exchange between neurons; therefore, capturing them in a single “frozen” state may prove insufficient. In addition, the nature of these signals may require modeling down to the molecular level and beyond. Therefore, while not rejecting the idea in principle, Miller believes that the complexity of “absolute” duplication of an individual mind will remain insurmountable for the next several hundred years.[50]

Mind map – Wikipedia

A mind map is a diagram used to visually organize information. A mind map is hierarchical and shows relationships among pieces of the whole.[1] It is often created around a single concept, drawn as an image in the center of a blank page, to which associated representations of ideas such as images, words and parts of words are added. Major ideas are connected directly to the central concept, and other ideas branch out from those.
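The hierarchical structure described above maps naturally onto a tree data structure. A minimal sketch, with invented example labels:

```python
# Minimal sketch: a mind map as a tree, with one central concept,
# major ideas attached directly, and further ideas branching from
# those. The labels are invented examples.
class Node:
    def __init__(self, label):
        self.label = label
        self.children = []

    def add(self, label):
        """Attach a new idea under this one and return it for chaining."""
        child = Node(label)
        self.children.append(child)
        return child

center = Node("Mind uploading")       # central concept
methods = center.add("Methods")       # major idea
methods.add("Serial sectioning")      # branches from a major idea
methods.add("Gradual replacement")
center.add("Ethics")
```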

Mind maps can be drawn by hand, either as “rough notes” during a lecture, meeting or planning session, for example, or as higher quality pictures when more time is available. Mind maps are considered to be a type of spider diagram.[2] A similar concept in the 1970s was “idea sun bursting”.[3]

Although the term “mind map” was first popularized by British popular psychology author and television personality Tony Buzan, the use of diagrams that visually “map” information using branching and radial maps traces back centuries. These pictorial methods record knowledge and model systems, and have a long history in learning, brainstorming, memory, visual thinking, and problem solving by educators, engineers, psychologists, and others. Some of the earliest examples of such graphical records were developed by Porphyry of Tyros, a noted thinker of the 3rd century, as he graphically visualized the concept categories of Aristotle. Philosopher Ramon Llull (1235–1315) also used such techniques.

The semantic network was developed in the late 1950s as a theory to understand human learning and developed further by Allan M. Collins and M. Ross Quillian during the early 1960s. Mind maps are similar in radial structure to concept maps, developed by learning experts in the 1970s, but differ in that the former are simplified by focusing around a single central key concept.

Buzan’s specific approach, and the introduction of the term “mind map”, arose during a 1974 BBC TV series he hosted, called Use Your Head.[4][5] In this show, and in the companion book series, Buzan promoted his conception of the radial tree, diagramming key words in a colorful, radiant, tree-like structure.[6]

Buzan says the idea was inspired by Alfred Korzybski’s general semantics as popularized in science fiction novels, such as those of Robert A. Heinlein and A. E. van Vogt. He argues that while “traditional” outlines force readers to scan left to right and top to bottom, readers actually tend to scan the entire page in a non-linear fashion. Buzan’s treatment also uses then-popular assumptions about the functions of cerebral hemispheres in order to explain the claimed increased effectiveness of mind mapping over other forms of note making.

Buzan suggests the following guidelines for creating mind maps:

As with other diagramming tools, mind maps can be used to generate, visualize, structure, and classify ideas, and as an aid to studying[7] and organizing information, solving problems, making decisions, and writing.

Mind maps have many applications in personal, family, educational, and business situations, including notetaking, brainstorming (wherein ideas are inserted into the map radially around the center node, without the implicit prioritization that comes from hierarchy or sequential arrangements, and wherein grouping and organizing is reserved for later stages), summarizing, as a mnemonic technique, or to sort out a complicated idea. Mind maps are also promoted as a way to collaborate in color pen creativity sessions.

In addition to these direct use cases, data retrieved from mind maps can be used to enhance several other applications; for instance, expert search systems, search engines, and search-and-tag query recommenders.[8] To do so, mind maps can be analysed with classic methods of information retrieval to classify a mind map’s author or documents that are linked from within the mind map.[8]

Cunningham (2005) conducted a user study in which 80% of the students thought “mindmapping helped them understand concepts and ideas in science”.[9] Other studies also report some subjective positive effects of using mind maps.[10][11] Positive opinions on their effectiveness, however, were much more prominent among students of art and design than among students of computer and information technology, with 62.5% vs 34% (respectively) agreeing that they were able to understand concepts better with mind mapping software.[10] Farrand, Hussain, and Hennessy (2002) found that spider diagrams (similar to concept maps) had a limited, but significant, impact on memory recall in undergraduate students (a 10% increase over baseline for a 600-word text only) as compared to preferred study methods (a 6% increase over baseline).[12] This improvement was only robust after a week for those in the diagram group, and there was a significant decrease in motivation compared to the subjects’ preferred methods of note taking. A meta-study about concept mapping concluded that concept mapping is more effective than “reading text passages, attending lectures, and participating in class discussions”.[13] The same study also concluded that concept mapping is slightly more effective “than other constructive activities such as writing summaries and outlines”. However, results were inconsistent, with the authors noting that “significant heterogeneity was found in most subsets”. In addition, they concluded that low-ability students may benefit more from mind mapping than high-ability students.

Beel & Langer (2011) conducted a comprehensive analysis of the content of mind maps.[14] They analysed 19,379 mind maps from 11,179 users of the mind mapping applications SciPlore MindMapping (now Docear) and MindMeister. Results include that the average user creates only a few mind maps (mean = 2.7), that average mind maps are rather small (31 nodes), and that each node contains about three words (median). However, there were exceptions: one user created more than 200 mind maps, the largest mind map consisted of more than 50,000 nodes, and the largest node contained ~7500 words. The study also showed that significant differences exist between mind mapping applications (Docear vs. MindMeister) in how users create mind maps.
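The summary statistics reported above (mean maps per user, mean nodes per map, median words per node) are straightforward to compute once each mind map is represented as a list of per-node word counts. The toy data below is invented for illustration and is not the Beel & Langer dataset:

```python
from statistics import mean, median

# Hypothetical per-user data: each inner list is one mind map,
# each entry is one node's word count. Figures are illustrative only.
users = {
    "u1": [[3, 2, 4], [1, 3]],          # two small maps
    "u2": [[2, 3, 3, 5, 1]],            # one map
    "u3": [[4], [2, 2], [3, 1, 2, 6]],  # three maps
}

maps_per_user = [len(maps) for maps in users.values()]
nodes_per_map = [len(m) for maps in users.values() for m in maps]
words_per_node = [w for maps in users.values() for m in maps for w in m]

print("mean maps per user:", mean(maps_per_user))        # analogous to the reported 2.7
print("mean nodes per map:", mean(nodes_per_map))        # analogous to the reported 31
print("median words per node:", median(words_per_node))  # analogous to the reported 3
```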

There have been some attempts to create mind maps automatically. Brucks & Schommer created mind maps automatically from full-text streams.[15] Rothenberger et al. extracted the main story of a text and presented it as a mind map.[16] There is also a patent on automatically creating sub-topics in mind maps.[17]

Mind-mapping software can be used to organize large amounts of information, combining spatial organization, dynamic hierarchical structuring and node folding. Software packages can extend the concept of mind-mapping by allowing individuals to map more than thoughts and ideas with information on their computers and the Internet, like spreadsheets, documents, Internet sites and images.[18] It has been suggested that mind-mapping can improve learning/study efficiency up to 15% over conventional note-taking.[12]
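The dynamic hierarchical structuring and node folding that such software provides can be sketched with a minimal tree type; the class and method names here are illustrative, not any particular package’s API:

```python
class MindMapNode:
    """A node in a mind map tree; a node's children can be folded (hidden)."""

    def __init__(self, label):
        self.label = label
        self.children = []
        self.folded = False  # when True, children are hidden from display

    def add(self, label):
        """Attach a new child node and return it."""
        child = MindMapNode(label)
        self.children.append(child)
        return child

    def visible_nodes(self):
        """Yield labels of all nodes visible under the current fold state."""
        yield self.label
        if not self.folded:
            for child in self.children:
                yield from child.visible_nodes()

root = MindMapNode("Project")
plan = root.add("Plan")
plan.add("Budget")
plan.add("Schedule")
root.add("Team")

plan.folded = True  # collapse the "Plan" branch
print(list(root.visible_nodes()))  # ['Project', 'Plan', 'Team']
```

Folding simply prunes a subtree from the traversal while leaving the data intact, which is what lets large maps stay navigable.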


Hedonism – Wikipedia

Hedonism is a school of thought that argues that pleasure and happiness are the primary or most important intrinsic goods and the aim of human life.[1] A hedonist strives to maximize net pleasure (pleasure minus pain), but once that pleasure is finally gained, happiness remains stationary.

Ethical hedonism is the idea that all people have the right to do everything in their power to achieve the greatest amount of pleasure possible to them. It is also the idea that every person’s pleasure should far surpass their amount of pain. Ethical hedonism is said to have been started by Aristippus of Cyrene, a student of Socrates. He held the idea that pleasure is the highest good.[2]

The name derives from the Greek word for “delight” (hēdonismos, from hēdonē “pleasure”, cognate[according to whom?] with English sweet, plus the suffix -ismos, “ism”). An extremely strong aversion to hedonism is hedonophobia.

In the original Old Babylonian version of the Epic of Gilgamesh, which was written soon after the invention of writing, Siduri gave the following advice: “Fill your belly. Day and night make merry. Let days be full of joy. Dance and make music day and night […] These things alone are the concern of men”, which may represent the first recorded advocacy of a hedonistic philosophy.[3]

Scenes of a harper entertaining guests at a feast were common in ancient Egyptian tombs (see Harper’s Songs), and sometimes contained hedonistic elements, calling guests to submit to pleasure because they cannot be sure that they will be rewarded for good with a blissful afterlife. The following is a song attributed to the reign of one of the pharaohs around the time of the 12th dynasty, and the text was used in the eighteenth and nineteenth dynasties.[4][5]

Let thy desire flourish,
In order to let thy heart forget the beatifications for thee.
Follow thy desire, as long as thou shalt live.
Put myrrh upon thy head and clothing of fine linen upon thee,
Being anointed with genuine marvels of the gods’ property.
Set an increase to thy good things;
Let not thy heart flag.
Follow thy desire and thy good.
Fulfill thy needs upon earth, after the command of thy heart,
Until there come for thee that day of mourning.

Democritus seems to be the earliest philosopher on record to have categorically embraced a hedonistic philosophy; he called the supreme goal of life “contentment” or “cheerfulness”, claiming that “joy and sorrow are the distinguishing mark of things beneficial and harmful” (DK 68 B 188).[6]

The Cyrenaics were an ultra-hedonist Greek school of philosophy founded in the 4th century BC, supposedly by Aristippus of Cyrene, although many of the principles of the school are believed to have been formalized by his grandson of the same name, Aristippus the Younger. The school was so called after Cyrene, the birthplace of Aristippus. It was one of the earliest Socratic schools. The Cyrenaics taught that the only intrinsic good is pleasure, which meant not just the absence of pain, but positively enjoyable sensations. Of these, momentary pleasures, especially physical ones, are stronger than those of anticipation or memory. They did, however, recognize the value of social obligation, and that pleasure could be gained from altruism[citation needed]. Theodorus the Atheist was a later exponent of hedonism who was a disciple of the younger Aristippus,[7] while becoming well known for expounding atheism. The school died out within a century and was replaced by Epicureanism.

The Cyrenaics were known for their skeptical theory of knowledge. They reduced logic to a basic doctrine concerning the criterion of truth.[8] They thought that we can know with certainty our immediate sense-experiences (for instance, that I am having a sweet sensation now) but can know nothing about the nature of the objects that cause these sensations (for instance, that the honey is sweet).[9] They also denied that we can have knowledge of what the experiences of other people are like.[10] All knowledge is immediate sensation. These sensations are motions which are purely subjective, and are painful, indifferent or pleasant, according as they are violent, tranquil or gentle.[9][11] Further they are entirely individual, and can in no way be described as constituting absolute objective knowledge. Feeling, therefore, is the only possible criterion of knowledge and of conduct.[9] Our ways of being affected are alone knowable. Thus the sole aim for everyone should be pleasure.

Cyrenaicism deduces a single, universal aim for all people which is pleasure. Furthermore, all feeling is momentary and homogeneous. It follows that past and future pleasure have no real existence for us, and that among present pleasures there is no distinction of kind.[11] Socrates had spoken of the higher pleasures of the intellect; the Cyrenaics denied the validity of this distinction and said that bodily pleasures, being more simple and more intense, were preferable.[12] Momentary pleasure, preferably of a physical kind, is the only good for humans. However some actions which give immediate pleasure can create more than their equivalent of pain. The wise person should be in control of pleasures rather than be enslaved to them, otherwise pain will result, and this requires judgement to evaluate the different pleasures of life.[13] Regard should be paid to law and custom, because even though these things have no intrinsic value on their own, violating them will lead to unpleasant penalties being imposed by others.[12] Likewise, friendship and justice are useful because of the pleasure they provide.[12] Thus the Cyrenaics believed in the hedonistic value of social obligation and altruistic behaviour.

Epicureanism is a system of philosophy based upon the teachings of Epicurus (c. 341 – c. 270 BC), founded around 307 BC. Epicurus was an atomic materialist, following in the steps of Democritus and Leucippus. His materialism led him to a general stance against superstition or the idea of divine intervention. Following Aristippus, about whom very little is known, Epicurus believed that the greatest good was to seek modest, sustainable “pleasure” in the form of a state of tranquility and freedom from fear (ataraxia) and absence of bodily pain (aponia) through knowledge of the workings of the world and the limits of our desires. The combination of these two states is supposed to constitute happiness in its highest form. Although Epicureanism is a form of hedonism, insofar as it declares pleasure as the sole intrinsic good, its conception of absence of pain as the greatest pleasure and its advocacy of a simple life make it different from “hedonism” as it is commonly understood.

In the Epicurean view, the highest pleasure (tranquility and freedom from fear) was obtained by knowledge, friendship and living a virtuous and temperate life. He lauded the enjoyment of simple pleasures, by which he meant abstaining from bodily desires, such as sex and appetites, verging on asceticism. He argued that when eating, one should not eat too richly, for it could lead to dissatisfaction later, such as the grim realization that one could not afford such delicacies in the future. Likewise, sex could lead to increased lust and dissatisfaction with the sexual partner. Epicurus did not articulate a broad system of social ethics that has survived but had a unique version of the Golden Rule.

It is impossible to live a pleasant life without living wisely and well and justly (agreeing “neither to harm nor be harmed”),[14] and it is impossible to live wisely and well and justly without living a pleasant life.[15]

Epicureanism was originally a challenge to Platonism, though later it became the main opponent of Stoicism. Epicurus and his followers shunned politics. After the death of Epicurus, his school was headed by Hermarchus; later many Epicurean societies flourished in the Late Hellenistic era and during the Roman era (such as those in Antiochia, Alexandria, Rhodes and Ercolano). The poet Lucretius is its best-known Roman proponent. By the end of the Roman Empire, having undergone Christian attack and repression, Epicureanism had all but died out; it would be resurrected in the 17th century by the atomist Pierre Gassendi, who adapted it to Christian doctrine.

Some writings by Epicurus have survived. Some scholars consider the epic poem On the Nature of Things by Lucretius to present in one unified work the core arguments and theories of Epicureanism. Many of the papyrus scrolls unearthed at the Villa of the Papyri at Herculaneum are Epicurean texts. At least some are thought to have belonged to the Epicurean Philodemus.

Yangism has been described as a form of psychological and ethical egoism. The Yangist philosophers believed in the importance of maintaining self-interest through “keeping one’s nature intact, protecting one’s uniqueness, and not letting the body be tied by other things.” Disagreeing with the Confucian virtues of li (propriety), ren (humaneness), and yi (righteousness) and the Legalist virtue of fa (law), the Yangists saw wei wo, or “everything for myself,” as the only virtue necessary for self-cultivation. Individual pleasure is considered desirable, like in hedonism, but not at the expense of the health of the individual. The Yangists saw individual well-being as the prime purpose of life, and considered anything that hindered that well-being immoral and unnecessary.

The main focus of the Yangists was on the concept of xing, or human nature, a term later incorporated by Mencius into Confucianism. The xing, according to sinologist A. C. Graham, is a person’s “proper course of development” in life. Individuals can only rationally care for their own xing, and should not naively have to support the xing of other people, even if it means opposing the emperor. In this sense, Yangism is a “direct attack” on Confucianism, by implying that the power of the emperor, defended in Confucianism, is baseless and destructive, and that state intervention is morally flawed.

The Confucian philosopher Mencius depicts Yangism as the direct opposite of Mohism: while Mohism promotes the idea of universal love and impartial caring, the Yangists acted only “for themselves,” rejecting the altruism of Mohism. He criticized the Yangists as selfish, ignoring the duty of serving the public and caring only for personal concerns. Mencius saw Confucianism as the “Middle Way” between Mohism and Yangism.

Judaism believes that mankind was created for pleasure, as God placed Adam and Eve in the Garden of Eden – Eden being the Hebrew word for “pleasure.” In recent years, Rabbi Noah Weinberg articulated five different levels of pleasure; connecting with God is the highest possible pleasure.

Christian Hedonism is a Christian doctrine current in some evangelical circles, particularly those of the Reformed tradition.[16] The term was first coined by Reformed Baptist theologian John Piper in his 1986 book Desiring God: “My shortest summary of it is: God is most glorified in us when we are most satisfied in him. Or: The chief end of man is to glorify God by enjoying him forever. Does Christian Hedonism make a god out of pleasure? No. It says that we all make a god out of what we take most pleasure in.”[16] Piper states his term may describe the theology of Jonathan Edwards, who referred to “a future enjoyment of him [God] in heaven”.[17] In the 17th century, the atomist Pierre Gassendi adapted Epicureanism to Christian doctrine.

The concept of hedonism is also found in the Hindu scriptures.[18][19]

Utilitarianism addresses problems with moral motivation neglected by Kantianism by giving a central role to happiness. It is an ethical theory holding that the proper course of action is the one that maximizes the overall good of society.[20] It is thus one form of consequentialism, meaning that the moral worth of an action is determined by its resulting outcome. The most influential contributors to this theory are considered to be the 18th- and 19th-century British philosophers Jeremy Bentham and John Stuart Mill. Conjoining hedonism, as a view as to what is good for people, to utilitarianism has the result that all action should be directed toward achieving the greatest total amount of happiness (see Hedonic calculus). Though consistent in their pursuit of happiness, Bentham’s and Mill’s versions of hedonism differ. There are two somewhat basic schools of thought on hedonism.[1]

Contemporary proponents of hedonism include the Swedish philosopher Torbjörn Tännsjö,[21] Fred Feldman,[22] and the Spanish moral philosopher Esperanza Guisán, who published a “Hedonist manifesto” in 1990.[23]

A dedicated contemporary hedonist philosopher and writer on the history of hedonistic thought is the Frenchman Michel Onfray. He has written two books directly on the subject (L’invention du plaisir: fragments cyrénaïques[24] and La puissance d’exister: Manifeste hédoniste).[25] He defines hedonism “as an introspective attitude to life based on taking pleasure yourself and pleasuring others, without harming yourself or anyone else.”[26] “Onfray’s philosophical project is to define an ethical hedonism, a joyous utilitarianism, and a generalized aesthetic of sensual materialism that explores how to use the brain’s and the body’s capacities to their fullest extent — while restoring philosophy to a useful role in art, politics, and everyday life and decisions.”[27]

Onfray’s works “have explored the philosophical resonances and components of (and challenges to) science, painting, gastronomy, sex and sensuality, bioethics, wine, and writing. His most ambitious project is his projected six-volume Counter-history of Philosophy,”[27] of which three have been published. For him, “In opposition to the ascetic ideal advocated by the dominant school of thought, hedonism suggests identifying the highest good with your own pleasure and that of others; the one must never be indulged at the expense of sacrificing the other. Obtaining this balance – my pleasure at the same time as the pleasure of others – presumes that we approach the subject from different angles: political, ethical, aesthetic, erotic, bioethical, pedagogical, historiographical.”

For this he has “written books on each of these facets of the same world view.”[28] His philosophy aims for “micro-revolutions”, or “revolutions of the individual and small groups of like-minded people who live by his hedonistic, libertarian values.”[29]

The Abolitionist Society is a transhumanist group calling for the abolition of suffering in all sentient life through the use of advanced biotechnology. Their core philosophy is negative utilitarianism. David Pearce is a theorist of this perspective and he believes and promotes the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient life. His book-length internet manifesto The Hedonistic Imperative[30] outlines how technologies such as genetic engineering, nanotechnology, pharmacology, and neurosurgery could potentially converge to eliminate all forms of unpleasant experience among human and non-human animals, replacing suffering with gradients of well-being, a project he refers to as “paradise engineering”.[31] A transhumanist and a vegan,[32] Pearce believes that we (or our future posthuman descendants) have a responsibility not only to avoid cruelty to animals within human society but also to alleviate the suffering of animals in the wild.

In a talk David Pearce gave at the Future of Humanity Institute and at the Charity International ‘Happiness Conference’ he said “Sadly, what won’t abolish suffering, or at least not on its own, is socio-economic reform, or exponential economic growth, or technological progress in the usual sense, or any of the traditional panaceas for solving the world’s ills. Improving the external environment is admirable and important; but such improvement can’t recalibrate our hedonic treadmill above a genetically constrained ceiling. Twin studies confirm there is a [partially] heritable set-point of well-being – or ill-being – around which we all tend to fluctuate over the course of a lifetime. This set-point varies between individuals. [It’s possible to lower an individual’s hedonic set-point by inflicting prolonged uncontrolled stress; but even this re-set is not as easy as it sounds: suicide-rates typically go down in wartime; and six months after a quadriplegia-inducing accident, studies[citation needed] suggest that we are typically neither more nor less unhappy than we were before the catastrophic event.] Unfortunately, attempts to build an ideal society can’t overcome this biological ceiling, whether utopias of the left or right, free-market or socialist, religious or secular, futuristic high-tech or simply cultivating one’s garden. Even if everything that traditional futurists have asked for is delivered – eternal youth, unlimited material wealth, morphological freedom, superintelligence, immersive VR, molecular nanotechnology, etc – there is no evidence that our subjective quality of life would on average significantly surpass the quality of life of our hunter-gatherer ancestors – or a New Guinea tribesman today – in the absence of reward pathway enrichment. This claim is difficult to prove in the absence of sophisticated neuroscanning; but objective indices of psychological distress e.g. suicide rates, bear it out. 
Unenhanced humans will still be prey to the spectrum of Darwinian emotions, ranging from terrible suffering to petty disappointments and frustrations – sadness, anxiety, jealousy, existential angst. Their biology is part of “what it means to be human”. Subjectively unpleasant states of consciousness exist because they were genetically adaptive. Each of our core emotions had a distinct signalling role in our evolutionary past: they tended to promote behaviours that enhanced the inclusive fitness of our genes in the ancestral environment.”[33]

Russian physicist and philosopher Victor Argonov argues that hedonism is not only a philosophical but also a verifiable scientific hypothesis. In 2014 he suggested “postulates of pleasure principle”, confirmation of which would lead to a new scientific discipline, hedodynamics. Hedodynamics would be able to forecast the distant future development of human civilization and even the probable structure and psychology of other rational beings within the universe.[34] In order to build such a theory, science must discover the neural correlate of pleasure – a neurophysiological parameter unambiguously corresponding to the feeling of pleasure (hedonic tone).

According to Argonov, posthumans will be able to reprogram their motivations in an arbitrary manner (to get pleasure from any programmed activity).[35] If the pleasure-principle postulates are true, then the general direction of civilization’s development is obvious: maximization of integral happiness in posthuman life (the product of life span and average happiness). Posthumans will avoid constant pleasure stimulation, because it is incompatible with the rational behavior required to prolong life. On average, however, they could become much happier than modern humans.
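Argonov’s “integral happiness” (life span multiplied by average happiness) can be made concrete with a toy calculation; the numbers and the 0–10 happiness scale below are invented for illustration:

```python
def integral_happiness(life_span_years, avg_happiness):
    """Argonov-style integral happiness: life span times average happiness.
    avg_happiness is on an arbitrary scale, here 0-10."""
    return life_span_years * avg_happiness

# A long, moderately happy life outscores a short, intensely
# pleasure-stimulated one, which is why constant stimulation that
# shortened life would be avoided under this metric.
short_intense = integral_happiness(40, 9)   # 360
long_moderate = integral_happiness(200, 7)  # 1400
print(long_moderate > short_intense)  # True
```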

Many other aspects of posthuman society could be predicted by hedodynamics if the neural correlate of pleasure were discovered: for example, the optimal number of individuals, their optimal body size (whether or not it matters for happiness), and their degree of aggression.

Critics of hedonism have objected to its exclusive concentration on pleasure as valuable.

In particular, G. E. Moore offered a thought experiment in criticism of pleasure as the sole bearer of value: he imagined two worlds, one of exceeding beauty and the other a heap of filth. Neither of these worlds will be experienced by anyone. The question, then, is whether it is better for the beautiful world to exist than the heap of filth. In this way Moore implied that states of affairs have value beyond conscious pleasure, which he said spoke against the validity of hedonism.[36]

In the Quran, God admonishes mankind not to love worldly pleasures, since they are related to greed and are a source of sinful habits. He also threatens those who prefer the worldly life over the hereafter with Hell.

Those who choose the worldly life and its pleasures will be given proper recompense for their deeds in this life and will not suffer any loss. Such people will receive nothing in the next life except Hell fire. Their deeds will be made devoid of all virtue and their efforts will be in vain.


Hedonism | Internet Encyclopedia of Philosophy

The term “hedonism,” from the Greek word hēdonē (“pleasure”), refers to several related theories about what is good for us, how we should behave, and what motivates us to behave in the way that we do. All hedonistic theories identify pleasure and pain as the only important elements of whatever phenomena they are designed to describe. If hedonistic theories identified pleasure and pain as merely two important elements, instead of the only important elements of what they are describing, then they would not be nearly as unpopular as they all are. However, the claim that pleasure and pain are the only things of ultimate importance is what makes hedonism distinctive and philosophically interesting.

Philosophical hedonists tend to focus on hedonistic theories of value, and especially of well-being (the good life for the one living it). As a theory of value, hedonism states that all and only pleasure is intrinsically valuable and all and only pain is intrinsically not valuable. Hedonists usually define pleasure and pain broadly, such that both physical and mental phenomena are included. Thus, a gentle massage and recalling a fond memory are both considered to cause pleasure and stubbing a toe and hearing about the death of a loved one are both considered to cause pain. With pleasure and pain so defined, hedonism as a theory about what is valuable for us is intuitively appealing. Indeed, its appeal is evidenced by the fact that nearly all historical and contemporary treatments of well-being allocate at least some space for discussion of hedonism. Unfortunately for hedonism, the discussions rarely endorse it and some even deplore its focus on pleasure.

This article begins by clarifying the different types of hedonistic theories and the labels they are often given. Then, hedonism’s ancient origins and its subsequent development are reviewed. The majority of this article is concerned with describing the important theoretical divisions within Prudential Hedonism and discussing the major criticisms of these approaches.

When the term “hedonism” is used in modern literature, or by non-philosophers in their everyday talk, its meaning is quite different from the meaning it takes when used in the discussions of philosophers. Non-philosophers tend to think of a hedonist as a person who seeks out pleasure for themselves without any particular regard for their own future well-being or for the well-being of others. According to non-philosophers, then, a stereotypical hedonist is someone who never misses an opportunity to indulge in the pleasures of sex, drugs, and rock ’n’ roll, even if the indulgences are likely to lead to relationship problems, health problems, regrets, or sadness for themselves or others. Philosophers commonly refer to this everyday understanding of hedonism as “Folk Hedonism.” Folk Hedonism is a rough combination of Motivational Hedonism, Hedonistic Egoism, and a reckless lack of foresight.

When philosophers discuss hedonism, they are most likely to be referring to hedonism about value, and especially the slightly more specific theory, hedonism about well-being. Hedonism as a theory about value (best referred to as Value Hedonism) holds that all and only pleasure is intrinsically valuable and all and only pain is intrinsically disvaluable. The term “intrinsically” is an important part of the definition and is best understood in contrast to the term “instrumentally.” Something is intrinsically valuable if it is valuable for its own sake. Pleasure is thought to be intrinsically valuable because, even if it did not lead to any other benefit, it would still be good to experience. Money is an example of an instrumental good; its value for us comes from what we can do with it (what we can buy with it). The fact that a copious amount of money has no value if no one ever sells anything reveals that money lacks intrinsic value. Value Hedonism reduces everything of value to pleasure. For example, a Value Hedonist would explain the instrumental value of money by describing how the things we can buy with money, such as food, shelter, and status-signifying goods, bring us pleasure or help us to avoid pain.

Hedonism as a theory about well-being (best referred to as Prudential Hedonism) is more specific than Value Hedonism because it stipulates what the value is for. Prudential Hedonism holds that all and only pleasure intrinsically makes people’s lives go better for them and all and only pain intrinsically makes their lives go worse for them. Some philosophers replace “people” with “animals” or “sentient creatures,” so as to apply Prudential Hedonism more widely. A good example of this comes from Peter Singer’s work on animals and ethics. Singer questions why some humans can see the intrinsic disvalue in human pain, but do not also accept that it is bad for sentient non-human animals to experience pain.

When Prudential Hedonists claim that happiness is what they value most, they intend happiness to be understood as a preponderance of pleasure over pain. An important distinction between Prudential Hedonism and Folk Hedonism is that Prudential Hedonists usually understand that pursuing pleasure and avoiding pain in the very short-term is not always the best strategy for achieving the best long-term balance of pleasure over pain.

Prudential Hedonism is an integral part of several derivative types of hedonistic theory, all of which have featured prominently in philosophical debates of the past. Since Prudential Hedonism plays this important role, the majority of this article is dedicated to Prudential Hedonism. First, however, the main derivative types of hedonism are briefly discussed.

Motivational Hedonism (more commonly referred to by the less descriptive label, “Psychological Hedonism”) is the theory that the desires to encounter pleasure and to avoid pain guide all of our behavior. Most accounts of Motivational Hedonism include both conscious and unconscious desires for pleasure, but emphasize the latter. Epicurus, William James, Sigmund Freud, Jeremy Bentham, John Stuart Mill, and (on one interpretation) even Charles Darwin have all argued for varieties of Motivational Hedonism. Bentham used the idea to support his theory of Hedonistic Utilitarianism (discussed below). Weak versions of Motivational Hedonism hold that the desires to seek pleasure and avoid pain often or always have some influence on our behavior. Weak versions are generally considered to be uncontroversially true and not especially useful for philosophy.

Philosophers have been more interested in strong accounts of Motivational Hedonism, which hold that all behavior is governed by the desires to encounter pleasure and to avoid pain (and only those desires). Strong accounts of Motivational Hedonism have been used to support some of the normative types of hedonism and to argue against non-hedonistic normative theories. One of the most notable mentions of Motivational Hedonism is Plato’s Ring of Gyges example in The Republic. Plato’s Socrates is discussing with Glaucon how men would react if they were to possess a ring that gives its wearer immense powers, including invisibility. Glaucon believes that a strong version of Motivational Hedonism is true, but Socrates does not. Glaucon asserts that, emboldened with the power provided by the Ring of Gyges, everyone would succumb to the inherent and ubiquitous desire to pursue their own ends at the expense of others. Socrates disagrees, arguing that good people would be able to overcome this desire because of their strong love of justice, fostered through philosophising.

Strong accounts of Motivational Hedonism currently garner very little support for similar reasons. Many examples of seemingly-pain-seeking acts performed out of a sense of duty are well-known – from the soldier who jumps on a grenade to save his comrades, to that time you rescued a trapped dog only to be (predictably) bitten in the process. Introspective evidence also weighs against strong accounts of Motivational Hedonism; many of the decisions we make seem to be based on motives other than seeking pleasure and avoiding pain. Given these reasons, the burden of proof is considered to be squarely on the shoulders of anyone wishing to argue for a strong account of Motivational Hedonism.

Value Hedonism, occasionally with assistance from Motivational Hedonism, has been used to argue for specific theories of right action (theories that explain which actions are morally permissible or impermissible and why). The theory that happiness should be pursued (that pleasure should be pursued and pain should be avoided) is referred to as Normative Hedonism and sometimes Ethical Hedonism. There are two major types of Normative Hedonism, Hedonistic Egoism and Hedonistic Utilitarianism. Both types commonly use happiness (defined as pleasure minus pain) as the sole criterion for determining the moral rightness or wrongness of an action. Important variations within each of these two main types specify either the actual resulting happiness (after the act) or the predicted resulting happiness (before the act) as the moral criterion. Although both major types of Normative Hedonism have been accused of being repugnant, Hedonistic Egoism is considered the more offensive of the two.

Hedonistic Egoism is a hedonistic version of egoism, the theory that we should, morally speaking, do whatever is most in our own interests. Hedonistic Egoism is the theory that we ought, morally speaking, to do whatever makes us happiest, that is, whatever provides us with the most net pleasure after pain is subtracted. The most repugnant feature of this theory is that one never has to ascribe any value whatsoever to the consequences for anyone other than oneself. For example, a Hedonistic Egoist who did not feel saddened by theft would be morally required to steal, even from needy orphans (if he thought he could get away with it). Would-be defenders of Hedonistic Egoism often point out that performing acts of theft, murder, treachery and the like would not make them happier overall because of the guilt, the fear of being caught, and the chance of being caught and punished. The would-be defenders tend to surrender, however, when it is pointed out that a Hedonistic Egoist is morally obliged by their own theory to pursue an unusual kind of practical education: a brief and possibly painful training period that reduces their moral emotions of sympathy and guilt. Such an education might be achieved by desensitising over-exposure to, and performance of, torture on innocents. If Hedonistic Egoists underwent such an education, their reduced capacity for sympathy and guilt would allow them to take advantage of any opportunities to perform pleasurable, but normally guilt-inducing, actions, such as stealing from the poor.

Hedonistic Egoism is very unpopular amongst philosophers, not just for this reason, but also because it suffers from all of the objections that apply to Prudential Hedonism.

Hedonistic Utilitarianism is the theory that the right action is the one that produces (or is most likely to produce) the greatest net happiness for all concerned. Hedonistic Utilitarianism is often considered fairer than Hedonistic Egoism because the happiness of everyone involved (everyone who is affected or likely to be affected) is taken into account and given equal weight. Hedonistic Utilitarians, then, tend to advocate not stealing from needy orphans because to do so would usually leave the orphan far less happy and the (probably better-off) thief only slightly happier (assuming he felt no guilt). Despite treating all individuals equally, Hedonistic Utilitarianism is still seen as objectionable by some because it assigns no intrinsic moral value to justice, friendship, truth, or any of the many other goods that are thought by some to be irreducibly valuable. For example, a Hedonistic Utilitarian would be morally obliged to publicly execute an innocent friend of theirs if doing so was the only way to promote the greatest happiness overall. Although unlikely, such a situation might arise if a child was murdered in a small town and the lack of suspects was causing large-scale inter-ethnic violence. Some philosophers argue that executing an innocent friend is immoral precisely because it ignores the intrinsic values of justice, friendship, and possibly truth.

Hedonistic Utilitarianism is rarely endorsed by philosophers, but mainly because of its reliance on Prudential Hedonism rather than because of its utilitarian element. Non-hedonistic versions of utilitarianism are about as popular as the other leading theories of right action, especially when it is the actions of institutions that are being considered.

Perhaps the earliest written record of hedonism comes from the Cārvāka, an Indian philosophical tradition based on the Bārhaspatya sutras. The Cārvāka tradition persisted for two thousand years (from about 600 B.C.E.). Most notably, the Cārvāka advocated scepticism and Hedonistic Egoism: the view that the right action is the one that brings the actor the most net pleasure. The Cārvāka acknowledged that some pain often accompanied, or was later caused by, sensual pleasure, but held that the pleasure was worth it.

The Cyrenaics, founded by Aristippus (c. 435-356 B.C.E.), were also sceptics and Hedonistic Egoists. Although the paucity of original texts makes it difficult to confidently state all of the justifications for the Cyrenaics' positions, their overall stance is clear enough. The Cyrenaics believed pleasure was the ultimate good and everyone should pursue all immediate pleasures for themselves. They considered bodily pleasures better than mental pleasures, presumably because they were more vivid or trustworthy. The Cyrenaics also recommended pursuing immediate pleasures and avoiding immediate pains with scant or no regard for future consequences. Their reasoning for this is even less clear, but is most plausibly linked to their sceptical views: perhaps what we can be most sure of in this uncertain existence is our current bodily pleasures.

Epicurus (c. 341-271 B.C.E.), founder of Epicureanism, developed a Normative Hedonism in stark contrast to that of Aristippus. The Epicureanism of Epicurus is also quite the opposite of the common usage of Epicureanism; while we might like to go on a luxurious “Epicurean” holiday packed with fine dining and moderately excessive wining, Epicurus would warn us that we are only setting ourselves up for future pain. For Epicurus, happiness was the complete absence of bodily and especially mental pains, including fear of the Gods and desires for anything other than the bare necessities of life. Even with only the limited excesses of ancient Greece on offer, Epicurus advised his followers to avoid towns, and especially marketplaces, in order to limit the resulting desires for unnecessary things. Once we experience unnecessary pleasures, such as those from sex and rich food, we will then suffer from painful and hard-to-satisfy desires for more and better of the same. No matter how wealthy we might be, Epicurus would argue, our desires will eventually outstrip our means and interfere with our ability to live tranquil, happy lives. Epicureanism is generally egoistic, in that it encourages everyone to pursue happiness for themselves. However, Epicureans would be unlikely to commit any of the selfish acts we might expect from other egoists because Epicureans train themselves to desire only the very basics, which gives them very little reason to do anything to interfere with the affairs of others.

With the exception of a brief period discussed below, Hedonism has been generally unpopular ever since its ancient beginnings. Although criticisms of the ancient forms of hedonism were many and varied, one in particular was heavily cited. In Philebus, Plato's Socrates and one of his many foils, Protarchus in this instance, are discussing the role of pleasure in the good life. Socrates asks Protarchus to imagine a life without much pleasure but full of the higher cognitive processes, such as knowledge, forethought and consciousness, and to compare it with a life that is the opposite. Socrates describes this opposite life as having perfect pleasure but the mental life of an oyster, pointing out that the subject of such a life would not be able to appreciate any of the pleasure within it. The harrowing thought of living the pleasurable but unthinking life of an oyster causes Protarchus to abandon his hedonistic argument. The oyster example is now easily avoided by clarifying that pleasure is best understood as being a conscious experience, so any sensation that we are not consciously aware of cannot be pleasure.

Normative and Motivational Hedonism were both at their most popular during the heyday of Empiricism in the 18th and 19th Centuries. Indeed, this is the only period during which any kind of hedonism could be considered popular at all. During this period, two Hedonistic Utilitarians, Jeremy Bentham (1748-1832) and his protégé John Stuart Mill (1806-1873), were particularly influential. Their theories are similar in many ways, but are notably distinct on the nature of pleasure.

Bentham argued for several types of hedonism, including those now referred to as Prudential Hedonism, Hedonistic Utilitarianism, and Motivational Hedonism (although his commitment to strong Motivational Hedonism eventually began to wane). Bentham argued that happiness was the ultimate good and that happiness was pleasure and the absence of pain. He acknowledged the egoistic and hedonistic nature of people's motivation, but argued that the maximization of collective happiness was the correct criterion for moral behavior. Bentham's greatest happiness principle states that only the action that appears to maximise the happiness of all the people likely to be affected is the morally right action; any other action is immoral.
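Bentham never expressed the principle in formal notation, but its decision rule can be sketched as a simple maximisation over candidate actions. In the sketch below, the actions, the people affected, and all of the pleasure and pain numbers are invented purely for illustration.

```python
# Hypothetical sketch of the greatest happiness principle: the right
# action is the one that maximises net happiness (pleasure minus pain),
# summed over everyone likely to be affected.

def net_happiness(effects):
    """Sum each affected person's pleasure minus pain."""
    return sum(pleasure - pain for pleasure, pain in effects.values())

# Each candidate action maps affected people to the (pleasure, pain)
# they would feel if it were performed. Numbers are illustrative.
actions = {
    "keep the wallet":   {"finder": (5, 1), "owner": (0, 8)},
    "return the wallet": {"finder": (2, 0), "owner": (7, 0)},
}

right_action = max(actions, key=lambda a: net_happiness(actions[a]))
print(right_action)  # -> return the wallet
```

On these illustrative numbers, keeping the wallet yields a net happiness of -4 while returning it yields 9, so the principle picks returning it; any other action would, on Bentham's view, be immoral.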

Bentham devised the greatest happiness principle to justify the legal reforms he also argued for. He understood that he could not conclusively prove that the principle was the correct criterion for morally right action, but also thought that it should be accepted because it was fair and better than existing criteria for evaluating actions and legislation. Bentham thought that his Hedonic Calculus could be applied to any situation to see what, morally speaking, should be done in it. The Hedonic Calculus is a method of counting the amount of pleasure and pain that would likely be caused by different actions. The Hedonic Calculus required a methodology for measuring pleasure, which in turn required an understanding of the nature of pleasure and specifically what aspects of pleasure were valuable for us.

Bentham's Hedonic Calculus identifies several aspects of pleasure that contribute to its value, including certainty, propinquity, extent, intensity, and duration. The Hedonic Calculus also makes use of two future-pleasure-or-pain-related aspects of actions: fecundity and purity. Certainty refers to the likelihood that the pleasure or pain will occur. Propinquity refers to how far away (in terms of time) the pleasure or pain is. Fecundity refers to the likelihood of the pleasure or pain leading to more of the same sensation. Purity refers to the likelihood of the pleasure or pain leading to some of the opposite sensation. Extent refers to the number of people the pleasure or pain is likely to affect. Intensity refers to the felt strength of the pleasure or pain. Duration refers to how long the pleasure or pain is felt for. It should be noted that only intensity and duration have intrinsic value for an individual. Certainty, propinquity, fecundity, and purity are all instrumentally valuable for an individual because they affect the likelihood of an individual feeling future pleasure and pain. Extent is not directly valuable for an individual's well-being because it refers to the likelihood of other people experiencing pleasure or pain.
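Bentham listed these dimensions but gave no precise formula for combining them, so the arithmetic in the sketch below (multiplying the intrinsic core of intensity and duration by certainty and a propinquity discount, crediting fecundity, debiting impurity, and scaling by extent) is one illustrative assumption among many possible ones, as are all of the numbers.

```python
# One illustrative way to combine the dimensions of Bentham's Hedonic
# Calculus into a single expected score. Bentham specified the
# dimensions but not the arithmetic; this combination is an assumption.

def hedonic_score(intensity, duration, certainty,
                  propinquity, fecundity, purity, extent=1):
    # Intrinsic value: how strong the pleasure is and how long it lasts.
    base = intensity * duration
    # Instrumental adjustments: how likely the pleasure is to occur and
    # how soon (propinquity here is a 0-1 discount, 1 = immediate).
    expected = base * certainty * propinquity
    # Follow-on effects: chance of more of the same sensation (fecundity)
    # minus the chance of the opposite sensation (impurity = 1 - purity).
    follow_on = base * (fecundity - (1 - purity))
    # Extent: number of people likely to feel the pleasure.
    return (expected + follow_on) * extent

# Studying for an exam: a moderate, delayed, fairly certain pleasure
# for one person, with a good chance of leading to further pleasures.
print(hedonic_score(intensity=4, duration=3, certainty=0.8,
                    propinquity=0.5, fecundity=0.6, purity=0.9))
```

The point of the sketch is only structural: intensity and duration supply the value, while the remaining dimensions scale the likelihood of actually realising it, which is why a distant, uncertain pleasure can score below a weaker but immediate one.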

Bentham's inclusion of certainty, propinquity, fecundity, and purity in the Hedonic Calculus helps to differentiate his hedonism from Folk Hedonism. Folk Hedonists rarely consider how likely their actions are to lead to future pleasure or pain, focussing instead on the pursuit of immediate pleasure and the avoidance of immediate pain. So while Folk Hedonists would be unlikely to study for an exam, anyone using Bentham's Hedonic Calculus would consider the future happiness benefits to themselves (and possibly others) of passing the exam and then promptly begin studying.

Most importantly for Bentham's Hedonic Calculus, the pleasure from different sources is always measured against these criteria in the same way; that is to say, no additional value is afforded to pleasures from particularly moral, clean, or culturally-sophisticated sources. For example, Bentham held that pleasure from the parlor game push-pin was just as valuable for us as pleasure from music and poetry. Since Bentham's theory of Prudential Hedonism focuses on the quantity of the pleasure, rather than the source-derived quality of it, it is best described as a type of Quantitative Hedonism.

Bentham's indifferent stance on the source of pleasures led others to disparage his hedonism as the philosophy of swine. Even his student, John Stuart Mill, questioned whether we should believe that a satisfied pig leads a better life than a dissatisfied human, or that a satisfied fool leads a better life than a dissatisfied Socrates, results that Bentham's Quantitative Hedonism seems to endorse.

Like Bentham, Mill endorsed the varieties of hedonism now referred to as Prudential Hedonism, Hedonistic Utilitarianism, and Motivational Hedonism. Mill also thought happiness, defined as pleasure and the avoidance of pain, was the highest good. Where Mill's hedonism differs from Bentham's is in his understanding of the nature of pleasure. Mill argued that pleasures could vary in quality, being either higher or lower pleasures. Mill employed the distinction between higher and lower pleasures in an attempt to avoid the criticism that his hedonism was just another philosophy of swine. Lower pleasures are those associated with the body, which we share with other animals, such as pleasure from quenching thirst or having sex. Higher pleasures are those associated with the mind, which were thought to be unique to humans, such as pleasure from listening to opera, acting virtuously, and philosophising. Mill justified this distinction by arguing that those who have experienced both types of pleasure realise that higher pleasures are much more valuable. He dismissed challenges to this claim by asserting that those who disagreed lacked either the experience of higher pleasures or the capacity for such experiences. For Mill, higher pleasures were not different from lower pleasures by mere degree; they were different in kind. Since Mill's theory of Prudential Hedonism focuses on the quality of the pleasure, rather than the amount of it, it is best described as a type of Qualitative Hedonism.

George Edward Moore (1873-1958) was instrumental in bringing hedonism's brief heyday to an end. Moore's criticisms of hedonism in general, and Mill's hedonism in particular, were frequently cited as good reasons to reject hedonism even decades after his death. Indeed, since G. E. Moore, hedonism has been viewed by most philosophers as being an initially intuitive and interesting family of theories, but also one that is flawed on closer inspection. Moore was a pluralist about value and argued persuasively against the Value Hedonists' central claim that all and only pleasure is the bearer of intrinsic value. Moore's most damaging objection against Hedonism was his heap of filth example. Moore himself thought the heap of filth example thoroughly refuted what he saw as the only potentially viable form of Prudential Hedonism: that conscious pleasure is the only thing that positively contributes to well-being. Moore used the heap of filth example to argue that Prudential Hedonism is false because pleasure is not the only thing of value.

In the heap of filth example, Moore asks the reader to imagine two worlds, one of which is exceedingly beautiful and the other a disgusting heap of filth. Moore then instructs the reader to imagine that no one would ever experience either world and asks if it is better for the beautiful world to exist than the filthy one. As Moore expected, his contemporaries tended to agree that it would be better if the beautiful world existed. Relying on this agreement, Moore infers that the beautiful world is more valuable than the heap of filth and, therefore, that beauty must be valuable. Moore then concluded that all of the potentially viable theories of Prudential Hedonism (those that value only conscious pleasures) must be false because something, namely beauty, is valuable even when no conscious pleasure can be derived from it.

Moore's heap of filth example has rarely been used to object to Prudential Hedonism since the 1970s because it is not directly relevant to Prudential Hedonism (it evaluates worlds and not lives). Moore's other objections to Prudential Hedonism also went out of favor around the same time. The demise of these arguments was partly due to mounting objections against them, but mainly because arguments more suited to the task of refuting Prudential Hedonism were developed. These arguments are discussed after the contemporary varieties of hedonism are introduced below.

Several contemporary varieties of hedonism have been defended, although usually by just a handful of philosophers or less at any one time. Other varieties of hedonism are also theoretically available but have received little or no discussion. Contemporary varieties of Prudential Hedonism can be grouped based on how they define pleasure and pain, as is done below. In addition to providing different notions of what pleasure and pain are, contemporary varieties of Prudential Hedonism also disagree about what aspect or aspects of pleasure are valuable for well-being (and the opposite for pain).

The most well-known disagreement about what aspects of pleasure are valuable occurs between Quantitative and Qualitative Hedonists. Quantitative Hedonists argue that how valuable pleasure is for well-being depends on only the amount of pleasure, and so they are only concerned with dimensions of pleasure such as duration and intensity. Quantitative Hedonism is often accused of over-valuing animalistic, simple, and debauched pleasures.

Qualitative Hedonists argue that, in addition to the dimensions related to the amount of pleasure, one or more dimensions of quality can have an impact on how pleasure affects well-being. The quality dimensions might be based on how cognitive or bodily the pleasure is (as it was for Mill), the moral status of the source of the pleasure, or some other non-amount-related dimension. Qualitative Hedonism is criticised by some for smuggling values other than pleasure into well-being by misleadingly labelling them as dimensions of pleasure. How these qualities are chosen for inclusion is also criticised for being arbitrary or ad hoc by some because inclusion of these dimensions of pleasure is often in direct response to objections that Quantitative Hedonism cannot easily deal with. That is to say, the inclusion of these dimensions is often accused of being an exercise in plastering over holes, rather than deducing corollary conclusions from existing theoretical premises. Others have argued that any dimensions of quality can be better explained in terms of dimensions of quantity. For example, they might claim that moral pleasures are no higher in quality than immoral pleasures, but that moral pleasures are instrumentally more valuable because they are likely to lead to more moments of pleasure or fewer moments of pain in the future.

Hedonists also have differing views about how the value of pleasure compares with the value of pain. This is not a practical disagreement about how best to measure pleasure and pain, but rather a theoretical disagreement about comparative value, such as whether pain is worse for us than an equivalent amount of pleasure is good for us. The default position is that one unit of pleasure (sometimes referred to as a Hedon) is equivalent but opposite in value to one unit of pain (sometimes referred to as a Dolor). Several Hedonistic Utilitarians have argued that reduction of pain should be seen as more important than increasing pleasure, sometimes for the Epicurean reason that pain seems worse for us than an equivalent amount of pleasure is good for us. Imagine that a magical genie offered to play a game with you. The game consists of you flipping a fair coin. If the coin lands on heads, then you immediately feel a burst of very intense pleasure and if it lands on tails, then you immediately feel a burst of very intense pain. Is it in your best interests to play the game?
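The pull of this question can be made precise with a small expected-value calculation. On the default view a Hedon and a Dolor cancel exactly, so the game is neutral; if pain is weighted even slightly more heavily than pleasure, the game has negative expected value and should be refused. The magnitude of 100 and the 1.5 weighting below are purely illustrative assumptions.

```python
# Expected value of the genie's coin game under two exchange rates
# between Hedons (units of pleasure) and Dolors (units of pain).
# The magnitude 100 and the 1.5 pain weighting are illustrative.

def expected_value(hedons, dolors, pain_weight=1.0):
    # Fair coin: half the time a burst of pleasure, half the time pain.
    return 0.5 * hedons - 0.5 * pain_weight * dolors

print(expected_value(100, 100))                   # default view: 0.0
print(expected_value(100, 100, pain_weight=1.5))  # pain counts more: -25.0
```

So whether playing is in your best interests turns entirely on the disputed exchange rate: the default position makes the game a matter of indifference, while the Epicurean-leaning position makes it a bad bet.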

Another area of disagreement between some Hedonists is whether pleasure is entirely internal to a person or if it includes external elements. Internalism about pleasure is the thesis that, whatever pleasure is, it is always and only inside a person. Externalism about pleasure, on the other hand, is the thesis that pleasure is more than just a state of an individual (that is, that a necessary component of pleasure lies outside of the individual). Externalists about pleasure might, for example, describe pleasure as a function that mediates between our minds and the environment, such that every instance of pleasure has one or more integral environmental components. The vast majority of historic and contemporary versions of Prudential Hedonism consider pleasure to be an internal mental state.

Perhaps the least known disagreement about what aspects of pleasure make it valuable is the debate about whether we have to be conscious of pleasure for it to be valuable. The standard position is that pleasure is a conscious mental state, or at least that any pleasure a person is not conscious of does not intrinsically improve their well-being.

The most common definition of pleasure is that it is a sensation, something that we identify through our senses or that we feel. Psychologists claim that we have at least ten senses, including the familiar sight, hearing, smell, taste, and touch, but also movement, balance, and several sub-senses of touch, including heat, cold, pressure, and pain. New senses get added to the list when it is understood that some independent physical process underpins their functioning. The most widely-used examples of pleasurable sensations are the pleasures of eating, drinking, listening to music, and having sex. Use of these examples has done little to help Hedonism avoid its debauched reputation.

It is also commonly recognised that our senses are physical processes that usually involve a mental component, such as the tickling feeling when someone blows gently on the back of your neck. If a sensation is something we identify through our sense organs, however, it is not entirely clear how to account for abstract pleasures. This is because abstract pleasures, such as a feeling of accomplishment for a job well done, do not seem to be experienced through any of the senses in the standard lists. Some Hedonists have attempted to resolve this problem by arguing for the existence of an independent pleasure sense and by defining sensation as something that we feel (regardless of whether it has been mediated by sense organs).

Most Hedonists who describe pleasure as a sensation will be Quantitative Hedonists and will argue that the pleasure from the different senses is the same. Qualitative Hedonists, in comparison, can use the framework of the senses to help differentiate between qualities of pleasure. For example, a Qualitative Hedonist might argue that pleasurable sensations from touch and movement are always lower quality than the others.

Hedonists have also defined pleasure as intrinsically valuable experience, that is to say, any experiences that we find intrinsically valuable either are, or include, instances of pleasure. According to this definition, the reason that listening to music and eating a fine meal are both intrinsically pleasurable is that those experiences include an element of pleasure (along with the other elements specific to each activity, such as the experience of the texture of the food and the melody of the music). By itself, this definition enables Hedonists to make an argument that is close to perfectly circular. Defining pleasure as intrinsically valuable experience and well-being as all and only experiences that are intrinsically valuable allows a Hedonist to all but stipulate that Prudential Hedonism is the correct theory of well-being. Where defining pleasure as intrinsically valuable experience is not circular is in its stipulation that only experiences matter for well-being. Some well-known objections to this idea are discussed below.

Another problem with defining pleasure as intrinsically valuable experience is that the definition does not tell us very much about what pleasure is or how it can be identified. For example, knowing that pleasure is intrinsically valuable experience would not help someone to work out if a particular experience was intrinsically or just instrumentally valuable. Hedonists have attempted to respond to this problem by explaining how to find out whether an experience is intrinsically valuable.

One method is to ask yourself if you would like the experience to continue for its own sake (rather than because of what it might lead to). Wanting an experience to continue for its own sake reveals that you find it to be intrinsically valuable. While still making a coherent theory of well-being, defining intrinsically valuable experiences as those you want to perpetuate makes the theory much less hedonistic. The fact that what a person wants is the main criterion for something having intrinsic value makes this kind of theory more in line with preference satisfaction theories of well-being. The central claim of preference satisfaction theories of well-being is that some variant of getting what one wants, or should want, under certain conditions is the only thing that intrinsically improves one's well-being.

Another method of fleshing out the definition of pleasure as intrinsically valuable experience is to describe how intrinsically valuable experiences feel. This method remains a hedonistic one, but seems to fall back into defining pleasure as a sensation.

It has also been argued that what makes an experience intrinsically valuable is that you like or enjoy it for its own sake. Hedonists arguing for this definition of pleasure usually take pains to position their definition in between the realms of sensation and preference satisfaction. They argue that since we can like or enjoy some experiences without concurrently wanting them or feeling any particular sensation, then liking is distinct from both sensation and preference satisfaction. Liking and enjoyment are also difficult terms to define in more detail, but they are certainly easier to recognise than the rather opaque “intrinsically valuable experience.”

Merely defining pleasure as intrinsically valuable experience and intrinsically valuable experiences as those that we like or enjoy still lacks enough detail to be very useful for contemplating well-being. A potential method for making this theory more useful would be to draw on the cognitive sciences to investigate if there is a specific neurological function for liking or enjoying. Cognitive science has not reached the point where anything definitive can be said about this, but a few neuroscientists have experimental evidence that liking and wanting (at least in regard to food) are neurologically distinct processes in rats and have argued that it should be the same for humans. The same scientists have wondered if the same processes govern all of our liking and wanting, but this question remains unresolved.

Most Hedonists who describe pleasure as intrinsically valuable experience believe that pleasure is internal and conscious. Hedonists who define pleasure in this way may be either Quantitative or Qualitative Hedonists, depending on whether they think that quality is a relevant dimension of how intrinsically valuable we find certain experiences.

One of the most recent developments in modern hedonism is the rise of defining pleasure as a pro-attitude: a positive psychological stance toward some object. Any account of Prudential Hedonism that defines pleasure as a pro-attitude is referred to as Attitudinal Hedonism because it is a person's attitude that dictates whether anything has intrinsic value. Positive psychological stances include approving of something, thinking it is good, and being pleased about it. The object of the positive psychological stance could be a physical object, such as a painting one is observing, but it could also be a thought, such as “my country is not at war,” or even a sensation. An example of a pro-attitude towards a sensation could be being pleased about the fact that an ice cream tastes so delicious.

Fred Feldman, the leading proponent of Attitudinal Hedonism, argues that the sensation of pleasure only has instrumental value: it only brings about value if you also have a positive psychological stance toward that sensation. In addition to his basic Intrinsic Attitudinal Hedonism, which is a form of Quantitative Hedonism, Feldman has also developed many variants that are types of Qualitative Hedonism. One such variant is Desert-Adjusted Intrinsic Attitudinal Hedonism, which reduces the intrinsic value a pro-attitude has for our well-being based on the quality of deservedness (that is, on the extent to which the particular object deserves a pro-attitude or not). For example, Desert-Adjusted Intrinsic Attitudinal Hedonism might stipulate that sensations of pleasure arising from adulterous behavior do not deserve approval, and so assign them no value.

Defining pleasure as a pro-attitude, while maintaining that all sensations of pleasure have no intrinsic value, makes Attitudinal Hedonism less obviously hedonistic than the versions that define pleasure as a sensation. Indeed, defining pleasure as a pro-attitude runs the risk of creating a preference satisfaction account of well-being because being pleased about something without feeling any pleasure seems hard to distinguish from having a preference for that thing.

The most common argument against Prudential Hedonism is that pleasure is not the only thing that intrinsically contributes to well-being. Living in reality, finding meaning in life, producing noteworthy achievements, building and maintaining friendships, achieving perfection in certain domains, and living in accordance with religious or moral laws are just some of the other things thought to intrinsically add value to our lives. When presented with these apparently valuable aspects of life, Hedonists usually attempt to explain their apparent value in terms of pleasure. A Hedonist would argue, for example, that friendship is not valuable in and of itself; rather, it is valuable to the extent that it brings us pleasure. Furthermore, to answer why we might help a friend even when it harms us, a Hedonist will argue that the prospect of future pleasure from receiving reciprocal favors from our friend, rather than the value of friendship itself, should motivate us to help in this way.

Those who object to Prudential Hedonism on the grounds that pleasure is not the only source of intrinsic value use two main strategies. In the first strategy, objectors make arguments that some specific value cannot be reduced to pleasure. In the second strategy, objectors cite very long lists of apparently intrinsically valuable aspects of life and then challenge hedonists with the prolonged and arduous task of trying to explain how the value of all of them can be explained solely by reference to pleasure and the avoidance of pain. This second strategy gives good reason to be a pluralist about value because the odds seem to be against any monistic theory of value, such as Prudential Hedonism. The first strategy, however, has the ability to show that Prudential Hedonism is false, rather than being just unlikely to be the best theory of well-being.

The most widely cited argument for pleasure not being the only source of intrinsic value is based on Robert Nozick's experience machine thought experiment. Nozick's experience machine thought experiment was designed to show that more than just our experiences matter to us because living in reality also matters to us. This argument has proven so convincing that nearly every book on ethics that discusses hedonism rejects it on the strength of this argument, either alone or in combination with just one other.

In the thought experiment, Nozick asks us to imagine that we have the choice of plugging in to a fantastic machine that flawlessly provides an amazing mix of experiences. Importantly, this machine can provide these experiences in such a way that, once plugged in, a person cannot tell that their experiences are not real. Disregarding considerations about responsibilities to others and the problems that would arise if everyone plugged in, would you plug in to the machine for life? The vast majority of people reject the choice to live a much more pleasurable life in the machine, mostly because they agree with Nozick that living in reality seems to be important for our well-being. Opinions differ on what exactly about living in reality is so much better for us than the additional pleasure of living in the experience machine, but the most common response is that a life that is not lived in reality is pointless or meaningless.

Since this argument has been used so extensively (from the mid 1970s onwards) to dismiss Prudential Hedonism, several attempts have been made to refute it. Most commonly, Hedonists argue that living an experience machine life would be better than living a real life and that most people are simply mistaken to not want to plug in. Some go further and try to explain why so many people choose not to plug in. Such explanations often point out that the most obvious reasons for not wanting to plug in can be explained in terms of expected pleasure and avoidance of pain. For example, it might be argued that we expect to get pleasure from spending time with our real friends and family, but we do not expect to get as much pleasure from the fake friends or family we might have in the experience machine. These kinds of attempts to refute the experience machine objection do little to persuade non-Hedonists that they have made the wrong choice.

A more promising line of defence for Prudential Hedonists is to provide evidence of a particular psychological bias that affects most people's choices in the experience machine thought experiment. A reversal of Nozick's thought experiment has been argued to reveal just such a bias. Imagine that a credible source tells you that you are actually in an experience machine right now. You have no idea what reality would be like. Given the choice between having your memory of this conversation wiped and going to reality, what would be best for you to choose? Empirical evidence on this choice shows that most people would choose to stay in the experience machine. Comparing this result with how people respond to Nozick's original thought experiment reveals the following: in Nozick's experience machine thought experiment, people tend to choose a real and familiar life over a more pleasurable life, while in the reversed version people tend to choose a familiar life over a real life. Familiarity seems to matter more than reality, undermining the strength of Nozick's original argument. The bias thought to be responsible for this difference is the status quo bias: an irrational preference for the familiar, or for things to stay as they are.

Regardless of whether Nozick's experience machine thought experiment is as decisive a refutation of Prudential Hedonism as it is often thought to be, the wider argument (that living in reality is valuable for our well-being) is still a problem for Prudential Hedonists. That our actions have real consequences, that our friends are real, and that our experiences are genuine seem to matter for most of us regardless of considerations of pleasure. Unfortunately, we lack a trusted methodology for discerning whether these things should matter to us. Perhaps the best method for identifying intrinsically valuable aspects of lives is to compare lives that are equal in pleasure and all other important ways, except that one aspect of one of the lives is increased. Using this methodology, however, seems certain to lead to an artificial pluralist conclusion about what has value. This is because any increase in a potentially valuable aspect of our lives will be viewed as a free bonus, and most people will choose the life with the free bonus just in case it has intrinsic value, not necessarily because they think it does have intrinsic value.

The main traditional line of criticism against Prudential Hedonism is that not all pleasure is valuable for well-being, or at least that some pleasures are less valuable than others because of non-amount-related factors. Some versions of this criticism are much easier for Prudential Hedonists to deal with than others depending on where the allegedly disvaluable aspect of the pleasure resides. If the disvaluable aspect is experienced with the pleasure itself, then both Qualitative and Quantitative varieties of Prudential Hedonism have sufficient answers to these problems. If, however, the disvaluable aspect of the pleasure is never experienced, then all types of Prudential Hedonism struggle to explain why the allegedly disvaluable aspect is irrelevant.

Examples of the easier criticisms to deal with are that Prudential Hedonism values, or at least overvalues, perverse and base pleasures. These kinds of criticisms tend to have had more sway in the past and doubtless encouraged Mill to develop his Qualitative Hedonism. In response to the charge that Prudential Hedonism mistakenly values pleasure from sadistic torture, sating hunger, copulating, listening to opera, and philosophising all equally, Qualitative Hedonists can simply deny that it does. Since pleasure from sadistic torture will normally be experienced as containing the quality of sadism (just as the pleasure from listening to good opera is experienced as containing the quality of acoustic excellence), the Qualitative Hedonist can plausibly claim to be aware of the difference in quality and allocate less value to perverse or base pleasures accordingly.

Prudential Hedonists need not relinquish the Quantitative aspect of their theory in order to deal with these criticisms, however. Quantitative Hedonists can simply point out that moral or cultural values are not necessarily relevant to well-being, because the investigation of well-being aims to understand what the good life is for the one living it and what intrinsically makes their life go better for them. A Quantitative Hedonist can simply respond that a sadist who gets sadistic pleasure from torturing someone does improve their own well-being (assuming that the sadist never feels any negative emotions or gets into any other trouble as a result). Similarly, a Quantitative Hedonist can argue that if someone genuinely gets a lot of pleasure from porcine company and wallowing in the mud, but finds opera thoroughly dull, then we have good reason to think that having to live in a pig sty would be better for her well-being than being forced to listen to opera.

Much more problematic for both Quantitative and Qualitative Hedonists, however, are the more modern versions of the criticism that not all pleasure is valuable. The modern versions of this criticism tend to use examples in which the disvaluable aspect of the pleasure is never experienced by the person whose well-being is being evaluated. The best example of these modern criticisms is a thought experiment devised by Shelly Kagan. Kagan's deceived businessman thought experiment is widely thought to show that pleasures of a certain kind, namely false pleasures, are worth much less than true pleasures.

Kagan asks us to imagine the life of a very successful businessman who takes great pleasure in being respected by his colleagues, well-liked by his friends, and loved by his wife and children until the day he dies. Then Kagan asks us to compare this life with one of equal length and the same amount of pleasure (experienced as coming from exactly the same sources), except that in each case the businessman is mistaken about how those around him really feel. This second (deceived) businessman experiences just as much pleasure from the respect of his colleagues and the love of his family as the first businessman. The only difference is that the second businessman has many false beliefs. Specifically, the deceived businessman's colleagues actually think he is useless, his wife doesn't really love him, and his children are only nice to him so that he will keep giving them money. Given that the deceived businessman never knew of any of these deceptions and his experiences were never negatively impacted by them, even indirectly, which life do you think is better?

Nearly everyone thinks that the deceived businessman has a worse life. This is a problem for Prudential Hedonists because the two lives contain quantitatively equal pleasure, so they should be equally good for those living them. Qualitative Hedonism does not seem able to avoid this criticism either, because the falsity of the pleasures experienced by the deceived businessman is a dimension of the pleasure that he never becomes aware of. Theoretically, an externalist and qualitative version of Attitudinal Hedonism could include the falsity dimension of an instance of pleasure even if the falsity dimension never impacts the consciousness of the person. However, the resulting definition of pleasure bears little resemblance to what we commonly understand pleasure to be, and it also seems ad hoc in its inclusion of the truth dimension but not others. A dedicated Prudential Hedonist of any variety can always stubbornly stick to the claim that the lives of the two businessmen are of equal value, but that will do little to convince the vast majority to take Prudential Hedonism more seriously.

Another major line of criticism used against Prudential Hedonists is that they have yet to come up with a meaningful definition of pleasure that unifies the seemingly disparate array of pleasures while remaining recognisable as pleasure. Some definitions lack sufficient detail to be informative about what pleasure actually is, or why it is valuable, and those that do offer enough detail to be meaningful are faced with two difficult tasks.

The first obstacle for a useful definition of pleasure for hedonism is to unify all of the diverse pleasures in a reasonable way. Phenomenologically, the pleasure from reading a good book is very different to the pleasure from bungee jumping, and both of these pleasures are very different to the pleasure of having sex. This obstacle is insurmountable for most versions of Quantitative Hedonism because it makes the value gained from different pleasures impossible to compare. Not being able to compare different types of pleasure results in being unable to say whether one life is better than another in most even vaguely realistic cases. Furthermore, not being able to compare lives means that Quantitative Hedonism cannot usefully be used to guide behavior, since it cannot instruct us on which life to aim for.

Attempts to resolve the problem of unifying the different pleasures while remaining within a framework of Quantitative Hedonism usually involve pointing out something that is constant in all of the disparate pleasures and defining that particular thing as pleasure. When pleasure is defined as a strict sensation, this strategy fails because introspection reveals that no such sensation exists. Pleasure defined as the experience of liking, or as a pro-attitude, does much better at unifying all of the diverse pleasures. However, defining pleasure in these ways makes the task of filling in the details of the theory a fine balancing act. Liking or pro-attitudes must be described in such a way that they are not merely a sensation, and not best described by a preference satisfaction theory. And they must perform this balancing act while still describing a scientifically plausible and conceptually coherent account of pleasure. Most attempts to define pleasure as liking or pro-attitudes seem to conflict with either the folk conception of what pleasure is or any of the plausible scientific conceptions of how pleasure functions.

Most varieties of Qualitative Hedonism do better at dealing with the problem of diverse pleasures because they can evaluate different pleasures according to their distinct qualities. Qualitative Hedonists still need a coherent method for comparing the different pleasures with each other in order to be more than just an abstract theory of well-being, however. And it is difficult to construct such a methodology in a way that avoids counterexamples while still describing a scientifically plausible and conceptually coherent account of pleasure.

The second obstacle is creating a definition of pleasure that retains at least some of the core properties of the common understanding of the term pleasure. As mentioned, many of the potential adjustments to the main definitions of pleasure are useful for avoiding one or more of the many objections against Prudential Hedonism. The problem with this strategy is that the more adjustments that are made, the more apparent it becomes that the definition of pleasure is not recognisable as the pleasure that gave Hedonism its distinctive intuitive plausibility in the first place. When an instance of pleasure is defined simply as when someone feels good, its intrinsic value for well-being is intuitively obvious. However, when the definition of pleasure is stretched, so as to more effectively argue that all valuable experiences are pleasurable, it becomes much less recognisable as the concept of pleasure we use in day-to-day life and its intrinsic value becomes much less intuitive.

The future of hedonism seems bleak. The considerable number and strength of the arguments against Prudential Hedonism's central principle (that pleasure, and only pleasure, intrinsically contributes positively to well-being, and the opposite for pain) seem insurmountable. Hedonists have been creative in their definitions of pleasure so as to avoid these objections, but more often than not find themselves defending a theory that is not particularly hedonistic, not particularly realistic, or both.

Perhaps the only hope that Hedonists of all types can have for the future is that advances in cognitive science will lead to a better understanding of how pleasure works in the brain and how biases affect our judgements about thought experiments. If our improved understanding in these areas confirms a particular theory about what pleasure is and also provides reasons to doubt some of the widespread judgements about the thought experiments that make the vast majority of philosophers reject hedonism, then hedonism might experience at least a partial revival. The good news for Hedonists is that at least some emerging theories and results from cognitive science do appear to support some aspects of hedonism.

Dan Weijers
Email: danweijers@gmail.com
Victoria University of Wellington, New Zealand

Read more:

Hedonism | Internet Encyclopedia of Philosophy


hedonism | Philosophy & Definition | Britannica.com

Hedonism, in ethics, a general term for all theories of conduct in which the criterion is pleasure of one kind or another. The word is derived from the Greek hedone (pleasure), from hedys (sweet or pleasant).

Hedonistic theories of conduct have been held from the earliest times. They have been regularly misrepresented by their critics because of a simple misconception, namely, the assumption that the pleasure upheld by the hedonist is necessarily purely physical in its origins. This assumption is in most cases a complete perversion of the truth. Practically all hedonists recognize the existence of pleasures derived from fame and reputation, from friendship and sympathy, from knowledge and art. Most have urged that physical pleasures are not only ephemeral in themselves but also involve, either as prior conditions or as consequences, such pains as to discount any greater intensity that they may have while they last.

The earliest and most extreme form of hedonism is that of the Cyrenaics as stated by Aristippus, who argued that the goal of a good life should be the sentient pleasure of the moment. Since, as Protagoras maintained, knowledge is solely of momentary sensations, it is useless to try to calculate future pleasures and to balance pains against them. The true art of life is to crowd as much enjoyment as possible into each moment.

No school has been more subject to the misconception noted above than the Epicurean. Epicureanism is completely different from Cyrenaicism. For Epicurus pleasure was indeed the supreme good, but his interpretation of this maxim was profoundly influenced by the Socratic doctrine of prudence and Aristotle's conception of the best life. The true hedonist would aim at a life of enduring pleasure, but this would be obtainable only under the guidance of reason. Self-control in the choice and limitation of pleasures with a view to reducing pain to a minimum was indispensable. This view informed the Epicurean maxim "Of all this, the beginning, and the greatest good, is prudence." This negative side of Epicureanism developed to such an extent that some members of the school found the ideal life rather in indifference to pain than in positive enjoyment.

In the late 18th century Jeremy Bentham revived hedonism both as a psychological and as a moral theory under the umbrella of utilitarianism. Individuals have no goal other than the greatest pleasure; thus each person ought to pursue the greatest pleasure. It would seem to follow that each person inevitably always does what he or she ought. Bentham sought the solution to this paradox on different occasions in two incompatible directions. Sometimes he says that the act which one does is the act which one thinks will give the most pleasure, whereas the act which one ought to do is the act which really will provide the most pleasure. In short, calculation is salvation, while sin is shortsightedness. Alternatively he suggests that the act which one does is that which will give one the most pleasure, whereas the act one ought to do is that which will give all those affected by it the most pleasure.

The psychological doctrine that a human's only aim is pleasure was effectively attacked by Joseph Butler. He pointed out that each desire has its own specific object and that pleasure comes as a welcome addition or bonus when the desire achieves its object. Hence the paradox that the best way to get pleasure is to forget it and to pursue wholeheartedly other objects. Butler, however, went too far in maintaining that pleasure cannot be pursued as an end. Normally, indeed, when one is hungry or curious or lonely, there is desire to eat, to know, or to have company. These are not desires for pleasure. One can also eat sweets when one is not hungry, for the sake of the pleasure that they give.

Moral hedonism has been attacked since Socrates, though moralists sometimes have gone to the extreme of holding that humans never have a duty to bring about pleasure. It may seem odd to say that a human has a duty to pursue pleasure, but the pleasures of others certainly seem to count among the factors relevant in making a moral decision. One particular criticism which may be added to those usually urged against hedonists is that whereas they claim to simplify ethical problems by introducing a single standard, namely pleasure, in fact they have a double standard. As Bentham said, "Nature has placed mankind under the governance of two sovereign masters, pain and pleasure." Hedonists tend to treat pleasure and pain as if they were, like heat and cold, degrees on a single scale, when they are really different in kind.


Mind uploading – Wikipedia

Whole brain emulation (WBE), mind upload or brain upload (sometimes called “mind copying” or “mind transfer”) is the hypothetical futuristic process of scanning the mental state (including long-term memory and “self”) of a particular brain substrate and copying it to a computer. The computer could then run a simulation model of the brain’s information processing, such that it responds in essentially the same way as the original brain (i.e., indistinguishable from the brain for all relevant purposes) and experiences having a conscious mind.[1][2][3]

Mind uploading may potentially be accomplished by either of two methods: copy-and-transfer or the gradual replacement of neurons. In the former method, mind uploading would be achieved by scanning and mapping the salient features of a biological brain, and then by copying, transferring, and storing that information state into a computer system or another computational device. The simulated mind could be within a virtual reality or simulated world, supported by an anatomic 3D body simulation model. Alternatively, the simulated mind could reside in a computer that is inside (or connected to) a (not necessarily humanoid) robot or a biological body in real life.[4]

Among some futurists and within the transhumanist movement, mind uploading is treated as an important proposed life extension technology. Some believe mind uploading is humanity's current best option for preserving the identity of the species, as opposed to cryonics. Another aim of mind uploading is to provide a permanent backup to our "mind-file", and a means for functional copies of human minds to survive a global disaster or interstellar space travel. Whole brain emulation is discussed by some futurists as a "logical endpoint"[4] of the topical computational neuroscience and neuroinformatics fields, both of which concern brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI. Computer-based intelligence such as an upload could think much faster than a biological human even if it were no more intelligent. A large-scale society of uploads might, according to futurists, give rise to a technological singularity, meaning a sudden decrease in the time constant of technology's exponential development.[5] Mind uploading is a central conceptual feature of numerous science fiction novels and films.

Substantial mainstream research in related areas is being conducted in animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains.[6] According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are currently under active development; however, they admit that others remain highly speculative, if still within the realm of engineering possibility. Neuroscientist Randal Koene has formed a nonprofit organization called Carbon Copies to promote mind uploading research.

The human brain contains, on average, about 86 billion nerve cells called neurons, each individually linked to other neurons by way of connectors called axons and dendrites. Signals at the junctures (synapses) of these connections are transmitted by the release and detection of chemicals known as neurotransmitters. The established neuroscientific consensus is that the human mind is largely an emergent property of the information processing of this neural network.[citation needed]
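To get a rough sense of the scale these figures imply, the paragraph above can be turned into a back-of-envelope estimate. Only the 86-billion-neuron figure comes from the text; the per-neuron synapse count and average firing rate below are order-of-magnitude assumptions added for illustration:

```python
# Back-of-envelope estimate of total synapses and signalling events per second.
# NEURONS is the ~86 billion figure cited above; the other two inputs are
# order-of-magnitude assumptions made for illustration only.

NEURONS = 86e9                # neurons in an average human brain (cited above)
SYNAPSES_PER_NEURON = 7e3     # assumed average synapses per neuron
MEAN_FIRING_RATE_HZ = 1.0     # assumed average firing rate (spikes per second)

total_synapses = NEURONS * SYNAPSES_PER_NEURON
synaptic_events_per_sec = total_synapses * MEAN_FIRING_RATE_HZ

print(f"Estimated synapses: {total_synapses:.1e}")                  # 6.0e+14
print(f"Synaptic events per second: {synaptic_events_per_sec:.1e}") # 6.0e+14
```

Even on these conservative assumptions, a simulation tracking every synaptic event would have to process hundreds of trillions of events per second, which is one concrete reason the processing demands discussed later are described as immense.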

Neuroscientists have stated that important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:

“Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality.”[7]

The concept of mind uploading is based on this mechanistic view of the mind, and denies the vitalist view of human life and consciousness.[citation needed]

Eminent computer scientists and neuroscientists have predicted that specially programmed computers will be capable of thought and even attain consciousness, including Koch and Tononi,[7] Douglas Hofstadter,[8] Jeff Hawkins,[8] Marvin Minsky,[9] Randal A. Koene, and Rodolfo Llinas.[10]

Such an artificial intelligence capability might provide a computational substrate necessary for uploading.

However, even though uploading is dependent upon such a general capability, it is conceptually distinct from general forms of AI in that it results from the dynamic reanimation of information derived from a specific human mind, so that the mind retains a sense of historical identity (other forms are possible but would compromise or eliminate the life-extension feature generally associated with uploading). The transferred and reanimated information would become a form of artificial intelligence, sometimes called an infomorph or "noömorph".[citation needed]

Many theorists have presented models of the brain and have established a range of estimates of the amount of computing power needed for partial and complete simulations.[4][citation needed] Using these models, some have estimated that uploading may become possible within decades if trends such as Moore’s law continue.[11]

In theory, if the information and processes of the mind can be disassociated from the biological body, they are no longer tied to the individual limits and lifespan of that body. Furthermore, information within a brain could be partly or wholly copied or transferred to one or more other substrates (including digital storage or another brain), thereby, from a purely mechanistic perspective, reducing or eliminating the "mortality risk" of such information. This general proposal was discussed in 1971 by biogerontologist George M. Martin of the University of Washington.[12]

An uploaded astronaut would be the application of mind uploading to human spaceflight. This would eliminate the harms caused by a zero gravity environment, the vacuum of space, and cosmic radiation to the human body. It would allow for the use of smaller spacecraft, such as the proposed StarChip, and it would enable virtually unlimited interstellar travel distances.[13][14]

The focus of mind uploading, in the case of copy-and-transfer, is on data acquisition, rather than data maintenance of the brain. A set of approaches known as loosely coupled off-loading (LCOL) may be used in the attempt to characterize and copy the mental contents of a brain.[15] The LCOL approach may take advantage of self-reports, life-logs and video recordings that can be analyzed by artificial intelligence. A bottom-up approach may instead focus on the specific resolution and morphology of neurons and on their spike times, that is, the times at which neurons produce action-potential responses.

Advocates of mind uploading point to Moore’s law to support the notion that the necessary computing power is expected to become available within a few decades. However, the actual computational requirements for running an uploaded human mind are very difficult to quantify, potentially rendering such an argument specious.

Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands are likely to be immense, due to the large number of neurons in the human brain along with the considerable complexity of each neuron.

In 2004, Henry Markram, lead researcher of the "Blue Brain Project", stated that "it is not [their] goal to build an intelligent neural network", citing the computational demands such a project would entail:[17]

"It will be very difficult because, in the brain, every molecule is a powerful computer and we would need to simulate the structure and function of trillions upon trillions of molecules as well as all the rules that govern how they interact. You would literally need computers that are trillions of times bigger and faster than anything existing today."[18]

Five years later, after the successful simulation of part of a rat brain, Markram was far bolder and more optimistic. In 2009, as director of the Blue Brain Project, he claimed that "a detailed, functional artificial human brain can be built within the next 10 years".[19]

The required computational capacity depends strongly on the chosen scale of the simulation model.[4]

Since the function of the human mind, and how it might arise from the workings of the brain's neural network, are poorly understood, mind uploading relies on the idea of neural network emulation. Rather than understanding the high-level psychological processes and large-scale structures of the brain and modelling them using classical artificial-intelligence methods and cognitive-psychology models, this approach captures, maps and emulates the low-level structure of the underlying neural network with a computer system. In computer-science terms, rather than analyzing and reverse engineering the behavior of the algorithms and data structures that reside in the brain, a blueprint of its source code is translated to another programming language. The human mind and personal identity would then, theoretically, be generated by the emulated neural network in the same fashion as by the biological neural network.

On the other hand, a molecule-scale simulation of the brain is not expected to be required, provided that the functioning of the neurons is not affected by quantum mechanical processes. The neural network emulation approach requires only that the functioning and interaction of neurons and synapses be understood; a black-box signal-processing model of how neurons respond to nerve impulses (electrical as well as chemical synaptic transmission) is expected to suffice.

A sufficiently complex and accurate model of the neurons is required. A traditional artificial neural network model, such as a multi-layer perceptron, is not considered sufficient. A dynamic spiking neural network model is required, reflecting the fact that a neuron fires only when its membrane potential reaches a certain level. The model will likely need to include delays, non-linear functions and differential equations describing the relations between electrophysiological parameters such as electrical currents, voltages, membrane states (ion channel states) and neuromodulators.
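As an illustrative sketch of such a dynamic spiking model, a leaky integrate-and-fire neuron can be written in a few lines. This is not any particular published model, and all parameter values below are hypothetical, chosen only so that the example fires:

```python
# Minimal leaky integrate-and-fire neuron: a toy instance of the kind of
# dynamic spiking model described above. All constants are illustrative.
def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-65.0, resistance=10.0):
    """Integrate the membrane voltage over time; record a spike time
    whenever the voltage crosses the threshold, then reset."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        # Discretised leaky integration: dv/dt = (-(v - v_rest) + R*I) / tau
        v += (-(v - v_rest) + resistance * current) * dt / tau
        if v >= v_threshold:          # threshold crossing -> action potential
            spike_times.append(step * dt)
            v = v_reset               # membrane resets after firing
    return spike_times

# Constant input drives the neuron to fire at regular intervals.
spikes = simulate_lif([2.0] * 100)
```

A realistic model would add the delays, neuromodulator effects and ion-channel states mentioned above, typically as coupled differential equations rather than a single voltage variable.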

Since learning and long-term memory are believed to result from strengthening or weakening the synapses via a mechanism known as synaptic plasticity or synaptic adaptation, the model should include this mechanism. The response of sensory receptors to various stimuli must also be modelled.

Furthermore, the model may have to include metabolism, i.e. how the neurons are affected by hormones and other chemical substances that may cross the blood–brain barrier. It is considered likely that the model must include currently unknown neuromodulators, neurotransmitters and ion channels, but unlikely that it must include protein interactions, which would make it prohibitively complex computationally.[4]

A digital computer simulation model of an analog system such as the brain is an approximation that introduces random quantization errors and distortion. However, the biological neurons also suffer from randomness and limited precision, for example due to background noise. The errors of the discrete model can be made smaller than the randomness of the biological brain by choosing a sufficiently high variable resolution and sample rate, and sufficiently accurate models of non-linearities. The computational power and computer memory must however be sufficient to run such large simulations, preferably in real time.
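The argument that discretisation error can be pushed below the biological noise floor can be illustrated with a toy quantization example; the signal values and step sizes here are arbitrary:

```python
# Quantization error shrinks as resolution increases, so it can in
# principle be made smaller than the brain's own background noise.
def max_quantization_error(values, step):
    """Quantize each value to the nearest multiple of `step` and
    return the worst-case absolute error over the signal."""
    return max(abs(v - round(v / step) * step) for v in values)

signal = [0.123, 0.456, 0.789]          # arbitrary analog sample values
coarse = max_quantization_error(signal, step=0.1)    # low resolution
fine = max_quantization_error(signal, step=0.001)    # high resolution
# The finer quantization yields a strictly smaller worst-case error.
```

The same reasoning applies to the sample rate in time: a sufficiently high rate keeps the discretisation artifacts below the randomness already present in biological neurons.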

When modelling and simulating the brain of a specific individual, a brain map or connectivity database showing the connections between the neurons must be extracted from an anatomic model of the brain. For whole brain simulation, this network map should show the connectivity of the whole nervous system, including the spinal cord, sensory receptors, and muscle cells. Destructive scanning of a small sample of tissue from a mouse brain including synaptic details is possible as of 2010.[20]

However, if short-term memory and working memory include prolonged or repeated firing of neurons, as well as intra-neural dynamic processes, the electrical and chemical signal state of the synapses and neurons may be hard to extract. The uploaded mind may then perceive a memory loss of the events and mental processes immediately before the time of brain scanning.[4]

A full brain map has been estimated to occupy less than 2 × 10^16 bytes (20,000 TB); it would store the addresses of the connected neurons, the synapse type and the synapse "weight" for each of the brain's 10^15 synapses.[4] However, the biological complexities of true brain function (e.g. the epigenetic states of neurons, protein components with multiple functional states, etc.) may preclude an accurate prediction of the volume of binary data required to faithfully represent a functioning human mind.
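The arithmetic behind the estimate can be sanity-checked: with roughly 10^15 synapses and an assumed ~20 bytes per synapse for the address, type and weight (the per-synapse figure is an assumption consistent with the quoted total), the map comes to 2 × 10^16 bytes:

```python
# Back-of-the-envelope check of the brain-map storage estimate.
synapses = 10**15               # approximate synapse count in a human brain
bytes_per_synapse = 20          # assumed: connected-neuron address + type + weight
total_bytes = synapses * bytes_per_synapse
total_terabytes = total_bytes / 10**12
# total_bytes is 2 x 10**16, i.e. 20,000 TB, matching the quoted figure.
```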

A possible method for mind uploading is serial sectioning, in which the brain tissue and perhaps other parts of the nervous system are frozen and then scanned and analyzed layer by layer, capturing the structure of the neurons and their interconnections; for frozen samples at the nanoscale this requires a cryo-ultramicrotome.[21] The exposed surface of frozen nerve tissue would be scanned and recorded, and then the surface layer of tissue removed. While this would be a very slow and labor-intensive process, research is currently underway to automate the collection and microscopy of serial sections.[22] The scans would then be analyzed, and a model of the neural net recreated in the system into which the mind was being uploaded.

There are uncertainties with this approach using current microscopy techniques. If it is possible to replicate neuron function from its visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique.[22] However, as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron’s cell membrane), this may not suffice for capturing and simulating neuron functions. It may be possible to extend the techniques of serial sectioning and to capture the internal molecular makeup of neurons, through the use of sophisticated immunohistochemistry staining methods that could then be read via confocal laser scanning microscopy. However, as the physiological genesis of ‘mind’ is not currently known, this method may not be able to access all of the necessary biochemical information to recreate a human brain with sufficient fidelity.

It may be possible to create functional 3D maps of the brain activity, using advanced neuroimaging technology, such as functional MRI (fMRI, for mapping change in blood flow), magnetoencephalography (MEG, for mapping of electrical currents), or combinations of multiple methods, to build a detailed three-dimensional model of the brain using non-invasive and non-destructive methods. Today, fMRI is often combined with MEG for creating functional maps of human cortex during more complex cognitive tasks, as the methods complement each other. Even though current imaging technology lacks the spatial resolution needed to gather the information needed for such a scan, important recent and future developments are predicted to substantially improve both spatial and temporal resolutions of existing technologies.[24]

There is ongoing work in the field of brain simulation, including partial and whole simulations of some animals. For example, the C. elegans roundworm, Drosophila fruit fly, and mouse have all been simulated to various degrees.[citation needed]

The Blue Brain Project by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne, Switzerland is an attempt to create a synthetic brain by reverse-engineering mammalian brain circuitry.

Underlying the concept of "mind uploading" (more accurately "mind transferring") is the broad philosophy that consciousness lies within the brain's information processing and is in essence an emergent feature arising from large-scale high-level patterns of organization in the neural network, and that the same patterns of organization can be realized in other processing devices. Mind uploading also relies on the idea that the human mind (the "self" and the long-term memory), just like non-human minds, is represented by the current neural network paths and the weights of the brain's synapses rather than by a dualistic and mystic soul and spirit. The mind or "soul" can be defined as the information state of the brain, and is immaterial only in the same sense as the information content of a data file or the state of software currently resident in a computer's working memory. Data specifying the information state of the neural network can be captured and copied as a "computer file" from the brain and re-implemented in a different physical form.[25] This is not to deny that minds are richly adapted to their substrates.[26] An analogy for mind uploading is copying the temporary information state (the variable values) of a computer program from one computer's memory to another and continuing its execution there. The other computer may have a different hardware architecture but emulate the hardware of the first.

These issues have a long history. In 1775 Thomas Reid wrote:[27] I would be glad to know… whether when my brain has lost its original structure, and when some hundred years after the same materials are fabricated so curiously as to become an intelligent being, whether, I say that being will be me; or, if, two or three such beings should be formed out of my brain; whether they will all be me, and consequently one and the same intelligent being.

A considerable portion of transhumanists and singularitarians place great hope in the belief that they may become immortal by creating one or many non-biological functional copies of their brains, thereby leaving their "biological shell". However, the philosopher and transhumanist Susan Schneider claims that at best, uploading would create a copy of the original person's mind.[28] Schneider agrees that consciousness has a computational basis, but argues that this does not mean we can upload and survive. On her view, "uploading" would probably result in the death of the original person's brain, while only outside observers could maintain the illusion of the original person still being alive. For it is implausible to think that one's consciousness would leave one's brain and travel to a remote location; ordinary physical objects do not behave this way. Ordinary objects (rocks, tables, etc.) are not simultaneously here and somewhere else. At best, a copy of the original mind is created.[28] Others have argued against such conclusions. For example, Buddhist transhumanist James Hughes has pointed out that this consideration only goes so far: if one believes the self is an illusion, worries about survival are not reasons to avoid uploading.[29] Keith Wiley has presented an argument wherein all resulting minds of an uploading procedure are granted equal primacy in their claim to the original identity, such that survival of the self is determined retroactively from a strictly subjective position.[30][31]

Another potential consequence of mind uploading is that the decision to “upload” may then create a mindless symbol manipulator instead of a conscious mind (see philosophical zombie).[32][33] Are we to assume that an upload is conscious if it displays behaviors that are highly indicative of consciousness? Are we to assume that an upload is conscious if it verbally insists that it is conscious?[34] Could there be an absolute upper limit in processing speed above which consciousness cannot be sustained? The mystery of consciousness precludes a definitive answer to this question.[35] Numerous scientists, including Kurzweil, strongly believe that determining whether a separate entity is conscious (with 100% confidence) is fundamentally unknowable, since consciousness is inherently subjective (see solipsism). Regardless, some scientists strongly believe consciousness is the consequence of computational processes which are substrate-neutral. On the contrary, numerous scientists believe consciousness may be the result of some form of quantum computation dependent on substrate (see quantum mind).[36][37][38]

In light of uncertainty on whether to regard uploads as conscious, Sandberg proposes a cautious approach:[39]

Principle of assuming the most (PAM): Assume that any emulated system could have the same mental properties as the original system and treat it correspondingly.

It is argued that if a computational copy of one’s mind did exist, it would be impossible for one to recognize it as their own mind.[40] The argument for this stance is the following: for a computational mind to recognize an emulation of itself, it must be capable of deciding whether two Turing machines (namely, itself and the proposed emulation) are functionally equivalent. This task is uncomputable due to the undecidability of equivalence, thus there cannot exist a computational procedure in the mind that is capable of recognizing an emulation of itself.

The process of developing emulation technology raises ethical issues related to animal welfare and artificial consciousness.[39] The neuroscience required to develop brain emulation would require animal experimentation, first on invertebrates and then on small mammals before moving on to humans. Sometimes the animals would just need to be euthanized in order to extract, slice, and scan their brains, but sometimes behavioral and in vivo measures would be required, which might cause pain to living animals.[39]

In addition, the resulting animal emulations themselves might suffer, depending on one's views about consciousness.[39] Bancroft argues for the plausibility of consciousness in brain simulations on the basis of the "fading qualia" thought experiment of David Chalmers. He then concludes:[41] "If, as I argue above, a sufficiently detailed computational simulation of the brain is potentially operationally equivalent to an organic brain, it follows that we must consider extending protections against suffering to simulations."

It might help reduce emulation suffering to develop virtual equivalents of anaesthesia, as well as to omit processing related to pain and/or consciousness. However, some experiments might require a fully functioning and suffering animal emulation. Animals might also suffer by accident due to flaws and lack of insight into what parts of their brains are suffering.[39] Questions also arise regarding the moral status of partial brain emulations, as well as creating neuromorphic emulations that draw inspiration from biological brains but are built somewhat differently.[41]

Brain emulations could be erased by computer viruses or malware, without need to destroy the underlying hardware. This may make assassination easier than for physical humans. The attacker might take the computing power for its own use.[42]

Many questions arise regarding the legal personhood of emulations.[43] Would they be given the rights of biological humans? If a person makes an emulated copy of himself and then dies, does the emulation inherit his property and official positions? Could the emulation ask to “pull the plug” when its biological version was terminally ill or in a coma? Would it help to treat emulations as adolescents for a few years so that the biological creator would maintain temporary control? Would criminal emulations receive the death penalty, or would they be given forced data modification as a form of “rehabilitation”? Could an upload have marriage and child-care rights?[43]

If simulated minds become reality and are assigned rights of their own, it may be difficult to ensure the protection of "digital human rights". For example, social-science researchers might be tempted to secretly expose simulated minds, or whole isolated societies of simulated minds, to controlled experiments in which many copies of the same minds are exposed (serially or simultaneously) to different test conditions.[citation needed]

Emulations could create a number of conditions that might increase risk of war, including inequality, changes of power dynamics, a possible technological arms race to build emulations first, first-strike advantages, strong loyalty and willingness to “die” among emulations, and triggers for racist, xenophobic, and religious prejudice.[42] If emulations run much faster than humans, there might not be enough time for human leaders to make wise decisions or negotiate. It is possible that humans would react violently against growing power of emulations, especially if they depress human wages. Emulations may not trust each other, and even well-intentioned defensive measures might be interpreted as offense.[42]

There are very few feasible technologies that humans have refrained from developing. The neuroscience and computer-hardware technologies that may make brain emulation possible are widely desired for other reasons, and logically their development will continue into the future. Assuming that emulation technology will arrive, a question becomes whether we should accelerate or slow its advance.[42]

Arguments for speeding up brain-emulation research:

Arguments for slowing down brain-emulation research:

Emulation research would also speed up neuroscience as a whole, which might accelerate medical advances, cognitive enhancement, lie detectors, and capability for psychological manipulation.[48]

Emulations might be easier to control than de novo AI because

As counterpoint to these considerations, Bostrom notes some downsides:

Ray Kurzweil, director of engineering at Google, has long predicted that people will be able to "upload" their entire brains to computers and become "digitally immortal" by 2045, a claim he repeated for example in his 2013 speech at the Global Futures 2045 International Congress in New York, an event organised around a similar set of beliefs.[49] Mind uploading has also been advocated by a number of researchers in neuroscience and artificial intelligence, such as the late Marvin Minsky.[citation needed] In 1993, Joe Strout created a small website called the Mind Uploading Home Page and began advocating the idea in cryonics circles and elsewhere on the net. That site has not been actively updated in recent years, but it has spawned other sites, including MindUploading.org, run by Randal A. Koene, who also moderates a mailing list on the topic. These advocates see mind uploading as a medical procedure which could eventually save countless lives.

Many transhumanists look forward to the development and deployment of mind uploading technology, with transhumanists such as Nick Bostrom predicting that it will become possible within the 21st century due to technological trends such as Moore’s law.[4]

Michio Kaku, in collaboration with the Science Channel, hosted a documentary, Sci Fi Science: Physics of the Impossible, based on his book Physics of the Impossible. Episode four, titled "How to Teleport", suggests that mind uploading via techniques such as quantum entanglement and whole brain emulation, using an advanced MRI machine, might enable people to be transported across vast distances at near light-speed.

The book Beyond Humanity: CyberEvolution and Future Minds by Gregory S. Paul & Earl D. Cox, is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but also deals with human mind transfer. Richard Doyle’s Wetwares: Experiments in PostVital Living deals extensively with uploading from the perspective of distributed embodiment, arguing for example that humans are currently part of the “artificial life phenotype”. Doyle’s vision reverses the polarity on uploading, with artificial life forms such as uploads actively seeking out biological embodiment as part of their reproductive strategy.

Kenneth D. Miller, a professor of neuroscience at Columbia and a co-director of the Center for Theoretical Neuroscience, raised doubts about the practicality of mind uploading. His major argument is that reconstructing neurons and their connections is in itself a formidable task, but it is far from being sufficient. Operation of the brain depends on the dynamics of electrical and biochemical signal exchange between neurons; therefore, capturing them in a single "frozen" state may prove insufficient. In addition, the nature of these signals may require modeling down to the molecular level and beyond. Therefore, while not rejecting the idea in principle, Miller believes that the complexity of "absolute" duplication of an individual mind will remain insurmountable for the next several hundred years.[50]

Mind Uploading

Minduploading.org is a collection of pages and articles designed to explore the concepts underlying mind uploading. The articles are intended to be a readable introduction to the basic technical and philosophical topics covering mind uploading and substrate-independent minds. The focus is on careful definitions of the common terms and what the implications are if mind uploading becomes possible.

Mind uploading is an ongoing area of active research, bringing together ideas from neuroscience, computer science, engineering, and philosophy. This site refers to a number of participants and researchers who are helping to make mind uploading possible.

Realistically, mind uploading likely lies many decades in the future, but the near term offers the possibility of advanced neural prostheses that may benefit us.

Mind uploading is a popular term for a process by which the mind, a collection of memories, personality, and attributes of a specific individual, is transferred from its original biological brain to an artificial computational substrate. Alternative terms for mind uploading have appeared in fiction and non-fiction, such as mind transfer, mind downloading, off-loading, side-loading, and several others. They all refer to the same general concept of transferring the mind to a different substrate.

Once it is possible to move a mind from one substrate to another, it is called a substrate-independent mind (SIM). The concept of a SIM is inspired by the idea of designing software that can run on multiple computers with different hardware without needing to be rewritten. For example, Java's design principle "write once, run anywhere" makes it a platform-independent system. In this context, "substrate" is a generalized term for any computational platform that is capable of universal computation.
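The idea can be sketched in code: the same (entirely hypothetical) state-update rule runs unchanged on two different storage back-ends, in the spirit of "write once, run anywhere":

```python
# Illustrative sketch of substrate independence: one update rule, two
# interchangeable computational back-ends. Both classes are hypothetical.
class ListSubstrate:
    """Stores the state as a plain Python list."""
    def __init__(self, state):
        self.state = list(state)
    def apply(self, fn):
        self.state = [fn(x) for x in self.state]

class DictSubstrate:
    """Stores the same state as an index-keyed dictionary."""
    def __init__(self, state):
        self.state = {i: x for i, x in enumerate(state)}
    def apply(self, fn):
        self.state = {i: fn(x) for i, x in self.state.items()}

def step(x):
    return x + 1    # stand-in for one update of the mind's state

a = ListSubstrate([1, 2, 3]); a.apply(step)
b = DictSubstrate([1, 2, 3]); b.apply(step)
# Both substrates compute the same successor state.
```

The claim behind SIMs is analogous: if the mind is a computational process, any universal computer can host that process, regardless of its physical realization.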

We take the materialist position that the human mind is solely generated by the brain and is a function of neural states. Additionally, we assume that the neural states are computational processes, and that devices capable of universal computation are sufficient to generate the same kind of computational processes found in a brain.
