Ethics of artificial intelligence – Wikipedia


The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems.[1] It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI.

The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots.[2] Robot ethics intersects with the ethics of AI. Robots are physical machines, whereas AI can exist purely as software.[3] Not all robots function through AI systems and not all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit humans, their impact on individual autonomy, and their effects on social justice.

Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral.[4][5][6][7] To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.[8]

Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[9] More recently, academics and many governments have challenged the idea that AI can itself be held accountable.[10] A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.[11]

In 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne, Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[12]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[13] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[14][15] The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue.[16] They point to programs like the Language Acquisition Device, which can emulate human interaction.

Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity".[17] He suggests that it may be somewhat or possibly very dangerous for humans.[18] This is discussed by a philosophy called Singularitarianism. The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[19]

There has been discussion of creating tests to see whether an AI is capable of making ethical decisions. Alan Winfield concludes that the Turing test is flawed and that the requirement for an AI to pass it is too low.[20] A proposed alternative is the Ethical Turing Test, which would improve on the current test by having multiple judges decide whether the AI's decision is ethical or unethical.[20]

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.[17]

However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[21] Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises questions about the environment in which such robots would learn about the world, whose morality they would inherit, and whether they might also develop human 'weaknesses': selfishness, a pro-survival attitude, hesitation, and so on.

In Moral Machines: Teaching Robots Right from Wrong,[22] Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis),[23] while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".[24]
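The transparency argument can be made concrete with a toy example. The sketch below (the features and rules are invented for illustration, not drawn from the cited works) shows why a decision tree's policy is auditable: the entire decision procedure can be enumerated as human-readable rules, whereas a trained neural network's behavior is buried in numeric weights.

```python
# A hand-written decision tree over hypothetical screening features.
# Internal nodes test a single named feature; leaves are decisions.
tree = {
    "feature": "income_stable",
    "yes": {"feature": "has_collateral", "yes": "approve", "no": "review"},
    "no": "reject",
}

def decide(node, case):
    """Follow the tree; every step is a named, inspectable test."""
    while isinstance(node, dict):
        node = node["yes" if case[node["feature"]] else "no"]
    return node

def rules(node, path=()):
    """Flatten the tree into human-readable IF/THEN rules."""
    if not isinstance(node, dict):
        return ["IF " + " AND ".join(path or ("always",)) + " THEN " + node]
    f = node["feature"]
    return (rules(node["yes"], path + (f + "=yes",)) +
            rules(node["no"], path + (f + "=no",)))

print(decide(tree, {"income_stable": True, "has_collateral": False}))
for rule in rules(tree):   # the whole policy, readable at a glance
    print(rule)
```

Every decision the tree can ever make is listed by `rules`, which is the kind of predictability (akin to stare decisis) the transparency argument appeals to; no comparable enumeration exists for a network's weight matrix.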

According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deep fakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that do not require a human controller.[25]

A review of 84 ethics guidelines for AI identified 11 clusters of principles: transparency; justice and fairness; non-maleficence; responsibility; privacy; beneficence; freedom and autonomy; trust; sustainability; dignity; and solidarity.[26]

Luciano Floridi and Josh Cowls created an ethical framework for AI based on four principles of bioethics (beneficence, non-maleficence, autonomy and justice) and an additional AI-enabling principle, explicability.[27]

Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts.[28] Ben Goertzel and David Hart created OpenCog as an open source framework for AI development.[29] OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open-source AI beneficial to humanity.[30] There are numerous other open-source AI developments.

Unfortunately, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The IEEE has a standardisation effort on AI transparency.[31] The IEEE effort identifies multiple scales of transparency for different users. Further, there is concern that releasing the full capacity of contemporary AI to some organizations may be a public bad, that is, do more damage than good. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted a blog post on this topic, asking for government regulation to help determine the right thing to do.[32]

Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term.[33] The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.[34][35][36]

On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Policy and investment recommendations for trustworthy Artificial Intelligence".[37] This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally.[38] To prevent harm, in addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks.[39]

AI has become increasingly integral to facial and voice recognition systems. Some of these systems have real business applications and directly impact people. These systems are vulnerable to biases and errors introduced by their human creators. Also, the data used to train these AI systems can itself be biased.[40][41][42][43] For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender;[44] these AI systems were able to detect the gender of white men more accurately than that of men with darker skin. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's.[45] Furthermore, Amazon terminated its use of AI in hiring and recruitment because the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates.[46]

Bias can creep into algorithms in many ways. The most predominant view is that bias is embedded within the historical data used to train the system. For instance, Amazon's AI-powered recruitment tool was trained on its own recruitment data accumulated over the years, during which time the candidates who successfully got the job were mostly white males. Consequently, the algorithms learned the (biased) pattern from the historical data and generated predictions that these types of candidates were most likely to succeed in getting the job. Therefore, the recruitment decisions made by the AI system turned out to be biased against female and minority candidates. Friedman and Nissenbaum identify three categories of bias in computer systems: preexisting bias, technical bias, and emergent bias.[47] In natural language processing, problems can arise from the text corpus, the source material the algorithm uses to learn about the relationships between different words.[48]
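How historical bias propagates can be shown with a deliberately simple model. The groups and numbers below are invented for illustration (this is not Amazon's actual system): a scorer that ranks candidates by the hiring rate of "similar" past candidates simply reproduces the skew in its training data.

```python
# Invented historical record: (group label, whether the candidate was
# hired). Past applicants and past decisions are both skewed.
history = ([("a", True)] * 80 + [("a", False)] * 20 +
           [("b", True)] * 5 + [("b", False)] * 15)

def hire_rate(group):
    """P(hired | group) as 'learned' from the historical record."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

# A naive model that scores new candidates by this learned rate inherits
# the bias wholesale: group "a" scores 0.80, group "b" scores 0.25,
# even though group membership says nothing about individual merit.
print(hire_rate("a"), hire_rate("b"))
```

Nothing in the code is malicious; the disparity comes entirely from the data, which is why documenting training data (as discussed below in the source) is proposed as a mitigation.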

Large companies such as IBM and Google have made efforts to research and address these biases.[49][50][51] One solution is to create documentation for the data used to train AI systems.[52][53] Process mining can be an important tool for organizations seeking compliance with proposed AI regulations: it can identify errors, monitor processes, identify potential root causes of improper execution, and more.[54]

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries and that almost no one is making an effort to identify or correct it.[55] There are some open-source tools[56] from civil society organizations that seek to bring more awareness to biased AI.

"Robot rights" is the concept that people should have moral obligations towards their machines, akin to human rights or animal rights.[57] It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society.[58] These could include the right to life and liberty, freedom of thought and expression, and equality before the law.[59] The issue has been considered by the Institute for the Future[60] and by the U.K. Department of Trade and Industry.[61]

Experts disagree on how soon specific and detailed laws on the subject will be necessary.[61] Glenn McGee reported that sufficiently humanoid robots might appear by 2020,[62] while Ray Kurzweil sets the date at 2029.[63] Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.[64]

The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:

61. If in any given year, a publicly available open-source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.[65]

In October 2017, the android Sophia was granted "honorary" citizenship in Saudi Arabia, though some considered this to be more of a publicity stunt than a meaningful legal recognition.[66] Some saw this gesture as openly denigrating of human rights and the rule of law.[67]

The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.

Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, a burden both to the AI agents and to human society.[68]

Joseph Weizenbaum[69] argued in 1976 that AI technology should not be used to replace people in positions that require respect and care.

Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."[70]

Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer", pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[70] However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in essence, nothing more than fancy curve-fitting machines; using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases become formalized and ingrained, making them even more difficult to spot and fight.[71]

Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum, these points suggest that AI research devalues human life.[69]

AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes. Bill Hibbard[72] writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."

As the widespread use of autonomous cars becomes increasingly imminent, the new challenges raised by fully autonomous vehicles must be addressed.[73][74] There has been debate over the legal liability of the responsible party when these cars get into accidents.[75][76] In one report, a driverless car hit a pedestrian while the driver was inside the car but the controls were fully in the hands of the computer. This led to a dilemma over who was at fault for the accident.[77]

In another incident, on March 18, 2018, Elaine Herzberg was struck and killed by a self-driving Uber in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, the pedestrian, the car company, or the government should be held responsible for her death.[78]

Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary.[79][failed verification] Thus, it falls on governments to regulate drivers who over-rely on autonomous features, and to educate them that these are technologies that, while convenient, are not a complete substitute. Before autonomous cars become widely used, these issues need to be tackled through new policies.[80][81][82]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomy.[13][83] On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented.[84] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[85][15] Some researchers state that autonomous robots might be more humane, as they could make decisions more effectively.[86]

Within the last decade, there has been intensive research into autonomous systems with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."[87] From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill; that is why there should be a set moral framework that the AI cannot override.[88]

There has recently been an outcry with regard to the engineering of artificial intelligence weapons, including ideas of a robot takeover of mankind. AI weapons present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea. Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition[89] to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.[90]

"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[91]

Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology". These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.[90]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but research investigating AI's long-run social impact has spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".[92]

Approaches like machine learning with neural networks can result in computers making decisions that they and the humans who programmed them cannot explain. It is difficult for people to determine if such decisions are fair and trustworthy, leading potentially to bias in AI systems going undetected, or people rejecting the use of such systems. This has led to advocacy and in some jurisdictions legal requirements for explainable artificial intelligence.[93]
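One minimal notion of explainability can be illustrated with a linear scorer (the weights and features here are invented for this sketch): because the model is additive, each feature's contribution to a given decision can be read off directly, something a deep network's entangled weights do not offer.

```python
# Hypothetical linear model: score = sum of weight * feature value.
weights = {"income": 2.0, "debt": -1.5, "age": 0.1}

def score(case):
    """Total decision score for one case."""
    return sum(weights[f] * v for f, v in case.items())

def explain(case):
    """Per-feature contributions to the score, largest magnitude first."""
    contribs = {f: weights[f] * v for f, v in case.items()}
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

applicant = {"income": 3.0, "debt": 4.0, "age": 30.0}
print(score(applicant))
for feature, contribution in explain(applicant):
    print(feature, contribution)   # which features drove the outcome
```

Post-hoc explanation methods for black-box models attempt to approximate exactly this kind of per-feature attribution, which the linear form provides for free.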

Many researchers have argued that, by way of an "intelligence explosion", a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals.[94] In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[95]

However, Bostrom has also asserted that, rather than overwhelming the human race and leading to our destruction, superintelligence could help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to "enhance" ourselves.[96]

The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.[94][95] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would naturally have such a common-sense adaptation.[97] AI researchers such as Stuart J. Russell,[98] Bill Hibbard,[72] Roman Yampolskiy,[99] Shannon Vallor,[100] Steven Umbrello[101] and Luciano Floridi[102] have proposed design strategies for developing beneficial machines.

There are many organisations concerned with AI ethics and policy, public and governmental as well as corporate and societal.

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit, the Partnership on AI to Benefit People and Society, to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as a platform for discussion about artificial intelligence. Apple joined in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.[103]

The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organization.

Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organizations to ensure AI is ethically applied.

The role of fiction with regards to AI ethics has been a complex one. One can distinguish three levels at which fiction has impacted the development of artificial intelligence and robotics: Historically, fiction has been prefiguring common tropes that have not only influenced goals and visions for AI, but also outlined ethical questions and common fears associated with it. During the second half of the twentieth and the first decades of the twenty-first century, popular culture, in particular movies, TV series and video games have frequently echoed preoccupations and dystopian projections around ethical questions concerning AI and robotics. Recently, these themes have also been increasingly treated in literature beyond the realm of science fiction. And, as Carme Torras, research professor at the Institut de Robòtica i Informàtica Industrial (Institute of Robotics and Industrial Informatics) at the Technical University of Catalonia notes,[124] in higher education, science fiction is also increasingly used for teaching technology-related ethical issues in technological degrees.

Historically speaking, the investigation of moral and ethical implications of "thinking machines" goes back at least to the Enlightenment: Leibniz already poses the question of whether we might attribute intelligence to a mechanism that behaves as if it were a sentient being,[125] as does Descartes, who describes what could be considered an early version of the Turing test.[126]

The romantic period has several times envisioned artificial creatures that escape the control of their creator with dire consequences, most famously in Mary Shelley's Frankenstein. The widespread preoccupation with industrialization and mechanization in the 19th and early 20th century, however, brought ethical implications of unhinged technical developments to the forefront of fiction: R.U.R. (Rossum's Universal Robots), Karel Čapek's play about sentient robots endowed with emotions and used as slave labor, is not only credited with the invention of the term 'robot' (derived from the Czech word for forced labor, robota) but was also an international success after it premiered in 1921. George Bernard Shaw's play Back to Methuselah, published in 1921, questions at one point the validity of thinking machines that act like humans; Fritz Lang's 1927 film Metropolis shows an android leading the uprising of the exploited masses against the oppressive regime of a technocratic society.

While the anticipation of a future dominated by potentially indomitable technology has fueled the imagination of writers and film makers for a long time, one question has been less frequently analyzed, namely, to what extent fiction has played a role in providing inspiration for technological development. It has been documented, for instance, that the young Alan Turing saw and appreciated G.B. Shaw's play Back to Methuselah in 1933[127] (just 3 years before the publication of his first seminal paper[128] which laid the groundwork for the digital computer), and he would likely have been at least aware of plays like R.U.R., which was an international success and translated into many languages.

One might also ask which role science fiction played in establishing the tenets and ethical implications of AI development: Isaac Asimov conceptualized his Three Laws of Robotics in the 1942 short story "Runaround", part of the short story collection I, Robot; Arthur C. Clarke's short story "The Sentinel", on which Stanley Kubrick's film 2001: A Space Odyssey is based, was written in 1948 and published in 1952. Another example (among many others) would be Philip K. Dick's numerous short stories and novels, in particular Do Androids Dream of Electric Sheep?, published in 1968 and featuring its own version of a Turing Test, the Voight-Kampff Test, used to gauge emotional responses of androids indistinguishable from humans. The novel later became the basis of the influential 1982 movie Blade Runner by Ridley Scott.

Science fiction has been grappling with ethical implications of AI developments for decades, and thus provided a blueprint for ethical issues that might emerge once something akin to general artificial intelligence has been achieved: Spike Jonze's 2013 film Her shows what can happen if a user falls in love with the seductive voice of his smartphone operating system; Ex Machina, on the other hand, asks a more difficult question: if confronted with a clearly recognizable machine, made only human by a face and an empathetic and sensual voice, would we still be able to establish an emotional connection, still be seduced by it? (The film echoes a theme already present two centuries earlier, in the 1817 short story "The Sandmann" by E.T.A. Hoffmann.)

The theme of coexistence with artificial sentient beings is also central to two recent novels: Machines Like Me by Ian McEwan, published in 2019, involves (among many other things) a love triangle involving an artificial person and a human couple. Klara and the Sun by Nobel Prize winner Kazuo Ishiguro, published in 2021, is the first-person account of Klara, an 'AF' (artificial friend), who is trying, in her own way, to help the girl she is living with, who, after having been 'lifted' (i.e. having been subjected to genetic enhancements), is suffering from a strange illness.

While ethical questions linked to AI have been featured in science fiction literature and feature films for decades, the emergence of the TV series as a genre allowing for longer and more complex story lines and character development has led to some significant contributions dealing with the ethical implications of technology. The Swedish series Real Humans (2012–2013) tackled the complex ethical and social consequences of integrating artificial sentient beings into society. The British dystopian science fiction anthology series Black Mirror (2013–2019) was particularly notable for experimenting with dystopian fictional developments linked to a wide variety of recent technologies. Both the French series Osmosis (2020) and the British series The One deal with what can happen when technology tries to find the ideal partner for a person. Several episodes of the Netflix series Love, Death & Robots have imagined scenes of robots and humans living together; the most representative, season 2, episode 1, shows how bad the consequences can be when robots get out of control because humans rely too heavily on them.[129]

The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and where the relevant distinction between types of software is between the sentient and the non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who created the system, for the best of motives, to give medical assistance in emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.[130]

The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games.[131] It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power via a global-scale neural network. This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.

Detroit: Become Human is one of the best-known recent video games to address the ethics of artificial intelligence. Quantic Dream designed the game's chapters around interactive storylines to give players a more immersive experience. Players control three awakened androids who, confronted with different events, make choices aimed at changing how humans view androids; different choices lead to different endings. This is one of the few games that puts players in an android's perspective, allowing them to better consider the rights and interests of robots once a true artificial intelligence is created.[132]

Over time, debates have tended to focus less and less on possibility and more on desirability,[133] as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, actively seeks to build more intelligent successors to the human species.

Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.[134]

I have a four-foot-tall robot in my house that plays with my kids. Its name is Jethro.

Both my daughters, aged 5 and 9, are so enamored with Jethro that they have each asked to marry it. For fun, my wife and I put on mock weddings. Despite the robot being mainly for entertainment, its very basic artificial intelligence can perform thousands of functions, including dancing and teaching karate, which my kids love.

The most important thing Jethro has taught my kids is that it's totally normal to have a walking, talking machine around the house that you can hang out with whenever you want to.

Given my daughters' semi-regular use of smartphones and tablets, I have to wonder how this will affect them in the future. Will they have any fear of technologies like driverless cars? Will they take it for granted that machine intelligences and avatars on computers can be their best friends, or even their bosses?

Will marrying a super-intelligent robot in 20 years be a natural decision? Even though I love technology, I'm not sure how I would feel about having a robot-in-law. But my kids might think nothing of it.

This is my story of transhumanism.

My transhumanism journey began in 2003 when I was reporting a story for National Geographic in Vietnam's demilitarized zone and I almost stepped on a landmine.

I remember my guide roughly shoving me aside and pointing to the metal object half sticking out of the ground in front of me.

I stared at the device that would have completely blown my legs off had my boot tripped the mine. I had just turned 30. The experience left me shaken. And it kept haunting me.

That night as I lay tense and awake in my hotel room, I had the epiphany that has helped define the rest of my life: I decided that the most important thing in my existence was to fight for survival. To put it another way: My goal was to never die.

Because I was not religious, I immediately turned to the thing that gave meaning to my world: science and technology. I took a leap of faith and made a wager that day. I later called this (and even later, dedicated a book to it) the transhumanist wager.

My idea for an immortality wager came from Pascal's Wager, the famous bet that caught on in the 17th century and loosely argued that it was better to believe in God than not to, because you would be granted an afterlife if there was indeed a God. My transhumanist wager was based on my belief that it's better to dedicate our resources to science and technology to overcome death while we're still alive, so we don't ever have to find out whether there is an afterlife or not. It turns out I wasn't alone in my passion to live indefinitely through science. A small social movement, mostly of academics and researchers, was tackling similar issues, starting organizations, and funding research.

Some of them called themselves transhumanists.

Fast-forward 16 years from my landmine incident, and transhumanism has grown way beyond its main mission of just overcoming death with science.

Now the movement is the de facto philosophy (maybe even the religion) of Silicon Valley. It encapsulates numerous futurist fields: singularitarianism, cyborgism, cryonics, genetic editing, robotics, AI, biohacking, and others.

Biohacking in particular has taken off: the practice of physically hacking one's body with science, changing and augmenting our physiology the same way computer hackers would infiltrate a mainframe.

It's pretty obvious why it has emerged as such a big trend: it attracts the youth.

Not surprisingly, worrying about death is something that older people usually do (and, apparently, those younger people who almost step on landmines). Most young people feel invincible. But tell young people they can take brain drugs called nootropics that make them super smart, or give them special eye drops that let them see in the dark, or give them a chip implant that enhances human ability (like the one I have), and a lot of young people will go for it.

In 2016, I ran for the US presidency as the Transhumanist Party nominee. To get support from younger biohackers, my team and I journeyed on the Immortality Bus, my 38-foot coffin-shaped campaign bus, to Grindfest, the major annual biohacking meet-up in Tehachapi, California. In an old dentist's chair in a garage, biohackers injected me with a horse syringe containing a small radio-frequency-identification implant that uses near-field communication technology, the same wireless frequency used in most smartphones. The tiny device, about the size of a grain of rice, was placed just under the skin of my hand. With my chip, I could start a car, pay with bitcoin, and open my front door with a lock reader.

Four years later, I still have the implant and use it almost every day. For surfers or joggers like myself, for example, it's great because I don't have to carry keys around.
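The chip described here is a passive NFC tag: the lock reader powers it over the air, reads its unique ID, and checks that ID against a stored allowlist. The following is a minimal illustrative sketch of that matching step; the UID value, function names, and AUTHORIZED_UIDS set are invented for illustration and are not any vendor's actual API or Istvan's real setup.

```python
# Hypothetical sketch of an NFC lock reader's authorization check.
# A real reader would obtain the UID from its NFC front-end hardware;
# here we only model the allowlist comparison.

AUTHORIZED_UIDS = {"04:A2:19:B3:7C:80:01"}  # example UID, invented for illustration

def normalize(uid: str) -> str:
    """Canonicalize a UID so byte-separator and case differences don't matter."""
    return uid.replace("-", ":").upper()

def is_authorized(scanned_uid: str) -> bool:
    """Return True if the scanned tag's UID is on the allowlist."""
    return normalize(scanned_uid) in {normalize(u) for u in AUTHORIZED_UIDS}

# Usage: the reader calls is_authorized() with whatever UID it just read.
print(is_authorized("04:a2:19:b3:7c:80:01"))  # same implant, different case
print(is_authorized("04:FF:00:00:00:00:00"))  # unknown tag
```

Because a bare UID can be cloned, real access-control systems typically layer cryptographic challenge-response on top of this kind of check; the sketch shows only the simplest form of the idea.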

One thing I do have to navigate is how some religious people view me once they understand I have one. Evangelical Christians have told me that an implant is the "mark of the beast" from the Bible's Book of Revelation.

Even though I'm tagged by conspiracy theorists as a potential contender for the Antichrist, I can't think of any negatives in my own experiences of having a chip implant. But as my work in transhumanism has reached from the US military to the World Bank to many of the world's best-known universities, my chip implant only exacerbates this conspiracy theorizing.

While people often want to know what other things I've done to my body, in reality becoming a cyborg is a lot less futuristic and drastic than people think.

For me and for the thousands of people around the world who have implants, it's all about functionality. An implant simply makes our lives easier and more efficient. Mine also sends out pre-written text messages when people's phones come within a few feet of me, which is a fun party trick.

But frankly, a lot of the most transformative technology is still being developed, and if you're healthy like me, there's really not much benefit in doing a lot of biohacking today.

I take nootropics for better memory, but there's no conclusive research I know of showing that they actually work yet. I've done some brainwave therapy, sometimes called direct neurofeedback, or biofeedback, but I didn't see any lasting changes. I fly drones for fun, and of course I also have Jethro, our family robot.

For the most part, members of the disabled community are the ones truly benefiting from transhumanist technologies today. If you have an arm shot off in a war, it's cyborg science that gives you a robot arm, controlled by your neural system, that allows you to grab a beer, play the piano, or shake someone's hand again.

But much more dramatic technology is soon to come. And the hope is that it will be available, and accessible, to everyone.

I asked to be added to a volunteer list for an experiment that will place implants in people's brains to allow us to communicate telepathically, using AI. (Biohacking trials like this are secretive because they are coming under more intense legal scrutiny.) I'm also looking into getting a facial recognition security system for my home. I might even get a robot pet dog; these have become incredibly sophisticated, have fur softer than the real thing (that doesn't shed all over your couch or trigger allergies), and can even act as security systems.

Beyond that, people are using stem cells to grow new teeth, genetic editing to create designer babies, and exoskeleton technology that will likely allow a human to run on water in the near future.

Most people focus on one aspect of transhumanism, like just biohacking, or just AI, or just brainwave-tech devices. But I like to try it all, embrace it all, and support it all. Whatever new direction transhumanist technology takes, I try to take it in and embrace the innovation.

This multi-faceted approach has worked well in helping me build a bridge connecting the various industries and factions of the transhumanist movement. It's what inspired me to launch presidential and California gubernatorial campaigns on a transhumanist platform. Now I'm embarking on a new campaign in 2020 for US president as a Republican, hoping to get conservatives to become more open-minded about the future.

The amount of money flowing into transhumanist projects is growing into many billions of dollars. The life extension business of transhumanism will be a $600 billion industry by 2025, according to Bank of America. This is no time for transhumanism to break apart into many different divisions, and it's no time to butt heads. We need to unite in our aim to truly change the human being forever.

Transhumanists, whatever kind they are, believe they can be more than just human. The word "natural" is not in our vocabulary. There's only what transhumanists can do with the tools of science and technology they create. That is our great calling: to evolve the human being into something better than it is.

Because transhumanism has grown so broadly by now, not all transhumanists agree with me on substantially changing the human being. Some believe we should only use technology to eliminate suffering in our lives. Religious transhumanists believe we should use brain implants and virtual reality to improve our morality and religious behavior. Others tell me politics and transhumanism should never mix, and that we must always keep science out of the hands of the government.

We need unity of some significant sort, because as we grow at such a fast rate there are a lot of challenges ahead. For example, the conservative Christian right wants to enact moratoriums against transhumanism. The anarcho-primitivists, led by people like the primitivist philosopher and author John Zerzan (whom I once debated at Stanford University), want to eliminate much technology and return to a hunter-gatherer lifestyle, which they believe is more in tune with Earth's original ecology. And finally, we must be careful that the so-called one percent doesn't take transhumanist technology and leave the rest of us in the dust, becoming gods themselves with radical tech while not sharing the benefits with humanity.

I personally believe the largest danger of the transhumanist era is the fact that within a few decades, we will have created super-intelligent AI. What if this new entity simply decides it doesn't like humans? If something is more sophisticated, powerful, intelligent, and resilient than humans, we will have a hard time stopping it if it wants to harm or eliminate us.

Whatever happens in the future, we must take greater care than we ever have before as our species enters the transhumanist age. For the first time, we are on the verge of transforming the physical structure of our bodies and our brains. And we are inventing machines that could end up being more intelligent and powerful than we are. This type of change requires that not only governments act together, but also cultures, religions, and humanity as a whole.

In the end, I believe that a lot more people will be on board with transhumanism than admit it. Nearly all of us want to eliminate disease, protect our families from death, and create a better path and purpose for science and technology.

But I also realize that this must be done ever so delicately, so as not to prematurely push our species into crisis through our unbridled arrogance. One day, we humans may look back and revel in how far our species has evolved: into undying mammals, cyborgs, robots, and even pure living data. And the most important part will be to be able to look back and know we didn't destroy ourselves to get there.

Excerpt from:

What it means to be a cyborg in 2019 - Quartz

New World Order Explained

Source: Wikipedia: http://en.wikipedia.org/wiki/New_World_Order

In conspiracy theory, the term New World Order or NWO refers to the emergence of a bureaucratic collectivist one-world government.

The common theme in conspiracy theories about a New World Order is that a powerful and secretive elite of globalists is conspiring to eventually rule the world through an autonomous world government, which would replace sovereign nation-states and put an end to international power struggles. Significant occurrences in politics and finance are speculated to be caused by an extremely influential cabal operating through many front organizations. Numerous historical and current events are seen as steps in an on-going plot to achieve world domination through secret political gatherings and decision-making processes.

Prior to the early 1990s, New World Order conspiracism was limited to two American subcultures, primarily the militantly anti-government right, and secondarily Christian fundamentalists concerned with end-time emergence of the Antichrist. Skeptics, such as political scientist Michael Barkun, have expressed concern that right-wing conspiracy theories about a New World Order have now not only been embraced by many left-wing conspiracy theorists but have seeped into popular culture, thereby inaugurating an unrivaled period of people actively preparing for apocalyptic millenarian scenarios in the United States of the late 20th and early 21st centuries. They warn that this development may not only fuel lone-wolf terrorism but have devastating effects on American political life, such as the far right and the far left joining forces to launch an insurrectionary national-anarchist movement capable of subverting the established political powers.

During the 20th century, many statesmen, such as Woodrow Wilson and Winston Churchill, used the term "new world order" to refer to a new period of history evidencing a dramatic change in world political thought and the balance of power after World War I and World War II. They all saw these periods as opportunities to implement idealistic or liberal proposals for global governance only in the sense of new collective efforts to identify, understand, or address worldwide problems that go beyond the capacity of individual nation-states to solve. These proposals led to the creation of international organizations, such as the United Nations and NATO, and international regimes, such as the Bretton Woods system and the General Agreement on Tariffs and Trade, which were calculated both to maintain a balance of power and to regularize cooperation between nations. These creations in particular, and internationalism in general, however, would always be criticized and opposed by American paleoconservatives on isolationist grounds and by neoconservatives on benevolent imperialist grounds.

In the aftermath of World Wars I & II, progressives welcomed these new international organizations and regimes but argued they suffered from a democratic deficit and therefore were inadequate to not only prevent another global war but also foster global justice. Thus, activists around the globe formed a world federalist movement bent on creating a real new world order. A number of intellectuals of the reformist left, such as British writer H. G. Wells in the 1940s, adopted and redefined the term new world order as a synonym for the establishment of a full-fledged social democratic world government.

In reaction, conspiracy theorists of the American secular and Christian right, whose paranoia was fueled by Second Red Scare-era unfounded fears of Masonic, Illuminati, and Jewish conspiracies to achieve world communism, began misinterpreting any use of term new world order by members of the Establishment, even when they were simply acknowledging a change in the international balance of power, as a call for the imposition of a state atheistic and bureaucratic collectivist world government, which controls the means of production, while the surplus (profit) is distributed among a ruling class of bureaucrats, rather than among the working class.

In the 1960s, a great deal of right-wing conspiracist attention focused on the United Nations as the vehicle for creating the One World Government, and contributed to a movement for United States withdrawal from the U.N. American writer Mary M. Davison, in her 1966 booklet The Profound Revolution, traced the alleged New World Order conspiracy to the creation of the U.S. Federal Reserve System in 1913 by international bankers, who she claimed later formed the Council on Foreign Relations in 1921 as the shadow government. At the time the booklet was published, "international bankers" would have been interpreted by many readers as a reference to a postulated international Jewish banking conspiracy masterminded by the Rothschilds and Rockefellers.

Claiming that the term New World Order is used by a secretive elite dedicated to the destruction of all national sovereignties, American producerist journalist Gary Allen, in his 1974 book Rockefeller: Campaigning for the New World Order and 1987 book Say No! to the New World Order, articulated the anti-globalist theme of much current right-wing conspiracism in the U.S. Thus, during the 1990s, the main demonized scapegoat of the American far right, represented by the John Birch Society and the Liberty Lobby, shifted seamlessly from crypto-communists who plotted on behalf of the Red Menace to globalists who plot on behalf of the New World Order. The relatively painless nature of the shift was due to growing right-wing opposition to the globalization of capitalism, but also in part to the basic underlying apocalyptic millenarian paradigm, which fed the Cold War and the witch-hunts of the McCarthy period.

In his 11 September 1990 "Toward a New World Order" speech to a joint session of the U.S. Congress, President George H. W. Bush described his ideals for post-Cold-War global governance in cooperation with post-Soviet states.

Chip Berlet, an investigative reporter specializing in the study of right-wing movements in the U.S., writes:

When President Bush announced his new foreign policy would help build a New World Order, his phrasing surged through the Christian and secular hard right like an electric shock, since the phrase had been used to represent the dreaded collectivist One World Government for decades. Some Christians saw Bush as signaling the End Times betrayal by a world leader. Secular anticommunists saw a bold attempt to smash US sovereignty and impose a tyrannical collectivist system run by the United Nations.

American televangelist Pat Robertson with his 1991 best-selling book The New World Order became the most prominent Christian popularizer of conspiracy theories about recent American history as a theater in which Wall Street, the Federal Reserve System, Council on Foreign Relations, Bilderberg Group, and Trilateral Commission control the flow of events from behind the scenes, nudging us constantly and covertly in the direction of world government for the Antichrist.

Observers note that the galvanization of right-wing populist conspiracy theorists, such as Linda Thompson, Mark Koernke and Robert K. Spear, into militancy led to the rise of the anti-government militia movement, and their use of viral propaganda on the Internet contributed to their extremist political ideas about the New World Order finding their way into the far left literature of some black nationalists, but also the previously apolitical literature of many Kennedy assassinologists, ufologists, lost land theorists, and, most recently, occultists. The wide appeal of these subcultures then transmitted New World Order conspiracism like a mind virus to a large new audience of seekers of counterknowledge from the mid-1990s on.

After the turn of the century, specifically during the global financial crisis of 2008–2009, many politicians and pundits, such as Gordon Brown, Henry Kissinger, and Barack Obama, used the term "new world order" in their advocacy for a Keynesian reform of the global financial system and their calls for a "New Bretton Woods". These declarations had the unintended consequence of providing fresh fodder for New World Order conspiracy theorists, and culminated in former Clinton administration adviser Dick Morris and conservative talk show host Sean Hannity arguing, on one of Hannity's Fox News Channel programs, that conspiracy theorists were right. Fox News has been repeatedly criticized by progressive media watchdog groups for not only mainstreaming the conspiracist rhetoric of the radical right but possibly agitating its lone wolves into action.

Conspiracy theories

Freemasonry

Anti-Masonic conspiracy theorists believe that high-ranking Freemasons are involved in conspiracies to create an occult New World Order. They claim that some of the Founding Fathers of the United States, such as George Washington and Benjamin Franklin, had Masonic symbolism and sacred geometry interwoven into American society, particularly in the Great Seal of the United States, the United States one-dollar bill, the architecture of National Mall landmarks, and the streets and highways of Washington, D.C. They speculate that Freemasons did this in order to mystically bind their planning of a government in conformity with the luciferian plan of the Great Architect of the Universe who, they are said to believe, has tasked the United States with the eventual establishment of a hermetic "Kingdom of God on Earth" and the building of the Third Temple in New Jerusalem as its holiest site.

Freemasons rebut these claims of Masonic conspiracy. They assert that Freemasonry, which promotes natural theology through esotericism, places no power in occult symbols themselves. It is not a part of Freemasonry to view the drawing of symbols, no matter how large, as an act of consolidating or controlling power. Furthermore, there is no published information establishing the Masonic membership of the men responsible for the design of the Great Seal or the street plan of Washington, D.C. The Latin phrase "novus ordo seclorum", appearing on the reverse side of the Great Seal since 1782 and on the back of the one-dollar bill since 1935, means "New Order of the Ages" and only alludes to the beginning of an era where the United States is an independent nation-state, but is often improperly translated by conspiracy theorists as "New World Order" or "New Secular Order".[25] Lastly, Freemasons argue that, despite the symbolic importance of the Temple of Solomon in their mythology, they have no interest in rebuilding it, especially since it is obvious that any attempt to interfere with the present condition of things [on the Temple Mount] would in all probability bring about the greatest religious war the world has ever known.

More broadly, Freemasons assert that a long-standing rule within regular Freemasonry is a prohibition on the discussion of politics in a Masonic Lodge and the participation of lodges or Masonic bodies in political pursuits. Freemasonry has no politics, but it teaches its members to be of high moral character and active citizens. The accusation that Freemasonry has a hidden agenda to establish a Masonic government ignores several facts. While agreeing on certain Masonic Landmarks, the many independent and sovereign Grand Lodges act as such, and do not agree on many other points of belief and practice.

Also, as can be seen from a survey of famous Freemasons, individual Freemasons hold beliefs that span the spectrum of politics. The term Masonic government has no meaning since individual Freemasons hold many different opinions on what constitutes a good government, and Freemasonry as a body has no opinion on the topic. Ultimately, Freemasons argue that even if it were proven that influential individuals have used and are using Masonic Lodges to engage in crypto-politics, such as was the case with the illegal Italian Lodge Propaganda Due, this would represent a cooptation of Freemasonry rather than evidence of its hidden agenda.

Illuminati

The Order of the Illuminati was an Enlightenment-age secret society founded on 1 May 1776, in Ingolstadt (Upper Bavaria), by Adam Weishaupt, who was the first lay professor of canon law at the University of Ingolstadt. The movement consisted of freethinkers, secularists, liberals, republicans and pro-feminists, recruited in the Masonic Lodges of Germany, who sought to promote perfectionism through mystery schools. In 1785, the order was infiltrated, broken and suppressed by the Bavarian government for allegedly plotting to overthrow all the monarchies and state religions of Europe.

In the late 18th century, reactionary conspiracy theorists, such as Scottish physicist John Robison and French Jesuit priest Augustin Barruel, began speculating that the Illuminati survived their suppression and became the masterminds behind the French Revolution and the Reign of Terror. The Illuminati were accused of being enlightened absolutists who were attempting to secretly orchestrate a world revolution in order to globalize the most radical ideals of the Enlightenment: anti-clericalism, anti-monarchism, and anti-patriarchalism. During the 19th century, fear of an Illuminati conspiracy was a real concern of European ruling classes, and their oppressive reactions to this unfounded fear provoked in 1848 the very revolutions they sought to prevent.

During the interwar period of the 20th century, fascist propagandists, such as British revisionist historian Nesta Helen Webster and American socialite Edith Starr Miller, not only popularized the myth of an Illuminati conspiracy but claimed that it was a subversive secret society which serves the Jewish elites that supposedly propped up both finance capitalism and Soviet communism in order to divide and rule the world. American evangelist Gerald Burton Winrod and other conspiracy theorists within the Christian fundamentalist movement in the United States, which emerged in the early 20th century as a backlash against the principles of the Enlightenment, modernism, and liberalism, became the main channel of dissemination of Illuminati conspiracy theories in America. Right-wing populists subsequently began speculating that some collegiate fraternities, gentlemens clubs and think tanks of the American upper class are front organizations of the Illuminati, which they accuse of plotting to create a New World Order through a one-world government.

Protocols of the Elders of Zion

The Protocols of the Elders of Zion is an antisemitic canard, published in 1903, but first translated into English in 1919 or 1920, alleging a Judaeo-Masonic conspiracy to achieve world domination. It propagandized the idea that a cabal of Jewish masterminds, which has coopted Freemasonry, is plotting to rule the world on behalf of all Jews because they believe themselves to be the chosen people of God. The Protocols has been proven by scholars, such as Irish journalist Philip Graves in a 1921 The Times article, and British academic Norman Cohn in his 1967 book Warrant for Genocide, to be both a hoax and a clear case of plagiarism. There is general agreement that the Okhrana, the secret police of the Russian Empire, fabricated the text in the late 1890s or early 1900s by plagiarizing it, almost word for word in some passages, from The Dialogue in Hell Between Machiavelli and Montesquieu, a 19th century satire against Napoleon III of France originally written by Maurice Joly, a French lawyer and Legitimist militant.

Partly responsible for feeding many antisemitic and anti-Masonic hysterias of the 20th century, The Protocols is widely considered to be influential in the development of conspiracy theories related to a New World Order (such as the notion of a Zionist Occupation Government), and reappears repeatedly in contemporary conspiracy literature. For example, the authors of the 1982 controversial book The Holy Blood and the Holy Grail concluded that The Protocols was the most persuasive piece of evidence for the existence and activities of the Priory of Sion. They speculated that this secret society was working behind the scenes to establish a theocratic United States of Europe (politically and religiously unified through the imperial cult of a Merovingian sacred king from the Jesus bloodline, who occupies both the throne of Europe and the Holy See) which would become the hyperpower of the 21st century. Although the Priory of Sion, itself, has been exhaustively debunked by journalists and scholars as a hoax, fringe Christian eschatologists concerned with the emergence of a New World Order became convinced that the Priory of Sion was a fulfillment of prophecies found in the Book of Revelation and further proof of an anti-Christian conspiracy of epic proportions.

Skeptics argue that the current gambit of contemporary conspiracy theorists who use The Protocols is to claim that it really comes from some group other than the Jews, such as the Illuminati or alien invaders. Although it is hard to determine whether the conspiracy-minded actually believe this or are simply trying to sanitize a discredited text, skeptics argue that it doesn't make much difference, since they leave the actual, antisemitic text unchanged. The result is to give The Protocols credibility and circulation when it deserves neither.

Round Table

British businessman Cecil Rhodes advocated the British Empire reannexing the United States of America and reforming itself into an Imperial Federation to bring about a hyperpower and lasting world peace. In his first will, of 1877, written at the age of 23, he expressed his wish to fund a secret society (known as the Society of the Elect) that would advance this goal:

To and for the establishment, promotion and development of a Secret Society, the true aim and object whereof shall be for the extension of British rule throughout the world, the perfecting of a system of emigration from the United Kingdom, and of colonisation by British subjects of all lands where the means of livelihood are attainable by energy, labour and enterprise, and especially the occupation by British settlers of the entire Continent of Africa, the Holy Land, the Valley of the Euphrates, the Islands of Cyprus and Candia, the whole of South America, the Islands of the Pacific not heretofore possessed by Great Britain, the whole of the Malay Archipelago, the seaboard of China and Japan, the ultimate recovery of the United States of America as an integral part of the British Empire, the inauguration of a system of Colonial representation in the Imperial Parliament which may tend to weld together the disjointed members of the Empire and, finally, the foundation of so great a Power as to render wars impossible, and promote the best interests of humanity.

In his later wills, a more mature Rhodes abandoned the idea and instead concentrated on what became the Rhodes Scholarship, which had British statesman Alfred Milner as one of its trustees. The trust fund, established in 1902, was originally intended to foster peace among the great powers by creating a sense of fraternity and a shared world view among future British, American, and German leaders by enabling them to study for free at the University of Oxford.

Milner and British official Lionel George Curtis were the architects of the Round Table movement, a network of organizations promoting closer union between Britain and its self-governing colonies. To this end, Curtis founded the Royal Institute of International Affairs in June 1919 and wrote the 1938 book The Commonwealth of God, in which he advocated the creation of an imperial federation that would eventually reannex the U.S. and that would be presented to Protestant churches as the work of the Christian God in order to elicit their support. The Commonwealth of Nations was created in 1949, but it would be only a free association of independent states rather than the powerful imperial federation imagined by Rhodes, Milner and Curtis.

The Council on Foreign Relations began in 1917 with a group of New York academics who were asked by President Woodrow Wilson to offer options for the foreign policy of the United States in the interwar period. Originally envisioned as a British-American group of scholars and diplomats, some of whom belonged to the Round Table movement, it was a subsequent group of 108 New York financiers, manufacturers and international lawyers organized in June 1918 by Nobel Peace Prize recipient and U.S. secretary of state Elihu Root that became the Council on Foreign Relations on 29 July 1921. The first of the council's projects was a quarterly journal launched in September 1922, called Foreign Affairs.

Conspiracy theorists believe that the Council on Foreign Relations is a front organization for the Round Table as a tool of the Anglo-American Establishment, which they believe has been plotting from 1900 on to rule the world. The research findings of historian Carroll Quigley, author of the 1966 book Tragedy and Hope, are taken by both conspiracy theorists of the American Old Right (Cleon Skousen) and New Left (Carl Oglesby) to substantiate this view, even though he argued that the Establishment is not involved in a plot to implement a one-world government but rather British and American benevolent imperialism driven by the mutual interests of economic elites in the United Kingdom and the United States. Quigley also argued that, although the Round Table still exists today, its position in influencing the policies of world leaders has been much reduced from its heyday during World War I and slowly waned after the end of World War II and the Suez Crisis. Today it is largely a ginger group, designed to consider and gradually influence the policies of the Commonwealth of Nations, but faces strong opposition. Furthermore, in American society after 1965, the problem, according to Quigley, was that no elite was in charge and acting responsibly.

American banker David Rockefeller joined the Council on Foreign Relations as its youngest-ever director in 1949 and subsequently became chairman of the board from 1970 to 1985; today he serves as honorary chairman. In 2002, Rockefeller authored his autobiography Memoirs wherein, on page 405, he wrote:

For more than a century ideological extremists at either end of the political spectrum have seized upon well-publicized incidents to attack the Rockefeller family for the inordinate influence they claim we wield over American political and economic institutions. Some even believe we are part of a secret cabal working against the best interests of the United States, characterizing my family and me as internationalists and of conspiring with others around the world to build a more integrated global political and economic structure – one world, if you will. If that's the charge, I stand guilty, and I am proud of it.

Although this statement should be interpreted as being partially sarcastic, it is taken at face value and widely cited by conspiracy theorists as proof that the Council on Foreign Relations (itself alleged to be a front for an international banking cabal, as well as, it is claimed, the sponsor of many globalist think tanks such as the Trilateral Commission) uses its role as the brain trust of American presidents, senators and representatives to manipulate them into supporting a New World Order. Conspiracy theorists fear that the international bankers of financial capitalism are planning to eventually subvert the independence of the U.S. by subordinating national sovereignty to a strengthened Bank for International Settlements with the intent to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole.

Some American social critics, such as Laurence H. Shoup, argue that the Council on Foreign Relations is an imperial brain trust, which has, for decades, played a central behind-the-scenes role in shaping U.S. foreign policy choices for the post-WWII international order and the Cold War, by determining what options show up on the agenda and what options do not even make it to the table;[40] while others, such as G. William Domhoff, argue that it is in fact a mere policy discussion forum, which provides the business input to U.S. foreign policy planning. The latter argue that it has nearly 3,000 members, far too many for secret plans to be kept within the group; all the council does is sponsor discussion groups, debates and speakers; and as far as being secretive, it issues annual reports and allows access to its historical archives. However, all these critics agree that historical studies of the council show that it has a very different role in the overall power structure than what is claimed by conspiracy theorists.

Open Conspiracy

In his 1928 book The Open Conspiracy British writer H. G. Wells called for the intelligentsia of all nation-states to organize for the establishment of a global federation of strengthened and democratized global institutions, with plenary constitutional power accountable to global citizens and a division of international authority among separate global agencies, in order to build a world social democracy.

Wells warned, however, in his 1940 book The New World Order that:

... when the struggle seems to be drifting definitely towards a world social democracy, there may still be very great delays and disappointments before it becomes an efficient and beneficent world system. Countless people ... will hate the new world order ... and will die protesting against it. When we attempt to evaluate its promise, we [must] bear in mind the distress of a generation or so of malcontents, many of them quite gallant and graceful-looking people.

Wells's book was extremely influential in associating the notion of a socialist world state and government with the term New World Order in the minds of both supporters and opponents for generations to come. But the dissolution of the Soviet Union in 1991 led to a period of triumphalism by capitalists worldwide, the elimination of the only obstacle to the spread of a neoliberal form of globalization, and a shattering of the confidence of those who hoped that a proletarian revolution would return the Soviet Union (which had become a degenerated workers' state) to socialism and transform it into one of the building blocks of the new world order envisioned by Wells. Right-wing conspiracy theorists, however, simply changed their focus from the Soviet Union to the United Nations as the bureaucratic collectivist menace.

New Age

British neo-Theosophical occultist Alice Bailey, one of the founders of the so-called New Age movement, prophesied in 1940 the eventual victory of the Allies of World War II over the Axis powers (which occurred in 1945) and the establishment by the Allies of a political and religious New World Order. She saw a federal world government as the culmination of Wells's Open Conspiracy but argued that it would be synarchist because it was guided by ascended masters, intent on preparing humanity for the mystical second coming of Christ, and the dawn of the Age of Aquarius. According to Bailey, a group of ascended masters called the Great White Brotherhood works on the inner planes to oversee the transition to the New World Order. For now, the members of this Spiritual Hierarchy are known only to a few occult scientists, with whom they communicate telepathically; but as the need for their personal involvement in the plan increases, there will be an Externalization of the Hierarchy, and everyone will know of their presence on Earth.

In 1997, Hasidic rabbi Yonassan Gershom, in an article titled Anti-Semitic Stereotypes in Alice Bailey's Writings, pointed out that Bailey's Plan for the New World Order, marked by extravagant fantasy, called for the gradual dissolution – again if in any way possible – of the Orthodox Jewish faith, which, he said, indicated that her goal is nothing less than the destruction of Judaism itself. This fact is notable since many conspiracy theories tend to portray Jews as the plotters behind the New World Order rather than one of the groups the plotters want to repress in order to create it.

Bailey's writings, along with American writer Marilyn Ferguson's 1980 book The Aquarian Conspiracy, contributed to conspiracy theorists of the Christian right viewing the New Age movement as the false religion that would supersede Christianity in a New World Order.[46] Some conspiracy theorists have adopted 21 December 2012 as the exact date for the establishment of the New World Order because of the growing 2012 phenomenon, which has its origins in the fringe Mayanist theories of New Age writers José Argüelles, Terence McKenna, and Daniel Pinchbeck.

Skeptics argue that the term New Age movement is a misnomer, generally used by conspiracy theorists as a catch-all rubric for any new religious, spiritual or philosophical belief, symbol and practice that is not fundamentalist Christian. By their lights, anything that is not Christian is by definition actively and willfully anti-Christian. The implication is that these independent and sometimes contradictory schools of thought are all part of a monolithic whole. This is logically and empirically false, and rationally simplistic.

End Time

Millenarian Christian theologians and laymen, such as American televangelist Pat Robertson with his 1991 book The New World Order, see a globalist conspiracy as the fulfillment of prophecies about the end time in the Bible, specifically in the Book of Ezekiel, the Book of Daniel, the Olivet discourse found in the Synoptic Gospels, and the Book of Revelation. They assert that human and demonic agents of the Devil are involved in a primordial conspiracy to deceive humanity into accepting a satanic world theocracy that has the Unholy Trinity – Satan, the Antichrist and the False Prophet – at the core of an imperial cult. In many theories, the False Prophet will either be the last pope of the Catholic Church (groomed and installed by an Alta Vendita or Jesuit conspiracy) or a charismatic leader in the New Age movement, while the Antichrist will either be the president of the European Union or the secretary-general of the United Nations or even a virtual actor serving as the figurehead for a supercomputer.

Preterist Christian skeptics of the end-time conspiracism argue that some or all of the biblical prophecies concerning the end time refer literally or metaphorically to events which already happened in the first century after Jesus' birth. In their view, the end time concept refers to the end of the covenant between God and Israel, rather than the end of time, or the end of planet Earth. They argue that prophecies about the Rapture, the defiling of the Temple, the destruction of Jerusalem, the Antichrist, the Number of the Beast, the Tribulation, the Second Coming, and the Last Judgment were fulfilled at or about the year 70 when the Roman general (and future Emperor) Titus sacked Jerusalem and destroyed the Second Temple in Jerusalem, putting a permanent stop to the daily animal sacrifices.

According to such skeptics, many passages in the New Testament indicate with apparent certainty that the second coming of Christ, and the end time predicted in the Bible were to take place within the lifetimes of Jesus' disciples rather than millennia later: Matt. 10:23, Matt. 16:28, Matt. 24:34, Matt. 26:64, Rom. 13:11-12, 1 Cor. 7:29-31, 1 Cor. 10:11, Phil. 4:5, James 5:8-9, 1 Pet. 4:7, 1 Jn. 2:18.[48]

Fourth Reich

Anti-Nazi conspiracy theorists, such as American writer Jim Marrs, argue that some ex-Nazis, who were surviving members of Germany's Third Reich, along with sympathizers in the United States and elsewhere, given safe haven by organizations like ODESSA and Die Spinne, have been working behind the scenes since the end of World War II to enact at least some of the principles of Nazism (e.g. military-industrial complex, imperialism, widespread spying on citizens, use of corporations and propaganda to control national interests and ideas) into culture, government, and business worldwide, but primarily in the U.S. They cite the influence of ex-Nazi scientists brought in under Operation Paperclip to help advance aerospace manufacturing in the U.S., and the acquisition and creation of conglomerates by ex-Nazis and their sympathizers after the war, in both Europe and the U.S.

This neo-Nazi conspiracy is said to be animated by an Iron Dream in which the U.S. gradually establishes the Fourth Reich, known as the Western Imperium, a pan-Aryan New World Order modeled after Adolf Hitler's New Order, to ensure the West wins the hypothetical Clash of Civilizations.

Skeptics argue that conspiracy theorists grossly overestimate the influence of ex-Nazis and neo-Nazis on American society, and point out that American imperialism, corporatocracy and political repression have a long history that predates World War II. Some political scientists, such as Sheldon Wolin, have expressed concern that the twin forces of democratic deficit and superpower status have paved the way in the U.S. for the emergence of an inverted totalitarianism which contradicts many principles of Nazism.

Alien Invasion

Since the late 1970s, extraterrestrials from other habitable planets or parallel dimensions (such as Greys) and intraterrestrials from Hollow Earth (such as Reptilians) have been included in the New World Order conspiracy, in more or less dominant roles, as in the theories put forward by American writers Stan Deyo and Milton William Cooper, and British writer David Icke.

The common theme in such conspiracy theories is that aliens have been among us for decades, centuries or millennia, but a government cover-up has protected the public from knowledge of ancient astronauts and an alien invasion. Motivated by speciesism, these aliens have been and are secretly manipulating developments and changes in human society in order to more efficiently control and exploit it. In some theories, alien infiltrators have taken human form and move freely throughout human society, even to the point of taking control of command positions in governmental, corporate, and religious institutions, and are now in the final stages of their plan to take over the world. A mythical covert government agency of the United States code-named Majestic 12 is often cited by conspiracy theorists as being the shadow government which collaborates with the alien occupation, in exchange for assistance in the development and testing of military flying saucers at Area 51, in order for U.S. armed forces to achieve full-spectrum dominance.

Skeptics, who adhere to the psychosocial hypothesis for unidentified flying objects, argue that the convergence of New World Order conspiracy theory and UFO conspiracy theory is a product not only of the era's widespread mistrust of governments and the popularity of the extraterrestrial hypothesis for UFOs but of the far right and ufologists actually joining forces. Barkun notes that the only positive side to this development is that, if conspirators plotting to rule the world are believed to be aliens, traditional human scapegoats are exonerated.

Brave New World

Antiscience and neo-Luddite conspiracy theorists emphasize technology forecasting in their New World Order conspiracy theories. They speculate that the global power elite are modern Luciferians pursuing a transhumanist agenda to develop and use human enhancement technologies in order to become a posthuman ruling caste, while change accelerates toward a technological singularity – a theorized future point of discontinuity when events will accelerate at such a pace that normal unenhanced humans will be unable to predict or even understand the rapid changes occurring in the world around them. Conspiracy theorists fear the outcome will either be the emergence of a Brave New World-like dystopia – a Brave New World Order – or the extinction of the human species.

Advocates of transhumanism and singularitarianism, such as American sociologist James Hughes, counter that many influential members of the American Establishment are bioconservative and therefore anti-transhumanist, as demonstrated by President Bush's Council on Bioethics proposing an international treaty prohibiting human cloning and germline engineering. Regardless, transhumanists and singularitarians claim to only support developing and making publicly available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities for the common good; as well as taking deliberate action to ensure that the Singularity – the moment when technological progress starts being driven by superintelligence – occurs in a way that is beneficial to humankind.

Just as there are several overlapping or conflicting theories among conspiracists about the nature of the New World Order, so are there several beliefs about how its architects and planners will implement it:

Gradualism

Conspiracy theorists generally speculate that the New World Order is being implemented gradually, citing the formation of the U.S. Federal Reserve System in 1913; the International Monetary Fund in 1944; the United Nations in 1945; the World Bank in 1945; the World Health Organization in 1948; the European Union in 1993; the World Trade Organization in 1995; the euro currency in 1999; and the African Union in 2002 as major milestones.

An increasingly popular conspiracy theory among American paleoconservatives is that the hypothetical North American Union and the amero currency, proposed by the Council on Foreign Relations and its counterparts in Mexico and Canada, will be the next implementation of the New World Order. The theory holds that a group of shadowy and mostly nameless international elites are planning to replace the federal government of the United States with a transnational government. Therefore, conspiracy theorists believe the borders between Mexico, Canada and the United States are in the process of being erased, covertly, by a group of globalists whose ultimate goal is to replace national governments in Washington, D.C., Ottawa and Mexico City with a European-style political union and a bloated E.U.-style bureaucracy.

Skeptics argue that the North American Union exists only as a proposal contained in one of a thousand academic and/or policy papers published each year that advocate all manner of idealistic but ultimately unrealistic approaches to social, economic and political problems. Most of these get passed around in their own circles and eventually filed away and forgotten by junior staffers in congressional offices. Some of these papers, however, become touchstones for the conspiracy-minded and form the basis of all kinds of unfounded xenophobic fears especially during times of economic anxiety.

In March 2009, as a result of the global financial crisis of 2008–2009, the People's Republic of China and the Russian Federation pressed for urgent consideration of a super-sovereign reserve currency, and a U.N. panel proposed greatly expanding the I.M.F.'s Special Drawing Rights. Conspiracy theorists have misinterpreted these proposals as vindication of their beliefs about a global currency for the New World Order.

Judging that both national governments and global institutions have proven ineffective in addressing worldwide problems that go beyond the capacity of individual nation-states to solve, some political scientists, such as Mark C. Partridge, argue that regionalism will be the major force in the coming decades, with pockets of power around regional centers: Western Europe around Brussels, the Western Hemisphere around Washington, D.C., East Asia around Beijing, and Eastern Europe around Moscow. As such, the E.U., the Shanghai Cooperation Organisation, and the G-20 will likely become more influential as time progresses. The question then is not whether global governance is gradually emerging, but rather how these regional powers will interact with one another.

Coup d'état and martial law

American right-wing conspiracy theorists, especially those who joined the militia movement in the United States, speculate that the New World Order will be implemented by martial law after a dramatic coup d'état by a secret team, using black helicopters, in the U.S. and other nation-states to bring about a world government controlled by the United Nations and enforced by troops of foreign U.N. peacekeepers.

Before the year 2000, some survivalists wrongly believed this process would be set in motion by the predicted Y2K problem causing societal collapse. Since many conspiracy theorists believe that the September 11 attacks were a false flag operation carried out by the United States intelligence community, as part of a strategy of tension to justify political repression at home and preemptive war abroad, some of them have become convinced that a more catastrophic terrorist incident will be responsible for triggering the process completing the transition to a police state.

These conspiracy theorists, who are all strong believers in a right to keep and bear arms, are extremely fearful that the passing of any gun control legislation will later be followed by the abolition of personal gun ownership, and that the refugee camps of emergency management agencies such as F.E.M.A. will be used for the internment of suspected subversives, with little effort made to distinguish true threats to the New World Order from ideological dissidents.

Skeptics argue that unfounded fears about an imminent or eventual gun ban, military coup, internment, or U.N. invasion and occupation are rooted in an extremist form of constitutionalism but also an apocalyptic millennialism which provides a basic narrative within the American political right, claiming that the idealized society (i.e. Christian nation, constitutional republic of sovereign citizens) is thwarted by subversive conspiracies of liberal secular humanists who want Big Government and globalists who plot on behalf of the New World Order.

Mass surveillance

Conspiracy theorists concerned about surveillance abuse believe that the New World Order is being implemented by the cult of intelligence at the core of the surveillance-industrial complex through mass surveillance and the use of Social Security numbers, the bar-coding of retail goods with Universal Product Code markings, and, most recently, R.F.I.D. tagging via microchip implants.

Original seal of the now-defunct DARPA Information Awareness Office.

Some consumer privacy advocates, such as Katherine Albrecht and Liz McIntyre, who warn that corporations and government supposedly plan to track every move of consumers and citizens with R.F.I.D. as the latest step toward a 1984-like surveillance state, have become Christian conspiracy theorists who associate spychips with the Number of the Beast mentioned in the Book of Revelation.

Boston University professor Richard Landes, who specializes in the history of apocalypticism and was co-founder and director of the Center for Millennial Studies at B.U., argues that new and emerging technologies often trigger alarmism among millenarians, and even the introduction of Gutenberg's printing press in 1436 caused waves of apocalyptic thinking. The Y2K problem, bar codes and Social Security numbers all triggered end-time warnings which either proved to be false or simply were no longer taken seriously once the public became accustomed to these technologies. Skeptics argue that the privatization of surveillance and the rise of the surveillance-industrial complex in the United States do raise legitimate concerns about the erosion of privacy, but such concerns should be disentangled from secular paranoia about Big Brother or religious hysteria about the Antichrist.

The Information Awareness Office (IAO) was established by the Defense Advanced Research Projects Agency (DARPA) in January 2002 to bring together several DARPA projects focused on applying information technology to counter asymmetric threats to national security. Following public criticism that the development and deployment of these technologies could potentially lead to a mass surveillance system, the IAO was defunded by the United States Congress in 2003.[65][66] The second source of controversy involved the IAO's original logo, which depicted the all-seeing Eye of Providence atop a pyramid looking down over the globe, accompanied by the Latin phrase scientia est potentia (knowledge is power). Although DARPA eventually removed the logo from its website, it left a lasting impression on privacy advocates. It also inflamed conspiracy theorists, who misinterpret the eye and pyramid as the Masonic symbol of the Illuminati, an 18th-century secret society they speculate continues to exist and is plotting on behalf of a New World Order.

Occultism

Conspiracy theorists of the Christian right believe there is an occult conspiracy, started by the first mystagogues of Gnosticism and perpetuated by their alleged esoteric successors, such as the Kabbalists, Cathars, Knights Templar, Rosicrucians, Freemasons, and, ultimately, the Illuminati, which seeks to subvert the Judeo-Christian foundations of the Western world and implement the New World Order through a New Age one-world religion that prepares the world to embrace the imperial cult of the Antichrist. More broadly, they speculate that conspirators who plot on behalf of a New World Order are directed by occult agencies of some sort: unknown superiors, spiritual hierarchies, demons, fallen angels or Lucifer. They believe that, like Nazi occultists, these conspirators use the power of occult sciences (numerology), symbols (Eye of Providence), rituals (Masonic degrees), monuments (National Mall), buildings (Manitoba Legislative Building) and facilities (Denver International Airport) to advance their plot to rule the world.

For example, in June 1979, an unknown benefactor under the pseudonym R. C. Christian had a huge granite megalith built in the U.S. state of Georgia, which acts like a compass, calendar, and clock. A message comprising ten guides is inscribed on the occult structure in many languages to serve as instructions for survivors of a doomsday event to establish a more enlightened and sustainable civilization than the one which was destroyed. The Georgia Guidestones have subsequently become a spiritual and political Rorschach test onto which any number of ideas can be imposed. Some New Agers and neo-pagans revere them as a ley-line power nexus, while a few conspiracy theorists are convinced that they are engraved with the New World Order's anti-Christian Ten Commandments. Should the Guidestones survive for centuries as their creators intended, many more meanings could arise, equally unrelated to the designers' original intention.

Skeptics argue that the demonization of Western occultism by conspiracy theorists is rooted in religious intolerance but also in the same moral panics that have fueled witch trials in Early Modern Europe, and satanic ritual abuse allegations in the United States.

Population control

Conspiracy theorists believe that the New World Order will also be implemented through the use of population control in order to more easily monitor and control the movement of individuals. The means range from stopping the growth of human societies through reproductive health and family planning programs, which condone abortion and liberal eugenics, to intentionally reducing the bulk of the world's population through genocide by fomenting unnecessary wars, mass sterilization by tainting vaccines, and environmental disasters caused by controlling the weather (HAARP, chemtrails), etc. The Codex Alimentarius, a collection of internationally recognized standards, codes of practice, guidelines and other recommendations relating to foods, food production and food safety, has also become the subject of conspiracy theories about population control.

Skeptics argue that fears of totalitarian population control can be traced back to the Red Scare in the United States during the late 1940s and 1950s, and to a lesser extent in the 1960s, when activists on the far right of American politics routinely opposed public health programs, notably water fluoridation, mass vaccination and mental health services, by asserting they were all part of a far-reaching plot to impose a socialist or communist regime. Their views were influenced by opposition to a number of major social and political changes that had happened in recent years: the growth of internationalism, particularly the United Nations and its programs; the introduction of social welfare provisions, particularly the various programs established by the New Deal; and government efforts to reduce perceived inequalities in the social structure of the United States.

Mind control

Conspiracy theorists accuse governments, corporations, and the mass media of being involved in the manufacturing of a national consensus and, paradoxically, a culture of fear, owing to the potential for increased social control that a mistrustful and mutually fearing population might offer to those in power. The worst fear of some conspiracy theorists is that conspirators are using mind control (a broad range of tactics able to subvert an individual's control of his or her own thinking, behavior, emotions, or decisions) to implement the New World Order. These tactics are said to include everything from Manchurian candidate-style brainwashing of sleeper agents (Project MKULTRA, Project Monarch) to the engineering of psychological operations (water fluoridation, subliminal advertising, Silent Sound Spread Spectrum, MEDUSA) and parapsychological operations (Stargate Project) to influence the masses. The concept of wearing a tin foil hat for protection from such threats has become a popular stereotype and term of derision; the phrase serves as a byword for paranoia and is associated with conspiracy theorists.

Skeptics argue that the paranoia behind a conspiracy theorist's obsession with mind control, population control, occultism, surveillance abuse, Big Business, Big Government, and globalization arises from a combination of two factors: he or she 1) holds strong individualist values and 2) lacks a sense of power. The first attribute refers to people who care deeply about an individual's right to make their own choices and direct their own lives without interference or obligations to a larger system (like the government). Combine this with a sense of powerlessness in one's own life, however, and one gets what some psychologists call "agency panic": intense anxiety about an apparent loss of autonomy to outside forces or regulators. When fervent individualists feel that they cannot exercise their independence, they experience a crisis and assume that larger forces are to blame for usurping this freedom.


True AI is both logically possible and utterly implausible …

Suppose you enter a dark room in an unknown building. You might panic about monsters that could be lurking in the dark. Or you could just turn on the light, to avoid bumping into furniture. The dark room is the future of artificial intelligence (AI). Unfortunately, many people believe that, as we step into the room, we might run into some evil, ultra-intelligent machines. This is an old fear. It dates to the 1960s, when Irving John Good, a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing, made the following observation:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

Once ultraintelligent machines become a reality, the worry goes, they might not be docile at all, but behave like the Terminator: enslave humanity as a sub-species, ignore its rights, and pursue their own ends, regardless of the effects on human lives.

If this sounds incredible, you might wish to reconsider. Fast-forward half a century to now, and the amazing developments in our digital technologies have led many people to believe that Good's "intelligence explosion" is a serious risk, and the end of our species might be near, if we're not careful. This is Stephen Hawking in 2014: "The development of full artificial intelligence could spell the end of the human race."

Last year, Bill Gates was of the same view, declaring himself "in the camp that is concerned about super intelligence".

And Elon Musk, Tesla's CEO, had warned that with artificial intelligence "we are summoning the demon".

The reality is more trivial. This March, Microsoft introduced Tay, an AI-based chat robot, to Twitter. It had to be removed only 16 hours later. Tay was supposed to become increasingly smarter as it interacted with humans. Instead, it quickly became an evil Hitler-loving, Holocaust-denying, incestual-sex-promoting, "Bush did 9/11"-proclaiming chatterbox. Why? Because it worked no better than kitchen paper, absorbing and being shaped by the nasty messages sent to it. Microsoft apologised.

This is the state of AI today. After so much talking about the risks of ultraintelligent machines, it is time to turn on the light, stop worrying about sci-fi scenarios, and start focusing on AI's actual challenges, in order to avoid making painful and costly mistakes in the design and use of our smart technologies.

Let me be more specific. Philosophy doesn't do nuances well. It might fancy itself a model of precision and finely honed distinctions, but what it really loves are polarisations and dichotomies. Internalism or externalism, foundationalism or coherentism, trolley left or right, zombies or not zombies, observer-relative or observer-independent, possible or impossible worlds, grounded or ungrounded... Philosophy might preach the inclusive vel (girls or boys may play) but too often indulges in the exclusive aut aut (either you like it or you don't).

The current debate about AI is a case in point. Here, the dichotomy is between those who believe in true AI and those who do not. Yes, the real thing, not Siri in your iPhone, Roomba in your living room, or Nest in your kitchen (I am the happy owner of all three). Think instead of the false Maria in Metropolis (1927); HAL 9000 in 2001: A Space Odyssey (1968), on which Good was one of the consultants; C-3PO in Star Wars (1977); Rachael in Blade Runner (1982); Data in Star Trek: The Next Generation (1987); Agent Smith in The Matrix (1999); or the disembodied Samantha in Her (2013). You've got the picture. Believers in true AI and in Good's "intelligence explosion" belong to the Church of Singularitarians. For lack of a better term, I shall refer to the disbelievers as members of the Church of AItheists. Let's have a look at both faiths and see why both are mistaken. And meanwhile, remember: good philosophy is almost always in the boring middle.

Singularitarians believe in three dogmas. First, that the creation of some form of artificial ultraintelligence is likely in the foreseeable future. This turning point is known as a technological singularity, hence the name. Both the nature of such a superintelligence and the exact timeframe of its arrival are left unspecified, although Singularitarians tend to prefer futures that are conveniently close-enough-to-worry-about but far-enough-not-to-be-around-to-be-proved-wrong.

Second, humanity runs a major risk of being dominated by such ultraintelligence. Third, a primary responsibility of the current generation is to ensure that the Singularity either does not happen or, if it does, that it is benign and will benefit humanity. This has all the elements of a Manichean view of the world: Good fighting Evil, apocalyptic overtones, the urgency of "we must do something now or it will be too late", an eschatological perspective of human salvation, and an appeal to fears and ignorance.

Put all this in a context where people are rightly worried about the impact of idiotic digital technologies on their lives, especially in the job market and in cyberwars, and where mass media daily report new gizmos and unprecedented computer-driven disasters, and you have a recipe for mass distraction: a digital opiate for the masses.

Like all faith-based views, Singularitarianism is irrefutable because, in the end, it is unconstrained by reason and evidence. It is also implausible, since there is no reason to believe that anything resembling intelligent (let alone ultraintelligent) machines will emerge from our current and foreseeable understanding of computer science and digital technologies. Let me explain.

Sometimes, Singularitarianism is presented conditionally. This is shrewd, because the then does follow from the if, and not merely in an ex falso quodlibet sense: if some kind of ultraintelligence were to appear, then we would be in deep trouble (not merely could, as stated above by Hawking). Correct. Absolutely. But this also holds true for the following conditional: if the Four Horsemen of the Apocalypse were to appear, then we would be in even deeper trouble.

At other times, Singularitarianism relies on a very weak sense of possibility: some form of artificial ultraintelligence could develop, couldn't it? Yes it could. But this "could" is mere logical possibility: as far as we know, there is no contradiction in assuming the development of artificial ultraintelligence. Yet this is a trick, blurring the immense difference between "I could be sick tomorrow", when I am already feeling unwell, and "I could be a butterfly that dreams it's a human being".

There is no contradiction in assuming that a dead relative you've never heard of has left you $10 million. That could happen. So? Contradictions, like happily married bachelors, aren't possible states of affairs, but non-contradictions, like extra-terrestrial agents living among us so well-hidden that we never discovered them, can still be dismissed as utterly crazy. In other words, the "could" is not the "could happen" of an earthquake, but the "it isn't true that it couldn't happen" of thinking that you are the first immortal human. Correct, but not a reason to start acting as if you will live forever. Unless, that is, someone provides evidence to the contrary, and shows that there is something in our current and foreseeable understanding of computer science that should lead us to suspect that the emergence of artificial ultraintelligence is truly plausible.

Here Singularitarians mix faith and facts, often moved, I believe, by a sincere sense of apocalyptic urgency. They start talking about job losses, digital systems at risk, unmanned drones gone awry and other real and worrisome issues about computational technologies that are coming to dominate human life, from education to employment, from entertainment to conflicts. From this, they jump to being seriously worried about their inability to control their next Honda Civic because it will have a mind of its own. How some nasty ultraintelligent AI will ever evolve autonomously from the computational skills required to park in a tight spot remains unclear. The truth is that climbing on top of a tree is not a small step towards the Moon; it is the end of the journey. What we are going to see are increasingly smart machines able to perform more tasks that we currently perform ourselves.

If all other arguments fail, Singularitarians are fond of throwing in some maths. A favourite reference is Moore's Law. This is the empirical claim that, in the development of digital computers, the number of transistors on integrated circuits doubles approximately every two years. The outcome has so far been more computational power for less. But things are changing. Technical difficulties in nanotechnology present serious manufacturing challenges. There is, after all, a limit to how small things can get before they simply melt. Moore's Law no longer holds. Just because something grows exponentially for some time does not mean that it will continue to do so forever, as The Economist pointed out in 2014 with its mock extrapolation of a turkey's pre-Thanksgiving weight gain into a monstrous "Turkzilla".

From Turkzilla to AIzilla, the step is small, if it weren't for the fact that a growth curve can easily be sigmoid, with an initial stage of growth that is approximately exponential, followed by saturation, slower growth, maturity and, finally, no further growth. But I suspect that the representation of sigmoid curves might be blasphemous for Singularitarians.
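The contrast between naive extrapolation and saturation is easy to see numerically. A minimal sketch, in which the growth rate and ceiling are arbitrary illustrative values, not measurements of anything:

```python
import math

def exponential(t, rate=0.35):
    # Unbounded growth: keeps doubling forever.
    return math.exp(rate * t)

def logistic(t, rate=0.35, cap=1000.0):
    # Sigmoid growth: nearly indistinguishable from exponential
    # early on, but saturating at `cap` instead of diverging.
    return cap / (1.0 + (cap - 1.0) * math.exp(-rate * t))

for t in (0, 5, 10, 40):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

Early points on the two curves are almost identical, which is exactly why an exponential fit to past data cannot tell you whether the curve is really a sigmoid.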

Singularitarianism is irresponsibly distracting. It is a rich-world preoccupation, likely to worry people in leisured societies, who seem to forget about the real evils oppressing humanity and our planet. One example will suffice: almost 700 million people have no access to safe water. This is a major threat to humanity. Oh, and just in case you thought predictions by experts were a reliable guide, think twice. There are many staggeringly wrong technological predictions by experts (see some hilarious ones collected by David Pogue and on Cracked.com). In 2004 Gates stated: "Two years from now, spam will be solved." And in 2011 Hawking declared that "philosophy is dead" (so what's this you are reading?).

The prediction of which I am most fond is by Robert Metcalfe, co-inventor of Ethernet and founder of the digital electronics manufacturer 3Com. In 1995 he promised to eat his words if proved wrong in predicting that the internet would soon go supernova and, in 1996, catastrophically collapse. A man of his word, in 1997 he publicly liquefied his printed article in a food processor and drank it. I wish Singularitarians were as bold and coherent as him.

Deeply irritated by those who worship the wrong digital gods, and by their unfulfilled Singularitarian prophecies, the disbelievers (AItheists) make it their mission to prove once and for all that any kind of faith in true AI is totally wrong. AI is just computers, computers are just Turing Machines, Turing Machines are merely syntactic engines, and syntactic engines cannot think, cannot know, cannot be conscious. End of story.

This is why there is so much that computers (still) cannot do, loosely the title of several publications, by Ira Wilson (1970), Hubert Dreyfus (1972; 1979; 1992), David Harel (2000), and John Searle (2014), though what precisely they can't do is a conveniently movable target. It is also why they are unable to process semantics (of any language, Chinese included, no matter what Google Translate achieves). This proves that there is absolutely nothing to discuss, let alone worry about. There is no genuine AI, so a fortiori there are no problems caused by it. Relax and enjoy all these wonderful electric gadgets.

The AItheists' faith is as misplaced as the Singularitarians'. Both Churches have plenty of followers in California, where Hollywood sci-fi films, wonderful research universities such as Berkeley, and some of the world's most important digital companies flourish side by side. This might not be accidental. When there is big money involved, people easily get confused. For example, Google has been buying AI tech companies as if there were no tomorrow (disclaimer: I am a member of Google's Advisory Council on the right to be forgotten), so surely Google must know something about the real chances of developing a computer that can think, that we, outside The Circle, are missing? Eric Schmidt, Google's executive chairman, fuelled this view when he told the Aspen Institute in 2013: "Many people in AI believe that we're close to [a computer passing the Turing Test] within the next five years."

The Turing test is a way to check whether AI is getting any closer. You ask questions of two agents in another room; one is human, the other artificial; if you cannot tell the difference between the two from their answers, then the robot passes the test. It is a crude test. Think of the driving test: if Alice does not pass it, she is not a safe driver; but even if she does, she might still be an unsafe driver. The Turing test provides a necessary but insufficient condition for a form of intelligence. This is a really low bar. And yet, no AI has ever got over it. More importantly, all programs keep failing in the same way, using tricks developed in the 1960s. Let me offer a bet. I hate aubergine (eggplant), but I shall eat a plate of it if a software program passes the Turing Test and wins the Loebner Prize gold medal before 16 July 2018. It is a safe bet.

Both Singularitarians and AItheists are mistaken. As Turing clearly stated in the 1950 article that introduced his test, the question "Can a machine think?" is "too meaningless to deserve discussion". (Ironically, or perhaps presciently, that question is engraved on the Loebner Prize medal.) This holds true no matter which of the two Churches you belong to. Yet both Churches continue this pointless debate, suffocating any dissenting voice of reason.

True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-known results that indicate the limits of computation, so-called undecidable problems for which it can be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.
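The classic diagonalization behind such undecidability results fits in a few lines. This is only an illustrative sketch, not a proof formalization: `claimed_halts` stands for any purported halting decider, and the "paradox program" simply does the opposite of whatever the decider predicts about it, so every candidate decider is wrong on at least one input.

```python
def diagonal_defeats(claimed_halts):
    # The diagonal 'paradox program' p, run on itself, does the
    # OPPOSITE of whatever claimed_halts predicts for it.
    p = object()                     # stand-in for p's own source code
    prediction = claimed_halts(p, p) # what the decider says p will do
    actual = not prediction          # p loops iff predicted to halt
    return prediction != actual      # the decider is wrong about p

# No matter what a candidate decider answers, it errs somewhere:
print(diagonal_defeats(lambda prog, arg: True))   # 'everything halts'
print(diagonal_defeats(lambda prog, arg: False))  # 'nothing halts'
```

The construction guarantees the mismatch by design, which is the whole point: no algorithm, however clever, can answer the halting question correctly for all programs.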

We know, for example, that our computational machines satisfy the Curry-Howard correspondence, which indicates that proof systems in logic on the one hand and the models of computation on the other, are in fact structurally the same kind of objects, and so any logical limit applies to computers as well. Plenty of machines can do amazing things, including playing checkers, chess and Go and the quiz show Jeopardy better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.
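The correspondence can be glimpsed in a single line of typed code: under Curry-Howard, writing a program inhabiting a given type is the same act as proving the corresponding proposition. A minimal sketch in Lean (the name `id_proof` is my own):

```lean
-- A term of type `A → A` *is* a proof of the proposition A → A:
-- the identity program and the trivial proof are the same object.
def id_proof (A : Prop) : A → A := fun a => a
```

Because programs and proofs are the same objects, any limit provable for one side (such as undecidability) carries over to the other.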

Quantum computers are constrained by the same limits, the limits of what can be computed (so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine. The point is that our smart technologies also thanks to the enormous amount of available data and some very sophisticated programming are increasingly able to deal with more tasks better than we do, including predicting our behaviours. So we are not the only agents able to perform tasks successfully.

This is what I have defined as the Fourth Revolution in our self-understanding. We are not at the centre of the Universe (Copernicus), of the biological kingdom (Charles Darwin), or of rationality (Sigmund Freud). And after Turing, we are no longer at the centre of the infosphere, the world of information processing and smart agency, either. We share the infosphere with digital technologies. These are ordinary artefacts that outperform us in ever more tasks, despite being no cleverer than a toaster. Their abilities are humbling and make us reevaluate human exceptionality and our special role in the Universe, which remains unique. We thought we were smart because we could play chess. Now a phone plays better than a Grandmaster. We thought we were free because we could buy whatever we wished. Now our spending patterns are predicted by devices as thick as a plank.

The success of our technologies depends largely on the fact that, while we were speculating about the possibility of ultraintelligence, we increasingly enveloped the world in so many devices, sensors, applications and data that it became an IT-friendly environment, where technologies can replace us without having any understanding, mental states, intentions, interpretations, emotional states, semantic skills, consciousness, self-awareness or flexible intelligence. Memory (as in algorithms and immense datasets) outperforms intelligence when landing an aircraft, finding the fastest route from home to the office, or discovering the best price for your next fridge.

Digital technologies can do more and more things better than us, by processing increasing amounts of data and improving their performance by analysing their own output as input for the next operations. AlphaGo, the computer program developed by Google DeepMind, won the board game Go against the world's best player because it could use a database of around 30 million moves and play thousands of games against itself, learning how to improve its performance. It is like a two-knife system that can sharpen itself. What's the difference? The same as between you and the dishwasher when washing the dishes. What's the consequence? That any apocalyptic vision of AI can be disregarded. We are, and shall remain for any foreseeable future, the problem, not our technology. So we should concentrate on the real challenges. By way of conclusion, let me list five of them, all equally important.
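None of this requires understanding, only feedback. A toy caricature of "analysing its own output as input": hill-climbing on a made-up one-dimensional score function. Nothing here resembles DeepMind's actual training pipeline; the function names and the landscape are invented for illustration.

```python
import random

def self_improve(score, start, rounds=500, seed=0):
    # Mutate the current policy and keep whichever copy scores better:
    # the system's own output becomes the input of the next round.
    rng = random.Random(seed)
    current = start
    for _ in range(rounds):
        challenger = current + rng.gauss(0.0, 0.1)
        if score(challenger) > score(current):
            current = challenger
    return current

# A made-up 'skill' landscape peaking at 3.0; the loop climbs toward it
# with no notion of what the landscape means.
best = self_improve(lambda x: -(x - 3.0) ** 2, start=0.0)
```

The loop ends up near the peak without any model of why one policy beats another, which is precisely the "memory outperforms intelligence" point.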

We should make AI environment-friendly. We need the smartest technologies we can build to tackle the concrete evils oppressing humanity and our planet, from environmental disasters to financial crises, from crime, terrorism and war, to famine, poverty, ignorance, inequality and appalling living standards.

We should make AI human-friendly. It should be used to treat people always as ends, never as mere means, to paraphrase Immanuel Kant.

We should make AIs stupidity work for human intelligence. Millions of jobs will be disrupted, eliminated and created; the benefits of this should be shared by all, and the costs borne by society.

We should make AIs predictive power work for freedom and autonomy. Marketing products, influencing behaviours, nudging people or fighting crime and terrorism should never undermine human dignity.

And finally, we should make AI make us more human. The serious risk is that we might misuse our smart technologies, to the detriment of most of humanity and the whole planet. Winston Churchill said that "we shape our buildings and afterwards our buildings shape us". This applies to the infosphere and its smart technologies as well.

Singularitarians and AItheists will continue their diatribes about the possibility or impossibility of true AI. We need to be tolerant. But we do not have to engage. As Virgil suggests in Dante's Inferno: "Speak not of them, but look, and pass them by." For the world needs some good philosophy, and we need to take care of more pressing problems.


Singularitarianism Research Papers – Academia.edu

Given the contemporary ambivalent standpoints toward the future of artificial intelligence, recently denoted as the phenomenon of Singularitarianism, Gregory Bateson's core theories of ecology of mind, schismogenesis, and double bind are hereby revisited, taken out of their respective sociological, anthropological, and psychotherapeutic contexts and recontextualized in the field of Roboethics, with a twofold aim: (a) the proposal of a rigid ethical standpoint toward both artificial and non-artificial agents, and (b) an explanatory analysis of the reasons bringing about such a polarized outcome of contradictory views in regard to the future of robots. Firstly, the paper applies the Batesonian ecology of mind to construct a unified roboethical framework which endorses a flat ontology embracing multiple forms of agency, borrowing elements from Floridi's information ethics, classic virtue ethics, Felix Guattari's ecosophy, Braidotti's posthumanism, and the Japanese animist doctrine of Rinri. The proposed framework is intended as a pragmatic solution to the endless dispute regarding the nature of consciousness or the natural/artificial dichotomy, and as a further argument against the recognition of future artificial agency as a potential existential threat. Secondly, schismogenic analysis is employed to describe the emergence of the hostile human-robot cultural contact, tracing its origins in the early scientific discourse of man-machine symbiosis up to the contemporary countermeasures against superintelligent agents. Thirdly, Bateson's double bind theory is utilized as an analytic methodological tool for humanity's collective agency, leading to the hypothesis of a collective schizophrenic symptomatology, due to the constancy and intensity of confronting messages emitted by either proponents or opponents of artificial intelligence. The double bind's treatment is the mirroring therapeutic double bind, and the article concludes by proposing the conceptual pragmatic imperative necessary for such a condition to follow: humanity's conscious habitualizing of danger and familiarization with its possible future extinction, as the result of a progressive blurring between natural and artificial agency, succeeded by a totally non-organic intelligent form of agency.


Singularitarianism | Transhumanism Wiki | FANDOM powered …

Singularitarianism is a moral philosophy based upon the belief that a technological singularity (the technological creation of smarter-than-human intelligence) is possible, advocating deliberate action to bring it into effect and to ensure its safety. While many futurists and transhumanists speculate on the possibility and nature of this technological development (often referred to as the Singularity), Singularitarians believe it is not only possible, but desirable if, and only if, guided safely. Accordingly, they might sometimes "dedicate their lives" to acting in ways they believe will contribute to its safe implementation.

The term "singularitarian" was originally defined by Extropian Mark Plus in 1991 to mean "one who believes the concept of a Singularity". This term has since been redefined to mean "Singularity activist" or "friend of the Singularity"; that is, one who acts so as to bring about the Singularity.[1]

Ray Kurzweil, the author of the book The Singularity is Near, defines a Singularitarian as someone "who understands the Singularity and who has reflected on its implications for his or her own life".[2]

In his 2000 essay, "Singularitarian Principles", Eliezer Yudkowsky writes that there are four qualities that define a Singularitarian:[3]

In July 2000 Eliezer Yudkowsky, Brian Atkins and Sabine Atkins founded the Singularity Institute for Artificial Intelligence to work towards the creation of self-improving Friendly AI. The Singularity Institute's writings argue for the idea that an AI with the ability to improve upon its own design (Seed AI) would rapidly lead to superintelligence. Singularitarians believe that reaching the Singularity swiftly and safely is the best possible way to minimize net existential risk.

Many believe a technological singularity is possible without adopting Singularitarianism as a moral philosophy. Although the exact numbers are hard to quantify, Singularitarianism is presently a small movement. Other prominent Singularitarians include Ray Kurzweil and Nick Bostrom.

Often ridiculing the Singularity as "the Rapture for nerds", many critics have dismissed singularitarianism as a pseudoreligion of fringe science.[4] However, some green anarchist militants have taken singularitarian rhetoric seriously enough to have called for violent direct action to stop the Singularity.[5]


Singularitarianism? Pharyngula

Ray Kurzweil is a genius. One of the greatest hucksters of the age. That's the only way I can explain how his nonsense gets so much press and has such a following. Now he has the cover of Time magazine, and an article called "2045: The Year Man Becomes Immortal". It certainly couldn't be taken seriously anywhere else; once again, Kurzweil wiggles his fingers and mumbles a few catchphrases and upchucks a remarkable prediction: that in 35 years (a number dredged out of his compendium of biased estimates), Man (one, a few, many? How? He doesn't know) will finally achieve immortality (seems to me you'd need to wait a few years beyond that goal to know if it was true). Now we've even got a name for the Kurzweil delusion: Singularitarianism.

There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.

Wow. Sounds just like the Raelians, or Hercolubians, or Scientologists, or any of the modern New Age pseudosciences that appropriate a bit of jargon and blow it up into a huge mythology. Nice hyperbole there, though. Too bad the whole movement is empty of evidence.

One of the things I do really despise about the Kurzweil approach is its dishonest management of critics, and Kurzweil is the master. He loves to tell everyone what's wrong with his critics, but he doesn't actually address the criticisms.

Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, 'Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology.' But I don't believe I'm underestimating the challenge. I think they're underestimating the power of exponential growth."

This is wrong. For instance, I think reverse-engineering the general principles of a human brain might well be doable in a few or several decades, and I do suspect that we'll be able to do things in ten years, 20 years, a century that I can't even imagine. I don't find Kurzweil silly because I'm blind to the power of exponential growth, but because:

Kurzweil hasn't demonstrated that there is exponential growth at play here. I've read his absurd book, and his data is phony and fudged to fit his conclusion. He cheerfully makes stuff up or drops data that goes against his desires to invent these ridiculous charts.

I'm not claiming he underestimates the complexity of the brain; I'm saying he doesn't understand biology, period. Handwaving is not enough: if he's going to make fairly specific claims of immortality in 35 years, there had better be some understanding of the path that will be taken.

There is a vast difference between grasping a principle and implementing the specifics. If we understand how the brain works, and if we can create a computer simulation that replicates and improves upon the function of our brain, that does not in any way imply that my identity and experiences can be translated into the digital realm. Again, Kurzweil doesn't have even a hint of a path that can be taken to do that, so he has no basis for making the prediction.

Smooth curves that climb upward into infinity can exist in mathematics (although Kurzweil's predictions don't live in a state of rigor that would justify calling them mathematical), but they don't work in the real world. There are limits. We've been building better and more powerful power plants for aircraft for a century, but they haven't gotten to a size and efficiency to allow me to fly off with a personal jetpack. I have no reason to expect that they will, either.

While I don't doubt that science will advance rapidly, I also expect that the directions it takes will be unpredictable. Kurzweil confuses engineering, where you build something to fit a predetermined set of specifications, with science, in which you follow the evidence wherever it leads. Look at the so-called war on cancer: it isn't won, no one expects that it will be, but what it has accomplished is to provide limited success in improving health and quality of life, extending survival times, and developing new tools for earlier diagnosis. That's reality, and understanding reality is achieved incrementally, not by sudden surges in technology independent of human effort. It also generates unexpected spinoffs in deeper knowledge about cell cycles, signaling, gene regulation, etc. The problems get more interesting and diverse, and it's awfully silly of one non-biologist in 2011 to try to predict what surprises will pop out.

Kurzweil is a typical technocrat with limited breadth of knowledge. Imagine what happens IF we actually converge on some kind of immortality. Who gets it? If its restricted, what makes Kurzweil think he, and not Senator Dumbbum who controls federal spending on health, or Tycoon Greedo the trillionaire, gets it? How would the world react if such a capability were available, and they (or their dying mother, or their sick child) dont have access? What if its cheap and easy, and everyone gets it? Kurzweil is talking about a technology that would almost certainly destroy every human society on the planet, and he treats it as blithely as the prospect of getting new options for his cell phone. In case he hadnt noticed, human sociology and politics shows no sign of being on an exponential trend towards greater wisdom. Yeah, expect turbulence.

Hes guilty of a very weird form of reductionism that considers a human life can be reduced to patterns in a computer. I have no stock in spiritualism or dualism, but we are very much a product of our crude and messy biology we percieve the world through imprecise chemical reactions, our brains send signals by shuffling ions in salt water, our attitudes and reactions are shaped by chemicals secreted by glands in our guts. Replicating the lightning while ignoring the clouds and rain and pressure changes will not give you a copy of the storm. It will give you something different, which would be interesting still, but its not the same.

Kurzweil shows other signs of kookery. Two hundred pills a day? Weekly intravenous transfusions? Drinking alkalized water because hes afraid of acidosis? The man is an intelligent engineer, but hes also an obsessive crackpot.

Oh, well. Ill make my own predictions. Magazines will continue to praise Kurzweils techno-religion in sporadic bursts, and followers will continue to gullibly accept what he says because it is what they wish would happen. Kurzweil will die while brain-uploading and immortality are still vague dreams; he will be frozen in liquid nitrogen, which will so thoroughly disrupt his cells that even if we discover how to cure whatever kills him, there will be no hope of recovering the mind and personality of Kurzweil from the scrambled chaos of his dead brain. 2045 will come, and those of us who are alive to see it, will look back and realize it is very, very different from what life was like in 2011, and also very different from what we expected life to be like. At some point, I expect artificial intelligences to be part of our culture, if we persist; theyll work in radically different ways than human brains, and they will revolutionize society, but I have no way of guessing how. Ray Kurzweil will be forgotten, mostly, but records of the existence of a strange shaman of the circuitry from the late 20th and early 21st century will be tucked away in whatever the future databases are like, and people and machines will sometimes stumble across them and laugh or zotigrate and say, How quaint and amusing!, or whatever the equivalent in the frangitwidian language of the trans-entity circumsolar ansible network might be.

And thatll be kinda cool. I wish I could live to see it.

Here is the original post:

Singularitarianism? Pharyngula

Singularitarianism r/Singularitarianism – reddit

Welcome to /r/Singularitarianism

A subreddit devoted to the social, political, and technological movement defined by the belief that deliberate action ought to be taken to ensure that an Intelligence Explosion benefits human civilization.

The theory of Singularitarianism is that our human species is an infant waiting to be born. An infant that is unaware of an outside world beyond the womb. The hope, purpose, and meaning in the creation of greater-than-human intelligence is our will to be born. The birth of humanity, the birth of the infant, is the evolution of the intelligence of our man and machine civilization.

Singularitarianism is a non-religious, decentralized futurist and transhumanist movement. Singularitarianism is faith in scientific skepticism and admiration for the biological phenomenon of human intelligence. From this biological intelligence comes the awe, responsibility, and capability of creating non-biological machine intelligence.

The Singularity places a horizon across humanity's understanding because we are still discovering the scientific nature of our own intelligence. Not until we understand and improve upon the biological heritage of our intelligence can we begin to understand the meaning of superintelligence. Ultimately, this reverence for universal forms of intelligence and sentience is our safeguard against mysticism, fanaticism, and ideology. Understanding and improving intelligence is simultaneously our greatest imperative and our guiding principle. This movement does not believe in God, but holds that man is simply a bridge and not an end, and that we can instead become the gods ourselves. The human future(s) are infinite.

Singularitarianism? – Pharyngula

Ray Kurzweil is a genius. One of the greatest hucksters of the age. That's the only way I can explain how his nonsense gets so much press and has such a following. Now he has the cover of Time magazine, and an article called 2045: The Year Man Becomes Immortal. It certainly couldn't be taken seriously anywhere else; once again, Kurzweil wiggles his fingers and mumbles a few catchphrases and upchucks a remarkable prediction: that in 35 years (a number dredged out of his compendium of biased estimates), Man (one, a few, many? How? He doesn't know) will finally achieve immortality (seems to me you'd need to wait a few years beyond that goal to know if it was true). Now we've even got a name for the Kurzweil delusion: Singularitarianism.

There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.

Wow. Sounds just like the Raelians, or Hercolubians, or Scientologists, or any of the modern New Age pseudosciences that appropriate a bit of jargon and blow it up into a huge mythology. Nice hyperbole there, though. Too bad the whole movement is empty of evidence.

One of the things I do really despise about the Kurzweil approach is its dishonest management of critics, and Kurzweil is the master. He loves to tell everyone what's wrong with his critics, but he doesn't actually address the criticisms.

Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, 'Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology.' But I don't believe I'm underestimating the challenge. I think they're underestimating the power of exponential growth."

This is wrong. For instance, I think reverse-engineering the general principles of a human brain might well be doable in a few or several decades, and I do suspect that we'll be able to do things in ten years, 20 years, a century that I can't even imagine. I don't find Kurzweil silly because I'm blind to the power of exponential growth, but because:

Kurzweil hasn't demonstrated that there is exponential growth at play here. I've read his absurd book, and his data is phony and fudged to fit his conclusion. He cheerfully makes stuff up or drops data that goes against his desires to invent these ridiculous charts.
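It's worth seeing how easy such charts are to manufacture. Here is a toy sketch (my own construction, not Kurzweil's actual data): if you pick "milestones" with a recency bias, i.e. the same number of remembered events per decade of history, the gaps between them automatically trace a straight line on log-log axes, the signature "accelerating change" shape, with no real acceleration anywhere.

```python
import math
import random

random.seed(0)

# 50 "canonical milestones", sampled log-uniformly in years-before-present:
# equally many remembered events per decade of history. This is a pure
# sampling artifact, not a model of actual technological change.
times = sorted((10 ** random.uniform(0, 9) for _ in range(50)), reverse=True)
gaps = [a - b for a, b in zip(times, times[1:])]

# Least-squares slope on the log-log plot of
# (years before present, gap to the next milestone).
pts = [(math.log10(t), math.log10(g)) for t, g in zip(times, gaps) if g > 0]
mx = sum(x for x, _ in pts) / len(pts)
my = sum(y for _, y in pts) / len(pts)
slope = sum((x - mx) * (y - my) for x, y in pts) / sum(
    (x - mx) ** 2 for x, _ in pts
)

# The slope comes out close to 1: a straight log-log line, exactly the
# shape of the "countdown to singularity" charts.
print(round(slope, 2))
```

Any selection of events that is denser toward the present produces the same plot, which is why the chart alone demonstrates nothing.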

I'm not claiming he underestimates the complexity of the brain; I'm saying he doesn't understand biology, period. Handwaving is not enough: if he's going to make fairly specific claims of immortality in 35 years, there had better be some understanding of the path that will be taken.

There is a vast difference between grasping a principle and implementing the specifics. If we understand how the brain works, if we can create a computer simulation that replicates and improves upon the function of our brain, that does not in any way imply that my identity and experiences can be translated into the digital realm. Again, Kurzweil doesn't have even a hint of a path that can be taken to do that, so he has no basis for making the prediction.

Smooth curves that climb upward into infinity can exist in mathematics (although Kurzweil's predictions don't live in a state of rigor that would justify calling them mathematical), but they don't work in the real world. There are limits. We've been building better and more powerful power plants for aircraft for a century, but they haven't gotten to a size and efficiency to allow me to fly off with a personal jetpack. I have no reason to expect that they will, either.
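The point about limits can be made concrete with a minimal sketch (hypothetical numbers, not data about any real technology): a pure exponential and a logistic "S-curve" with the same early doubling time are nearly indistinguishable at first, but the logistic saturates at its ceiling while the exponential extrapolation runs off to absurdity.

```python
import math

# A pure exponential: doubles every `doubling` years, forever.
def exponential(t, x0=1.0, doubling=2.0):
    return x0 * 2 ** (t / doubling)

# A logistic S-curve with the same early doubling time but a hard
# ceiling `cap`, the shape saturating technologies tend to follow.
def logistic(t, x0=1.0, cap=100.0, doubling=2.0):
    r = math.log(2) / doubling
    return cap / (1 + (cap / x0 - 1) * math.exp(-r * t))

# Early on the two curves are nearly indistinguishable (~4.0 vs ~3.9)...
print(exponential(4), logistic(4))
# ...but 30 years out the extrapolated exponential is off by orders of
# magnitude: ~32768 vs ~99.7 against the ceiling of 100.
print(exponential(30), logistic(30))
```

The catch, of course, is that early data alone cannot tell you which curve you are on, which is exactly why extrapolating decades ahead from a smooth trend is unjustified.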

While I don't doubt that science will advance rapidly, I also expect that the directions it takes will be unpredictable. Kurzweil confuses engineering, where you build something to fit a predetermined set of specifications, with science, in which you follow the evidence wherever it leads. Look at the so-called war on cancer: it isn't won, and no one expects that it will be, but what it has accomplished is limited success in improving health and quality of life, extending survival times, and developing new tools for earlier diagnosis. That's reality, and understanding reality is achieved incrementally, not by sudden surges in technology independent of human effort. It also generates unexpected spinoffs in deeper knowledge about cell cycles, signaling, gene regulation, etc. The problems get more interesting and diverse, and it's awfully silly of one non-biologist in 2011 to try to predict what surprises will pop out.

Kurzweil is a typical technocrat with limited breadth of knowledge. Imagine what happens IF we actually converge on some kind of immortality. Who gets it? If it's restricted, what makes Kurzweil think he, and not Senator Dumbbum who controls federal spending on health, or Tycoon Greedo the trillionaire, gets it? How would the world react if such a capability were available, and they (or their dying mother, or their sick child) don't have access? What if it's cheap and easy, and everyone gets it? Kurzweil is talking about a technology that would almost certainly destroy every human society on the planet, and he treats it as blithely as the prospect of getting new options for his cell phone. In case he hadn't noticed, human sociology and politics show no sign of being on an exponential trend towards greater wisdom. Yeah, expect turbulence.

He's guilty of a very weird form of reductionism that considers that a human life can be reduced to patterns in a computer. I have no stock in spiritualism or dualism, but we are very much a product of our crude and messy biology: we perceive the world through imprecise chemical reactions, our brains send signals by shuffling ions in salt water, our attitudes and reactions are shaped by chemicals secreted by glands in our guts. Replicating the lightning while ignoring the clouds and rain and pressure changes will not give you a copy of the storm. It will give you something different, which would still be interesting, but it's not the same.

Kurzweil shows other signs of kookery. Two hundred pills a day? Weekly intravenous transfusions? Drinking alkalized water because he's afraid of acidosis? The man is an intelligent engineer, but he's also an obsessive crackpot.

Oh, well. I'll make my own predictions. Magazines will continue to praise Kurzweil's techno-religion in sporadic bursts, and followers will continue to gullibly accept what he says because it is what they wish would happen. Kurzweil will die while brain-uploading and immortality are still vague dreams; he will be frozen in liquid nitrogen, which will so thoroughly disrupt his cells that even if we discover how to cure whatever kills him, there will be no hope of recovering the mind and personality of Kurzweil from the scrambled chaos of his dead brain. 2045 will come, and those of us who are alive to see it will look back and realize it is very, very different from what life was like in 2011, and also very different from what we expected life to be like. At some point, I expect artificial intelligences to be part of our culture, if we persist; they'll work in radically different ways than human brains, and they will revolutionize society, but I have no way of guessing how. Ray Kurzweil will be forgotten, mostly, but records of the existence of a strange shaman of the circuitry from the late 20th and early 21st century will be tucked away in whatever the future databases are like, and people and machines will sometimes stumble across them and laugh or zotigrate and say, "How quaint and amusing!", or whatever the equivalent in the frangitwidian language of the trans-entity circumsolar ansible network might be.

And that'll be kinda cool. I wish I could live to see it.

Singularitarianism – Lesswrongwiki

Singularitarianism refers to attitudes or beliefs favoring a technological singularity.

The term was coined by Mark Plus, then given a more specific meaning by Eliezer Yudkowsky in his Singularitarian principles. "Singularitarianism", early on, referred to a principled activist stance aimed at creating a singularity for the benefit of humanity as a whole, and in particular to the movement surrounding the Machine Intelligence Research Institute.

The term has since sometimes been used differently, without implying the specific principles listed by Yudkowsky. For example, Ray Kurzweil's book "The Singularity Is Near" contains a chapter titled "Ich bin ein Singularitarian", in which Kurzweil describes his own vision for technology improving the world. Others have used the term to refer to people with an impact on the Singularity and to "expanding one's mental faculties by merging with technology". Others have used "Singularitarian" to refer to anyone who predicts a technological singularity will happen.

Yudkowsky has (perhaps facetiously) suggested that those adhering to the original activist stance relabel themselves the "Elder Singularitarians".

Technological utopianism – Wikipedia

Technological utopianism (often called techno-utopianism or technoutopianism) is any ideology based on the premise that advances in science and technology will eventually bring about a utopia, or at least help to fulfil one or another utopian ideal. A techno-utopia is therefore a hypothetical ideal society, in which laws, government, and social conditions are solely operating for the benefit and well-being of all its citizens, set in the near- or far-future, when advanced science and technology will allow these ideal living standards to exist; for example, post-scarcity, transformations in human nature, the abolition of suffering and even the end of death. Technological utopianism is often connected with other discourses presenting technologies as agents of social and cultural change, such as technological determinism or media imaginaries.[1]

Douglas Rushkoff, a leading theorist on technology and cyberculture, claims that technology gives everyone a chance to voice their own opinions, fosters individualistic thinking, and dilutes hierarchy and power structures by giving the power to the people.[2] He says that the whole world is in the middle of a new Renaissance, one that is centered on technology and self-expression. However, Rushkoff makes it clear that people don't live their lives behind a desk with their hands on a keyboard.[3]

A tech-utopia does not disregard any problems that technology may cause,[4] but strongly believes that technology allows mankind to make social, economic, political, and cultural advancements.[5] Overall, technological utopianism views technology's impacts as extremely positive.

In the late 20th and early 21st centuries, several ideologies and movements, such as the cyberdelic counterculture, the Californian Ideology, transhumanism,[6] and singularitarianism, have emerged promoting a form of techno-utopia as a reachable goal. Cultural critic Imre Szeman argues technological utopianism is an irrational social narrative because there is no evidence to support it. He concludes that it shows the extent to which modern societies place faith in narratives of progress and technology overcoming things, despite all evidence to the contrary.[7]

Karl Marx believed that science and democracy were the right and left hands of what he called the move from the realm of necessity to the realm of freedom. He argued that advances in science helped delegitimize the rule of kings and the power of the Christian Church.[8]

19th-century liberals, socialists, and republicans often embraced techno-utopianism. Radicals like Joseph Priestley pursued scientific investigation while advocating democracy. Robert Owen, Charles Fourier and Henri de Saint-Simon in the early 19th century inspired communalists with their visions of a future scientific and technological evolution of humanity using reason. Radicals seized on Darwinian evolution to validate the idea of social progress. Edward Bellamy's socialist utopia in Looking Backward, which inspired hundreds of socialist clubs in the late 19th century United States and a national political party, was as highly technological as Bellamy's imagination. For Bellamy and the Fabian Socialists, socialism was to be brought about as a painless corollary of industrial development.[8]

Marx and Engels saw more pain and conflict involved, but agreed about the inevitable end. Marxists argued that the advance of technology laid the groundwork not only for the creation of a new society, with different property relations, but also for the emergence of new human beings reconnected to nature and themselves. At the top of the agenda for empowered proletarians was to increase the total productive forces as rapidly as possible. The 19th and early 20th century Left, from social democrats to communists, were focused on industrialization, economic development and the promotion of reason, science, and the idea of progress.[8]

Some technological utopians promoted eugenics. Holding that in studies of families, such as the Jukes and Kallikaks, science had proven that many traits such as criminality and alcoholism were hereditary, many advocated the sterilization of those displaying negative traits. Forcible sterilization programs were implemented in several states in the United States.[9]

H.G. Wells in works such as The Shape of Things to Come promoted technological utopianism.

The horrors of the 20th century (communist and fascist dictatorships, world wars) caused many to abandon optimism. The Holocaust, as Theodor Adorno underlined, seemed to shatter the ideal of Condorcet and other thinkers of the Enlightenment, which commonly equated scientific progress with social progress.[10]

The Goliath of totalitarianism will be brought down by the David of the microchip.

A movement of techno-utopianism began to flourish again in the dot-com culture of the 1990s, particularly in the West Coast of the United States, especially based around Silicon Valley. The Californian Ideology was a set of beliefs combining bohemian and anti-authoritarian attitudes from the counterculture of the 1960s with techno-utopianism and support for libertarian economic policies. It was reflected in, reported on, and even actively promoted in the pages of Wired magazine, which was founded in San Francisco in 1993 and served for a number of years as the "bible" of its adherents.[11][12][13]

This form of techno-utopianism reflected a belief that technological change revolutionizes human affairs, and that digital technology in particular (of which the Internet was but a modest harbinger) would increase personal freedom by freeing the individual from the rigid embrace of bureaucratic big government. "Self-empowered knowledge workers" would render traditional hierarchies redundant; digital communications would allow them to escape the modern city, an "obsolete remnant of the industrial age".[11][12][13]

Similar forms of "digital utopianism" have often entered the political messages of parties and social movements that point to the Web, or more broadly to new media, as harbingers of political and social change.[14] Its adherents claim it transcended conventional "right/left" distinctions in politics by rendering politics obsolete. However, techno-utopianism has disproportionately attracted adherents from the libertarian right end of the political spectrum. Therefore, techno-utopians often have a hostility toward government regulation and a belief in the superiority of the free market system. Prominent "oracles" of techno-utopianism included George Gilder and Kevin Kelly, an editor of Wired who also published several books.[11][12][13]

During the late 1990s dot-com boom, when the speculative bubble gave rise to claims that an era of "permanent prosperity" had arrived, techno-utopianism flourished, typically among the small percentage of the population who were employees of Internet startups and/or owned large quantities of high-tech stocks. With the subsequent crash, many of these dot-com techno-utopians had to rein in some of their beliefs in the face of the clear return of traditional economic reality.[12][13]

In the late 1990s and especially during the first decade of the 21st century, technorealism and techno-progressivism rose among advocates of technological change as critical alternatives to techno-utopianism.[15][16] However, technological utopianism persists in the 21st century as a result of new technological developments and their impact on society. For example, several technical journalists and social commentators, such as Mark Pesce, have interpreted the WikiLeaks phenomenon and the United States diplomatic cables leak in early December 2010 as a precursor to, or an incentive for, the creation of a techno-utopian transparent society.[17] Cyber-utopianism, a term first coined by Evgeny Morozov, is another manifestation of this, in particular in relation to the Internet and social networking.

Bernard Gendron, a professor of philosophy at the University of Wisconsin-Milwaukee, defines the four principles of modern technological utopians in the late 20th and early 21st centuries as follows:[18]

Rushkoff presents us with multiple claims that surround the basic principles of Technological Utopianism:[19]

Critics claim that techno-utopianism's identification of social progress with scientific progress is a form of positivism and scientism. Critics of modern libertarian techno-utopianism point out that it tends to focus on "government interference" while dismissing the positive effects of the regulation of business. They also point out that it has little to say about the environmental impact of technology[22] and that its ideas have little relevance for much of the rest of the world, which is still relatively poor (see global digital divide).[11][12][13]

In his 2010 study System Failure: Oil, Futurity, and the Anticipation of Disaster, Canada Research Chairholder in cultural studies Imre Szeman argues that technological utopianism is one of the social narratives that prevent people from acting on the knowledge they have concerning the effects of oil on the environment.[7]

In a controversial article, "Techno-Utopians are Mugged by Reality", the Wall Street Journal explores the concept of the violation of free speech by shutting down social media to stop violence. After British cities were looted on consecutive days, British Prime Minister David Cameron argued that the government should have the ability to shut down social media during crime sprees so that the situation could be contained. A poll was conducted to see whether Twitter users would prefer to let the service be closed temporarily or to keep it open so they could chat about the famous television show X-Factor; the report showed that every tweet opted for X-Factor. The negative social effect of a technological utopia is that society is so addicted to technology that we simply can't be parted from it, even for the greater good. While many techno-utopians would like to believe that digital technology is for the greater good, it can also be used negatively to bring harm to the public.[23]

Other criticisms of a techno-utopia concern the human element. Critics suggest that a techno-utopia may lessen human contact, leading to a distant society. Another concern is the amount of reliance society may place on its technologies in these techno-utopia settings.[24] These criticisms are sometimes referred to as a technological anti-utopian view or a techno-dystopia.

Even today, the negative social effects of a technological utopia can be seen. Mediated communication such as phone calls, instant messaging and text messaging are steps towards a utopian world in which one can easily contact another regardless of time or location. However, mediated communication removes many aspects that are helpful in transferring messages. As it stands today, most text, email, and instant messages offer fewer nonverbal cues about the speaker's feelings than do face-to-face encounters.[25] This means that mediated communication can easily be misconstrued and the intended message not properly conveyed. With the absence of tone, body language, and environmental context, the chance of a misunderstanding is much higher, rendering the communication ineffective. In fact, mediated technology can be seen from a dystopian view because it can be detrimental to effective interpersonal communication. These criticisms would only apply to messages that are prone to misinterpretation, as not every text-based communication requires contextual cues. The limitations of lacking tone and body language in text-based communication are likely to be mitigated by video and augmented reality versions of digital communication technologies.[26]

Sterling Crispin: Begin at the End – ArtSlant

This essay was first published in the ArtSlant Prize 2016 Catalogue, on the occasion of the ArtSlant Prize Shortlist exhibition at SPRING/BREAK Art Show, from February 28 to March 6, 2016. Sterling Crispin is the ArtSlant Prize 2016 Third Prize winner. Other ArtSlant Prize 2016 catalogue essays: Brigitta Varadi & Tiffany Smith

What does the end, The End, look like? Is it a transcendent experience, as the religious and singularitarians believe? Will humans transform into iridescent angels of ethereal nature, timeless in their march towards oneness? Will the end look like an episode of The Walking Dead? Like an episode of Doomsday Preppers? Will the remnants of society scrabble together the few resources left to find baseline survival, the underlying truth of excess? Does the end resemble a person sitting in a concrete box buried underground swallowing baked beans out of a can, or do we become waves of energy, identifiable not by our body but by a collection of experiences and tropes traveling from host to host, like a Westworld protagonist?

It is hard to conceive of a greater tension between these two visions, and yet they exist, in tandem, in our collective imaginations. "To imagine civilization dwindling down to a couple thousand people, the Earth in environmental hell, taking global collapse to its conclusion: it's unimaginably terrible," says artist Sterling Crispin. "But," he continues, "take techno-optimism to its extreme, with humans living for hundreds of thousands of years, and it's also kind of unimaginable."

Sterling Crispin explores the end. From a fascination with Buddhist conceptions of oneness, and propelled by the rapid technological pace in the era of Moore's Law,[1] Crispin takes as his subject the hurtling hulk of humanity as it flies towards some kind of imagined or real conclusion. "Transhumanism is on my mind a lot," he says.

Crispin's materials are birthed in today's technology. Aluminum server frames, Alexa towers, emergency water filtration systems, canned food, Bitcoin miners, extruded plastics and resins: these are the vocabulary of an end-times practice.

The singularity as a concept comes from a 1993 paper[2] by mathematician Vernor Vinge in which he states: "We are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater-than-human intelligence." The basic principle of singularitarianism is that, at a certain point, advancement will be out of human hands. Technology will be free to replicate and improve on its own. Futurist Ray Kurzweil believes that at this point a massive rupture in human culture, philosophy, and civilization will occur, characterized by the end of death and anthropocentric evolution. Kurzweil's end is an apocalypse of a different sort.[3] His is a moment of becoming and transcendence beyond the human.

Sterling Crispin, Self-Contained Investment Module and Contingency Package (Cloud-Enabled Modular Emergency-Enterprise Application Platform), 2015. Courtesy of the artist

The globe just scored a hat trick of hottest years on record. The doomsday clock has begun ticking towards midnight again. Amidst the statistical evidence, markers of impending doom keep pinging us. The cries of apocryphal evangelists are beginning to ring true.

With each passing meteor, every seemingly significant date on an ancient calendar that appears on our Julian calendar, throngs proclaim the end with rapturous fervency. But the end interrogated by Crispin is not fanciful. His work has a sincere immediacy: "Trump's presidency and the collapse of civil society really gets you thinking about how fragile our whole global economy is and how loosely everything is held together." He goes on, "Next month, some catastrophe could happen that could close down international shipping, close off the internet; millions of people could die because there wasn't enough food. We're just on the edge of this all of the time."

Never has the world been so interconnected. In 2015, $16 trillion (21% of GWP) in merchandise changed hands across the world. In 2013, one fifth of the average American's diet was imported. This interdependence isn't trivial. As political forces around the world begin to pull back from the integrated system of globalized advanced capitalism, the connections holding it all together seem more tenuous than ever.

Crispins suite of four sculptures, N.A.N.O., B.I.O., I.N.F.O., C.O.G.N.O. (2015), serves as sentries. Each monolith is attached to an industry stock: N.A.N.O. comes with 100 shares of stock in a nanotechnology company, B.I.O., biotechnology, I.N.F.O., informatics, and C.O.G.N.O., cognitive research. If separated, these Gundam-like structures will track each other: a GPS display shows you where the other three horsemen are at all times. An emergency water purifier and food rations anchor the sculptures. N.A.N.O. et al. recall ancient statues guarding a crypt, protectors of humanity straight out of anime waiting for the right time to awaken and save the world. They reach towards the promises of advanced capital, zeroing in on the industries most likely to transform humanity via the singularity and save it from itself.

Sterling Crispin, N.A.N.O., B.I.O., I.N.F.O., C.O.G.N.O., 2015. Courtesy of the artist

Of course, if that doesn't work out, there's always a jerrycan of clean water and some freeze-dried beef.[4]

Self-Contained Investment Module and Contingency Package (2015), like N.A.N.O., is practical and sculptural. Inside an aluminum frame sits an ASIC Bitcoin mining tube, a Lifesaver Systems 4000 ultra-filtration water bottle, an emergency radio, Mayday emergency food rations, a knife, heirloom seeds, etc. The connections are barely waiting to be pieced together by the viewer: they're all there, visible in the cube. Crispin's work makes hard connections, direct metaphors, in his search for the aesthetic of the end. "The metaphors I use are heavy-handed but grounded in the utility of their function in reality," relays the artist.

This frankness fights the obfuscating nature of reality. Are things really as dire as they seem? It is readily accepted that things will be okay; we tell ourselves as much often enough. But why is it so difficult to accept that things might not be okay? Is it so difficult to imagine that, shit, we're fucked?

In some remote corner of the universe, flickering in the light of the countless solar systems into which it had been poured, there was once a planet on which clever animals invented cognition. It was the most arrogant and most mendacious minute in the history of the world; but a minute was all it was. After nature had drawn just a few more breaths the planet froze and the clever animals had to die.[5]

There is something reflected in the gleaming aluminum, the candy-apple neon, and the low hum of Self-Contained. An optimism, perhaps, that if we structure things just right, if we allow for recursive corrections, if we prepare and adjust, we won't be the ones responsible for bringing the short reign of humanity to an end. We might not be Nietzsche's arrogant creatures doomed to death on a frozen, or in this case scorched, Earth. We may just be the ones that become what's next. Either way, be prepared.

Joel Kuennen

Joel Kuennen is the Chief Operations Officer and a Senior Editor at ArtSlant.

[1] Moore's Law holds that the number of transistors in an integrated circuit doubles every two years. The law has been extrapolated to cover the exponential rate of computational and technological advancement more broadly.
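The doubling rule in the footnote above can be sketched in a few lines of Python; the starting count of 2,300 transistors (the Intel 4004 of 1971) is an illustrative figure, not one from the article.

```python
# Minimal sketch of Moore's Law: transistor counts double every two years.
def transistors(initial: int, years: int) -> int:
    """Project a transistor count forward, doubling once per two years."""
    return initial * 2 ** (years // 2)

# Ten doublings over 20 years multiply the count by 2**10 = 1024.
print(transistors(2300, 20))  # 2300 * 1024 = 2355200
```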

[2] Vernor Vinge, The Coming Technological Singularity: How to Survive in the Post-Human Era (paper presented at the VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, March 30-31, 1993).

[3] Kurzweil, it should be noted, is driven to defeat death so that he may resurrect his father, who died early in Kurzweil's life. How human is that?!

[4] It's difficult to ignore humor when discussing the end. One cannot approach nothingness without being a bit glib.

[5] Friedrich Nietzsche, "On Truth and Lies in a Non-Moral Sense" (1873), trans. Ronald Speirs.

(Image at top: Sterling Crispin, Self-Contained Investment Module and Contingency Package (Cloud-Enabled Modular Emergency-Enterprise Application Platform) (detail), 2015. Courtesy of the artist)

Source: "Sterling Crispin: Begin at the End," ArtSlant

Singularitarianism | Prometheism.net

Ray Kurzweil is a genius. One of the greatest hucksters of the age. That's the only way I can explain how his nonsense gets so much press and has such a following. Now he has the cover of Time magazine, and an article called "2045: The Year Man Becomes Immortal." It certainly couldn't be taken seriously anywhere else; once again, Kurzweil wiggles his fingers, mumbles a few catchphrases, and upchucks a remarkable prediction: that in 35 years (a number dredged out of his compendium of biased estimates), Man (one, a few, many? How? He doesn't know) will finally achieve immortality (seems to me you'd need to wait a few years beyond that goal to know if it was true). Now we've even got a name for the Kurzweil delusion: Singularitarianism.

There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.

Wow. Sounds just like the Raelians, or Hercolubians, or Scientologists, or any of the modern New Age pseudosciences that appropriate a bit of jargon and blow it up into a huge mythology. Nice hyperbole there, though. Too bad the whole movement is empty of evidence.

One of the things I really do despise about the Kurzweil approach is its dishonest management of critics, and Kurzweil is the master. He loves to tell everyone what's wrong with his critics, but he doesn't actually address the criticisms.

Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled, or at least matched in power and flexibility, by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, 'Oh, Kurzweil is underestimating the complexity of reverse-engineering the human brain or the complexity of biology.' But I don't believe I'm underestimating the challenge. I think they're underestimating the power of exponential growth."

This is wrong. For instance, I think reverse-engineering the general principles of a human brain might well be doable in a few or several decades, and I do suspect that we'll be able to do things in ten years, 20 years, a century that I can't even imagine. I don't find Kurzweil silly because I'm blind to the power of exponential growth, but because:

Kurzweil hasn't demonstrated that there is exponential growth at play here. I've read his absurd book, and his data is phony and fudged to fit his conclusion. He cheerfully makes stuff up or drops data that goes against his desires in order to invent these ridiculous charts.

I'm not claiming he underestimates the complexity of the brain; I'm saying he doesn't understand biology, period. Handwaving is not enough: if he's going to make fairly specific claims of immortality in 35 years, there had better be some understanding of the path that will be taken.

There is a vast difference between grasping a principle and implementing the specifics. Even if we understand how the brain works, even if we can create a computer simulation that replicates and improves upon the function of our brain, that does not in any way imply that my identity and experiences can be translated into the digital realm. Again, Kurzweil doesn't have even a hint of a path that can be taken to do that, so he has no basis for making the prediction.

Smooth curves that climb upward into infinity can exist in mathematics (although Kurzweil's predictions don't live in a state of rigor that would justify calling them mathematical), but they don't work in the real world. There are limits. We've been building better and more powerful power plants for aircraft for a century, but they haven't gotten to a size and efficiency that would let me fly off with a personal jetpack. I have no reason to expect that they will, either.
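The limits argument above is the standard contrast between exponential and logistic growth: a curve that looks exponential early on can saturate at a ceiling. A minimal sketch, where the growth rate and the ceiling of 100 are illustrative assumptions rather than figures from the post:

```python
# Exponential growth vs. logistic growth toward a hard ceiling.
import math

def exponential(t: float, rate: float = 1.0) -> float:
    """Unbounded growth: diverges as t increases."""
    return math.exp(rate * t)

def logistic(t: float, rate: float = 1.0, cap: float = 100.0) -> float:
    """Tracks the exponential early on, but saturates at `cap`."""
    return cap / (1.0 + (cap - 1.0) * math.exp(-rate * t))

for t in (0, 5, 10):
    print(t, round(exponential(t)), round(logistic(t)))
```

By t = 10 the exponential has blown past 22,000 while the logistic curve is pinned near its ceiling of 100, even though the two start from the same value at t = 0.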

While I don't doubt that science will advance rapidly, I also expect that the directions it takes will be unpredictable. Kurzweil confuses engineering, where you build something to fit a predetermined set of specifications, with science, in which you follow the evidence wherever it leads. Look at the so-called war on cancer: it isn't won, and no one expects that it will be, but what it has accomplished is limited success in improving health and quality of life, extending survival times, and developing new tools for earlier diagnosis. That's reality, and understanding reality is achieved incrementally, not by sudden surges in technology independent of human effort. It also generates unexpected spinoffs in deeper knowledge about cell cycles, signaling, gene regulation, etc. The problems get more interesting and diverse, and it's awfully silly of one non-biologist in 2011 to try to predict what surprises will pop out.

Kurzweil is a typical technocrat with limited breadth of knowledge. Imagine what happens IF we actually converge on some kind of immortality. Who gets it? If it's restricted, what makes Kurzweil think that he, and not Senator Dumbbum who controls federal spending on health, or Tycoon Greedo the trillionaire, gets it? How would the world react if such a capability were available and they (or their dying mother, or their sick child) didn't have access? What if it's cheap and easy, and everyone gets it? Kurzweil is talking about a technology that would almost certainly destroy every human society on the planet, and he treats it as blithely as the prospect of getting new options for his cell phone. In case he hadn't noticed, human sociology and politics show no sign of being on an exponential trend towards greater wisdom. Yeah, expect turbulence.

He's guilty of a very weird form of reductionism that considers a human life reducible to patterns in a computer. I have no stock in spiritualism or dualism, but we are very much a product of our crude and messy biology: we perceive the world through imprecise chemical reactions, our brains send signals by shuffling ions in salt water, our attitudes and reactions are shaped by chemicals secreted by glands in our guts. Replicating the lightning while ignoring the clouds and rain and pressure changes will not give you a copy of the storm. It will give you something different, which would still be interesting, but it's not the same.

Kurzweil shows other signs of kookery. Two hundred pills a day? Weekly intravenous transfusions? Drinking alkalized water because he's afraid of acidosis? The man is an intelligent engineer, but he's also an obsessive crackpot.

Oh, well. I'll make my own predictions. Magazines will continue to praise Kurzweil's techno-religion in sporadic bursts, and followers will continue to gullibly accept what he says because it is what they wish would happen. Kurzweil will die while brain-uploading and immortality are still vague dreams; he will be frozen in liquid nitrogen, which will so thoroughly disrupt his cells that even if we discover how to cure whatever kills him, there will be no hope of recovering the mind and personality of Kurzweil from the scrambled chaos of his dead brain. 2045 will come, and those of us who are alive to see it will look back and realize it is very, very different from what life was like in 2011, and also very different from what we expected life to be like. At some point, I expect artificial intelligences to be part of our culture, if we persist; they'll work in radically different ways than human brains, and they will revolutionize society, but I have no way of guessing how. Ray Kurzweil will be forgotten, mostly, but records of the existence of a strange shaman of the circuitry from the late 20th and early 21st century will be tucked away in whatever the future databases are like, and people and machines will sometimes stumble across them and laugh or zotigrate and say, "How quaint and amusing!", or whatever the equivalent in the frangitwidian language of the trans-entity circumsolar ansible network might be.

And that'll be kinda cool. I wish I could live to see it.

Source: "Singularitarianism?" at Pharyngula, via Prometheism.net