
Category Archives: Singularitarianism

Ethics of artificial intelligence – Wikipedia

Posted: October 30, 2021 at 3:28 pm


The ethics of artificial intelligence is the branch of the ethics of technology specific to artificially intelligent systems.[1] It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, in machine ethics. It also includes the issue of a possible singularity due to superintelligent AI.

The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots.[2] Robot ethics intersect with the ethics of AI. Robots are physical machines whereas AI can be only software.[3] Not all robots functions through AI systems and not all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit humans, their impact on individual autonomy, and their effects on social justice.

Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral.[4][5][6][7] To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.[8]

Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[9] More recently, academics and many governments have challenged the idea that AI can itself be held accountable.[10] A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.[11]

In 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[12]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[13] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to the implications of their ability to make autonomous decisions.[14][15] The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue.[16] They point to programs like the Language Acquisition Device, which can emulate human interaction.

Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity."[17] He suggests that it may be somewhat or possibly very dangerous for humans.[18] This possibility is addressed by a philosophy called Singularitarianism. The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[19]

There is discussion of creating tests to see whether an AI is capable of making ethical decisions. Alan Winfield concludes that the Turing test is flawed and that the requirement for an AI to pass the test is too low.[20] A proposed alternative is the Ethical Turing Test, which would improve on the current test by having multiple judges decide whether the AI's decision is ethical or unethical.[20]

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.[17]

However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[21] Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world, whose morality they would inherit, and whether they would end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation, etc.

In Moral Machines: Teaching Robots Right from Wrong,[22] Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis),[23] while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".[24]
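To make the transparency contrast concrete, here is a minimal sketch, not drawn from the cited authors: it uses scikit-learn's CART-style trees as a stand-in for ID3, on toy data. A decision tree's learned policy can be printed as explicit rules, while a trained neural network exposes only weights.

```python
# Minimal sketch of the transparency argument: a decision tree's learned
# policy can be dumped as human-readable rules, while a neural network's
# parameters offer no comparable audit trail. Toy data; scikit-learn assumed.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

iris = load_iris()
X, y = iris.data, iris.target

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# Every decision the tree will ever make is visible here, rule by rule.
print(export_text(tree, feature_names=iris.feature_names))

mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                    random_state=0).fit(X, y)
# The "explanation" of the network is just a pile of opaque weights.
print(sum(w.size for w in mlp.coefs_), "weights, no readable rules")
```

The point is not that trees are better classifiers, only that their decision procedure is inspectable in a way that norms of transparency and predictability can engage with.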

According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deep fakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that don't require a human controller.[25]

A review of 84 ethics guidelines for AI found 11 clusters of principles: transparency; justice and fairness; non-maleficence; responsibility; privacy; beneficence; freedom and autonomy; trust; sustainability; dignity; and solidarity.[26]

Luciano Floridi and Josh Cowls created an ethical framework for AI built on four principles of bioethics (beneficence, non-maleficence, autonomy and justice) plus an additional AI-enabling principle: explicability.[27]

Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts.[28] Ben Goertzel and David Hart created OpenCog as an open source framework for AI development.[29] OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open-source AI beneficial to humanity.[30] There are numerous other open-source AI developments.

Unfortunately, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The IEEE has a standardisation effort on AI transparency.[31] The IEEE effort identifies multiple scales of transparency for different users. Further, there is concern that releasing the full capacity of contemporary AI to some organizations may be a public bad, that is, do more damage than good. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted an extraordinary blog on this topic, asking for government regulation to help determine the right thing to do.[32]

Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term.[33] The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.[34][35][36]

On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its Policy and investment recommendations for trustworthy Artificial Intelligence.[37] This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally.[38]

AI has become increasingly inherent in facial and voice recognition systems. Some of these systems have real business applications and directly impact people. These systems are vulnerable to biases and errors introduced by their human creators. Moreover, the data used to train these AI systems can itself be biased.[39][40][41][42] For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender: these AI systems were able to detect the gender of white men more accurately than that of darker-skinned men.[43] Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's.[44] Furthermore, Amazon terminated its use of AI in hiring and recruitment because the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that came mostly from male candidates.[45]
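Disparities like these are typically surfaced by a simple audit: compute the error rate separately for each demographic group and compare. A minimal sketch with invented records (none of this is the cited studies' data):

```python
# Hedged sketch of a per-group error audit: compare a classifier's error
# rate across demographic groups. All records below are invented.
from collections import defaultdict

# (true_label, predicted_label, group) triples; hypothetical audit records.
records = [
    (1, 1, "group_a"), (0, 0, "group_a"), (1, 1, "group_a"), (1, 0, "group_a"),
    (1, 0, "group_b"), (0, 1, "group_b"), (1, 0, "group_b"), (0, 0, "group_b"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for truth, pred, group in records:
    totals[group] += 1
    errors[group] += truth != pred  # bool counts as 0 or 1

for group in sorted(totals):
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
# A large gap between groups (here 25% vs 75%) is the kind of disparity
# the gender-classification and speech-transcription studies reported.
```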

Bias can creep into algorithms in many ways. For example, Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias.[46] In natural language processing, problems can arise from the text corpus, the source material the algorithm uses to learn about the relationships between different words.[47]
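In NLP specifically, corpus-induced bias can be probed directly on pretrained word embeddings, where vector arithmetic often reproduces stereotyped associations. A hedged sketch using gensim's downloader (the model name is one of gensim's published downloads; it is fetched over the network once, and the exact results depend on the training corpus):

```python
# Sketch of probing corpus-induced bias in pretrained word embeddings.
# Requires gensim; api.load downloads the vectors on first use.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe model

# Classic analogy probe: which words stand to "woman" as "doctor" stands
# to "man"? Embeddings trained on stereotyped corpora often rank
# occupation words like "nurse" near the top.
for word, score in vectors.most_similar(positive=["doctor", "woman"],
                                        negative=["man"], topn=5):
    print(f"{word}: {score:.3f}")
```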

Large companies such as IBM, Google, etc. have made efforts to research and address these biases.[48][49][50] One solution for addressing bias is to create documentation for the data used to train AI systems.[51][52]
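Such documentation can be as lightweight as a machine-readable record that travels with the training set. The sketch below is loosely inspired by the "datasheets for datasets" idea; every field name and value here is hypothetical, not a standard schema:

```python
# Hedged sketch of dataset documentation: a small, machine-readable record
# describing provenance and known gaps, stored alongside the training data.
# Field names and values are illustrative only.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class Datasheet:
    name: str
    collected_from: str          # where the raw data came from
    collection_period: str       # when it was gathered
    known_skews: list = field(default_factory=list)   # documented biases
    intended_uses: list = field(default_factory=list)

sheet = Datasheet(
    name="resume-screening-v1",
    collected_from="internal applicant records",
    collection_period="2004-2014",
    known_skews=["majority of examples come from male applicants"],
    intended_uses=["research on ranking models; not for live hiring"],
)

# Ship the datasheet next to the data so downstream users see the caveats.
print(json.dumps(asdict(sheet), indent=2))
```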

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries and that almost no one is making an effort to identify or correct it.[53] There are some open-source tools[54] from civil-society groups that aim to bring more awareness to biased AI.

"Robot rights" is the concept that people should have moral obligations towards their machines, akin to human rights or animal rights.[55] It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society.[56] These could include the right to life and liberty, freedom of thought and expression, and equality before the law.[57] The issue has been considered by the Institute for the Future[58] and by the U.K. Department of Trade and Industry.[59]

Experts disagree on how soon specific and detailed laws on the subject will be necessary.[59] Glenn McGee reported that sufficiently humanoid robots might appear by 2020,[60] while Ray Kurzweil sets the date at 2029.[61] Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.[62]

The rules for the 2003 Loebner Prize competition envisioned the possibility of robots having rights of their own:

61. If in any given year, a publicly available open-source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.[63]

In October 2017, the android Sophia was granted "honorary" citizenship in Saudi Arabia, though some considered this to be more of a publicity stunt than a meaningful legal recognition.[64] Some saw this gesture as openly denigrating of human rights and the rule of law.[65]

The philosophy of Sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.

Joanna Bryson has argued that creating AI that requires rights is both avoidable and would in itself be unethical, both as a burden to the AI agents and to human society.[66]

Joseph Weizenbaum[67] argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as a customer service representative, a therapist, a nursemaid for the elderly, a soldier, a judge, or a police officer.

Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."[68]

Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer," pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[68] However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in essence, nothing more than fancy curve-fitting machines: using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases become formalized and ingrained, which makes them even more difficult to spot and fight against.[69]

Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum, these points suggest that AI research devalues human life.[67]

AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes. Bill Hibbard[70] writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."

As the widespread use of autonomous cars becomes increasingly imminent, the new challenges raised by fully autonomous vehicles must be addressed.[71][72] Recently, there has been debate as to the legal liability of the responsible party if these cars get into accidents.[73][74] In one report, a driverless car hit a pedestrian while the driver was inside the car but the controls were fully in the hands of the computer. This led to a dilemma over who was at fault for the accident.[75]

In another incident, on March 19, 2018, Elaine Herzberg was struck and killed by a self-driving Uber in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, the pedestrian, the car company, or the government should be held responsible for her death.[76]

Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary.[77] Thus, it falls on governments to regulate drivers who over-rely on autonomous features, as well as to educate them that these are technologies that, while convenient, are not a complete substitute. Before autonomous cars become widely used, these issues need to be tackled through new policies.[78][79][80]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomy.[13][81] On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented.[82] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[83][15] Some researchers state that autonomous robots might be more humane, as they could make decisions more effectively.[84]

Within the last decade, there has been intensive research into autonomous power with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."[85] From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, which is why there should be a set moral framework that the AI cannot override.[86]

There has been a recent outcry with regard to the engineering of artificial intelligence weapons, one that has included fears of a robot takeover of mankind. AI weapons present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea. Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition[87] to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.[88]

"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[89]

Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology." These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.[88]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but research investigating AI's long-run social impact has spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".[90]

Approaches like machine learning with neural networks can result in computers making decisions that they and the humans who programmed them cannot explain. It is difficult for people to determine if such decisions are fair and trustworthy, leading potentially to bias in AI systems going undetected, or people rejecting the use of such systems. This has led to advocacy and in some jurisdictions legal requirements for explainable artificial intelligence.[91]
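A common partial remedy is to attach model-agnostic explanations to an otherwise opaque model. As one illustration among many possible techniques, the sketch below uses permutation importance from scikit-learn on toy data; it is a sketch of the general approach, not a full explainability solution:

```python
# Sketch of a model-agnostic explanation: permutation importance scores how
# much a black-box model's accuracy drops when each feature is shuffled.
# Toy data; one explainability technique among many.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the accuracy drop.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranked = sorted(zip(result.importances_mean, data.feature_names),
                reverse=True)
for score, name in ranked[:5]:
    print(f"{name}: mean accuracy drop {score:.3f}")
```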

Many researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals.[92] In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[93]

However, Bostrom has also asserted that, instead of overwhelming the human race and leading to our destruction, a superintelligence could help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to enhance ourselves.[94]

The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.[92][93] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have such an adaptation.[95] AI researchers such as Stuart J. Russell,[96] Bill Hibbard,[70] Roman Yampolskiy,[97] Shannon Vallor,[98] Steven Umbrello[99] and Luciano Floridi[100] have proposed design strategies for developing beneficial machines.

There are many organisations concerned with AI ethics and policy, public and governmental as well as corporate and societal.

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit, The Partnership on AI to Benefit People and Society, to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as an open platform for discussion about artificial intelligence. Apple joined in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.[101]

The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organization.

Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organizations to ensure AI is ethically applied.

Intergovernmental initiatives:

Governmental initiatives:

Academic initiatives:

The role of fiction with regards to AI ethics has been a complex one. One can distinguish three levels at which fiction has impacted the development of artificial intelligence and robotics: Historically, fiction has been prefiguring common tropes that have not only influenced goals and visions for AI, but also outlined ethical questions and common fears associated with it. During the second half of the twentieth and the first decades of the twenty-first century, popular culture, in particular movies, TV series and video games, have frequently echoed preoccupations and dystopian projections around ethical questions concerning AI and robotics. Recently, these themes have also been increasingly treated in literature beyond the realm of science fiction. And, as Carme Torras, research professor at the Institut de Robòtica i Informàtica Industrial (Institute of Robotics and Industrial Computing) at the Technical University of Catalonia notes,[118] in higher education, science fiction is also increasingly used for teaching technology-related ethical issues in technological degrees.

History

Historically speaking, the investigation of the moral and ethical implications of thinking machines goes back at least to the Enlightenment: Leibniz already posed the question of whether we might attribute intelligence to a mechanism that behaves as if it were a sentient being,[119] and so did Descartes, who described what could be considered an early version of the Turing test.[120]

The romantic period has several times envisioned artificial creatures that escape the control of their creator with dire consequences, most famously in Mary Shelley's Frankenstein. The widespread preoccupation with industrialization and mechanization in the 19th and early 20th century, however, brought the ethical implications of unhinged technical developments to the forefront of fiction: R.U.R. (Rossum's Universal Robots), Karel Čapek's play of sentient robots endowed with emotions used as slave labor, is not only credited with the invention of the term "robot" (derived from the Czech word for forced labor, robota) but was also an international success after it premiered in 1921. George Bernard Shaw's play Back to Methuselah, published in 1921, questions at one point the validity of thinking machines that act like humans; Fritz Lang's 1927 film Metropolis shows an android leading the uprising of the exploited masses against the oppressive regime of a technocratic society.

The Impact of Fiction on Technological Development

While the anticipation of a future dominated by potentially indomitable technology has fueled the imagination of writers and filmmakers for a long time, one question has been less frequently analyzed, namely, to what extent fiction has played a role in providing inspiration for technological development. It has been documented, for instance, that the young Alan Turing saw and appreciated G.B. Shaw's play Back to Methuselah in 1933[121] (just 3 years before the publication of his first seminal paper,[122] which laid the groundwork for the digital computer), and he would likely have been at least aware of plays like R.U.R., which was an international success and translated into many languages.

One might also ask which role science fiction played in establishing the tenets and ethical implications of AI development: Isaac Asimov conceptualized his Three Laws of Robotics in the 1942 short story "Runaround", part of the short story collection I, Robot; Arthur C. Clarke's short story "The Sentinel", on which Stanley Kubrick's film 2001: A Space Odyssey is based, was written in 1948 and published in 1951. Another example (among many others) would be Philip K. Dick's numerous short stories and novels, in particular Do Androids Dream of Electric Sheep?, published in 1968, which features its own version of a Turing test, the Voight-Kampff test, to gauge the emotional responses of androids indistinguishable from humans. The novel later became the basis of the influential 1982 movie Blade Runner by Ridley Scott.

Science fiction has been grappling with the ethical implications of AI developments for decades, and has thus provided a blueprint for ethical issues that might emerge once something akin to general artificial intelligence has been achieved: Spike Jonze's 2013 film Her shows what can happen when a user falls in love with the seductive voice of his smartphone operating system; Ex Machina, on the other hand, asks a more difficult question: if confronted with a clearly recognizable machine, made human only by a face and an empathetic and sensual voice, would we still be able to establish an emotional connection, still be seduced by it? (The film echoes a theme already present two centuries earlier, in the 1817 short story "The Sandman" by E.T.A. Hoffmann.)

The theme of coexistence with artificial sentient beings is also the theme of two recent novels: Machines Like Me by Ian McEwan, published in 2019, involves (among many other things) a love triangle involving an artificial person as well as a human couple. Klara and the Sun by Nobel Prize winner Kazuo Ishiguro, published in 2021, is the first-person account of Klara, an "AF" (artificial friend), who is trying, in her own way, to help the girl she is living with, who, after having been "lifted" (i.e. having been subjected to genetic enhancements), is suffering from a strange illness.

TV Series

While ethical questions linked to AI have been featured in science fiction literature and feature films for decades, the emergence of the TV series as a genre allowing for longer and more complex story lines and character development has led to some significant contributions that deal with the ethical implications of technology. The Swedish series Real Humans (2012–2013) tackled the complex ethical and social consequences linked to the integration of artificial sentient beings in society. The British dystopian science fiction anthology series Black Mirror (2013–2019) was particularly notable for experimenting with dystopian fictional developments linked to a wide variety of recent technology developments. Both the French series Osmosis (2020) and the British series The One deal with the question of what can happen if technology tries to find the ideal partner for a person.

Future Visions in Fiction and Games

The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between types of software is between the sentient and the non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, created the system to give medical assistance in case of emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.[123]

The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games.[124] It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power through a global scale neural network. This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.

Over time, debates have tended to focus less and less on possibility and more on desirability,[125] as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.

Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.[126]

Read more:

Ethics of artificial intelligence - Wikipedia

Posted in Singularitarianism | Comments Off on Ethics of artificial intelligence – Wikipedia

The singularity is not near: The intellectual fraud of the …

Posted: December 19, 2020 at 8:24 am

Technology was identified as the true official religion of the modern state more than seventy years ago by the late Christian anarchist philosopher Jacques Ellul. A remarkable man, and a leader of the French underground resistance who sheltered refugees from the Holocaust, Ellul survived a global catastrophe that was enabled by scientists and engineers only to find that these same technicians, these false priests, would rule the century. And how he loathed them. "Particularly disquieting is the gap between the enormous power they wield and their critical ability, which must be estimated as null," he wrote.

If, as Ellul has it, technology is the state religion, Singularitarianism must be seen as its most extreme and fanatical sect. It is the Opus Dei of the postwar church of gadget worship. Ray Kurzweil may be the best-known prophet of this order, but he was not the first. The true father of Singularitarianism is a sci-fi author and retired mathematics professor from Wisconsin named Vernor Vinge. His earliest written exposition of the idea appeared in the January 1983 issue of Omni, an oddball science magazine founded by Kathy Keeton, once "among the highest-paid strippers in Europe," according to her New York Times obituary, but better known for promoting quack cancer cures and for cofounding Penthouse with her husband, Bob Guccione. In this esteemed journal, amid articles on sea monkeys, apemen and living dinosaurs, Vinge forecast a looming technological "singularity" in which computer intelligence would exceed the comprehension of its human creators. The remarkable exponential growth curve of technological advancement was not about to level off, Vinge proclaimed, but rather to accelerate beyond all imagining. "We will soon create intelligences greater than our own," Vinge wrote. Unlike later writers, he did not see this as necessarily a positive development for humanity. "Physical extinction may not be the scariest possibility," he wrote. "Think of the different ways we relate to animals." In other words, our new robot overlords might reduce humans to slaves, livestock, or, if we're lucky, pets.

Like many creative types, Vinge lacked the business savvy to fully exploit the market potential of his ideas. That task fell to Ray Kurzweil. A consummate brand builder, Kurzweil turned Vinge's frown upside-down and recast the Singularity as a great big cosmic party, to great commercial success. Douglas Hofstadter, the scientist and author, derided Kurzweil's theses as "a very bizarre mixture of ideas that are solid . . . with ideas that are crazy." Nevertheless, it was a winning formula. By 2011, Time magazine named Kurzweil one of the one hundred most influential people in the world and endorsed the Singularity sect in a cover story. While seemingly preposterous, the magazine declared, the prospect of super-intelligent immortal cyborgs deserved sober, careful evaluation.

Even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth.

This is absurd. Science begins with doubt. Everything else is sales. And Kurzweil is more salesman than scientist. In his writing and speeches, he has recycled the same tired catchphrases and anecdotes again and again. His entire argument hangs on two magic words: Moore's Law, the theory that computer processing power grows exponentially each year. The theory, which was first conceived of by Intel cofounder Gordon Moore (and later named after him), doubles, incidentally, as a kind of advertisement for Intel microchips. Moore's Law also inspired Kurzweil's own Law of Accelerating Returns, which encapsulates his belief that the pace of all technological innovation is, over time, exponential. Within decades, Kurzweil figures, the unstoppable evolution of gadgetry will bring about the Singularity and all it entails: unlimited energy, superhuman AI, literal immortality, the resurrection of the dead, and the destiny of the universe, namely, the awakening of all matter and energy.

Kurzweil may not be much of a scientist, but he is an entertaining guru. His fake-it-till-you-make-it approach seems in good fun, except when he uses it to bluff through life-or-death problems. What's worse, powerful people take him seriously, because he is forever telling them what they'd like to hear and zealously defending the excesses of consumer capitalism. Like techno-utopians such as Peter Thiel, Kurzweil has long argued that corporate interests should be calling the shots in the new paradigms of the future. Such views are unsurprising coming from a longtime corporate executive and salesman. Fossil fuels wrecking the planet? No worries, Kurzweil declares. We'll crack the problem of cold fusion soon, and nanobots (always with the nanobots!) will restore the ruined environment. As America's fortunes and prospects faded through the aughts, Kurzweil's sanguine reveries sold more copies than ever, and the author insisted that things were better than ever and soon to be even more amazing.

For every conceivable problem, there is a plan, and it's always the same plan: Someone in the future will invent something to solve it. Kurzweil has delivered the one true American faith the people were always waiting for, and it turns out to be an absurdly optimistic form of business-friendly millenarianism, which could pass for a satirical caricature of the tech worship Jacques Ellul identified.

The trick will be to survive a few more decades, until the invention of atom-scaled medical nanobots and digital backups of human consciousness. "We have the means right now to live long enough to live forever," Kurzweil writes. "But most baby boomers won't make it." This led to his other scammy obsession: life extension. To help his own rapidly aging generation survive until the arrival of the technological tipping point when they might upload their memories and personalities to a Google cloud server (around 2045, he figures), he promotes a program of diet, exercise, and unproven life-extending supplements. If all else fails to ward off the Reaper, one can always have one's body or brain frozen for later resuscitation, a process known as cryonics, which Kurzweil endorses as a last resort.

Kurzweil's morbid obsession with disease and death led him into the depths of tech-abetted unconventional medicine, where many a Singularitarian followed. He received a diabetes diagnosis at age thirty-five. Displeased with insulin treatment, he set out to find a better way. The result was an idiosyncratic and ever-changing menu of herbal medicine, plus hundreds of daily nutritional supplements and a custom fitness regimen. The details are laid out in two books that Kurzweil co-wrote with his doctor, Terry Grossman: "Fantastic Voyage: Live Long Enough to Live Forever" and "Transcend: Nine Steps to Living Well Forever." The latter includes sixty-nine pages of recipes, including one for carrot salad sweetened with stevia, yum yum. Skeptic magazine slammed "Fantastic Voyage" as "the triumph of hope over evidence and common sense" and suggested that some of its advice might actually be harmful.

Kurzweil and Grossman shamelessly cashed in on their presumed authority by selling loosely regulated supplements to credulous consumers under the label of Ray and Terry's Longevity Products ("where science and nutrition meet"). The authors' website shills dubious formulations including an $86 "Anti-Aging MultiPack" that promises a one-month supply of "smart nutrients." As proof of efficacy, Kurzweil offers himself. Although he is seventy at this writing, he has long claimed that his true biological age was twenty years younger. The lens suggests otherwise. In 2014, Kurzweil began sporting a new hairdo: longer, straighter, and several shades darker than before. The sudden change worried some commenters on his website, kurzweilai.net. Was it a hairpiece? An unfortunate dye job? Or maybe Kurzweil had finally stumbled across a real miracle pill?

* * *

I am by no means the first to label Singularitarianism a new religion or a cult. Kurzweil himself has said the comparison was understandable, given the preoccupation with mortality. However, he rejects the argument that his sect is religious in nature, because he did not come to it as a spiritual seeker. Rather, Kurzweil writes, he became a Singularitarian as a result of "practical efforts to make optimal tactical decisions in launching technology enterprises." Startups showed him the way!

Being a Singularitarian, Kurzweil claims, is not a matter of faith but one of understanding. This is a refrain Singularitarians share with Scientologists, for L. Ron Hubbard always marketed his doctrines as technology. This tic makes Singularitarians impossible to argue with. Because they believe that they have arrived at their beliefs scientifically, anyone who disputes their ludicrous conclusions must be irrational. If this sect did not have the ears of so many powerful men in business, politics, and military affairs, its leaders might seem clownish. But they are serious, dangerously so.

# # #

Corey Pein's new book, "Live Work Work Work Die: A Journey to the Savage Heart of Silicon Valley," is available now from Metropolitan Books. Read Salon's review here.

See the original post here:

The singularity is not near: The intellectual fraud of the ...

Posted in Singularitarianism | Comments Off on The singularity is not near: The intellectual fraud of the …

What it means to be a cyborg in 2019 – Quartz

Posted: November 23, 2019 at 12:01 pm

I have a four-foot-tall robot in my house that plays with my kids. Its name is Jethro.

Both my daughters, aged 5 and 9, are so enamored with Jethro that they have each asked to marry it. For fun, my wife and I put on mock weddings. Despite the robot being mainly for entertainment, its very basic artificial intelligence can perform thousands of functions, including dancing and teaching karate, which my kids love.

The most important thing Jethro has taught my kids is that it's totally normal to have a walking, talking machine around the house that you can hang out with whenever you want to.

Given my daughters' semi-regular use of smartphones and tablets, I have to wonder how this will affect them in the future. Will they have any fear of technologies like driverless cars? Will they take it for granted that machine intelligences and avatars on computers can be their best friends, or even their bosses?

Will marrying a super-intelligent robot in 20 years be a natural decision? Even though I love technology, I'm not sure how I would feel about having a robot-in-law. But my kids might think nothing of it.

This is my story of transhumanism.


My transhumanism journey began in 2003 when I was reporting a story for National Geographic in Vietnam's demilitarized zone and I almost stepped on a landmine.

I remember my guide roughly shoving me aside and pointing to the metal object half sticking out of the ground in front of me.

I stared at the device that would have completely blown my legs off had my boot tripped the mine. I had just turned 30. The experience left me shaken. And it kept haunting me.

That night as I lay tense and awake in my hotel room, I had the epiphany that has helped define the rest of my life: I decided that the most important thing in my existence was to fight for survival. To put it another way: My goal was to never die.

Because I was not religious, I immediately turned to the thing that gave meaning to my world: science and technology. I took a leap of faith and made a wager that day. I later called this (and even later, dedicated a book to it) the transhumanist wager.


My idea for an immortality wager came from Pascal's Wager, the famous bet that caught on in the 17th century that loosely argued it was better to believe in God than not to, because you would be granted an afterlife if there was indeed a God. My transhumanist wager was based in my belief that it's better to dedicate our resources to science and technology to overcome death while we're still alive, so we don't ever have to find out whether there is an afterlife or not. It turns out I wasn't alone in my passion to live indefinitely through science. A small social movement, mostly of academics and researchers, was tackling similar issues, starting organizations, and funding research.

Some of them called themselves transhumanists.

Fast-forward 16 years from my landmine incident, and transhumanism has grown way beyond its main mission of just overcoming death with science.

Now the movement is the de facto philosophy (maybe even the religion) of Silicon Valley. It encapsulates numerous futurist fields: singularitarianism, cyborgism, cryonics, genetic editing, robotics, AI, biohacking, and others.

Biohacking in particular has taken off: the practice of physically hacking one's body with science, changing and augmenting our physiology the same way computer hackers would infiltrate a mainframe.

It's pretty obvious why it has emerged as such a big trend: It attracts the youth.

Not surprisingly, worrying about death is something that older people usually do (and, apparently, those younger people who almost step on landmines). Most young people feel invincible. But tell young people they can take brain drugs called nootropics that make them super smart, or give them special eye drops that let them see in the dark, or give them a chip implant that enhances human ability (like the one I have), and a lot of young people will go for it.

In 2016, I ran for the US presidency as the Transhumanist Party nominee. To get support from younger biohackers, my team and I journeyed on the Immortality Bus (my 38-foot coffin-shaped campaign bus) to Grindfest, the major annual biohacking meet-up in Tehachapi, California. In an old dentist's chair in a garage, biohackers injected me with a horse syringe containing a small radio-frequency-identification implant that uses near-field communication technology, the same wireless frequency used in most smartphones. The tiny device (it's about the size of a grain of rice) was placed just under the skin in my hand. With my chip, I could start a car, pay with bitcoin, and open my front door with a lock reader.

Four years later, I still have the implant and use it almost every day. For surfers or joggers like myself, for example, it's great because I don't have to carry keys around.

One thing I do have to navigate is how some religious people view me once they understand I have one. Evangelical Christians have told me that an implant is the "mark of the beast," as in from the Bible's Book of Revelation.

Even though I'm tagged by conspiracy theorists as a potential contender for the Antichrist, I can't think of any negatives in my own experiences to having a chip implant. But as my work in transhumanism has reached from the US military to the World Bank to many of the world's most well-known universities, my chip implant only exacerbates this conspiracy.

While people often want to know what other things I've done to my body, in reality becoming a cyborg is a lot less futuristic and drastic than people think.

For me and for the thousands of people around the world who have implants, it's all about functionality. An implant simply makes our lives easier and more efficient. Mine also sends out pre-written text messages when people's phones come within a few feet of me, which is a fun party trick.

But frankly, a lot of the most transformative technology is still being developed, and if you're healthy like me, there's really not much benefit in doing a lot of biohacking today.

I take nootropics for better brain memory, but there's no conclusive research I know of showing that they actually work yet. I've done some brainwave therapy, sometimes called direct neurofeedback, or biofeedback, but I didn't see any lasting changes. I fly drones for fun, and of course I also have Jethro, our family robot.

For the most part, members of the disabled community are the ones who are truly benefiting from transhumanist technologies today. If you have an arm shot off in a war, it's cyborg science that gives you a robot arm controlled by your neural system that allows you to grab a beer, play the piano, or shake someone's hand again.

But much more dramatic technology is soon to come. And the hope is that it will be available, and accessible, to everyone.

I asked to be added to a volunteer list for an experiment that will place implants in people's brains that would allow us to communicate telepathically, using AI. (Biohacking trials like this are secretive because they are coming under more intense legal scrutiny.) I'm also looking into getting a facial recognition security system for my home. I might even get a pet dog robot; these have become incredibly sophisticated, have fur softer than the real thing (that doesn't shed all over your couch or trigger allergies) and can even act as security systems.

Beyond that, people are using stem cells to grow new teeth, genetic editing to create designer babies, and exoskeleton technology that will likely allow a human to run on water in the near future.

Most people generally focus on one aspect of transhumanism, like just biohacking, or just AI, or just brainwave-tech devices. But I like to try it all, embrace it all, and support it all. Whatever new transhumanist direction technology takes, I try to take it all in and embrace the innovation.

This multi-faceted approach has worked well in helping me build a bridge connecting the various industries and factions of the transhumanist movement. It's what inspired me to launch presidential and California gubernatorial campaigns on a transhumanist platform. Now I'm embarking on a new campaign in 2020 for US president as a Republican, hoping to get conservatives to become more open-minded about the future.

The amount of money flowing into transhumanist projects is growing into many billions of dollars. The life extension business of transhumanism will be a $600 billion industry by 2025, according to Bank of America. This is no time for transhumanism to break apart into many different divisions, and it's no time to butt heads. We need to unite in our aim to truly change the human being forever.

Transhumanists (it doesn't matter what kind you are) believe they can be more than just human. The word "natural" is not in our vocabulary. There's only what transhumanists can do with the tools of science and technology they create. That is our great calling: to evolve the human being into something better than it is.

Because transhumanism has grown so broadly by now, not all transhumanists agree with me on substantially changing the human being. Some believe we should only use technology to eliminate suffering in our lives. Religious transhumanists believe we should use brain implants and virtual reality to improve our morality and religious behavior. Others tell me politics and transhumanism should never mix, and that we must always keep science out of the hands of the government.

We need unity of some significant sort, because as we grow at such a fast rate there are a lot of challenges ahead. For example, the conservative Christian Right wants to enact moratoriums against transhumanism. The anarcho-primitivists, led by people like the primitivist philosopher and author John Zerzan (who I debated once at Stanford University), want to eliminate much technology and go back to a hunting-gathering lifestyle, which they believe is more in tune with Earth's original ecology. And finally, we must be careful that the so-called one percent doesn't take transhumanist technology and leave us all in the dust, becoming gods themselves with radical tech while not sharing the benefits with humanity.

I personally believe the largest danger of the transhumanist era is the fact that within a few decades, we will have created super-intelligent AI. What if this new entity simply decides it doesn't like humans? If something is more sophisticated, powerful, intelligent, and resilient than humans, we will have a hard time stopping it if it wants to harm or eliminate us.

Whatever happens in the future, we must take greater care than we ever have before as our species enters the transhumanist age. For the first time, we are on the verge of transforming the physical structure of our bodies and our brains. And we are inventing machines that could end up being more intelligent and powerful than we are. This type of change requires that not only governments act together, but also cultures, religions, and humanity as a whole.

In the end, I believe that a lot more people will be on board with transhumanism than admit it. Nearly all of us want to eliminate disease, protect our families from death, and create a better path and purpose for science and technology.

But I also realize that this must be done ever so delicately, so as not to prematurely push our species into crisis with our unbridled arrogance. One day, we humans may look back and revel in how far our species has evolved: into undying mammals, cyborgs, robots, and even pure living data. And the most important part will be to be able to look back and know we didn't destroy ourselves to get there.

Excerpt from:

What it means to be a cyborg in 2019 - Quartz

Posted in Singularitarianism | Comments Off on What it means to be a cyborg in 2019 – Quartz

The Furthest Exit: Bannon’s complex agenda – Village

Posted: May 26, 2017 at 3:54 am

Steve Bannon, President Trump's chief strategist, was removed from the National Security Council in early April. Among the Kremlinologists who watch the Trump White House, this has been interpreted as a setback for the man whose neo-reactionary philosophy provides the guiding principles of Trumpism: Islamophobia, misogyny, xenophobia, and excited anticipation of a new American revolution. But Bannon's ousting has also been called a disguised promotion, as he is restored to his proper role of the mostly unseen puppet-master.

In the first part of this article, in last month's issue, I put Bannon in the context of the alt-right and drew the connections between him, Gamergate, Milo Yiannopoulos, 4chan and Alexander Dugin. Here I want to continue this profile of Bannon by looking at his political philosophy.

Bannon subscribes to an esoteric version of history known as the Fourth Turning. Developed by amateur historians William Strauss and Neil Howe in the 1990s, the Fourth Turning applies the logic of cyclical history to the United States. Each turning represents a distinctive atmosphere that dominates a generation. Or better yet, to borrow a phrase from True Detective, a "psychosphere" encompassing the social field of possibilities.

In the first turning, following a period of crisis, the atmosphere is one of societal confidence built on a strong state and positively repressed individualism, known as The High. For Strauss and Howe, this period ran from the end of World War II to the Kennedy assassination in 1963. This is the era of the Greatest Generation and profound optimism in the American Dream.

This turning was followed by The Awakening, where the state-individual relation was inverted. Characterised by a dismantling of the social order and the pursuit of individual autonomy, it descended, over time, into generalised confusion as society splintered. It ran up until the 1980s, and was followed by The Unravelling where individualism became unfettered to such an extent that societal ties became exceptionally weak. Then follows the final stage, the one Bannon believes we are entering, of The Crisis, where conditions require a radical re-assertion of the collective.

One may wonder what the crisis was that shifted us into the Crisis. For Bannon the financial crisis of 2008 marked the moment when the individualism of the baby boomers was revealed in its full consequence: a stolen future. This is how he couches his vision when speaking to older conservative audiences, requiring that they own up to their failure and then pointing toward the rise, in line with Strauss and Howe, of a robust Millennial generation that will blast through the Crisis to get to the next High.

Bannon has in mind a quite specific segment of the Millennial generation: the pick-up artists, the meme-warriors of Twitter and 4chan, and the campus-touring Milo enthusiasts. It also includes the Chad nationalists, a group of norms who might not explicitly position themselves on the political spectrum, but tend to be on the right. Did Chad vote for Trump? It's implicit in his name, like some kind of metaphysical property. And it means Chad's dad and his girlfriend and his fraternity did too.

These people will quietly act to maintain Americanism, but not necessarily in a militant way. The decision might not always be theirs, however, as central to Bannon's vision is an existential confrontation with Islam that will radicalise the entire Millennial generation away from individualism and back toward statism, since only a strong state could win such a battle. For Bannon, there is a multi-faceted project to accomplish. The State in its current decadent baby-boomer form must be dismantled. Yet this "deconstruction" (his own term) is simply a prelude to a complete regeneration of society, to be accomplished through total war. On this point, we find ourselves hoping that Trump's personality will prove sufficiently resistant to Bannon's apocalypticism. Some say it is General James "Mad Dog" Mattis, Secretary of Defense, who will be the greatest obstacle to Bannon's vision. Surely this makes Mattis the world's most unlikely dove.

Maybe you know all this. You have heard about Bannon the puppeteer and the raw onslaught the alt-right has waged against Western culture. Yet the story is even murkier. Alongside the alt-right exists another position, neoreaction, and it is as close as this spectrum has to a philosophical system. Trumpist populism and Bannonesque esotericism are no doubt in the ascendant, but they are always threatened by their innate anarchism. There is a sense that the game might implode, that equilibrium could be restored, that a counter-populist movement might render Trump's reign an aberration.

Neoreaction, in contrast, is content to bide its time. Developed by the elusive Curtis Yarvin, under the pen name Mencius Moldbug, neoreaction binds a disdain for stagnated democratic politics with a cold formalist system of neo-monarchism. Given the inefficiencies of democracy, only a strong leader, fully free to implement a political programme, can steady the ship. Neoreaction sees itself as an antidote to the Whiggish misreading of history that traces a continuous record of human progress. Instead of the Enlightenment, neoreaction ushers in the Dark Enlightenment. The most consistent formulation of the Dark Enlightenment comes not from Moldbug, but from the British philosopher Nick Land. Land has a storied history, emerging as one of the most exciting Continental philosophers in the 1990s before abandoning academia and the West for a freelance writing career in Shanghai. Throughout, he has served as an intellectual lightning rod for the hugely diverse spectrum of alt-right and neoreactionary ideas. This has involved him extolling the virtues of cryptocurrencies, human biodiversity, and singularitarianism (space prevents me from developing these), but his most important contribution is his emphasis on the all-too-easily overlooked libertarian concept of exit.

In the 1970s and 1980s, libertarians became split over whether to enter representational politics. The entryist wing established the Libertarian Party in the United States as a means to introduce the idea of libertarianism into mainstream politics and out of obscurity; similar parties have cropped up in other countries. The American party was eventually bought out by the wealthy Koch Brothers, who pitched a bid for the presidency in 1980, but eventually gained a foothold in the Republican Party. Ron Paul long acted as the libertarian conscience in presidential debates, never expecting to win, but at least influencing the debate (the act is now performed by his son, Rand).

Yet not all libertarians believed that entryism was the best method. Rather than gain a voice in democratic politics, they would seek an exit from it. This concept of "exit" over "voice" is expressed in numerous proposals: isolated communes, seasteading (living on ships at sea), space colonisation, and perhaps most successfully, the development of a digital frontier. Land in particular praises this cyber-libertarian politics for its pragmatic ability to implement exit.

The most successful cyber-libertarians have been the cypherpunks. Originally a small group of privacy-conscious hackers, the cypherpunks planted the seed for the development of a digital currency, known as Bitcoin, that has allowed a large number of libertarians to opt out, within constraints, from state-backed finance. Bitcoin also acted as the bedrock for the emergence of darknet marketplaces, such as the infamous Silk Road, where illicit goods could be traded outside the gaze of the state. There is no denying, however, that these new, ungovernable worlds are only proto-libertarian, inasmuch as they do not bleed out into the real world.

At least, that's how it seemed until just a couple of years ago, when libertarians noticed a strange new phenomenon. Their word, exit, was entering the debate on the future of the European Union.

In a European context, the very concept of exit seems startling. And yet, that far more integrated union of states, the United States, has been haunted by the same concept, under the name of secession, almost since the very start. Since the Civil War of the 1860s, secessionism has taken the form of threats from the defeated South to dissolve the union. A more modern variation on the theme has been the possibility of Californian exit. The Californian strain, which blends technological religiosity and libertarian elitism, has its spiritual capital in Silicon Valley.

Silicon Valley is also, of course, the natural territory of PayPal founder Peter Thiel. An ambitious, driven and preternaturally gifted entrepreneur, Thiel is the perfect embodiment of this culture. His contrarian streak antedates his relatively recent alliance with Trump. As far back as 1999, Thiel co-authored, with David O. Sacks, The Diversity Myth, a book that may have seemed a little radical at the time, but would likely earn him a campus ban these days, if students were aware it existed. Lambasting the decline of the American university in a stew of gender politics, multicultural lip service and upended curricula, Thiel and Sacks portray the contemporary campus as a training ground for a new elite, recognisable to one another through their politically correct, stilted discourse. Politics is the natural home for such an elite, an arena where milquetoast personalities coast along through connections and survive primarily by causing as little disruption as possible. Thiel finds himself, therefore, within the recognisable orbit of alt-right concerns, especially those about campus indoctrination, political correctness and haughty elitism.

More than anything else, however, it is the almost pathological lack of political ambition on the part of humdrum, non-disruptive elitist politics that explains Thiel's decision to plump for Trump. In the past, Thiel has explained his libertarianism as an escape from politics (exit) and the construction of non-political futures through technological means. The means are varied, but involve a triple bet on cyberspace, outer space and seasteading. Thiel has placed a number of such bets, but perhaps the most interesting, politically, are those on Bitcoin-related projects: for instance, the creator of the rival digital currency Ethereum, Vitalik Buterin, was a recipient of a Thiel Fellowship scholarship. Buterin is a mild fellow and rarely gets mixed up in politics, but Thiel is content to invest in projects developed by contentious figures, as with his financial support of Hulk Hogan in a suit against the muckraking website Gawker. Thiel is also an investor in Urbit, a super-nerdy take on having a personal server.

Who developed Urbit? None other than Curtis Yarvin/Mencius Moldbug, of neoreactionary fame. Thiel is not just betting on Trump. He is betting on a Moldbuggian outcome where the state is finally recognised for what it is: a large company run by a CEO, but a special one since it is so powerful. With Trump installed as CEO-King, Thiel sees himself as its CTO (Chief Technology Officer), for now. Bannon, as an ex-Goldman Sachs trader, is by no means immune to such a perspective, and he too has been linked to Moldbug, but it is Trump who plays the part so well, all gold furniture and court intrigue.

As in all kingdoms, it is the machinations of court politics that will ultimately settle the direction of the meta-religion of the nation. For what is most consequential about the alt-right is its annihilation of what Moldbug called the Cathedral. The Cathedral describes a media-academic-cultural consensus with conditions for belonging: members must subscribe to the progressivist religion and must accept dogmas from feminism, multiculturalism and trans-rights activism.

What will fill the vacuum left by the collapsing Cathedral? Bannon offers the Catholic option of the Fourth Turning where redemption can be achieved through submission to meta-historical destiny. Thiel offers Protestantism without progressivism, but one that is even less defined in terms of outcome, preferring instead that society develop a taste for the risky rewards of exit. And in the midst of this struggle for the soul sits Trump, the most irreligious of kings, the physical embodiment of the emptiness to be filled, with one pussy-grabbing hand on the nuclear button and the other wrapped around his golf-club sceptre. 100 days in, 1346 to go.

by Paul Eliot-Ennis

Go here to read the rest:

The Furthest Exit: Bannon's complex agenda - Village


Singularitarianism | Transhumanism Wiki | Fandom powered …

Posted: March 6, 2017 at 3:03 pm

Singularitarianism is a moral philosophy based upon the belief that a technological singularity (the technological creation of smarter-than-human intelligence) is possible, and advocating deliberate action to bring it into effect and ensure its safety. While many futurists and transhumanists speculate on the possibility and nature of this technological development (often referred to as the Singularity), Singularitarians believe it is not only possible, but desirable if, and only if, guided safely. Accordingly, they might sometimes "dedicate their lives" to acting in ways they believe will contribute to its safe implementation.

The term "singularitarian" was originally defined by Extropian Mark Plus in 1991 to mean "one who believes the concept of a Singularity". This term has since been redefined to mean "Singularity activist" or "friend of the Singularity"; that is, one who acts so as to bring about the Singularity.[1]

Ray Kurzweil, the author of the book The Singularity Is Near, defines a Singularitarian as someone "who understands the Singularity and who has reflected on its implications for his or her own life".[2]

In his 2000 essay "Singularitarian Principles", Eliezer Yudkowsky describes four qualities that define a Singularitarian.[3]

In July 2000 Eliezer Yudkowsky, Brian Atkins and Sabine Atkins founded the Singularity Institute for Artificial Intelligence to work towards the creation of self-improving Friendly AI. The Singularity Institute's writings argue for the idea that an AI with the ability to improve upon its own design (Seed AI) would rapidly lead to superintelligence. Singularitarians believe that reaching the Singularity swiftly and safely is the best possible way to minimize net existential risk.
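
The shape of that "Seed AI" argument can be made concrete with a toy calculation. The sketch below is purely illustrative, with invented constants and function names (it is not drawn from any Singularity Institute text): it contrasts steady, externally supplied improvement, which accumulates linearly, with self-improvement that compounds on the system's current capability.

```python
# Toy model of the "Seed AI" feedback argument (all constants invented):
# steady progress adds a fixed increment each generation, while
# recursive self-improvement adds an increment proportional to the
# system's current capability, so it compounds rather than accumulates.

def steady_step(capability: float, increment: float = 0.1) -> float:
    """One generation of improvement supplied by outside engineers."""
    return capability + increment

def recursive_step(capability: float, rate: float = 0.1) -> float:
    """One generation in which the system improves its own design."""
    return capability * (1.0 + rate)

c_steady = c_recursive = 1.0
for generation in range(1, 51):
    c_steady = steady_step(c_steady)
    c_recursive = recursive_step(c_recursive)
    if generation % 10 == 0:
        print(f"gen {generation:2d}: steady={c_steady:5.1f}  recursive={c_recursive:9.1f}")
```

Under these made-up numbers the compounding track looks modest for many generations and then pulls away rapidly, which is both the force and the fragility of the extrapolation: everything hangs on whether the per-generation improvement rate really stays constant.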

Many believe a technological singularity is possible without adopting Singularitarianism as a moral philosophy. Although the exact numbers are hard to quantify, Singularitarianism is presently a small movement. Other prominent Singularitarians include Ray Kurzweil and Nick Bostrom.

Many critics have dismissed singularitarianism as a pseudoreligion of fringe science, often ridiculing the Singularity as "the Rapture for nerds".[4] However, some green anarchist militants have taken singularitarian rhetoric seriously enough to call for violent direct action to stop the Singularity.[5]


Original post:

Singularitarianism | Transhumanism Wiki | Fandom powered ...


Technological utopianism – Wikipedia

Posted: March 5, 2017 at 4:06 pm

Technological utopianism (often called techno-utopianism or technoutopianism) is any ideology based on the premise that advances in science and technology will eventually bring about a utopia, or at least help to fulfil one or another utopian ideal. A techno-utopia is therefore a hypothetical ideal society, in which laws, government, and social conditions are solely operating for the benefit and well-being of all its citizens, set in the near- or far-future, when advanced science and technology will allow these ideal living standards to exist; for example, post-scarcity, transformations in human nature, the abolition of suffering and even the end of death. Technological utopianism is often connected with other discourses presenting technologies as agents of social and cultural change, such as technological determinism or media imaginaries.[1]

Douglas Rushkoff, a leading theorist on technology and cyberculture, claims that technology gives everyone a chance to voice their own opinions, fosters individualistic thinking, and dilutes hierarchy and power structures by giving the power to the people.[2] He says that the whole world is in the middle of a new Renaissance, one that is centered on technology and self-expression. However, Rushkoff makes it clear that people "don't live their lives behind a desk with their hands on a keyboard".[3]

A tech-utopia does not disregard any problems that technology may cause,[4] but strongly believes that technology allows mankind to make social, economic, political, and cultural advancements.[5] Overall, technological utopianism views technology's impacts as extremely positive.

In the late 20th and early 21st centuries, several ideologies and movements, such as the cyberdelic counterculture, the Californian Ideology, transhumanism,[6] and singularitarianism, have emerged promoting a form of techno-utopia as a reachable goal. Cultural critic Imre Szeman argues technological utopianism is an irrational social narrative because there is no evidence to support it. He concludes that it shows the extent to which modern societies place faith in narratives of progress and technology overcoming things, despite all evidence to the contrary.[7]

Karl Marx believed that science and democracy were the right and left hands of what he called the move from the realm of necessity to the realm of freedom. He argued that advances in science helped delegitimize the rule of kings and the power of the Christian Church.[8]

19th-century liberals, socialists, and republicans often embraced techno-utopianism. Radicals like Joseph Priestley pursued scientific investigation while advocating democracy. Robert Owen, Charles Fourier and Henri de Saint-Simon in the early 19th century inspired communalists with their visions of a future scientific and technological evolution of humanity using reason. Radicals seized on Darwinian evolution to validate the idea of social progress. Edward Bellamy's socialist utopia in Looking Backward, which inspired hundreds of socialist clubs in the late 19th century United States and a national political party, was as highly technological as Bellamy's imagination. For Bellamy and the Fabian Socialists, socialism was to be brought about as a painless corollary of industrial development.[8]

Marx and Engels saw more pain and conflict involved, but agreed about the inevitable end. Marxists argued that the advance of technology laid the groundwork not only for the creation of a new society, with different property relations, but also for the emergence of new human beings reconnected to nature and themselves. At the top of the agenda for empowered proletarians was to increase the total productive forces as rapidly as possible. The 19th and early 20th century Left, from social democrats to communists, were focused on industrialization, economic development and the promotion of reason, science, and the idea of progress.[8]

Some technological utopians promoted eugenics. Holding that in studies of families, such as the Jukes and Kallikaks, science had proven that many traits such as criminality and alcoholism were hereditary, many advocated the sterilization of those displaying negative traits. Forcible sterilization programs were implemented in several states in the United States.[9]

H.G. Wells in works such as The Shape of Things to Come promoted technological utopianism.

The horrors of the 20th century (communist and fascist dictatorships, world wars) caused many to abandon optimism. The Holocaust, as Theodor Adorno underlined, seemed to shatter the ideal of Condorcet and other thinkers of the Enlightenment, which commonly equated scientific progress with social progress.[10]

"The Goliath of totalitarianism will be brought down by the David of the microchip." (Ronald Reagan, 1989)

A movement of techno-utopianism began to flourish again in the dot-com culture of the 1990s, particularly in the West Coast of the United States, especially based around Silicon Valley. The Californian Ideology was a set of beliefs combining bohemian and anti-authoritarian attitudes from the counterculture of the 1960s with techno-utopianism and support for libertarian economic policies. It was reflected in, reported on, and even actively promoted in the pages of Wired magazine, which was founded in San Francisco in 1993 and served for a number of years as the "bible" of its adherents.[11][12][13]

This form of techno-utopianism reflected a belief that technological change revolutionizes human affairs, and that digital technology in particular (of which the Internet was but a modest harbinger) would increase personal freedom by freeing the individual from the rigid embrace of bureaucratic big government. "Self-empowered knowledge workers" would render traditional hierarchies redundant; digital communications would allow them to escape the modern city, an "obsolete remnant of the industrial age".[11][12][13]

Similar forms of "digital utopianism" have often entered the political messages of parties and social movements that point to the Web, or more broadly to new media, as harbingers of political and social change.[14] Its adherents claim it transcended conventional "right/left" distinctions in politics by rendering politics obsolete. However, techno-utopianism disproportionately attracted adherents from the libertarian right end of the political spectrum. Therefore, techno-utopians often have a hostility toward government regulation and a belief in the superiority of the free market system. Prominent "oracles" of techno-utopianism included George Gilder and Kevin Kelly, an editor of Wired who also published several books.[11][12][13]

During the late 1990s dot-com boom, when the speculative bubble gave rise to claims that an era of "permanent prosperity" had arrived, techno-utopianism flourished, typically among the small percentage of the population who were employees of Internet startups and/or owned large quantities of high-tech stocks. With the subsequent crash, many of these dot-com techno-utopians had to rein in some of their beliefs in the face of the clear return of traditional economic reality.[12][13]

In the late 1990s and especially during the first decade of the 21st century, technorealism and techno-progressivism emerged among advocates of technological change as critical alternatives to techno-utopianism.[15][16] However, technological utopianism persists in the 21st century as a result of new technological developments and their impact on society. For example, several technical journalists and social commentators, such as Mark Pesce, have interpreted the WikiLeaks phenomenon and the United States diplomatic cables leak in early December 2010 as a precursor to, or an incentive for, the creation of a techno-utopian transparent society.[17] Cyber-utopianism, a term first coined by Evgeny Morozov, is another manifestation of this, in particular in relation to the Internet and social networking.

Bernard Gendron, a professor of philosophy at the University of Wisconsin–Milwaukee, defines four principles held by modern technological utopians in the late 20th and early 21st centuries.[18]

Rushkoff presents multiple claims surrounding the basic principles of technological utopianism.[19]

Critics claim that techno-utopianism's identification of social progress with scientific progress is a form of positivism and scientism. Critics of modern libertarian techno-utopianism point out that it tends to focus on "government interference" while dismissing the positive effects of the regulation of business. They also point out that it has little to say about the environmental impact of technology[22] and that its ideas have little relevance for much of the rest of the world, which is still relatively poor (see global digital divide).[11][12][13]

In his 2010 study System Failure: Oil, Futurity, and the Anticipation of Disaster, Canada Research Chairholder in cultural studies Imre Szeman argues that technological utopianism is one of the social narratives that prevent people from acting on the knowledge they have concerning the effects of oil on the environment.[7]

In a controversial article, "Techno-Utopians Are Mugged by Reality", the Wall Street Journal explores the concept of the violation of free speech by shutting down social media to stop violence. After consecutive nights of looting in British cities, British Prime Minister David Cameron argued that the government should have the ability to shut down social media during crime sprees so that the situation could be contained. A poll asked whether Twitter users would prefer to let the service be closed temporarily or keep it open so they could chat about the famous television show The X Factor; the responses overwhelmingly favored keeping it open for The X Factor. The negative social effect, on this reading, is that society is so addicted to technology that it cannot be parted from it even for the greater good. While many techno-utopians would like to believe that digital technology is for the greater good, it can also be used negatively to bring harm to the public.[23]

Other criticisms of a techno-utopia concern the human element. Critics suggest that a techno-utopia may lessen human contact, leading to a distant society. Another concern is the degree of reliance society may place on its technologies in these techno-utopian settings.[24] These criticisms are sometimes referred to as a technological anti-utopian view or a techno-dystopia.

Even today, the negative social effects of a technological utopia can be seen. Mediated communication such as phone calls, instant messaging and text messaging are steps towards a utopian world in which one can easily contact another regardless of time or location. However, mediated communication removes many aspects that are helpful in transferring messages. As it stands today, most text, email, and instant messages offer fewer nonverbal cues about the speaker's feelings than do face-to-face encounters.[25] This means that mediated communication can easily be misconstrued and the intended message not properly conveyed. With the absence of tone, body language, and environmental context, the chance of a misunderstanding is much higher, rendering the communication ineffective. In fact, mediated technology can be seen from a dystopian view because it can be detrimental to effective interpersonal communication. These criticisms would only apply to messages that are prone to misinterpretation, as not every text-based communication requires contextual cues. The limitations of lacking tone and body language in text-based communication are likely to be mitigated by video and augmented reality versions of digital communication technologies.[26]

Excerpt from:

Technological utopianism - Wikipedia


Sterling Crispin: Begin at the End – ArtSlant

Posted: February 28, 2017 at 7:59 pm

This essay was first published in the ArtSlant Prize 2016 Catalogue, on the occasion of the ArtSlant Prize Shortlist exhibition at SPRING/BREAK Art Show, from February 28 to March 6, 2016. Sterling Crispin is the ArtSlant Prize 2016 Third Prize winner. Other ArtSlant Prize 2016 catalogue essays: Brigitta Varadi & Tiffany Smith.

What does the end, The End, look like? Is it a transcendent experience, like the religious and singularitarians believe? Will humans transform into iridescent angels of ethereal nature, timeless in their march towards oneness? Will the end look like an episode of The Walking Dead? Like an episode of Doomsday Preppers? Will the remnants of society scrabble together the few resources left to find baseline survival, the underlying truth of excess? Does the end resemble a person sitting in a concrete box buried underground swallowing baked beans out of a can, or do we become waves of energy, identifiable not by our body but by a collection of experiences and tropes traveling from host to host, like a Westworld protagonist?

It is hard to conceive of a greater tension between these two visions and yet they exist, in tandem, in our collective imaginations. "To imagine civilization dwindling down to a couple thousand people, the Earth in environmental hell, taking global collapse to its conclusion: it's unimaginably terrible," says artist Sterling Crispin. "But," he continues, "take techno-optimism to its extreme, with humans living for hundreds of thousands of years, and it's also kind of unimaginable."

Sterling Crispin explores the end. From a fascination with Buddhist conceptions of oneness and propelled by the rapid technological pace in the era of Moore's Law,[1] Crispin takes as his subject the hurtling hulk of humanity as it flies towards some kind of imagined or real conclusion. "Transhumanism is on my mind a lot," he says.

Crispin's materials are birthed in today's technology. Aluminum server frames, Alexa towers, emergency water filtration systems, canned food, Bitcoin miners, extruded plastics and resins: these are the vocabulary of an end-times practice.

The singularity as a concept comes from a 1993 paper[2] by mathematician Vernor Vinge in which he states: "We are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater-than-human intelligence." The basic principle of singularitarianism is that, at a certain point, advancement will be out of human hands. Technology will be free to replicate and improve on its own. Futurist Ray Kurzweil believes that at this point a massive rupture in human culture, philosophy, and civilization will occur, characterized by the end of death and anthropocentric evolution. Kurzweil's end is an apocalypse of a different sort.[3] His is a moment of becoming and transcendence beyond the human.

Sterling Crispin, Self-Contained Investment Module and Contingency Package (Cloud-Enabled Modular Emergency-Enterprise Application Platform), 2015. Courtesy of the artist

The globe just scored a hat trick of hottest years on record. The doomsday clock has begun ticking towards midnight again. Amidst the statistical evidence, markers of impending doom keep pinging us. The cries of apocryphal evangelists are beginning to ring true.

With each passing meteor, every seemingly-significant date on an ancient calendar that appears on our Julian calendar, throngs proclaim the end with rapturous fervency. But the end interrogated by Crispin is not fanciful. His work has a sincere immediacy: "Trump's presidency and the collapse of civil society really gets you thinking about how fragile our whole global economy is and how loosely everything is held together." He goes on, "Next month, some catastrophe could happen that could close down international shipping, close off the internet; millions of people could die because there wasn't enough food. We're just on the edge of this all of the time."

Never has the world been so interconnected. In 2015, $16 trillion (21% of GWP) in merchandise changed hands across the world. In 2013, one fifth of the average American's diet was imported. This interdependence isn't trivial. As political forces around the world begin to pull back from the integrated system of globalized advanced capitalism, the connections holding it all together seem more tenuous than ever.

Crispin's suite of four sculptures, N.A.N.O., B.I.O., I.N.F.O., C.O.G.N.O. (2015), serves as a set of sentries. Each monolith is attached to an industry stock: N.A.N.O. comes with 100 shares of stock in a nanotechnology company; B.I.O., biotechnology; I.N.F.O., informatics; and C.O.G.N.O., cognitive research. If separated, these Gundam-like structures will track each other: a GPS display shows you where the other three horsemen are at all times. An emergency water purifier and food rations anchor the sculptures. N.A.N.O. et al. recall ancient statues guarding a crypt, protectors of humanity straight out of anime waiting for the right time to awaken and save the world. They reach towards the promises of advanced capital, zeroing in on the industries most likely to transform humanity via the singularity and save it from itself.

Sterling Crispin, N.A.N.O., B.I.O., I.N.F.O., C.O.G.N.O., 2015. Courtesy of the artist

Of course, if that doesn't work out, there's always a jerrycan of clean water and some freeze-dried beef.[4]

Self-Contained Investment Module and Contingency Package (2015), like N.A.N.O., is practical and sculptural. Inside an aluminum frame sits an ASIC Bitcoin mining tube, a Lifesaver Systems 4000 ultra-filtration water bottle, an emergency radio, Mayday emergency food rations, a knife, heirloom seeds, etc. The connections are barely waiting to be pieced together by the viewer: they're all there, visible in the cube. Crispin's work makes hard connections, direct metaphors, in his search for the aesthetic of the end. "The metaphors I use are heavy-handed but grounded in the utility of their function in reality," relays the artist.

This frankness fights the obfuscating nature of reality. Are things really as dire as they seem? It is readily accepted that things will be okay; we tell ourselves the same often enough. But why is it so difficult to accept that things might not be okay? Is it so difficult to imagine that, shit, we're fucked?

In some remote corner of the universe, flickering in the light of the countless solar systems into which it had been poured, there was once a planet on which clever animals invented cognition. It was the most arrogant and most mendacious minute in the history of the world; but a minute was all it was. After nature had drawn just a few more breaths the planet froze and the clever animals had to die.[5]

There is something reflected in the gleaming aluminum, the candy-apple neon, and low hum of Self-Contained. An optimism, perhaps, that if we structure things just right, if we allow for recursive corrections, if we prepare and adjust, we won't be the ones responsible for bringing the short reign of humanity to an end. We might not be Nietzsche's arrogant creatures doomed to death on a frozen, or in this case, scorched Earth. We may just be the ones that become what's next. Either way, be prepared.

Joel Kuennen

Joel Kuennen is the Chief Operations Officer and a Senior Editor at ArtSlant.

[1] Moore's Law holds that the number of transistors in an integrated circuit doubles every two years. This law has been extrapolated to include the exponential rate of computational and technological advancement more broadly.

[2] Vernor Vinge, The Coming Technological Singularity: How to Survive in the Post-Human Era (paper presented at the VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, March 30-31, 1993).

[3] Kurzweil, it should be noted, is driven to defeat death so that he may resurrect his father, who died early in Kurzweil's life. How human is that?!

[4] It's difficult to ignore humor when discussing the end. One cannot approach nothingness without being a bit glib.

[5] Friedrich Nietzsche, On Truth and Lies in a Non-Moral Sense, Trans. Ronald Spiers. 1873.

(Image at top: Sterling Crispin, Self-Contained Investment Module and Contingency Package (Cloud-Enabled Modular Emergency-Enterprise Application Platform) (detail), 2015. Courtesy of the artist)

The rest is here:

Sterling Crispin: Begin at the End - ArtSlant


Singularitarianism | Prometheism.net | Futurist Transhuman …

Posted: January 14, 2017 at 8:56 pm

Ray Kurzweil is a genius. One of the greatest hucksters of the age. That's the only way I can explain how his nonsense gets so much press and has such a following. Now he has the cover of Time magazine, and an article called 2045: The Year Man Becomes Immortal. It certainly couldn't be taken seriously anywhere else; once again, Kurzweil wiggles his fingers and mumbles a few catchphrases and upchucks a remarkable prediction: that in 35 years (a number dredged out of his compendium of biased estimates), Man (one, a few, many? How? He doesn't know) will finally achieve immortality (seems to me you'd need to wait a few years beyond that goal to know if it was true). Now we've even got a name for the Kurzweil delusion: Singularitarianism.

There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.

Wow. Sounds just like the Raelians, or Hercolubians, or Scientologists, or any of the modern New Age pseudosciences that appropriate a bit of jargon and blow it up into a huge mythology. Nice hyperbole there, though. Too bad the whole movement is empty of evidence.

One of the things I do really despise about the Kurzweil approach is their dishonest management of critics, and Kurzweil is the master. He loves to tell everyone what's wrong with his critics, but he doesn't actually address the criticisms.

Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, 'Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology.' But I don't believe I'm underestimating the challenge. I think they're underestimating the power of exponential growth."

This is wrong. For instance, I think reverse-engineering the general principles of a human brain might well be doable in a few or several decades, and I do suspect that we'll be able to do things in ten years, 20 years, a century that I can't even imagine. I don't find Kurzweil silly because I'm blind to the power of exponential growth, but because:

Kurzweil hasn't demonstrated that there is exponential growth at play here. I've read his absurd book, and his data is phony and fudged to fit his conclusion. He cheerfully makes stuff up or drops data that goes against his desires to invent these ridiculous charts.

I'm not claiming he underestimates the complexity of the brain, I'm saying he doesn't understand biology, period. Handwaving is not enough: if he's going to make fairly specific claims of immortality in 35 years, there had better be some understanding of the path that will be taken.

There is a vast difference between grasping a principle and implementing the specifics. If we understand how the brain works, if we can create a computer simulation that replicates and improves upon the function of our brain, that does not in any way imply that my identity and experiences can be translated into the digital realm. Again, Kurzweil doesn't have even a hint of a path that can be taken to do that, so he has no basis for making the prediction.

Smooth curves that climb upward into infinity can exist in mathematics (although Kurzweil's predictions don't live in a state of rigor that would justify calling them mathematical), but they don't work in the real world. There are limits. We've been building better and more powerful power plants for aircraft for a century, but they haven't gotten to a size and efficiency to allow me to fly off with a personal jetpack. I have no reason to expect that they will, either. (A short sketch after this list of objections makes the contrast between unbounded and limited growth concrete.)

While I don't doubt that science will advance rapidly, I also expect that the directions it takes will be unpredictable. Kurzweil confuses engineering, where you build something to fit a predetermined set of specifications, with science, in which you follow the evidence wherever it leads. Look at the so-called war on cancer: it isn't won, no one expects that it will be, but what it has accomplished is to provide limited success in improving health and quality of life, extending survival times, and developing new tools for earlier diagnosis. That's reality, and understanding reality is achieved incrementally, not by sudden surges in technology independent of human effort. It also generates unexpected spinoffs in deeper knowledge about cell cycles, signaling, gene regulation, etc. The problems get more interesting and diverse, and it's awfully silly of one non-biologist in 2011 to try to predict what surprises will pop out.

Kurzweil is a typical technocrat with limited breadth of knowledge. Imagine what happens IF we actually converge on some kind of immortality. Who gets it? If it's restricted, what makes Kurzweil think he, and not Senator Dumbbum who controls federal spending on health, or Tycoon Greedo the trillionaire, gets it? How would the world react if such a capability were available, and they (or their dying mother, or their sick child) don't have access? What if it's cheap and easy, and everyone gets it? Kurzweil is talking about a technology that would almost certainly destroy every human society on the planet, and he treats it as blithely as the prospect of getting new options for his cell phone. In case he hadn't noticed, human sociology and politics shows no sign of being on an exponential trend towards greater wisdom. Yeah, expect turbulence.

He's guilty of a very weird form of reductionism that considers a human life can be reduced to patterns in a computer. I have no stock in spiritualism or dualism, but we are very much a product of our crude and messy biology: we perceive the world through imprecise chemical reactions, our brains send signals by shuffling ions in salt water, our attitudes and reactions are shaped by chemicals secreted by glands in our guts. Replicating the lightning while ignoring the clouds and rain and pressure changes will not give you a copy of the storm. It will give you something different, which would be interesting still, but it's not the same.

Kurzweil shows other signs of kookery. Two hundred pills a day? Weekly intravenous transfusions? Drinking alkalized water because he's afraid of acidosis? The man is an intelligent engineer, but he's also an obsessive crackpot.
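
The limits objection above can be restated precisely: an exponential curve and a logistic (limited-growth) curve are nearly indistinguishable early on, yet one diverges and the other saturates, which is why early data alone cannot certify an indefinite trend. Here is a minimal illustrative sketch; the growth rate and carrying capacity are made-up parameters, not fitted to any real technology data.

```python
# Exponential vs. logistic growth: near-identical early on, utterly
# different later. Rate r and carrying capacity k are illustrative
# assumptions, not estimates from real data.
import math

def exponential(t: float, x0: float = 1.0, r: float = 0.5) -> float:
    return x0 * math.exp(r * t)

def logistic(t: float, x0: float = 1.0, r: float = 0.5, k: float = 100.0) -> float:
    # Closed-form solution of dx/dt = r * x * (1 - x / k):
    # growth stalls as x approaches the carrying capacity k.
    return k / (1.0 + ((k - x0) / x0) * math.exp(-r * t))

for t in range(0, 25, 4):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):7.1f}")
```

Under these assumed numbers the two trajectories track each other for the first several steps and then separate by orders of magnitude: the jetpack point in miniature.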

Oh, well. I'll make my own predictions. Magazines will continue to praise Kurzweil's techno-religion in sporadic bursts, and followers will continue to gullibly accept what he says because it is what they wish would happen. Kurzweil will die while brain-uploading and immortality are still vague dreams; he will be frozen in liquid nitrogen, which will so thoroughly disrupt his cells that even if we discover how to cure whatever kills him, there will be no hope of recovering the mind and personality of Kurzweil from the scrambled chaos of his dead brain. 2045 will come, and those of us who are alive to see it will look back and realize it is very, very different from what life was like in 2011, and also very different from what we expected life to be like. At some point, I expect artificial intelligences to be part of our culture, if we persist; they'll work in radically different ways than human brains, and they will revolutionize society, but I have no way of guessing how. Ray Kurzweil will be forgotten, mostly, but records of the existence of a strange shaman of the circuitry from the late 20th and early 21st century will be tucked away in whatever the future databases are like, and people and machines will sometimes stumble across them and laugh or zotigrate and say, "How quaint and amusing!", or whatever the equivalent in the frangitwidian language of the trans-entity circumsolar ansible network might be.

And that'll be kinda cool. I wish I could live to see it.

Go here to read the rest:

Singularitarianism? – Pharyngula


Singularitarianism Wikipedia – euvolution.com

Posted: December 14, 2016 at 11:54 pm

Singularitarianism is a movement[1] defined by the belief that a technological singularity (the creation of superintelligence) will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the Singularity benefits humans.

Singularitarians are distinguished from other futurists who speculate on a technological singularity by their belief that the Singularity is not only possible, but desirable if guided prudently. Accordingly, they might sometimes dedicate their lives to acting in ways they believe will contribute to its rapid yet safe realization.[2]

Time magazine describes the worldview of Singularitarians by saying that "they think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything."[1]

Inventor and futurist Ray Kurzweil, author of the 2005 book The Singularity Is Near: When Humans Transcend Biology, defines a Singularitarian as someone who understands the Singularity and who has reflected on its implications for his or her own life; he estimates the Singularity will occur around 2045.[2]

Singularitarianism coalesced into a coherent ideology in 2000 when artificial intelligence (AI) researcher Eliezer Yudkowsky wrote The Singularitarian Principles,[2][3] in which he stated that a Singularitarian believes that the singularity is a secular, non-mystical event which is possible and beneficial to the world and is worked towards by its adherents.[3]

In June 2000 Yudkowsky, with the support of Internet entrepreneurs Brian Atkins and Sabine Atkins, founded the Machine Intelligence Research Institute to work towards the creation of self-improving Friendly AI. MIRI's writings argue for the idea that an AI with the ability to improve upon its own design (Seed AI) would rapidly lead to superintelligence. These Singularitarians believe that reaching the Singularity swiftly and safely is the best possible way to minimize net existential risk.

Many people believe a technological singularity is possible without adopting Singularitarianism as a moral philosophy. Although the exact numbers are hard to quantify, Singularitarianism is a small movement, which includes transhumanist philosopher Nick Bostrom. Inventor and futurist Ray Kurzweil, who predicts that the Singularity will occur circa 2045, greatly contributed to popularizing Singularitarianism with his 2005 book The Singularity Is Near: When Humans Transcend Biology.[2]

What, then, is the Singularity? It's a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. Understanding the Singularity will alter our perspective on the significance of our past and the ramifications for our future. To truly understand it inherently changes one's view of life in general and one's particular life. I regard someone who understands the Singularity and who has reflected on its implications for his or her own life as a singularitarian.[2]

With the support of NASA, Google and a broad range of technology forecasters and technocapitalists, the Singularity University opened in June 2009 at the NASA Research Park in Silicon Valley with the goal of preparing the next generation of leaders to address the challenges of accelerating change.

In July 2009, many prominent Singularitarians participated in a conference organized by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard (i.e., cybernetic revolt). They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They warned that some computer viruses can evade elimination and have achieved cockroach intelligence. They asserted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.[4] Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[5] The President of the AAAI has commissioned a study to look at this issue.[6]

Science journalist John Horgan has likened singularitarianism to a religion:

Let's face it. The singularity is a religious rather than a scientific vision. The science-fiction writer Ken MacLeod has dubbed it "the rapture for nerds", an allusion to the end-time, when Jesus whisks the faithful to heaven and leaves us sinners behind. Such yearning for transcendence, whether spiritual or technological, is all too understandable. Both as individuals and as a species, we face deadly serious problems, including terrorism, nuclear proliferation, overpopulation, poverty, famine, environmental degradation, climate change, resource depletion, and AIDS. Engineers and scientists should be helping us face the world's problems and find solutions to them, rather than indulging in escapist, pseudoscientific fantasies like the singularity.[7]

Kurzweil rejects this categorization, stating that his predictions about the singularity are driven by data showing that increases in computational technology have been exponential in the past.[8]
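
What a fixed-period doubling claim of that kind amounts to numerically is easy to spell out. The sketch below is a hypothetical projection, in which the starting count, the two-year doubling period, and the horizon are all assumptions chosen for illustration rather than measurements.

```python
# Fixed-period doubling, the arithmetic behind "exponential" claims
# like Moore's Law: count(t) = start * 2 ** (t / doubling_period).
# The starting count and horizon below are illustrative assumptions.

def doubled(start: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a quantity forward under fixed-period doubling."""
    return start * 2.0 ** (years / doubling_period)

# Example: an assumed 1 million units, projected 20 years out.
for year in range(0, 21, 4):
    print(f"year {year:2d}: {doubled(1e6, year):>13,.0f}")
```

The projection itself is trivial; whether any physical process keeps obeying it over decades is exactly the question Kurzweil's critics raise.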

See original here:

Singularitarianism – Wikipedia


Singularitarianism – Lesswrongwiki

Posted: September 20, 2016 at 7:10 pm


Singularitarianism refers to attitudes or beliefs favoring a technological singularity.

The term was coined by Mark Plus, then given a more specific meaning by Eliezer Yudkowsky in his Singularitarian principles. "Singularitarianism", early on, referred to a principled activist stance aimed at creating a singularity for the benefit of humanity as a whole, and in particular to the movement surrounding the Machine Intelligence Research Institute.

The term has since sometimes been used differently, without it implying the specific principles listed by Yudkowsky. For example, Ray Kurzweil's book "The Singularity Is Near" contains a chapter titled "Ich bin ein Singularitarian", in which Kurzweil describes his own vision for technology improving the world. Others have used the term to refer to people with an impact on the Singularity and to "expanding one's mental faculties by merging with technology". Others have used "Singularitarian" to refer to anyone who predicts a technological singularity will happen.

Yudkowsky has (perhaps facetiously) suggested that those adhering to the original activist stance relabel themselves the "Elder Singularitarians".

Visit link:

Singularitarianism - Lesswrongwiki
