
Category Archives: Extropianism

‘Techno-Optimism’ is Not Something You Should Believe In Current … – Current Affairs

Posted: October 22, 2023 at 9:55 am

Billionaire tech investor Marc Andreessen recently published a manifesto for techno-optimism, a worldview that contends technology will solve all of humanity's problems and create a world of infinite abundance for all. Andreessen's manifesto is so extreme that it has been heavily criticized even in the tech sector. It accuses anyone who opposes the unrestricted development of AI of having "blood on their hands" (since AI will save lives, meaning that if you slow down its development, you are essentially a murderer). It quotes favorably from Italian fascist Filippo Tommaso Marinetti in envisioning a race of technologically augmented supermen and conquerors. It condemns socialism in favor of "merit and achievement" and treats "social responsibility," "trust and safety," "risk management," and even "sustainable development goals" as enemy ideas.

Andreessen's manifesto comes across as unhinged and manic. It is, in fact, more a religious catechism than a manifesto. It is filled with "We believe" assertions that lay out the core of the Techno-Optimist faith. For example:

Are any of these beliefs substantiated by actual evidence? Do we get convincing proof, or even substantive argument, that human wants and needs are infinite or that every single material problem can be solved by technology? No. All we get is assertion, even around highly dubious claims about human nature and the workings of free markets. For instance, Andreessen claims that free markets are the only sensible way to organize society in part because humans are motivated by the selfish pursuit of money:

David Friedman points out that people only do things for other people for three reasons: love, money, or force. Love doesn't scale, so the economy can only run on money or force. The force experiment has been run and found wanting. Let's stick with money.

But whether or not David Friedman says people only do things for other people for three reasons, empirical evidence suggests that people actually often do things for other people out of a sense of perceived fairness. Scientists who study statistically valid samples of actual humans, instead of projecting from their own inclinations or observations of their abnormally greed-motivated peers, find that humans evolved to be, as Frans de Waal observed, moral beings to the core. Andreessen is not interested in evidence, though. He makes this clear at the end of his manifesto, which says that in lieu of detailed endnotes and citations, read the work of these people, and you too will become a Techno-Optimist, before listing a series of figures ranging from anonymous Twitter accounts (e.g., @BasedBeffJezos, @bayeslord) to right-wing economists (Ludwig von Mises, Thomas Sowell).

It would be easy to dismiss Andreessen's manifesto as the frenzied ranting of another rich man who thinks that the depth and correctness of one's opinions on political and social matters exist in proportion to one's net worth. But Andreessen's techno-optimism is hardly new, unique, or persuasive. (Optimism has always been a tool used by the powerful to advance their interests.) In this particular philosophy, growth and technology have magical problem-solving capabilities, and if we pursue them relentlessly, we will eliminate the need to ask any deeper questions (such as "growth toward what?" or "technology that does what?" or, crucially, how the benefits are allocated, i.e., "growth and tech for whom?"). The "optimism" in techno-optimism is the idea that we can be confident that the future will be a certain way without having to do much work ourselves to make sure it is that way. The cult-like chanting of the god-word "technology" is typically an attempt to evade the political (i.e., ethical) work of deciding how to justly allocate harms and benefits. Andreessen is unashamed about this: he explicitly rejects the need for socially responsible technology and the precautionary principle. He doesn't spend a moment dealing with the many serious dangers that people have highlighted around current artificial intelligence technology (such as its capacity to propagate racial biases or manufacture hoaxes and lies at breathtaking speeds). For Andreessen, we don't need to think about which technologies to develop or how to develop them responsibly. The invisible hand of the free market knows best.

Faith is indeed the appropriate word for this kind of optimism, which is a totally unjustified confidence in one's ability to know how the future will unfold. Is it actually the case that all of our problems can be solved by technology? That question is never considered, because in a catechism it doesn't have to be. After all, "we believe" that all problems can be solved by technology. For Andreessen, belief is enough. (Likewise, he believes that it is OK for the global population to reach 50 billion, therefore there is no need to prove that the Earth can sustain this population.)

What happens when Andreessen's confident beliefs are actually put to the test? Is he right that capitalist markets and technology create abundance for all? To assess the credibility of that claim, we can examine how real-world abundance has been allocated by greed-driven markets in practice. Take, for instance, our global food system. We produce more than enough calories to feed all humans, which is to say that our total food supply is adequate, but 77 percent of global farmland fattens livestock to make meat for the wealthy, and rich-world pets seem to have better food security than the 2.37 billion people (nearly one in three humans) who do not have access to adequate food, according to the U.N. Meanwhile, 150 million kids are stunted by malnutrition, and we waste enough grain to feed 1.9 billion people annually in the production of environmentally disastrous biofuels. And under the guise of enhancing market efficiency, some of the world's richest people and institutions, like hedge-funders and elite university endowments, invest (which is to say gamble) in food commodity markets. We can see clearly, without elaborate economistic euphemisms, that the market ensures that greedy ghouls profit by taking calories out of the mouths of the planet's poorest and most vulnerable children.

The chart on the left shows the steady global supply of calories per person, which bears no relation to the rollercoaster ride of food commodity prices on the right.

Those huge price swings, which harmed the global poor, were driven by commodity speculation, not by the fundamentals of supply and demand. Clearly, the market is not producing morally acceptable outcomes here. Does this not mock the fine Enlightenment philosophizing about equal human dignity among all? More importantly, does this provide justification for Andreessen's doctrine of market- and tech-driven optimism? How does this argument about the efficiency of the markets look to the eyes of the world's poorest and least powerful people? Only the absurdly blinkered could imagine that our global food abundance is used rationally or efficiently, never mind ethically. Is there any reason to expect Andreessen's techno-capital machine will allocate any other kind of abundance in a better or more morally justifiable manner?

One concrete example of how capitalist-controlled technology is actually used is the Covid crisis, in which profits were placed above the lives of the poor. The moral fiasco of "vaccine apartheid," whereby rich countries hoarded vaccines and refused to waive intellectual property patent rights so that lower-income countries might produce affordable vaccines for their populations, has been linked to more than a million avoidable deaths (even today, according to the U.N. and the W.H.O., two-thirds of people in low-income countries remain unvaccinated against Covid). That was on top of an already inadequate and unjust healthcare system for the world's population overall. Scholars estimate that 15.6 million excess deaths per year could be prevented through universal global healthcare and public health measures. (The climate crisis has also created a deadly battle over how, and for whom, our technologies will be used, as four billion people face health-threatening heat by 2030.)

Do any of these preventable deaths or harms appear at all in Andreessen's calculus? No. Instead, he perversely focuses on the fear that the deceleration of AI will cost lives, likening AI skeptics to murderers. But the market, as we have seen, already kills millions by allocating goods and services according to ability to pay rather than need. The loss of life due to not developing AI sufficiently is merely speculative, while the avoidable deaths of people who have fallen victim to the markets are already well documented. By Andreessen's logic, we ought to consider these millions of deaths to be mass murder by markets, a phenomenon that isn't historically rare. To ignore so many deaths happening now is absurd and cruel.

The market's distribution is grotesquely unfair, and this fact undercuts Andreessen's faith that the techno-capital machine "is not anti-human; in fact, it may be the most pro-human thing there is. It serves us. The techno-capital machine works for us. All the machines work for us." The machines, in fact, do not work for us automatically, even though they could in principle be put to humane use. To quote an astute insight from science fiction legend Ted Chiang: "Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us." Without a suitable political and social context, technology will often be used against those with the least power.

It is also always crucial to probe the meaning of "human" and "us." "Us" often means "me and people like me," in this case, rich humans. (See, for instance, Steven Pinker's claim that we are insufficiently grateful for human progress, which implicitly excludes those who do not experience its upsides.) If poor humans can't meet market prices, they get a far less "pro-human" fate: death by preventable disease.

A common claim often pushed by rich elites like Andreessen is that market growth is solving global poverty. As Andreessen puts it, markets "are by far the most effective way to lift vast numbers of people out of poverty." But Andreessen is wrong, and these ideas are wildly misleading.

For instance, the income gains made by the world's people in recent years have been anything but fair. World Inequality Database data show that from 2009 to 2019, the aggregate global personal income pie grew by $37 trillion. Of that, the top 10 percent took $8.7 trillion (24 percent) while the bottom 10 percent got $25 billion (0.07 percent). That's not a typo. The poor got 0.07 percent, about 350 times less than the rich.

Claims that global growth is lifting people out of poverty do not square with the following figures. Zooming in on the World Inequality Data reveals that average annual individual income gains in that decade were $1,800 for top ten-percent earners versus $5 for bottom ten-percent earners. Five dollars a year is 1.3 cents per day, a far less laudable feat than Andreessen et al. celebrate. It's hard to argue that $5 added to the extreme poverty level of $694 per year is really an escape from anything.
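The decade figures above can be sanity-checked with a few lines of arithmetic (the dollar totals are taken from the text, not recomputed from the World Inequality Database):

```python
# Sanity check of the World Inequality Database decade figures quoted above
# (dollar totals are from the text, not recomputed from raw data).
total_growth = 37e12      # global personal income growth, 2009-2019 (USD)
top10_gain = 8.7e12       # captured by the top 10 percent
bottom10_gain = 25e9      # captured by the bottom 10 percent

print(f"top 10% share: {top10_gain / total_growth:.0%}")        # 24%
print(f"bottom 10% share: {bottom10_gain / total_growth:.2%}")  # 0.07%
print(f"ratio: {top10_gain / bottom10_gain:.0f}x")              # 348x, i.e. ~350
print(f"bottom 10% gain per day: {5 / 365 * 100:.2f} cents")    # 1.37 cents
```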

Now let's consider what would happen if we redistributed some of those earnings from top to bottom, a thought experiment I wrote about in Jacobin last year. If just 5 percent of these top 10-percenter gains were redistributed, the bottom 10 percent would gain $90. They'd escape poverty 18 times faster than with their current gain of $5, which is to say that there's so much more we could be doing to increase the resources available to the poor.
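The redistribution thought experiment is a two-line calculation (the $1,800 and $5 averages come from the figures above):

```python
# Redistribution thought experiment: shift 5% of the top decile's
# average annual income gain to a bottom-decile earner.
top10_annual_gain = 1800      # avg annual gain, top 10% earner (USD, from text)
bottom10_annual_gain = 5      # avg annual gain, bottom 10% earner (USD, from text)

transfer = 0.05 * top10_annual_gain
print(transfer)                         # 90.0
print(transfer / bottom10_annual_gain)  # 18.0 -- 18x the current gain
```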

Now let's say our aim were to end extreme poverty without delay. This could be achieved. The World Inequality Lab (WIL) has calculated that a tiny wealth tax on the obscenely wealthy (those who have at least $100 million) would net $581 billion, almost triple the amount of current global aid. The same tax on all millionaires would net $1.6 trillion. That's more than enough resources to get the job done pronto.
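As a rough comparison of the WIL figure against global aid (the ~$200 billion annual aid estimate is an outside assumption for illustration, not a number from the text):

```python
# Wealth-tax yield vs. global aid, using the WIL figure quoted above.
centimillionaire_tax_yield = 581e9   # WIL's estimated annual tax yield (USD)
global_aid_estimate = 200e9          # approx. annual development aid (assumption)

print(centimillionaire_tax_yield / global_aid_estimate)  # ~2.9, "almost triple"
```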

But under the greed-driven, market-growth method that Andreessen advocates, the poor will just have to wait. It takes around $1,400 of global income growth to put an extra $1 into the hands of a person at the global bottom. This is an enormously inefficient method! It also delivers hundreds of times more in gains to the already rich, making this approach more about perpetuating inequality than about alleviating poverty.
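That $1,400 figure follows directly from the decade totals cited earlier:

```python
# Dollars of global income growth per dollar reaching the bottom decile,
# 2009-2019 (totals from the text).
total_growth = 37e12     # total global income growth (USD)
bottom10_gain = 25e9     # portion received by the bottom 10 percent (USD)

print(round(total_growth / bottom10_gain))  # 1480, i.e. roughly the $1,400 cited
```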

The truth here is that the improvement in extreme poverty levels, the metric so beloved by progress-cheering elites (shown on the left below), is nothing more than, as I have stated before, a tiny fig leaf barely concealing an ugly truth. The data couldn't be clearer: the GDP per capita gap between rich and poor nations, as shown by the figure below on the right, is growing, not shrinking.

And without robust redistribution, as the U.N.'s Olivier De Schutter notes, it would take 200 years to eradicate poverty under a $5-a-day line, assuming empirical market growth rates. It is therefore grotesque for anyone to claim that this situation ought to be seen as good news for humanity, especially when we note that $5 per day amounts to one-eighth of America's already too-low poverty line, and that in each and every one of those 200 years, gains for the global rich will be hundreds of times greater than for the poor. Two hundred years amounts to eight generations of people being confined to an appallingly low level of resources so that the rich can get richer.

The idea that market progress is closing the rich-poor gap is a pie-in-the-sky fantasy, an elite-flattering story. The global economy, which is run by gangster financial institutions, has no real mechanism for alleviating inequality and poverty, and there are no grounds to support optimism that it will create universal abundance if left to its own devices. That so many believe capitalism is making fantastic progress against poverty by showering the poor with trickle-down blessings testifies to a spectacularly successful cover-up. Disguising rapacious global profiteering as anti-poverty do-gooding is genius PR, and Andreessen is just the latest to parrot this delusional assertion that the markets will alleviate poverty.

We have seen that the assertion that a techno-capital machine will produce an infinite upward spiral of abundance has no grounding in the world of fact. It is a pleasant fantasy that exists in the minds of believers and keeps them from having to ask hard questions about how we can actually create economies that produce a decent standard of living for all without imperiling the planet.

Optimism expresses unwarranted confidence that the world's problems will somehow be solved without our having to do the difficult work of coming up with (and implementing) political solutions ourselves. It is worth emphasizing that optimism, techno or otherwise, is always a dangerous philosophy. This becomes clear by looking at how the word has evolved conceptually over time.

Optimism was coined in reference to 17th-century polymath Gottfried Leibniz, who held that we live in literally the best of all possible worlds. Leibniz's argument that this had to be absolutely the best of all possible worlds was meant in both a moral and a mathematical sense. He was a god-and-math-smitten genius; in his teens he imagined settling all philosophical debates using a purely logical language that an arithmetical machine could process. He and his Enlightenment peers harbored vast hope for what math-driven thinking could do. To Leibniz it was obvious that the god-ordained order of nature operates by maxima and minima. By God's very nature, his creation must do the most good at the cost of the least evil. Thus, all that exists is part of the divine plan, and all evil serves God's greater good (albeit in often mysterious ways). This calculus-like optimizing and economizing imagery was vital to casting all woes as necessary evils. Philosophical poet Alexander Pope, in perhaps the 18th century's favorite poem, stated the don't-worry-be-happy doctrine similarly, asserting that "One truth is clear, Whatever is, is right."

Then comes Voltaire, who saw both these views as obviously preposterous. Far from benign, these ideas were, to his mind, dangerous. In his famous 1759 novella Candide, or Optimism, he mercilessly lampooned them and popularized "optimism" as an insult. In the novella, throughout Candide's many ordeals, his tutor (every young aristocrat had one back then), Dr. Pangloss, applies strict optimism: all events, crises included, are for the best. Even Pangloss's acquisition of syphilis is deemed positive, since the pathogen came with the plunder that brought chocolate to Europe. Voltaire saw that optimism could easily sanction a numbing indifference to human suffering. It was a worldview plausible only to young aristocrats born into blessings and privilege.

As Voltaire warned, the idea of an all-for-the-best grand plan has long been used to justify inaction in the face of suffering. Examples abound. For instance, titan of early economics Reverend Thomas Malthus decried conventional charity: misery and starvation were God's provident checks on the poor, to keep them from reproducing like rabbits. Providence was better served by toil in harsh for-profit workhouses. Charles Dickens wrote Hard Times to attack the "scientific cruelty" (a phrase from Karl Polanyi) of economists who advocated that workhouses served the all-for-the-best optimistic grand plan. Dickens skewers a character who felt the Good Samaritan was "a bad economist." Well into the 19th century, economics, often thought of as a rational and neutral science, was heavily influenced by theology. Of course, resource allocation is always a deeply moral endeavor, even when supposedly inspired by heavenly plans or hidden under earthly mathematical schemes. But that morality need not be dictated by the doctrines of a particular religion.

Similarly concerning, economics as currently practiced often presents itself as morally neutral. As Freakonomics authors Steven D. Levitt and Stephen J. Dubner put it, "Morality, it could be argued, represents the way that people would like the world to work, whereas economics represents how it actually does work." For Andreessen, it seems that worshiping the (in his view) omnipotent and omniscient market is central to his religious cult of techno-optimism.

Many economists and market optimists like Andreessen now sanction a similar scientific cruelty. Like Pangloss, today's pro-market pundits in effect preach that present material suffering is just part of the grand plan on the road to a bright future. It's a seductive message to the contemporary equivalents of Voltaire's smug upbeat aristocrats. Like Leibniz, today's Optimists urge the continuation of staggeringly unjust but self-serving systems. Their equivalent of a best-of-all-possible outcome is the rational resource allocation of the great Invisible Hand. The economy is seen as a mathematical optimization scheme that operates with qualities tantamount to omniscience and quasi-omnipotence. Indeed, that's precisely how Andreessen speaks of it, repeating the idea that no human has sufficient information to question the Invisible Hand's judgments.

But this notion of Market Providence is, of course, riddled with deep anti-poor biases. To the market gods, your ability to avoid material suffering, never mind aspire to happiness, should be granted strictly in accordance with your demonstrated market virtues, expressed solely in cold hard cash. That's the core doctrine of trickle-down market theology. But as the Federal Reserve's own Jeremy Rudd wrote, the primary role of mainstream economics is "to provide an apologetics for a criminally oppressive, unsustainable, and unjust social order."

Andreessen's manifesto is a perfect example of a bundle of ideas that have been called TESCREAL by Émile Torres and Timnit Gebru (the acronym stands for transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism). According to these ideas, humanity is on a trajectory toward some great technological miracle that will massively augment human capacities and produce endless abundance for all. The ideas themselves often come uncomfortably close to those of classical eugenics (see Andreessen's quotation of a fascist and belief in Nietzschean supermen). In practice, they seem likely to produce a dystopia that only a billionaire could love. But sadly, the billionaires who believe this stuff have a great deal of power in our world as it exists.

In a way, it is a good thing that Andreessen wrote and published his manifesto. It lays bare what the planet is up against. These are the beliefs that many of the aspiring masters of the universe hold. They preach a dangerous faith in technology and capitalist markets and are unwilling to consider any of the disastrous drawbacks produced by poorly designed tech and unregulated markets. They dismiss socialism as the enemy of growth and abundance, waving away all considerations of justice and equality. They are utterly detached from the real-world conditions of people's lives (TechCrunch asked, "When was the last time Marc Andreessen talked to a poor person?"). Like any other monomaniacal faith, in which doubters are seen as enemies and beliefs are accepted without evidence, this package of beliefs is deeply threatening to any moral person's vision of a just and sustainable future for humans and all that inhabit the planet. As Voltaire knew, optimism is typically a demon in disguise.


AI and the threat of "human extinction": What are the tech-bros … – Salon

Posted: June 12, 2023 at 10:14 pm

On May 30, a research organization called the Center for AI Safety released a 22-word statement signed by a number of prominent "AI scientists," including Sam Altman, the CEO of OpenAI; Demis Hassabis, the CEO of Google DeepMind; and Geoffrey Hinton, who has been described as the "godfather" of AI. It reads:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

This statement made headlines around the world, with many media reports suggesting that experts now believe "AI could lead to human extinction," to quote a CNN article.

What should you make of it? A full dissection of the issue showing, for example, that such statements distract from the many serious harms that AI companies have already caused would require more time and space than I have here. For now, it's worth taking a closer look at what exactly the word "extinction" means, because the sort of extinction that some notable signatories believe we must avoid at all costs isn't what most people have in mind when they hear the word.

Understanding this is a two-step process. First, we need to make sense of what's behind this statement. The short answer concerns a cluster of ideologies that Dr. Timnit Gebru and I have called the "TESCREAL bundle." The term is admittedly clunky, but the concept couldn't be more important, because this bundle of overlapping movements and ideologies has become hugely influential among the tech elite. And since society is being shaped in profound ways by the unilateral decisions of these unelected oligarchs, the bundle is thus having a huge impact on the world more generally.

The acronym stands for "transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism." That's a mouthful, but the essence of TESCREALism (meaning the worldview that arises from this bundle) is simple enough: at its heart is a techno-utopian vision of the future in which we re-engineer humanity, colonize space, plunder the cosmos, and establish a sprawling intergalactic civilization full of trillions and trillions of "happy" people, nearly all of them "living" inside enormous computer simulations. In the process, all our problems will be solved, and eternal life will become a real possibility.

This is not an exaggeration. It's what Sam Altman refers to when he writes that, with artificial general intelligence (AGI), "we can colonize space. We can get fusion to work and solar [energy] to mass scale. We can cure all human diseases. We can build new realities. We are only a few breakthroughs away from abundance at scale that is difficult to imagine." It's what Elon Musk implicitly endorsed when he retweeted an article by Nick Bostrom which argues that we have a moral obligation to spread into the cosmos as soon as possible and build "planet-sized" computers running virtual-reality worlds in which 10^38 digital people could exist per century. (That's a 1 followed by 38 zeros.) According to the tweet, this is "likely the most important paper ever written." When Twitter founder Jack Dorsey joined Musk in suggesting that we have a "duty" to "extend" and "maintain the light of consciousness to make sure it continues into the future," he was referencing a central tenet of the TESCREAL worldview.

I don't think that everyone who signed the Center for AI Safety's short statement is a TESCREAList (meaning someone who accepts more than one of the "TESCREAL" ideologies), but many notable signatories are, and at least 90% of the Center for AI Safety's funding comes from the TESCREAL community itself. Furthermore, worries that AGI could cause our extinction were originally developed and popularized by TESCREALists like Bostrom, whose 2014 bestseller "Superintelligence" outlined the case for why superintelligent AGI could turn on its makers and kill every human on Earth.

Here's the catch-22: If AGI doesn't destroy humanity, TESCREALists believe it will usher in the techno-utopian world described above. In other words, we probably need to build AGI to create utopia, but if we rush into building AGI without proper precautions, the whole thing could blow up in our faces. This is why they're worried: There's only one way forward, yet the path to paradise is dotted with landmines.


With this background in place, we can move on to the second issue: When TESCREALists talk about the importance of avoiding human extinction, they don't mean what you might think. The reason is that there are different ways of defining "human extinction." For most of us, "human extinction" means that our species, Homo sapiens, disappears entirely and forever, which many of us see as a bad outcome we should try to avoid. But within the TESCREAL worldview, it denotes something rather different. Although there are, as I explain in my forthcoming book, at least six distinct types of extinction that humanity could undergo, only three are important for our purposes:

Terminal extinction: this is what I referenced above. It would occur if our species were to die out forever. Homo sapiens is no more; we disappear just like the dinosaurs and dodo before us, and this remains the case forever.

Final extinction: this would occur if terminal extinction were to happen (again, our species stops existing) and we were to have no successors that take our place. The importance of this extra condition will become apparent shortly.

Normative extinction: this would occur if we were to have successors, but these successors were to lack some attribute or capacity that one considers to be very important, something that our successors ought to have, which is why it's called "normative."

The only forms of extinction that the TESCREAL ideologies really care about are the second and third, final and normative extinction. They do not, ultimately, care about terminal extinction, about whether our species itself continues to exist or not. To the contrary, the TESCREAL worldview would see certain scenarios in which Homo sapiens disappears entirely and forever as good, because that would indicate that we have progressed to the next stage in our evolution, which may be necessary to fully realize the techno-utopian paradise they envision.

There's a lot to unpack here, so let's make things a little more concrete. Imagine a scenario in which we use genetic engineering to alter our genes. Over just one or two generations, a new species of genetically modified "posthumans" arises. These posthumans might also integrate various technologies into their bodies, perhaps connecting their brains to the internet via "brain-computer interfaces," which Musk's company Neuralink is trying to develop. They might also become immortal through "life-extension" technologies, meaning that they could still die from accidents or acts of violence but not from old age, as they'd be ageless. Eventually, then, after these posthuman beings appear on the scene, the remaining members of Homo sapiens die out.

This would be terminal extinction but not final extinction, since Homo sapiens would have left behind a successor: this newly created posthuman species. Would this be bad, according to TESCREALists? No. In fact, it would be very desirable, since posthumanity would supposedly be "better" than humanity. This is not only a future that die-hard TESCREALists wouldn't resist, it's one that many of them hope to bring about. The whole point of transhumanism, the backbone of the TESCREAL bundle, is to "transcend" humanity.


As the TESCREAList Toby Ord writes in his 2020 book "The Precipice," "forever preserving humanity as it is now may also squander our legacy, relinquishing the greater part of our potential," adding that "rising to our full potential for flourishing would likely involve us being transformed into something beyond the humanity of today."

Along similar lines, Nick Bostrom asserts that "the permanent foreclosure of any possibility of transformative change of human biological nature may itself constitute an existential catastrophe." In other words, the failure to create a new posthuman species would be an enormous moral tragedy, since it would mean we failed to fulfill most of our grand cosmic "potential" in the universe.

Of course, morphing into a new posthuman species wouldn't necessarily mean that Homo sapiens disappears. Perhaps this new species will coexist with "legacy humans," as some TESCREALists would say. They could keep us in a pen, as we do with sheep, or let us reside in their homes, the way our canine companions live with us today. The point, however, is that if Homo sapiens were to go the way of the dinosaurs and the dodo, that would be no great loss from the TESCREAList point of view. Terminal extinction is fine, so long as we have these successors.

Or consider a related scenario: Computer scientists create a population of intelligent machines, after which Homo sapiens dwindles in numbers until no one is left. In other words, rather than evolving into a new posthuman species, we create a distinct lineage of machine replacements. Would this be bad, on the TESCREAList view?

Prominent "transhumanists" suggest that the failure to create a new posthuman species would be an enormous moral tragedy, since it would mean we failed to fulfill most of our grand cosmic "potential" in the universe.

In his book "Mind Children," the roboticist Hans Moravec argued that biological humans will eventually be replaced by "a postbiological world dominated by self-improving, thinking machines," resulting in "a world in which the human race has been swept away by the tide of cultural change, usurped by its own artificial progeny." Moravec thinks this would be terrific, even describing himself as someone "who cheerfully concludes that the human race is in its last century, and goes on to suggest how to help the process along." Although Moravec was writing before TESCREALism took shape, his ideas have been highly influential within the TESCREAL community, and indeed the vision that he outlines could be understood as a proto-TESCREAL worldview.

A more recent example comes from the philosopher Derek Shiller, who works for The Humane League, an effective-altruism-aligned organization. In a 2017 paper, Shiller argues that "if it is within our power to provide a significantly better world for future generations at a comparatively small cost to ourselves, we have a strong moral reason to do so. One way of providing a significantly better world may involve replacing our species with something better." He then offers a "speculative argument" for why we should, in fact, "engineer our extinction so that our planet's resources can be devoted to making artificial creatures with better lives."

Along similar lines, the TESCREAList Larry Page, co-founder of Google (which owns DeepMind, one of the companies trying to create AGI), passionately contends that "digital life is the natural and desirable next step in the cosmic evolution and that if we let digital minds be free rather than try to stop or enslave them, the outcome is almost certain to be good." According to Page, "if life is ever going to spread throughout our Galaxy and beyond, which it should, then it would need to do so in digital form." Consequently, a major worry for Page is that "AI paranoia would delay the digital utopia and/or cause a military takeover of AI that would fall foul of Google's 'Don't be evil' slogan." (Note that "Don't be evil" was "removed from the top of Google's Code of Conduct" in 2018.)

Some have called this position "digital utopianism." However one labels it, Page's claim that we will need to become digital beings, or create digital successors, in order to spread throughout the galaxy is correct. While colonizing our planetary neighbor, Mars, might be possible as biological beings, building an interstellar or intergalactic civilization will almost certainly require our descendants to be digital in nature. Outer space is far too hostile an environment for squishy biological creatures like us to survive for long periods, and traveling from Earth to the nearest large galaxy, Andromeda, would require some 10 billion years at current propulsion speeds. Not only would digital beings be able to tolerate the dangerous conditions of intergalactic space, they would effectively be immortal, making such travel entirely feasible.
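The "some 10 billion years" figure is easy to sanity-check. Here is a rough back-of-the-envelope sketch; the distance (about 2.5 million light-years) and the probe speeds (Voyager 1's roughly 17 km/s, up to the Parker Solar Probe's roughly 190 km/s peak) are standard reference figures I am supplying, not numbers from the article:

```python
# Order-of-magnitude check on the Andromeda travel-time claim.
# Assumed inputs: distance ~2.5 million light-years; spacecraft speeds
# ranging from Voyager 1 (~17 km/s) to Parker Solar Probe's peak (~190 km/s).

C_KM_S = 299_792.458   # speed of light, km/s
DISTANCE_LY = 2.5e6    # Earth to Andromeda, light-years (approximate)

def travel_time_years(speed_km_s: float) -> float:
    """Years to cover DISTANCE_LY at a constant speed of speed_km_s."""
    return DISTANCE_LY * (C_KM_S / speed_km_s)

for v in (17.0, 75.0, 190.0):
    print(f"{v:>6.0f} km/s -> {travel_time_years(v):.1e} years")
    # 17 km/s -> ~4.4e10 years; 75 km/s -> ~1.0e10; 190 km/s -> ~3.9e09
```

Depending on the assumed speed, the answer spans roughly 4 to 44 billion years, so "some 10 billion years" is the right order of magnitude.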

This matters because, as noted, at the heart of TESCREALism is the imperative to spread throughout the whole accessible universe, plundering our "cosmic endowment" in the process and creating trillions upon trillions of future "happy" people. Realizing the utopian dream of the TESCREAL bundle will require the creation of digital posthumans. Perhaps these posthumans will keep us around in pens or as pets, but maybe they won't. And if they don't, TESCREALists would say: So much the better.

This brings us to another crucial point, directly linked to the supposed threat posed by AGI. For TESCREALists, it doesn't just matter that we have successors, such as digital posthumans; it also matters what these successors are like. For example, imagine that we replace ourselves with a population of intelligent machines that, because of their design, lack the capacity for consciousness. Many TESCREALists would insist that "value" cannot exist without consciousness. If there are no conscious beings to appreciate art, wonder in awe at the universe or experience things like happiness, then the world wouldn't contain any value.

Imagine two worlds: The first is our world. The second is exactly like our world in every way except one: The "humans" going about their daily business, conducting scientific experiments, playing music, writing poetry, hanging out at the bar, rooting for their favorite sports teams and so on have literally no conscious experiences. They behave exactly like we do, but there's no "felt quality" to their inner lives. They have no consciousness, and in that sense they are no different from rocks. Rocks, we assume, don't have anything it "feels like" to be them, sitting by the side of the road or tumbling down a mountain. The same goes for these "humans," even if they are engaged in exactly the sorts of activities we are. They are functionally equivalent to what philosophers call "philosophical zombies."

This is the only difference between these two worlds, and most TESCREALists would argue that the second world is utterly valueless. Hence, if Homo sapiens were to replace itself with a race of intelligent machines, but these machines were incapable of consciousness, then the outcome would be no better than if we had undergone final extinction, whereby Homo sapiens dies out entirely without leaving behind any successors at all.

That's the idea behind the third type of extinction, "normative extinction," which would happen if humans do have successors, but these successors lack something they ought to have, such as consciousness. Other TESCREALists will point to additional attributes that our successors should have, such as a certain kind of "moral status." In fact, many TESCREALists literally define "humanity" as meaning "Homo sapiens and whatever successors we might have, so long as they are conscious, have a certain moral status and so on."

Consequently, when TESCREALists talk about "human extinction," they aren't actually talking about Homo sapiens but this broader category of beings. Importantly, this means that Homo sapiens could disappear entirely and forever without "human extinction" (by this definition) having happened. As long as we have successors, and these successors possess the right kind of attributes or capacities, no tragedy will have occurred. Put differently (and this brings us full circle), what ultimately matters to TESCREALists isn't terminal extinction, but final and normative extinction. Those are the only two types of extinction that, if they were to occur, would constitute an "existential catastrophe."

Here's how all this connects to the current debate surrounding AGI: Right now, the big worry of TESCREAL "doomers" is that we might accidentally create an AGI with "misaligned" goals, meaning an AGI that could behave in a way that inadvertently kills us. For example, if one were to give an AGI the harmless-sounding goal of maximizing the total number of paperclips that exist, TESCREALists argue that it would immediately kill every person on Earth, not because the AGI "hates" you but because "you are made out of atoms which it can use for something else," namely paperclips. In other words, it would kill us simply because our bodies are full of useful resources: roughly a billion billion billion atoms.
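That "billion billion billion" (10^27) figure can be checked with a quick estimate. The sketch below uses textbook approximations, a 70 kg body and rough elemental mass fractions, which are my assumptions rather than numbers from the article:

```python
# Back-of-the-envelope count of atoms in a human body.
# Assumed inputs: 70 kg body mass; approximate elemental composition by mass.

AMU_KG = 1.6605e-27   # atomic mass unit in kg
BODY_MASS_KG = 70.0

# element -> (mass fraction, atomic mass in amu)
composition = {
    "O": (0.65, 16.0),
    "C": (0.18, 12.0),
    "H": (0.10, 1.0),
    "N": (0.03, 14.0),
    "other": (0.04, 20.0),  # rough average for Ca, P, K, S, etc.
}

total_atoms = sum(
    frac * BODY_MASS_KG / (a * AMU_KG) for frac, a in composition.values()
)
print(f"~{total_atoms:.1e} atoms")  # about 6.7e27 with these inputs
```

The result lands within an order of magnitude of 10^27, so the article's figure is a fair rounding.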

The important point here is that if a "misaligned" AGI were to inadvertently destroy us, the outcome would be terminal extinction but not final extinction. Why? Because Homo sapiens would no longer exist, yet we would have left behind a successor: the AGI! A successor is anything that succeeds or comes after us, and since the AGI that kills us will continue to exist after we are all dead, we won't have undergone final extinction. Indeed, Homo sapiens would be gone precisely because we avoided final extinction, as our successor is what murdered us, a technological case of parricide.

However, since in this silly example our AGI successor would do nothing but make paperclips, this would be a case of normative extinction. It's certainly not the future most TESCREALists want to create: a utopia in which trillions upon trillions of conscious posthumans with a moral status similar to ours clutter every corner of the accessible universe. This is the importance of normative extinction: To bequeath the world to a poorly designed AGI would be just as catastrophic as if our species were to die out without leaving behind any successors at all. Put differently, the threat of "misaligned" AGI is that Homo sapiens disappears and we bequeath the world to a successor, but this successor lacks something necessary for the rest of cosmic history to have "value."

So that's the worry. The key point I want to make here is that Homo sapiens plays no significant role in the grand vision of TESCREALism even if everything goes just right. Rather, TESCREALists see our species as nothing more than a springboard to the next "stage" of "evolution," a momentary transition between current biological life and future digital life, which is necessary to fulfill our "longterm potential" in the cosmos. As Bostrom writes,

transhumanists view human nature as a work-in-progress, a half-baked beginning that we can learn to remold in desirable ways. Current humanity need not be the endpoint of evolution. Transhumanists hope that by responsible use of science, technology, and other rational means we shall eventually manage to become posthuman, beings with vastly greater capacities than present human beings have.

Transhumanism, once again, is the backbone of the TESCREAL bundle, and my guess is that virtually all TESCREALists believe that the inevitable next step in our story is to become digital, which probably means casting aside Homo sapiens in the process. Furthermore, many hope this transition begins in the near future, literally within our lifetimes. One reason is that a near-term transition to digital life could enable TESCREALists living today to become immortal by "uploading" their minds to a computer. Sam Altman, for example, was one of 25 people in 2018 to sign up to have his brain preserved by a company called Nectome. As an MIT Technology Review article notes, Altman feels "pretty sure minds will be digitized in his lifetime."

Another reason is that creating a new race of digital beings, whether through mind-uploading or by developing more advanced AI systems than GPT-4, might be necessary to keep the engines of scientific and technological "progress" roaring. In his recent book "What We Owe the Future," the TESCREAList William MacAskill argues that in order to counteract global population decline, "we might develop artificial general intelligence (AGI) that could replace human workers including researchers. This would allow us to increase the number of 'people' working on R&D as easily as we currently scale up production of the latest iPhone." In fact, the explicit aim of OpenAI is to create AGI "systems that outperform humans at most economically valuable work"; in other words, to replace biological humans in the workplace.

Later in his book, MacAskill suggests that our destruction of the natural world might actually be net positive, which points to a broader question of whether biological life in general, not just Homo sapiens in particular, has any place in the "utopian" future envisioned by TESCREALists. Here's what MacAskill says:

It's very natural and intuitive to think of humans' impact on wild animal life as a great moral loss. But if we assess the lives of wild animals as being worse than nothing on average, which I think is plausible (though uncertain), then we arrive at the dizzying conclusion that from the perspective of the wild animals themselves, the enormous growth and expansion of Homo sapiens has been a good thing.

So where does this leave us? The Center for AI Safety released a statement declaring that "mitigating the risk of extinction from AI should be a global priority." But this conceals a secret: The primary impetus behind such statements comes from the TESCREAL worldview (even though not all signatories are TESCREALists), and within the TESCREAL worldview, the only thing that matters is avoiding final and normative extinction, not terminal extinction, whereby Homo sapiens itself disappears entirely and forever. Ultimately, TESCREALists aren't too worried about whether Homo sapiens exists or not. Indeed, our disappearance could be a sign that something's gone very right, so long as we leave behind successors with the right sorts of attributes or capacities.

If you love or value Homo sapiens, the human species as it exists now, you should be wary of TESCREALists warning about "extinction." Read such statements with caution. On the TESCREAL account, if a "misaligned" AGI were to kill us next year, the great tragedy wouldn't be that Homo sapiens no longer exists. It would be that we disappeared without having created successors to realize our "vast and glorious" future (to quote Toby Ord once again) through colonizing space, plundering the universe, and maximizing "value." If our species were to cease existing but leave behind such successors, that would be a cause for rejoicing. It would mean that we'd taken a big step forward toward fulfilling our "longterm potential" in the universe.

I, personally, would like to see our species stick around. I'm not too keen on Homo sapiens being cast aside for something the TESCREALists describe as "better." Indeed, the word "better" is normative: its meaning depends on the particular values that one accepts. What looks "better," or even "utopian," from one perspective might be an outright dystopian nightmare from another.

I would agree with philosopher Samuel Scheffler that "we human beings are a strange and wondrous and terrible species." Homo sapiens is far from perfect. One might even argue that our species name is a misnomer, because it literally translates as "wise human," which we surely have not proven to be.

But posthumans would have their own flaws and shortcomings. Perhaps being five times "smarter" than us would mean they'd be five times better at doing evil. Maybe developing the technological means to indefinitely extend posthuman lifespans would mean that political prisoners could be tortured relentlessly for literally millions of years. Who knows what unspeakable horrors might haunt the posthuman world?

So whenever you hear people talking about "human extinction," especially those associated with the TESCREAList worldview, you should immediately ask: What values are concealed behind statements that avoiding "human extinction" should be a global priority? What do those making such claims mean by "human"? Which "extinction" scenarios are they actually worried about: terminal, final or normative extinction? Only once you answer these questions can you begin to make sense of what this debate is really about.

Read more from Émile P. Torres on the human future.

View post:

AI and the threat of "human extinction": What are the tech-bros ... - Salon

Posted in Extropianism | Comments Off on AI and the threat of "human extinction": What are the tech-bros … – Salon

Prometheus Rising – Wikipedia

Posted: November 17, 2021 at 12:40 pm

Prometheus Rising is a 1983 guidebook by Robert Anton Wilson. The book includes explanations of Timothy Leary's eight-circuit model of consciousness, Alfred Korzybski's general semantics, Aleister Crowley's Thelema, and various other topics related to self-improvement, occult traditions, and pseudoscience. In the introduction written by Israel Regardie, Wilson's purpose for writing the book is given as unleashing humanity's "full stature".[1]:v.

The book examines many aspects of social mind control and mental imprinting, and provides mind exercises at the end of every chapter, with the goal of giving the reader more control over how one's mind works. The book has found many readers among followers of alternative culture, and discusses the effect of certain psychoactive substances and how these affect the brain, tantric breathing techniques, and other methods and holistic approaches to expanding consciousness. It draws a parallel between the development of one's mind and the development of higher intelligence theorized by biological evolution.

Prometheus Rising was published in 1983, but it began as Wilson's 1979 doctoral thesis at Paideia University, titled "The Evolution of Neuro-Sociological Circuits: A Contribution to the Sociobiology of Consciousness".[2] In 1982, while in Ireland, Wilson rewrote the manuscript, removing footnotes, improving the style, and adding chapters, exercises, diagrams, and illustrations. An introduction by Israel Regardie was included.

A brief comment made by Wilson in the book became the main seed thought for The Sekhmet Hypothesis.[3] Wilson suggested that the gentle angel symbol from Ezekiel in the Bible had its modern correlate in the flower child of the Sixties.[1]:55 The author of The Sekhmet Hypothesis, Iain Spence, went on to compare Ezekiel's other symbols to various pop cultural trends. In Prometheus Rising, Wilson compared the four Life Positions of Transactional Analysis to the four main Life Positions presented in Timothy Leary's earlier interpersonal circumplex grid. Spence favored Leary's model and used it to describe the moods of atavistic pop culture.

Prometheus Rising is listed as one of the ten seminal works of extropian thought in the Extropianism FAQ.[4] It was also listed on Max More's reading list for extropians, the Immortality Institute's reading list in The Scientific Conquest of Death, and other reading lists for extropians and transhumans.[5][6][7]


Extropianism – H+Pedia

Posted: March 23, 2021 at 1:51 pm

Extropianism is a philosophy of transhumanism built around the Extropian principles for improving the human condition.[1] Its organizational home, the Extropy Institute, was founded in 1989 and incorporated in 1990 as a 501(c)(3) non-profit, and it established the modern movement of transhumanism through its conferences and publications before closing in 2006.[2] For most of its existence it was largely supportive of libertarian political values such as small government, individual rights, liberty, morphological freedom and the Proactionary Principle. However, many of its members were not libertarian, and as an international organization it encompassed transhumanists of diverse political backgrounds and views, united by the advocacy of individual rights and the reduction of government. The movement's leading advocates include founder Max More.

Members of the Extropian mailing lists would go on to be involved with Bitcoin, encryption, the beginnings of blockchain, and later libertarian and transhumanist projects.[4]

Principles: See Extropian principles

Main: Extropy Magazines

Official history

Flyer for 'EXTRO 1 - The First Extropy Institute Conference on Transhumanist Thought'

Sunnyvale, California, April 30 - May 1 1994

https://github.com/Extropians/Extropy/blob/master/extro1_ad.pdf

Extro1

https://web.archive.org/web/20011211090742/http://members.aol.com:80/T0Morrow/PolyJust.html

1994: Extropy Institute's first conference, Extro-1, takes place in Sunnyvale, California, with keynote speaker Hans Moravec on "The Age of Robots," from what would be his next book. At the conference, Dr. Christopher Heward discusses his ideas of "biometrics" for personalized anti-aging medicine. In 1999-2000, the first Kronos clinic will open, implementing this idea. (Chris developed the idea further at his Extro-2 talk.) Based partly on the conference, Ed Regis's major article for Wired magazine brings in many hundreds of information inquiries and brings awareness of extropic thinking to around 100,000 people. (In the next issue, one reader's letter derides extropy, calling our movement a passing fad. Since then, our numbers have multiplied a hundredfold.) Extropy #12 includes "The Open Society and Its Media" by Mark Miller, Dean Tribble, Ravi Pandya, and Marc Steigler. Extropy #13 includes a seminal article on Utility Fog by J. Storrs Hall, who ran the nanotech Usenet list and who becomes Extropy's Nano editor.[18]

Proceedings

1995: The Extro 2 conference, held in Santa Monica, California, reaches out to new communities by creating liaisons with the digital media community. David McFadzean and Duane Hewitt present a web-based implementation of Dr. Robin Hanson's Idea Futures (now called the Foresight Exchange). Prof. Michael Rothschild, author of Bionomics, speaks on "The 4th Information Revolution". (Dr. More speaks at the Bionomics conference later this year.) Pioneering futurist FM-2030 brings his ideas to a new audience. Presentations by Natasha Vita-More (and the panel following her presentation, including Fiorella Terenzi and Roy Walford) expand the conference's approach from science, technology, philosophy, and economics into culture and the arts.[18]

1997: The Extro 3 conference in Northern California tops the previous events: Eric Drexler makes his first public announcement of his cryonics arrangements as part of a witty banquet keynote talk (according to many, it is one of Eric Drexler's best speeches); AI pioneer Prof. Marvin Minsky also announces his cryonics arrangement and is awarded his cryonics bracelet by Eric Drexler to resounding applause. Also at Extro-3, Dr. Greg Stock speaks on engineering the human germline, a talk that leads to Stock's UCLA conference on the topic the next January.[18]

1999: The Extro 4 conference on Biotech Futures: Challenges and Choices of Life Extension and Genetic Engineering brings together radical thinkers and mainstream scientists from places such as Geron Corporation, the Berkeley National Laboratory, UCLA, and the University of California, Berkeley. Scientific research is presented, and legal, artistic, and philosophical issues are discussed. Prof. Vernor Vinge and Greg Bear delight the audience with their creative thinking, and Natasha Vita-More forms the focus of a feature article in the January 2000 issue of Wired magazine. The Kronos Clinic starts up, aimed at personalized age management, based on the ideas expounded by Christopher Heward at the first two Extro conferences.[18]

2001: Reason.com review

2004: http://www.extropy.org/summitabout.htm

The Vital Progress conference was held in response to the latest moves by Leon Kass.

Review

Vital Progress Summit II was scheduled for winter 2005 but never materialised.

The ExI Satellite Meeting in 2005 in Caracas, Venezuela, ended up launching under the TransVision banner instead, which went on to hold many successive events.


Cliff’s Edge Google, Please Solve Death! – Adventist Review

Posted: April 25, 2017 at 4:52 am

April 21, 2017

At 61 years old, I'm moth-eaten enough to remember not just John F. Kennedy's assassination, but his election to the presidency. Which means that since a tender age I've been subjected to more than a half century of the fanfaronade, buffoonery, and deceit that every four years makes this great republic look like a cross between Animal House and The Manchurian Candidate as the hoi polloi, exercising their sacred constitutional right to vote, decide who will be the most powerful person in the world.

I remember, for instance, Lyndon Johnson's infamous Daisy ad, which, with powerful graphics, all but assured Americans that we'd be nuked by the Bolsheviks if we elected his GOP opponent, Barry Goldwater, as president.

About 12 years later, I howled with laughter along with a bunch of other Florida Gators in a local Ratskeller when the thirty-eighth president of the United States, Gerald R. Ford Jr., declared during a presidential debate that there is "no Soviet domination of Eastern Europe." In 1976!

And who can forget Michael Dukakis's M1A1 battle tank ride into electoral oblivion, Al Gore's (2000) invention of the Internet, and the Howard Dean Scream of 2004?

Nothing, though, compared to the shtick we endured during the 2016 presidential campaign. Even I, someone whose mitochondria can get overclocked by presidential politics, just wanted it over.

Amid the hoopla over Hillary Clinton's e-mails and Donald Trump's tax returns, however, you might have missed the candidacy of Zoltan Istvan, who traveled around the country in a vehicle shaped like a coffin, dubbed the Immortality Bus. In the mother of all campaign promises (one that made Bernie Sanders's desire for universal health care seem trite), candidate Istvan declared that if elected president, he would allocate funds for science and technology to help us overcome death.

Overcome death? With science and technology? Well, considering all that science and technology have done so far (20 years ago watching a movie on a cell phone would have seemed like Star Trek stuff), why not? In 2013 TIME magazine ran a cover article titled "Can Google Solve Death?" The subhead read: "The search giant is launching a venture to extend the human life span. That would be crazy, if it weren't Google."

Of course, extending the human life span is one thing (following the advice in Counsels on Diets and Foods would do the trick, too), but that's as far from overcoming death as adding three inches to a yard is from infinity. These people want immortality, not longevity.

PayPal billionaire Peter Thiel, for example, is one of the new über-rich who actually hope their vast coffers can buy off the grim reaper. Thiel has been investing in technology in which older people get blood transfusions from younger ones. If, as Scripture says, "the life of the flesh is in the blood" (Lev. 17:11), wouldn't a fresh supply of young blood be good for the flesh? The logic works. How well the technology will work is, well, another matter entirely.

For those put off by this high-tech Dracula stuff, another hoped-for route to the tree of life is to map the complete neural structure of the brain, the unique neuro-chemical configurations that make you and your consciousness distinctly you, and then upload you, in bits and bytes, to a supercomputer. The idea is that if this could ever be done (not likely), your conscious self would exist unencumbered by hemorrhoids, arthritis, and all the other foibles of fallen flesh. However, this potential procedure comes with numerous questions, such as: If they create back-ups, which one is the real you?

Another strategy already being implemented is the freeze-dried approach to immortality, known as cryonics. At the moment of death, the corpse is immersed in a vat of liquid nitrogen and eventually cooled down to -196 degrees centigrade, in hopes that future technology will have so far advanced that you could be thawed out, refurbished, and sent on your merry way. A whole-body freeze goes for a cool $200,000. Heads only, called neurocryopreservation, can be had for $80,000. What good is a thawed-out frozen head? Well, if they can get this brain-mapping technology down, the plan would be to thaw out the head, upload the neural structure to a computer and, voilà! You are mentally, if not physically, resurrected, existing inside a computer that, ideally, could allow you to exist forever, as long as the parts can be replaced.

If all this seems tragically farfetched, it is. It's its farfetchedness that makes it so tragic, a twenty-first century testament to humanity's futile attempt to beat death, and the even more futile hope that science and technology can do it.

"Science is the new God," said Roen Horn, of the Eternal Life Fan Club. "Science is the new hope."

"Google, please solve death!" read a placard carried by a woman on the streets of New York City, another indicator of just how desperately we don't want to die.

Those hopeful mortals counting on some technician in a lab coat to outwit Mother Nature's dirtiest trick call themselves Transhumanists, the "trans" referring to the prospect that science will enable them to transcend their humanity, or at least the one aspect of our humanity that always ends our humanity, which is death. Others call themselves Extropians, a word created to express the opposite of entropy, the physical process that describes on the atomic level why everything, including ourselves, falls apart.

The hope that science can beat death is as mythical a quest as was the search for the Fountain of Youth. Science and technology can't give eternal life. They don't need to. Jesus already has. And this is the testimony: God has given us eternal life, and this life is in His Son (1 John 5:11). I write these things to you who believe in the name of the Son of God so that you may know that you have eternal life (1 John 5:13). All that was needed for eternal life has been given to us in Jesus. The provision has been completed, the price paid, the promise fulfilled. And this is what he promised us: eternal life (1 John 2:25).

This desperate desire to escape our immediate physical demise, however understandable, rests upon the same ignorance that makes people think that Google might be able to accomplish that escape for them. Death is a bummer, yes, but mostly for those who are alive. From the perspective of the dead, death is experienced as nothing but a short, deep sleep until rising to glory at the Second Coming (for those rising at the third coming, well, things are a bit more problematic).

From Zoltan Istvans Immortality Bus to freezing corpses in vats of nitrogen, Transhumanism and Extropianism are doomed to fail. Worse, setting up science as the new God makes it less likely to trust in the only God who can give people the eternal life they so desperately want.

Of all the various approaches to immortality, Peter Thiel's, however painfully off track, at least has the mechanism right. "Whoever eats my flesh and drinks my blood has eternal life, and I will raise them up at the last day" (John 6:54). The key is blood, yes. He just needs the right source for it.

Clifford Goldstein is editor of the Adult Sabbath School Bible Study Guide. His next book, Baptizing the Devil: Evolution and the Seduction of Christianity, is set to be released by Pacific Press this fall.


Extropianism | Transhumanism Wiki | Fandom powered by Wikia

Posted: March 7, 2017 at 10:08 pm

Extropianism, also referred to as extropism or extropy, is an evolving framework of values and standards for continuously improving the human condition. Extropians believe that advances in science and technology will some day let people live indefinitely and that humans alive today have a good chance of seeing that day. An extropian may wish to contribute to this goal, e.g. by doing research and development or volunteering to test new technology.

Extropianism describes a pragmatic consilience of transhumanist thought guided by a proactionary approach to human evolution and progress.

Originating in a set of principles developed by Dr. Max More, The Principles of Extropy,[1] extropian thinking places strong emphasis on rational thinking and practical optimism. According to More, these principles "do not specify particular beliefs, technologies, or policies". Extropians share an optimistic view of the future, expecting considerable advances in computational power, life extension, nanotechnology and the like. Many extropians foresee the eventual realization of unlimited maximum life spans, and the recovery, thanks to future advances in biomedical technology, of those whose bodies/brains have been preserved by means of cryonics.

Extropy, coined by Tom Bell (T. O. Morrow) in January 1988, is defined as the extent of a living or organizational system's intelligence, functional order, vitality, energy, life, experience, and capacity and drive for improvement and growth. Extropy expresses a metaphor, rather than serving as a technical term, and so is not simply the hypothetical opposite of Information entropy.

In 1987, Max More moved from Oxford University in England, where he had helped to establish (along with Michael Price, Garret Smyth and Luigi Warren) the first European cryonics organization, known as Mizar Limited (later Alcor UK), to Los Angeles to work on his Ph.D. in philosophy at the University of Southern California.

In 1988, "Extropy: The Journal of Transhumanist Thought" was first published. This brought together thinkers with interests in artificial intelligence, nanotechnology, genetic engineering, life extension, mind uploading, idea futures, robotics, space exploration, memetics, and the politics and economics of transhumanism. Alternative media organizations soon began reviewing the magazine, and it attracted interest from likeminded thinkers. Later, More and Bell co-founded the Extropy Institute, a non-profit 501(c)(3) educational organization. "ExI" was formed as a transhumanist networking and information center to use current scientific understanding along with critical and creative thinking to define a small set of principles or values that could help make sense of new capabilities opening up to humanity.

The Extropy Institute's email list was launched in 1991, and in 1992 the institute began producing the first conferences on transhumanism. Affiliate members throughout the world began organizing their own transhumanist groups. Extro Conferences, meetings, parties, on-line debates, and documentaries continue to spread transhumanism to the public.

The Internet soon became the most fertile breeding ground for people interested in exploring transhumanist ideas, with the availability of websites for such organizations that have joined the Extropy Institute in developing and advocating transhumanist (and related) ideas. These include Humanity Plus, the Alcor Life Extension Foundation, the Life Extension Foundation, Foresight Institute, Transhumanist Arts & Culture, the Immortality Institute, Betterhumans, Aleph in Sweden, the Singularity Institute for Artificial Intelligence, and the Institute for Ethics and Emerging Technologies.

In 2006 the board of directors of the Extropy Institute made a decision to close the organisation, stating that its mission was "essentially completed."[1]

Posted in Extropianism | Comments Off on Extropianism | Transhumanism Wiki | Fandom powered by Wikia

What rhymes with Extropianism?

Posted: at 10:08 pm

What rhymes or sounds like the word Extropianism? Extropianism, also referred to as the philosophy of Extropy, is an evolving framework of values and standards for continuously improving the human condition. Extropians believe that advances in science and technology will some day let people live indefinitely. An extropian may wish to contribute to this goal, e.g. by doing research and development or volunteering to test new technology. Extropianism describes a pragmatic consilience of transhumanist thought guided by a proactionary approach to human evolution and progress. Originating in a set of principles developed by Dr. Max More, The Principles of Extropy, extropian thinking places strong emphasis on rational thinking and practical optimism. According to More, these principles "do not specify particular beliefs, technologies, or policies". Extropians share an optimistic view of the future, expecting considerable advances in computational power, life extension, nanotechnology and the like. Many extropians foresee the eventual realization of unlimited maximum life spans, and the recovery, thanks to future advances in biomedical technology or mind uploading, of those whose bodies/brains have been preserved by means of cryonics.

Posted in Extropianism | Comments Off on What rhymes with Extropianism?

Tools make things easier but don’t make them better – Namibia Economist

Posted: March 4, 2017 at 1:06 am

President Joe once had a dream. I wonder if you recognise the song? It's Saviour Machine, from the early David Bowie album The Man Who Sold the World. The machine is built to solve all problems, and does so, but ends up miserable and disaffected. It's obvious that the thing was a computer, but the word machine works better in the song.

If you haven't heard the song yet, it's worth a listen. You can find it on YouTube. During those years, David Bowie made songs that still sound modern. That was before he learned to sing properly and became poppish. If you do head in that virtual direction, you might also want to listen to the title track, The Man Who Sold the World.

At first glance, the song sounds like an oddity, pardon the pun, a preposterous notion. If you go a bit deeper into things and put aside the concept of the machine, you are left with another player: President Joe, who built the machine. President Joe is not particularly preposterous. There are a bunch of people out there just like him.

The notion of the omnipotent machine is nothing new. It's one of the common strands in science fiction, and has been for a long time. The idea of an intelligent machine is old hat as well. The Turing Test scratches the surface by seeking a computer that can fool a human into believing that it too is human. Some or other machine managed to fool a couple of experts into thinking it was a 13-year-old a couple of weeks ago.

Next on the horizon, we have The Singularity. That is supposed to be an intelligent machine that is able to replicate itself. After that comes extropianism, the idea of transferring a soul to a machine. All of these phenomena are fetishes, I suspect on the part of people who cannot cope with other people. If I can't cope with the vagaries of real human emotions, I'll hope that machines are more predictable. Sad. It makes me think of Pinocchio as an object of affection, if not desire.

Perhaps it's not so much Pinocchio's wooden nature that is the problem, but the people who worship machines who need to get real.

There is something else that is interesting about the song. The machine is called Prayer and its answer is law. There is definitely something in that as well, yet another get-out-of-jail card for people who really don't want to have to deal with their own thoughts and emotions.

The line that divides the two sides of the thing is the internal and the external. There are a huge number of people who need external systems to get by, not just in the starry-eyed worship of tools like computers, but in slavish, slack-jawed belief in and acceptance of thought systems.

I suppose, at the extreme end of the spectrum, the most convinced and optimistic computer geek is really not much different from your average religious fundamentalist, if not in intensity of and reliance on belief, then possibly as much in need of control as a bog-standard hell-and-damnation preacher or some angry worshiper at the altar of Dawkins's atheism.

Machines are becoming the new cult. They define us and our lives, to the point where personal values and our own judgments become secondary resources and measures of value.

The proof of this lies in processing and graphics capability. Apparently the higher the capability, the more able the person. Yet, at the end of the day, there aren't all that many people who use much more than a browser, mail and a productivity suite.

It's about the same with religion. Why do people need theological sophistication and loopholes when the actual object of the exercise is to break as many commandments as possible and ignore the validity of strictures against venial sins? One proof of this lies in the church which handed out guns to people who converted.

If there is a truth to be had from this, it is that tools make things easier but don't make them better. Systems create their own messes. Computers become more complex and less predictable. Religion needs more enemies and more violence.

Posted in Extropianism | Comments Off on Tools make things easier but don’t make them better – Namibia Economist

Max More – Wikipedia

Posted: January 27, 2017 at 5:53 am

Max More (born Max T. O'Connor, January 1964) is a philosopher and futurist who writes, speaks, and consults on advanced decision-making about emerging technologies.[1][2]

Born in Bristol, England, More has a degree in Philosophy, Politics and Economics from St Anne's College, Oxford (1987).[3][4] His 1995 University of Southern California doctoral dissertation The Diachronic Self: Identity, Continuity, and Transformation examined several issues that concern transhumanists, including the nature of death, and what it is about each individual that continues despite great change over time.[5]

Founder of the Extropy Institute, Max More has written many articles espousing the philosophy of transhumanism and the transhumanist philosophy of extropianism,[6] most importantly his Principles of Extropy.[7][8] In a 1990 essay "Transhumanism: Toward a Futurist Philosophy",[9] he introduced the term "transhumanism" in its modern sense.[10]

More is also noted for his writings about the impact of new and emerging technologies on businesses and other organizations. His "proactionary principle" is intended as a balanced guide to the risks and benefits of technological innovation.[11]

At the start of 2011, Max More became president and CEO of the Alcor Life Extension Foundation, an organization he joined in 1986.[12]

Posted in Extropianism | Comments Off on Max More – Wikipedia

The Published Data of Robert Munafo at MROB

Posted: November 27, 2016 at 9:47 am

xkcd readers: RIES page is here

My resume (a bit untraditional, like me).

More pages and topics, grouped by subject:

Dice : Links to examples of all known types of dice, mainly organised by number of sides; and lists of tabletop games telling which dice are used for each.

Exponentially Distributed Dice : Dice to roll random numbers whose logarithms are evenly distributed. Benford: It's Not Just the Law It's How We Roll.
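The Benford connection mentioned above can be sketched numerically. This is my own illustration, not Munafo's code (the helper name `log_uniform` is hypothetical): numbers whose base-10 logarithms are uniformly distributed have leading digits that follow Benford's law.

```python
import math
import random
from collections import Counter

def log_uniform(lo=1.0, hi=1e6):
    """Draw a number whose base-10 logarithm is uniform on [log10(lo), log10(hi))."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

random.seed(0)
samples = [log_uniform() for _ in range(100_000)]

# Benford's law predicts P(leading digit = d) = log10(1 + 1/d),
# so '1' should lead about 30.1% of the time and '9' only about 4.6%.
counts = Counter(str(sample)[0] for sample in samples)
freq_1 = counts["1"] / len(samples)
```

Rolling ordinary dice and multiplying the results approximates the same distribution, which is presumably the point of dice built for it.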

Formal Power Series algorithms (for now, just the square root)

Generating Functions are discussed in the context of decimal expansions of fractions like 1/98 = 0.0102040816...
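The 1/98 example can be verified in a few lines. A minimal sketch (my own, under the standard generating-function identity, not taken from Munafo's page): since 1/98 = 1/(100 - 2), the decimal expansion encodes powers of 2 in successive digit pairs until carries blur the pattern.

```python
from decimal import Decimal, getcontext

# 1/98 = 1/(100 - 2) = sum over n >= 0 of 2**n / 100**(n + 1),
# which is why the digit pairs read 01 02 04 08 16 32 65 30 61 ...
# (exact powers of 2 until 2**n >= 100 starts pushing carries left).
getcontext().prec = 30
exact = Decimal(1) / Decimal(98)
partial = sum(Decimal(2) ** n / Decimal(100) ** (n + 1) for n in range(40))
```

The partial sum agrees with the exact quotient to the working precision, since the geometric tail beyond n = 40 is far below 10**-30.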

How Many Squares : One of the more popular types of mathematical troll-bait.

Hypercalc is my "calculator that cannot overflow", available as a web app and a more powerful Perl version for UNIX/Linux/Mac OS X and Cygwin.

Integer Sequences : I have many pages on specific integer sequences like A181785 and A020916 (some of which required quite specialized high-speed programs); pages on sequence categories like 2nd-order linear recurrence and Narayana numbers; and a sorted table of sequences I find interesting, with links to these and many other pages.

Large Numbers : The -illion names, tetration and faster-growing functions, Graham's number, and other fascinating ways to go far beyond the merely astronomical.

Lucas Garron's "Three Indistinguishable Dice" Problem : a fun little puzzle involving how to make better use of a basically useless three-dice-in-a-box thingy. As seen on Numberphile.

mcsfind : A program that will find the simplest recurrence-generated integer sequence given some initial terms.

Minimally Complex Sequences : An exhaustive index of integer sequences generated by simple "classical" formulas.

My Laws of Mathematics : Kinda humorous, kinda serious.

Numbers : Notable properties of specific numbers, like 2.685452..., 107, and 45360.

Puzzles : Not always mathematical, but those are the ones that I seem to discuss more often. MIT Mystery Hunt stuff is here too.

Riemann Zeta Function MP3 File : Music that only a number theorist would love...

Rubik's Cube and Other Rectangular Delights : My unique solution algorithms from 1982, some software, and a survey of similar puzzles like the 2×2×3.

Sloandora : An interactive browser for the OEIS, using a text concordance metric.

Computational Science :

I discovered that the Gray-Scott system supports patterns about as complex as those in Conway's Game of Life. This page links to the paper I wrote and the talk I gave on the same topic.

"Popular" Science :

Orrery : A solar system model built from LEGO parts.

Size Scales Exhibit : An adapted and improved version of the AMNH Rose Center exhibit.

Slide Rules : Notes about slide rules and photos of ones I made myself.

Solar System : Some facts and figures about the planets and their orbits.

Tides : A step-by-step explanation of the tides, designed to explain all the differences that occur from one day to the next and from one location to another.

Here are most of the topics that will not be obsolete with [company]'s next release of [product].

Alternative Number Systems : A list of the most popular alternatives to fixed-point (integer) and floating-point representations, and the advantages and disadvantages of each.

Answers to Questions on Stack Exchange Sites : I couldn't add corrections or comments directly, so I have published them here.

automeme : A tool for the automatic generation of "mad-libs" style texts, with a simple and powerful specification language.

Diameter-Degree or "TTL" problem : a graph theory problem related to wiring multi-processor computer networks.

Floating-Point Formats : A list of the ranges and precisions of various floating-point implementations over the years.

Functional Computation : A set of recursive definitions starting with a minimal set of LISP-like functions, and specifically related to the work of Turing and Gödel.

My High-Performance Projects : Just a brief summary of all the CPU-intensive projects I've created over the years, from Z80 assembly-language to the present.

Hypercalc : The calculator that doesn't overflow. Available as a JavaScript web application courtesy of Kenny TM~ Chan, and in a standalone Perl version that supports 295-digit precision and is programmable in BASIC.

LogCPU : a simple, very efficient load monitor for MacOS X. (This is in the general CS category because it is a good example of elegant display and UI)

Minimal RNN Implementation in Python : A recurrent neural network that models plain text, based on this gist by @karpathy, but greatly enhanced.

MIRA : A text-only web browser with unique features, designed for scholars and others who conduct research on the Internet.

Perl scripts : The language of choice of those who have that occasional "hankerin' for some hackerin'"

png-csum-fix : Program that recomputes the CRCs in a PNG file; also allows changing colour table (palette) entries on the fly.

Programming Languages : An automated survey of the popularity of various computer languages.

RHTF : The "embarrassingly-readable" markup language I created for these webpages.

SimpleGet : A small stand-alone replacement for the perl library LWP::Simple.

The SPEC Benchmarks : Conversion formulas for the industry-standard CPU benchmarks.

This QR does not loop.

Items in this section are brand-specific, dated, and/or purely recreational.

Apple II Colors : An exact calculation of the RGB values of the lores (COLOR=) and hires (HCOLOR=) colors on the Apple ][, derived by converting through the Y R-Y B-Y and YUV systems.

Apple Product History : List of computer and PDA models released by Apple, some with details.

Chip's Challenge : Maps and hints, and some walkthroughs, for the Atari Lynx version of the videogame.

Computer History : The history of the development of computers, with a focus on performance issues and the adoption of supercomputer design ideas into desktop machines.

The Eden World Builder File Format : Eden World Builder is a Minecraft-like game for iOS. I worked out the internal data format so I could print maps.

Eden World Builder : Other pages about Eden World Builder, including a change log and versions of my main creation Mega City Tokyo Unified.

Fitbit Flex : Technical specifications, a list of the flashing light patterns, and some instructions that should have been included in the manual.

iBook: How to Prevent Sleep : A simple, cheap and reversible way to prevent the iBook from going to sleep when you close the lid.

LibreOffice Bugs and Workarounds : Making a great free software project slightly greater.

The Lunacraft/Mooncraft File Format : Lunacraft, originally called Mooncraft is a Minecraft-like game for iOS. I worked out the internal data format so I could print maps and recover from the dreaded "terrain regen bug".

Lynx Chip's Challenge : My maps and hints rendered with a custom font.

MacBook Pro : mainly concerned with hard drive upgrades.

Missile : My first Macintosh program also happens to be one of the few programs that ran from the Mac's introduction in 1984 until the switch to Intel twenty years later.

Playstation : My notes about Sony Playstation games.

Q04B : Based on 2048, with color graphics, boosts, and a lot more (artwork by Randall Munroe).

SDRAM : A list of some older SDRAM chip types giving their speed and size.

TextEdit: Fixing the Margins Bug : How to alter TextEdit's printing code so that it respects the margins from Page Setup.

UNIX Project Build Tools : A partial history of the tools (cc, make, etc.) used to build from source and why it keeps getting more and more complicated.

xapple2 : My modifications to add accurate sound reproduction (/dev/audio) to the xapple2 Apple ][ emulator for UN*X and Linux.

Abbreviations : Common phrases that are frequently made obscure by abbreviation.

Archetypes : A Periodic Table of Jungian personality archetypes.

Associativity Matrix : A little twisty maze of thoughts, all different.

Blogs : In addition to my primary blog, Robert Munafo's General Weblog, I also have two blogs hosted by blog-specific websites: Robert Munafo on Blogspot and Robert Munafo on WordPress.

Core Values : An attempt to describe my preferences for how to prioritise life and make decisions (thus, highly subjective and in need of continual revision).

Data : Miscellaneous small bits of data I want to publish.

Experimental page : For my experiments with HTML and JavaScript (it's published mainly so I can check it from gatewayed ISPs like AOL and WebTV, and limited viewers like smartphones).

Extropianism : Why we shouldn't feel quite so bad about the future.

Filk : Some funny lyrics I've written.

Friends : Links to Web pages of various friends and co-conspirators.

Gearing Ideas and Notes : Mostly related to my orrery work.

General Blog : A weblog of articles not limited to any particular topic.

History of Music : Focusing on the industrialized distribution of music as a recent and unnatural phenomenon.

Index to the One True Thread : Silliness and serious creativity for hopelessly obsessed xkcd fans.

Linux rules : Duh.

Mamma Mia : The Broadway musical.

MBTI : A Karnaugh map of the Myers-Briggs personality types

Mispronounced Words : My bid for the "most useless collection of data on a web page" world record.

MIT OCW 18.06SC errata : Has several corrections and a cross-reference guide to the newer textbook by Prof. Strang.

Movies : Some material I have written that relates to a few movies. (See also the Top Movies list.)

Nest Thermostat: Using Auxiliary Heat : One of the many topics that are poorly documented on the official website.

Non-Obvious Answers to the Stupid Problems Life Gives Us : like how to find a practical lid-wrench.

Non-Obvious Answers to the Senseless Impediments Google Throws at Us : pretty much what it says on the tin.

Pod People : from Apple Customers to Werewolves organized and classified for your amusement and as a public service.

PVC Espas : A musical instrument I have built.

SAMPA : A clear concise way to represent phonetics in ASCII.

South Park on the Gun Debate, Without Blood or Bullets : Fan fiction written for early 2013.

Split Sleep : Sleeping twice per day for greater efficiency

Top Movies : My list of top movies of all time, rated by attendance (number of tickets sold) in U.S. theatres.

xkcd 1190 "Time" : discussion forum index

You do.

The following links represent projects that I lost interest in.

Apple Computer : Notes about the company and its history (currently just covers the "1984" commercial)

Posted in Extropianism | Comments Off on The Published Data of Robert Munafo at MROB
