The Utopia Force

I have a new article up on GOOD.is about what I could only call "transhuman goodness" - enjoy!
--------------------------------------------------------------------------------------------
Think of this note as if it were an invitation to a ball—a ball that will take place only if people show up. We call the lives we lead here “Utopia.”

– Nick Bostrom, Letter from Utopia.

Why should you care about the singularity when studies show that, beyond a certain point, material possessions and technology don’t actually make people any happier? Two weeks ago, I spoke about the possibility of giving a superintelligent AI the goal of doing whatever the human race would, after careful consideration, decide was best. This is known as coherent extrapolated volition, or CEV. The outcome of this process would be very much unlike the technology, gadgets, and consumerism of today.

As Nick Bostrom has so eloquently reminded us, humanity’s biggest problems aren’t what we think they are: the most insidious and hardest to notice, according to Bostrom, is that life is not nearly as good as it could be. This problem is genuinely difficult for us to see; what could possibly be substantially better about our lives, even here in the developed world?

To start with, one has to realize that we’re not built for our own good. Evolution built us to care only about passing on our genes. We are easier to hurt than to please, and we come with happiness set-points that are nearly impossible to shift significantly without altering our biology. Studies have shown that giving a person $1,000,000 doesn’t actually make them happier in the long run, because of the hedonic treadmill effect: the human brain gets “used to” your circumstances, so that if your circumstances improve, your happiness rises at first but then returns to its baseline. Our genes “calculate” that our bodies are worth keeping in good shape for 50 or so years; after that, we are of little use, so our genes allow us to fall apart.

The society around us is also not built entirely for our benefit; it is a set of self-sustaining institutions that are, to a greater or lesser degree, influenced by the whims of a capricious electorate. Corporations survive by hiring marketing departments to make us want things that we don’t really need, and by hiring lobbying departments to make sure that the democratic process doesn’t get in the way (see, for example, the tobacco industry). The challenges that work presents to us are often stifling and tedious; working in an office is not the natural human environment, which is why so many people ask for (and never get) a job that involves being outdoors.

Perhaps most importantly, the social dynamics that emerge from the interaction of many people who are each individually seeking status, power, and happiness often result in zero- and negative-sum interactions. People are mean to each other; they argue, fight, cheat, lie, and frequently make each other’s lives a misery. This is ultimately a result of our evolved psychology, which was shaped by situations in which scarcity forced humans to kill each other in order to survive.

A world fashioned by the CEV algorithm would, at the very least, fix all of these fairly obvious flaws. Human psychology and biology could be altered to make us kinder, happier, healthier, and free from involuntary death or aging, and to remove the hedonic treadmill effect. New and better institutions could be developed from the ground up, and complex yet nourishing intellectual and physical challenges could be designed to replace what we today call “work.”

Iain M. Banks has described such a world in his science-fiction novels about a future society called “The Culture”: enhanced humans live for thousands of years and do exactly what they want with their time. They create art and science, they socialize, and they enjoy a selection of customized virtual-reality and real-world experiences and safe recreational drugs. They are all permanently young and attractive, with bodies and brains that have been altered in beneficial ways; they rarely argue with each other or have significant or prolonged negative interactions; and they have lots of sex.

If we consider all the possible ways that the universe could be arranged, and rank them in terms of how good they would be, Banks’ utopia certainly gets a very high rank. But it seems unlikely that it is the very best—or even close to the best. Banks’ utopia represents the limit of good experiences that we can currently think of and realize are good according to our complex values. Just how much better could it get?

In order to make an accurate guess about the limits of how good the world could be, one must think about the problem indirectly or by analogy, because some states are both so good and so complex that we cannot even imagine them yet. For example, what are the limits of goodness of subjective experience? What is the greatest degree of mutual respect, friendship, passion, or love that is possible?

At the risk of severely embarrassing myself forever over the internet, I’ll illustrate this with a personal example. Before I had ever kissed anyone, I didn’t actually know that the subjective experience of a passionate kiss was possible. I knew that kissing was possible, but not what it would feel like, or even that feelings that good were possible. Not only was I missing out, but I was unaware that I was missing out.

Humanity as a whole may have the same problem. We haven’t realized that a quality of life far surpassing what we currently experience might be possible. Imagine a human raised by animals, without the ability to speak, or the very real South American tribe whose members are unable to learn, even when given food incentives, to count up to 10. Imagine a person who spent their entire life without experiencing romantic love, or a human who had neither hearing nor sight. These impoverished human beings are to us as we are to humans living in a world after a positive singularity: there are very probably intellectual, aesthetic, and emotional enhancements that would put their recipients as far beyond the citizens of The Culture as we are beyond a blind, solitary ignoramus.

A benevolent superintelligent AI would drastically and precisely alter the world, but it would do so in a direction dictated by our preferences. It would be like a new physical force that consistently pushed life towards our wisest utopian ideal. This ideal, or something very close to it, really is attainable; the laws of physics do not forbid it. It is attainable whether or not we feel it is “unreasonable” that life could get that good, whether or not we shy away from it for fear of sounding religious, and whether or not we close our eyes to the possibility because it scares us to believe that there is something greater out there. But we might still let it slip through our fingers.

And indeed we might. As Carl Sagan put it, “Our descendants, safely arrayed on many worlds throughout the solar system and beyond, will marvel at how vulnerable the repository of raw potential once was, how perilous our infancy.”
