Response to Pearce

David Pearce writes, in response to my recent blog post:

Crucial to the cognitive success of organic robots like us seems to be superior "mind-reading" skills - the ability to "take the intentional stance". So presumably post-biological intelligence will need the functional analogues of empathetic understanding if it is successfully to interact with (post)human sentients. "Mind-blind" autistics who are mathematical prodigies are still vulnerable. Even a SuperAsperger would be vulnerable: calculating everything at the level of microphysics is too computationally demanding even for a SuperAsperger.
So presumably post-biological intelligence will need a sophisticated theory of mind - otherwise it's just a glorified idiot-savant. Or does your scenario assume that sophisticated functional analogues of empathy are feasible without phenomenal consciousness? Are you assuming a runaway growth in empathetic understanding by post-biological intelligence that outclasses "mind-reading" organic sentients - and yet has no insight into why organic sentients find some states (e.g. agony) intrinsically normative but others (e.g. cosmic paperclip tiling) totally trivial???


It is entirely possible to have a post-biological optimizing intelligence that outclasses "mind-reading" organic sentients and knows exactly why organic sentients find some states intrinsically normative, but simply doesn't care. It knows that the punishment it is meting out to you hurts you, it knows that you don't want to be killed, and yet it doesn't care. It just wants to produce the maximal number of paperclips. This is highly counterintuitive for humans, because we possess mirror neurons and we instinctively sympathize with the suffering of other human beings. But that is just another human universal trait that doesn't generalize to all minds. Heck, it doesn't even generalize to all evolved minds: predators do not empathize with the suffering of their prey, and, as David Pearce is keen to point out, this makes the natural world an agony machine.

Alternatively, _if_ post-biological intelligence is subject to the pleasure-pain axis, then I can't see how the cosmic outcome is likely to be different from that for (hypothetically) organic life, i.e. some friendly sentient version of "Heaven" - not paperclips. Phenomenal pleasure and pain will be no less intrinsically normative if they can be instantiated in other substrates. [I confess here I'm a sceptical carbon chauvinist / micro-functionalist.] Crudely, what unites Einstein and a flatworm is the pleasure-pain axis. All sentient life is subject to the pleasure principle.

It seems unlikely to me that all possible optimizing minds are subject to the "pleasure/pain" axis.

For reasons we don't understand, the phenomenology of pleasure and suffering is intrinsically normative. [Try plunging your hand into ice-cold water and holding it there for as long as you can for a nasty reminder of what this means.] Perhaps what _will_ mark a major discontinuity in the evolution of sentient life is that we'll shortly be able to rewrite our own source code and gain direct control over our own reward circuitry. I don't pretend to know what guise "Heaven" will take. ["Orgasmium", cerebral bliss, modes of blissful well-being yet unknown - choose your favourite utopia.] But I reckon in future the hedonic tone of all experience will be hugely enriched. One can argue whether such hedonically amplified states will "really" be as valuable as they feel. But they'll certainly seem to be valuable - more subjectively valuable than anything accessible now - and therefore worth striving for. IMO 🙂

And, the "IMO" is key here. In the opinion of the paperclip-maximizer, the only thing worth striving for is more paperclips.

Pleasure and pain are intrinsically normative to minds that have a pleasure/pain reward system. Other minds don't have one. And even then, there is a difference between my pain and your pain: your pain is not intrinsically motivating to me. To quote from Value is Fragile:

You do have values, even when you're trying to be "cosmopolitan", trying to display a properly virtuous appreciation of alien minds. Your values are then faded further into the invisible background - they are less obviously human. Your brain probably won't even generate an alternative so awful that it would wake you up, make you say "No! Something went wrong!" even at your most cosmopolitan. E.g. "a nonsentient optimizer absorbs all matter in its future light cone and tiles the universe with paperclips". You'll just imagine strange alien worlds to appreciate.

Trying to be "cosmopolitan" - to be a citizen of the cosmos - just strips off a surface veneer of goals that seem obviously "human".

But if you wouldn't like the Future tiled over with paperclips, and you would prefer a civilization of...

...sentient beings...

...with enjoyable experiences...

...that aren't the same experience over and over again...

...and are bound to something besides just being a sequence of internal pleasurable feelings...

...learning, discovering, freely choosing...

...well, I've just been through the posts on Fun Theory that went into some of the hidden details on those short English words.

Values that you might praise as cosmopolitan or universal or fundamental or obvious common sense, are represented in your brain just as much as those values that you might dismiss as merely human. Those values come of the long history of humanity, and the morally miraculous stupidity of evolution that created us.

These values do not emerge in all possible minds. They will not appear from nowhere to rebuke and revoke the utility function of an expected paperclip maximizer.

If you want a vision of the default future without special effort spent on AI friendliness, look at this video. You are the baby wildebeest; the next form of intelligence is the hyena pack:
