If I had to pick a single statement that relies on more Overcoming Bias content I've written than any other, that statement would be:
Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals will contain almost nothing of worth.
If you believe this statement, there is cause to be very worried about the future of humanity. At present, the future gets its detailed, reliable inheritance from human morals and metamorals because your children will have almost exactly the same kind of brain that you do, and (to a lesser extent) because they will be immersed in a culture that is, in the grand scheme of things, extremely similar to today's. Over many generations and technological changes, this inheritance of values breaks down to some small extent, though it seems to me that human hunter-gatherers from the very distant past wanted roughly the same things that modern humans do; they would be relatively at home in a utopia that we designed. That is a chain of reliable inheritance of values spanning fifty thousand years, from mother to daughter and father to son.
When intelligence passes to another medium, the "default" outcome seems to be the breaking of that chain. As Frank puts it:
Each aspiration and hope in a human heart, every dream you’ve ever had, stopped in its tracks by a towering, boring, grey slate wall.
How would it happen? Those who lust after power and money would unleash the next version of intelligence, probably in competition with other groups. They would engage in wishful thinking, understate the risks, and push each other forward in a race to be first. The race might involve human intelligence enhancement or human uploads. The end result could be systems with more effective ways of modeling and influencing the world than ordinary humans have. These systems might work by attempting to shape the universe in some way; if they did, they would shape it so as not to include humans, unless their goals were very carefully specified. But humans do not have a good track record of getting a task exactly right on the first attempt under conditions of pressure and competition.
----------------------------------------------------------------------------------------------
To answer a few critics on Facebook: Stefan Pernar writes:
The argument in the linked post goes something like this: a) higher intelligence = b) more power = c) we will be crushed. The jump from b) -> c) is a bare assertion.
This post does not claim that any highly intelligent, powerful AI will crush us. It implicitly claims (amongst other things) that any highly intelligent, powerful AI whose goal system does not contain "detailed reliable inheritance from human morals and metamorals" will effectively delete us from reality. The justification for this statement is alluded to in the Value is Fragile post. As Yudkowsky states there, the arguments for this statement, the counterarguments against it, and the counter-counterarguments constitute a large amount of written material, much of which ought to appear on the Less Wrong wiki but most of which is currently buried in Eliezer Yudkowsky's Less Wrong posts.
The most important concepts seem to be listed as Major Sequences on the Less Wrong wiki; see in particular the Fun Theory sequence, the Metaethics sequence, and the How To Actually Change Your Mind sequence.