Quebec government to fund in vitro fertility treatments

The normalization of biotechnologies continues apace, as does the notion that fertility treatments are just that: therapies that should be covered by the healthcare system.

Quebec is going ahead with a plan to fully fund in vitro fertility treatments for women. Health Minister Yves Bolduc says the province will fund up to three cycles of in vitro fertilization (IVF) treatments by the end of the spring.

Aside from the humanitarian element, Bolduc believes the program will save the province up to $30 million a year that is currently spent treating premature babies born as a result of fertility treatments.

Link.

Rae-Hunter’s take on Inception

Casey Rae-Hunter of The Contrarian offers his take on Christopher Nolan's Inception. Rae-Hunter, who has guest blogged on Sentient Developments, is surprised that many of the technorati have neglected such themes as neurosecurity and mindfulness in their reviews of the film. He writes:

To me, the idea of establishing a defense against neural invaders is interesting, especially in light of new discoveries in neuroplasticity and the battle to maintain computer network security.
Fascinating stuff, but I’m pretty sure our psyches are less in danger of being harmed by outside forces than our own mental habits.

One of Nolan’s most original ideas is that the subconscious can be trained to act as a built-in police force during synaptic security breaches. The director seems to gravitate towards characters who exhibit tremendous martial/intellectual/transcendental discipline on the road to exceptionalism (Batman, The Prestige). This includes certain mental technologies.

Buddhism has for centuries been aware of the mind’s plasticity. It teaches (among other things) that we can shape the function of our neural networks by observing our thoughts and establishing new patterns. In therapeutic psychology, this is called Cognitive Behavioral Therapy (CBT) — a remarkably effective treatment for a host of mental afflictions. Borrowing from Buddhism, it prescribes mindfulness as a method for rooting out “bad code” and establishing a healthier psyche.

Remapping the mind requires a great deal of discipline, but it can be done. Brains are far less rigid than stone, and even stone can be shaped by water. In this view, our thoughts are similar to ripples on a swift-moving river. Like thoughts, these ripples spontaneously and constantly appear and disappear. By not fixating on the origin of the ripples, but rather accepting the simple fact of their existence, we can begin to see the river as a whole and even influence its flow.

Inception takes a more martial approach to mindfulness, but it does offer hints as to how we can keep our shit together in the midst of chaos. In the film, one of the characters experiences acute panic when he realizes the reality he thought was solid is in fact quite the opposite. (We experience similar feelings of disassociation when someone close to us dies, we lose our job, get divorced, etc.) The character is told to focus on his breath and remember his training. The particulars of instruction aren’t revealed, but I’m guessing it involves meditation and mindfulness.

Rae-Hunter is absolutely right; Inception is a treasure trove of food-for-thought. Nolan's film should keep thinkers and writers busy for years to come.

Speaking of neurosecurity, it may someday be possible to create firewalls for the 'jacked-in' mind: Fighting back against mindhacks.

HuffPo: Sims, Suffering and God: Matrix Theology and the Problem of Evil

Check out Clay Farris Naff's latest article, Sims, Suffering and God: Matrix Theology and the Problem of Evil:

And that brings us back to the Sims. How can we know whether we're simulations in some superduper computer built by posthumans? Some pretty amusing objections have been raised, such as quantum tests that a simulation would fail. It seems safe to say that any sim-scientists examining the sim-universe they occupy would find that the laws of that universe are self-consistent. To assert that a future computer could simulate us, complete with consciousness, but crash when it came to testing Bell's Inequality strikes me as ludicrous. Unless, of course, the program were released by Microsoft. Oooh, sorry, Bill, cheap shot. Let's take it for granted that we could not expose a simulation from within -- unless the Creators wanted us to.

But the problem of pointless suffering leads me to a very different conclusion. Recall Bostrom's first conjecture: that few or none of our civilizations reach a posthuman stage capable of building computers that can run the kind of simulation in which we might exist. There are many ways civilization could end (just ask the dinosaurs!), but the one absolutely necessary condition for survival in an environment of continually increasing technological prowess is peace. Not a mushy, bumper sticker kind of peace, but the robust containment of conflict and competition within cooperative frameworks. (Robert Wright, in his brilliant if uneven book NonZero: The Logic of Human Destiny, unfolds this idea beautifully.)

What is civilization if not a mutual agreement to sacrifice some individual desires (to not pay taxes, for example, or to run through red lights) for the greater common good? Communication, trust, and cooperation make such agreements possible, but the one ingredient in the human psyche that propels civilization forward even as we gain technological power is empathy.

Link.
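The quantitative core of the argument Naff is riffing on is worth spelling out. Bostrom's 2003 paper, "Are You Living in a Computer Simulation?", reduces it to a single fraction; the formula is Bostrom's (simplified here by folding the fraction of interested posthuman civilizations into N), while the toy numbers below are mine:

    # Bostrom's simulation fraction: f_sim is the share of all observers
    # with human-type experiences who are simulated. f_p is the fraction
    # of civilizations that reach a posthuman stage; n_sims is the average
    # number of ancestor simulations each such civilization runs.

    def simulation_fraction(f_p, n_sims):
        # f_sim = f_p * N / (f_p * N + 1)
        return (f_p * n_sims) / (f_p * n_sims + 1)

    # Naff leans on Bostrom's first conjecture: if almost no civilization
    # survives to posthumanity, f_p is tiny and f_sim collapses, no matter
    # how cheap the simulations become.
    for f_p in (1e-9, 1e-3, 0.1):
        print(f"f_p={f_p:g} -> f_sim={simulation_fraction(f_p, 1e6):.4f}")

The arithmetic shows where Naff's argument about peace and empathy plugs in: it bears directly on the size of f_p.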

Abou Farman: The Intelligent Universe

Abou Farman has penned a must-read essay about Singularitarianism and modern futurism--and it's worth the read even if you don't agree with him and his occasionally sleight-of-hand dismissals. Dude has clearly done his homework, resulting in provocative and insightful commentary.

Thinkers mentioned in this article include Ray Kurzweil, Eliezer Yudkowsky, Giulio Prisco, Jamais Cascio, Tyler Emerson, Michael Anissimov, Michael Vassar, Bill Joy, Ben Goertzel, Stephen Wolfram and many, many more.

Excerpt:

Images of transhuman and posthuman figures, hybrids and chimeras, robots and nanobots became uncannily real, blurring further the distinction between science and science fiction. Now, no one says a given innovation can’t happen; the naysayers simply argue that it shouldn’t. But if the proliferating future scenarios no longer seem like science fiction, they are not exactly fact either—not yet. They are still stories about the future and they are stories about science, though they can no longer be banished to the bantustans of unlikely sci-fi. In a promise-oriented world of fast-paced technological change, prediction is the new basis of authority.

That is why futurist groups, operating thus far on the margins of cultural conversation, were thrust into the most significant discussions of the twenty-first century: What is biological, what artificial? Who owns life when it’s bred in the lab? Should there be cut-off lines to technological interventions into life itself, into our DNA, our neurological structures, or those of our foodstuffs? What will happen to human rights when the contours of what is human become blurred through technology?

The futurist movement, in a sense, went viral. Bill McKibben’s Enough (2004) faced off against biophysicist Gregory Stock’s Redesigning Humans (2002) on television and around the web. New groups and think tanks formed every day, among them the Foresight Institute and the Extropy Institute. Their general membership started to overlap, as did their boards of directors, with figures like Ray Kurzweil ubiquitous. Heavyweight participants include Eric Drexler—the father of nanotechnology—and MIT giant Marvin Minsky. One organization, the World Transhumanist Association, which broke off from the Extropy Institute in 1998, counts six thousand members, with chapters across the globe.

If the emergence of NBIC and the new culture of prediction galvanized futurists, the members were also united by an obligatory and almost imperial sense of optimism, eschewing the dystopian visions of the eighties and nineties. They also learned the dangers of too much enthusiasm. For example, the Singularity Institute, wary of sounding too religious or rapturous, presents its official version of the future in a deliberately understated tone: “The transformation of civilisation into a genuinely nice place to live could occur, not in a distant million-year future, but within our own lifetimes.”

Link.

Economist: The future is another country

Interesting article in The Economist this week about Facebook and how it's starting to look and act like a sovereign state:

In some ways, it might seem absurd to call Facebook a state and Mr Zuckerberg its governor. It has no land to defend; no police to enforce law and order; it does not have subjects, bound by a clear cluster of rights, obligations and cultural signals. Compared with citizenship of a country, membership is easy to acquire and renounce. Nor do Facebook’s boss and his executives depend directly on the assent of an “electorate” that can unseat them. Technically, the only people they report to are the shareholders.

But many web-watchers do detect country-like features in Facebook. “[It] is a device that allows people to get together and control their own destiny, much like a nation-state,” says David Post, a law professor at Temple University. If that sounds like a flattering description of Facebook’s “groups” (often rallying people with whimsical fads and aversions), then it is worth recalling a classic definition of the modern nation-state. As Benedict Anderson, a political scientist, put it, such polities are “imagined communities” in which each person feels a bond with millions of anonymous fellow-citizens. In centuries past, people looked up to kings or bishops; but in an age of mass literacy and printing in vernacular languages, so Mr Anderson argued, horizontal ties matter more.

So if newspapers and tatty paperbacks can create new social and political units, for which people toil and die, perhaps the latest forms of communication can do likewise. In his 2006 book “Code: Version 2.0”, the legal scholar Lawrence Lessig noted that online communities were transcending the limits of conventional states—and predicted that members of these communities would find it “difficult to stand neutral in this international space”.

To many, that forecast still smacks of cyber-fantasy. But the rise of Facebook at least gives pause for thought. If it were a physical nation, it would now be the third most populous on earth. Mr Zuckerberg is confident there will be a billion users in a few years. Facebook is unprecedented not only in its scale but also in its ability to blur boundaries between the real and virtual worlds. A few years ago, online communities evoked fantasy games played by small, geeky groups. But as technology made possible large virtual arenas like Second Life or World of Warcraft, an online game with millions of players, so the overlap between cyberspace and real human existence began to grow.

Link.

Should we clone Neanderthals?

Zach Zorich of Archaeology explores the scientific, legal, and ethical obstacles to cloning Neanderthals:

The ultimate goal of studying human evolution is to better understand the human race. The opportunity to meet a Neanderthal and see firsthand our common but separate humanity seems, on the surface, too good to pass up. But what if the thing we learned from cloning a Neanderthal is that our curiosity is greater than our compassion? Would there be enough scientific benefit to make it worth the risks? "I'd rather not be on record saying there would," Holliday told me, laughing at the question. "I mean, come on, of course I'd like to see a cloned Neanderthal, but my desire to see a cloned Neanderthal and the little bit of information we would get out of it...I don't think it would be worth the obvious problems." Hublin takes a harder line. "We are not Frankenstein doctors who use human genes to create creatures just to see how they work." Noonan agrees, "If your experiment succeeds and you generate a Neanderthal who talks, you have violated every ethical rule we have," he says, "and if your experiment fails...well. It's a lose-lose." Other scientists think there may be circumstances that could justify Neanderthal cloning.

"If we could really do it and we know we are doing it right, I'm actually for it," says Lahn. "Not to understate the problem of that person living in an environment where they might not fit in. So, if we could also create their habitat and create a bunch of them, that would be a different story."

"We could learn a lot more from a living adult Neanderthal than we could from cell cultures," says Church. Special arrangements would have to be made to create a place for a cloned Neanderthal to live and pursue the life he or she would want, he says. The clone would also have to have a peer group, which would mean creating several clones, if not a whole colony. According to Church, studying those Neanderthals, with their consent, would have the potential to cure diseases and save lives. The Neanderthals' differently shaped brains might give them a different way of thinking that would be useful in problem-solving. They would also expand humanity's genetic diversity, helping protect our genus from future extinction. "Just saying 'no' is not necessarily the safest or most moral path," he says. "It is a very risky decision to do nothing."

Hawks believes the barriers to Neanderthal cloning will come down. "We are going to bring back the mammoth...the impetus against doing Neanderthal because it is too weird is going to go away." He doesn't think creating a Neanderthal clone is ethical science, but points out that there are always people who are willing to overlook the ethics. "In the end," Hawks says, "we are going to have a cloned Neanderthal, I'm just sure of it."

Link.

Wisdom: From Philosophy to Neuroscience by Stephen S. Hall [book]

Stephen S. Hall's new book, Wisdom: From Philosophy to Neuroscience, looks interesting.

Promotional blurbage:

A compelling investigation into one of our most coveted and cherished ideals, and the efforts of modern science to penetrate the mysterious nature of this timeless virtue.

We all recognize wisdom, but defining it is more elusive. In this fascinating journey from philosophy to science, Stephen S. Hall gives us a dramatic history of wisdom, from its sudden emergence in four different locations (Greece, China, Israel, and India) in the fifth century B.C. to its modern manifestations in education, politics, and the workplace. We learn how wisdom became the provenance of philosophy and religion through its embodiment in individuals such as Buddha, Confucius, and Jesus; how it has consistently been a catalyst for social change; and how revelatory work in the last fifty years by psychologists, economists, and neuroscientists has begun to shed light on the biology of cognitive traits long associated with wisdom—and, in doing so, begun to suggest how we might cultivate it.

Hall explores the neural mechanisms for wise decision making; the conflict between the emotional and cognitive parts of the brain; the development of compassion, humility, and empathy; the effect of adversity and the impact of early-life stress on the development of wisdom; and how we can learn to optimize our future choices and future selves.

Hall’s bracing exploration of the science of wisdom allows us to see this ancient virtue with fresh eyes, yet also makes clear that despite modern science’s most powerful efforts, wisdom continues to elude easy understanding.

Hall's book is part of a larger trend in which wisdom studies, along with happiness studies, are starting to enter (or is that re-enter?) mainstream academic and clinical realms of inquiry.

A. C. Grayling has penned an insightful and critical review of Hall's book:

First, though, one must point to another and quite general difficulty with contemporary research in the social and neurosciences, namely, a pervasive mistake about the nature of mind. Minds are not brains. Please note that I do not intend anything non-materialistic by this remark; minds are not some ethereal spiritual stuff a la Descartes. What I mean is that while each of us has his own brain, the mind that each of us has is the product of more than that brain; it is in important part the result of the social interaction with other brains. As essentially social animals, humans are nodes in complex networks from which their mental lives derive most of their content. A single mind is, accordingly, the result of interaction between many brains, and this is not something that shows up on an fMRI scan. The historical, social, educational, and philosophical dimensions of the constitution of individual character and sensibility are vastly more than the electrochemistry of brain matter by itself. Neuroscience is an exciting and fascinating endeavour which is teaching us a great deal about brains and the way some aspects of mind are instantiated in them, but by definition it cannot (and I don't for a moment suppose that it claims to) teach us even most of what we would like to know about minds and mental life.

I think the Yale psychologist Paul Bloom put his finger on the nub of the issue in the March 25th number of Nature where he comments on neuropsychological investigation into the related matter of morality. Neuroscience is pushing us in the direction of saying that our moral sentiments are hard-wired, rooted in basic reactions of disgust and pleasure. Bloom questions this by the simple expedient of reminding us that morality changes. He points out that "contemporary readers of Nature, for example, have different beliefs about the rights of women, racial minorities and homosexuals compared with readers in the late 1800s, and different intuitions about the morality of practices such as slavery, child labour and the abuse of animals for public entertainment. Rational deliberation and debate have played a large part in this development." As Bloom notes, widening circles of contacts with other people and societies through a globalizing world plays a part in this, but it is not the whole story: for example, we give our money and blood to help strangers on the other side of the world. "What is missing, I believe," says Bloom, and I agree with him, "is an understanding of the role of deliberate persuasion."

Contemporary psychology, and especially neuropsychology, ignores this huge dimension of the debate not through inattention but because it falls well outside its scope. This is another facet of the point that mind is a social entity, of which it does not too far strain sense to say that any individual mind is the product of a community of brains.

Intersexed athlete Caster Semenya given green light to compete

South African sprinter Caster Semenya has been given approval by the International Association of Athletics Federations (IAAF) to race as a female. The 19-year-old runner is an intersexed individual with internal male testes that produce testosterone at rates considerably above average for women. After a gender test in September 2009, the IAAF decided to ban her from racing, citing a biological advantage that was not of Semenya's doing. Now, after conducting an investigation, the Federation has passed a ruling allowing Semenya to race again.

This is a very interesting, if perplexing, decision, and I wonder how it's going to play against the International Olympic Committee's (IOC) recent decision calling for intersexed athletes to undergo a medical procedure in order to qualify for the Olympics. By all accounts, Semenya has not had the procedure -- or, if she has, she is not disclosing that information to the public. Moreover, the results of her most recent gender test are not being disclosed.

Very fishy.

So why did the IAAF suddenly change its mind, and why are they not giving any reasons? Did they feel pressured by the public? Is this a case of political correctness on the track? Or did Semenya have the medical procedure? And if so, why not disclose it? Or would that open a huge can of worms -- and a possible charge of a human rights violation?

Let's assume Semenya did not have the procedure. Has the IAAF therefore decided that intersexed persons are cleared to compete against unambiguously gendered individuals? And what about her competitors? I can't imagine that they're very happy right now. This would seem to be a dangerous and ill-conceived precedent. Semenya is not the only intersexed athlete currently competing in Olympic sports. What about them?

I have a feeling this story is far from over.

Video games as art?

The interwebs are angry because Roger Ebert, a film critic who knows virtually nothing about video games, is arguing that video games will never be considered an art form. Grant Tavinor of Kotaku takes a more nuanced approach to the question and uses the popular BioShock video game to make his case:

Finally, and this is my judgment, BioShock is the result of the intention to make an artwork. Intentions can be slippery things, but it seems evident enough in the game that it is intended to be something more than just a game: BioShock is intended to have the features listed above (they are not accidental) and it is intended to have these features as a matter of its being art.

Hence, BioShock seems an entirely natural candidate for art status. It has, in some form, all but one of the criteria. The one it lacks -- belonging to an established artistic form -- it lacks because of the very newness of video games. BioShock is not necessarily a masterpiece (the last act is problematic) but this is beside the point; the vast majority of art works are not masterpieces. Surely it would be unfair to deny BioShock art status when it has so many of the qualities that in other uncontested art works account for their art status?

I agree that part of the problem is the nascent status of video-games-as-art. Pac-Man never attempted to be art; BioShock clearly does. It's still early days.

Moreover, even if you deny current games any artistic merit, Ebert's claim that they will never be legitimate art forms is suspect. Never? Really? Not even when augmented reality enters the picture? Or completely immersive virtual reality?

Even more profoundly, a number of years ago I speculated about the potential for directly altering subjective and emotional experience and how mental manipulation could become an art form. In the article, Working the Conscious Canvas, I wrote:

It's conceivable that predetermined sets of emotional experiences could be a future art form. Artists might, for example, manipulate emotions alongside established art forms, a la A Clockwork Orange-but certainly not for the same questionable ends.

For example, imagine listening to Beethoven's "Ode to Joy" or "Moonlight Sonata" while having your emotional centers manipulated in synch with the music's mood and tone. You'd be compelled to feel joy when the music is joyful, sadness when the music is sad.

The same could be done with film. In fact, last century, director Orson Welles, who was greatly influenced by German expressionistic filmmaking, directed movies in which the subjective expression of inner experiences was emphasized (Touch of Evil, for example). In the 1960s, Alfred Hitchcock, also a student of expressionism, went a step further by creating and editing sequences in a way that was synchronized with subjective perception, such as the quick-cut shower sequence in Psycho.

In the future, audiences could share emotional experiences with a film's protagonist. Imagine watching Saving Private Ryan, Titanic or Gone with the Wind in such a manner. The experience would be unbelievably visceral, nothing like today's experience of sitting back and watching.

The beauty of such experiences is that sophisticated virtual reality technology isn't required, just the control mechanisms to alter emotional experience in real-time.

Of course, some will argue that when artists can directly manipulate emotions, they will have lost a dialogue with their audience, as audience members will simply be feeling exactly what's intended. But this won't necessarily be the case. Rather, audience members will respond to emotional tapestries in unique ways based on their personal experiences, the same way they do now to other art forms.

Imagine this same technology, but in the context of video games. Now there's some scary potential.

Art, whether it be traditional or novel, has always been about transcending the individual and sharing the subjective experience of others. As I've written before, "The greatest artists thrill us with their stories, endow us with emotional and interpersonal insight, and fill us with joy through beautiful melodies, paintings and dance. By doing so they give us a piece of their selves and allow us to venture inside their very minds—even if just for a little bit."

And yes, this includes video games.

Economist: War in the Fifth Domain

The latest cover article of The Economist poses the question: are the mouse and keyboard the new weapons of conflict?

Important thinking about the tactical and legal concepts of cyber-warfare is taking place in a former Soviet barracks in Estonia, now home to NATO’s “centre of excellence” for cyber-defence. It was established in response to what has become known as “Web War 1”, a concerted denial-of-service attack on Estonian government, media and bank web servers that was precipitated by the decision to move a Soviet-era war memorial in central Tallinn in 2007. This was more a cyber-riot than a war, but it forced Estonia more or less to cut itself off from the internet.

Similar attacks during Russia’s war with Georgia the next year looked more ominous, because they seemed to be co-ordinated with the advance of Russian military columns. Government and media websites went down and telephone lines were jammed, crippling Georgia’s ability to present its case abroad. President Mikheil Saakashvili’s website had to be moved to an American server better able to fight off the attack. Estonian experts were dispatched to Georgia to help out.

Many assume that both these attacks were instigated by the Kremlin. But investigations traced them only to Russian “hacktivists” and criminal botnets; many of the attacking computers were in Western countries. There are wider issues: did the cyber-attack on Estonia, a member of NATO, count as an armed attack, and should the alliance have defended it? And did Estonia’s assistance to Georgia, which is not in NATO, risk drawing Estonia into the war, and NATO along with it?

Such questions permeate discussions of NATO’s new “strategic concept”, to be adopted later this year. A panel of experts headed by Madeleine Albright, a former American secretary of state, reported in May that cyber-attacks are among the three most likely threats to the alliance. The next significant attack, it said, “may well come down a fibre-optic cable” and may be serious enough to merit a response under the mutual-defence provisions of Article 5.

Link.

NYT: Until Cryonics Do Us Part

The New York Times has published a piece about cryonicists and how not all family members buy into it. The article focuses on Robin Hanson, a name that should be familiar to most readers of this blog:

Among cryonicists, Peggy’s reaction might be referred to as an instance of the “hostile-wife phenomenon,” as discussed in a 2008 paper by Aschwin de Wolf, Chana de Wolf and Mike Federowicz. “From its inception in 1964,” they write, “cryonics has been known to frequently produce intense hostility from spouses who are not cryonicists.” The opposition of romantic partners, Aschwin told me last year, is something that “everyone” involved in cryonics knows about but that he and Chana, his wife, find difficult to understand. To someone who believes that low-temperature preservation offers a legitimate chance at extending life, obstructionism can seem as willfully cruel as withholding medical treatment. Even if you don’t want to join your husband in storage, ask believers, what is to be lost by respecting a man’s wishes with regard to the treatment of his own remains? Would-be cryonicists forced to give it all up, the de Wolfs and Federowicz write, “face certain death.”

Link.

The Brain Preservation Foundation: Better preservation through plastination

I've often thought that cryonics, the practice of storing tissue (namely the brain) in a vat of liquid nitrogen, may eventually come to be seen as a rather primitive and naive technique for preservation. While it may be the only current option for those hoping to capture and restore their brain states for future reanimation, cryonics as a concept may not stand the test of time. More sophisticated methods have already been proposed, including warm biostasis and plastination.

While warm biostasis remains a largely theoretical endeavor, brain plastination was recently given a considerable boost through the founding of the Brain Preservation Foundation. Launched by Acceleration Studies Foundation founder John Smart and Harvard neuroscientist Ken Hayworth, the BPF is seeking to facilitate the development of any technology that will effectively preserve the brain for eventual reanimation. While the foundation members' pet interest is in plastination, they are not married to any particular technique. As far as they're concerned, the successful development of any kind of brain preservation technology means that everyone wins.

To this end, the Foundation has launched the Brain Preservation Technology Prize – a prize for demonstrating ultrastructure preservation across an entire large mammalian brain, as verified by a comprehensive electron microscopic survey procedure. Think of it as an X-Prize for brain preservation technology. The Foundation wants to encourage researchers to develop techniques “capable of inexpensively and completely preserving an entire human brain for long-term storage with such fidelity that the structure of every neuronal process and every synaptic connection remains intact and traceable using today’s electron microscopic imaging techniques.”

The current purse is $100,000, but they expect the prize amount to increase as donors chip in. And in anticipation of success, the BPF has created a mind uploader's bill of rights.

As noted, the BPF has a special interest in brain plastination, mostly on account of Smart and Hayworth's extensive work in this field. If you've ever seen a Body Worlds exhibit, then you know about plastination. It is thought that brain-state may be preserved through the chemical conversion of brain matter into a non-degradable substrate, which is why the proposed technique is also referred to as chemical brain preservation. For example, it might be possible to flood a brain shortly after death with glutaraldehyde to fix proteins, followed by osmium tetroxide to stabilize lipids and other compounds. Essentially, this process could turn a deceased brain into a chunk of plastic that will last indefinitely.

Smart envisions the day when this technology is refined and streamlined to the point where preservation may cost as little as $2,000. Not a bad price for a radically extended life.

I recently had the opportunity to speak with Smart and Hayworth about their project at the Humanity+ Summit that was held in early June. As a conference attendee, I was given a tour of Harvard's Center for Brain Science Neuroimaging where Hayworth works. In his lab, Hayworth uses electron microscopy to delineate every synaptic connection in plastinated mouse brains, a process that preserves both structure and molecular-level information. Essentially, while they're working to preserve and analyse mouse brains, Hayworth and his team are developing the theory and technologies required to preserve human brains.

The tour of Hayworth's lab was jaw-dropping on many levels. Not only did I get a chance to see slides of brains at the nanometer scale, I got a chance to see real researchers doing real work in a real lab. It's transhumanism under construction; this wasn't airy-fairy armchair futurist fantasy -- this research is actually happening.

Hayworth speculates that scientists will produce a synapse-level map of an entire human brain over the next decade. As for mind uploading from a plastic-embedded brain, Hayworth believes that's about 50 years off.
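To make "synapse-level map" concrete: once the imaging and tracing are done, the deliverable is essentially a directed graph in which nodes are neurons and edges are synapses, and "traceable" means every process can be followed from source to target. A toy sketch in Python (mine, not Hayworth's actual pipeline):

    # A connectome reduced to its bare data structure: a directed graph
    # of neurons (nodes) and synapses (edges). Real pipelines also attach
    # position, morphology, and molecular annotations to each element.

    from collections import defaultdict

    class Connectome:
        def __init__(self):
            self.synapses = defaultdict(list)  # pre-synaptic -> post-synaptic

        def add_synapse(self, pre, post):
            self.synapses[pre].append(post)

        def downstream(self, neuron):
            # Trace every neuron reachable from `neuron` -- the graph
            # version of following a neuronal process end to end.
            seen, stack = set(), [neuron]
            while stack:
                for nxt in self.synapses[stack.pop()]:
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
            return seen

    c = Connectome()
    c.add_synapse("n1", "n2"); c.add_synapse("n2", "n3"); c.add_synapse("n1", "n3")
    print(c.downstream("n1"))  # -> {'n2', 'n3'} (order may vary)

The scale is what makes the real thing hard: a human brain holds on the order of 10^11 neurons and 10^14 synapses, which is why Hayworth's decade estimate is ambitious.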

Make a donation to the Brain Preservation Foundation today. Your life may depend on it.

Sam Vaknin: The Ten Errors of Science Fiction

Global Politician columnist Sam Vaknin argues in a recent article that science fiction is guilty of ten specific mistakes when postulating the characteristics of advanced extraterrestrial life. Specifically, he contends that sci-fi writers consistently buy into fallacies about:

  1. Life in the universe
  2. The concept of structure
  3. Communication and interaction
  4. Location
  5. Separateness
  6. Transportation
  7. Will and intention
  8. Intelligence
  9. Artificial vs. natural
  10. Leadership

While the article certainly offers some food for thought, Vaknin's call for writers to think more 'outside of the box' is a bit of a stretch, if not condescending. Science fiction writers, for the most part, take great pains to weave a coherent narrative around novel imaginings of what ETIs might look like. Moreover, Vaknin is himself guilty of considerable hand-waving: he argues that ETIs may be existentially and qualitatively of a different sort than what we might expect, but he provides no substantive or compelling evidence for that view.

Sure, I agree that ETIs may be dramatically different than what we can imagine and that they may exist outside of expected paradigms, but until our exoscience matures we should probably err on the side of the self-sampling assumption and figure that the ignition and evolution of life tends to follow a path similar to the one taken on Earth. Now, I'm not suggesting that we refrain from hypothesizing about radically different existence-states; I'm just saying that these sorts of extraordinary claims (like alternative intelligences spawning different quantum realities) require the requisite evidence. It's far too easy to fantasize about some kind of energy-based hive-mind living in the core of an asteroid; it's quite another thing to show that such a thing could arise through the laws of physics [my example, not Vaknin's].

In the article, Vaknin also posits six basic explanations for the Fermi Paradox (and the apparent failure of SETI) that are not mutually exclusive:

  1. That Aliens do not exist
  2. That the technology they use is far too advanced to be detected by us and, the flip side of this hypothesis, that the technology we use is insufficiently advanced to be noticed by them
  3. That we are looking for extraterrestrials at the wrong places
  4. That the Aliens are life forms so different to us that we fail to recognize them as sentient beings or to communicate with them
  5. That Aliens are trying to communicate with us but constantly fail due to a variety of hindrances, some structural and some circumstantial
  6. That they are avoiding us because of our misconduct (example: the alleged destruction of the environment) or because of our traits (for instance, our innate belligerence) or because of ethical considerations

Very quickly: point one is possible but grossly improbable; points two through five are essentially the same argument, namely that we don't yet know where, how, and what to look for; and point six violates the non-exclusivity principle, since it explains some but not all ETI behavior. It's odd that Vaknin selected these particular six arguments. There are many, many potential resolutions to the FP, and these are not particularly stronger than the others (though point #1 has a lot of traction among the Rare Earthers). And where is the Great Filter argument, which is possibly the strongest of them all?
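To see why any single resolution has to carry so much weight, consider the multiplicative structure of the problem. Here is the standard Drake equation (my framing, not Vaknin's), with deliberately made-up parameter values; a Great Filter is simply the claim that one of these factors is vanishingly small:

    # Drake equation: N = R* x fp x ne x fl x fi x fc x L, the expected
    # number of detectable civilizations in the galaxy. All parameter
    # values below are illustrative guesses, not data.

    def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
        return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

    # Optimistic guesses across the board:
    print(drake(7, 0.5, 2, 0.5, 0.5, 0.5, 10_000))   # 8750.0 civilizations
    # Same, except life almost never ignites (a filter at f_l):
    print(drake(7, 0.5, 2, 1e-9, 0.5, 0.5, 10_000))  # ~1.75e-05 -- silence

Because the terms multiply, one near-zero factor silences the whole galaxy, which is exactly why the Great Filter belongs on any serious list of resolutions.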

Nice try, Vaknin, but the Great Silence problem is more complex than what you've laid out.

NYT: Merely Human? That’s So Yesterday

I'm sure most of you have caught this by now, but the New York Times recently published a 5,000-word article about Singularity University, Ray Kurzweil, and the technological Singularity. All the usual suspects are referenced within, including the IEET's James Hughes, Terry Grossman, Peter Thiel, Peter Diamandis, Andrew Hessel, Sonia Arrison, and William S. Bainbridge.

A taste of the article:

Richard A. Clarke, former head of counterterrorism at the National Security Council, has followed Mr. Kurzweil’s work and written a science-fiction thriller, “Breakpoint,” in which a group of terrorists try to halt the advance of technology. He sees major conflicts coming as the government and citizens try to wrap their heads around technology that’s just beginning to appear.

“There are enormous social and political issues that will arise,” Mr. Clarke says. “There are vast groups of people in society who believe the earth is 5,000 years old. If they want to slow down progress and prevent the world from changing around them and they engaged in political action or violence, then there will have to be some sort of decision point.”

Mr. Clarke says the government has a contingency plan for just about everything — including an attack by Canada — but has yet to think through the implications of techno-philosophies like the Singularity. (If it’s any consolation, Mr. Long of the Defense Department asked a flood of questions while attending Singularity University.)

Mr. Kurzweil himself acknowledges the possibility of grim outcomes from rapidly advancing technology but prefers to think positively. “Technological evolution is a continuation of biological evolution,” he says. “That is very much a natural process.”

Disturbing fact revealed in the article: Google and Microsoft employees trailed only members of the military as the largest individual contributors to Ron Paul’s 2008 presidential campaign.

For a curious and infuriating response to the NYT article, be sure to check out Pete Shanks's "A Singular Kind of Eugenics," but be warned: the bullshit factor is off the charts (e.g. Shanks is terribly confused about the history of transhumanism, particularly the role and evolution of the Extropy Institute, the World Transhumanist Association, Humanity+ and the Institute for Ethics and Emerging Technologies).

Singularity Summit 2010: August 14-15

The Singularity Summit for 2010 has been announced and will be held on August 14-15 at the San Francisco Hyatt Regency. Be sure to register soon.

This year's Summit, which is hosted by the Singularity Institute, will focus on neuroscience, bioscience, cognitive enhancement, and other explorations of what Vernor Vinge called 'intelligence amplification' -- the other route to the technological Singularity.

Of particular interest to me will be the talk given by Irene Pepperberg, author of "Alex & Me," who has pushed the frontier of animal intelligence with her research on African Grey Parrots. She will be exploring the ethical and practical implications of non-human intelligence enhancement and of the creation of new intelligent life less powerful than ourselves.

A sampling of the speakers list includes:

  • Ray Kurzweil, inventor, futurist, author of The Singularity is Near
  • James Randi, skeptic-magician, founder of the James Randi Educational Foundation
  • Dr. Anita Goel, a leader in the field of bionanotechnology, Founder & CEO, Nanobiosym, Inc.
  • Dr. Irene Pepperberg, leading investigator of animal intelligence, trainer of the African Grey Parrot "Alex"
  • Prof. Alan Snyder, Director, Centre for the Mind at the University of Sydney, researcher in brain-computer interfaces
  • Prof. Steven Mann, augmented reality pioneer, professor at University of Toronto, "world's first cyborg"
  • Dr. Gregory Stock, bioethicist and biotech entrepreneur, author of Redesigning Humans: Our Inevitable Genetic Future
  • Dr. Ellen Heber-Katz, a professor at the Wistar Institute who studies rapid-regenerating mice
  • Joe Z. Tsien, scholar at the Medical College of Georgia, who created a strain of "Doogie Mouse" with twice the memory of average mice
  • Eliezer Yudkowsky, research fellow with the Singularity Institute
  • Michael Vassar, president of the Singularity Institute
  • David Hanson, CEO of Hanson Robotics, creator of the world's most realistic humanoid robots
  • Demis Hassabis, research fellow at the Gatsby Computational Neuroscience Unit at University College London

From the press release:

Will it one day become possible to boost human intelligence using brain implants, or create an artificial intelligence smarter than Einstein? In a 1993 paper presented to NASA, science fiction author and mathematician Vernor Vinge called such a hypothetical event a "Singularity", saying "From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye". Vinge pointed out that intelligence enhancement could lead to "closing the loop" between intelligence and technology, creating a positive feedback effect.

This August 14-15, hundreds of AI researchers, robotics experts, philosophers, entrepreneurs, scientists, and interested laypeople will converge in San Francisco to address the Singularity and related issues at the only conference on the topic, the Singularity Summit. Experts in fields including animal intelligence, artificial intelligence, brain-computer interfacing, tissue regeneration, medical ethics, computational neurobiology, augmented reality, and more will share their latest research and explore its implications for the future of humanity.
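Vinge's "closing the loop" remark rewards a little arithmetic. A toy model (mine, not Vinge's): if each generation of enhanced intelligence designs its successor faster by a constant factor, the whole series of improvements converges on a finite date instead of stretching across a million-year future:

    # Toy model of Vinge's positive feedback: generation n takes
    # first_step_years / speedup**n years to produce generation n+1.
    # For any speedup > 1 the total time converges toward
    # first_step_years / (1 - 1/speedup).

    def years_until_singularity(first_step_years, speedup, steps=1000):
        total, step = 0.0, first_step_years
        for _ in range(steps):
            total += step
            step /= speedup  # each generation works `speedup` times faster
        return total

    print(years_until_singularity(10, 2))    # -> ~20.0 years
    print(years_until_singularity(10, 1.1))  # -> ~110.0 years

The numbers are arbitrary; the point is that a constant speedup turns an open-ended series of improvements into a bounded countdown, which is the "blink of an eye" Vinge was gesturing at.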