Time: The quest for fake meat

Time Magazine on the potential for artificial meat.

Excerpt:

What has confounded fake-meat producers for years is the texture problem. Before an animal is killed, its flesh essentially marinates, for all the years that the animal lives, in the rich biological stew that we call blood: a fecund bath of oxygen, hormones, sugars and plasma. Vegan foods like tofu, tempeh (fermented soy) and seitan (wheat gluten) don't have the benefit of sloshing around in something so complex as blood before they go onto your plate. So how do you create fleshy, muscley texture without blood?

NS: Rise of the replicators

Enthusiasts are building machines that can make just about anything – including their own robotic offspring. New Scientist explains:

Still, ingenious as these machines are, they merely churn out piles of parts. What about assembly? A heap of plastic and metal is not a machine, just as you don't have much in common with a pile of flesh and bones.

Greg Chirikjian, a roboticist at Johns Hopkins University in Baltimore, Maryland, agrees. "When a prototype only makes parts, the machine that made those parts wasn't reproduced," he says. A true self-replicator must handle both fabrication and assembly. Chirikjian and his colleague Matt Moses are aiming to achieve this with a kind of Lego set that doesn't need anyone to play with it.

The pair have already demonstrated key parts of such a system, using around 100 plastic blocks. Although it cannot yet fabricate these blocks itself, the machine is able to move in 3D to pick up and bind them into larger structures. Moses is currently working on having it make a complete replica of its own structure using Lego-like bricks, though the machine still relies on conventional motors - which have to be installed by hand - to drive its activity.

Kate Ray’s semantic web mini-documentary

Kate Ray's Web 3.0 mini-doc explores the potential for the semantic web.

Web 3.0 from Kate Ray on Vimeo.

This documentary offers some interesting insights into how difficult it is to both develop and predict the next iteration of the Web. I can't help but feel, however, that human cognition is missing from the discussion; in my mind, the next iteration of the web has to be further conceptualized as a part of the exosomatic brain. Anything we can do to better streamline the process of accessing and processing information will be a step in this direction. Put another way, how can we blur the divide and reduce the friction that currently separates the human mind from the internet?

Maybe that's Web 4.0 stuff....

Singer: Should this be the last generation?

Philosopher and ethicist Peter Singer asks, is a world with people in it better than one without?

Excerpt:

Put aside what we do to other species — that’s a different issue. Let’s assume that the choice is between a world like ours and one with no sentient beings in it at all. And assume, too — here we have to get fictitious, as philosophers often do — that if we choose to bring about the world with no sentient beings at all, everyone will agree to do that. No one’s rights will be violated — at least, not the rights of any existing people. Can non-existent people have a right to come into existence?

I do think it would be wrong to choose the non-sentient universe. In my judgment, for most people, life is worth living. Even if that is not yet the case, I am enough of an optimist to believe that, should humans survive for another century or two, we will learn from our past mistakes and bring about a world in which there is far less suffering than there is now. But justifying that choice forces us to reconsider the deep issues with which I began. Is life worth living? Are the interests of a future child a reason for bringing that child into existence? And is the continuance of our species justifiable in the face of our knowledge that it will certainly bring suffering to innocent future human beings?

Clay Shirky: The Internet Makes You Smarter

New book: Cognitive Surplus: Creativity and Generosity in a Connected Age, in which Clay Shirky argues that, amid the silly videos and spam, lie the roots of a new reading and writing culture.

Excerpt from the WSJ review:

The case for digitally-driven stupidity assumes we'll fail to integrate digital freedoms into society as well as we integrated literacy. This assumption in turn rests on three beliefs: that the recent past was a glorious and irreplaceable high-water mark of intellectual attainment; that the present is only characterized by the silly stuff and not by the noble experiments; and that this generation of young people will fail to invent cultural norms that do for the Internet's abundance what the intellectuals of the 17th century did for print culture. There are likewise three reasons to think that the Internet will fuel the intellectual achievements of 21st-century society.

First, the rosy past of the pessimists was not, on closer examination, so rosy. The decade the pessimists want to return us to is the 1980s, the last period before society had any significant digital freedoms. Despite frequent genuflection to European novels, we actually spent a lot more time watching "Diff'rent Strokes" than reading Proust, prior to the Internet's spread. The Net, in fact, restores reading and writing as central activities in our culture.

The present is, as noted, characterized by lots of throwaway cultural artifacts, but the nice thing about throwaway material is that it gets thrown away. The issue isn't whether there's lots of dumb stuff online—there is, just as there is lots of dumb stuff in bookstores. The issue is whether there are any ideas so good today that they will survive into the future. Several early uses of our cognitive surplus, like open source software, look like they will pass that test.

Evolution: Too much reliance on memory is bad

Why good memory may be bad for you: The counterintuitive finding that too good a memory makes foragers inefficient reveals a glimpse of the forces that govern the evolution of intelligence.

From the article:

It's easy to imagine that a good memory would confer significant benefits to a foraging animal.

But it's not quite so straightforward, say Denis Boyer at Universite Paul Sabatier in France and Peter Walsh at the Universidad Nacional Autonoma de Mexico in Mexico.

These guys have created one of the first computer models to take into account a creature's ability to remember the locations of past foraging successes and revisit them.

Their model shows that in a changing environment, revisiting old haunts on a regular basis is not the best strategy for a forager.

It turns out instead that a better strategy is to inject an element of randomness into a regular foraging pattern. This improves foraging efficiency by a factor of up to 7, say Boyer and Walsh.

Clearly, creatures of habit are not as successful as their opportunistic cousins.
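The tradeoff the model describes is easy to see in a toy simulation. This is not Boyer and Walsh's actual model — it's a minimal sketch with made-up parameters (patch depletion and regrowth rates are arbitrary), but it captures the mechanism: in a changing environment, a forager that always returns to remembered sites keeps harvesting depleted patches.

```python
import random

def forage(memory_prob, steps=2000, n_sites=50, depletion=0.9, seed=0):
    """Toy forager on n_sites food patches. Each step it either revisits a
    remembered productive site (with probability memory_prob) or picks a
    site at random. Visits deplete a patch and patches regrow slowly, so
    a changing environment punishes pure habit."""
    rng = random.Random(seed)
    food = [1.0] * n_sites
    memory = []   # sites that previously yielded a good haul
    total = 0.0
    for _ in range(steps):
        if memory and rng.random() < memory_prob:
            site = rng.choice(memory)        # habit: return to an old haunt
        else:
            site = rng.randrange(n_sites)    # opportunism: explore at random
        gained = food[site]
        total += gained
        food[site] *= (1.0 - depletion)      # the visit depletes the patch
        if gained > 0.5:
            memory.append(site)              # remember only good hauls
        food = [min(1.0, f + 0.01) for f in food]  # slow regrowth everywhere
    return total
```

With these toy parameters, a purely habitual forager (`forage(1.0)`) gets stuck revisiting its first depleted patch and collects far less food than one that mixes memory with random exploration — the qualitative result the article reports.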

What the internet is doing to our brains

New book: The Shallows by Nicholas Carr on how the internet is changing the way we think.

Excerpt from the NYT review:

For Carr, the analogy is obvious: The modern mind is like the fictional computer. "I can feel it too," he writes. "Over the last few years, I've had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory." While HAL was silenced by its human users, Carr argues that we are sabotaging ourselves, trading away the seriousness of sustained attention for the frantic superficiality of the Internet. As Carr first observed in his much discussed 2008 article in The Atlantic, "Is Google Making Us Stupid?," the mere existence of the online world has made it much harder (at least for him) to engage with difficult texts and complex ideas. "Once I was a scuba diver in a sea of words," Carr writes, with typical eloquence. "Now I zip along the surface like a guy on a Jet Ski."

This is a measured manifesto. Even as Carr bemoans his vanishing attention span, he's careful to note the usefulness of the Internet, which provides us with access to a near infinitude of information. We might be consigned to the intellectual shallows, but these shallows are as wide as a vast ocean.

Nevertheless, Carr insists that the negative side effects of the Internet outweigh its efficiencies. Consider, for instance, the search engine, which Carr believes has fragmented our knowledge. "We don't see the forest when we search the Web," he writes. "We don't even see the trees. We see twigs and leaves." One of Carr's most convincing pieces of evidence comes from a 2008 study that reviewed 34 million academic articles published between 1945 and 2005. While the digitization of journals made it far easier to find this information, it also coincided with a narrowing of citations, with scholars citing fewer previous articles and focusing more heavily on recent publications. Why is it that in a world in which everything is available we all end up reading the same thing?

Digital degradation

This is what happens to a video after it has been uploaded, downloaded and re-uploaded to YouTube 1,000 times. Straight copying of digital data is (mostly) lossless -- it's compression and conversion that creates this sort of nastiness. Some of this degradation is noticeable even after only one generation (MP3s as an example).
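Why does repeated re-encoding degrade while straight copying doesn't? Each lossy pass discards information (fine detail, amplitude precision) that the next pass cannot recover, so the errors compound. Here's an illustrative sketch — a crude stand-in for a real codec, using a blur plus quantization rather than anything YouTube actually does:

```python
import math

def generation(samples, levels=64):
    """One upload/re-encode cycle: a mild low-pass blur (codecs discard
    fine detail) followed by coarse quantization (rounding amplitudes to
    a limited set of levels). Both steps are irreversible."""
    n = len(samples)
    blurred = [(samples[max(i - 1, 0)] + 2 * samples[i] + samples[min(i + 1, n - 1)]) / 4
               for i in range(n)]
    step = 2.0 / levels
    return [round(b / step) * step for b in blurred]

def snr_db(original, degraded):
    """Signal-to-noise ratio in dB; lower means more degradation."""
    sig = sum(o * o for o in original)
    err = sum((o - d) ** 2 for o, d in zip(original, degraded))
    return 10 * math.log10(sig / err) if err else float("inf")

# A pure tone, re-encoded ten times: the SNR drops with every generation.
tone = [math.sin(2 * math.pi * 8 * i / 256) for i in range(256)]
copy = tone
for g in range(10):
    copy = generation(copy)
print(round(snr_db(tone, copy), 1), "dB after 10 generations")
```

A bit-exact copy, by contrast, would score infinite SNR no matter how many times it was duplicated — which is exactly why only the compress/convert steps produce the nastiness in the video.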

This is what the original looks like:

James Hughes interviewed by Tricycle about transhumanism, Cyborg Buddha project

Buddhist magazine Tricycle recently interviewed the IEET's James Hughes about his unique take on transhumanism and Buddhism -- and how the two seemingly disparate philosophies should be intertwined.

Excerpt:

As a former Buddhist monk, Professor James Hughes is concerned with realization. And as a Transhumanist—someone who believes that we will eventually merge with technology and transcend our human limitations—he endorses radical technological enhancements to humanity to help achieve it. He describes himself as an “agnostic Buddhist” trying to unite the European Enlightenment with Buddhist enlightenment.

Sidestepping the word “happiness,” Hughes prefers to speak of “human flourishing,” avoiding the hedonism that “happiness” can imply.

“I’m a cautious forecaster,” says Hughes, a bioethicist and sociologist, “but I think the next couple of decades will probably be determined by our growing ability to control matter at the molecular level, by genetic engineering, and by advances in chemistry and tissue-engineering. Life expectancy will increase in almost all countries as we slow down the aging process and eliminate many diseases.” Not squeamish about the prospect of enhancing—or, plainly put, overhauling— the human being, Hughes thinks our lives may be changed most by neurotechnologies—stimulant drugs, “smart” drugs, and psychoactive substances that suppress mental illness.

More.

Richard Eskow, who did the interview, followed it up with a rebuttal of sorts: Cerebral Imperialism. In the article he writes,

Why “artificial intelligence,” after all, and not an “artificial identity” or “personality”? The name itself reveals a bias. Aren’t we confusing computation with cognition and cognition with identity? Neuroscience suggests that metabolic processes drive our actions and our thoughts to a far greater degree than we’ve realized until now. Is there really a little being in our brains, or contiguous with our brains, driving the body?

To a large extent, isn’t it the other way around? Don’t our minds often build a framework around actions we’ve decided to take for other, more physical reasons? When I drink too much coffee I become more aggressive. I drive more aggressively, but am always thinking thoughts as I weave through traffic: “I’m late.” “He’s slow.” “She’s in the left lane.” “This is a more efficient way to drive.”

Why do we assume that there is an intelligence independent of the body that produces it? I’m well aware of the scientists who are challenging that assumption, so this is not a criticism of the entire artificial intelligence field. There’s a whole discipline called “friendly AI” which recognizes the threat posed by the Skynet/Terminator “computers come alive and eliminate humanity” scenario. A number of these researchers are looking for ways to make artificial “minds” more like artificial “personalities.”

Hopefully more to come on this intriguing debate.

Speaking at the H+ Summit at Harvard, June 11-12

I'll be at the H+ Summit @ Harvard during the weekend of June 11-12 and I hope to see you there. The Summit is an educational and scientific outreach event covering the impact of technology on the human condition. It is hosted and organized by the Harvard College Future Society, in cooperation with Humanity+.

Tickets are still available, so register now.

Weaving in futurism, technoprogressivism and transhumanism, the H+ Summit is part of a larger cultural conversation about what it means to be human and, ultimately, more than human. This issue lies at the heart of the transhumanist movement -- and a common topic on this blog.

Key speakers include Ray Kurzweil, Aubrey de Grey, Stephen Wolfram and Ronald Bailey.

Oh, and little old me.

Here's the title and abstract of my talk:

When the Turing Test is not enough: Towards a functionalist determination of personhood and the advent of an authentic machine ethics

Abstract: Empirical approaches to identifying the characteristics requisite for nonhuman personhood are proving increasingly insufficient, particularly as neuroscientists further refine functionalist models of cognition. To say that an agent "appears" to have awareness or intelligence is inadequate. Rather, what is required is the discovery and understanding of those processes in the brain that are responsible for capacities such as self-awareness, empathy and emotion. Subsequently, the shift to a neurobiological basis for personhood will have implications for those hoping to develop self-aware artificial intelligence and brain emulations. The Turing Test alone cannot identify machine consciousness; instead, computer scientists will need to work off the functionalist model and be mindful of those processes that produce awareness. Because the potential to do harm is significant, an effective and accountable machine ethics needs to be considered. Ultimately, it is our responsibility as citizen-scientists to develop a rigorous understanding of personhood so that we can identify and work with machine minds in the most compassionate and considerate manner possible.

See you there!

Five reasons why Stephen Hawking—and everyone else—is wrong about alien threats

Stephen Hawking is arguing that humanity may be putting itself in mortal peril by actively trying to contact aliens (an approach that is referred to as Active SETI). I’ve got five reasons why he is wrong.

Hawking has said that, “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.”

He’s basically arguing that extraterrestrial intelligences (ETIs), once alerted to our presence, may swoop in and indiscriminately take what they need from us—and possibly destroy us in the process; David Brin paraphrased Hawking’s argument by saying, “All living creatures inherently use resources to the limits of their ability, inventing new aims, desires and ambitions to suit their next level of power. If they wanted to use our solar system, for some super project, our complaints would be like an ant colony protesting the laying of a parking lot.”

It’s best to keep quiet, goes the thinking, lest we attract any undesirable alien elements.

A number of others have since chimed in and offered their two cents, writers like Robin Hanson, Julian Savulescu, and Paul Davies, along with Brin and many more. But what amazes me is that everyone is getting it wrong.
Here’s the deal, people:

1. If aliens wanted to find us, they would have done so already

First, the Fermi Paradox reminds us that the Galaxy could have been colonized many times over by now. We’re late for the show.
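The "many times over" claim is just arithmetic. Here's a back-of-envelope sketch — every figure is an assumed, commonly cited order of magnitude, not a measurement:

```python
# Rough orders of magnitude, all assumptions:
galaxy_diameter_ly = 100_000   # Milky Way stellar disk
galaxy_age_yr = 10e9           # ~10 billion years of star formation
probe_speed_c = 0.01           # self-replicating probes at a modest 1% of c
overhead = 10                  # fudge factor for stopovers and replication time

wave_yr = (galaxy_diameter_ly / probe_speed_c) * overhead  # one colonization wave
sweeps = galaxy_age_yr / wave_yr                           # waves that fit in the Galaxy's lifetime

print(f"one wave: ~{wave_yr:.0e} yr; sweeps possible: ~{sweeps:.0f}")
```

Even with slow probes and a generous overhead factor, a single colonization wave takes on the order of a hundred million years — leaving room for on the order of a hundred complete sweeps of the Galaxy before we ever showed up.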

Second, let’s stop for a moment and think about the nature of a civilization that has the capacity for interstellar travel. We’re talking about a civ that has (1) survived a technological Singularity event, (2) is in the possession of molecular-assembling nanotechnology and radically advanced artificial intelligence, and (3) has made the transition from biological to digital substrate (space-faring civs will not be biological—and spare me your antiquated Ring World scenarios).

Now that I’ve painted this picture for you, and under the assumption that ETIs are proactively searching for potentially dangerous or exploitable civilizations, what could possibly prevent them from finding us? Assuming this is important to them, their communications and telescopic technologies would likely be off the scale. Bracewell probes would likely pepper the Galaxy. And Hubble bubble limitations aside, they could use various spectroscopic and other techniques to identify not just life bearing planets, but civilization bearing planets (i.e. looking for specific post-industrial chemical compounds in the atmosphere, such as elevated levels of carbon dioxide).

Moreover, whether we like it or not, we have been ‘shouting out to the cosmos’ for quite some time now. Ever since the first radio signal beamed its way out into space we have made our presence known to anyone caring to listen to us within a radius of about 80 light years.
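The 80-light-year figure is simple to check: radio propagates at light speed, so the detectable shell's radius in light years equals the years elapsed since our earliest broadcasts. Taking roughly 1930 for the rise of strong commercial radio (the exact onset year, and 2010 as the time of writing, are assumptions):

```python
first_broadcasts = 1930     # assumed onset of strong commercial radio
year_of_writing = 2010      # approximate date of this post
bubble_radius_ly = year_of_writing - first_broadcasts  # radio travels 1 ly/yr

print(bubble_radius_ly, "light years")
```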

The cat’s out of the bag, folks.

2. If ETIs wanted to destroy us, they would have done so by now

I’ve already written about this and I suggest you read my article, “If aliens wanted to they would have destroyed us by now.”

But I’ll give you one example. Keeping the extreme age of the Galaxy in mind, and knowing that every single solar system in the Galaxy could have been seeded many times over by now with various types of self-replicating probes, it’s not unreasonable to suggest that a civilization hell-bent on looking out for threats could have planted a dormant berserker probe in our solar system. Such a probe would be waiting to be activated by a radio signal, an indication that a potentially dangerous pre-Singularity intelligence now resides in the ‘hood.

In other words, we should have been destroyed the moment our first radio signal made its way through the solar system.

But because we’re still here, and because we’re on the verge of graduating to post-Singularity status, it’s highly unlikely that we’ll be destroyed by an ETI. Either that or they’re waiting to see what kind of post-Singularity type emerges from human civilization. They may still choose to snuff us out the moment they’re not satisfied with whatever it is they see.

Regardless, our communication efforts, whether active or passive, will have no bearing on the outcome.

3. If aliens wanted our solar system’s resources, they would have taken them by now

Again, given that we’re talking about a space-faring post-Singularity intelligence, it’s ridiculous to suggest that we have anything of material value for a civilization of this type. The only thing I can think of is the entire planet itself, which they could convert into computronium (a Jupiter brain)—but even that’s a stretch; we’re just a speck of dust.

If anything, they may want to tap into our sun’s energy output (e.g., they could build a Dyson Sphere or Matrioshka brain) or convert our gas giants into massive supercomputers.

It’s important to keep in mind that the only resource a post-Singularity machine intelligence could possibly want is one that furthers their ability to perform megascale levels of computation.

And it’s worth noting that, once again, our efforts to make contact will have no influence on this scenario. If they want our stuff they’ll just take it.

4. Human civilization has absolutely nothing to offer a post-Singularity intelligence

But what if it’s not our resources they want? Perhaps we have something of a technological or cultural nature that’s appealing.

Well, what could that possibly be? Hmm, think, think think….

What would a civilization that can crunch 10^42 operations per second want from us wily and resourceful humans….

Hmm, I’m thinking it’s iPads? Yeah, iPads. That must be it. Or possibly yogurt.

5. Extrapolating biological tendencies to a post-Singularity intelligence is asinine

There’s another argument out there that suggests we can’t know the behavior or motivational tendencies of ETIs, and that we therefore need to tread very carefully. Fair enough. But where this argument goes too far is in the suggestion that advanced civs act in accordance with their biological ancestry.

For example, humans may actually be nice relative to other civs who, instead of evolving from benign apes, evolved from nasty insects or predatory lizards.

I’m astounded by this argument. Developmental trends in human history have not been driven by atavistic psychological tendencies, but rather by such things as technological advancements, resource scarcity, economics, politics and many other factors. Yes, human psychology has undeniably played a role in our transition from jungle-dweller to civilizational species (traits like inquisitiveness and empathy), but those are low-level factors that ultimately take a back seat to the emergent realities of technological, demographic, economic and politico-societal development.

Moreover, advanced civilizations likely converge around specific survivalist fitness peaks that result in the homogenization of intelligence; there won’t be a lot of wiggle room in the space of all possible survivable post-Singularity modes. In other words, an insectoid post-Singularity SAI or singleton will almost certainly be identical to one derived from ape lineage.

Therefore, attempts to extrapolate ‘human nature’ or ‘ETI nature’ to the mind of its respective post-Singularity descendant are equally problematic. The psychology or goal structure of an SAI will be of a profoundly different quality than that of a biological mind that evolved through the processes of natural selection. While we may wish to impose certain values and tendencies onto an SAI, there’s no guarantee that a ‘mind’ of that capacity will retain even a semblance of its biological nature.

So there you have it.

Transmit messages into the cosmos. Or don’t. It doesn’t really matter because in all likelihood no one’s listening and no one really cares. And if I’m wrong, it still doesn’t matter—ETIs will find us and treat us according to their will.
