This is a guest post by Darlene Cavalier, a writer and senior adviser at Discover Magazine. Darlene holds a master's degree from the University of Pennsylvania, and is a former Philadelphia 76ers cheerleader. She founded ScienceCheerleader.com and cofounded ScienceForCitizens.net to make it possible for lay people to contribute to science. Happy Thursday. Very pleased to be filling in for Sheril this month. These are big shoes to fill, to say the least. During my time with you, I hope my writings provide a bit of inspiration, provocation, or, failing that, some entertainment to brighten your day. All I ask in return is that you keep doing what you do so well here: share your ideas and comments. Some of you (two, three?) may know me as the Science Cheerleader, a persona who advocates--and creates mechanisms--for public participation in science and science policy. These are broad terms with multiple definitions, depending on the author's intention. Let's dive right into one of this author's intentions: to create a way for citizens and experts to participate in assessments of emerging technologies. Citizens, your time has come! On this day in history, Aretha Franklin released her hit song "Respect." And on THIS day, respect for your ...
Category Archives: Astronomy
Frost-Covered Asteroid Suggests Extraterrestrial Origin for Earth’s Oceans | 80beats
There are millions of asteroids in the asteroid belt between Mars and Jupiter, but yesterday attention focused on just one. According to a couple of studies in Nature, a large asteroid called 24 Themis is rife with water ice and organic molecules, and the researchers say that it could be more evidence that the water so precious to life on Earth came to our planet on board such rocks.
Two research teams took infrared images of 24 Themis, which is about 120 miles in diameter and was discovered in 1853. The asteroid has an extensive but thin frosty coating, likely replenished by a large reservoir of frozen water deep inside rock once thought to be dry and desolate [AP].
The team, led by Humberto Campins, says finding so much ice on the surface was a surprise; at the asteroid’s distance from the sun—3.2 astronomical units (AU), or just over three times as far as the Earth—exposed ice has a “relatively short lifetime,” the scientists write. As a result, the idea of a below-surface reservoir seems likely. (Icy comets aren’t nearly so close to the sun on average; Halley’s comet can come within 0.6 AU of the sun, but then retreats to a farthest distance of more than 35 AU.)
It might seem implausible that our planet’s water supply arrived incrementally as cargo on board comets or asteroids. But here’s how it may have happened: More than four billion years ago, after a massive collision between Earth and another large object created the moon, our planet was completely desiccated. Then, during the Late Heavy Bombardment that followed, when swarms of asteroids hit Earth, the ice those objects carried became our store of water [Wired.com]. The bombardment, which occurred nearly 4 billion years ago, was largely responsible for our moon’s puckered appearance. A 2005 Nature study estimated that between 3 and 8 zettagrams of material slammed into the moon during that time (zetta means 10 to the 21st power, or a billion times a trillion), which implies that plenty of rocks slammed into the Earth, too.
Asteroids just keep getting more interesting. As we noted on Monday, the Japanese spacecraft that touched down on an asteroid is limping home to Earth, hoping to return its results (and maybe an asteroid sample) to the home world by June. And President Obama’s revised space exploration plan includes the idea of astronauts visiting an asteroid—a possibility that’s all the more scientifically enticing if asteroids were indeed the bringers of our water.
Related Content:
DISCOVER: Did An Early Pummeling of Asteroids Lead to Life on Earth?
80beats: Did An Asteroid Strike Billions of Years Ago Flip the Moon Around?
80beats: Our Alien Atmosphere? Earth’s Gases May Have Arrived Here Aboard Comets
80beats: Danger, President Obama! Visiting an Asteroid Is Exciting, But Difficult
Image: NASA
Uh-Oh: Gulf Oil Spill May Be 5 Times Worse Than Previously Thought | 80beats
Over the last few days, estimates had held that the Gulf of Mexico oil spill was leaking about 1,000 barrels, or 42,000 gallons, of oil into the water each day—bad, but still not historically bad on the scale of the spill caused by the Exxon Valdez. Except now, after closer investigation, the National Oceanic and Atmospheric Administration says that oil company BP’s estimate might in fact be five times too low.
Rear Adm. Mary Landry, the Coast Guard’s point person, gave the new estimate yesterday as the Coast Guard began its planned controlled burn of some of the oil. While emphasizing that the estimates are rough given that the leak is at 5,000 feet below the surface, Admiral Landry said the new estimate came from observations made in flights over the slick, studying the trajectory of the spill and other variables [The New York Times]. Because the oil below the surface is so hard to measure or estimate, NOAA’s numbers are still rough estimates, too. BP’s chief operating officer told ABC News he thinks the number is probably somewhere between the two estimates.
But if NOAA’s high-end number is right, the oil spill caused by the explosion and sinking of the Deepwater Horizon just entered a new class of awful. Do the math: At the previous estimate—1,000 barrels (42,000 gallons) of oil per day—it would have taken this spill 261 days, or more than eight continuous months, to dump as much oil into the sea as the Exxon Valdez did near Alaska in 1989. But if it’s true that 5,000 barrels (210,000 gallons) are entering the Gulf each day, it would take just 53 days to top the Valdez’s total of 11 million gallons. Already 9 days have passed since the explosion.
While the Coast Guard commenced burning off some of the oil to try to keep the worst of it away from American shorelines, and BP attempted to reach emergency valves with undersea robots, company CEO Tony Hayward is preparing a new strategy. The London-based Hayward was in Louisiana on Wednesday looking at progress in fabricating a 100-ton steel dome the company hopes to lower over the oil leak. The dome could be ready by the weekend, but it would take two to four weeks to put it in place, if that can be done at all. The dome would funnel oil, natural gas, and seawater into a pipe leading to a floating processing and storage facility [Washington Post]. But while this has been done in a few hundred feet of water before, the Gulf oil spill emanates from thousands of feet below.
Related Content:
80beats: Coast Guard’s New Plan To Contain Gulf Oil Spill: Light It on Fire
80beats: Sunken Oil Rig Now Leaking Crude; Robots Head to the Rescue
80beats: Ships Race To Contain the Gulf of Mexico Oil Spill
80beats: Obama Proposes Oil & Gas Drilling in Vast Swaths of U.S. Waters
80beats: 21 Years After Spill, Exxon Valdez Oil Is *Still* Stuck in Alaska’s Beaches
Image: NOAA
How to Cook Steak in Your Beer Cooler | Discoblog
After years of serving as your faithful companion to ball games and keeping the brewskies frosty at backyard barbecues, your trusty beer cooler now has a new assignment--cooking up a gourmet meal, sous-vide style. For those of you who don't keep up with high-tech cookery, sous-vide is a method of cooking where food is heated for an extended period at relatively low temperatures. Unlike a slow cooker or Crock-Pot, the sous-vide process uses airtight plastic bags placed in hot water well below the boiling point (usually around 140 degrees Fahrenheit). The idea is to maintain the integrity and flavor of the food without overcooking it (but while still killing any bacteria that may be present). Normally, a sous-vide cooker like the SousVide Supreme would set you back hundreds of dollars, but chef J. Kenji López-Alt shows us how to use a beer cooler to cook a perfect piece of meat. All you have to do is fill up your beer cooler with water a couple of degrees higher than the temperature you'd like to cook your food at (to account for temperature loss when you add cold food to it), seal your food in a simple Ziploc bag, drop it in, and close ...
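That "couple of degrees higher" advice is just a heat-balance calculation: the cold food and the hot water trade heat until they meet at one temperature. Here's a rough sketch in Python; the water and steak masses, starting temperatures, and the specific heat of meat are illustrative assumptions, not measurements:

```python
# How much hotter than the target should the cooler water start, so that
# dropping in cold food still leaves the bath at the target temperature?
# All masses, temperatures, and the meat specific heat are rough assumptions.
C_WATER = 4186.0  # specific heat of water, J/(kg*K)
C_MEAT = 3500.0   # rough specific heat of raw meat, J/(kg*K)

def starting_water_temp(target_c, water_kg, food_kg, food_temp_c):
    """Water temperature (Celsius) at which the mixture equilibrates at target_c."""
    ratio = (food_kg * C_MEAT) / (water_kg * C_WATER)
    return target_c + ratio * (target_c - food_temp_c)

# 8 kg of water, a 400 g steak straight from the fridge (5 C), target 60 C (140 F):
t_start = starting_water_temp(60.0, 8.0, 0.4, 5.0)
print(f"Start the water at about {t_start:.1f} C")
```

With those numbers the answer comes out a bit over 62 C, i.e. start the bath roughly two degrees Celsius (about four Fahrenheit) above the target, which squares with the advice above.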
Modeling the probabilities of extinction | Gene Expression
Change is in the air today, whether it be climate change or human-induced habitat shifts. What’s a species in the wild to do? Biologists naturally worry a great deal about loss of biodiversity, and many non-biologist humans rather high up on Maslow’s hierarchy of needs also care. And yet species loss, or the threat of extinction, seems too often to impinge upon public consciousness only in a coarse categorical sense: for example, the Endangered Species Act classifications “threatened” and “endangered.” There are also vague general warnings or forebodings: warmer temperatures leading to mass extinctions as species cannot track their optimal ecology, and the like. And these warnings seem to err on the side of caution, as if populations of organisms are incapable of adapting and all species are as particular as the panda.
That’s why I pointed below to a recent paper in PLoS Biology, Adaptation, Plasticity, and Extinction in a Changing Environment: Towards a Predictive Theory. I am somewhat familiar with the work of one of the authors, Russell Lande, in quantitative and ecological genetics, as well as population biology. I was also happy to note that the formal model here is rather spare, perhaps a nod to the current lack of abstraction in this particular area. Why start complex when you can start simple? Here’s their abstract:
Many species are experiencing sustained environmental change mainly due to human activities. The unusual rate and extent of anthropogenic alterations of the environment may exceed the capacity of developmental, genetic, and demographic mechanisms that populations have evolved to deal with environmental change. To begin to understand the limits to population persistence, we present a simple evolutionary model for the critical rate of environmental change beyond which a population must decline and go extinct. We use this model to highlight the major determinants of extinction risk in a changing environment, and identify research needs for improved predictions based on projected changes in environmental variables. Two key parameters relating the environment to population biology have not yet received sufficient attention. Phenotypic plasticity, the direct influence of environment on the development of individual phenotypes, is increasingly considered an important component of phenotypic change in the wild and should be incorporated in models of population persistence. Environmental sensitivity of selection, the change in the optimum phenotype with the environment, still crucially needs empirical assessment. We use environmental tolerance curves and other examples of ecological and evolutionary responses to climate change to illustrate how these mechanistic approaches can be developed for predictive purposes.
Their model seems to stand in counterpoint to something called “niche modelling” (yes, I am not on “home territory” here!), which operates under the assumption that species are optimized for a particular set of abiotic parameters, and focuses on the shifts of those parameters over space and time. So extinction risk may be predicted from a shift in climate and the decrease or disappearance of potential habitat. The authors of this paper naturally observe that biological organisms are not quite so static: they exhibit both plasticity and adaptiveness within their own particular life histories, as well as the ability to evolve on a population-wide level over time. If genetic evolution is thought of as a hill-climbing algorithm, I suppose a niche model presumes that the hill moves while the climber stands pat. This static vision of the tree of life seems at odds with development, behavior, and evolution. The authors believe that a different formulation may be fruitful, and I am inclined to agree with them.
As I observed above, the formalism undergirding this paper is exceedingly simple. On the left-hand side you have the variable that more or less determines the risk of extinction, because it defines the maximum rate of environmental change at which the population can be expected to persist. This makes intuitive sense, as extremely volatile environments would be difficult for species and individual organisms to track. Too much variation over a short period of time, and no species can bend with the winds of change rapidly enough. Here is the list of parameters in the formalism (taken from box 1 of the paper):
ηc – critical rate of environmental change: maximum rate of change which allows persistence of a population
B – environmental sensitivity of selection: change in the optimum phenotype with the environment. It’s a slope, so 0 means that the change in environment doesn’t change optimum phenotype, while a very high slope indicates a rapid shift of optimum. One presumes this is proportional to the power of natural selection
T – generation time: average age of parents of a cohort of newborn individuals. Big T means long generation times, small T means short ones
σ² – phenotypic variance
h2 – heritability: the proportion of phenotypic variance in a trait due to additive genetic effects
rmax – intrinsic rate of increase: population growth rate in the absence of constraints
b – phenotypic plasticity: influence of the environment on individual phenotypes through development. Height is plastic; compare North Koreans vs. South Koreans
γ – stabilizing selection: this is basically selection pushing in from both directions away from the phenotypic optimum. The stronger the selection, the sharper the fitness gradient. Height exhibits some shallow stabilizing dynamics; the very tall and very short seem to be less fit
Examining the equation, and knowing the parameters, some relations which we comprehend intuitively become clear. The larger the denominator, the lower the maximum rate of environmental change that would allow for population persistence, and so the higher the probability of extinction. Species with large T, long generation times, are at greater risk. Scenarios where the environmental sensitivity of selection, B, is much greater than the ability of an organism to track its environment through phenotypic plasticity, b, increase the probability of extinction. Obviously selection takes some time to operate, assuming you have extant genetic variation, so if a sharp shift in environment with radical fitness implications occurs, and the species is unable to track it in any way, population size will crash and extinction may become imminent.
On the numerator you see that the more heritable variation you have, the higher ηc. The rate of adaptation is proportional to the amount of heritable phenotypic variation extant within the population, because selection needs variance away from the old optimum toward the new one to shift the population’s central tendency. In other words, if selection doesn’t result in a change in the next generation because the trait isn’t passed on through genes, then the population can’t shift its median phenotype (though presumably if there is stochastic phenotypic variation from generation to generation it could persist if enough individuals fell within the optimum range). The strength of stabilizing selection and the rate of natural increase also weigh in favor of population persistence. I presume in the former case it has to do with the efficacy of selection in shifting the phenotypic mean (i.e., it’s like heritability), while in the latter the ability to bounce back from population crashes would redound to a species’ benefit in scenarios of environmental volatility (selection may cause a great number of deaths per generation until a new equilibrium is attained).
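The exact expression lives in box 1 of the paper, and I won't reproduce it here. But the qualitative dependencies just described can be captured in a toy function; to be clear, the functional form below is a hypothetical illustration of those relationships, not the paper's actual equation:

```python
def eta_c_sketch(h2, sigma2, gamma, r_max, T, B, b):
    """Toy stand-in for the critical rate of environmental change.

    Persistence-favoring terms (heritable variance h2 * sigma2, stabilizing
    selection gamma, intrinsic increase r_max) sit in the numerator;
    risk-raising terms (generation time T, and the mismatch between the
    environmental sensitivity of selection B and plasticity b) sit in the
    denominator. This illustrates the qualitative relationships in the
    text; it is NOT the formula from the paper.
    """
    return (h2 * sigma2 * gamma * r_max) / (T * abs(B - b))

# Arbitrary baseline parameter values, purely for illustration:
base = dict(h2=0.3, sigma2=1.0, gamma=0.5, r_max=0.1, T=2.0, B=1.0, b=0.2)

# Longer generation time lowers the tolerable rate of change:
slow = eta_c_sketch(**{**base, "T": 10.0})
# More heritable variation raises it:
heritable = eta_c_sketch(**{**base, "h2": 0.6})
print(eta_c_sketch(**base), slow, heritable)
```

Playing with the parameters confirms the intuitions above: stretch T and the tolerable rate of change drops; boost h² and it rises.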
Of course a model like this one makes many approximations in order to achieve analytical tractability. The authors do address some of the interdependencies of the parameters, in particular the trade-offs of phenotypic plasticity. In their equation, 1/ωb² quantifies the cost of plasticity, and r0 represents the rate of increase in the absence of any cost of plasticity. We’re basically talking about the “jack-of-all-trades is a master of none” issue here. In a way this crops up when we’re talking of clonal vs. sexual lineages on an evolutionary genetic scale. The general line of thinking is that sexual lineages are at a short-term disadvantage because they’re less optimized for the environment, but when there’s a shift in the environment (or in pathogen character) the clonal lineages are at much greater risk because they don’t have much variation with which natural selection can work. What was once a sharp phenotypic optimum turns into a narrow and unscalable gully.
Figure 2 illustrates some of the implications of particular parameters in relation to trade-offs:

There’s a lot of explanatory text, as they cite various literature which may, or may not, support their model. Clearly the presentation here is aimed at goading people into testing their formalism to see if it has any utility. I know that those who cherish biodiversity would prefer that we preserve everything (assuming we can actually record all the species), but reality will likely impose particular constraints and trade-offs upon us. In a cost vs. benefit calculus this sort of model may be useful. Which species are likely to be able to track environmental changes to some extent? Which species are unlikely to be able to track the changes? What are the probabilities? And so forth.
I’ll let the authors conclude:
Our aim was to describe an approach based on evolutionary and demographic mechanisms that can be used to make predictions on population persistence in a changing environment and to highlight the most important variables to measure. While this approach is obviously more costly and time-consuming than niche modelling, its results are also likely to be more useful for specific purposes because it explicitly incorporates the factors that limit population response to environmental change.
The feasibility of such a mechanistic approach has been demonstrated by a few recent studies. Deutsch et al…used thermal tolerance curves to predict the fitness consequence of climate change for many species of terrestrial insects across latitudes, but without explicitly considering phenotypic plasticity or genetic evolution. Kearney et al…combined biophysical models of energy transfers with measures of heritability of egg desiccation to predict how climate change would affect the distribution of the mosquito Aedes aegypti in Australia. Egg desiccation was treated as a threshold trait, but the possibility of phenotypic plasticity or evolution of the threshold was not considered. These encouraging efforts call for more empirical studies where genetic evolution and phenotypic plasticity are combined with demography to make predictions about population persistence in a changing environment. The simple approach we have outlined is a necessary step towards a more specific and comprehensive understanding of the influence of environmental change on population extinction.
Citation: Chevin L-M, Lande R, & Mace GM (2010). Adaptation, Plasticity, and Extinction in a Changing Environment: Towards a Predictive Theory. PLoS Biol. doi:10.1371/journal.pbio.1000357
Possible instance of genetic discrimination | Gene Expression
Dr. Daniel MacArthur pointed me to this story, Conn. woman alleges genetic discrimination at work:
A Connecticut woman who had a voluntary double mastectomy after genetic testing is alleging her employer eliminated her job after learning she carried a gene implicated in breast cancer.
Pamela Fink, 39, of Fairfield said in discrimination complaints that her bosses at natural gas and electric supplier MXenergy gave her glowing evaluations for years, but targeted, demoted and eventually dismissed her when she told them of the genetic test results.
Her complaints, filed Tuesday with the U.S. Equal Employment Opportunity Commission and the Connecticut Commission on Human Rights and Opportunities, are among the first known to be filed nationwide based on the federal Genetic Information Nondiscrimination Act.
What probability do readers assign to this being a legitimate complaint? This seems to be a large firm, so I doubt that group insurance rates would change because of one person (I have heard of this occurring in small businesses, where an expensive employee or an employee’s family member can affect the rate for everyone else). So if the complaint is legitimate, the main issue would have been the employer’s fear of future illness, but the woman in question went through a double mastectomy, which I assume would obviate that concern. What am I missing? Are there expectations that she’d be taking medical leave in the future due to follow-up operations or treatment?
Update: Brendan Maher has some follow up from Fink’s lawyer.
Unruly Democracy: What Is Wrong (or Right) With Science Blogs? | The Intersection
On Friday at our Harvard Kennedy School event, I'm going to be giving my rather pessimistic take--already laid out in Unscientific America, and only amplified by "ClimateGate" and other events since then--on the science blogosphere. I'll talk about how in comparison with the old media, the Internet fragments and narrows the audience for science information, even as there aren't really any norms for responsible conduct--and thus, misinformation, innuendo, and general nastiness abound. I'm sure, however, that others will have a different view. Perhaps Joe Romm will; he has just joined our roster for the event. Certainly, his blog has been a major success and demonstrates many of the upsides of science blogging. Such debate is all to the good; it's why we're having the event in the first place. Indeed, I myself will point out some clear positives when it comes to blogging about science (I'm sure you can guess many of them). But taken as a whole, are blogs broadening the conversation about science by reaching new audiences, replacing what has been lost in terms of science coverage in the old media, or elevating our general science discourse? I have to say, I'm skeptical. There is no going back from this new world, but ...
NCBI ROFL: For some reason, med students don’t want to show their genitals in class. | Discoblog
Don't want to show fellow students my naughty bits: medical students' anxieties about peer examination of intimate body regions at six schools across UK, Australasia and Far-East Asia. "BACKGROUND: Although recent quantitative research suggests that medical students are reluctant to engage in peer physical examination (PPE) of intimate body regions, we do not know why. AIM: This article explores first-year medical students' anxieties about PPE of intimate body regions at six schools. METHODS: Using the Examining Fellow Students (EFS) questionnaire, we collected qualitative data from students in five countries (UK; Australia; New Zealand; Japan; Hong Kong) between 2005 and 2007. RESULTS: Our framework analysis of 617 (78.7%) students' qualitative comments yielded three themes: present and future benefits of PPE; possible barriers to PPE; and student stipulations for successful PPE. This article focuses on several sub-themes relating to students' anxieties about PPE of intimate body regions and their associated sexual, gender, cultural and religious concerns. By exploring students' euphemisms about intimate areas, our findings reveal further insights into the relationship between students' anxieties, gender and culture. CONCLUSION: First-year students are anxious about examining intimate body regions, so a staged approach starting with manikins is recommended. Further qualitative research is needed employing interviews ...
Piezoelectric Promise: Charge a Touch-Screen by Poking It With Your Finger | 80beats
Imagine a day in the future when you can charge your cell phone using your sneakers, or charge a touch-screen device merely by rolling up the flexible screen. New devices that take advantage of the piezoelectric effect–the tendency of some materials to generate an electrical potential when they’re mechanically stressed–are taking us one step closer to that reality.
Ville Kaajakari of Louisiana Tech University harnessed this effect by developing a tiny generator that can be embedded in a shoe sole. The device is a “MEMS,” or “micro-electro-mechanical system,” which combines computer chips with micro-components to generate electricity [EarthTechling]. Each time the sneaker-wearer goes for a stroll, the compression would power up the circuits in the generator and produce tiny bits of usable voltage. “This technology could benefit, for example, hikers that need emergency location devices or beacons,” said Kaajakari. “For more general use, you can use it to power portable devices without wasteful batteries” [Clean Technica].
For now, the amount of energy produced is very small, but the generator could theoretically be used to power sensors, GPS units or portable devices that don’t require a large amount of energy [Clean Technica]. The scientist hopes that the technology can be developed further to charge common devices like mobile phones.
Meanwhile, Samsung and a research team from Korea have figured out a way to harvest energy from touch-screens.
In a paper published this month in the journal Advanced Materials, the researchers describe how they combined flexible, transparent electrodes with an energy-scavenging material to produce a thin film. Their experimental device sandwiches piezoelectric nanorods between highly conductive graphene electrodes on top of flexible plastic sheets [Technology Review]. This film can be used in touch screens of common mobile devices, wherein pressing the screen would generate about 20 nanowatts per square centimeter–that is, enough power to help run part of the device. On the flexible touch-screen the researchers developed, you could help charge the batteries just by rolling up the screen.
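To put 20 nanowatts per square centimeter in perspective, here's a quick back-of-the-envelope calculation; the screen area and the handset's power draw below are illustrative assumptions, not figures from the study:

```python
# Power harvested from a piezoelectric touch-screen at the reported density.
# The screen area and the phone's power draw are illustrative assumptions.
DENSITY_NW_PER_CM2 = 20.0  # reported harvest density, nanowatts per cm^2

screen_area_cm2 = 50.0  # assumed smartphone-sized screen
harvested_nw = DENSITY_NW_PER_CM2 * screen_area_cm2
harvested_w = harvested_nw * 1e-9  # nanowatts to watts

phone_draw_w = 1.0  # assumed rough active power draw of a handset
print(f"Harvested: {harvested_nw:.0f} nW ({harvested_w:.1e} W), "
      f"about {harvested_w / phone_draw_w:.0e} of the assumed phone draw")
```

A whole screen's worth of presses yields on the order of a microwatt, roughly a millionth of what a phone burns in use, which is why the researchers frame this as helping to run "part of the device" rather than replacing the battery today.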
Scientists hope that the power produced by the touch-screen will one day be enough to toss our batteries into the bin. Study coauthor Sang-Woo Kim added: “The flexibility and rollability of the nano-generators gives us unique application areas such as wireless power sources for future foldable, stretchable, and wearable electronics systems” [Technology Review].
Related Content:
80beats: New “Nanogenerator” Could Power Your iPod With Your Own Movements
80beats: The World’s Smallest Motor Could Propel a Medical “Microbot” Through Arteries
80beats: Rubbery Computer Screens Can Be Bent, Folded, and Even Crumpled
80beats: Nanotubes Could Provide the Key to Flexible Electronics
Discoblog: Can Chatting on Your Cell Phone Cause It to Recharge? Researcher Says Yes
Image: LTU
Turn a Man Into Mush With a Nasal Spray of Pure Oxytocin | Discoblog
Who ever thought that couples could bond over nasal spray? But new research shows that a nasal spray containing the "love hormone" oxytocin helped make regular guys more empathetic and less gruff. Oxytocin is the hormone that strengthens the bond between nursing moms and their babies, and it's also involved in pair bonding, love, and sex. The spray was tested on a group of 48 healthy males--half received a spritz of the nose spray at the start of the experiment and the other half received a placebo. The researchers then showed their test subjects emotion-inducing photos like a bawling child, a girl hugging her cat, and a grieving man. Finally, they asked the guys to express how they felt. The men in the placebo group reacted normally to the soppy pictures, which is to say they were either mildly uncomfortable or stoic, whereas the group that had used the nasal spray was markedly more empathetic. The Register reports:
"The males under test achieved levels [of emotion] which would normally only be expected in women," says a statement from Bonn University, indicating that they had cooed or even blubbed at the sight of the affecting images.
The study's findings, published in The Journal of Neuroscience, suggest one ...
When America was post-colonial | Gene Expression
Below I stated:
…until the late 20th century the majority of the ancestry of the white population of the republic descended from those who were counted in the 1790 census.
A commenter questioned the assertion. The commenter was right to question it. My source was a 1992 paper estimating that only in 1990 did the proportion of American ancestry deriving from those who arrived after the 1790 census exceed 50%. In other words, if you ran the ancestors of all Americans back to 1790, a majority of that set would have been counted in the 1790 census (people of mixed ancestry contribute to the two components in proportion to their ancestry).
The major issue here is that there is a difference between whites and non-whites, especially before mass Asian and Latin American immigration post-1965, when white vs. non-white was essentially white vs. black. Almost all the black ancestors of black Americans were already resident in the United States in 1790. A few years ago I read up on the history of American slavery and was surprised how genuinely indigenous the black American population, slave and free, was by the late 18th century (English-speaking and Christian). There was an obvious reason why Southern slave-holders went along with the ban on the importation of slaves, which was due to kick in in the early decades of the republic: American blacks, unlike slave populations elsewhere in the New World, had endogenous natural increase. This explains part of the relative paucity of African aspects in their culture in relation to the blacks of Haiti or Brazil, where African-born individuals were still very substantial numerically at emancipation because of high attrition rates (it is sometimes asserted that the majority of blacks liberated during the Haitian Revolution were born in Africa; likely hyperbole, but it gets across the strength of the connection).
In any case, to estimate the white proportion attributable to 1790, I have to correct for the black proportion within the total. As an approximation I think it’s acceptable to attribute the black population, as a whole, to the pre-1790 component in full. I suspect a greater share of the black ancestry which post-dates 1790 would come from the white component of their heritage, which for various reasons goes largely unnoticed in American society anyway (Henry Louis Gates Jr. is more white than black in terms of ancestry, but he’s the doyen of Africana Studies). So, assuming that blacks contribute to the 1790-and-before component in full, I estimate that between 1910 and 1920 the majority of the ancestry of the white population shifted from 1790 and before, to after. Specifically, in 1910, 51% of white ancestry could be traced to 1790 and before; in 1920, 49%; and in 1950, 47%. So I should have said early 20th century, not late. I wouldn’t be surprised, though, if the balance has started to shift back in recent years, as many “white ethnic” groups (Jews, Italians, Irish, etc.) are heavily concentrated in urban areas, while the most fertile white community in the United States, the Mormons of Utah, is also the most Old Stock Yankee in ancestry (I am aware that many Mormons are descended from European immigrants who converted in Europe and made the journey after conversion, but Mormons are still far more Old Stock Yankee than any group outside of interior New England).
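The correction I'm describing is just weighted arithmetic. Here's a sketch with hypothetical round numbers, not the actual census-derived figures:

```python
# Back out the white-only share of pre-1790 ancestry from a population-wide
# figure, under the approximation that the black population's ancestry is
# attributed to pre-1790 in full. Inputs are hypothetical round numbers.
def white_pre1790_share(total_pre1790, black_pop_share):
    """Share of white ancestry tracing to 1790 or before.

    total_pre1790:   fraction of ALL American ancestry tracing to 1790 or before
    black_pop_share: black fraction of the population (assumed 100% pre-1790)
    """
    white_pop_share = 1.0 - black_pop_share
    return (total_pre1790 - black_pop_share) / white_pop_share

# If 55% of all ancestry traces to 1790 or before, and blacks (counted as
# entirely pre-1790) are 10% of the population, then the white-only share
# is (0.55 - 0.10) / 0.90 = 0.50:
share = white_pre1790_share(0.55, 0.10)
print(f"{share:.0%}")  # 50%
```

The point of the exercise: because the black component is pinned at 100% pre-1790, the white-only share always comes out lower than the population-wide figure, which is why the crossover date for whites arrives earlier than the 1990 crossover for the population as a whole.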
Naming the Unspeakable | Cosmic Variance
Two hundred thousand gallons per day of Gulf crude are leaking from a hole 5000 feet under the water’s surface in the wake of the still mysterious destruction of British Petroleum’s Deepwater Horizon drilling platform last week. How and when it will be stopped is entirely unknown. The mayonnaise-like oil is being blown ashore into the nursery for shrimp for the whole region and the home of hundreds of other species. Welcome to what may turn out to be the worst single human-caused environmental disaster ever. (Unless you regard global warming in general as a single event. Semantics.)

This thing is going to need a name. The Exxon Valdez incident was a spill – there was a finite amount of oil aboard the ship. A lot of oil: 11 million gallons (40 million liters). The new one in the Gulf of Mexico could blow past that, depending on whether present efforts to close the valve or drill a relief well work.
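The spill-versus-leak distinction above can be made concrete with the post's own numbers: a spill has a fixed total, while a leak's size is a function of time. A quick back-of-the-envelope sketch, using only the figures quoted here:

```python
# Figures quoted in this post (estimates, not official totals).
LEAK_RATE_GAL_PER_DAY = 200_000   # crude escaping the wellhead per day
EXXON_VALDEZ_GAL = 11_000_000     # finite cargo spilled from the tanker

# An open-ended leak surpasses any finite spill once enough days pass.
days_to_match = EXXON_VALDEZ_GAL / LEAK_RATE_GAL_PER_DAY
print(f"Days for the leak to match the Valdez spill: {days_to_match:.0f}")  # 55
```

At the quoted rate, less than two months of continuous flow matches the entire Exxon Valdez cargo, which is why the timeline for closing the valve or drilling a relief well matters so much.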
The fact that we called it the “Exxon Valdez” incident clearly indicates the responsible (if not guilty) party involved. So, though I like the moniker “Spill, Baby, Spill” from a political point of view, it doesn’t lay any blame, and this thing is not a spill. It’s a leak, and BP leased the rig from Transocean Ltd., the world’s largest offshore drilling contractor. I think the responsibility has yet to be determined. If you rent a car and wipe out a family in an accident because the steering was faulty, is it your fault or the car manufacturer’s? It may take some time to determine what happened a week ago to cause this tragedy, and we may never know.
The name of the rig was the Deepwater Horizon, but that doesn’t convey ownership or responsibility. Will this become known as the “BP Deepwater Horizon Spill”? The “Transocean/BP Leak”? The media seem to be stuck on “spill” and so I bet that will be in the name long term…and it will take a very long time to assess responsibility here.
My heart goes out to the families of the 11 lost on the rig, and to the thousands of fishermen and others whose livelihoods are in peril.
We’ve suspended new offshore drilling until we have understood this incident better. And no doubt a new debate about offshore drilling will ensue. This has certainly put the lie to those who claim that new modern drilling rigs are far safer than in the past, something even President Obama was saying as recently as April 2. Sigh.
Super slo mo Apollo, yo | Bad Astronomy
In the Very Cool Department…
My friend Mark Gray from SpaceCraftFilms narrates this film, showing the Apollo 11 Saturn V liftoff using a high-speed camera. I’ve seen this clip about eight bazillion times over the years, but Mark gives the details of what’s happening, providing insight I wasn’t aware of.
The cool thing about this, to me, is the fact that it’s so familiar, but there’s still so much to know about it! And it goes to show you: sending rockets into space is, well, rocket science.
With Prostate Cancer “Vaccine,” Immune System Wages War Against Tumors | 80beats
Yesterday the Food and Drug Administration gave its OK to Provenge, a new treatment for prostate cancer. It’s not a “vaccine” in the old-fashioned sense, but it could be a way to make the immune system wake up and take notice of the presence of cancer.
In a standard vaccination, a person receives an attenuated or dead version of a microorganism to spur them to produce antibodies (against, for example, the virus that causes smallpox). Provenge is not that—it doesn’t prevent prostate cancer—but it is a variation on the theme. To oversimplify quite a bit: with Provenge, vaccination begins with a blood draw. Blood is then sent to the lab, where technicians extract immune cells known as antigen-presenting cells (APCs) from the sample. From here, Dendreon combines the immune cells with proteins that are prevalent on the surface of prostate cancer cells. An immune-boosting substance is also added into the mix [TIME]. That awakens the APCs, which doctors then inject back into the bloodstream. And once there, the APCs put white blood cells on high alert against cancer.
Seattle biotech firm Dendreon developed the treatment aimed at lowering the number of men in the United States killed by prostate cancer each year, which presently stands at 27,000. But Provenge was a long time coming even by medical standards. Dendreon’s low point came in 2007. Expecting FDA approval, the company was instead sent back to the drawing board. Angry prostate-cancer patients and advocates rallied outside FDA headquarters. The setback added another three years to the 15 Dendreon already had spent on Provenge. The amount of money plowed into the project is now close to $750 million [Seattle Times].
In the most recent tests, Provenge increased the share of patients surviving three years from about a quarter to a third, and extended survival by about four months compared to the placebo control group. However, the Seattle Times concludes, these kinds of immune treatments might prove most useful in patients who’ve already received cancer treatment and need protection from relapse.
Approval has finally come for Provenge, though still with caution: With clearance yesterday, Dendreon said the FDA will require it to monitor 1,500 patients given Provenge for increased risk of strokes seen in studies [BusinessWeek]. And it will take many years to study the long-term effects.
Related Content:
DISCOVER: How We Got the Controversial HPV Vaccine
80beats: “Sound Bullets” Could Target Tumors, Scan the Body, and… Create Weapons?
80beats: Researchers Find the Genetic Fingerprint of Cancer, 1 Patient at a Time
80beats: The Mutations That Kill: 1st Cancer Genomes Sequenced
Image: iStockphoto
Double Trouble; Twin Black Holes
Chandra images the first case where there is good evidence for more than one intermediate mass black hole in a galaxy. Here’s the NASA press release:
This image from the Chandra X-ray Observatory shows the central region of the starburst galaxy M82 and contains two bright X-ray sources of special interest. New studies with Chandra and ESA’s XMM-Newton show that these two sources may be intermediate-mass black holes, with masses in between those of the stellar-mass and supermassive variety. These “survivor” black holes avoided falling into the center of the galaxy and could be examples of the seeds required for the growth of supermassive black holes in galaxies, including the one in the Milky Way.
This is the first case where good evidence for more than one mid-sized black hole exists in a single galaxy. The evidence comes from how their X-ray emission varies over time and analysis of their X-ray brightness and spectra, i.e., the distribution of X-rays with energy. These results are interesting because they may help address the mystery of how supermassive black holes in the centers of galaxies form. M82 is located about 12 million light years from Earth and is the nearest place to us where the conditions are similar to those in the early Universe, with lots of stars forming.
Multiple observations of M82 have been made with Chandra beginning soon after launch. The Chandra data shown here were not used in the new research because the X-ray sources are so bright that some distortion is introduced into the X-ray spectra. To combat this, the pointing of Chandra is changed so that images of the sources are deliberately blurred, producing fewer counts in each pixel.
Image credit: NASA/CXC/Tsinghua Univ./H. Feng et al.
How to Save Gorillas: Turn People on to Snail Farming | Discoblog
Gorilla conservationists in Nigeria have a new ally--snails. The critically endangered Cross River gorilla is under constant threat from poachers in this poor nation, as poachers kill the animals for their bushmeat or sell them illegally to traffickers in the exotic pet trade. With just 300 Cross River gorillas left, the Wildlife Conservation Society (WCS) hopes to offer locals an alternate source of both food and revenue so they'll leave the poor apes alone. Enter the snail. For this conservation project, the WCS picked eight former gorilla poachers and got them to start farming African giant snails, a local delicacy. The WCS helped the poachers construct snail pens and stocked each pen with 230 giant snails, writes Scientific American. As the snails breed quickly, farmers can expect a harvest of 3,000 snails per year. Scientific American adds:
According to WCS, this should end up being a fairly profitable enterprise for local farmers. Annual costs are estimated at just $87 per farmer, with profits around $413 per year. The meat of one gorilla, says the WCS, would net a poacher around $70.
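The WCS figures quoted above imply the economic comparison directly. A minimal sketch, using only the numbers given in this post:

```python
# Per-farmer economics from the WCS figures quoted above.
annual_cost_usd = 87       # estimated annual cost per snail farmer
annual_profit_usd = 413    # estimated annual profit per snail farmer
gorilla_price_usd = 70     # what the meat of one gorilla nets a poacher
snails_per_year = 3_000    # expected annual harvest per pen

# One year of snail farming out-earns roughly six poached gorillas,
# even though each individual snail is worth only pennies.
gorillas_equivalent = annual_profit_usd / gorilla_price_usd   # ~5.9
profit_per_snail_usd = annual_profit_usd / snails_per_year    # ~$0.14
```

That six-to-one margin is the whole conservation argument: the legal alternative pays better than poaching before any enforcement risk is even factored in.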
Related Content:
80beats: Bushmeat Debate: How Can We Save Gorillas Without Starving People?
80beats: New Threat to Primates Worldwide: Being “Eaten Into Extinction”
DISCOVER: Extinction–It’s What’s ...
Men & ideas on the move: settled lands & colonized minds | Gene Expression
I am currently reading Peter Heather’s Empires and Barbarians: The Fall of Rome and the Birth of Europe. This is a substantially more hefty volume in terms of density than The Fall of the Roman Empire: A New History of Rome and the Barbarians. It is also somewhat of a page turner. One aspect of Heather’s argument so far is his attempt to navigate a path between the historically tinged fantasy of what its critics label the “Grand Narrative” of mass migration of barbarian tribes such as the Goths, Vandals and Saxons during the 4th to 6th centuries, dominant before World War II, and its post-World War II counterpoint. As a reaction against this idea archaeologists have taken to a model of pots-not-people, whereby cultural forms flow between populations, and identities are fluid and often created de novo. This model would suggest that only a tiny core cadre of “German” “barbarians” (and yes, often in this area of scholarship the most banal terms are problematized and placed in quotations!) entered the Roman Empire, and the development of a Frankish ruling class in the former Gaul, for example, was a process whereby Romans assimilated to the Germanic identity (with the shift from togas to trousers being the most widespread obvious illustration of Germanization of norms). I believe that liberally applied this model is fantasy as well. Being a weblog where genetics is important, my skepticism of both extreme scenarios is rooted in new scientific data.
There are cases, such as the Etruscans, where the migration is clear from the genetics, both human and their domesticates. The peopling of Europe after the last Ice Age is now very much an open question. The likelihood that the present population of India is the product of an ancient hybridization event between a European-like population and an indigenous group with more affinity with eastern, than western, Eurasian groups, is now a rather peculiar prehistoric conundrum. It also seems likely that the spread of rice farming in Japan was concomitant with the expansion of a Korea-derived group, the Yayoi, at the expense of the ancient Jomon people. And yet there are plenty of inverse cases. The spread of Latinate languages and Romanitas did not seem to perturb the basic patterns of genetic relationship among the peoples of Europe. The emergence of the Magyar nation on the plains of Roman Pannonia seems to have involved mostly the Magyarization of the local population. In contrast, the Bulgars were totally absorbed by their Slavic subjects culturally, leaving only their name. The spread of the Arabic language and culture was predominantly one of memes, not genes (clearly evident in the current dynamic of Arabization in parts of the Maghreb).
And yet you will note that there is a slight difference between the few examples I’ve cited: population replacement seems to have occurred in the more antique cases, rather than the more recent ones. This would naturally bias the perspectives of historians, who have much more data on more recent events (no offense, but archaeologists seem to be able to say whatever they want!). The Etruscan language itself is known only from fragments, while the happenings in prehistoric Europe and India can only be inferred very indirectly. I now offer a modest hypothesis for the distinction: why in some cases is it just the “pots” which move (the Arabs), and in other cases the people (the Japanese)? In cases of population replacement there is often a shift in mode of production. In cases of cultural diffusion, what flows across space is often a system or set of ideas which rent-seeking elites can exploit to maintain or perpetuate their position. Islam was not only a potent ideology which bound the tribes of Arabia together so that they could engage in collective action; local elites across the new Muslim-dominated world also found it a congenial international system whereby they could integrate themselves into a civilization of elite peers, as well as justify their god-given position at the apex of the status hierarchy (granted, many had this in the form of Christianity or Zoroastrianism, but once the old top dogs were overthrown the benefit of these systems was considerably less). The spread of Yayoi culture in Japan involved a shift from more extensive toward more intensive forms of agriculture. The Yayoi population base was greater, and the domains of the Jomon were left “underexploited” from the perspective of the more productive mode of agriculture in which the Yayoi were engaged.
It need not be an issue of mass slaughter or extermination; a high endogenous rate of natural increase as well as disease, combined with assimilation and co-option of local elites, could result in the swallowing up of a population engaged in a less intensive mode of production. This sort of hybrid aspect of cultural and genetic expansion, whereby the local substrate is assimilated and synthesized with the expanding ethnic group, seems to be a good fit to the pattern that we see among the Han of China.
But shifts between modes of production exhibit some level of discontinuity, insofar as there are diminishing returns once all the land appropriate for that mode of production has been taken over. Farmers who are expanding into land held by hunter-gatherers or those practicing less intensive forms of agriculture can have enormous rates of natural increase because they’re not bound by Malthusian constraints. This is evident in the United States: until the late 20th century, the majority of the ancestry of the white population of the republic descended from those who were counted in the 1790 census. The reason had to do with the extremely high birthrates among white Americans. When regions such as New England were “filled up,” they pushed out to the “frontier,” to northern Ohio, then to the Upper Midwest, and finally the Pacific Northwest. And in the process there was a radical change in the genetic variation of North America, as the indigenous populations died from disease, were numerically overwhelmed, or genetically absorbed. This is an extreme case scenario, but I think it illustrates what occurs when modes of production collide, so to speak. The pattern in Latin America was somewhat different: though an amalgamated Mestizo population did emerge over time, there was not wholesale demographic replacement in many regions. And I believe that the reason is that the Iberians did not bring a superior mode of production; rather, the large local population base engaged in agriculture presented an opportunity for rent-seekers to place themselves atop the status hierarchy.
Sometimes this involved intermarriage with local elites, as was the case in Peru where the nobility of the Inca intermarried with the Spanish conquistadors for the first few generations (the whiteness of the Peruvian elite despite the fact that the old families have Inca ancestry is simply due to dilution as successive generations of lower Spanish nobility set off to the New World and married into Creole families).
By the Roman period I believe that much of the core Old World was “filled up” in terms of intensive agricultural production. So most, though not all, of the changes in ethnicity or identity are biased toward elite emulation and novel identity formation. The Turks did not bring an innovative new economic system whereby they replaced the Greek and Armenian peasantry in Anatolia; rather, peculiarities in the Turkish Ottoman system of rule produced a situation where the old families were usually replaced in positions of power by converts from the Christian groups who assimilated to a Turkish identity. When the economic arrangements reach stasis and the population is at Malthusian equilibrium, change is a matter of shifting identities and affinities of the rent-seekers. When radically new economic systems emerge, opportunities for disparate population growth present themselves. Ergo, England went from being demographically dwarfed by France in the 17th century, to surpassing it in population in the 19th. England was of course the first nation to break into a new mode of production since the agricultural revolution.
Credit: Thanks to Michael Vassar for triggering this line of reasoning after a conversation we had about the Neolithic revolution.
Saturn rages from a billion kilometers away | Bad Astronomy
[In a weird coincidence, I wrote this post up mere hours before this news story on the same topic came out at JPL.]
With all the stunning images and animations coming from the Cassini probe, it’s easy to forget that some pretty cool stuff can be seen from Earth, too. Amateur astronomer Emil Kraaikamp sent me this animation he made of Saturn taken with his 25 cm (10″) telescope in The Netherlands. Keep your eyes on the upper half of Saturn, above the rings.
See the white spot? That’s actually a huge storm… and by "huge", I mean about the same size as the Earth! I usually think of Jupiter as the stormy planet, but Saturn has its share as well. A lot of the time, these storms are discovered here on Earth by amateur astronomers, who spend more time looking at planets globally, as opposed to professional astronomers who aren’t always observing every planet all the time. Last year, a "storm" seen on Jupiter by an amateur turned out to be the impact cloud from an asteroid or comet collision!
Here’s one of the images Emil used in his animation:

You can see two moons, the rings (and the dark Cassini Division, a gap in the rings), banding on the planet itself, and of course the storm. Note that when he took these shots, Saturn was 1.3 billion km (almost 800 million miles) away! Astronomy is one of the very few sciences where amateurs — and by that, I mean people who aren’t paid to do it as a career — still make an incredibly important, and even critical contribution. With observations like Emil’s, you can see why.
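The numbers in this post are enough to check that an Earth-sized storm really is within reach of a 25 cm amateur telescope. This is a hedged small-angle sketch; the 550 nm observing wavelength is my assumption, not something Emil specified:

```python
RAD_TO_ARCSEC = 206_265           # arcseconds per radian

# Angular size of an Earth-sized storm seen from 1.3 billion km away.
storm_diameter_km = 12_742        # roughly Earth's diameter
distance_km = 1.3e9               # Saturn's distance quoted above
storm_arcsec = storm_diameter_km / distance_km * RAD_TO_ARCSEC   # ~2.0"

# Diffraction limit of a 25 cm (10-inch) telescope in green light.
aperture_m = 0.25
wavelength_m = 550e-9             # assumed visible-light wavelength
limit_arcsec = 1.22 * wavelength_m / aperture_m * RAD_TO_ARCSEC  # ~0.55"
```

The storm subtends roughly four times the telescope's theoretical resolution, so, seeing permitting, it is genuinely resolvable from a backyard in the Netherlands.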
James Cameron to Design a 3D Camera for Next-Gen Mars Rover | 80beats
After entertaining the entire planet with the movie Avatar, director James Cameron is now taking his expertise to space–specifically to Mars. He’s helping NASA build a 3D camera for its next rover, Curiosity.
The space agency announced that Cameron is working with Malin Space Science Systems Inc. of San Diego to develop the camera, which will be the rover’s “science-imaging workhorse.” The rover, which was previously known as the Mars Science Laboratory, is scheduled for launch in 2011.
NASA’s Jet Propulsion Laboratory had recently scaled back plans to mount a 3D camera on the rover, as the project was consistently over-budget and behind schedule. But Cameron lobbied NASA administrator Charles Bolden for inclusion of the 3-D camera during a January meeting, saying a rover with a better set of eyes will help the public connect with the mission [Associated Press]. Cameron, whose 3D spectacle Avatar earned more than $2 billion at box offices worldwide, had developed a special 3D digital camera system for the film, and felt the space agency could benefit from his expertise.
Malin has already delivered two standard cameras to be installed on the rover’s main mast. These cameras, which are set up for high-definition color video, are designed to take images of the Martian surface surrounding Curiosity, as well as of distant objects [Computer World]. But these cameras cannot provide a wide field of view, and they also don’t have a zoom; the cameras Cameron is developing will include these features, and will allow researchers to take cinematic video sequences in 3D on the surface of Mars. However, the 3D cameras aren’t guaranteed to be included in the mission. To make it on the new rover, the cameras will have to be designed, assembled and tested before NASA begins its final testing of the rover early next year [Computer World].
The SUV-sized rover won’t only be toting cameras. Curiosity will also carry instruments, environmental sensors, and radiation monitors to investigate the Red Planet’s surface. NASA hopes to find out if life ever existed on Mars and if the planet can support human life in the future. “It’s a very ambitious mission. It’s a very exciting mission,” Cameron said. “(The scientists are) going to answer a lot of really important questions about the previous and potential future habitability of Mars” [AFP].
Related Content:
80beats: Photo Gallery: The Best Views From Spirit’s 6 Years of Mars Roving
80beats: Dis-Spirit-ed: NASA Concedes Defeat Over Stuck Mars Rover
80beats: Next Mars Rover Won’t Take Off Towards Mars Until 2011
80beats: 3-D TV Will Kick Off With World Cup Match This Summer
Discoblog: Just Like Avatar: Scenes from India, Canada, China, and Hawaii
The Intersection: The Science of Avatar
Image: NASA
Gulf Oil Spill Reaches U.S. Coast; New Orleans Reeks of “Pungent Fuel Smell” | 80beats

The moment conservationists have been dreading since the Gulf of Mexico oil spill started—that oil making landfall—appears to be upon us. This morning the Coast Guard is flying over the Gulf Coast to check out reports that crude washed ashore overnight, and more reports of oil drifting ashore are coming out of Louisiana. Crews in boats were patrolling coastal marshes early Friday looking for areas where the oil has flowed in, the Coast Guard said. Storms loomed that could push tide waters higher than normal through the weekend, the National Weather Service warned [AP].
Homeland Security Secretary Janet Napolitano set up a second base of operations to deal with potential impacts on the Gulf Coast states of Alabama, Mississippi, and Florida. Meanwhile, Louisiana Governor Bobby Jindal has declared a state of emergency, and said: “Based on current projections, we expect the oil to reach land today at the Pass-A-Loutre Wildlife Management Area. By tomorrow, we expect oil to have reached the Chandeleur Islands and by Saturday, it is expected to reach the Breton Sound. These are important wildlife areas and these next few days are critical” [Nature]. The city of New Orleans already reeks of a “pungent fuel smell” believed to come from the oil spill, as the Times-Picayune newspaper puts it.
With this news, along with yesterday’s announcement that the spill could be five times worse than first believed, the Deepwater Horizon disaster is close to becoming historically bad. The oil slick could become the worst U.S. environmental disaster in decades, threatening to eclipse even the Exxon Valdez in scope. It imperils hundreds of species of fish, birds and other wildlife along the Gulf Coast, one of the world’s richest seafood grounds, teeming with shrimp, oysters and other marine life [CBS News]. To make matters worse, experts say that marshlands are far more difficult to clean than sandy beaches. Says David Kennedy of the National Oceanic and Atmospheric Administration, “It is of grave concern. I am frightened. This is a very, very big thing. And the efforts that are going to be required to do anything about it, especially if it continues on, are just mind-boggling” [AP].
Responders keep trying to stem the flow, but all the Coast Guard’s containment boom and controlled fires, and all of BP’s undersea robots, haven’t been able to stop the oil leak deep undersea. Underscoring how acute the situation has become, BP is soliciting ideas and techniques from four other major oil companies — Exxon Mobil, Chevron, Shell and Anadarko [The New York Times]. The military is trying to help BP, which had leased the Deepwater Horizon oil rig, reach its emergency shutoff valves. “To be frank, the offer of help from all quarters is welcome,” said David Nicholas, a BP spokesman [The New York Times].
Facing a far-reaching catastrophe, today President Obama’s administration announced that the plan announced a month ago to expand offshore drilling is going on hiatus, at least until people figure out what went wrong in the Gulf. Meanwhile, the Interior Department says it will commence an immediate safety review of all the rigs and drilling platforms in the area.
Our previous posts on the Gulf Oil Spill:
80beats: Uh-Oh: Gulf Oil Spill May Be 5 Times Worse Than Previously Thought
80beats: Coast Guard’s New Plan To Contain Gulf Oil Spill: Light It on Fire
80beats: Sunken Oil Rig Now Leaking Crude; Robots Head to the Rescue
80beats: Ships Race To Contain the Gulf of Mexico Oil Spill
Image: NOAA
