Jupiter and Ganymede in exquisite detail | Bad Astronomy

If you go outside shortly after sunset and face east, you’ll see a brilliant white "star" madly shining down on you. That’s no star: it’s Jupiter, king of the planets, the brightest object in the sky right now after the Sun and the Moon. Now is the best time to observe it, since the Earth is placed directly between the giant planet and the Sun, meaning we’re as close to it as we’ll get all year.

"Amateur" astronomer Emil Kraaikamp took advantage of the situation, and, with his friend Rik ter Horst — who crafted his own 40 cm (16") mirror telescope — took this amazing shot of Jupiter:

[Click to enjovianate.]

I found this image on the Astron/Jive image of the day page (you should really subscribe to their RSS feed), and Emil gave me permission to use it here. Isn’t it lovely? The level of detail is quite incredible, about as good as you can possibly get with a 40 cm ‘scope. They used a video camera to capture a lot of frames, then picked the best ones to add together. Earth’s atmosphere roils and shifts, causing images to blur out, so this technique compensates for that — and Jupiter obliges by being very bright, allowing for lots of short exposures in rapid succession.
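The select-the-best-frames-and-stack step (planetary imagers call it "lucky imaging") is simple enough to sketch. Here's a minimal illustration in Python with NumPy — the sharpness metric and the `keep_fraction` parameter are my own choices for the sketch, and a real pipeline would also align (register) the frames before averaging:

```python
import numpy as np

def sharpness(frame):
    # Variance of a simple Laplacian response: atmospheric blur
    # smears fine detail, which lowers this score.
    lap = (-4.0 * frame
           + np.roll(frame, 1, axis=0) + np.roll(frame, -1, axis=0)
           + np.roll(frame, 1, axis=1) + np.roll(frame, -1, axis=1))
    return lap.var()

def lucky_stack(frames, keep_fraction=0.1):
    """Keep only the sharpest fraction of frames and average them."""
    scores = [sharpness(f) for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]  # indices of the sharpest frames
    return np.mean([frames[i] for i in best], axis=0)
```

Averaging the keepers beats down sensor noise, while discarding the blurred frames preserves the moments when the atmosphere briefly steadied.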

The little guy below Jupiter and to the right is the moon Ganymede, which, if Jupiter weren’t there, would be considered a planet in its own right. It’s the biggest moon in the solar system, and actually comfortably larger than Mercury — though also much less massive, because Mercury has lots of iron, while Ganymede is mostly rock and ice. It’s incredible that advances in technology have made it possible to capture such detail on an object 600 million km (360 million miles) away! The image on the right of Ganymede is a NASA map of the moon based on space probe images, showing that those features Emil and Rik captured are real.

Emil tells me it’s been cloudy where he is lately, which is too bad. It’s been touch-and-go here with the weather, but seeing this is making me think of hauling out my own ‘scope and taking a look. I should get on that before the snow starts to fall here in Boulder…

In the meantime, check out the Related posts links below to see more of Emil’s amazing work.


Related posts:

- Jupiter rolls into view
- Saturn rages from a billion kilometers away
- The blue clouds of the red planet [Must see animation of clouds on Mars!]


Jobs Lived 8 Years with Pancreatic Cancer, Steinman for 4, But It Was Steinman Who Beat the Odds. Here’s Why. | 80beats

Steve Jobs and new Nobelist Ralph Steinman both died of pancreatic cancer, a killer that’s hard to spot until it’s very far advanced. But fundamental differences in their diseases made Steinman’s survival more miraculous than Jobs’. Katherine Harmon at Scientific American has a great explanation of this, starting with the fact that the pancreas is made up of two different kinds of cells:

The pancreas itself is essentially two different organs, which means two distinct kinds of tissue—and two very different types of cancer, points out [Leonard Saltz, acting chief of the gastrointestinal oncology service at Memorial Sloan-Kettering Cancer Center]. The most common kind of pancreatic cancer[s] [the kind Steinman had] originate in what is known as the exocrine portion of the pancreas. This is the main mass of the organ, which makes digestive enzymes that get shuttled to the gastrointestinal tract via specialized ducts.

“Scattered in that larger organ are thousands of tiny islands,” Saltz explains. “These are islands of endocrine tissue,” which makes hormones that are secreted into the blood. It was a cancer of these islet cells that Jobs had.

For people with Jobs’ cancer, which is quite rare, survival is measured in years. For those with Steinman’s cancer, it’s measured in months.

Steinman’s survival for four years after diagnosis may be due in part to his use of experimental immunotherapies, which were being developed by his colleagues and sometimes incorporated Steinman’s own discoveries. Jobs’ liver transplant to replace an organ riddled with metastases, on the other hand, may or may not have helped him, says Saltz—having to take immunosuppressants to prevent rejection of the new organ weakens the immune system’s abilities to fight off the cancer.

Read more at Scientific American.

Images courtesy of mattbuchanan / flickr and Rockefeller University


A Fold in the Brain is Linked to Keeping Reality and Imagination Separate, Study Finds | 80beats

What’s the News: One of memory’s big jobs is to keep straight what actually happened versus what we imagined: whether we said something out loud or to ourselves, whether we locked the door behind us or just thought about locking the door. That ability, a new study found, is linked to the presence of a small fold in the front of the brain, which some people have and others don’t—a finding that could help researchers better understand not only healthy memory, but disorders like schizophrenia in which the line between the real and the imagined is blurred.


Scans of a brain with a distinctive paracingulate sulcus (left, marked by arrow) and without one (right)

How the Heck:

  • The researchers looked at MRI brain scans of a large group of healthy adults. In particular, they were looking for the paracingulate sulcus (PCS), a fold near the front of the brain. There’s a lot of variability in the PCS: some people have quite distinctive folds, others have barely any. It’s in a part of the brain known to be important in keeping track of reality, which is why the researchers chose to study it. Of the 53 people selected for the study, some had this fold on both sides of their brain, some had it on one side, and some had no fold.
  • The participants saw some well-known word pairs in full (“Jekyll and Hyde”) and some half pairs (“Jekyll and ?”). If they saw only half of a pair, they were asked to imagine the other half (“Hyde”). After each pair or half pair, either the participant or the experimenter said the whole pair aloud.
  • Once they’d seen all the pairs, the participants were asked two questions about each phrase: Did you see both words of the pair, or just one? And who said the phrase aloud, you or the experimenter?
  • People who didn’t have the fold on either side of their brains did worse on both questions—remembering if something was real or imagined, and remembering who’d done something—than people whose brains had the fold. But they felt as confident in their answers, meaning they didn’t realize they’d been mixing up internal and external events.

What’s the Context:

  • Poor reality monitoring—not clearly remembering whether something was real or imagined—could play a role in diseases such as schizophrenia. Schizophrenics often report hallucinations, like hearing a voice when no one’s speaking. “Difficulty distinguishing real from imagined information might be an explanation for such hallucinations. For example, the person might imagine the voice but misattribute it as being real,” explained lead researcher Jon Simons in a prepared statement.
  • Earlier studies have shown that people with schizophrenia frequently have a smaller PCS or none at all, suggesting a lack of this brain structure—and the associated difficulties with reality monitoring—could play a role in the disease, Simons said.

Not So Fast: The study only shows that the PCS and reality monitoring are linked, not that the presence or absence of the PCS is what causes some people to be better than others at this sort of memory task. It could be that another factor in brain development causes both small PCS and poor reality monitoring, for instance.

The Future Holds: The research team is now planning to study whether these findings hold true for people suffering from schizophrenia, by looking at whether schizophrenics with little to no fold have more hallucinations than participants with a clear fold.

Reference: Marie Buda, Alex Fornito, Zara M. Bergström, and Jon S. Simons. “A Specific Brain Structural Basis for Individual Differences in Reality Monitoring.” Journal of Neuroscience, October 5, 2011. DOI: 10.1523/JNEUROSCI.3595-11.2011

Image: Journal of Neuroscience, Buda et al.
The Brain’s Medicine: Natural Marijuana-Like Chemicals Play Important Role in Placebo Effect | 80beats

Placebos are inactive treatments that shouldn’t, in some sense, have a real effect. And yet they often do. But the chemical basis of the placebo effect, despite its enormous importance, is still largely a mystery. A study published this week in Nature Medicine shows that cannabinoid receptors are involved in the placebo response to pain, which hasn’t been demonstrated before. The finding implies that the brain’s own endocannabinoids can fight pain, and actually do it via the same pathway as several compounds in the cannabis plant.

What’s the Context:

  • For most drugs and treatments to be approved today, they must compare favorably to inert placebos to prove that the therapeutics actually work. (This comparison is not simple, and whether certain drugs—like many antidepressants—are actually better than placebos is a matter of considerable debate.)
  • The placebo effect has an impressive ability to confound expectations. For example, in a 1999 study researchers gave participants an inert substance but said it was a stimulant. The patients became stimulated and tense. Stranger still, they gave people a muscle relaxant, also calling it a stimulant. The patients still tensed up.
  • Much of what we know about placebo chemistry comes from studies of pain. Pain tolerance, in contrast to more slippery traits like anxiety, can be relatively easily quantified—i.e., the length of time someone can withstand a painful sensation (which doesn’t cause lasting damage). The current study employed a tourniquet that painfully tightens around the arm like the cuff of a blood-pressure monitor until participants said it was “unbearable.”

Painful Lessons: 

  • Several previous studies (in 1999 and 2007) found that if you give somebody morphine only twice, and then the third time give them a placebo that they think is a strong painkiller, their pain tolerance will shoot up almost as high as it was on the drug; this is the placebo effect in action. Let’s call this group A.
  • Another control group was given morphine on three consecutive occasions, but the third time they were also knowingly given naloxone, a drug that blocks opiates like morphine and heroin from binding to opioid receptors in the brain and exerting an effect (for this reason naloxone can be used to treat heroin overdoses). As you might expect, naloxone prevented morphine from doing its thing and also squelched the placebo effect, as subjects expected the morphine not to work. Pain tolerance amongst these people was the same as that in the unmedicated control group.
  • This is where it gets weird. Researchers then treated another group of people (let’s call them group C) just as they did those in group A, except for one important difference: on the third treatment, with placebo, people were also unknowingly administered naloxone. Unlike group A, the placebo effect on pain tolerance vanished: people did not have a significantly increased pain tolerance.
  • These results suggest that after being “conditioned” with an opiate drug like morphine, people were capable of producing their own natural opiate-like chemicals that bind to some of the same receptors as morphine. These are called the brain’s endogenous opioids, a class that includes well-known natural painkillers like endorphins, which are released for example during exercise.
  • Researchers had known opioids were involved in pain tolerance, but these studies were amongst the first to show they can be involved in the brain’s placebo response to pain.

Pot and Pain:

  • Although naloxone blocks opiates like morphine, it has little to no effect upon less potent—but more widely used—painkillers like the non-steroidal anti-inflammatory drugs (NSAIDs), including ibuprofen. Given recent evidence that NSAIDs interact with cannabinoids and that cannabinoid receptors are involved in pain tolerance, researcher Fabrizio Benedetti decided to go a step further and see if cannabinoid receptors are involved in the placebo response to pain. (Spoiler alert: they are.)
  • To test this proposition, Benedetti performed experiments very similar to those previously described. But instead of using naloxone to block opioids, he used a drug called rimonabant to block endogenous cannabinoids. (Rimonabant binds the CB1 cannabinoid receptor, boxing out the cannabinoids.)
  • Benedetti first established average pain tolerance in unmedicated study participants. By giving a second group solely rimonabant, he verified that the drug had no impact on pain tolerance by itself.
  • As before with morphine, patients in one group (“group 5”) were given an NSAID called ketorolac for two trials. On the third occasion they were given a placebo labelled a “strong painkiller”; their tolerance was much higher than normal. The placebo effect!
  • The same was done to people in “group 6.” But on the third treatment, besides being given the painkiller placebo, they were also given rimonabant. And voila! Their pain tolerance was back to normal, on par with the tolerance of people not given any drug or placebo treatment.
  • By binding to CB1, rimonabant must have blocked the action of the brain’s own cannabinoids, which the brain apparently is able to produce in order to effectively combat pain in this instance.

So What:

  • The study is the first to prove that the placebo response to pain involves the cannabinoid system, specifically CB1, the receptor to which the brain’s own natural cannabinoids bind. It’s also the receptor bound by THC, the main psychoactive ingredient in cannabis.

The Future Holds:

  • This study tested 82 people, large enough for meaningful results but not especially large or diverse, considering the wide variety of placebo responses people exhibit. Future studies are needed to fully understand the effect, which involves more neurotransmitters than just opioids and cannabinoids.

Reference: Fabrizio Benedetti, Martina Amanzio, Rosalba Rosato, Catherine Blanchard. Nonopioid placebo analgesia is mediated by CB1 cannabinoid receptors. Nature Medicine (2011). Published online 02 October 2011. DOI: 10.1038/nm.2435

Image: MikeBlogs / Flickr
Discovery Suggests a New Avenue for Advancing Stem Cell Research | 80beats

What’s the News: Making stem cells without using embryos can be a difficult process, and scientists have had to cope with numerous failures. But a new discovery may help them home in on what’s missing from their biochemical recipes.

What’s the Context:

  • Induced pluripotent stem cells (iPSs) are cells that have had their biological clocks turned back to the point where they can develop into anything—blood cells, liver cells, you name it. Scientists hope to eventually work this magic on skin or blood cells removed from a patient desperately in need of new pancreatic cells, for instance, in order to grow them replacements.
  • The primary method for making iPSs involves inserting four genes into the cells you’d like to reprogram. The proteins those genes code for are usually found in embryonic stem cells, and they direct the cell to reset itself and begin to divide.
  • But cells made this way can have a lot of mistakes in their genetic code, which can turn them into tumor cells, and it appears that their memory of their early identities isn’t always fully erased. (Check out Ed Yong’s interactive timeline of stem cell history for more.)

How the Heck:

  • To address this problem, this team of scientists went back to stem cells’ roots: they worked on replacing the nucleus of a donated human egg with a nucleus from a patient’s cell, an approach that had been used in the early days of studying stem cells. The idea is that unknown substances in the egg can reset the nucleus, and then as the egg-patient cell hybrid grows and divides, scientists can skim off the reprogrammed cells—which, courtesy of their nucleus, are compatible with the patient—and use them in treatments.
  • The way it’s usually done, though, the hybrid cell almost never makes it that far—it fails to divide enough to make it to the point where cells can be skimmed off. But when the team accidentally left the egg’s nucleus in the cell along with the new nucleus, they found that the cells made it to that point far more frequently than they had without the egg’s nucleus.
  • While cells made this way aren’t suitable for use in patients—for one thing, they have an extra set of chromosomes, thanks to the egg nucleus—this suggests that there is something that the egg nucleus itself is producing to keep the cells viable.

The Future Holds: The discovery that the egg nucleus can keep the hybrid cells growing is in and of itself not such a shocker: The nucleus, of course, carries the genes that code for the proteins that make eggs viable. But knowing definitively that an egg nucleus can coax these hybrid cells into dividing will help scientists narrow the field of candidates. Hopefully they will be able to find several more proteins they can add when making iPSs the usual way, in addition to the four they already use.

Reference: Noggle, S. et al. Human oocytes reprogram somatic cells to a pluripotent state. Nature 478, 70–75 (2011)

Image courtesy of PLoS


Uranus got double-tapped? | Bad Astronomy

One of the enduring mysteries of our solar system is why Uranus is tilted over on its side. If you measure the angle of a planet’s rotation axis (the location of its north pole) compared to the plane of its orbit, you find that all the planets in the solar system are tipped. Jupiter is only 3°, but Earth is at a healthy 23° angle; Mars is too. Venus is tipped so far over it’s essentially upside-down (we know this because it spins the wrong way).

Uranus, weirdly, is at 98°, like it’s rolling around the outer solar system on its side. The best guess is that it got hit hard by something planet-sized long ago, knocking it over (though there are other, more speculative, ideas). The problem with that is that its moons and rings all orbit around its equator, meaning their orbital planes are tipped as well. It’s hard to see how that might have happened, even if you assume the moons formed in that collision (as, apparently, our Moon formed in an ancient grazing impact with Earth by a Mars-sized body).

Well, a team of astronomers has come up with a new idea: maybe Uranus wasn’t hit by one big object. Maybe it was hit by two smaller ones.

It would’ve happened when the planet was still forming, and surrounded by a disk of leftover material that was in the process of forming its moons. A proto-planet could’ve hit it, knocking it over somewhat, and sending up a vast cloud of debris that puffed the disk up into a torus (that’s what us scientist-types call a donut). A second collision some time later would’ve completed the task. After more time elapsed things settled down and Uranus would’ve been rotating sideways, and the torus would’ve flattened back into a disk aligned with Uranus’ equator due to tidal forces.

It’s an interesting, if surprising idea. If there were only one collision at that time, the astronomers found the dynamics would’ve made the moons orbit the planet the wrong way (retrograde, against the spin of the planet). It would’ve taken a second hit to add enough momentum to the debris disk to get the moons orbiting prograde.

I wonder if this would also somehow explain the weird magnetic field of Uranus. It’s not aligned at all with the rotation axis, and is even off-center from the core of the planet! It’s unclear why this might be, though it may have to do with Uranus being an ice giant (PDF), with a different composition and structure than Jupiter and Saturn, the two gas giants. I’ll note Earth’s magnetic field isn’t well aligned with our spin axis either, but at least it has the courtesy to be centered on the center of our planet! One idea I’ve seen is that the magnetic field of Uranus isn’t generated in its core, like ours is (or, to be more accurate, in the outer layer of our core — this stuff gets complicated pretty quickly), but might be created higher up in the interior. Clearly, there’s a lot left to figure out here.

All of these things are clues to Uranus’ origin and evolution, its history. The characteristics we see today had some cause, and by piecing all this together we can, perhaps, understand the story of this giant planet. And we need to sometimes entertain unusual ideas — as long as the science supports them — because if there’s one thing that’s usual about the bodies inhabiting the solar system, it’s that they’re unusual.

Image credit: Erich Karkoschka (University of Arizona) and NASA


Related posts:

- Did Herschel see the rings of Uranus?
- Ooo-RAN-us
- Yes, yes, rings around Uranus, haha
- A new ring around Uranus (and this followup)


Wall Street Journal: neutrinos show climate change isn’t real | Bad Astronomy

OpEds — editorials expressing opinions in newspapers — are sometimes a source of wry amusement. Especially when they tackle subjects where politics impact science, like evolution, or the Big Bang.

Or climate change.

Enter the OpEd page of the Wall Street Journal, with one of the most head-asplodey antiscience climate change denial pieces I have seen in a while — and I’ve seen a few. The article, written by Robert Bryce of the far-right think tank Manhattan Institute, is almost a textbook case in logical fallacy. He lays out five "truths" about climate change in an attempt to smear the reality of it.

I won’t even bother going into the first four points, where he doesn’t actually deal with science and makes points that aren’t all that salient to the issue, because it’s his last point that you have to see to believe anyone could possibly make it:

The science is not settled, not by a long shot. Last month, scientists at CERN, the prestigious high-energy physics lab in Switzerland, reported that neutrinos might—repeat, might—travel faster than the speed of light. If serious scientists can question Einstein’s theory of relativity, then there must be room for debate about the workings and complexities of the Earth’s atmosphere.

Seriously? I mean, seriously?

It’s hard to know where to even start with a statement as ridiculous as this. For one, there is always room for questioning science. But that questioning must be done by science, using a scientific basis, and above all else be done above board and honestly. But that’s not how much of the climate science denial has been done. From witch hunts to the climategate manufactrovery, much of the attack on climate science has not been on the science itself, but on the people trying to study it. And when those attacks have at least a veneer of science, it’s often found they are not showing us all the data, or are inconclusive but still get spun as conclusive by climate change deniers. And if you point that out, the political attacks begin again (read the comments in that last link).

Second, the neutrino story has nothing to do with climate change at all. It’s a total 100% non sequitur, a don’t-look-behind-the-curtain tactic. Just because one aspect of science can be questioned — and I’m not even saying that, which I’ll get to in a sec — doesn’t mean anything about another field of science. Bryce might as well question the idea that gravity is holding us to the Earth’s surface.

After all, gravity is just a theory.

And he’s wrong anyway: even if the neutrino story turns out to be true, it doesn’t prove Einstein was wrong. At worst, Einstein’s formulation of relativity would turn out to be incomplete, just as Newton’s was before him. Not wrong, just needs a bit of tweaking to cover circumstances unknown when the idea was first thought of. Relativity was a pretty big tweak to Newtonian mechanics, but it didn’t prove Newton wrong. Claims like that show a profound lack of understanding of how science works.

And finally, of course there is lots of room for arguing over how the Earth’s environment works. It’s a complex system with a host of factors affecting how it works. But that’s beside the point: we know the average global temperatures are increasing. The hockey stick diagram has been vindicated again and again, after being attacked many times by real science and otherwise. It’s always held up. Yes, the Earth is a difficult-to-understand system, but we’ve gotten pretty good at hearing what it’s telling us:

The temperatures are going up. Arctic sea ice is decreasing. Glaciers are retreating. Sea levels are rising, sea surface temperatures are increasing, snow cover is decreasing, average humidity rates are rising.

But hearing is one thing. Listening is another.

Someone like Bryce can try to sow confusion — and reading the comments on that OpEd, that tactic appears to work with lots of people — but the bottom line is that global warming is real, the climate is changing, and human influence is almost certainly the cause.

The only thing faster than neutrinos, I think, is the speed at which deniers will jump on any idea, no matter how tenuous, to increase doubt.


Related posts:

- Climate change: the evidence
- New study clinches it: the Earth is warming up
- Case closed: climategate was manufactured
- NASA talks global warming


MAGNIFICENT time lapse: Landscapes, Volume 2 | Bad Astronomy

I still can’t seem to get enough of these time lapse videos, especially when they show dramatic landscapes coupled with meteorological motion and astronomical beauty. And this one is especially amazing, so, for your Friday enjoyment: Landscapes, Volume 2.

[Make sure you have it set to HD and make it full screen! Turn up the volume, too.]

I think my favorite parts are when you see stars, yet the landscape is lit up and the sky is blue. How can that be?

It’s because of the Moon! Each frame of a nighttime time lapse is a several second exposure, so the stars show up, but the scattered light from the Moon is enough to make the sky blue (the same reason it’s blue during the day; scattered light from the Sun). It can also light up the landscape, making it look like day… until you notice those stars.

Photographer Dustin Farrell shot this in Arizona and Utah, using a Canon 5D2, a camera I’ve heard very good things about. The color balance is immaculate. Some day…

And if you noticed this is Volume 2, why then, here’s Volume 1. Gorgeous.

Tip o’ the lens cap to Max Bittle


Related posts:

- Well, at least light pollution makes for a pretty time lapse
- Time lapse: The Wagging Pole – Night Watch
- Stunning Finnish aurora time lapse
- Wyoming skies
- Another jaw-dropping time lapse video: Tempest
- Time lapse: Journey Through Canyons
- Down under Milky Way time lapse


The tedious inevitability of Nobel Prize disputes | The Loom

Once more we are going through the annual ritual of the Nobel Prize announcements. The early morning phone calls, the expressions of shock, the gnashing of teeth in the betting pools. In the midst of the hoopla, I got an annoyed email on Tuesday from an acquaintance of mine, an immunology grad student named Kevin Bonham. Bonham thought there was something wrong with this year’s Prize for Medicine or Physiology. It should have gone to someone else.

Kevin lays out the story in a new post on his blog, We Beasties. The prize, he writes, “was given to a scientist that many feel is undeserving of the honor, while at the same time sullying the legacy of my scientific great-grandfather.” Read the rest of the post to see why he feels this way.

Kevin emailed me while he was writing up the blog post. He wondered if I would be interested in writing about this controversy myself, to give it more prominence. I passed. Even if I weren’t trying to carry several deadlines on my head at once, I would still pass. As I explained to Kevin, I tend to steer clear of Nobel controversies, because I think the prize is, by definition, a lousy way to recognize important science. All the rules about having to be alive to win it, about how there can be no more than three winners–along with the lack of prizes for huge swaths of important scientific disciplines–make these kinds of disputes both inevitable and tedious.

The people behind the Nobel Prize, I should point out, have done a lot of good. Their web site is a fine repository of information about the history of science. I’ve tapped it many times while working on books and articles. There’s also something pleasing in seeing the world drawn, for a couple of days at least, to the underappreciated byways of science. If the Nobel Prize makes more people aware of quasicrystals, the Prize is doing something unquestionably wonderful.

But the vehicle that delivers this good is fundamentally absurd. The Nobel Prize rules say no more than three people can win an award, for example. This year’s prize for physics went to Saul Perlmutter, Brian Schmidt, and Adam Riess for their work on the accelerating expansion of the universe. Half went to Perlmutter, and a quarter each went to Riess and Schmidt. But, of course, scientists do not work in troikas. It wouldn’t even make sense to say that three people could accept the prize on behalf of three labs. Science is a stupendously complex social undertaking, in which scientists typically become part of shifting networks over the course of many years. And those networks are not just made up of happy friends collaborating on projects together. Rivals racing for the same goal can actually speed the pace of discovery.

Now, some individual scientists are certainly remarkable people. But the Nobel Prize doesn’t merely recognize them for being remarkable individuals. The citations link each person to a discovery, as if there was some sort of equivalence between the two. But discoveries are usually a lot bigger than one person, or even three.

In his wonderful book The 4 Percent Universe, Richard Panek describes the history of the research that led to this year’s physics prize. I read the book to review it for the Washington Post, and I was particularly taken by a story at the end. In 2007, the Gruber Prize, the highest prize for cosmology research, was awarded for the research. Schmidt haggled with the prize committee until they agreed to widen the prize to all 51 scientists who had been involved in the two rival teams. Thirty-five of them traveled to Cambridge for the ceremony. It would have been fun to watch Schmidt go up against the Nobel Prize committee. He would have lost, of course, but at least he would have made an important point.

Should scientists get credit for great work? Of course. But that’s what history is for. Charles Darwin and Leonardo da Vinci never got the Nobel Prize, but somehow we still manage to remember them as important figures anyway. The time that’s spent arguing over whether someone should get fifty percent of a prize or twenty-five percent or zero percent could be spent on much better things, like more science.

[Update: Revised post to clarify that the prize was for research on the acceleration of the universe, not the dark energy many think is driving the acceleration.]


When did population genetics emerge? | Gene Expression

I recently heard an eminent geneticist declare that population genetics began with Theodosius Dobzhansky’s Genetics and the Origin of Species in 1937. My immediate reflex was to be skeptical of this, at least going by Will Provine’s treatment in The Origins of Theoretical Population Genetics, which seemed to push back the timing to the 1920s.

So I looked up “population genetics” in Ngram viewer.

These results are not consistent with my expectations. Looks like my intuition was wrong. At least for the term population genetics. Score one for experience and wisdom.
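(For anyone who wants to replicate this kind of check offline: Google also publishes the raw Ngram counts as tab-separated files, one row per ngram per year, with the match count and volume count. A minimal sketch of finding a term's first appearance, assuming that export format — the function name and threshold are mine:)

```python
import csv
import io

def first_year(tsv_text, phrase, min_count=1):
    """Earliest year `phrase` reaches `min_count` matches.

    Expects Google Books Ngram export rows:
    ngram <TAB> year <TAB> match_count <TAB> volume_count
    """
    years = []
    for ngram, year, match_count, _volumes in csv.reader(
            io.StringIO(tsv_text), delimiter="\t"):
        if ngram == phrase and int(match_count) >= min_count:
            years.append(int(year))
    return min(years) if years else None
```

Raising `min_count` filters out stray early hits (OCR errors, one-off coinages) so the answer reflects when the term actually caught on.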

Here’s a Tribute That Speaks to the Real Steve Jobs | 80beats

The Internet’s cup runneth over with elegies for the Apple cofounder, who died yesterday at 56. People around the world are pouring out their stories of how Jobs, via the company’s products, changed their lives.

Many have a frankly religious tone, like the middle-aged mom who spoke in a breathless voice about the iPhone’s “grace” and the architect Jobs hired in the mid-80s who told how Jobs “put his hand on mine” when teaching him to use a mouse (both were on NPR member station WNYC this morning). Other testimonials focus more on when the teller first encountered an Apple product, back in the days when mice were the big new thing. People are even setting up shrines in Apple stores, a move that strikes some as fitting tribute, others as cultish (“If you needed any more proof that brands are our new gods…” one person tweeted in response to the news). Though the blog “Steve Jobs is God” appears to be defunct, its message is on many lips today, in some form or another. It’s simply astounding how much of a connection many felt to Jobs, whom they see as the architect of a significant portion of their lives.

The best tribute that we’ve seen, though, isn’t part of this sometimes-saccharine thicket. ZDNet has a simple video interview with Steve Wozniak, aka “Woz,” who cofounded Apple with Jobs. Wozniak talks about Jobs as a person, not as an icon, and it’s somehow more touching to hear him talk about the years when they bummed around together in the Bay Area than all of the other stuff combined. Wozniak also gives some perspective: When the interviewer compares Jobs to Edison, Wozniak says that’s not quite right, because Jobs’ strength wasn’t designing or building stuff; it was marketing, in part because he related well to the users. What we’re seeing across the web today, in these outpourings, is the end product of that relationship. And Apple has obviously radically shaped the modern sense of design, but before Apple was making beautiful objects, Wozniak recalls, Sony was the company that had the products with the fine details and clever engineering.

In their last few conversations, Jobs wanted to talk about their early times together, Wozniak says. But after all Jobs had accomplished, it seemed hard to relive those moments. “It’s hard to go back to those little simple days, where we were clowning around and thought maybe we’d make a few bucks,” Wozniak says with a smile. But imagining Steve Jobs as a young guy, pulling pranks and geeking out about Dylan with his buddy, is much more poignant than all the suggestions of his divinity. Thanks for everything, Steve. We’ll miss you.


‘Chivalrous’ crickets dying to let females go first | Not Exactly Rocket Science

A field cricket couple is in danger. A bird has spotted them, and the pair rush for the safety of their burrow. At the entrance, it’s ladies first: the male cricket waits while his partner dives in first. It’s a delay that could cost him his life.  This may all seem very chivalrous, but the male’s seemingly selfless actions also make selfish sense. He may die, but he ensures that his genes pass on to the next generation.

Male insects often stay close to female ones after they mate, and people have generally assumed that they’re standing guard. If the female mates again, the first male’s sperm will be flushed out by the second male’s contributions. If he wants to ensure that he fathers her offspring, he’d do well to keep other suitors at bay.

But Rolando Rodriguez-Munoz from the University of Exeter found that this narrative of conflict doesn’t quite work for field crickets. He set up a network of infrared cameras to study a wild population of the crickets that had all been individually marked and genetically analysed. The cameras recorded thousands of hours of video, and Rodriguez-Munoz watched them all.

Field crickets are territorial. They live in burrows, which they defend against other crickets. They’ll only share their shelters with members of the opposite sex. Rodriguez-Munoz found that a male spends more time with a female if they have sex, but while he’ll keep an eye on her, he’ll never coerce her or block her movements. Rodriguez-Munoz never saw any behaviour from the males that could be interpreted as aggression.

Instead, the males seem to accommodate the females. They wander further away from their burrows than they would venture alone, allowing their partners to forage in the safe zone closer to the entrance. If the duo is attacked, the male lets the female into the burrow first, often paying a heavy price. Males are four times more likely to be eaten by predators when they’re with females than when they’re alone. Females are almost six times less likely to be eaten in pairs than alone. If a bird succeeds in catching one of the pair, it’s always the male that dies.

Both partners benefit. The longer the male stays with the female, the more mating opportunities he gets, and the more sperm he adds to her stores (females can keep sperm tucked away in side pouches for later use). He protects his investment in the next generation by ensuring that the female who carries his young has a better chance of survival. The female could wander away and mate with other males, but if she stays, she gets protection.

Rodriguez-Munoz suggests that mate guarding behaviour, as seen in the field crickets, can evolve in two very different ways. It can be born of conflict, with males actively preventing females from mating with other suitors. To do that, males have evolved all manner of grisly strategies, from mating plugs to life-shortening sperm to spiky penises.

These strategies are less effective if the females cannot be coerced, or if they’re fertile for such a long period that males would need to spend too much energy on guard duty. In those situations, guarding only evolves if the females also benefit from the relationship, and female field crickets certainly do. The male is less a guard, and more a consort.

In the past, it looked as if this behaviour evolved in field crickets through the first route. Several scientists had shown that a male would guard a female more forcefully, prevent her from removing his sperm, and fight off rival males. But almost all these studies involved crickets interacting with one another in tiny boxes. The lone exception used three metre enclosures – bigger, but hardly roomy.

By contrast, Rodriguez-Munoz watched the crickets in their natural environment, and saw very different behaviour. “[These] differences [raise] the question of whether lab situations may produce anomalous behaviours, and at least suggests that observations of wild insects may lead to differing conclusions,” he says.

Reference: Rodriguez-Munoz, Bretman & Tregenza. 2011. Guarding Males Protect Females from Predation in a Wild Insect. Current Biology http://dx.doi.org/10.1016/j.cub.2011.08.053

Image by Roberto Zanon

Well, at least light pollution makes for a pretty time lapse | Bad Astronomy

Light pollution — wasted light that gets thrown up into the sky instead of down onto the ground where it’s actually useful — is the enemy of every sky watcher, from the professional astronomer to the sometime stargazer. It overwhelms fainter objects, and in bad cases even the brightest stars, reducing the glory of the sky to a washed-out glow.

But, it pains me to admit, it can be pretty. Photographer Brad Goldpaint used it to his advantage to make this short, lovely time lapse video called "Wiser for the Time", showing orange-lit clouds racing past the sky above them:

[Make sure to watch this full screen in HD!]

Recognize those skies? Orion, Taurus, Capella, Polaris, the Milky Way… given the light pollution, I was surprised how well some of those fainter objects showed up (especially the Andromeda Galaxy in both sets!). I was thinking just yesterday, in fact, that it’s been a while since I’ve been to a seriously dark site and seen more stars than I could hope to count. Maybe it’s time to find some secluded spot in the Rockies and wait for sunset…


Related posts:

- Time lapse: The Wagging Pole – Night Watch
- Stunning Finnish aurora time lapse
- Wyoming skies
- Another jaw-dropping time lapse video: Tempest
- Time lapse: Journey Through Canyons
- Down under Milky Way time lapse


Political correctness in 1900 and 2000 | Gene Expression

In my post yesterday I made the comment that the British media in particular seem to have a fascination with “different race twins.” The generalization derives from the fact that people who email me these stories tend to point to British sources, but I decided to Google it. I think I’ll stick by my assertion. These stories aren’t exclusively British, but they’re clearly driven by the British media.

But in the process of looking at stories on this topic I ran into a story in The Guardian on a pair of fraternal twins of different complexions who are now adults. These two are different more than in just physical appearance. One is gay and the other is straight. This section dovetails with something else I mentioned yesterday:

Alyson got used to the comments and the stares, the sniggers about their parentage and the “stupid things people said” when her boys were babies; but then, when Daniel and James went to nursery aged three, the twins’ skin colour plunged the family into controversy. “They were at this very politically correct nursery, and the staff told us that when Daniel drew a picture of himself, he had to make himself look black – because he was mixed-race,” says Alyson. “And I said, that’s ridiculous. Why does Daniel have to draw himself as black, when a white face looks back at him in the mirror?”

After a row with the nursery staff, she gave interviews to her local paper and TV. “I kicked up a fuss, because it really bothered me,” she says. “Daniel had one white parent and one black, so why couldn’t he call himself white? Why does a child who is half-white and half-black have to be black? Especially when his skin colour is quite clearly white! In some ways it made me feel irrelevant – as though my colour didn’t matter. There seemed to be no right for him to be like me.”

It’s a social convention that people of mixed black-white ancestry are black, especially in the United States. In fact in the USA black Americans themselves espouse this viewpoint, as is evident in the case of Barack Obama. But it does seem ridiculous that a mixed person whose physical appearance is more European would be forced to draw a self-portrait which was more African. Especially a child who has probably not internalized the various ideologies which adults take for granted in our societies. Madison Grant, though, would have agreed heartily with this decision:

Grant advocated restricted immigration to the United States through limiting immigration from Eastern Europe and Southern Europe, as well as the complete end of immigration from East Asia. He also advocated efforts to purify the American population through selective breeding. He served as the vice president of the Immigration Restriction League from 1922 to his death. Acting as an expert on world racial data, Grant also provided statistics for the Immigration Act of 1924 to set the quotas on immigrants from certain European countries…Even after passing the statute, Grant continued to be irked that even a smattering of non-Nordics were allowed to immigrate to the country each year. He also assisted in the passing and prosecution of several anti-miscegenation laws, notably the Racial Integrity Act of 1924 in the state of Virginia, where he sought to codify his particular version of the “one-drop rule” into law.

From Wikipedia:

The Racial Integrity Act required that a racial description of every person be recorded at birth and divided society into only two classifications: white and colored (all other, essentially, which included numerous American Indians). It defined race by the “one-drop rule”, defining as colored persons with any African or Indian ancestry.

Obviously today’s high priests of political correctness do not share the aims of someone like Madison Grant. But for the purposes of maintaining racial and cultural integrity (in this case, that of non-whites!) they have perpetuated a framework which came into being at a specific period of time in the 19th century, when white superiority was taken for granted, and the division between whites and non-whites was seen to be most useful. In a genuinely multi-polar world this mentality is very outmoded. But ideologies often outlast their utility. Ask the Confucian Mandarins in 1900.

Beauty and the brain | Not Exactly Rocket Science

I’ve got a feature in today’s Eureka (the science supplement of the Times) about how feelings of beauty manifest in the brain, and how scientists are trying to objectively measure something that’s inherently subjective.

The piece focuses on Semir Zeki from UCL. I had a great morning walking around London’s National Gallery with the man, staring at Cezannes and Botticellis, and talking about neuroscience, art and everything in between. It’s very easy to caricature scientists who work on such topics as cold reductionists, but in spending time with Zeki, I was struck by how intensely he cares for the art that he investigates. He visits the National Gallery regularly, has financially sponsored several of its paintings, and gets sad when it closes for Christmas. He quoted philosophers and poets more often than he did other scientists. He has exhibited his own works.

I started off writing a piece about the neuroscience of beauty and half of it morphed into a piece about an art-loving scientist who studies beauty.

Anyway, I hope you like it. Times subscribers can read the online version. For everyone else, here’s a link to Scribd – you can download a PDF from there.

First Orbit | Bad Astronomy

I missed this a few months ago when it came out, but it’s pretty neat: First Orbit, a film made in space following Yuri Gagarin’s flight path as the first human ever to orbit our world:

The creators want to translate this film into as many languages as possible, and they’re looking for help. It seems like a pretty good idea; this year is the 50th anniversary of Gagarin’s flight, and it’s appropriate that as many people get to experience this as possible.


Something Fierce | Bad Astronomy

Marian Call is many things: a singer, a songwriter, a geek, and a friend. I’ve written about her many times, since a lot of her music is about geek culture (like Firefly and Battlestar, and living the nerd life)… and because she has a fantastic voice and writes brilliant lyrics.

And that’s why I’m so excited that she has a new album out: Something Fierce, a double-album of Mariany goodness.

Something Fierce is full of wonderful music. Some of it is geek, some normal, and some about Alaska. My favorite on the whole shebang is "Temporal Dominoes", which starts off with Marian wailing, and launches into a complex weave of lyrics about how life stacks up on you. It’s a very cool song, and not just because I commissioned the song from her for a plate of cookies*.

The whole album is pretty amazing. And, to top it off, she’s done this whole thing herself, with support of her fans. No agents, no labels, no record companies: just her and her throng. When I was in L.A. a few months ago to film for a documentary, she happened to be there as well, and we were (barely) able to arrange getting together for a quick breakfast; she had to crash at a fan’s place and then spent most of the night mixing the tracks. For all that, the album is professional, polished, and clean, and is, in my opinion, the best thing she’s put out yet.

But why believe me? You can listen to her yourself:

Temporal Dominoes by Marian Call

Convinced? You can also listen to the whole album online. From there you can also buy a CD or a downloadable file. I grabbed the download, and it’s really nice. She also includes a ton of photos and liner notes that are fun to go through. You might recognize a name or two there, too.

She’s also on Facebook, Google+, and her own site.

I think it’s great that there are independent artists out there working hard to make their own individual voices heard, doing an end-run around the gatekeepers that in many cases mediocratize artists. That’s one of the many reasons I support Marian, and I hope you will too.


* This is true.


Related posts:

- Clarion Marian Call (my first post about her!)
- In the black
- The (Marian) Call of Mars
- New weeks, new geeks; so say we all!


Mixed-race people are mildly complicated | Gene Expression

I was pointed today to a piece in the BBC titled What makes a mixed race twin white or black?. The British media seems to revisit this topic repeatedly. There are perhaps three reasons I can offer for this. First, it tends toward sensationalism. Even though the BBC is relatively staid, when it comes to science it converges upon the tabloids. Second, because the number of non-whites in Britain is relatively small, there is a higher proportion of intermarriages between minorities and the white majority (from the perspective of minorities). This is especially true of people of Afro-Caribbean ancestry. So of the proportion of minorities a larger fraction are recently mixed in Britain than in the USA. Finally, the United States has a more complex attitude toward race relations than the United Kingdom, because the former has traditionally had a large non-white minority while the latter has only had one since the years after World War II. I suspect that “black-white twins” stories would seem in bad taste on this side of the pond, and bring up certain memories best forgotten.

Now, there are fallacies, confusions, and misleading shadings in the BBC piece. I’ll hit those first before reviewing what’s going on here when fraternal twins exhibit totally different complexions.


It starts out somewhat ludicrously: “Her son Leo has black skin and her daughter Hope, has white skin.” This is false in a precise sense. Leo clearly has medium to light brown skin (there are photos in the piece). What’s going on here is that Leo has some African ancestry, and because of the rule of hypodescent all people of African ancestry with a shade of brown skin, from nearly black to light brown, are termed “black skinned.” This is not a trivial semantic elision. If Leo truly had black skin, very dark brown, then there’d be a lot of explaining to do, because the genetics would be somewhat mystifying. More on that later.

Second:

She was adopted when she was four years old, and her birth mother is Afro-Caribbean and her British birth father was white. Her DNA tests revealed that, genetically, she was exactly 50% African and 50% European.

This is very unusual, and the results suggested that Shirley’s mother had pure African roots, and that her ancestors must have moved from Africa to the Caribbean quite recently.

Not necessarily. Mixed-race people, especially those with recent admixture, don’t have their different ancestral components distributed equally across their genome. It may be that in the process of sampling chromosomes from this individual’s Afro-Caribbean mother she received almost none of the European quantum, perhaps localized to a few chromosomal segments. This “noise” in the process explains why I seem to carry an elevated proportion of East Asian ancestry in relation to both of my parents. I simply received genetic copies sampled from the more “East Asian” regions of my parents’ genomes.
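The sampling noise described above can be sketched with a toy simulation. To be clear, this is purely illustrative and not how real inheritance works in detail: it assumes whole chromosomes pass without recombination, and the number of “European” chromosome copies in the mother is made up for the example.

```python
import random

def maternal_euro_copies(n_euro_chroms=6, trials=100_000, seed=42):
    """Toy model: a mixed-ancestry mother carries her European ancestry on
    one copy of each of `n_euro_chroms` chromosome pairs. Each pair passes
    one copy to the child at random, whole (no recombination). Returns the
    number of European chromosomes the child inherits, per trial."""
    rng = random.Random(seed)
    return [sum(rng.random() < 0.5 for _ in range(n_euro_chroms))
            for _ in range(trials)]

draws = maternal_euro_copies()
# Under this toy model, roughly 0.5 ** 6, i.e. ~1.6% of children, inherit
# *none* of the mother's European segments despite her mixed ancestry.
share_zero = draws.count(0) / len(draws)
```

Even in this crude sketch, a child’s inherited ancestry fraction scatters widely around the parental value, which is the point: the genome a child receives is a sample, not an average.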

Next:

“Our skin colour is determined by a number of gene variants – at least 20 variants, I would say, probably quite a few more than that,” says Dr Wilson.

This is complicated, but I’d say that the good doctor is misleading the audience. Skin color seems to be a quantitative trait where you can explain the vast majority of between population variation with only a few genes, at most six. When it comes to the European-African difference, variants at two loci, SLC24A5 and KITLG, can account for well over half of the difference. It is true that there are many, many, genes that affect skin color, but there is a definite distribution where the vast majority of genes tweak the trait only on the margins. In other words, there may be 20 variants (there are more), but for good predictive power at the inter-population level you’re good to go with 4 or 5.

I specify inter-population level, because within populations the gene set which can allow you to predict variation may be slightly different, and you have to take into account sex differences. For hormonal reasons males seem somewhat darker than females in human populations. Additionally, people are also palest in their youth, and become darker as they age. Finally, some of the genes which explain differences between populations are invariant within a population. Therefore the genes which are of lower effect size move up the stack. So when it comes to European-African variation, the largest effect gene, SLC24A5, won’t explain anything within these two populations. That’s because it is fixed for alternative variants (the light vs. dark conferring variants). So the second-largest effect may move up to first place when you evaluate at a finer grain (but if that gene is nearly fixed as well, it might drop far down too).

Now let’s move on to the common idea that darkness dominates over lightness:

As in a painter’s palette, in the skin the presence of pigment dominates the absence of pigment, so the fact that Hope is white is very unusual.

This is hypodescent popping up again. Though in the West we live in an anti-racist age, at least notionally, it is interesting how concepts and models from a white supremacist era remain operative, at least implicitly. The idea that whites are recessive to non-whites makes total sense if you code anyone with visible non-white ancestry as non-white. Even if they are genetically more white than not. The rationale for this model was the idea that there is a reversion to the more “primitive” type. So a cross between a black and a white produced a black, and a cross between a Nordic and a Mediterranean produced a Mediterranean. Inferiority taints the purity of superiority.

Less ideologically, if you classify skin complexion as white or non-white in a dichotomous fashion, then you logically consign the non-white trait to dominance. For example, if nearly, but not quite, white skin is “dark,” then you make it very difficult for someone with a substantial number of pigment conferring alleles to produce a child with very light skin.

Finally, now that we have elucidated the genetic architecture of pigmentation to a great extent we can make assessments of dominance and recessiveness on a locus-by-locus basis. If you measure skin complexion by reflectance you can turn it from a dichotomous or discrete trait to a continuous one. So individuals can have a “melanin index,” an integer value equivalent to their position on a scale of lightness and darkness. Contrary to the expectations above, it turns out that at the two largest effect genes explaining the difference between Africans and Europeans the light alleles are more dominant than the dark alleles! In other words, if the two alleles had an equal effect you’d expect the heterozygote to sit midway between the two homozygote states. As it is, the values tend toward higher reflectance (light) than dark. I would caution that terms like “dominant” and “recessive” can be highly subjective and dependent on how you code the trait, the nature of the population you sample from in a polygenic character, or even the scale of values. So in this case you notice that switching from a dichotomous code of white vs. non-white to a continuous value corresponding to reflectance flips the model from the light trait being recessive to the dark, to the dark being recessive to the light (albeit only mildly).

Because pigmentation is controlled by only a few genes, the states at these loci are poor proxies for total genome content. In plainer language, mixed-race siblings won’t deviate too much in their ancestral quantum, but they can deviate a great deal in their physical appearance. In fact, because of the poor correlation, the slightly “blacker” twin in total ancestry may actually look more like a white person, and vice versa.

Now let’s go back to first principles. We’ll make some simplifying assumptions to illustrate what’s going on easily. Take 6 genes which control skin color. Assume equal effect. Each gene comes in two variants. Light and dark. Two copies of light result in a value of 0, while two copies of dark result in 2. A copy of each results in 1. In other words, the alleles are additive across a locus. Also assume that the genes are independent. They’re not linked. So the value at each gene is independent of the other genes. Finally, assume that the genes’ implied values summed together result in a total pigmentation phenotype outcome. So they’re additive across loci.

To make it even simpler, let’s assume that the parents are F1 African-European hybrids. That means that one of their parents was European and the other African. So both share the same ancestry of recent vintage. As it happens Africans and Europeans are very different on pigmentation genes, so we can assume that these parents carry one light copy and one dark copy across the six genes. This means you’d expect them to be brown.

Since they are brown, wouldn’t their children be brown? No. Not necessarily. As per Mendel’s laws, each parent contributes one gene copy at each locus. So for the 6 loci above each parent contributes one pigmentation gene. What does that mean concretely? I already simplified things to produce an elegant outcome: the F2 offspring could be all light, all dark, or one copy of each, like their parents, at any given gene. To illustrate what I’m talking about, SLC24A5 is disjoint in frequency across Africans and Europeans. All Europeans have one variant, and all Africans have another. So the offspring of a marriage between an African and a European will be heterozygous at that locus. If they marry another person of similar background, homozygote light and dark genotypes will resegregate out at fractions of 25% each, with half the outcomes being heterozygote as in the parental condition. In other words, there is a 25% probability that an F2 child of F1 hybrids will be “white” at this locus. There are 6 loci. Assuming independent probabilities, you multiply out 0.25^6, and get odds of about 1 in ~4,000 (1 in 4,096, exactly) that the child will be white like their white grandparents.
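Under the stated assumptions (six unlinked loci, both F1 parents heterozygous at every one), the arithmetic is a one-liner:

```python
# At a locus where both parents are heterozygous, the chance a child is
# homozygous "light" is 1/2 * 1/2 = 1/4. Across six independent loci,
# the chance of being homozygous light at ALL of them:
p_per_locus = 0.25
p_all_light = p_per_locus ** 6  # 1/4096, i.e. roughly 1 in 4,000
```

By symmetry the same 1-in-4,096 figure applies to an all-dark child, which is why both extremes are so rare in the distribution described next.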

I ran this as a binomial 10,000 times, and here’s the distribution I came up with:
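A minimal version of that binomial simulation, under the six-equal-effect-locus model, might look like this (my toy reimplementation, not the author’s actual code):

```python
import random
from collections import Counter

rng = random.Random(1)

def f2_score():
    """Phenotype score of one F2 child: at each of 6 loci, each
    heterozygous F1 parent transmits a dark (1) or light (0) allele with
    probability 1/2. Totals run from 0 (all light, 'white') to 12
    (all dark, 'black')."""
    return sum(rng.randint(0, 1) + rng.randint(0, 1) for _ in range(6))

# 10,000 simulated offspring; the total is Binomial(12, 0.5), so the
# typical score is 6 (brown, like the parents), and the extremes 0 and
# 12 each occur with probability 1/4096.
counts = Counter(f2_score() for _ in range(10_000))
```

Tabulating `counts` reproduces the shape of the chart: a hump centered on the parental shade with thin but real tails toward very light and very dark.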

The white and black offspring don’t show up because the number of outcomes is so rare in this model, but as you can see the median outcome is brown, like the parents. But the tails are significant. In other words, don’t be surprised if there’s a lot of variation among the siblings. But why should you be? If you know of people from populations where pigmentation alleles are segregating in polymorphic frequencies, such as Latin Americans and South Asians, you are aware that different siblings can look strikingly different when it comes to complexion. Though I guess that’s a new insight for the British….

Can Brain Scans Detect Pedophiles? | 80beats


What’s the News: A new study suggests that watching brain activity when subjects are shown images of naked children can identify which are pedophiles. But what does this really mean in practical terms?

How the Heck:

  • 24 self-identified pedophiles, from a clinic that offers anonymous treatment, and 32 male controls were shown pictures of naked men, women, and children. Blogger Neuroskeptic, who brought this study to the web’s attention, notes in an aside that getting that past a university ethics board is quite a coup.
  • Using fMRI, the researchers recorded their brains’ responses and found that by comparing an individual’s brain to the average of the pedophiles and the average of the controls, they could assign them to the correct group more than 90% of the time. Their handling of the statistics avoids the most obvious pitfalls: they used an analysis technique called leave-one-out cross-validation to avoid comparing a given scan to an average that includes it, a common error in neuro studies.
  • When they plotted how the neural scans lined up along the age and sex axes (see image above), the pedophile and control scans formed two clear, separate clusters.
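Leave-one-out cross-validation, mentioned in the bullets above, works roughly like this. This is a generic nearest-group-mean sketch of the idea, not the study’s actual analysis pipeline:

```python
import numpy as np

def loo_nearest_mean_accuracy(X, y):
    """For each subject, recompute the group means WITHOUT that subject's
    scan, then assign the scan to the nearer mean. Holding the test scan
    out avoids the circularity of comparing a scan to an average that
    already contains it."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    correct = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i  # leave subject i out
        means = {label: X[keep & (y == label)].mean(axis=0)
                 for label in np.unique(y[keep])}
        pred = min(means, key=lambda label: np.linalg.norm(X[i] - means[label]))
        correct += (pred == y[i])
    return correct / len(y)

# Toy, well-separated "activity patterns": two groups of three subjects.
acc = loo_nearest_mean_accuracy(
    [[0.0, 0.1], [0.1, 0.0], [-0.1, 0.1],
     [5.0, 5.1], [5.1, 4.9], [4.9, 5.0]],
    [0, 0, 0, 1, 1, 1])
```

The key line is the `keep` mask: drop it and every scan is compared to an average it helped build, which inflates accuracy exactly the way the post warns about.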

What’s the Context:

  • First, a caveat: the team wasn’t looking for any specific brain activity, and who knows what exactly the pedophiles were thinking when they saw the images of children. It could be sexual attraction, or it could be any number of other things, like shame, which seems like a strong candidate, given that these people were in treatment for their pedophilia. Pedophiles who aren’t in treatment, or pedophiles who’ve never acted on their feelings, might not be so easily clustered with these subjects. The point is that the only thing shown here is that this group of pedophiles’ brains behaved differently than those of controls.
  • But moving along to the philosophy, ever since science brought the revelation that our brains are what make us who we are—rather than something like a soul, for example—there’s been the question of to what extent we can be judged on the basis of our biology. This study, raising as it does the idea that your brain betrays you, and that certain brain profiles could be linked with crimes, recalls a piece neuroscientist David Eagleman wrote for The Atlantic recently:

If you are a carrier of a particular set of genes, the probability that you will commit a violent crime is four times as high as it would be if you lacked those genes. You’re three times as likely to commit robbery, five times as likely to commit aggravated assault, eight times as likely to be arrested for murder, and 13 times as likely to be arrested for a sexual offense. The overwhelming majority of prisoners carry these genes; 98.1 percent of death-row inmates do…As regards that dangerous set of genes, you’ve probably heard of them. They are summarized as the Y chromosome. If you’re a carrier, we call you a male.

The Future Holds: Realistically, we can’t judge people on their biological red flags—whether it’s brain scans or genetics—without entering a Minority Report-style future. That’s because being attracted to children isn’t a crime. Acting on it is. Generally, a brain scan is not required to find out whether the crime has happened—there are other, more reliable forms of evidence, like porn downloads. Given that, it’s unlikely that you’ll be seeing regular scanning for pedophilic tendencies anytime soon. And for that, we should be grateful.

Reference: Ponseti, J., Granert, O., Jansen, O., Wolff, S., Beier, K., Neutze, J., Deuschl, G., Mehdorn, H., Siebner, H., & Bosinski, H. (2011). Assessment of Pedophilia Using Hemodynamic Brain Response to Sexual Stimuli. Archives of General Psychiatry DOI: 10.1001/archgenpsychiatry.2011.130

[via Neuroskeptic]

Image courtesy of Archives of General Psychiatry, Ponseti, et al., via Neuroskeptic