Empires of the Word & anti-Babel | Gene Expression

To the left you see a map of the distribution of languages and language families in Europe. Language is arguably the most salient cultural feature of our species, as well as one of the most obviously biologically embedded. The trait of language is a human universal, to the point where even those without hearing can create their own gestural languages de novo. But the specific nature of language as it is instantiated from region to region varies greatly. Language in the generality is a straightforward utility with which you communicate with your fellow man. But language also separates you from your fellow man.

European nationalism in the 19th and 20th centuries was in large part rooted in the idea that language defined the boundaries of a nation. During the Reformation era some German-speaking Roman Catholic priests dismissed the bond of language in favor of that of religion, praising those non-Germans who adhered to the Catholic cause against German-speaking heretics (in the specific case the priest was defending the Spanish tercios brought in by the Holy Roman Emperor to put down the rebellion of Protestant German princes). In the long centuries between the Reformation and the Enlightenment the idea of a Western Christian Commonwealth slowly melted in the face of the rise of the vernaculars, but even after the shattering of Western Christianity in the explosion of Reformations the accumulated capital of a unified Christian European elite persisted. Hungarian Protestant students at Oxford could make do with Latin even if they were totally innocent of English (see The Reformation). Newer lingua francas, French and later English, lack the deep unifying power of Latin in part because they are also living vernaculars. They may resemble Latin in some particulars of function, but eliding the differences removes far too much from the equation to be of any use. Linguistic diversity is a fact of our universe, but how it plays out matters a great deal, and has mattered a great deal, over the arc of history.

This is the subject of Empires of the Word: A Language History of the World. Nicholas Ostler, the author, tackles an enormous subject here. He acknowledges the Herculean nature of his task in the introduction. And yet he does avoid some of the more intractable controversies within historical linguistics by constraining his subject matter to the historical period, that is, to eras where we have some written records. This means that Ostler does not address the origins of the Indo-European language family, or the more recent expansion of the Bantus. Despite being separated by thousands of years these are both in the domain of pre-history, because we have no written records of proto-Bantu or proto-Indo-European. This does not mean that the book is not ambitious all the same. On the contrary, Empires of the Word takes on the “thicker” and messier tangle which is the association between language and fine-grained historical processes, social, cultural, economic and political. How history has shaped the nature and distribution of the languages which we see extant in our world today is a labyrinth with many doors. Ostler doesn’t come close to opening the majority of those doors, but those he selects in Empires of the Word yield a rich number of surprises and insights, though he does not in the end seem able to generate a Grand Unified Theory of linguistic diversity and change from the welter of details.

There are two parallel threads throughout Ostler’s narrative: description and prediction. The latter is not prediction as a physicist would predict; rather, it is prediction as a historical scientist might practice it, taking the data and producing models which can plausibly explain the phenomena we describe. Let’s take a look at the top 20 languages in the world. It seems that there are two primary ways that the speakers of a language can become numerous: rice & empire. Such a generalization is a bit glib, as many Mandarin speakers do not live by the “rice bowl,” but the big picture is that some languages gained adherents through “brute force,” pushing inexorably against the Malthusian possibilities of primary production and reproduction and assimilating smaller groups on the wave of advance of the speakers. The Asian languages on this list fall into that category. In contrast, you have the languages which spread with empire, exploration, and colonialism. English and Spanish are the exemplars of this class. Of the hundreds of millions of English and Spanish speakers a majority cannot be accounted for simply by demographic expansion of the home countries. Rather, these languages colonized new lands, and acquired new speakers, rather rapidly over the past 500 years. Turkish is almost certainly in this category, though the transition from Greek, Armenian and Kurdish speech in Anatolia is less clearly understood because of thinner textual records of the process.

Of course the distinction between the two is somewhat artificial. The expansion of Mandarin, let alone the Chinese dialects, was almost certainly a synthesis of demographic expansion & migration, and linguistic assimilation of “barbarians.” The Han Chinese are genetically far less homogeneous than the Koreans or Japanese, in large part because the expansion of Han identity occurred over a diverse group of populations which were resident within China proper 2,000 years ago. Similarly, it seems implausible that the Vietnamese ethnically cleansed all the Malay and Khmer speaking populations along the Annamese coast as they pushed toward the Mekong delta. The genetic data in fact hint at a large scale assimilation of Malay Chams by the Vietnamese. Conversely, the rise of English was partially accompanied by the demographic explosion of British peoples, while Spaniards contributed a great deal genetically to the mestizo populations of the New World. So it is not rice or empire, but rice and empire. Albeit with different weights on a case-by-case basis.

“Rice” really refers to social, cultural and economic forces which bubble up from below and swallow up the numerous islands of linguistic diversity. “Empire” connotes the political and military structure which allows for the trickle down from above of imperial values and mores. But the two are also intimately connected. The Chinese state under the Qing Dynasty saw a rapid rise in population, and that rise was enabled in large part by political stability. That stability fostered long term projects which increased the land under cultivation as well as public works infrastructure which could distribute grain so as to dampen the effect of local shocks. The Greek historian Polybius attributed the resiliency and strength of the Roman state to its assimilative capacity, turning barbarians into citizens. The military and political resiliency of the Roman Empire through the Crisis of the Third Century was probably conditioned on the expansion of Romanitas from the Atlantic to the Black Sea (the military core of the revival drew from the Latin speaking regions south of the Danube in the Balkans).

Just as the Roman Catholic Church is sometimes referred to as “the ghost of the deceased Roman Empire,” so the distributions of modern languages are tells of the political, social and economic events of the past. Social and economic forces almost certainly loom large in the language family explosions which Ostler did not cover, those of the Bantus, the Polynesians and the Indo-Europeans. In the first case it seems that the Bantu peoples brought with them a new mode of production to east and south Africa. This was then a rice expansion, along with some genetic assimilation. The case of the Polynesians is more difficult, but the existence of a similar group in Madagascar attests to the power of long distance seafaring techniques in scattering obscure peoples. Without the existence of the Malagasy, both their genetic and linguistic uniqueness, the written record would not clue us in to the existence of an organized community of long distance seafaring Southeast Asians across the Indian Ocean basin. Finally, the Indo-European expansion is more mysterious because it is so much further back in time, but it is also the most significant, as nearly half the world’s population speaks an Indo-European language. David Anthony in The Horse, the Wheel, and Language: How Bronze-Age Riders from the Eurasian Steppes Shaped the Modern World makes the case that a shift toward nomadic pastoralism enabled by the horse was the critical catalyst for the sweep of this language group from the Atlantic to the Bay of Bengal.

Though the Indo-European case is likely an ancient one, Empires of the Word actually begins its story earlier. Ostler’s in depth knowledge of ancient Near Eastern linguistic history is frankly mind-blowing, and is arguably the most insightful and novel spin on the topic I’ve ever encountered. The extent of his detailed and subtle grasp of the facts is awe inspiring. I did not know, for example, that the Elamites of southwest Iran once had their own writing system, which they eventually abandoned for Akkadian cuneiform. Ostler recounts the life-after-death which Sumerian experienced for over 1,000 years because of the nature of cuneiform itself, which was fitted to the Sumerian language, a linguistic isolate with no known relatives. For the last thousand years of cuneiform it was written in Akkadian, the first great Semitic language in the world, later to be succeeded by Aramaic, Punic, Hebrew, and Arabic. Parallel to the waxing and waning of these antique Semitic languages was the ebb and flow of ancient Egyptian, with its own peculiar form of writing.

One aspect of these ancient societies and their languages is the almost cold-blooded torpidity with which change occurred. Sumerian persisted as a liturgical language in what became Babylonia down to the Roman and Parthian period, 3,000 years of written history. The socio-political entity which we term ancient Egypt arguably spanned 2,500 years, up until the final Persian conquest. Egyptian culture in a sense that the Pharaohs would recognize persisted for another 1,000 years, until the closure of the Temple of Philae under the orders of the Christian Emperor Justinian in the 6th century. This cut the last link with the literature and religion of ancient Egypt. Consider that the time between our own era and that of Jesus Christ is equivalent to that between the rise of the Egyptian polity and its decline in the late Bronze Age. Though there are certainly similarities between Paul of Tarsus and a modern Western man, a great many disruptions have broken the chains of cultural continuity.

There may be one exception to this: another language which arose just as Egypt went into decline, Chinese. Classical Chinese in its written form remained relatively static between the ancient period of the first dynasties and the early 20th century. This continuity is telling insofar as Western scholars never had to “discover” the history of the Chinese; they had always remembered it. The continuity of language, cultural values, and political and ethnic identity dovetailed together so that despite the reality that the architecture of China is ephemeral, its stories are not. In contrast, much of the literary corpus of the ancient Western world comes down to us only because of three intense periods of copying: the Carolingian Renaissance, 10th century translations in the Byzantine Empire, and the Abbasid translation project in the 9th century. The history of the societies before Greece was perceived only obliquely through the Bible and the classical authors. Modern archaeology and linguistics eventually unlocked the secrets of both hieroglyphics and cuneiform, but the reality that we did not know of the significance of the Hittites in the ancient world attests to the poverty of knowledge which lack of cultural continuity imposes (the great disruption between the Indus civilization and pre-Maurya India means that the script of the former remains lost to us).

The distribution and continuity of dead languages is also a signpost for that other aspect of human culture which is very powerful and ubiquitous: religion. Today most of the Latin spoken is “Church Latin,” and that is because of the language’s sacred role within the Roman Catholic Church. Though Hebrew is the spoken language of the secular state of Israel thanks to a modern revival, for nearly 2,500 years it was a language of religion only, as the Jews adopted the languages of the people amongst whom they lived: Aramaic, Greek, Persian, Arabic, Latin, German, etc. The ancient languages of the Near East, Coptic from ancient Egyptian, and Syriac from Aramaic, persist as liturgical languages. It seems that so long as the gods do not die in the minds of believers the tongues of the ancients persist down the ages. So next to the languages of rice and empire, you have the languages of the gods.

As I indicated above, Empires of the Word is rather thin on robust generalizations. But one point which the author mentions repeatedly is that the rise and fall of languages of great expanse and utility is the norm, not the exception. In particular, Nicholas Ostler takes time out to emphasize that languages which spread via trade often do not have long term staying power. Portuguese, Aramaic, Punic and Sogdian would fall into this category (the later success of Portuguese was a matter of rice and empire in Brazil). It seems that mercantile communities are too ephemeral, and that successive historical shocks inevitably result in their decline when there isn’t a peasant demographic reservoir or an imperial power which imposes the language by fiat. Even those languages which eventually spread beyond traders and gain cultural and political cachet may fall from grace. Greek is the best case of this. It was the dominant language of the Roman East, was spoken as far away as modern Pakistan, and was studied in Dark Age Ireland. By the early modern period it was a strange and foreign language in the West, and with the rise of Islam in the east it lost its cultural glamor, and even those Christians in Arab lands who were Melkite, Greek Orthodox who adhered to the theological position of Constantinople, became Arab in speech and identity (in greater Syria the Greek Orthodox have been instrumental in the formulation of Arab nationalism).

And yet to some extent one must be cautious about over-reading the recession of Greek in the face of Arabic after the rise of Islam. Ostler repeats the conventional wisdom that the predominant vernacular in the Roman East was never Greek, but rather Semitic dialects descended from Aramaic. This is manifest in the fact that the Oriental Orthodox churches do not use Greek in their liturgy, but forms of Syriac; their roots are in an intellectual tradition distinct from that of the Greek Church. The transition to Arabic was then predominantly from a closely related Semitic language, not from Greek. One of the theses offered to explain the spread of Arabic across North Africa, but not into Persia, is that Arabic found it easier to replace other members of the Afro-Asiatic language family. I can accept that people can intuitively perceive differences of language family without a deep knowledge of said languages. In Sons of the Conquerors: The Rise of the Turkic World it is recounted that an Ottoman ambassador to the court of the Habsburg Emperor in Vienna communicated to the Sultan that apparently the locals spoke a dialect of Persian! Persian and German are of course both Indo-European languages, and set next to Turkish they may sound vaguely similar.

This thesis is plausible to me, and I have long held to it in regard to Arabic’s replacement of Aramaic. I have been told by a friend who is familiar with both languages (in addition to Hebrew) that they are rather close, and if not mutually intelligible, close enough to make language acquisition much easier. But Ostler extends the argument much further, suggesting that genetic affinity also explains the replacement of Egyptian and Berber dialects in North Africa. These are Afro-Asiatic languages, but they are not Semitic. I assume linguists do perceive similarities of character which can connect these languages, but what features span the Afro-Asiatic languages which would make language acquisition easier even at this remove of relationship? The Afro-Asiatic theory for the spread of Arabic is somewhat convenient in that it does explain the data well: Arabic has spread widely only in regions of other Afro-Asiatic languages, the exception being Spain. And in Spain Arabic had a stabilized coexistence with the Mozarab Romance dialects of the rural areas, and Romance eventually came back in the form of Castilian, Portuguese, etc. What Nicholas Ostler seems to be proposing is that the world of language acquisition is not flat. This is clearly true for closely related languages, but I think the thesis needs to be explored for distantly related languages from the same family. Does a native speaker of Marathi have a leg up on a Hungarian when it comes to learning Gaelic? I remain skeptical of the affirmative in that case.

So Empires of the Word outlines some broad generalizations about how languages grow, which seem borne out by the record of history, and offers some more speculative theories about the importance of the cultural terrain upon which languages flow and spread. But the narrative also lingers long on the future of the current lingua franca of our age, English. Nicholas Ostler does nothing to dismiss the omnipresence of English at the commanding heights of international culture. He reports for example that in 1994 50% of international telephone calls were between English speakers, and another 45% were between English speakers and those who were not English speakers! That means only 5% of international calls in 1994 were between parties neither of whom spoke English. I suspect that the numbers have changed a bit since then, but if that study is correct then it points to the awesome international spread of the English language. But Nicholas Ostler does not think that it will last, and his rationale seems to be the record of history, where such universal languages always fall. His next book, The Last Lingua Franca: English Until the Return of Babel, outlines his thesis in detail.

And yet contra Ostler I have to suggest that perhaps this time it’s different. I do not believe that English in a unified form will dominate all. Already there has been considerable dialect drift. But the past 200 years are qualitatively different from what has come before, and there is already a revolution in communication technology. It may be that in the future languages do not crystallize as a function of geography, but more as a function of class and occupation. It does seem historically that trade lingua francas have been ephemeral in impact, and English, the language of McWorld, is the language of capital. But the modern world is much more dependent on flows of capital and commerce than the pre-modern world; the Sogdians and Portuguese were primarily vectors for high-value luxury goods. Pre-modern capitalism had the air of a parlor game between the high and mighty, and was quite often in bad odor among rentier elites themselves. It is with reason that I observed above that the pace of cultural change in the past was less than what it is today. Positive feedback loops may be much more powerful than they once were, so that a “Globish” derived from English may quickly sweep away all comers, before it diversifies again.

But really I should wait for Ostler’s new book. The arguments I make here may be addressed, or I may have misunderstood what I gleaned from Empires of the Word. It is, as I said, a story with rich and vibrant detail, much of which I glossed over or did not address. For that Ostler’s tale is worth the time it takes to complete it. But there is, I must say, a lack of theoretical punch and heft. Perhaps this is just a function of the subject domain, which has too much complexity to distill down to any model of elegance or tractability. But I suspect a more rigorous analytical framework could squeeze some juice out of the enormous pile of detail which Nicholas Ostler has at his disposal. Perhaps he should read Replicated Typo.

Image Credit: Wikimedia, Ethnologue

China is #2 | Gene Expression

China Passes Japan as Second-Largest Economy:

After three decades of spectacular growth, China passed Japan in the second quarter to become the world’s second-largest economy behind the United States, according to government figures released early Monday.

The milestone, though anticipated for some time, is the most striking evidence yet that China’s ascendance is for real and that the rest of the world will have to reckon with a new economic superpower.

The recognition came early Monday, when Tokyo said that Japan’s economy was valued at about $1.28 trillion in the second quarter, slightly below China’s $1.33 trillion. Japan’s economy grew 0.4 percent in the quarter, Tokyo said, substantially less than forecast. That weakness suggests that China’s economy will race past Japan’s for the full year.

Lots of prose. Here’s another way to explore relationships, via Google Data Explorer.

Amplifying Our Brain Power Through Better Interactive Holographics | Science Not Fiction

Think of the most complicated thing you’ve written. Maybe it was a report for your employer, or an essay while in college. It could even be a computer program. Whatever it was, think of all the stuff you packed into it. Now, pause for a moment to imagine creating all that without using a word processor or paper and pen, or really anything at all to externalize thought to something outside of your head. It seems impossible. What we get with this technology–ancient as it is–is an amplification of our brain power. Besides their gorgeous techy looks, do interactive holographics like those shown in Iron Man 2, reminiscent of the interfaces shown in Minority Report, offer up some of the same brain amping?

While I was still a doctoral student, I had the opportunity to work with a relative of interactive holographics, 3D virtual reality data CAVEs. This particular one, at the National Center for Supercomputing Applications (NCSA) in Urbana, Illinois (the birthplace of HAL) circa 1999, was a cube with back projection on five of the six walls. You wore a headset that tracked your head position and orientation, and goggles with LCD shutters that blocked your right eye’s view while the projectors rendered the image for your left eye, and vice versa. As you walk through space or move your head, what you see in the virtual space changes as you would expect it to.
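The geometry behind head-tracked stereo is simple enough to sketch in a few lines. This is a minimal toy illustration of my own, not NCSA’s actual code, and the eye-separation value and pose conventions are assumptions: the tracker reports one head pose, and the system derives two virtual camera positions from it, one per eye, rendering alternate frames for each while the shutter goggles gate which eye sees which frame.

```python
import numpy as np

INTEROCULAR = 0.065  # meters; a typical adult eye separation (assumed)

def eye_positions(head_pos, head_rot):
    """Derive left/right virtual camera positions from one tracked pose.
    head_pos: (3,) position; head_rot: 3x3 matrix whose first column
    is the head's local "right" axis (a convention assumed here)."""
    offset = 0.5 * INTEROCULAR * head_rot[:, 0]
    return head_pos - offset, head_pos + offset

head_pos = np.array([0.0, 1.7, 0.0])  # standing near the CAVE's center
head_rot = np.eye(3)                  # facing straight ahead
left_eye, right_eye = eye_positions(head_pos, head_rot)
# Render the scene once from each position on alternating frames; the
# LCD shutter goggles ensure each eye sees only its own frames.
print(left_eye, right_eye)
```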

The problem that had pushed me to use this system was trying to analyze 3D motion data of a fish that I was conducting research on. I’d developed a motion capture system for the fish, which gave fantastic 3D data of the fish moving while it was attacking its prey, but looking at this 3D data on 2D computer monitors turned out to be quite difficult. Even replaying the motion from several different views didn’t quite do the trick. So Stuart Levy at NCSA put my data set into a system called “Virtual Director” and I was able to play back the data in the CAVE. It was something of an unbelievable experience the first time I tried it – suddenly I could walk around the animal as it engaged in its behavior, manipulate it to get any view, rotate the wand I held to wind the behavior forward or back at different speeds. Visitors particularly enjoyed my “Book of Jonah” demo, where I positioned them so that they ended up going into the mouth of the fish during a capture sequence.

For my technical problem, the VR CAVE was appropriate technology: 3D display and interaction for an inherently 3D data set. It helped me see patterns in the data that I had not clearly seen before, which were incorporated into some of my subsequent publications that analyzed the movement data. It was worth the effort, and the physicality of it was fine since I didn’t need to spend multiple days working through the data.

Other uses of these kinds of “direct manipulation” interfaces that mix 3D data and real world interaction have not found such a receptive audience, as people complain that it seems tiring to make sweeping (if dramatic) gestures to go through photos that could just as well be navigated with an arrow key. As someone who still uses “vi” to edit text, I can relate to criticisms of interfaces that offer more than is needed.

The important question, for any given interface, is whether it simplifies difficult problems of control or analysis, or gets in the way. My former colleague Don Norman at Northwestern University has contributed a great deal to our understanding of this question, in books like The Design of Everyday Things. One of my favorite examples from that book considers two different interfaces for manipulating the position of a car seat. In one interface, on a luxury American car, there is a panel of knobs and buttons almost hidden below the left side of the dashboard. To go from a state of discomfort to a new chair position requires translating your discomfort into a series of knob pulls and twists on a console of many controls with tiny labels below each. In contrast, a German luxury car had a small version of the driver’s chair in the dashboard. To move the back of your chair down, you manipulated the chair in the dashboard accordingly; to move the chair forward, you would move the model in the direction the chair was facing, and so on. One interface placed a large cognitive load on the user to solve the discomfort problem, while the other made minimal demands.

Another favorite example is the “speed bug” – a tab that a plane’s pilot puts on the edge of an airspeed indicator to mark the velocities for critical changes to the shape of the wing. Were it not for those bugs, the pilot would have to remember the velocities at which to make the wing adjustments – and that’s not easy, because they change with things like the weight of the plane.
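The reason those critical velocities move around is basic aerodynamics: lift must balance weight, and lift grows with the square of airspeed, so the relevant speeds scale with the square root of aircraft weight. A quick sketch with invented reference numbers (illustrative only, not from any real flight manual):

```python
from math import sqrt

# Lift = 0.5 * rho * V^2 * S * CL. Holding the lift coefficient fixed,
# the airspeed that produces lift equal to weight scales as sqrt(weight),
# which is why a speed bug set at one weight is wrong at another.

def adjusted_speed(v_ref_knots, weight_ref_lbs, weight_now_lbs):
    """Scale a reference speed to the current weight (illustrative)."""
    return v_ref_knots * sqrt(weight_now_lbs / weight_ref_lbs)

# Hypothetical numbers: a 140-knot reference speed set for 500,000 lbs.
print(adjusted_speed(140.0, 500_000, 400_000))  # ~125 knots when lighter
```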

The virtual fish, miniature car seat adjuster, and speed bug are all examples of interfaces that make problems easier, and in this sense, amplify our brain power. Interactive holographic interfaces can do the same for problems where space is a convenient or needed basis for navigating the information. This isn’t always apparent in sci-fi depictions of these interfaces, but their use speaks to our hope that such 3D holographic wizardry will help us cope with the flood of data we contend with on a daily basis.


Pocket science – swordfish and flatfish are close kin, and ancient death-grip scars | Not Exactly Rocket Science


Flatfish are the closest living relatives to swordfish and marlins

At first glance, a swordfish and a flounder couldn’t seem more different. One is a fast, streamlined hunter with a pointy nose, and the other is an oddly shaped bottom-dweller with one distorted eye on the opposite side of its face. Their bodies are worlds apart, but their genes tell a different story.

Alex Little from Queen’s University, Canada, has found that billfishes, like swordfish and marlin, are some of the closest living relatives to the flatfishes, like plaice, sole, flounder and halibut. This result was completely unexpected; Little was originally trying to clarify the relationship between billfishes and their supposed closest relatives – the tunas. That connection seems to make more sense. Both tunas and billfishes are among a handful of fish that are partially warm-blooded. They can heat specific body parts, such as eyes and swimming muscles, to continuously swim after their prey at extremely fast speeds with keen eyesight.

But it turns out that these similarities are superficial. Little sequenced DNA from three species of billfishes and three tunas, focusing on three parts of their main genome and nine parts of their mitochondrial one (a small accessory genome that all animal cells have). By comparing these sequences to those of other fish, Little found that the billfishes’ closest kin are the flatfishes and jacks. Indeed, if you look past the most distinctive features like the long bills and bizarre eyes, the skeletons of these groups share features that tunas lack. Billfish and tuna, meanwhile, proved to be only distant relatives. Their ability to heat themselves must have evolved independently, and indeed their bodies produce and retain heat in quite different ways.
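The core of such a comparison is simple to sketch: align the sequences, count the fraction of sites at which each pair differs, and group the taxa with the smallest distances. Real analyses use far more sophisticated evolutionary models, and the toy sequences below are invented purely to show the computation, not taken from the study:

```python
from itertools import combinations

# Toy aligned sequences, invented for illustration; not the study's data.
seqs = {
    "billfish": "ACGTACGTACGTACGT",
    "flatfish": "ACGTACGAACGTACGT",
    "tuna":     "ACCTATGAACTTACCT",
}

def p_distance(a, b):
    """Proportion of aligned sites at which two sequences differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

for (name1, s1), (name2, s2) in combinations(seqs.items(), 2):
    print(f"{name1} vs {name2}: {p_distance(s1, s2):.3f}")
# In this toy data billfish and flatfish differ at far fewer sites than
# either does from tuna, so a distance-based tree groups them together.
```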

Little’s work is testament to the power of natural selection. Even closely related species, like marlins and flounders, can end up looking vastly different if they adapt to diverse lifestyles. And distantly related species like tuna and swordfish can end up looking incredibly similar because they’ve adapted to similar challenges – pursuing fast-swimming prey. This shouldn’t come as a surprise – a few months ago, a French team found that prehistoric predatory sea reptiles were probably also warm-blooded.

Reference: Molecular Phylogenetics and Evolution: http://dx.doi.org/10.1016/j.ympev.2010.04.022; images by Luc Viatour and NOAA


Ancient death-grip scars caused by fungus-controlled ants

Forty-eight million years ago, some ants marched up to a leaf and gripped it tight in their jaws. It would be the last thing they would ever do. Their bodies had already been corrupted by a fungus that, over the next few days, fatally erupted from their heads. The fungus produced a long stalk tipped with spores, which eventually blew away, presumably to infect more ants. In time, all that was left of this grisly scene were the scars left by the ants’ death-grip. Today, David Hughes from Harvard University has found such scars in a fossilised leaf from Germany.

Today, hundreds of species of Cordyceps fungi infect a wide variety of insects, including ants. Like many parasites, they can manipulate the way their hosts behave. One species, Cordyceps unilateralis, changes the brains of its ant hosts so that they find and bite into leaves, some 25cm above the forest floor. The temperature and humidity in this zone are just right for the fungus to develop its spore capsules. In its dying act, the ant leaves a distinctive bite mark that’s always on one of the leaf’s veins on its underside. And that’s exactly what Hughes saw in his fossil leaf.

Hughes originally thought that the marks were made by an insect cutting the veins of the leaf to drain away any potential poisons, something that modern insects also do. But these marks look very different – those on the fossil leaf bear a much closer resemblance to those of Cordyceps-infected ants. This is the first fossil trace of a parasite manipulating its host, but it’s not the oldest evidence for such a relationship. In 2008, another American group found a 105-million-year-old piece of amber containing a scale insect, with two Cordyceps stalks sticking out of its head. The war between insects and their Cordyceps nemeses is an ancient one indeed.

Reference: Biology Letters http://dx.doi.org/10.1098/rsbl.2010.0521



How to White Balance a Satellite: Aim It at Lake Tuz | Discoblog


How do you white balance your camera? Aim it at a piece of paper. How do you white balance an Earth-monitoring satellite? Aim it at a Turkish salt lake.

At least that’s the hope of scientists headed to southern Turkey to study a salt lake named Tuz Gölü (Turkish for “salt lake,” natch) later this month. During July and August, most of Lake Tuz evaporates into reflective white salt, making it perfect for satellite calibration; the Committee on Earth Observation Satellites recently endorsed the spot as one of eight calibration sites.

Just as white balancing your camera is important to keep your friends from looking jaundiced, calibrating satellites makes sure that they can take accurate climate and coastal degradation measurements.

As Popular Science reports, the team led by the UK National Physical Laboratory will spend nine days at Lake Tuz measuring the reflectance of test sites from a variety of angles. From above, several satellites will simultaneously take recordings of the white lake for comparison. The NPL hopes this will be the first step toward an automated system, “LandNET,” using all eight sites.
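The underlying arithmetic is the same as white balancing a camera: record what a known-white reference looks like to the sensor, then rescale each channel so the reference reads as neutral. A minimal sketch of the general principle (the numbers are made up; this is not LandNET’s actual processing chain):

```python
import numpy as np

# Hypothetical raw RGB reading of the salt flat, whose true reflectance
# ground teams have measured to be (near-)uniform white.
observed_white = np.array([0.92, 0.88, 0.80])  # sensor sees a warm cast

# Per-channel gains that map the white reference back to neutral.
gains = observed_white.max() / observed_white

def calibrate(pixel_rgb):
    """Apply the reference-derived gains to any other measurement."""
    return np.clip(pixel_rgb * gains, 0.0, 1.0)

print(calibrate(np.array([0.46, 0.44, 0.40])))  # ~[0.46 0.46 0.46], neutral
```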

Related content:
Discoblog: To Track Penguins, Scientists Use High-Tech Satellite Images of…Droppings
Discoblog: Extreme Intercontinental Ballistic Missile Makeover!
Discoblog: Dang, What Was That? Astronomers Wonder What Just Whizzed by Earth
Discoblog: Want to Monitor the Earth’s Magnetic Field? There’s an App for That.

Image: NASA


Ancient Rubbish Suggests Humans Hunted a Giant Turtle to Extinction | 80beats

During the Pleistocene epoch, animals thought big: It was the age of the megafauna, when creatures like the mammoth, an 8-foot-long beaver, and a hippopotamus-sized wombat walked the Earth. But these giants vanished one by one, and scientists have long wondered why.

Debate over what caused the megafauna to die out has raged for 150 years, since Darwin first spotted the remains of giant ground sloths in Chile. Possible causes have ranged from human influence to climate change in the past, even to a cataclysmic meteor strike. [BBC]

Now, a discovery on the South Pacific island nation of Vanuatu seems to have answered the question for at least one species. Researchers have turned up the bones of a giant land turtle in a dump used by the people who settled on the islands 3,000 years ago, and lead researcher Trevor Worthy says the evidence strongly suggests that the turtles were hunted into extinction.

Significantly, they have found mainly leg bones, but no head or tail remains, and only small fragments of shell. “This suggests very strongly that the animals were butchered somewhere other than in the village where we excavated them,” says Worthy. “They just cut them up and brought back the bits that had most meat on them.” [Australian Broadcasting Corporation]

All the species in this turtle family, the meiolaniids, were previously thought to have gone extinct 50,000 years ago, but the new find shows that at least one species (Meiolania damelipi) hung on in the isolated Pacific islands. The turtle’s death knell seems to have sounded when the Vanuatu islands were settled by the Lapita people around 3,000 years ago. The researchers carefully dissected the layers of rubbish in the Lapita dump, and say the last bones of M. damelipi were found in a sediment layer dating to 2,800 years ago. This suggests that the turtles were wiped out in the course of a few hundred years, according to the paper published in the Proceedings of the National Academy of Sciences.

The Lapita would have hunted the slow-moving turtles, burned forests to clear cropland, and brought pigs and rats that ate their eggs. Worthy estimates that Vanuatu could have supported tens of thousands of M. damelipi, but in just 200 years they were gone. And if giant land turtles were on Vanuatu, they were likely found on other Pacific islands, and hunted into oblivion. [Wired]

Related Content:
80beats: Photos From the Gulf’s Great Sea Turtle Relocation
80beats: Turtle Shell Develops Through Embryonic Origami
80beats: A Clue to the Evolutionary Riddle of How the Turtle Got Its Shell
80beats: Scientist Smackdown: Were Giant Kangaroos Hunted Into Extinction?
80beats: Cavemen Found Innocent: Cave Bears Died From Cold, Not Spears

Image: Australian Museum


Did Lou Gehrig Have Lou Gehrig’s Disease? | 80beats

That may seem a strange question, akin to asking who’s buried in Grant’s tomb. But a new study proposes that some athletes diagnosed with Lou Gehrig’s disease may in fact have a different fatal disease that is set off by concussions.

Researchers have previously investigated the link between athletes and this neurodegenerative disease, more technically known as amyotrophic lateral sclerosis (ALS). A recent study examined what seemed to be a higher than usual incidence of Lou Gehrig’s disease among soccer players, and, of course, the disease bears the name of a New York Yankee who was famously undaunted by the hard knocks of his sport. Though it’s impossible to determine now whether Lou Gehrig suffered from ALS or a different condition (Gehrig was cremated), the study’s lead author speculates that Lou Gehrig’s disease might be a misnomer:

“Here he is, the face of his disease, and he may have had a different disease as a result of his athletic experience,” said Dr. Ann McKee, the director of the neuropathology laboratory for the New England Veterans Administration Medical Centers, and the lead neuropathologist on the study. [The New York Times]

McKee’s team looked at the brains and spinal cords of deceased athletes, such as former Minnesota Vikings linebacker Wally Hilgenberg and former Southern California linebacker Eric Scoggins, who were thought to have died from ALS, and who had also been diagnosed with a dementia-causing disease linked to head injuries, chronic traumatic encephalopathy. The researchers found two proteins in the spinal cords that are known to harm motor neurons, and that would therefore cause ALS-like symptoms. A similar pattern of proteins was found in the spinal cord of a deceased unnamed boxer.

Dr. McKee said that because she has never seen that protein pattern in A.L.S. victims without significant histories of brain trauma, she and her team were confident the three athletes did not have A.L.S., but a disorder that erodes its victims’ nervous system in similar ways. [The New York Times]

The paper detailing this research will appear tomorrow in the Journal of Neuropathology and Experimental Neurology, and a report on the subject will air on the HBO show Real Sports tonight.

“Most A.L.S. patients don’t go to autopsy–there’s no need to look at your brain and spinal cord,” said Dr. Brian Crum, an assistant professor of neurology at the Mayo Clinic in Rochester, Minn. “But a disease can look like A.L.S., it can look like Alzheimer’s, and it’s not when you look at the actual tissue. This is something that needs to be paid attention to.” [The New York Times]

Such distinctions are not only important for medical research. If concussions are causing disease in military veterans and athletes, they might seek compensation for treatment expenses.

Related content:
80beats: Turning Skin Cells Into Nerve Cells to Study Lou Gehrig’s Disease
80beats: A Biotech Magic Trick: Skin Cells Transformed Directly Into Brain Cells
80beats: Fetal Stem Cell Therapy Causes Cancer in Teenage Boy
Science Not Fiction: Do You Speak Brain? Try Studying These Neurons-on-a-Chip

Image: University Archives—Columbiana Library, Columbia University.


Tiger Stripes

Tiger stripe on Enceladus. Linked image is explained in post. Credit: NASA/JPL/Space Science Institute

Just days ago (Friday the 13th), the Cassini spacecraft did a flyby of the Saturn moon Enceladus and paid particular attention to the “tiger stripes” that are apparent in the south polar regions.

We know the tiger stripes to be giant fissures spewing jets of water vapor and organic particles hundreds of kilometers (and miles) into space.

The southern hemisphere of Enceladus is going into winter darkness; fortunately, Cassini has a composite infrared instrument, so it can “see” heat. It will be a while before temperature maps of the fissures are generated from the data, but you can bet they are working on them.

The image above was taken from 10,391 km (6,457 miles), and if you click it you can see a fissure relatively close up, from only 2,673 km (1,661 miles) away. I just love that picture; it looks COLD! Both of the images are “raw,” meaning they’ve had no processing. Here is a larger image at the Cassini page.

Around the World in 80 Days: Electric Car Race Begins | 80beats

The goal: 80 days, 18,000 miles, no emissions.

Yesterday, the Zero Race electric car world tour began in front of the United Nations Palace in Geneva, Switzerland. Four teams–from Australia, Switzerland, Germany, and South Korea–won’t actually race one another to cross a finish line. Instead, spectators and experts will determine the winner based on reliability, energy efficiency, safety, design, and practicality, as the tour is meant to show the feasibility of electric vehicles.


The race organizer Louis Palmer won the European Solar Prize after driving a solar-powered vehicle around the world in 2008. He says in a press release that the “race” is against climate change and disappearing fuel.

“Petrol is running out, and the climate crisis is coming… and we are all running against time.” [Zero Race]

The race route cuts through 150 major cities, where spectators can vote for their favorite car design. Judges will decide each individual car’s reliability based on the number of repairs it requires during the race; a panel of race car drivers will evaluate the cars’ power and speed; and vehicle manufacturers will test the cars for energy efficiency.

The cars must be shipped over the Atlantic and Pacific Oceans, and the race organizers say that they will compensate for emissions from this transport, as well as other incidental race emissions, with investments in renewable energy projects. Each team cannot use any more power than it has generated or purchased from renewable energy sources. Australian team leader Jason Jones says it will cost only $360 US to power his sleek, light-weight electric car around the world.

“We’ve already bought the power and put it back in the grid,” 57-year-old Jones senior explained, standing next to their plastic-bodied two-seat three wheeler. “We thought it just a great way to show what this car is capable of. The future of automotive transport is not a one-and-a-half tonne gas guzzler.” [AFP]
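That $360 figure is at least plausible on the back of an envelope. With assumed, not reported, values of roughly 100 watt-hours per mile for an ultralight three-wheeler and 20 cents per kilowatt-hour for renewable power, the arithmetic lands right on the claim:

```python
# Sanity check of the $360 claim; efficiency and price are assumptions.
miles = 18_000          # the race distance
wh_per_mile = 100       # assumed efficiency for an ultralight EV
usd_per_kwh = 0.20      # assumed price of renewable electricity

total_kwh = miles * wh_per_mile / 1000  # 1,800 kWh for the whole route
print(total_kwh * usd_per_kwh)          # 360.0 dollars
```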

Battery problems delayed the South Korean car from starting the race with the other teams yesterday, but the other teams had a successful start.

Swiss competitor Toby Wuelser claims his futuristic design can do 350 kilometres [217 miles] on a single charge and reach speeds up to 250 kilometres an hour [155 miles per hour]. “It’s like flying half a metre above the ground,” he said before boarding the bullet-shaped vehicle and zipping silently up the hill to the starting line. [The Canadian Press]

Car enthusiasts in the United States can catch the vehicles in San Francisco, Los Angeles, or Austin in the fall. After that, the cars will head to Mexico City and then Cancun where the racers plan to participate in late November’s World Climate Change Conference.

Related content:
80beats: From GM: A 2-Wheeled, Electric, Networked Urban People Mover
80beats: In the Commute of the Future, Drivers Can Let a Pro Take the Wheel
80beats: An Electric-Car Highway in California, But Just for Tesla
80beats: How Would You Like Your Green Car: Hydrogen-Powered, or With a Unicycle on the Side?
80beats: What Does GM’s Bankruptcy Mean for Its Much-Hyped Electric Car?

Image: Zero Race


Parasitic wasps on Weeds: We have video! | The Loom

Thanks to Mandarb for posting the clip from Weeds that I was wondering about yesterday. I should point out that it’s a very abridged version of my original piece on the radio. For example, it sounds as if I’m giving God my own personal forgiveness for parasitic wasps. I was actually talking about a letter written by Darwin in which the wasps figured in his musings about God.

And I have to say that I’m not much closer to figuring out what parasitic wasps have to do with the show’s plot. I guess I’ll have to watch the whole episode. But–for the record–here it is:


Don’t Be a Dick, Part 1: the video | Bad Astronomy

[Note: As is obvious by the title, the article below contains mildly NSFW language.]

In July, I spoke at The Amaz!ng Meeting 8 in Las Vegas. Sponsored by the James Randi Educational Foundation, it’s the largest meeting of critical thinkers and skeptics in the world. Unlike my usual talks about the abuse of science that I had given at previous TAMs, this time I wanted to tackle a much thornier issue: how we skeptics argue with believers of various stripes.

My first point was that we must keep in mind our goal. If it’s to change the hearts and minds of people across the world, then at least as important as what we say is how we say it. And my second point was pretty simple… but you’ll get to it around 24 minutes in. It’s obvious enough.

Here’s the video. The whole thing is about a half hour long.

Phil Plait – Don’t Be A Dick from the James Randi Educational Foundation.

I’ll admit I was pretty nervous about this talk, as I was basically telling people to be nicer. It’s hard for some people to hear a message like that, and I knew there would be backlash. There was. I have heard from quite a few people about the talk, as you might expect. They fell into three basic categories: some agreeing with me, others saying being a dick has its place, and still others who misinterpreted what I was saying.

I’ll post links to copious blog articles on all sides of this issue a bit later, but I want to clear a few things up here first.

Some people are claiming I was saying we need to be milquetoasts. That’s ridiculous. I was very clear that anger has its place, that we need to be firm, and that we need to continue the fight.

Some were claiming they have a right to be dicks — I’m bemused by this, as of course you have that right. But that doesn’t mean it’s the most effective approach, or that you should be one.

Others took issue with my initial question, asking how many people were "converted" to skepticism by having a skeptic yell at them and insult them. In fact, at least one person said that method does work and worked on them. That’s good for them, but given what we know about the way people argue and change their views on issues, the vast majority of people will become further entrenched when confronted in that way.

In other words, being a dick not only usually doesn’t work, it almost always works against the bigger goal of swaying the most people we can.

Perhaps I should have been more clear on what I mean by being a dick. I thought I had been clear, but a lot of people seem to think that I meant anyone who gets upset, or angry, or argues with emotion. I wouldn’t include satire in that category, or comedic work, or even necessarily using insults; tone and attitude count here. Think of it this way: when someone argues that way, do you think to yourself, "What a dick"? I don’t; at least not necessarily. I think that way when the person belittles their opponent, uses obviously inflammatory language, or gets in their face overly aggressively.

Y’know. Being a dick.

Again, to be clear, I did not say we should back down when confronted. I did not say we should be weak against ignorance. I did not say we shouldn’t be angry. I did not say we should be passionless.

In fact, I argued the exact opposite. We need our anger, our strength, and our passion.

And one last point: a lot of folks were speculating that in my talk I was targeting specific people such as PZ Myers, Richard Dawkins, even Randi himself. I wasn’t. I was thinking fairly generically when I wrote the talk, and though I did have some specific examples of dickery in mind, the talk itself was not aimed at any individual person. In fact, though the impetus for the talk was the degradation in tone I’ve been seeing lately (and I’m not at all alone in seeing it), it was also something of a confessional. Like most skeptics, at some points — too many, I now feel — in the past I’ve been a dick. I regret those times, and will strive to make sure they stay in the past.

So no, the talk was not aimed at any specific individuals. It was aimed at everyone, everywhere, and also inward toward myself. I cannot accuse others of that which I have not at the very least searched for in myself. And I have indeed found it in myself, which was the final factor in my making the speech in the first place.

I can’t promise that I won’t be a dick. But I will strive mightily to try. That’s the most I can do, and the most I can ask of anyone.

[Note: There are two more parts of this saga coming up soon, including links to many and diverse opinions on what I said, and the talk's aftermath.]


For the Aging, Four-Eyed Astronaut: Fancy Space Bifocals | Discoblog

One of the requirements for flying in a spaceship used to be near-perfect vision. When NASA relaxed its vision standards (to 20/200 or better uncorrected, correctable to 20/20 in each eye for a mission specialist), it in turn created a new requirement–for near-perfect astronaut eyeglasses.

TruFocals (made by Zoom Focus Eyewear, LLC) might improve current astronaut spectacles by allowing space-travelers to focus mid-float on both near and far objects, whether they’re dealing with experiments or cooling loop warning indicators. As Scientific American reports, the glasses are currently undergoing NASA evaluation for space readiness–tests that include burning. The lenses will correct the condition known as presbyopia, in which aging people’s eyes lose focusing ability, making it difficult to see near objects. That’s the condition that causes people with good eyes to pick up reading glasses, and those with glasses to turn to bifocals.

These space glasses aren’t much like your grandma’s bifocals. TruFocals have two lenses for each eye: the outer lens uses the person’s usual prescription and the inner lens (closer to the eye) is flexible and controllable by a slider on the eyeglasses’ bridge. With a little slide the shape of the inner lens changes, allowing the wearer to adjust their focus. That could be handy in an environment like the International Space Station, where floating astronauts may be trying to focus on things from odd angles.
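The optics can be sketched with the thin-lens rule that powers of lenses in contact simply add, so dialing extra “add” power into the inner lens pulls the focal point closer. The numbers below are illustrative guesses, not Zoom Focus Eyewear’s specifications:

```python
# Thin lenses in contact: total power (in diopters) is the sum of the
# individual powers. All values here are hypothetical illustrations.

DISTANCE_RX = -2.0  # outer lens: the wearer's usual distance prescription

def lens_state(add_power):
    """Combined power, plus the rough distance the add power buys:
    focusing at d meters needs about 1/d diopters of extra power."""
    combined = DISTANCE_RX + add_power
    focus_m = float("inf") if add_power <= 0 else 1.0 / add_power
    return combined, focus_m

for add in (0.0, 1.0, 2.5):
    combined, focus_m = lens_state(add)
    print(f"add {add:+.1f} D -> combined {combined:+.1f} D, focus ~{focus_m} m")
# +2.5 D of add brings the focal point to about 0.4 m: reading distance.
```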

The glasses’ inventor, Stephen Kurtin, told Scientific American that the round shape is a necessity for the glasses to work best, not a fashion decision:

“Some people say they’re cool, and some say they’re butt ugly.”

NASA may approve the glasses in time for the next space mission, though, as shown in the target-practice video below, the lenses are already available for planet-dwelling four-eyes.

Related content:
Discoblog: E-focals: Electric Eyeglasses Are the New Bifocals
Discoblog: Cheap “Liquid Glasses” Bring Clear Vision to the Poor
Discoblog: Contacts Claim to Fix Your Vision While You Sleep
Discoblog: Will the Laptops of the Future Be a Pair of Eye Glasses?
Discoblog: Possible Cure For Blindness: Implanting a Telescope in Your Eye

Image: ZOOM FOCUS EYEWEAR LLC


Antarctic Sea Ice Grows Despite Global Warming—But It Won’t Last | 80beats

Scientists have suggested for years now that the effects of a warming planet won’t show up in a uniform fashion across the globe—different locations won’t see glaciers retreat or sea levels rise at the same rate. Some places are particularly confusing because they show signs that seem backward to one’s expectations for a hotter Earth. One of those confusing outliers for climatologists has been the sea ice off Antarctica.

While the amount of sea ice in the Arctic has been trending downward, Antarctic sea ice has actually expanded even as the area has warmed (and as ice shelves collapsed on the continent). This week, in the Proceedings of the National Academy of Sciences, Jiping Liu and Judith Curry put forth an explanation for this paradox. But, they say, the ice growth probably won’t continue.

Liu and Curry looked through 60 years of temperature and precipitation readings to find an explanation for the increase of sea ice in a warming world, and showed that precipitation increased over Antarctica from 1950 to the present.

The finding makes intuitive sense: Rising temperatures increase the amount of moisture in the air, which eventually becomes snow. And for the last few decades, that snow kept surface waters from warming even more, added bulk to sea ice, and reflected sunlight [Wired.com].

In addition, the extra fresh water that fell as precipitation would have lowered the salinity of the surface water, the scientists say, and that would have slowed the rate of ice melt:

More snow made the top layers of the ocean less salty and thus less dense. These layers became more stable, preventing warm, density-driven currents in the deep ocean from rising and melting sea ice [National Geographic].
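The stabilizing effect of freshening can be put in rough numbers with a linearized equation of state for seawater. The coefficients below are textbook-order approximations, not values from Liu and Curry’s analysis, but they show why salinity dominates density in cold polar water:

```python
# Linearized equation of state: rho ~ RHO0 * (1 - ALPHA*dT + BETA*dS).
# Coefficient magnitudes are rough approximations for near-freezing
# seawater (assumed), where thermal expansion is small.
RHO0 = 1027.0   # kg/m^3, reference surface density
ALPHA = 5e-5    # 1/degC, thermal expansion near 0 degC
BETA = 7.8e-4   # 1/psu, haline contraction

def density(d_temp_c, d_salinity_psu):
    """Surface density after small departures from the reference state."""
    return RHO0 * (1.0 - ALPHA * d_temp_c + BETA * d_salinity_psu)

base = density(0.0, 0.0)
print(density(0.5, 0.0) - base)   # +0.5 degC of warming: ~ -0.03 kg/m^3
print(density(0.0, -0.5) - base)  # -0.5 psu of freshening: ~ -0.40 kg/m^3
# A modest freshening lightens the surface layer an order of magnitude
# more than a comparable warming does, stabilizing the column against
# mixing with the warmer water below.
```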

The ice growth, though, may not have staying power. Liu and Curry expect precipitation to continue to increase over Antarctica’s edge in the near-term future, but they expect that precipitation to turn from snow to rain. If that happens, the trend in Antarctic sea ice would reverse, as rainfall would begin to melt the sea ice. Losing sea ice feeds into a feedback loop: With less surface ice to reflect away the sun’s rays, the ocean warms even more.

Not all climatologists are convinced by the details of Liu and Curry’s explanation, with some asking whether they factored in the influence of the ozone hole; others, like Doug Martinson, wonder whether it is even possible to model the fine details of the Antarctic climate system. What is clear, though, is that we shouldn’t be surprised that sea ice near the north pole and the south pole acts differently.

The Arctic and the Antarctic are very hard to compare, Martinson said, which is why work like that Liu and Curry are undertaking is important. “They are apples and oranges,” Martinson said. “They are dramatically different systems.” In one case, there is an icy ocean surrounded by land. In the other, there is an icy continent surrounded by icy water [Discovery News].

Related Content:
DISCOVER: It’s Getting Hot in Here, Judith Curry and Michael Mann battle over the state of climate science
DISCOVER: Antarctica’s Hot Spot
80beats: Robot Sub Dives Deep for Clues To a Fast-Melting Antarctic Glacier
80beats: An Iceberg the Size of Luxembourg Breaks Free from Antarctica
80beats: Antarctica Was an Oasis for Life During “Great Dying” 250 Million Years Ago

Image: Wikimedia Commons


New Sci Comm Book: Escape from the Ivory Tower | The Intersection

In my presentation yesterday at Scripps, I had a slide in which I jokingly remarked upon the “Science Communication Publishing Juggernaut.” I’m not sure publishers are exactly becoming rich off it, but quite a number of science communication oriented books have come out in the last two years, including Randy Olson’s Don’t Be Such a Scientist and Cornelia Dean’s Am I Making Myself Clear?

And now there is a new addition that I’m very much looking forward to dipping into: Nancy Baron’s Escape from the Ivory Tower: A Guide to Making Your Science Matter. Baron heads science communication training for the famed Aldo Leopold Leadership program, which has now taught these skills to scores of researchers. She knows whereof she speaks.

I’ve only gotten to thumb through Escape from the Ivory Tower so far, on the plane, but I thought I’d give it a plug now, followed by a more in-depth reaction later. This looks like a very important book, and yet another contribution to the ongoing change in how the scientific world reaches out to the public.


Genetics is One: Mendelism and quantitative traits | Gene Expression


In the early 20th century there was a rather strange (in hindsight) debate between two groups of biological scientists attempting to understand the basis of inheritance and its relationship to evolutionary processes. The two factions were the biometricians and the Mendelians. As indicated by their appellation the Mendelians were partisans of the model of inheritance formulated by Gregor Mendel. Like Mendel many of these individuals were experimentalists, with a rough & ready qualitative understanding of biological processes. William Bateson was arguably the model’s most vociferous promoter. Set against the Mendelians were more mathematically minded thinkers who viewed themselves as the true inheritors of the mantle of Charles Darwin. Though the grand old patron of the biometricians was Francis Galton, the greatest expositor of the school was Karl Pearson.* Pearson, along with the zoologist W. F. R. Weldon, defended Charles Darwin’s conception of evolution by natural selection during the darkest days of what Peter J. Bowler terms “The Eclipse of Darwinism”.** One aspect of Darwin’s theory as laid out in The Origin of Species was gradual change through the operation of natural selection upon extant genetic variation. There was a major problem with the model which Darwin proposed: he could offer no plausible engine in regard to the mode of inheritance. Like many of his peers Charles Darwin implicitly assumed a blending model of inheritance, so that the offspring would be an analog constructed about the mean of the parental values. But as any old schoolboy knows, the act of blending diminishes variation! This, along with other concerns, resulted in a general tendency in the late 19th century to accept the brilliance of the idea of evolution as descent with modification, but dismiss the motive engine which Charles Darwin proposed, gradual adaptation via natural selection upon heritable variation.
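How quickly blending destroys variation is easy to see with a toy simulation: if each child’s trait value is just the average of two randomly chosen parents’, the population variance halves every generation. This is a sketch of the abstract blending model itself, not of any real trait:

```python
import random

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

random.seed(1)
pop = [random.gauss(0.0, 1.0) for _ in range(100_000)]
for generation in range(5):
    print(generation, round(variance(pop), 3))
    # Blending: each child is the midpoint of two random parents.
    pop = [(random.choice(pop) + random.choice(pop)) / 2 for _ in pop]
# Variance goes ~1.0, ~0.5, ~0.25, ...: under blending, selection's
# raw material is halved every generation, exactly Darwin's problem.
```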

Mendel’s theory of inheritance rescued Darwinism from the problem of the gradual diminution of natural selection’s raw material through the process of sexual reproduction. Yet due to personal and professional rivalries many did not see in Mendelism the salvation of evolutionary theory. Pearson and the biometricians scoffed at the innumeracy of Bateson and company. They also argued that the qualitative distinctions in trait value generated by Mendel’s model could not account for the wide range of continuous traits which were the bread & butter of biometrics, and therefore of natural selection itself. Some of the Mendelians also engaged in their own flights of fancy, seeing in the large-effect mutations which they were generating in the laboratory an opening for the possibility of saltation, which would render Darwinian gradualism moot.

There were great passions on both sides. The details are impeccably recounted in Will Provine’s The Origins of Theoretical Population Genetics. Early on in the great debates the statistician G. U. Yule showed how Mendelism could be reconciled with biometrics. But his arguments seem to have fallen on deaf ears. Over time the controversy abated as biometricians gave way to the Mendelians through a process of attrition. Weldon’s death in 1906 was arguably the clearest turning point, but it took a young mathematician to finish the game and fuse Mendelism and biometrics together and lay the seeds for a hybrid theoretical evolutionary genetics.

That young mathematician was R. A. Fisher. Fisher’s magnum opus is The Genetical Theory of Natural Selection, and his debates with the American geneticist Sewall Wright laid the groundwork for much of evolutionary biology in the 20th century. Along with J. B. S. Haldane they formed the three-legged population genetic stool upon which the Modern Neo-Darwinian Synthesis would come to rest. Not only was R. A. Fisher a giant within the field of evolutionary biology, but he was also one of the founders of modern statistics. But those accomplishments lay in the future; first he had to reconcile Mendelism with the evolutionary biology which came down from Charles Darwin. He did so with such finality that the last embers of the debate were finally doused: the proponents of Mendelism no longer needed to be doubters of Darwin, and the devotees of Darwin no longer needed to see in the new genetics a threat to their own theory.

One of the major issues at work in the earlier controversies was one of methodological and cognitive incomprehension. William Bateson was a well known mathematical incompetent, and he could not follow the arguments of the biometricians because of their quantitative character. But no matter, he viewed it all as sophistry meant to obscure, not illuminate, and his knowledge of concrete variation in form and the patterns of inheritance suggested that Mendelism was correct. The coterie around Karl Pearson may have slowly been withering, but the powerful tools which the biometricians had pioneered were just waiting to be integrated into a Mendelian framework by the right person. By 1911 R. A. Fisher believed he had done so, though he did not write the paper until 1916, and it was published only in 1918. Titled The Correlation Between Relatives on the Supposition of Mendelian Inheritance, it was dense, and often cryptic in the details. But the title itself is a pointer as to its aim, correlation being a statistical concept pioneered by Francis Galton, and the supposition of Mendelian inheritance being the model he wished to reconcile with classical Darwinism in the biometric tradition. And in this project Fisher had a backer with an unimpeachable pedigree: a son of Charles Darwin himself, Leonard Darwin.

You can find this seminal paper online, at the R. A. Fisher digital archive. Here is the penultimate paragraph:

In general, the hypothesis of cumulative Mendelian factors seems to fit the facts very accurately. The only marked discrepancy from existing published work lies in the correlation for first cousins. Snow, owing apparently to an error, would make this as high as an avuncular correlation; in our opinion it should differ by little from that of the great-grandparent. The values found by Miss Elderton are certainly extremely high, but until we have a record of complete cousinships measured accurately and without selection, it will not be possible to obtain satisfactory numerical evidence on this question. As with cousins, so we may hope that more extensive measurements will gradually lead to values for the other relationship correlations with smaller standard errors. Especially would more accurate determinations of the fraternal correlation make our conclusions more exact.

I have to admit that at the best of times R. A. Fisher can be a difficult prose stylist to follow. One might wish to add from a contemporary vantage point that his language has a quaint and dated feel which compounds the confusion, but the historical record is clear that his contemporaries had great difficulty in teasing apart distinct elements in his argument. Much of this was due to the mathematical aspect of his thinking; most biologists were simply not equipped to follow it (as late as the 1950s biologists at Oxford were dismissing Fisher’s work as that of a misguided mathematician, according to W. D. Hamilton). In the text of this paper there are the classic jumps and mysterious connections between equations along the chain of derivation which characterize much of mathematics. The problem was particularly acute with Fisher because his thoughts were rather deep and fundamental, and he could hold a great deal of complexity in his mind. Finally, there are extensive tables and computations of correlations of pedigrees from that period, drawn from biometric research, which seem extraneous to us today, especially if you have Mathematica handy.

But the logic behind The Correlation Between Relatives on the Supposition of Mendelian Inheritance is rather simple: in the patterns of correlations between relatives, and the nature of variance in trait value across those relatives, one could perceive the nature of Mendelian inheritance. It was Mendelian inheritance which could most easily explain the patterns of variation across continuous traits as they were passed down from parent to offspring, and as they manifested across a pedigree. Early on in the paper Fisher observes that a measured correlation between father and son in stature is 0.5. From this one can explain 1/4 of the variance in height across the set of possible sons. This biological relationship is just a specific instance of the coefficient of determination: how much of the variance in a value, Y (sons’ heights), you can predict from the variance in X (fathers’ heights). Correcting for sex, one can do the same for mothers and their sons (and inversely, fathers and their daughters).*** So combining the correlations of the parents to their offspring you can explain about half of the variance in offspring height in this example (the correlation is higher in contemporary populations, probably because of much better nutrition in the lower orders). But you need not constrain yourself to parent-child correlations. Fisher shows that correlations across many sorts of relationships (e.g., grandparent-grandchild, sibling-sibling, uncle-niece/nephew) have predictive value, though the correlation will be a function of genetic distance.
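To make the relationship between correlation and variance explained concrete, here is a minimal sketch (my own illustration, not Fisher’s; the population parameters are made up) simulating father-son heights with a correlation of 0.5, and confirming that the variance explained is the square of the correlation:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
r = 0.5                                   # target father-son correlation

fathers = rng.normal(175, 7, n)           # father heights, cm
noise = rng.normal(0, 7, n)               # independent residual
# Construct sons so that corr(fathers, sons) is ~r by design
sons = 175 + r * (fathers - 175) + np.sqrt(1 - r**2) * noise

obs_r = np.corrcoef(fathers, sons)[0, 1]
print(f"correlation r      : {obs_r:.3f}")     # ~0.50
print(f"variance explained : {obs_r**2:.3f}")  # ~0.25, i.e. 1/4
```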

What does correlation, a statistical value, have to do with Mendelism? Remember, Fisher argues that it is Mendelism which can explain in detail the patterns of correlations on continuous traits. There were peculiarities in the data which biometricians explained with abstruse and ornate models which do not bear repeating, so implausible was the chain of conjectures. It turns out that Mendelism is not only the correct explanation for inheritance, but it is elegant and parsimonious when set next to the proposed alternatives of equivalent explanatory power. A simple blending model could not explain the complexity of life’s variation, so more complex blending models emerged. But it turned out that a simple Mendelian model explained the complexity just as well, and so the epicycles of the biometricians came crashing down. Mendelism was for evolutionary biology what the Copernican model was for planetary astronomy.

To a specific case where Mendelism is handy: in the data Fisher noted that the height of a sibling can explain 54% of the variance in the height of other siblings, while the height of parents can explain only 40% of that of their offspring. Why the discrepancy? It is noted in the paper that the difference between identical twins is marginal, and other workers had suggested that the impact of environment could not explain the whole residual (what remains after the genetic component). Though later researchers observed that Fisher’s assumptions here were too strong (or at least the state of the data on human inheritance at the time misled him), the big picture is that siblings share a component of genetic correlation with each other which they do not share with their parents, and that is the fraction accounted for by dominance. When dominance is included in the equation, heritability is referred to as “broad sense”; when dominance is removed, it is termed “narrow sense.”
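The standard quantitative-genetic expectations make the asymmetry explicit. A minimal sketch with hypothetical variance components (my numbers, not Fisher’s; shared environment and epistasis are ignored):

```python
# Hypothetical variance components (not Fisher's estimates)
V_A = 0.6   # additive genetic variance
V_D = 0.2   # dominance variance
V_E = 0.2   # environmental variance
V_P = V_A + V_D + V_E   # total phenotypic variance

# Classical expectations under random mating, no shared environment:
#   parent-offspring covariance = V_A / 2
#   full-sib covariance         = V_A / 2 + V_D / 4
r_po  = (V_A / 2) / V_P
r_sib = (V_A / 2 + V_D / 4) / V_P

print(f"narrow-sense heritability h2 = {V_A / V_P:.2f}")          # 0.60
print(f"broad-sense heritability  H2 = {(V_A + V_D) / V_P:.2f}")  # 0.80
print(f"parent-offspring correlation = {r_po:.2f}")               # 0.30
print(f"full-sib correlation         = {r_sib:.2f}")              # 0.35
```

With any dominance variance at all, the expected full-sib correlation exceeds the parent-offspring correlation, which is the qualitative pattern Fisher saw in his data.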

A concept such as dominance can of course be easily explained by Mendelism, at least formally (the physiological basis of dominance was later a point of contention between Fisher and Sewall Wright). Most of you have seen a Punnett square, whereby heterozygous parents will produce offspring in ratios where 50% are heterozygous, 25% one homozygote and 25% the other. But consider a scenario where one parent is a heterozygote, and the other a homozygote for the dominant trait. Both parents will express the same trait value, as will their offspring. But there will be a decoupling of the correlation between trait value and genotype here, as the offspring will be genotypically variant. Parent-offspring correlations along the regression line become distorted by the dominance parameter, and so the correlations are reduced. In contrast, full siblings share the same dominance effects because they share the same parents and can potentially receive the same identical-by-descent alleles twice. Consider a rare recessively expressed allele, one for cystic fibrosis. As it is rare in a population, in almost all cases where the offspring are homozygotes for the disease-causing allele both parents will be heterozygotes. They will not express the disease because of its recessive character. But 25% of their offspring may, because of the nature of Mendelian inheritance. So there is a major possible disjunction between trait values in the parental and offspring cohorts. On the other hand, each sibling has a 25% chance of expressing the disease, and so the correlation among siblings is much higher than that with the parents (who do not express the disease). In other words, siblings can resemble each other much more than they resemble either parent! This makes intuitive sense when you consider the inheritance constraints and features of Mendelism in diploid sexual species. A simple blending model could arguably account for this as well. What it can not account for is the persistence of variation. It is through the segregation of independent Mendelian alleles, and their discrete and independent reassortment, that one can see how variation would not only persist from generation to generation, but manifest within families as alleles across loci shake out in different combinations. A simple model of inheritance can then explain two specific phenomena which are very different from each other.
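A quick simulation makes the carrier-by-carrier case concrete (a sketch of my own, not from the paper; the 1/4 ratio is exact under Mendel’s rules, the code just recovers it numerically):

```python
import random

random.seed(1)

def child_of_carriers() -> str:
    """Each Aa parent transmits one of its two alleles at random."""
    return random.choice("Aa") + random.choice("Aa")

n = 100_000
kids = [child_of_carriers() for _ in range(n)]
affected = sum(k == "aa" for k in kids) / n

# Both parents are unaffected carriers, yet ~1/4 of children are
# affected homozygotes -- a disjunction between parental and
# offspring trait values that blending inheritance cannot produce.
print(f"fraction of affected (aa) offspring: {affected:.3f}")  # ~0.25
```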

There is much in Fisher’s paper which prefigures later work, and much which is rooted in the somewhat shaky pedigrees and biometric research of his day. The take-home is that Fisher starts from an a priori Mendelian model, and shows how it could cascade down the chain of inferences and produce the continuous quantitative characteristics we see all around us. From the Hardy-Weinberg principle he drills down through the inexorable layers of logic to generate the formalisms which we associate with heritability, thick with variance terms. The Correlation Between Relatives on the Supposition of Mendelian Inheritance was a marriage between biometrics and Mendelism which eventually gave rise to population genetics, and forced a truce between that nascent domain and what became quantitative genetics.

As I said, the paper itself is dense, often opaque, and characterized by a prose style that lends itself to exegesis. But I find that it is often useful to see the deep logics behind evolution and genetics laid bare. Some of the issues which we grapple with today in the “post-genomic era” have their intellectual roots in this period, and in Fisher’s work which showed that quantitative continuous traits and discrete Mendelian characters were one and the same. The “missing heritability” debate hinges on the fact that classical statistical techniques tell us that Mendelian inheritance is responsible for the variation of many traits, but modern statistical biology, which has recourse to the latest sequencing technology, has still not been able to crack that particular nut with satisfaction. Perhaps decades from now biologists will look at the “missing heritability” debate and laugh at the blindness of current researchers, when the answer was right under their noses. Alas, I suspect that we live in the age of Big Science, and a lone genius is unlikely to solve the riddle on his lonesome.

Citation: Fisher, R. A. (1918). The correlation between relatives on the supposition of Mendelian inheritance. Transactions of the Royal Society of Edinburgh, 52, 399-433.

Suggested Reading: The Origins of Theoretical Population Genetics, R.A. Fisher: The Life of a Scientist, and The Genetical Theory of Natural Selection.

* Though I will spare you the details, it may be that the Galtonians were by and large more Galtonian than Galton himself! It seems that Francis Galton was partial to William Bateson’s Mendelian model.

** To be fair, I believe the phrase was originally coined by Julian Huxley.

*** Just use standard deviation units.

Image Credit: Wikimedia

NCBI ROFL: Head banger’s whiplash. | Discoblog

“OBJECTIVE: The current trend in dancing includes “head banging” with extreme flexion, extension, and rotation of the head and cervical spine. We suggest that dance-related severe pain in the cervical area may result from head banging. DESIGN: A cohort of 37 eighth graders ages 13 or 14 participated in a dance marathon for charity lasting 7 h. There were 26 girls and 11 boys. SETTING: During the dance marathon, three “heavy metal” songs were played during which head banging could be done. PATIENTS: The painful syndromes that relate to head banging were evaluated by a convenience sample of the 37 marathon dancers in the eighth grade. INTERVENTIONS: None. MAIN OUTCOME MEASURES: A self-selected age-matched control group is included since 17 adolescents participated in head banging and 20 did not. RESULTS: Of the head bangers, 81.82% of the girls and 16.6% of the boys had resultant cervical spine pain that lasted 1-3 days. Only 26.2% of non-head-banging girls and 0% of non-head-banging boys had cervical spine pain lasting 1-3 days. Of all the 8th-grade participants, 62.16% had pain somewhere. Other types of pain included leg pain, back pain, and headache. Only three adolescents took any medication for their pain. CONCLUSIONS: The head-banger’s whiplash is a self-limiting painful disorder. The easy resolution of the pain problem in adolescents is a tribute to the resilience of youth.”


Photo: flickr/kelsey_lovefusionphoto

Related content:
Discoblog: NCBI ROFL: Head and neck injury risks in heavy metal: head bangers stuck between rock and a hard bass.
Discoblog: NCBI ROFL: Injuries due to falling coconuts.
Discoblog: NCBI ROFL: I wonder if this paper was cheer-reviewed.

WTF is NCBI ROFL? Read our FAQ!


Crescent planet, crescent moonrise | Bad Astronomy

Oh. My.

Another lovely, stunning Cassini image: A thin crescent Enceladus rising over the sunlit cloud tops of Saturn:

[Image: cassini_enceladus_crescent]

What a sight! As the spacecraft rounded into the dark side of Saturn, it turned back toward the planet (and the far more distant Sun). The top of Saturn’s atmosphere is still lit as seen from Cassini’s vantage point, but so too is the moon Enceladus. The moon was between Saturn and Cassini, so the geometry dictates that it too was showing almost entirely its dark side to the spacecraft. The result is the thin crescent of the moon just over the (only partially seen) thinly lit crescent of its parent planet.

This raw image (meaning it has not been processed to remove camera defects and other artifacts) is one of several available on the CICLOPS site. There are other shots of Enceladus which show its famous string of geysers, and incredible close-ups of craters on the moon Tethys — like the one shown here (click to embiggen).

The average density of Tethys is actually less than that of water, meaning it is mostly ice. Clearly, Tethys has had a rough past; the surface is saturated with craters (the weird lines on the right-hand side are one of those camera artifacts I mentioned, and aren’t real). The moon is nearly 1100 km (660 miles) across, so clearly Penelope, the large crater you can see there, is enormous… and old, at least old enough to have suffered multiple peppering impacts in the time since it was created.
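That density claim is easy to sanity-check. A back-of-the-envelope computation using rough published figures for Tethys (the mass and radius below are my approximations, not from the post):

```python
import math

# Approximate published figures (my values, not from the post)
mass_kg  = 6.17e20        # mass of Tethys
radius_m = 531e3          # mean radius, ~531 km

volume_m3 = 4.0 / 3.0 * math.pi * radius_m**3
density = mass_kg / volume_m3

print(f"bulk density: {density:.0f} kg/m^3")  # ~980, just under water's 1000
```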

The other images on the CICLOPS site are wonderful, and you should look. Also, Emily Lakdawalla took a few of them and made animations which are particularly amazing. Zoom in on Enceladus, or watch it rise in front of Saturn!


Native American ancestry in African Americans | Gene Expression

That is, in the African American sample in the HapMap3 population set. I was just browsing the Admixture manual, and stumbled onto this plot:

[Image: hapmap3afswplot]


CEU = Utah Whites, and YRI = Yoruba. They should be familiar from the previous versions of the HapMap. MEX = Mexican Americans from Los Angeles. K = 3, three ancestral populations. So what’s ASW? “African ancestry in Southwest USA.” I was struck by the high proportion of what seems to be Native American ancestry in some of the African Americans. This is a controversial aspect of scientific genealogy, as most researchers have found far less Native American ancestry within the African American population than had been the expectation within the community (Henry Louis Gates Jr. faced criticism and skepticism from the black public when reporting this result). These results would conflict with that. But here’s the fine print on this sample. All you really need to know is this: “Principal Investigator for Community Engagement and Sample Collection: Morris Foster, University of Oklahoma, Norman, Oklahoma.” Oklahoma used to be called “Indian Territory,” and most Americans are aware that whites and blacks in this part of the country have more than the usual fraction of indigenous ancestry. Just something to keep in mind when encountering results that come out of HapMap3 populations.
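For those who want to poke at this themselves, here is a minimal sketch of how such a plot is generated, assuming ADMIXTURE has already been run at K = 3 on a merged panel (e.g. admixture merged.bed 3, which writes per-individual ancestry fractions to merged.3.Q; the file names here are hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt

# merged.3.Q: one row per individual, one column per ancestral
# component (written by ADMIXTURE). The file name is hypothetical.
Q = np.loadtxt("merged.3.Q")

# Sort individuals by their first ancestry component so the stacked
# bars form the familiar admixture "waterfall".
Q = Q[np.argsort(Q[:, 0])]

bottom = np.zeros(len(Q))
for k, color in enumerate(["steelblue", "firebrick", "goldenrod"]):
    plt.bar(range(len(Q)), Q[:, k], bottom=bottom, width=1.0, color=color)
    bottom += Q[:, k]
plt.ylabel("ancestry fraction")
plt.xlabel("individuals")
plt.title("ADMIXTURE, K = 3")
plt.show()
```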

Third World means nothing now | Gene Expression

I recently had an exchange on twitter about the term “Third World” (starting from a tweet pointing to the idea of “Third World America”). Here’s Wikipedia on the origins of the term:

The term ‘Third World’ arose during the Cold War to define countries that remained non-aligned or not moving at all with either capitalism and NATO (which along with its allies represented the First World) or communism and the Soviet Union (which along with its allies represented the Second World). This definition provided a way of broadly categorizing the nations of the Earth into three groups based on social, political, and economic divisions.

Although the term continues to be used colloquially to describe the poorest countries in the world, this usage is widely discouraged since the term no longer holds any verifiable meaning after the fall of the Soviet Union deprecated the terms First World and Second World. A term increasingly being used to replace “Third World” is “Majority World”, which is gaining popularity in the global south. The term was introduced in the early nineties by the Bangladeshi photographer and activist, Shahidul Alam.

I don’t think the term “Third World” has much utility, but I don’t think it’s useful to replace it with another dichotomous categorization which simply falls into the trap of a human cognitive bias. The bias seems universal, and doesn’t brook ideology. Racial nationalists and multiculturalist liberals both accept the dichotomy between “people of color” and whites. I believe most white liberals today would agree with the framework that the white nationalist Lothrop Stoddard outlined in The Rising Tide of Color Against White World-Supremacy; they would simply invert the moral valence, looking positively upon developments which Stoddard viewed with concern. Many racial minorities in the West also buy into the white vs. non-white dichotomy for purposes of cooperation between different groups. Though it has tactical utility in white-majority societies, it’s frankly ignorant to presume that there’s any fundamental solidarity between “people of color.” I assume that dark-skinned South Asians and Africans who have lived in East Asia, or even the Gulf states, can confirm that racism is not necessarily conditional on the existence of white people.*

But there is also the problem that there’s a wide range of economic and social outcomes outside of the developed world. To give you a sense, here’s a chart from Google Data Explorer:

I wanted to show a two-dimensional chart to indicate that issues of development shouldn’t always be viewed in a scalar context. Many Asian nations do not have the political instability or issues with disease that African nations have, but they’re far closer to the Malthusian limit. So in general Africans are actually relatively well fed, though that is probably due in part to the high mortality in those regions. NGOs and relative political stability have established a floor on the quality of life in very poor nations like Bangladesh (so that mass starvation is no longer a concern), but that floor is very low indeed. Low enough that South Asia is the world epicenter of nutritionally induced mental retardation in raw numbers.

And then you have nations such as Mexico or Brazil, which suffer from a “contrast effect.” Mexico is next to the United States, and so it cannot perceive itself as a rich nation. Brazil has long been the “Land of the Future,” and is gifted with a surplus of land, so its relative underperformance next to the USA grates. But on a worldwide scale they’re both rather affluent. Mexico is the second-fattest nation after the United States. 1 in 10 Brazilians is obese. Obesity is not positive, but it is an indication that populations have moved above bare subsistence and that large swaths now have a surfeit of calories.

Here’s a bar plot of malnutrition prevalence under the age of 5, with color-coding by region:

* Sometimes I feel that in terms of the model of how the universe works, white nationalists and non-white racial activists in the West can agree on the facts. Whites are supernatural creatures; the former simply view them as gods, the latter as demons. But any model which does not include whites is no model at all, for they are the Nephilim of our age. When I talk to people versed in post-colonial theory about history, a history without whites does not compute. They say that love and hate are two sides of the same coin.