The Pixel Vision of Kirk Crippens | Visual Science


I met Kirk Crippens at the Photo Alliance portfolio review in San Francisco a few weeks back, and was psyched to view his project Pixel Nation. I liked that the images in this series isolate everyday textures we take for granted, revealing the unexpected.

Crippens reports that while photographing pixels he discovered an ornate world of color and structure, which he shot at varying magnifications. Crippens: “Many of the photographs were taken with long exposures while the images on the screen were changing. This process meant that I did not know what the photo would look like until after the image was captured. With analog television ending in 2009, I decided to include the patterns of older screens, the pre-pixel screens. Changes have occurred in pixel design over the years – from the simple dashes and dots of early color TVs and computer monitors to plasma’s gaseous pixels and HD television’s intricately designed pixels. The ubiquitous pixel has been transforming just under our gaze.” Behold, the lowly pixel!

Sony Trinitron

Images courtesy Kirk Crippens



Testeidolia | Bad Astronomy

[Note: This post honors the day that is April 1.]

I have posted many a picture purporting paranormal parts that are actually just our minds playing tricks on us. But this one really puts us to the test. Or testis.

Yes. It’s a haunted scrotum.

It looks more like a monkey to me than a ghostly face, and there’s a vas deferens between them. Maybe you see something different. Leave a comment if you do, and please keep it clean… but have a ball.

Tip o’ the urethra to Dr. Joe Albietz.


Credibility of the Wikio science blogs rankings | Gene Expression

Greg Laden posted the Wikio science blogs rankings for this month. Here they are:

1. Wired Science – Wired Blog
2. Watts Up With That?
3. Climate Progress
4. RealClimate
5. Bad Astronomy
6. Climate Audit
7. Next Generation Science
8. Respectful Insolence
9. Dispatches from the Culture Wars
10. The Frontal Cortex
11. Deltoid
12. FuturePundit
13. Gene Expression
14. Uncertain Principles
15. BPS Research Digest
16. Living the Scientific Life (Scientist, Interrupted)
17. A Blog Around The Clock
18. Greg Laden’s Blog
19. TierneyLab – New York Times blog
20. Stoat

First, where’s Pharyngula? Like him or hate him, P. Z. Myers has to be high on this list. Where’s Cosmic Variance? Not Exactly Rocket Science? Perhaps some of these weblogs opted out of the rankings; I don’t know. But whatever is going on, something doesn’t pass the smell test here. For what it’s worth, my own weblog is ranked highly, but at the old URL, so I assume I’ll slowly start journeying south in the rankings as people start switching links.

Why Dukies underachieve as pros | Gene Expression

Interesting article in Slate. This shocked me:

Out of all the big schools, NBA teams likely fall harder for Dukies because of their NCAA tournament success. In Stumbling on Wins, economists David J. Berri and Martin B. Schmidt find that players who appear in the Final Four the year they’re drafted get a boost of 12 draft positions. Berri and Schmidt believe that this boost is unwarranted. One of the “statistically significant factors … that lead to less productivity in the NBA,” they write, is “playing for an NCAA champion the year drafted.”

I’ll have to look at the model itself, but this is somewhat surprising, if plausible. It makes intuitive sense, but NBA teams don’t normally take the draft lightly; they do their prep work. On the other hand, as the years go by I’ve become more skeptical about the ability of institutions to squeeze all the efficiencies out of any given process (I suspect there’s a principal-agent problem: the people making the final call are less likely to get fired if they select a “can’t miss” prospect they privately think is overrated and he flops than if they pick someone they believe is underrated and their assessment turns out to be wrong).

Personally, I think the similarities between Duke and Indiana during the Bobby Knight years are telling, and Knight was a mentor of Mike Krzyzewski. Both schools seem to produce fewer stars at the professional level relative to the success of their teams, and I think the group-versus-individual dynamic is key. There are differences between the pro and collegiate levels, and Duke and Knight’s Indiana teams were able to leverage group-level efficiency and precision in collective action to make up for shortfalls in relative individual talent. When a team manages to win many games, individual players are perceived to be better than they are. Take individuals out of that context and their more modest talent endowments become obvious. A college team that routinely makes it far in the NCAA tournament can regularly field what might be “role players” at best in the NBA.

NCBI ROFL: Uh, no. Aunt Flo means no ho, bro! | Discoblog

The receptivity of women to courtship solicitation across the menstrual cycle: a field experiment.

“Research has demonstrated that women’s behaviors toward men or sexual interest are different across the menstrual cycle. However, this effect was only found on verbal interest and the receptivity of women to a courtship solicitation had never been tested before. In a field experiment, 455 (200 with normal cycles and 255 pill-users) 18-25-year-old women were approached by 20-year-old male-confederates who solicited them for their phone number. A survey was administered to the women solicited 1 min later in order to obtain information about the number of days since the onset of their last menses. It was found that women in their fertile phase, but not pill-users, agreed more favorably to the request than women in their luteal phase or in their menstrual phase.”


Thanks to John for today’s ROFL!

Photo: flickr/Beau B

Related content:
Discoblog: NCBI ROFL: Bust size and hitchhiking: a field study.
Discoblog: NCBI ROFL: Does this outfit make me look like I want to get laid?
Discoblog: NCBI ROFL: Women’s bust size and men’s courtship solicitation.


New EPA Rules Clamp Down on Mountaintop Removal Coal Mining | 80beats

It’s been a busy week for President Obama and energy. Two days ago his administration rolled out plans to open millions of new acres of ocean off the U.S. coastline to oil and gas drilling; after we posted on it, many DISCOVER fans expressed their disdain for Obama’s move on our Facebook page. Today, though, there’s good news for environmentalists: Obama’s EPA said it will place stricter restrictions on mountaintop removal coal mining.

At “mountaintop removal” mines, which are unique to Appalachian states, miners blast the peaks off mountains to reach coal seams inside and then pile vast quantities of rubble in surrounding valleys [Washington Post]. The chemicals that result from decapitating a mountain and mining coal tend to run off into the valleys and pollute rivers and streams. So when 80beats last left mountaintop removal, a group of scientists had taken a public stance in the journal Science calling for a complete end to this kind of mining.

The new EPA rules don’t go that far. But the mining regulations will be difficult for mountaintop removal projects to meet. Basically, the EPA will set a standard level for permissible mining runoff allowed to reach waterways, and if a mining project is expected to exceed five times that number, it won’t go forward. Agency head Lisa Jackson says, “You are talking about either no or very few valley fills that are going to be able to meet standards like this. What the science is telling us is that it would be untrue to say you can have any more than minimal valley fill and not see irreversible damage to stream health” [The Guardian]. Between 2000 and 2008, the government had granted more than 500 permits for valley fills.

While the new rules are focused on new mines, it’s possible that established mines may not be simply grandfathered in. Ms. Jackson cited Arch Coal Inc.’s Spruce No. 1 mine, the biggest surface mine in West Virginia, as an example of a project that didn’t meet the new standard and would “degrade water quality in streams adjacent to the mine” [Wall Street Journal]. Last week the EPA proposed revoking Arch Coal’s permit, which unsurprisingly made the mining industry balk at what it calls an unprecedented move.

Related Content:
80beats: Obama Proposes Oil & Gas Drilling in Vast Swaths of U.S. Waters
80beats: Scientists Demand End to Mountaintop Decapitation; Mining Projects Advance Anyway
80beats: After Massive Tennessee Ash Spill, Authorities Try to Assess the Damage
80beats: Isn’t It Ironic: Green Tech Relies on Dirty Mining in China
80beats: Obama Admin. Rolls Back Bush-Era Rules on Mining & Forests

Image: Wikimedia Commons / JW Randolph


Meet the Genetically Engineered Pig With Earth-Friendly Poop | 80beats

Canada has approved for limited production a genetically engineered, environmentally friendly pig.

The “Enviropig” has been genetically modified in such a manner that its urine and feces contain almost 65 percent less phosphorus than usual. That could be good news for lakes, rivers, and ocean deltas, where phosphorus from animal waste can play a role in causing algal blooms. These outbursts of algae rapidly deplete the water’s oxygen, creating vast dead zones for fish and other aquatic life [National Geographic].

All living creatures need phosphorus, as the element plays an important role in many cellular and organ functions. Domesticated pigs get their daily dose from corn or cereal grains, but not without a struggle. These foods contain a type of phosphorus that is indigestible to the pigs, so farmers also feed their pigs an enzyme called phytase to allow the animals to break down and digest the phosphorus. But ingested phytase isn’t as effective at breaking down phosphorus as phytase created inside the pig would be, so a fair amount of the element gets flushed out in pig waste. That waste, in turn, can make its way into the water supply [National Geographic].

To fix this problem, the scientists tinkered with the swine’s genes to make the pig produce its own phytase in its salivary glands. When the cereal grains are consumed, they mix with the phytase in the saliva, and throughout the pig’s digestive tract the enzyme works to break down the phosphorus in the food. With more phosphorus retained within the body, the amount excreted in waste is reduced by almost 65 percent, say researchers.

The researchers who created the Enviropig say it’s not just eco-friendly; it also cuts farmers’ feed-supplement costs. If the pigs eventually become common, they could also help U.S. farmers comply with “zero discharge” rules that forbid pork producers from releasing nitrogen or phosphorus runoff.

The Enviropigs will be raised only in controlled research settings in Canada for now, and experts say transgenic pork won’t be landing on your plate anytime soon; the new biotech pig will face years of safety trials to see if it should be approved for commercial production and consumption in the United States and Canada. No transgenic animal has been approved for consumption as of yet. But in 2008 the FDA announced approval of the first human health product made from a genetically engineered animal. The goat-derived anticoagulant, ATryn, is used for the prevention of blood clots in patients with a rare disease-causing protein deficiency [National Geographic].

Related Content:
80beats: Likely to Be FDA Approved: Transgenic Goats With Pharmaceutical Milk
80beats: Coming Soon to a Grocery Near You: Genetically Engineered Meat
80beats: Your Quarter-Pounder Just Might Have Come From a Cloned Cow (Indirectly)
80beats: Largest “Dead Zone” Yet Predicted for the Gulf of Mexico
DISCOVER: “Frankenfoods” That Could Feed the World (photo gallery)

Image: Enviropig/University of Guelph


On Track for a Monday Launch

The STS-131 crew from the left: Mission Specialists Clayton Anderson, Naoko Yamazaki, Stephanie Wilson, Dorothy Metcalf-Lindenburger and Rick Mastracchio, Pilot James P. Dutton, Jr., and Commander Alan Poindexter. Image credit: NASA/Jim Grossmann

Everything looks good for Monday’s shuttle launch. Even the weather is looking favorable, with just a 20 percent chance of a delay.

Launch time is scheduled for 6:21 am EDT Monday morning. I will try to record it and put it up for those not able to see it for whatever reason. Heh, somebody will have it even if I flub it up.

This morning a Soyuz TMA-18 rocket launched NASA astronaut Tracy Caldwell Dyson and Russian cosmonauts Alexander Skvortsov and Mikhail Kornienko on their way to the International Space Station. You know, of course, that the Russians will soon be pretty much the only ticket to the ISS.

No matter, our guys will be too busy fooling around with cars to amount to much spacewise in what some former NASA astronauts are calling “the dismantling of NASA”.  I wonder if the NASA people ever dreamed they would go from “rocket scientists” to glorified grease monkeys when they worked so hard in school to be able to have a chance at working at the prestigious agency.  I bet not.

I am not dissing automotive people either, so no emails to that effect, thank you. I have many friends in the automotive industry, and I have NO talent for such work myself. But just the same, you have to admit it’s rather far afield from what these NASA people worked so hard to be able to do. I am also concerned about the message being sent to school-aged young adults; kind of like: meh, like science? Don’t bother.

The Best Way to Predict Box Office Hits: Twitter Chatter | Discoblog

Wondering which Hollywood movie will be this weekend’s smash hit? Head straight to Twitter, as a new study (pdf) suggests the microblogging service offers the most accurate predictions of a movie’s success.

In a new paper about Twitter’s success at gauging a film’s fortunes, Sitaram Asur and Bernardo Huberman from HP devised a simple model that tracks people’s tweets about a certain movie (for their study, they collected almost 3 million tweets). The researchers found that, compared to the industry’s gold standard for movie success prediction, the Hollywood Stock Exchange, tweets were far more accurate in predicting how much money a movie would make.

The researchers’ system tracks the rate and frequency of movie mentions, and also categorizes the tweet reviews as either positive or negative. The Twitter findings reflect marketing realities, the researchers note: While movie studios can push people to the theaters with hype and pre-release marketing, it’s usually positive reviews and word-of-mouth that sustains people’s interest after a movie has been released.
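To make the general idea concrete, here’s a minimal sketch of that kind of two-feature approach – a mention rate plus a positive-to-negative sentiment ratio – written in Python. It is purely illustrative: the keyword lists, the toy historical numbers, and the one-variable regression are my own stand-ins, not Asur and Huberman’s actual model.

```python
from collections import Counter

# Toy keyword lists for tagging tweets as positive or negative.
# (A real system would use a trained sentiment classifier.)
POSITIVE = {"love", "loved", "great", "amazing", "awesome"}
NEGATIVE = {"hate", "hated", "boring", "awful", "terrible"}

def tweet_features(tweets, hours):
    """Return (mentions per hour, smoothed positive-to-negative ratio)."""
    counts = Counter()
    for text in tweets:
        words = set(text.lower().replace(",", " ").split())
        if words & POSITIVE:
            counts["pos"] += 1
        elif words & NEGATIVE:
            counts["neg"] += 1
    rate = len(tweets) / hours
    pn_ratio = (counts["pos"] + 1) / (counts["neg"] + 1)
    return rate, pn_ratio

# Hypothetical past releases: (pre-release tweet rate, opening-weekend gross in $M).
history = [(120, 18.0), (300, 27.0), (540, 41.5), (900, 62.0)]

# One-variable least-squares fit: gross ~= a * rate + b.
n = len(history)
sx = sum(x for x, _ in history)
sy = sum(y for _, y in history)
sxx = sum(x * x for x, _ in history)
sxy = sum(x * y for x, y in history)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

rate, pn = tweet_features(["Loved it, amazing cast", "kinda boring tbh"], hours=1)
print(f"rate = {rate:.1f}/hr, P/N ratio = {pn:.2f}, "
      f"predicted opening ~= ${a * rate + b:.1f}M")
```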

Mashable writes:

One movie analyzed in this study, The Blind Side, had an “enormous increase in positive sentiment after release,” reads the paper. The film’s score jumped from 5.02 to 9.65 on HP’s scale. After a “lukewarm” first weekend, with sales around $34 million, the movie “boomed in the next week ($40.1 million), owing largely to positive sentiment.”

The study suggests that the collective chatter of the crowds on Twitter can be used not just to predict future movie hits, but also as a valuable tool for marketers preparing to introduce new products. Perhaps even political candidates could use Twitter to check for interest in their policy positions. Hey, is #FinancialRegulations trending yet?

Follow us on Twitter.

Related Content:
Discoblog: How To Make Your Twitter Followers Uneasy: Use ShadyURLs
Discoblog: Astronauts in Space Finally Enter the Intertubes
Discoblog: New Device Aims to Read Your Dog’s Mind—and Broadcast It on Twitter

Image: The Blind Side


Cox on Ross | Bad Astronomy

My friend and fellow science promoter Brian Cox will be on Friday Night with Jonathan Ross tonight on BBC America (it already aired last week in the UK). Brian is funny and smart, as is Wossy (as Ross is called), so this should be a great time. And speaking of Matt Smith and Doctor Who, Mr. Smith will be on Wossy’s show as well.

Speaking of speaking of Doctor Who, oh how I wish this were true.

Tip o’ the sonic screwdriver to Fizzygoo for the Obama link.


Why Didn’t the Young Earth Freeze Into an Ice Ball? | 80beats

The “young sun paradox” just won’t go away. For decades, scientists like Carl Sagan have tried to resolve this mystery of the early solar system—how the newborn Earth stayed warm enough to keep liquid water—but it continues to bob and weave around an answer. In the journal Nature, a team led by Minik Rosing proposes an alternative to the leading explanation, which relies on a powerful greenhouse effect. But don’t expect the debate to end here.

The problem is this: The young Earth received much less heat from the sun. Four billion years ago, a lower solar luminosity should have left Earth’s oceans frozen over, but there is ample evidence in the Earth’s geological record that there was liquid water — and life — on the planet at the time [Space.com]. So what gives? The traditional explanation, going back to the 1970s, has been that a powerful greenhouse effect, far stronger than the one we experience today, kept the Earth bathed in enough warmth to keep water sloshing around the planet’s surface as a liquid and not packed in solid ice. In 1972, Sagan and colleague George Mullen wrote that such an effect would have required intense carbon dioxide concentrations in the atmosphere during that period, the Archean.
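For a sense of scale (a back-of-the-envelope illustration of my own, not from Rosing’s paper): the young sun is commonly estimated to have shone at roughly 70 percent of its present output, and a simple radiative-balance calculation shows why that matters. Even today, Earth’s no-greenhouse equilibrium temperature is only about 255 K; the modern greenhouse effect adds roughly 33 K to reach the actual ~288 K surface average. Dim the sun by 30 percent and the equilibrium value drops by another ~20 K, so a comparably sized greenhouse effect would leave the surface below freezing.

```python
# Rough radiative-balance sketch of the faint young sun problem.
# Assumptions (mine, for illustration): ~70% of today's solar output
# four billion years ago, and today's Bond albedo of ~0.3.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S_NOW = 1361.0     # present-day solar constant, W m^-2
ALBEDO = 0.3       # Earth's present Bond albedo

def equilibrium_temp(solar_constant, albedo=ALBEDO):
    """Equilibrium temperature (K) of a planet with no greenhouse effect."""
    return (solar_constant * (1 - albedo) / (4 * SIGMA)) ** 0.25

print(f"today:       {equilibrium_temp(S_NOW):.0f} K")        # ~255 K
print(f"faint young: {equilibrium_temp(0.7 * S_NOW):.0f} K")  # ~233 K
```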

But the evidence isn’t there, Rosing argues. To test the greenhouse hypothesis, he and his team studied banded iron formations in Greenland dating back 3.8 billion years. They focused on two minerals, magnetite and siderite, that can serve as a bellwether of the CO2 concentrations in the atmosphere. Too much CO2, and magnetite can’t form, whereas the opposite is true for siderite [ScienceNOW]. According to Rosing’s analysis, the CO2 concentration in the atmosphere could have been as high as three times what we see today, but that’s not nearly enough to account for the warmth that would’ve been needed to stave off a snowball Earth.

So, Rosing puts forth his own solution. Back then, he says, the continents were smaller, and thus more of the Earth’s surface was covered by water. Since water tends to absorb more heat than land, an ultra-watery Earth could have helped conserve warmth. Secondly, he says, the early Earth wasn’t a cloudy place, because life had only just arisen. The droplets of water that make up clouds form by glomming on to tiny particles, called cloud condensation nuclei, many of which are chemical substances produced by algae and plants, which weren’t present on the Earth at that time [Space.com]. You see the same effect today, he says, in areas of the open ocean that have neither much marine life nor much cloud cover. And if the young Earth truly had few clouds, more sunlight reached the surface.

Atmospheric scientist James Kasting says that if Rosing’s team is right, the idea could have consequences beyond our own planet’s history—it could widen the habitable zone for exoplanet hunters seeking extraterrestrial life. But regarding the young sun paradox, he says, this new answer is incomplete. “I think their mechanism fails because they just barely get up to freezing point,” Kasting said, adding that it also fails to take into account the reflectivity of the planet’s ice [Discovery News].

Related Content:
DISCOVER: The Fast Young Earth
DISCOVER: Did Life Evolve in Ice?
DISCOVER: Snowball Earth
DISCOVER: Our Solar System’s Explosive Early Years

Image: Wikimedia Commons / Neethis


The Rare Humans Who See Time & Have Amazing Memories | Discoblog

The “normal” form of the condition called synesthesia is weird enough: For people with this condition, sensory information gets mixed in the brain, causing them to see sounds, taste colors, or perceive numbers as having particular hues.

But psychologist David Brang is studying a bunch of people with an even odder form of synesthesia: These people can literally “see time.”

Brang’s subjects have time-space synesthesia; because they have extra neural connections between certain regions of the brain, the patients experience time as a spatial construct.

In his research, Brang describes one patient who was able to see the year as a circular ring surrounding her body that rotated clockwise throughout the year. The current month was reportedly inside her chest with the previous month in front of her chest, reports New Scientist.

Describing the condition to New Scientist, Brang said:

“In general, these individuals perceive months of the year in circular shapes, usually just as an image inside their mind’s eye.”

So, while boring, normal folks whip out their physical or online calendars to jot down plans for the future, time-space synesthetes just look straight ahead and can tell you precisely what days work for them. It’s almost like their personal version of augmented reality.

Brang found these uncommon individuals by recruiting 183 students for a trial, in which he asked them to visualize the months of the year and reconstruct that representation on a computer screen.

New Scientist writes:

Four months later the students were shown a blank screen and asked to select a position for each of the months. They were prompted with a cue month – a randomly selected month placed as a dot in the location where the student had originally placed it. Uncannily, four of the 183 students were found to be time-space synesthetes when they placed their months in a distinct spatial array – such as a circle – that was consistent over the trials.

A second test was conducted to compare the memories of time-space synesthetes to those of regular folks. Brang asked his subjects to memorize an unfamiliar spatial calendar and reproduce it. The results showed that when it came to recalling events in time, time-space synesthetes far outperformed the others.

Earlier research has shown that people with synesthesia, on average, remember about 123 different facts related to a certain period in their life, while a regular person could recall just 39 facts.

Psychologist Julia Simner last year told the BBC:

“So the average person might remember that they went on holiday to America when they were seven…. This person would recall the name of the guesthouse, the name of the guesthouse owner and the breed of the owner’s dog.”

Related Content:
Discoblog: “Seeing” Sounds and “Hearing” Food: The Science of Synesthesia
80beats: Revealed: The Genetic Root of Seeing Sounds and Tasting Colors
Cosmic Variance: Your Mental Image of Time
Cosmic Variance: Martian Colors explains how to test for synesthesia
DISCOVER: Are We All Synesthetes?

Image: Julia Simner / BBC


Yet-Another-Genome Syndrome | The Loom

There’s a certain kind of headline I have become sick of: “Scientists Have Sequenced the Genome of Species X!”

Fifteen years ago, things were different. In 1995, scientists published the first complete genome of a free-living organism ever–that of a nasty germ called Haemophilus influenzae. Bear in mind, this was in the dark ages of the twentieth century, when a scientist might spend a decade trying to decipher the sequence of a single gene.

And then, with a giant thwomp, a team of scientists dropped not just one gene, but 1740 genes, spelled out across 1.8 million base pairs of DNA. At the core of the paper was an image of the entire genome, a kaleidoscopic wheel marking all the genes that it takes to be H. influenzae. It had a hypnotizing effect, like a genomic mandala. Looking at it, you knew that biology would never be the same.

Looking over that paper today, I’m struck by what a catalog it is. The authors listed every gene and sorted them by their likely function. They didn’t find a lot of big surprises about H. influenzae itself. It had genes for metabolism, they reported, and genes for attaching to host cells, and for sensing the environment. But scientists already knew that. What was remarkable was the simple fact that scientists could now sequence so much DNA in so little time.

Then came more microbes. Then, ten years ago this coming June, came a rough draft of the human genome. Then finished drafts of other animals: chickens, mice, flies, worms. Flowers, truffles, and malaria-causing parasites. They came faster and faster, cheaper and cheaper. The acceleration now means that the simple accomplishment of sequencing a genome is no longer news.

On Wednesday I caught a talk at Yale by Jonathan Rothberg, a scientist who invented “next-generation sequencing” in 2005. By sequencing vast numbers of DNA fragments at once, the new technology made it possible to get a genome’s sequence far faster than earlier methods. Rothberg’s company, 454, got bought up by Roche, leaving him without a lot to do. So he came up with a new machine: a genome sequencer about the size of a desktop printer that can knock out complete genomes with high accuracy in a matter of hours. Rothberg has been unveiling details of the machine over the past few weeks. To convince his audience that the machine actually works, he flashed another mandala wheel–the 4.7 million base pairs of E. coli.

I recalled when the first E. coli genome was published in 1997. It was the result of years of work by 17 co-authors, an event celebrated in newspapers. Now Rothberg threw up a quick slide of the germ’s genome just to show what he could do in a matter of hours. And I have to say that the sight of yet another circular map of a genome, on its own, no longer gave me a thrill. It’s a bit like someone waving you over to a telescope and saying, “Look! I found a star!”

What remains truly exciting is the kind of research that starts after the genomes are sequenced: discovering what genes do, mapping out the networks in which genes cooperate, and reconstructing the deep history of life. Thanks to the hundreds of genomes of microbes scientists can now compare, for example, they can see how the history of life is, in some ways, more like a web than a tree. Insights like these are newsworthy. The sequencing of those genomes, on its own, is not.

And yet my email inbox still gets overwhelmed with press releases about the next new genome sequence. The press releases typically read like this:

“Scientists have sequenced the genome of species X. Their research, published today in the Journal of Terribly Important Studies, will lead to new insights about this important species. Maybe it will even cure cancer or eliminate world hunger!”

And then those press releases give rise to news articles. Here are dozens of pieces that came out over the past couple of days, describing the freshly sequenced genome of the zebra finch. What did the scientists actually learn about zebra finches through this exercise? The articles typically referred to 800 genes involved in the birds’ learning how to sing. Of course, nobody would seriously expect them to use just one or two genes for something so complex, so this was no big surprise. The articles also mentioned that a lot of the genes were similar to human language genes. This is not really news, either. For the most part, the articles look to the future—to experiments that scientists will be able to do on zebra finches, in the hopes of learning about human speech. But that’s not news of an achievement—that’s a promise.

Perhaps press release writers and journalists are still operating on the assumption that any new genome is news. But that assumption has been wrong for years now. Let’s wait to see what scientists actually discover in those marvelous mandalas.

[Update: Thanks to Jonathan Eisen for coining the easy-to-remember acronym for my current disorder: YAGS. That goes into my personal lexicon right away.]


Ed Roberts, Personal Computer Pioneer & Mentor to Bill Gates, Dies at 68 | 80beats

Henry Edward Roberts didn’t set out to kick-start the computer revolution. He was just trying to get out of debt.

Roberts, who died yesterday at 68, was an Air Force man in his younger days and a medical doctor in his later ones, but it was the middle part of his life that changed the world. In the mid-1970s, Roberts started a company called Micro Instrumentation and Telemetry Systems (MITS), and in 1975 introduced the Altair 8800—one of the first computers available and affordable for home hobbyists.

When Popular Electronics magazine featured him and the computer on its cover, it caught the attention of two young computer-philes, Bill Gates and Paul Allen. Gates and Allen quickly reached out to Roberts, looking to create software for the Altair. Landing a meeting, the pair headed to Albuquerque, N.M., where Roberts’ company was located. The two went on to set up Microsoft, which had its first offices in Albuquerque [CNET].

When Roberts founded MITS, he was using his technological know-how to make calculators. But when companies like Texas Instruments began to dominate that market, Roberts got squeezed out. In the mid-1970’s, with the firm struggling with debt, Dr Roberts began to develop a computer kit for hobbyists. The result was the Altair 8800, a machine operated by switches and with no display [BBC News]. Prior to the Altair, most computers were still giant machines in university labs, but Roberts said he believed there were enough tech nerds like him in the world that a personal computer—even one as rudimentary as the 8800—would be a success.

Gates reportedly visited Roberts at the hospital days before he died, and Gates and Allen paid tribute to their mentor in a statement. “Ed was willing to take a chance on us–two young guys interested in computers long before they were commonplace–and we have always been grateful to him,” Gates and Allen said. “The day our first untested software worked on his Altair was the start of a lot of great things” [CNET].

Roberts sold off his company in 1977 and retired to farming before becoming an internist. However, his son David Roberts says, he remained interested to the last in his accidental revolution. He never lost his interest in modern technology, even asking about Apple’s highly anticipated iPad from his sick bed. “He was interested to see one,” said Roberts, who called his father “a true renaissance man” [AFP].

Related Content:
80beats: Happy 40th Birthday, Internet! (Um, Again.)
80beats: Happy 40th Birthday, Internet!
80beats: 40 Years Ago Today, the World Saw Its First Personal Computer
DISCOVER: The “Father of the Internet” Would Rather You Call Him “Vint”
DISCOVER: The Emoticon Turns 25

Image: DigiBarn Computer Museum


Pigeons outperform humans at the Monty Hall Dilemma | Not Exactly Rocket Science

Imagine that you’re in a game show and your host shows you three doors. Behind one of them is a shiny car and behind the others are far less lustrous goats. You pick one of the doors and get whatever lies within. After making your choice, your host opens one of the other two doors, which inevitably reveals a goat. He then asks you if you want to stick with your original pick, or swap to the other remaining door. What do you do?

Most people think that it doesn’t make a difference and they tend to stick with their first pick. With two doors left, you should have even odds of selecting the one with the car. If you agree with this reasoning, then you have just fallen foul of one of the most infamous of mathematical problems – the Monty Hall Dilemma. In reality, you should actually swap every time – doing so means double the odds of getting the car. I will explain why shortly but if you’re currently confused, you are not alone. Over the years, the problem has ensnared countless people, including professional mathematicians. But not, it seems, pigeons.

Walter Herbranson and Julia Schroeder showed that, after some training, the humble pigeon can learn the best tactic for the Monty Hall Problem, switching from their initial choice almost every time. Amazingly, humans who get similar extensive practice never develop the optimal strategies that the pigeons pick up. This doesn’t mean that pigeons are “smarter than humans” as some news stories have claimed, but it does mean that the two species approach probability problems in different ways. We suffer because we overthink the problem.

The Monty Hall Dilemma takes its name from Monty Hall, the presenter of a show called Let’s Make a Deal, which involved similar choices. The dilemma became truly legendary when it featured in a column called Ask Marilyn in Parade magazine. When columnist Marilyn vos Savant, the then holder of the Guinness World Record for highest IQ, wrote the correct solution, she was inundated with complaints.

Around 10,000 readers disagreed with her, and wrote in to say as much (these were the days when trolls had to actually pay for postage; read the last letter in particular). Many of them had PhDs and many were mathematicians. Even Paul Erdos, the most prolific mathematician in history, refused to believe this explanation until computer simulations proved beyond all doubt that always switching was the best strategy.

The problem is that most people assume that with two doors left, the odds of a car lying behind each one are 50/50. But that’s not the case – the actions of the host beforehand have shifted the odds, and engineered it so that the chosen door is half as likely to hide the car.

At the very start, the contestant has a one in three chance of picking the right door. If that’s the case, they should stick. They also have a two in three chance of picking a goat door. In these situations, the host, not wanting to reveal the car, will always pick the other goat door. The final door hides the car, so the contestant should swap. This means that there are two trials when the contestant should swap for every one trial when they should stick. The best strategy is to always swap – that way they have a two in three chance of driving off, happy and goatless.
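If the verbal argument still feels slippery, a quick Monte Carlo simulation makes the two-in-three advantage hard to deny – the same kind of brute-force check that finally convinced the doubters. Here’s a minimal sketch in Python (my own illustration, not code from the study):

```python
import random

def play(switch):
    """Simulate one round of the Monty Hall game; return True if the car is won."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Swap to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
stick = sum(play(switch=False) for _ in range(trials)) / trials
swap = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stick:  {stick:.3f}")   # ~0.333
print(f"switch: {swap:.3f}")    # ~0.667
```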

All over the world, people are spectacularly bad at this. We almost always stick or, at best, show indifference. Herbranson and Schroeder wanted to see if other species would be similarly vexed. They worked with six Silver King pigeons and altered the game show format to suit their beaks.

Each pigeon was faced with three lit keys, one of which could be pecked for food. At the first peck, all three keys switched off and after a second, two came back on including the bird’s first choice. The computer, playing the part of Monty Hall, had selected one of the unpecked keys to deactivate. If the pigeon pecked the right key of the remaining two, it earned some grain. On the first day of testing, the pigeons switched on just a third of the trials. But after a month, all six birds switched almost every time, earning virtually the maximum grainy reward.

Every tasty reward would reinforce the pigeon’s behaviour, so if it got a meal twice as often when it switched, you’d expect it to soon learn to switch. Herbranson and Schroeder demonstrated this with a cunning variant of the Monty Hall Dilemma, where the best strategy would be to stick every time. With these altered probabilities, the pigeons eventually learned the topsy-turvy tactic.

It may seem obvious that one should choose the strategy that yields the most frequent rewards, and even the dimmest pigeon should pick up the right tactic after a month of training. But try telling that to students. Herbranson and Schroeder presented 13 students with a similar set-up to the pigeons. There were limited instructions and no framing storyline – just three lit keys and a goal to earn as many points as possible. They had to work out what was going on through trial and error and they had 200 goes at guessing the right key over the course of a month.

At first, they were equally likely to switch or stay. By the final trial, they were still only switching on two thirds of the trials. They had edged towards the right strategy but they were a long way from the ideal approach of the pigeons. And by the end of the study, they were showing no signs of further improvement.

Why is the Monty Hall Dilemma so perplexing to humans, when mere pigeons seem to cope with it? Herbranson and Schroeder think this is a case of our own vaunted intelligence working against us. When faced with a problem like this, we try to think it through, working out the best solution before we do anything. This would be fine, except we’re really quite bad at problems involving conditional probability (such as “if this happens, what are the odds of that happening?”). Despite our best attempts at reasoning, most of us arrive at the wrong answer.

Pigeons, on the other hand, rely on experience to work out probabilities. They have a go, and they choose the strategy that seems to be paying off best. They also seem immune to a quirk of ours called “probability matching”. If the odds of winning by switching are two in three, we’ll switch on two out of three occasions, even though that’s a worse strategy than always switching – matching wins only (2/3 × 2/3) + (1/3 × 1/3), or five times out of nine, compared with six out of nine for always switching. This is, of course, exactly what the students in Herbranson and Schroeder’s experiments did. The pigeons, on the other hand, always switched – no probability matching for them.

In short, pigeons succeed because they don’t over-think the problem. It’s telling that among humans, it’s the youngest students who do best at this puzzle. Eighth graders are actually more likely to work out the benefits of switching than older and supposedly wiser university students. Education, it seems, actually worsens our performance at the Monty Hall Dilemma.

Reference: Journal of Comparative Psychology http://dx.doi.org/10.1037/a0017703


Tourist gets dramatic volcano plume snapshot | Bad Astronomy

A little while back I posted a dramatic satellite image of the Soufrière Hills volcano on Montserrat erupting (thumbnail on the right; click to get the embiggened shot).

Well, as from above, so it is from the side as well. Canadian tourist Mary Jo Penkala was on a plane near Montserrat at the time, and after the pilot made an announcement for passengers to look out the port window, she snapped this:

[Photo: the volcanic plume over Montserrat, snapped by Mary Jo Penkala]

Wow! I’ve seen a lot of amazing things out my airplane window, but never anything close to this. Ms. Penkala is enjoying a bit of well-deserved notoriety for the picture. It’s amazing in and of itself, but I love how we now can get so many views of volcanic plumes: from below, from the side, and even from space. These natural disasters cause a huge amount of damage and grief, of course, but the good news is that with enough study we can learn more about them, how to predict them, and the best time to get people out of any potentially dangerous regions.


“Counterintuitive” social science finding of the day | Gene Expression

The quotation marks are there because you might not find it counterintuitive. From Self-Esteem Development From Young Adulthood to Old Age: A Cohort-Sequential Longitudinal Study:

The authors examined the development of self-esteem from young adulthood to old age. Data came from the Americans’ Changing Lives study, which includes 4 assessments across a 16-year period of a nationally representative sample of 3,617 individuals aged 25 years to 104 years. Latent growth curve analyses indicated that self-esteem follows a quadratic trajectory across the adult life span, increasing during young and middle adulthood, reaching a peak at about age 60 years, and then declining in old age. No cohort differences in the self-esteem trajectory were found. Women had lower self-esteem than did men in young adulthood, but their trajectories converged in old age. Whites and Blacks had similar trajectories in young and middle adulthood, but the self-esteem of Blacks declined more sharply in old age than did the self-esteem of Whites. More educated individuals had higher self-esteem than did less educated individuals, but their trajectories were similar. Moreover, the results suggested that changes in socioeconomic status and physical health account for the decline in self-esteem that occurs in old age

As a person well under 60 but slowly walking in that direction, I’m pretty heartened by this. On the other hand, I’m one of those people who also tend to think that “self-esteem” is a bit overrated, so I’m not that heartened.

Via Randall Parker

Lunokhod 2 Found!

The Lunokhod 2 has been found. Image: University of Western Ontario / LRO

Finally! Way back in 1973 the Soviet Union launched a lunar rover. Truth is, they launched a couple of rovers; this particular one was called Lunokhod 2.

The rover was very successful, operating for over four months, sending back 86 panoramas and over 80,000 TV images, conducting many lunar surface and other tests, and traveling over 22 miles (35 km) in the process. Then, sadly, the rover met its demise. Apparently the open lid containing the solar cells touched a crater wall at some point and picked up some moon dirt. When the lid closed for the night, the dirt fell onto the radiators; the next day, when the lid was opened, the insulating layer of dirt kept the radiators from cooling the rover, and it basically cooked itself.

I’ve been off-and-on looking at the LRO data for signs of these rovers for a while now without any luck. Seems every place I wanted to look at didn’t have data. As it turns out, a fellow named Phil Stooke, a professor at the University of Western Ontario, has found Lunokhod 2. YAY! How did he do it? He used images released just days ago from the LRO along with maps of his own. As I found out, the “maps of his own” bit was a really important part. The maps were needed to pinpoint surface features, which in turn were needed to zoom in on the surface image somewhere close to the target – I found it was nearly impossible to find very small features any other way.

Here’s the time line of Lunokhod 2 (and Luna 21) from Zarya.

Read the press release from the University of Western Ontario.

Having successfully solved a 37-year-old mystery of Lunokhod 2, Professor Stooke is apparently turning his attention to Mars. I hope he goes looking for the Beagle 2!!