What do you get when you cross slightly evolved, status-seeking monkeys with the scientific method?

You get... a mockery of the great method of science, in the same way that monkeys, when presented with a keyboard, use it primarily to defecate on, and take only a secondary interest in the fact that pressing a key makes a character appear. Analogously, slightly-evolved monkey scientists use scientific methods primarily to increase their own status within the slightly-evolved monkey scientist clan. I am not joking; see this example, quoted from Ioannidis's "Why Most Published Research Findings Are False":

An Example: Science at Low Pre-Study Odds:

Let us assume that a team of investigators performs a whole genome association study to test whether any of 100,000 gene polymorphisms are associated with susceptibility to schizophrenia. Based on what we know about the extent of heritability of the disease, it is reasonable to expect that probably around ten gene polymorphisms among those tested would be truly associated with schizophrenia, with relatively similar odds ratios around 1.3 for the ten or so polymorphisms and with a fairly similar power to identify any of them. Then R = 10/100,000 = 10⁻⁴, and the pre-study probability for any polymorphism to be associated with schizophrenia is also R/(R + 1) = 10⁻⁴. Let us also suppose that the study has 60% power to find an association with an odds ratio of 1.3 at α = 0.05. Then it can be estimated that if a statistically significant association is found with the p-value barely crossing the 0.05 threshold, the post-study probability that this is true increases about 12-fold compared with the pre-study probability, but it is still only 12 × 10⁻⁴.

Now let us suppose that the investigators manipulate their design, analyses, and reporting so as to make more relationships cross the p = 0.05 threshold even though this would not have been crossed with a perfectly adhered to design and analysis and with perfect comprehensive reporting of the results, strictly according to the original study plan. Such manipulation could be done, for example, with serendipitous inclusion or exclusion of certain patients or controls, post hoc subgroup analyses, investigation of genetic contrasts that were not originally specified, changes in the disease or control definitions, and various combinations of selective or distorted reporting of the results. Commercially available “data mining” packages actually are proud of their ability to yield statistically significant results through data dredging. In the presence of bias with u = 0.10, the post-study probability that a research finding is true is only 4.4 × 10⁻⁴.

Furthermore, even in the absence of any bias, when ten independent research teams perform similar experiments around the world, if one of them finds a formally statistically significant association, the probability that the research finding is true is only 1.5 × 10⁻⁴, hardly any higher than the probability we had before any of this extensive research was undertaken!
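
The arithmetic in the excerpt is just the positive predictive value (PPV) formulas from Ioannidis (2005), applied at pre-study odds R = 10⁻⁴. Here is a minimal sketch in Python that reproduces the first two figures quoted above (the 12 × 10⁻⁴ and 4.4 × 10⁻⁴ post-study probabilities); the variable names and print formatting are my own, and the inputs are taken straight from the example:

```python
# Post-study probability ("positive predictive value", PPV) of a significant
# finding, following the formulas in Ioannidis (2005). Inputs come from the
# quoted schizophrenia genome-wide association example.

R = 10 / 100_000   # pre-study odds: ~10 true polymorphisms among 100,000 tested
alpha = 0.05       # significance threshold (type I error rate)
power = 0.60       # probability of detecting a true odds ratio of 1.3
beta = 1 - power   # type II error rate
u = 0.10           # bias: proportion of analyses nudged toward "significance"

# PPV with no bias: (1 - beta) R / (R - beta R + alpha)
ppv_unbiased = (1 - beta) * R / (R - beta * R + alpha)

# PPV with bias u: ((1 - beta) R + u beta R) /
#                  (R + alpha - beta R + u - u alpha + u beta R)
ppv_biased = ((1 - beta) * R + u * beta * R) / (
    R + alpha - beta * R + u - u * alpha + u * beta * R
)

pre_study = R / (R + 1)
print(f"pre-study probability:       {pre_study:.1e}")     # ~1.0e-04
print(f"post-study PPV, no bias:     {ppv_unbiased:.1e}")  # ~1.2e-03, i.e. 12 x 10^-4
print(f"post-study PPV, bias u=0.1:  {ppv_biased:.1e}")    # ~4.4e-04
```

In other words, even a squeaky-clean significant result only moves the needle from 1-in-10,000 to roughly 12-in-10,000, and a modest amount of bias eats most of that gain.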

Robin Hanson has written extensively on science as mostly being about status-seeking behaviour, with little interest in whether the global pool of scientific knowledge is increased.
