Blog Discussion with an SBM Critic

Over the last couple of days I have been engaged at NeuroLogica in a discussion with a fellow blogger, Marya Zilberberg who blogs at Healthcare, etc. Since the topic of discussion is science-based medicine I thought it appropriate to reproduce my two posts here, which contain links to her posts.

A Post-Modernist Response to Science-Based Medicine

I receive frequent commentary on my public writing, which is great. The feature that most distinguishes blogs is that they are conversations. So I am glad to see that science-based medicine (a term I coined) is being targeted for criticism on other blogs. One blogger, Marya Zilberberg at Healthcare, etc., has written a series of posts responding to what she thinks is our position at Science-Based Medicine. What she has done, however, is commit many of the logical fallacies typically offered in defense of unscientific medical modalities, and frame them as one giant straw man.

She is partly responding to this article of mine on SBM (What’s the harm) in which I make the point that medicine is a risk vs benefit game. Ethical, responsible medical practice involves interventions where there is at least a probability of doing more benefit than harm, delivered with proper informed consent so the patient knows what those chances are. Using scientifically dubious treatments, where there is little or no chance of benefit, especially when they are overhyped, is therefore unethical. And further, the “harm” side of the equation needs to include all forms of harm, not just direct physical harm.

Zilberberg’s response is the typical tu quoque logical fallacy — well, science-based medicine is not all it’s cracked up to be either, so there. She writes:

Now, let’s get on to “proof” in science-based medicine. As you well know, while we do have evidence for efficacy and safety of some modalities, many are grandfathered without any science. Even those that are shown to have acceptable efficacy and safety profiles as mandated by the FDA, are arguably (and many do argue) not all that. There is an important concept in clinical science of heterogeneous response to treatment, HTE, which I have addressed extensively on my blog. I did not make it up, it is very real, and it is this phenomenon that makes it difficult to predict how an individual will respond to a particular intervention. This confounds much of what we think is God’s own word on what is supposed to work in allopathic medicine.

This is also the fallacy of the perfect solution — since science-based medicine is not perfect, there is no legitimate basis for criticism of any modality. This is also premised on the false dichotomy of “allopathic medicine” (a derogatory term only used, in my experience, by defenders of dubious medicine) vs “alternative” medicine (which I will refer to as CAM for short). I and others at SBM have been clear that we eschew this false dichotomy. There is only medicine with varying degrees of plausibility and evidence — there is a continuum, and we advocate always using the best that is available. We also think there should be a minimum standard, a fuzzy line of plausibility and evidence below which treatments should only be given with proper informed consent as part of an approved clinical trial. And a further line below which even research is unethical because there is no plausible potential for benefit.

These principles are, in fact, already part of ethical medicine. We did not invent these concepts. It is, rather, the proponents of CAM who wish to do away with this ethical standard — to create a false dichotomy in order to establish a double standard. We are not trying to create a new standard, just to do away with the double standard of CAM.

She refers to the heterogeneous response to treatment, again as if realization of this basic fact is not already part of science-based medicine. I, in fact, explain this to patients all the time. Our knowledge of treatments is based upon statistics, but we can never know ahead of time how an individual patient will respond. What’s the alternative? Until we get better at predicting individual response (which really will just be another application of statistics), this is the best we can do. That is why you monitor the individual response to any treatment, and act accordingly. This is basic medical student stuff, but Zilberberg acts as if this is a big revelation for science-based medicine.

We at SBM advocate for the highest scientific standards of medicine, and apply that across the board — including with pharmaceutical companies, surgeons, and anything else that is labeled as “mainstream.” Again — we do not make false categories and distinctions. It is all medicine.

The reference to “God’s own word” is an obvious allusion to the bad-old-days of paternalistic medicine (dead and buried for decades now), or the TV caricature of a doctor with a God complex. This is a typical ploy — portray any attempt at defending a scientific standard in medicine as paternalistic arrogance. In fact, Zilberberg dedicates an entire blog post to this fallacy. She writes:

First of all, it is my belief that all interventions should be approached with equanimity, if not equipoise. Although I am quite dubious that either healing crystals or Reiki can produce actual results, I do not want to confuse the absence of any evidence to this effect with the evidence of absence of the effect. Although I am not that interested in allocating resources to studying these fields, it would be paternalistic of me to bar their further investigation. So the society can decide what it wants to do with them, and in the meantime every individual can make her/his own choice whether to spend their money on them.

This is clearly where we differ. I do think, from reading her writing, that Zilberberg means well and is sincere in her positions (unlike some I criticize who I feel are just trying to sell something). But notice the logical contortions in her position: she wants “equipoise” with regard to all interventions, and would not dare dictate how research money is spent. I should point out that there is a range of opinions at SBM when it comes to regulation, so we are not a united front on this score. Some of us are libertarians who think we should educate the public and professionals but oppose laws that would restrict access to unscientific modalities; essentially, people have the right to make stupid decisions. Others believe that there needs to be a minimum safety net against fraud and quackery, and in fact the public wants there to be one and believes there already is one. I don’t want to get bogged down in this debate in this blog entry; I am just pointing out that Zilberberg’s premise is overly simplistic and paints with too broad a brush.

But the real point here is that she is taking an almost post-modernist position that we need to approach all claims in medicine with “equipoise.” She says that society can decide how research money is spent, even if she would not personally research an implausible topic. Depending upon how you slice it, this is not necessarily far off from my position. If people want to raise money to research an implausible question they should go right ahead. I never proposed banning implausible research. My position, rather, is that we should not waste limited public/government research resources on highly implausible modalities.

I would also add, however, that once you start doing research on humans, a host of ethical considerations also comes into play. In human research the accepted ethical standard is that subjects should at least have a chance of benefiting from the treatments being studied, or at least there should be a greater chance of benefit than harm. I don’t see how this ethical standard can be met with homeopathy, for example, where there is essentially zero chance of benefit. At some point you pass a line of infinitesimal plausibility where the ethics become problematic.

Zilberberg then makes the “absence of evidence vs evidence of absence” mistake — really an oversimplification of this concept to the point of being wrong. While the absence of evidence is not necessarily evidence of absence, it can be, depending upon how thoroughly you have looked. If I search my house for a specific item and don’t find it, that is pretty good evidence that it’s not there. It is not “proof” of absence, but it is evidence. With many of the modalities that Zilberberg admits she is personally dubious about, there is evidence of absence of an effect. This evidence comes in two forms: all of the science that tells us the modalities are highly implausible, and, often, clinical evidence of lack of an effect. To pretend otherwise is dishonest — it is hiding from the facts out of political correctness.

Further, our patients do not want equipoise from us. They want our informed opinion. When patients ask me if they should take a homeopathic remedy I don’t give them a wishy-washy answer. I give them my informed opinion, and they are grateful to have it. In the comments to her blog a commenter speculates about my bedside manner, assuming, essentially, that I must be a paternalistic ass. This is the typical cardboard caricature I encounter, and it has no relationship to reality. It is possible to give patients useful information without being judgmental, and to give them informed consent (how do you do this, by the way, without giving them information?) while understanding that they will make up their own minds. Patients are in charge of their own health care, and our job as clinicians, more than ever, is to give them the information and perspective they need to make good decisions. This does not demand “equipoise”, but evidence and perspective. In my opinion, equipoise in the face of ridiculously implausible claims and evidence of lack of efficacy is a disservice to patients and a violation of trust.

Ironically, Zilberberg concludes:

Bottom line, we need to appreciate that none of the science is all that straightforward. Let us not dumb down the arguments and create false dichotomies. If we do, no one wins.

Does she actually read science-based medicine? I am left to wonder — since we regularly argue for the complexity of the science of medicine. I want people to understand how complex the relationship is, so they are not shocked every time conflicting studies come out. Medical science is a messy business, and it is often challenging to infer what the best approach is. I want the profession and the public to have a much more nuanced understanding of medical science, and for the media to do a better job of representing it.

This is especially true since we do not have a paternalistic system. Patients are partners in their own health care, and therefore it helps me do my job when they understand the science that underpins medicine.

Zilberberg’s position is anti-science, although perhaps not deliberately so. It is anti-science in a post-modernist sense. She points out all the limitations of science, as if that means we cannot come to any meaningful decision, and therefore must treat all claims as equal. But all claims are not equal. Even the best are imperfect, but we can still apply science and evidence to make informed decisions about the probability of risk vs benefit. And there are some claims that are so against science and evidence (like homeopathy) that any stance other than rejection is a violation, in my opinion, of medical ethics and the trust that society places in medical professionals.

In Zilberberg’s world, however, any such judgments are the equivalent of pronouncing that these treatments over here in pile A are deemed “scientific” (as if by the word of God) and are accepted. And these over here in pile B are deemed “nonsense” and are to be ridiculed. But the false dichotomy is in her mind, not in science-based medicine. We are the ones railing against the false dichotomy — that of CAM which seeks to create a double standard. All we advocate is one consistent standard of science and evidence when evaluating all medical claims, and the rational application of science to the practice of medicine.

One final note — I would much prefer to have a conversation with the critics of science-based medicine that does not constantly involve defending SBM and myself from false accusations of arrogance and paternalism. I think it says a lot about their intellectual position when that is constantly the best they have.

Dr. Zilberberg Responds

Dr. Zilberberg responded to my original post and significantly modified her tone, to her credit. (She was simultaneously responding to Orac’s analysis of her posts as well.) Here is my analysis of that post.

The Tone Thing

I will address her main points below, but first my final thoughts on the “tone” thing. While she admits fault in setting the “confrontational tone,” I don’t think she quite gets what Orac and I were objecting to. I actually don’t mind a confrontational approach — as long as it is substantive (that’s the way science works — if you have a point to make, bring it on). We were objecting to her mischaracterizing our position and making ad hominem attacks in place of substantive criticism — essentially using the “arrogant” gambit with which we are all too familiar. Her readers obviously picked up on this, and piled on, accusing us of being bullies and thanking her for slapping us down. We objected to her logical fallacies, not her tone.

Interestingly Zilberberg’s initial response was dismissive, and she reiterated the charge of paternalism and arrogance, writing: “If the shoe fits?” At least now she seems to realize that if we are going to have a productive discussion, focusing on ad hominem attacks will be counterproductive.

Incidentally, having written about medicine for years I have definitely seen a strong pattern. When I criticize the logic and factual premises of another person’s argument I am frequently accused of being mean by people who then attack me personally. It seems many people do not understand the difference between a strong but substantive criticism and a personal attack. Zilberberg was falling into this category, but has significantly (if incompletely) backed off from that with her latest post.

One more minor point — “allopathic” is derogatory and does not apply to modern medicine (it was coined by Samuel Hahnemann to refer to the poisons that passed for medicine in his time, and was definitely meant as a criticism). I would suggest she drop this term rather than defend it.

Evidence in Medicine

Zilberberg then launches into a meaty discussion of what her position actually is. She observes that perhaps we are not that far off in our positions, which I think is true. There is a meaningful difference in spin (the final conclusions drawn from the analysis), but her analysis of the role of evidence in medicine is reasonable. But again, to clarify, Orac and I were not objecting to the point that evidence in medicine is messy and complex. We were objecting to the accusation that we do not understand this, and that we are promoting an overly simplistic, cheerleading approach to science in medicine. This left me with the impression that Zilberberg has not read deeply into the Science-Based Medicine website, or at least has failed to grasp what it is we are actually saying.

If she had she would have seen post after post in which SBM authors were pointing out all of the complexities and deficiencies of evidence in medicine that she and others might also point to. That is core to the point of SBM — evidence is complex. She might, in fact, have read my series of posts on evidence in medicine. We do spend a great deal of time pointing this out in the context of so-called CAM, because CAM proponents are the ones who most profoundly take a simplistic approach to the evidence. They engage in black-and-white thinking, display intolerance of ambiguity, and frequently advocate for the reliance on very problematic low-grade evidence to support their claims. But we also consistently apply the same standards to surgery and the pharmaceutical industry, and anything “mainstream.”

Zilberberg reviews the relative roles of experimental evidence vs observational evidence. Her analysis is reasonable, but I think she overstates the utility of observational data a bit (and she admits to a fondness for this type of data). The bottom line is that each type of evidence (basic science, observational, and experimental — and even anecdotal) has its own strengths and weaknesses, and the best result comes from analyzing all kinds of scientific evidence and looking for a consensus of evidence. That is, in fact, OUR criticism of evidence-based medicine: over-reliance on randomized controlled clinical trials and undervaluing of other forms of scientific evidence. That is why we advocate for “science”-based medicine, and not just “evidence”-based medicine.

Each type of evidence, in fact, is abused. We criticize the inappropriate extrapolation from basic science to clinical claims, assuming causation from observational correlation, failure to realize the limits of clinical trials, and the use of pragmatic studies as if they were evidence for efficacy.

Zilberberg also clarifies her position by saying that she feels there is good scientific evidence for some of medicine, but it seems she differs from my position on how evidence-based modern medicine actually is.

We can argue endlessly about this question — how much of modern medicine is based upon solid evidence — each pointing to limited examples and essentially giving our bias. But there are some facts we can point to. Zilberberg writes:

While it is true that the oft-cited 5-20% number representing the proportion of medical treatments having solid evidence behind them is very likely outdated, the kind of evidence we are talking about is a different matter.

The “5-20% number” is not outdated — it’s a myth. Actually, I had previously heard 15% as the low end, but I guess that number keeps dropping. I wrote previously about this myth here. The 15% number was based upon an extremely small survey of primary care practices in the north of England — in 1961. That’s almost 50 years ago. The number was never very relevant, and now it’s a joke.

More recent surveys of medical practice come to very different numbers. Bob Imrie reviewed the published evidence:

Thus, published results show an average of 37.02% of interventions are supported by RCT (median = 38%). They show an average of 76% of interventions are supported by some form of compelling evidence (median = 78%).

Of course, where you draw the line for “supported by compelling evidence” will determine what the percentage figure is. But the bottom line is that the 15% figure is basically an urban legend, and “5%” is nothing short of propaganda. More reasonable estimates range much higher.

And — the point of EBM and SBM is that we can and should do better. We also need to do better in adhering to EBM guidelines where they exist, and in utilizing continuing medical education and other mechanisms of quality control to improve adherence to the evidence where it does exist.

The difference in spin is not subtle. We can look at the evidence and say: modern medicine has a culture of science, endeavors to be scientific, and basically the system works but the process is complex and messy and there are multiple ways in which we can do better. Meanwhile someone else can look at the same data and conclude: modern medicine is broken, it is based upon arrogance, authority, and greed, and we can just throw up our hands and conclude that any treatment is as likely to be of value as any other, no matter how silly it may seem scientifically.

My position is essentially the former. Zilberberg came off originally as being close to the latter (and judging by the comments, many readers took her position to be supportive of the latter), but now has clarified that she is somewhere in the middle.

CAM

Zilberberg also clarifies her position on CAM. She had previously written that she advocates a position of “equipoise” towards clinical claims. Even though she might not use certain modalities herself, she sees no basis to condemn the use of them by others. I characterized this position as political correctness gone wild — to the point of practical post-modernism. Now she writes:

My belief is that all modalities that may impact what happens to public’s health need to be evaluated for safety, not question. I think we both agree, since there is really no reason to think that something like homeopathy has anything that can help, by the same token we do not believe that it have anything that can hurt. Same with healing crystals, reiki and prayer. So, if a person wants to engage in these activities, and they are perfectly safe physically, be my guest. Other modalities, such as chiropractic, acupuncture, herbalism and the like, definitely need to be evaluated more stringently, as there is reason to think that they may cause harm.

This is a common position to take. Val Jones at SBM coined the term “shruggie” to refer to this position — in essence, if there is no direct harm, then who cares what people do. First, as I discussed very recently on SBM, there are many types of harm from unscientific medical modalities other than direct physical harm. So I do not find this position tenable for that reason alone.

Further, context is everything. There are actually a variety of positions that authors at SBM take when it comes to regulating medical practice. We all generally believe that medical professionals should not engage in or promote unscientific methods. In fact, we should oppose their adoption and promotion, oppose their inclusion in universities and mainstream hospitals, and oppose spending public funds on researching extremely implausible or already disproven modalities. That seems to be a point of difference between myself and Zilberberg.

I personally do not oppose individuals doing whatever they want when it comes to their own health. If you want to chew on tree bark (a vivid example given to me by someone else), go right ahead. What I object to is someone selling the tree bark and claiming that it cures cancer based upon nothing but legend and anecdote, and scaring their customers away from proven therapies in order to make the sale. I object to distortions of logic and science in order to confuse the public so as to better market worthless or harmful products. And I object to medical professionals looking the other way out of misguided political correctness, or simply a naivete as to the significant harm that is done.

SBM has a huge consumer protection mission, and it both puzzles and frustrates me that this mission is so often and so thoroughly misrepresented. This misrepresentation is deliberate — part of the “health freedom” movement — and seeks to portray all health care consumer protection activity as arrogant elitism and protectionism. This is identical to the intelligent design movement’s representation of all attempts at quality control in education as arrogant elitism.

What I don’t understand is Zilberberg’s apparent position that, while she knows homeopathy is utterly worthless, a physician should refrain from telling her patients exactly that.

Vaccine Skepticism

Zilberberg goes on to argue that she is not anti-vaccine, as she has been accused (not by me). I have no reason not to accept her word on this, and it is good that she has clarified her position.

But I do think she is displaying a lack of appreciation for the nature of the anti-vaccine movement. By analogy: if one publicly expresses doubt about an aspect of currently accepted Darwinian evolution, it would be nice to understand the many ways in which the scientific discourse is exploited by creationists, so that one doesn’t accidentally give succor to an anti-scientific movement.

Likewise, any public discussion about vaccines, while it should be candid and completely honest, should ideally be conducted with an adequate familiarity with the anti-vaccine movement’s propaganda, so that one’s words and positions are not easily exploited. In fact, when expressing skepticism about a particular vaccine or vaccine program, I would recommend specifically clarifying one’s position to distance oneself from the extremists. Otherwise you are inviting misinterpretation.

Conclusion

The take home message from this exchange is that, in my opinion, accusations of using harsh tone or of arrogance are an ad hominem distraction from the real issue — what is the optimal relationship between the practice of medicine and the underlying science of medicine.

Zilberberg engaged fully in this distraction, but is now slowly backing away (but not enough, in my opinion). I think this was largely due to the fact that she has been taken in by the very active and sophisticated propaganda campaigns of CAM proponents. She seems to have bought into their rhetoric, and did not read carefully enough into our writing at SBM to see through it.

We are approaching 1000 blog posts at SBM. I don’t expect critics to read every post, but a tiny bit of scholarly due diligence would be nice, before essentially buying into the lies and distortions of our critics.

We at SBM write frequently about the complexity and limitations of the science of medicine. That is our mantra — a nuanced and sophisticated approach to evidence is needed. But at the end of the day, some treatments are better than others. We can accept and reject practices based upon plausibility and evidence, even while there is a vast gray zone in the middle where we just don’t know yet.

It is misleading and ironic in the extreme to criticize promoters of SBM for taking a simplistic approach to evidence. That is the opposite of the truth. Meanwhile, promoters of all sorts of so-called CAM do take a simplistic and highly distorted approach to evidence, display an intolerance of uncertainty, systematically misrepresent the evidence to their clients and the public, think in stark black-and-white terms, engage in bait-and-switch deceptions, distort the positions of their critics, rely upon low grade evidence and logical fallacies for their claims, and then hide behind political correctness, post-modernism, distractions about “health care freedom”, special pleading (science can’t test my claims), and accusations of arrogance and paternalism.

All of this behavior is carefully documented in the pages of Science-Based Medicine. Would-be critics of SBM should try reading some of them before launching into misguided criticism of what is ultimately a straw man of our actual positions.

I take Zilberberg at her word that she is interested in genuine discussion, and she has at least moved in that direction. I recommend she step back, read some more of SBM and see what we actually have to say about science and medicine.

Fatigued by a Fake Disease

One of the realities of being a pharmacist is that we’re easily accessible. There’s no appointment necessary for consultation and advice at the pharmacy counter. Questions range from “Does this look infected?” (Yes) to “What should I do about this chest pain?” to more routine questions about conditions that can easily be self-treated. Part of the pharmacist’s role is triage — advising on conditions that can be self-managed, and making medical referrals when warranted. Among the most common questions I receive are those related to stress and fatigue. Energy levels are down, and patients want advice and solutions. Some want a “quick fix,” believing that the right combination of B-vitamins is all that stands between them and unlimited energy. Others may ask if prescription drugs or caffeine tablets could help. Evaluating vague symptoms is a challenge. Many of us have busy lifestyles, and don’t get the sleep and exercise we need. We may compromise our diets in the interest of time and convenience. With some simple questions I might make a few basic lifestyle recommendations, talk about the evidence supporting supplements, and suggest physician follow-up if symptoms persist. Fatigue and stress may be part of life, but they’re also symptoms of serious medical conditions. And they can be hard to treat because they’re non-specific and may not be easily distinguishable from the fatigue of, well, life.

This same vague collection of symptoms is called something entirely different in the alternative health world. It’s branded “adrenal fatigue,” an invented condition that’s widely embraced as real among alternative health providers. There’s no evidence that adrenal fatigue actually exists. The public education arm of the Endocrine Society, representing 14,000 endocrinologists, recently issued the following advisory:

“Adrenal fatigue” is not a real medical condition. There are no scientific facts to support the theory that long-term mental, emotional, or physical stress drains the adrenal glands and causes many common symptoms.

Unequivocal words. But facts about adrenal fatigue neatly illustrate why a science-based approach is a consumer’s best protection against being diagnosed with a fake disease.

The adrenals are a pair of glands that sit on the kidneys and produce several hormones, including the stress hormones epinephrine and norepinephrine that are associated with the “fight or flight” response. Can you tire these glands out? In the absence of any scientific evidence, chiropractor and naturopath James Wilson coined the term “adrenal fatigue” in his 1998 book of the same name. Take a look at Wilson’s own questionnaire, at adrenalfatigue.org, to see if you have it. Do you ever experience the following?

  1. Tired for no reason?
  2. Having trouble getting up in the morning?
  3. Need coffee, cola, salty or sweet snacks to keep going?
  4. Feeling run down and stressed?
  5. Crave salty or sweet snacks?
  6. Struggling to keep up with life’s daily demands?
  7. Can’t bounce back from stress or illness?
  8. Not having fun anymore?
  9. Decreased sex drive?

If you answered yes to any of these questions, you may have adrenal fatigue.

Some lifestyles are apparently more vulnerable to adrenal fatigue, including single parents, shift workers, an “unhappily married person”, and the “person who is all work, no play.” There’s no information provided to substantiate the quiz, qualify the vague terminology, or link to any relevant literature. (Of course there is the usual Quack Miranda warning which makes all of this possible: “These statements have not been evaluated by the Food & Drug Administration … etc.”)

Based on this quiz, it’s safe to assume that adrenal fatigue is the most prevalent fake disease in the world. And sure enough, that’s what its proponents think, too:

Dr. John Tinterra, a medical doctor who specialized in low adrenal function, said in 1969 that he estimated that approximately 16% of the public could be classified as severe, but that if all indications of low cortisol were included, the percentage would be more like 66%. This was before the extreme stress of 21st century living, 9/11, and the severe economic recession we are experiencing.

So let’s look into the medical literature on adrenal fatigue. There’s no entry in Dorland’s medical dictionary, nor does the ICD classify it as a medical condition. PubMed lists only one relevant paper, a review by two naturopaths published in the Alternative Medicine Review. But there’s no evidence for them to review.

Fake diseases are compilations of various symptoms into conditions without any scientific basis. Peter Lipson has examined this in detail here at SBM. As Dr. Lipson points out, it’s human nature to want answers and to understand patterns of symptoms. Defining a cluster of symptoms in general terms is the first mistake. Symptoms need to be collated in a rational way to understand the parameters of the disorder. With adrenal fatigue, there’s no objective operational description, nor is there a validated symptom score. Using a vague list of symptoms to identify patients is the second mistake. While laboratory tests are advertised for identifying adrenal fatigue, there’s no persuasive data to demonstrate that blood or saliva tests provide any meaningful information, or are correlated with any underlying pathology.

Adrenal fatigue shouldn’t be confused with adrenal insufficiency, a legitimate medical condition that can be diagnosed with laboratory tests and has a defined symptomatology. Addison’s disease causes primary adrenal insufficiency and usually has an autoimmune cause, with symptoms appearing when most of the adrenal cortex has been destroyed. Secondary adrenal insufficiency is caused by a pituitary disorder that provides insufficient hormonal stimulation to the adrenals. Some liken adrenal fatigue to a milder form of adrenal insufficiency — but there’s no underlying pathology that has been associated with adrenal fatigue. That’s actually a common method of disease invention: take a real disease and claim that it exists in a subclinical form, though of course it lacks a single unambiguous sign or symptom. We are supposed to believe that it’s still a serious problem even though it is, by definition, so mild that it is undiagnosable by any physician.

While adrenal fatigue may not exist, the same can’t be said for the treatments. When you’re treating a fake disease, anything goes: everything from homeopathy to herbal remedies, hydrotherapy, traditional Chinese medicine, and vitamin supplements is advocated as treatment. The endpoints of treatment are as nonspecific as the criteria for diagnosis. Wilson, conveniently, has his own supplement programs. The Adrenal Fatigue Institute (apparently unrelated to Wilson) sells a supplement called Cylapril via TV infomercials and online ads. Disappointingly but perhaps not surprisingly, there are a number of health professionals who offer adrenal fatigue services, from labs that will diagnose it with scientific-looking lab reports [PDF] to pharmacies that offer specialty-compounded adrenal fatigue products.

Conclusion

While adrenal fatigue may not exist, this doesn’t mean the symptoms people experience aren’t real. These same symptoms could be caused by true medical conditions such as sleep apnea, adrenal insufficiency, or depression. Accepting a fake disease diagnosis from an unqualified practitioner is arguably worse than receiving no diagnosis at all. Patients don’t receive a science-based evaluation of their symptoms, and they may be sold unnecessary treatments that are probably ineffective and potentially harmful. There’s no question that it would be frustrating to be experiencing fatigue symptoms and then to be told by a health professional that there is nothing medically wrong. But that is arguably better than the distraction of treating a fictitious condition.

Corporate pharma ethics and you

Although I’m one of the few non-clinicians writing here at SBM, I think about clinical trials a great deal – especially this week.

First, our colleague Dr. David Gorski wrote a superb and highly-commented analysis of The Atlantic story by David H. Freedman about the work of John Ioannidis – more accurately, of Freedman’s misinterpretation of Ioannidis’s work. While too rich to distill to one line, Dr. Gorski’s post struck me in that we who study the scientific basis of medicine actually change our minds when new data become available. That is a Good Thing – I want my physician to guide my care based on the latest data that challenge or overturn previously held assumptions. However, this concept is not well appreciated in a society that speaks in absolutes (broadly, not just with regard to medicine), expecting benefits with no assumption of risk or sacrifice in reaping them. Indeed, the fact that we change our minds, evolving and refining disease prevention and treatment approaches, is how science and medicine move forward.

Then, I had the opportunity to hear an excellent talk on pharmaceutical bioethics by Ross E. McKinney, Jr., MD, Director of the Trent Center for Humanities, Bioethics, and History of Medicine at Duke University School of Medicine. McKinney is a pediatric infectious disease specialist who led and published landmark Phase I and Phase II trials of zidovudine (AZT) in pediatric AIDS patients. While he continues working in this realm, McKinney also studies clinical research ethics, conflicts of interest, and informed consent. I was absolutely fascinated and refreshed to hear from an expert who, while describing and citing major ethical lapses in our system of drug development, is also willing to propose solutions and to do the hard thinking required to maximize the benefits we derive from pharmaceuticals while minimizing unethical behavior.
From his presentation abstract:

The system the United States uses to develop and approve new drugs and devices is fraught with ethical problems. On the one hand, tremendous strides have been made in the treatment of HIV, cancer, and heart disease. Drug development can work and save human lives. On the other hand, drug companies have repeatedly withheld vital information that directly affects human health. Sins of omission that cost human lives have become part of the cost of doing business. Why have we allowed this situation to evolve, and what can we do to improve ethical behavior on the part of the pharmaceutical and device industry?

(Related: See this Dr. Peter Lipson SBM post on our “tremendous strides” in heart disease.)

Evil drug companies

The primary case for discussion was the well-known Avandia episode, in which a 2007 NEJM meta-analysis by Steven Nissen showed an increased cardiovascular risk for GlaxoSmithKline’s PPAR-γ agonist diabetes drug, rosiglitazone – effects not reported for pioglitazone (Actos), a similar drug from Takeda – and GSK was shown to have had knowledge of the risk. He then cited the 2008 Winkelmayer article in Archives of Internal Medicine, which retrospectively assessed the risks of the two drugs in 28,000+ patients over 29,000+ patient-years and concluded that rosiglitazone was associated with 15% greater mortality and 13% more cases of congestive heart failure than pioglitazone. It was in the public’s best interest that a prospective trial of the two drugs be done, and while GSK ultimately tried to launch such a study, patient recruitment was hindered by the news about rosiglitazone’s safety.

McKinney began by noting that we need to accept the fact that a pharmaceutical company’s primary mission is to produce a return for shareholders by bringing to market the most effective drugs, for the largest population, whose benefits far outweigh their adverse effects. While sitting there, I also began to think about this concept more broadly: for readers who think that “drug companies” are evil profit-mongers, I encourage you to take a look at the precise stock holdings in the mutual funds of your 401(k) or 403(b) retirement accounts.

These are my words, not Dr. McKinney’s: it’s disingenuous and intellectually lazy to say that profits are all “drug companies” care about when many, many folks – including the objectors who populate the comment threads of this blog and others – benefit financially from the business practices of the industry. Let him who is without fault cast the first stone.

What would YOU do?

What I enjoyed next was that McKinney challenged the audience to declare what they would have done if they were working for the company and their jobs – and the jobs of others – depended on the sales of what had become a $3 billion/year drug. He wouldn’t let us just sit passively; for a brief moment, you really had to think about being in the decision-maker’s shoes. I took a moment during the talk to pull up the Nissen paper and look at the actual numbers – the absolute risk of adverse effects instead of the relative numbers. I encourage you right now to go to Table 3 and look at the actual numbers of myocardial infarctions and deaths from cardiovascular causes in control patients versus patients taking rosiglitazone in each of the trials. Yes, the analysis of the data as a whole showed that rosiglitazone carried significant risk, but can you see how easy it might be to convince yourself that there wasn’t really a problem with your drug?
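The relative-versus-absolute distinction is easy to make concrete with arithmetic. Here is a minimal sketch in Python using hypothetical event counts (illustrative only, not the actual numbers from Nissen’s Table 3):

```python
# Illustrative arithmetic only -- hypothetical counts, not the actual
# figures from the Nissen meta-analysis.
def risks(events_drug, n_drug, events_control, n_control):
    """Return (risk on drug, risk on control, absolute difference, relative risk)."""
    ar_drug = events_drug / n_drug
    ar_control = events_control / n_control
    return ar_drug, ar_control, ar_drug - ar_control, ar_drug / ar_control

# A 40% relative increase can correspond to a tiny absolute difference:
ar_d, ar_c, ard, rr = risks(events_drug=14, n_drug=2000,
                            events_control=10, n_control=2000)
print(f"risk on drug: {ar_d:.2%}, on control: {ar_c:.2%}")  # 0.70% vs 0.50%
print(f"absolute risk difference: {ard:.2%}")               # 0.20%
print(f"relative risk: {rr:.2f}")                           # 1.40
```

With rare events like these, a single trial sees only a handful of excess cases, which is exactly why it is so easy to convince yourself there is no problem with your drug until many trials are pooled.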

In another part of his talk, he challenged us (still as hypothetical company employees) to design a study to test our hypothetical new drug for mild-to-moderate pain, and to say whether we thought it best to compare it against aspirin, ibuprofen, codeine, or celecoxib (Celebrex). What’s the right comparison drug if you want to do the study correctly? Do you want to chance your $200/month drug against pennies-per-dose aspirin or ibuprofen? Do you want to play hardball against the equally expensive Celebrex and risk that your drug might not perform better?

What’s the right study to do in the interest of patients?

What’s the right study to do in the interest of your continued employment?

Solutions?

McKinney also spent time talking about the need for stronger disincentives for pharma management to behave unethically. The $2.4 billion that GSK had to set aside for Avandia litigation may not be a large enough penalty; for a drug that had such a huge market, it might simply be viewed as a cost of doing business. Recent legislation to reward inside whistle-blowers personally might increase revelations of wrongdoing, similar to this week’s award – also related to GSK – to a drug manufacturing quality manager.

Finally, McKinney also spoke of the unavoidable conflicts of interest of academic investigators conducting industry-sponsored clinical trials – again reminding the uninitiated in the audience that the NIH funds a vanishingly small number of clinical trials and that pharma’s total clinical trial expenditures are roughly twice the entire NIH budget.

Caring too much can also be a COI

McKinney noted that conflicts of interest are not always nefarious or driven by money. As a physician who treats infants and children with HIV/AIDS, McKinney stated that he has a conflict of interest simply in wanting a new drug to work for his patients. Trying to keep kids from suffering is a strong motivator. In fact, the desire is so strong that if an investigator is not blinded, some bias may creep into variables that are more subjective.

What can we do? We’re only doing half the job if we simply point out problems with the system. We have to propose and experiment with solutions. We have to work hard to minimize the introduction of bias into studies. We have to provide strong disincentives for companies to behave unethically. But solutions will have costs of their own, which we must also be willing to accept. For example, if fines are levied that drive a major multinational company to bankruptcy, we must accept the loss of its innovation to the collective worldwide drug discovery effort.

The solutions are not easy. The discussions are difficult. It’s just as easy to bleat that doctors don’t care if they kill patients because they take drug company money as it is to say that rainbows and unicorns flow forth from drug company research campuses. Having the discussions, pushing others to evaluate their own ethics, and thinking through tough financial and clinical decisions is grueling. I was delighted to have the opportunity this week to be pushed outside my comfort zone. It should happen more often.

Lies, damned lies, and…science-based medicine?

I realize that in the question-and-answer session after my talk at the Lorne Trottier Public Science Symposium a week ago, I suggested that I would soon be doing a post about the Rife Machine. (The suggestion came in response to a man named Leon Maliniak, who monopolized the first part of what was already a too-brief Q&A session by expounding on the supposed genius of Royal Rife.) And so I probably will; such a post is long overdue at this blog, and I’m surprised that no one’s done one after nearly three years. However, as I arrived back home in the Detroit area Tuesday evening, I was greeted by an article that, I believe, requires a timely response. (No, it wasn’t this article, although responding to it might be amusing, even though it’s a rant against me based on a post that is two and a half years old.) Rather, this time around, the article is in the most recent issue of The Atlantic and on the surface appears to be yet another indictment of science-based medicine, this time in the form of a hagiography of the Greek researcher John Ioannidis. The article, trumpeted by Tara Parker-Pope, comes under the heading of “Brave Thinkers” and is entitled Lies, Damned Lies, and Medical Science. It is being promoted in news stories where the story is spun as indicating that medical science is so flawed that even the cell-phone cancer data can’t be trusted.


Let me mention two things before I delve into the meat of the article. First, these days I’m not nearly as enamored of The Atlantic as I used to be. I was a long-time subscriber (at least 20 years) until last fall, when The Atlantic published an article so egregiously bad on the H1N1 vaccine that our very own Mark Crislip decided to annotate it in his own inimitable fashion. That article was so awful that I decided not to renew my subscription; it is to my shame that I didn’t find the time to write a letter to The Atlantic explaining why. Fortunately, this article isn’t as bad (it’s a mixed bag, actually, making some good points and then undermining some of them by overreaching), although it does lay on the praise for Ioannidis and the attacks on SBM a bit thick. Be that as it may, clearly The Atlantic has developed a penchant for “brave maverick doctors” and using them to cast doubt on science-based medicine. Second, I actually happen to love John Ioannidis’ work, so much so that I’ve written about it at least twice over the last three years, including The life cycle of translational research and Does popularity lead to unreliability in scientific research?, where I introduced the topic using Ioannidis’ work. Indeed, I find nothing at all threatening to me as an advocate of science-based medicine in Ioannidis’ two most famous papers, Contradicted and Initially Stronger Effects in Highly Cited Clinical Research and Why Most Published Research Findings Are False. The conclusions of these papers to me are akin to concluding that water is wet and everybody dies. It is, however, quite good that Ioannidis is there to spell out these difficulties with SBM, because he tries to keep us honest.

Unfortunately, both papers are frequently wielded like a shibboleth by advocates of alternative medicine against science-based medicine (SBM) as “evidence” that it is corrupt and defective to the very core and that therefore their woo is at least on equal footing with SBM. Ioannidis has formalized the study of problems with the application of science to medicine that most physicians intuitively sense but have not ever really thought about in a rigorous, systematic fashion. Contrast this to so-called “complementary and alternative medicine” (i.e., CAM), where you will never see such a questioning of the methodology and evidence base behind it (mainly because its methodology is primarily anecdotal and its evidence base nonexistent or fatally flawed) and most practitioners never change their practice as a result of any research, and you’ll see my point.

Right from the beginning, the perspective of the author, David H. Freedman, is clear. I first note that the title of the article (Lies, Damned Lies, and Medical Science) is intentionally and unnecessarily inflammatory. On the other hand, I suppose that entitling it something like “Why science-based medicine is really complicated and most medical studies ultimately turn out to be wrong” wouldn’t have been as eye-catching. Even Ioannidis restrained himself more when he entitled his PLoS review the almost-as-exaggerated Why Most Published Research Findings Are False, which has made it laughably easy for cranks to misuse and abuse his article. My annoyance at the title and general tone of Freedman’s article, and at the sorts of news coverage it’s getting, notwithstanding, there are still important messages in Freedman’s article worth considering, if you get past the spin, which begins very early in describing Ioannidis and his team thusly:

Last spring, I sat in on one of the team’s weekly meetings on the medical school’s campus, which is plunked crazily across a series of sharp hills. The building in which we met, like most at the school, had the look of a barracks and was festooned with political graffiti. But the group convened in a spacious conference room that would have been at home at a Silicon Valley start-up. Sprawled around a large table were Tatsioni and eight other youngish Greek researchers and physicians who, in contrast to the pasty younger staff frequently seen in U.S. hospitals, looked like the casually glamorous cast of a television medical drama. The professor, a dapper and soft-spoken man named John Ioannidis, loosely presided.

I’m guessing the only reason Freedman didn’t liken this team to Dr. Greg House and his minions is that, unlike Dr. House, Ioannidis is dapper and soft-spoken, although, like Dr. House’s team, Ioannidis’s team is apparently full of good-looking young doctors. After describing how Ioannidis delved into the medical literature and was shocked by the number of seemingly important and significant published findings that were later reversed in subsequent studies, Freedman boils down what I consider to be the two most important messages that derive from Ioannidis’s work:

This array suggested a bigger, underlying dysfunction, and Ioannidis thought he knew what it was. “The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.” Researchers headed into their studies wanting certain results—and, lo and behold, they were getting them. We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it’s easy to manipulate results, even unintentionally or unconsciously. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”

Perhaps only a minority of researchers were succumbing to this bias, but their distorted findings were having an outsize effect on published research. To get funding and tenured positions, and often merely to stay afloat, researchers have to get their work published in well-regarded journals, where rejection rates can climb above 90 percent. Not surprisingly, the studies that tend to make the grade are those with eye-catching findings. But while coming up with eye-catching theories is relatively easy, getting reality to bear them out is another matter. The great majority collapse under the weight of contradictory data when studied rigorously. Imagine, though, that five different research teams test an interesting theory that’s making the rounds, and four of the groups correctly prove the idea false, while the one less cautious group incorrectly “proves” it true through some combination of error, fluke, and clever selection of data. Guess whose findings your doctor ends up reading about in the journal, and you end up hearing about on the evening news? Researchers can sometimes win attention by refuting a prominent finding, which can help to at least raise doubts about results, but in general it is far more rewarding to add a new insight or exciting-sounding twist to existing research than to retest its basic premises—after all, simply re-proving someone else’s results is unlikely to get you published, and attempting to undermine the work of respected colleagues can have ugly professional repercussions.

Of course, we’ve discussed the problems of publication bias before multiple times right here on SBM. Contrary to the pharma conspiracy-mongering of many CAM advocates, more commonly the reason for bias in the medical literature is what is described above: Simply confirming previously published results is not nearly as interesting as publishing something new and provocative. Scientists know it; journal editors know it. In fact, this is far more likely a problem than the fear of undermining the work of respected colleagues, although I have little doubt that that fear is sometimes operative. The reason is, again, because novel and controversial findings are more interesting and therefore more attractive to publish. A young investigator doesn’t make a name for himself by simply agreeing with respected colleagues. He makes a name for himself by carving out a niche and even more so if he shows that commonly accepted science has been wrong. Indeed, I would argue that this is the very reason that comparative effectiveness research (CER) is given such short shrift in the medical literature, so much so that the government has decided to encourage it in the latest health insurance reform bill. CER is nothing more than comparing already existing and validated therapies head-to-head against each other to see which is more effective. To most scientists, nothing could be more boring, no matter how important CER actually is. Until recently, doing CER was a good way to bury a medical academic career in the backwaters. Hopefully, that will change, but to my mind the very problems Ioannidis points out are part of the reason why CER has had such rough sledding in achieving respectability.
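The selection effect behind publication bias — the one “lucky” false positive gets published while the negative replications languish — can be sketched with a toy simulation (a deliberately simplified model with made-up parameters, not a claim about any real literature):

```python
import random

random.seed(0)

SIGMA, N = 1.0, 50
SE = SIGMA * (2 / N) ** 0.5  # standard error of the mean difference between arms

def run_trial():
    """One two-arm trial of a drug with NO real effect: return the observed
    mean difference and whether a crude z-test calls it 'significant'."""
    diff = random.gauss(0.0, SE)
    return diff, abs(diff / SE) > 1.96

# 5,000 independent teams test the same (ineffective) hypothesis.
results = [run_trial() for _ in range(5000)]
published = [diff for diff, sig in results if sig]  # only 'positives' get published

print(f"teams with a publishable 'positive': {len(published) / len(results):.1%}")
print(f"smallest published |effect|: {min(abs(d) for d in published):.2f}")
# Every published effect exceeds 1.96 * SE (about 0.39) even though the true
# effect is exactly zero -- selection inflates the published effect sizes.
```

Roughly 5% of the teams get a “significant” result by chance alone, and because only those results are visible, the published record shows a consistent, sizable effect where none exists.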

More importantly, what Freedman appears (at least to me) to portray as a serious, nigh unfixable problem in the medical research that undergirds SBM is actually its greatest strength: it changes with the evidence. Yes, there is a bias towards publishing striking new findings and not publishing (or at least not publishing in highly prestigious journals) less striking or negative findings. This has been a well-known bias that’s been bemoaned for decades; indeed, I remember learning about it in medical school, and you don’t want to know how long ago I went to medical school.

Even so, Freedman inadvertently echoes a message that we at SBM have discussed many times, namely that high quality evidence is essential. In the article, Freedman points out that 80% of nonrandomized trials turn out to be wrong, as are “25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.” Big surprise, right? Less rigorous designs produce false positives more often! Also remember that even in an absolutely ideal world with a perfectly designed randomized clinical trial (RCT), by choosing p<0.05 as the cutoff for statistical significance we would expect at least 5% of RCTs of ineffective treatments to come up falsely “positive” by random chance alone. Add type II errors to that, and the number of wrong results is expected to be even higher, again just by random chance. When you consider these facts, having only 10% of large randomized trials turn out to be incorrect is actually not too bad at all. Even 25% of all randomized trials being wrong isn’t that bad either, since that figure includes smaller trials. After all, the real world is messy; trials are never perfect, nor is their analysis. The real messages should be that lesser quality, unrandomized trials are highly unreliable and that even randomized trials should be replicated whenever possible. Unfortunately, resources are such that trials can’t always be replicated or expanded upon, which means that we as scientists need to do our damnedest to improve the quality of such trials. Also, don’t forget that the probability of a trial being wrong increases as the implausibility of the hypothesis being tested increases, as Steve Novella and Alex Tabarrok have pointed out in discussing Ioannidis’ results. Unfortunately, with the rise of CAM, more and more studies are being done on highly implausible hypotheses, which will make the problem of false-positive studies even worse. Is this contributing to the problem overall? I don’t know, but that would be a really interesting hypothesis for Ioannidis and his group to study, don’t you think?
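The point about prior plausibility can be made concrete with a standard positive-predictive-value calculation: the chance that a “significant” result reflects a real effect depends on how plausible the hypothesis was to begin with, not just on the p-value. The specific priors and power below are illustrative assumptions of mine, not numbers from Ioannidis’s paper:

```python
def ppv(prior, power=0.8, alpha=0.05):
    """Probability that a statistically significant ('positive') result
    reflects a true effect, given the prior probability the hypothesis is right."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# A reasonably plausible hypothesis: most positive trials are real.
print(f"prior 0.5:   PPV = {ppv(0.5):.0%}")    # 94%
# A wildly implausible one (think homeopathy): almost every 'positive'
# trial is a false positive, despite the same p < 0.05 threshold.
print(f"prior 0.001: PPV = {ppv(0.001):.0%}")  # 2%
```

Same significance threshold, same trial design; only the prior plausibility differs. This is why a flood of trials testing highly implausible CAM hypotheses mathematically guarantees a flood of false-positive results.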

Another important lesson from Ioannidis’ work cited by Freedman is that hard outcomes are much more important than soft outcomes in medical studies. For example, death is the hardest outcome of all. If a treatment for a chronic condition is going to claim benefit, it behooves researchers to demonstrate that it has a measurable effect on mortality. I discussed this issue a bit in the context of the controversy over Avastin and breast cancer, where the RCTs used to justify approving Avastin for use against stage IV breast cancer found an effect on progression-free survival but not overall survival. However, this issue is not important just in cancer trials, but in any trial of an intervention that is being used to reduce mortality. “Softer” outcomes, be they progression-free survival, reductions in blood lipid levels, reductions in blood pressure, or whatever, are always easier to demonstrate than decreased mortality.

Unfortunately, one thing that comes through in Freedman’s article is something similar to other work I’ve seen from him. For instance, when Freedman wrote about Andrew Wakefield back in May, he got it so wrong that he was not even wrong when he described The Real Lesson of the Vaccines-Cause-Autism Debacle. To him the discovery of Andrew Wakefield’s malfeasance is as nothing compared to what he sees as the corruption and level of error present in the current medical literature. In other words, Freedman presented Wakefield not as a pseudoscience maven, an aberration, someone outside the system who somehow managed to get his pseudoscience published in a respectable medical journal and thereby caused enormous damage to vaccination programs in the U.K. and beyond. Oh, no. To Freedman, Wakefield is representative of the system. One wonders, given how much he distrusts the medical literature, how Freedman actually knew Wakefield was wrong. After all, all the studies that refute Wakefield presumably suffer from the same intractable problems that Freedman sees in all medical literature. In any case, perhaps this apparent view explains why, while Freedman gets some things right in his profile of Ioannidis, he gets one thing enormously wrong:

Ioannidis initially thought the community might come out fighting. Instead, it seemed relieved, as if it had been guiltily waiting for someone to blow the whistle, and eager to hear more. David Gorski, a surgeon and researcher at Detroit’s Barbara Ann Karmanos Cancer Institute, noted in his prominent medical blog that when he presented Ioannidis’s paper on highly cited research at a professional meeting, “not a single one of my surgical colleagues was the least bit surprised or disturbed by its findings.” Ioannidis offers a theory for the relatively calm reception. “I think that people didn’t feel I was only trying to provoke them, because I showed that it was a community problem, instead of pointing fingers at individual examples of bad research,” he says. In a sense, he gave scientists an opportunity to cluck about the wrongness without having to acknowledge that they themselves succumb to it—it was something everyone else did.

To say that Ioannidis’s work has been embraced would be an understatement. His PLoS Medicine paper is the most downloaded in the journal’s history, and it’s not even Ioannidis’s most-cited work—that would be a paper he published in Nature Genetics on the problems with gene-link studies. Other researchers are eager to work with him: he has published papers with 1,328 different co-authors at 538 institutions in 43 countries, he says. Last year he received, by his estimate, invitations to speak at 1,000 conferences and institutions around the world, and he was accepting an average of about five invitations a month until a case last year of excessive-travel-induced vertigo led him to cut back.

Yes, my ego can’t resist mentioning that I was quoted in Freedman’s article. My ego also can’t help but be irritated that Freedman gets it completely wrong in how he spins my anecdote. Instead of the interpretation I put on it, namely that physicians are aware of the problems in the medical literature described by Ioannidis and take such information into account when interpreting studies (i.e., that Ioannidis’ work is simply reinforcement of what they know or suspect anyway), Freedman instead interprets my colleagues’ reaction to Ioannidis as “an opportunity to cluck about the wrongness without having to acknowledge that they themselves succumb to it—it was something everyone else did.” I suppose it’s possible that there is a grain of truth in that — but only a small grain. In reality, at least from my observations, the reason that scientists and skeptics have not only refrained from attacking Ioannidis but in actuality have embraced him and his findings of deficiencies in how we do clinical trials is the right reasons: we want to be better, and we are not afraid of criticism. Try, for instance, to imagine an Ioannidis in the world of CAM. Pretty hard, isn’t it? Then picture how a CAM Ioannidis would be received by CAM practitioners. I bet you can’t imagine that they would shower him with praise, publications in their best journals, and far more invitations to speak at prestigious medical conferences than one person could ever possibly accept.

Yet that’s how science-based practitioners have received John Ioannidis.

In the end, Ioannidis has a message that is more about how little the general public understands the nature of science than it is about the flaws in SBM:

We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough. But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.

“Science is a noble endeavor, but it’s also a low-yield endeavor,” he says. “I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”

We should indeed. On the other hand, those of us in the trenches with individual patients don’t have the luxury of ignoring many studies that conflict (as Ioannidis suggests elsewhere in the article). Moreover, it is science that gives us our authority with patients. If patients lose trust in science, then there is little reason not to go to a homeopath. Consequently, we need to do the best we can with what exists. Nor does Ioannidis’ work mean that SBM is so hopelessly flawed that we might as well all throw up our hands and become reiki masters, which is what Freedman seems to be implying. SBM is our tool to bring the best existing care to our patients, and it is important that we know the limitations of this tool. Contrary to what CAM advocates claim, there currently is no better tool. If there were, and it could be demonstrated conclusively to be superior, I’d happily switch to using it.

To paraphrase Winston Churchill’s famous speech, many forms of medicine have been tried and will be tried in this world of sin and woe. No one, certainly not those of us at SBM, pretends that SBM is perfect or all-wise. Indeed, it has been said (mainly by me) that SBM is the worst form of medicine except all those other forms that have been tried from time to time. I add to this my own little challenge: Got a better system than SBM? Show me! Prove that it’s better! In the meantime, we should be grateful to John Ioannidis for exposing defects and problems with our system while at the same time expressing irritation at people like Freedman for overhyping them.

Energy Bracelets: Embedding Frequencies in Holograms for Fun and Profit

A salesman is demonstrating a new product at a sports store in the local mall. He has a customer stand with his arms extended horizontally to the sides; he presses down on an arm and the customer starts to fall over. Then he puts a bracelet on the customer and repeats the test; this time he is apparently unable to make the customer lose his balance. He has the customer turn his head as far as he can without the bracelet, and shows that he can turn his head a few degrees more after he puts on the bracelet. (Try this yourself: if you turn your head, wait a couple of seconds and try again, you will always be able to turn it further on the second trial). He similarly shows that the customer is stronger when he wears the bracelet. The customer and the onlookers are mightily impressed by the demonstration, by the salesman’s testimonials, and by the endorsements of famous athletes: they buy the bracelets to improve their athletic performance.

These so-called energy bracelets (also pendants and cards) allegedly contain a hologram embedded with frequencies that react positively with your body’s energy field to improve your balance, strength, flexibility, energy, and sports performance; and they also offer all sorts of other benefits (such as helping horses and birds and relieving menstrual cramps and headaches). The claims and the language on their websites are so blatantly pseudoscientific it’s hard to believe anyone would fall for them. Here are just a few examples from the Power Balance website:

  • We react with frequency because we are a frequency.
  • Your body’s energy field likes things that are good for it.
  • Why Holograms? We use holograms because they are composed of Mylar—a polyester film used for imprinting music, movies, pictures, and other data. Thus, it was a natural fit.
  • A primitive form of this technology was discovered when someone, somewhere along the line, picked up a rock and felt something that reacted positively with his body.

People have actually been convinced that this gobbledygook is a scientific explanation. Many sports celebrities swear by the bracelets. Millions have been sold.

Recently an account executive from a public relations/marketing communications firm contacted me about energy bracelets, asking if I would like free samples to check out for myself and if I would be interested in writing about the topic and maybe interviewing his “C-levels,” whatever those are. He represents one of at least 8 companies marketing such devices, but this company, EFX, is allegedly unique because it’s the only one that is embracing scientific studies. He says the other competing brands are avoiding any and all medical/scientific analysis, but EFX currently has “many independent studies being conducted, and is in the process of gathering funds to have an independent double-blind study implemented with seniors.” Are you impressed? They don’t have any evidence that their product works, but they are “in the process of gathering funds” to test it. After selling how many of them?

I don’t know how I got on his list, but he initially addressed me as “Ms.” rather than “Doctor” and he apparently had no idea that I had debunked (actually, ridiculed) a similar product, “Power Balance,” in an article in Skeptical Inquirer some time ago. An expanded version of that article is available online on Device Watch, a Quackwatch affiliate. I am not the only one to pick on them. Richard Saunders, the prominent Australian skeptic, has written about them and has even conducted a double-blind test for Australian TV, where they failed miserably. He and Rachael Dunlop also produced their own video about applied kinesiology, explaining the simple biomechanical and psychological tricks that salesmen use to give people the false impression that the products improve their balance, strength, and flexibility. Brian Dunning has debunked energy bracelets on Skepticblog. And Time magazine recently did a story explaining that there is no science behind them but that users don’t seem to care and continue to use them as a kind of mechanized superstition.

When I wrote the Power Balance article, I pointed out that you can’t have a frequency in isolation. A frequency requires a periodic process; you can’t have “33 1/3 per minute” by itself, but you can have “33 1/3 revolutions per minute.” A radio wave and a vibrating tuning fork can have a frequency; an armadillo and a tomato can’t. A person can’t “be” a frequency. I e-mailed the company and asked some simple questions like “How do you measure the frequency of a rock?” They didn’t answer.
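The point that a frequency only exists as a rate of some periodic process is just arithmetic: cycles divided by elapsed time. A minimal sketch, using the record-player numbers from the paragraph above (the function name is mine, purely illustrative):

```python
# A frequency is (number of cycles) / (elapsed time).
# "33 1/3" alone is meaningless; "33 1/3 revolutions per minute" is a rate.

def frequency_hz(cycles: float, seconds: float) -> float:
    """Cycles per second (Hz) of a periodic process."""
    return cycles / seconds

# A record turning 33 1/3 times in one minute:
rpm = 33 + 1 / 3
hz = frequency_hz(cycles=rpm, seconds=60.0)
print(f"{rpm:.4f} rpm = {hz:.4f} Hz")  # about 0.5556 Hz
```

There is no such computation for a rock or a person, because there is no periodic process to count.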

So when I heard from the EFX account executive, I jumped at the chance to get some answers. I asked if he could put these questions to a company representative:

  1. How are the frequencies chosen? How do you determine which ones are beneficial?
  2. Why would one frequency work for different individuals? Aren’t we unique?
  3. What do they mean by frequencies, since a frequency can’t exist alone but has to refer to a number of repetitions of a periodic process per period of time. What is the periodic process that generates the frequency involved in the bracelet technology?
  4. How are frequencies embedded in a hologram? Yes, I know there are proprietary secrets, but perhaps you could provide a general answer that would give me a clue.

The proffered answers to my questions were revealing:

  1. We choose the frequencies based upon research. The electromagnetic spectrum is vast, but there are specific frequencies that have an immediate positive effect on the human body. We determine which ones are the best to use through a lot of trial and error.
  2. We are unique, and we think that no two people react exactly the same to our holograms. However, some frequencies are universal to the human body, which is why our holograms work with 95% of the people that try the product. Some have a relatively mild reaction, and with others the reaction to our holograms is profound.
  3. Yes, a frequency is the number of waves that pass a fixed point in a period of time. It is quite possible (I do it when I program) to use a frequency generating machine, modified to work for our needs to “embed” frequencies onto a hologram (that includes a metallic substance) that will hold those frequencies. You are not going to find much support for this “theory” in mainstream science. Many will say that it is “impossible.” I say that there is still much that science does not know. I have been doing this long enough to know that it does work, it is real, and I don’t worry about the people saying that the idea makes no sense. Time is on my side.
  4. Not going to give you any information about how we embed frequencies in a hologram. That is a trade secret.

The account executive was personally convinced because the headaches he used to get after 3-hour (!) cardio workouts vanished, and since he didn’t anticipate that, he can’t accept it as a placebo effect. He commented

My only estimation is that these frequency generating machines are somehow able to embed a self-sustaining frequency onto the mylar material. I haven’t had the opportunity to do in depth research on the theory personally, but from what I understand, this isn’t a theory that has much research discrediting or supporting it for that matter.

A self-sustaining frequency? In Mylar? I asked if he believed in perpetual motion. He answered

I’m not a scientist and I don’t know enough about how they “embed” the frequencies to verify how it works. All I know is that the team that I’ve met with internally at EFX are very adamant about the product, which is why they are willing to submit to the peer reviewed/double blind studies. “If this doesn’t work, we want to be proven wrong” was something the president once said to me. If he were a scam artist, I doubt very highly that he would be eager to submit his product to these tests.

I had asked if they could supply me with a bracelet that had not had the frequencies embedded, so I could use it as a placebo control to test the “active” bracelet. They couldn’t, because

We are currently engaged in independent peer reviewed double blind studies and would prefer to conclude those before sending blanks if you don’t mind.

I don’t think I need to point out what’s wrong with these answers and this type of thinking. The energy bracelet phenomenon is just one more demonstration that humans are a superstitious lot and that consumers can’t tell science from bullshit. This amounts to a high-tech version of carrying a rabbit’s foot for luck. At least the energy bracelets don’t require killing innocent bunnies: can this be considered progress of a sort?

There’s something out there — part 2

The view from Sedna

(Be sure to read part 1)

Seven years ago, the moment I first calculated the odd orbit of Sedna and realized it never came anywhere close to any of the planets, it instantly became clear that we astronomers had been missing something all along. Either something large once passed through the outer parts of our solar system and is now long gone, or something large still lurks in a distant corner out there and we haven’t found it yet.

Of all of the planets, comets, asteroids, and Kuiper belt objects in the solar system, Sedna is the only one that tells us this astounding fact so glaringly. The orbit of every single other object in the entire solar system can be explained, at least in principle, by some interaction with the known planets. Sedna alone requires something else out there.

But what?

In our 2004 paper announcing the discovery of Sedna (give it a try; though it – like all research papers – has some technical details that might not make sense, I believe it to be relatively readable), we suggested three possibilities. Our first idea was that perhaps there was an unknown approximately earth-sized planet circling the sun at about twice the distance of Neptune. Sedna could have gotten too close to this Planet X and been given a kick which would have flung it out into a far corner of our solar system. But, as always, nothing can kick you into a far corner and make you stay there. You always come back to the spot where you were kicked. So Sedna’s new orbit would be one that came in as close as this Planet X and went far into the outer solar system, which is exactly the kind of orbit Sedna has. Back in 2003 I had liked this idea a lot. Our search of the skies had only begun a few years earlier, so the prospect that there might be an earth-sized planet awaiting discovery seemed pretty exciting indeed. It was, admittedly, a long shot, but discovering planets always is.

The second possibility that we considered and wrote about was that perhaps a star had passed extremely close to our solar system at some point during the lifetime of the sun. “Extremely close” for a star means something like 20 times beyond the orbit of Neptune, but that is 500 times closer than the current nearest star. A star passing by that close would have been brighter than the full moon and would have been the brightest thing in the night sky for hundreds of years. Perhaps our early ancestors even temporarily lived under a dual-star sky. Sedna, before the rogue star came calling, would have been a normal Kuiper belt object with a looping orbit which would take it out to the distant solar system but then eventually back to Neptune (which had, presumably, kicked it around earlier). But on one of its trips to the edge of the solar system, Sedna would have accidentally gotten too close to this interloping star, and the star would have given Sedna another little kick. Suddenly, Sedna would find itself on a new orbit which no longer went back to Neptune. The orbit would, of course, have to go back to the spot where Sedna had gotten the kick from the passing star, but the star would be long gone by then. This idea was a fun one, and, best of all, we could do a reasonably good job estimating the probability that something like this might have occurred. Looking at the number of stars near us in the galaxy and how fast they all move relative to each other, we found that the chances of such a rogue star encounter happening sometime in the past 4.5 billion years was around 1%. Not good odds to hang your theory on. (People often ask: can’t you just go back and find the star that did it and see if it is there? Sadly, there is no chance. The sun is 4.5 billion years old and it takes about 250 million years to orbit around the galaxy, so it’s gone around about 18 times. So has everything else in the vicinity. Everything is now so mixed up that there is no way to know for sure what was where back when.)

The third possibility was the one that we deemed the most likely. Instead of getting one big kick from an improbable passing star, imagine that Sedna got a lot of really small kicks from many stars passing by not quite as closely. The chances of this happening might seem low, too, but astronomers have long known that most stars are born not alone, but in a litter of many stars packed together. How tightly? In our region of the galaxy, there is currently something like one star per cubic parsec (don’t worry too much about these units here; suffice it to say that a parsec is a little less than the distance to the nearest star, so it is not surprising that in a box with edges about that length there is about one star). In the cluster of stars in which the sun might have been born there would have been thousands or even tens to hundreds of thousands of stars in this same volume, all held together by the gravitational pull of the massive amounts of gas between the still-forming stars. I firmly believe that the view from the inside of one of these clusters must be one of the most awesome sights in the universe, but I suspect no life form has ever seen it, because it is so short-lived that there might not even be time to make solid planets, much less evolve life. For as the still-forming stars finally pull in enough gas to ignite their nuclear-fusion-powered cores, they quickly blow away the remaining gas holding everything together and then drift off, solitary, into interstellar space. Today we have no way of ever finding our solar siblings again. And, while we see these processes occurring out in space as other stars are being born, we really have no way to see back 4.5 billion years ago and see this happening as the sun itself formed.

Until now.

Maybe.

If Sedna got put on its peculiar orbit by the interactions of all of these stars 4.5 billion years ago, it is now a fossil record of what happened at the time of the very birth of the sun. Everything else in the solar system has been kicked and jostled and nudged by planets big and small, so there is no way to trace them back 4.5 billion years. Sedna, on the other hand, has been doing nothing but going around and around the sun in its peculiar elongated orbit every 12,000 years. After almost half a million of those orbits, Sedna remains lonely and untouched by anything else. By watching the orbit of Sedna we could be watching 4.5 billion years in the past.

All of these possibilities are exciting! A new planet! A rogue star! Fossils from the birth of the sun! And in the years since Sedna’s discovery other astronomers have chimed in with their own ideas, including the possibility that Sedna was kicked by something large out in the Oort cloud (a small planet? A brown dwarf? Nemesis? Who knows.) and, in the most imaginative spin, that Sedna was kicked by the sun. The sun? Yes, because, in this hypothesis, Sedna used to orbit a different star, and the sun got close, kicked Sedna around, and stripped it away. Sedna would then be the first known extra-solar dwarf planet. Or something like that.

Sedna is telling us something profound, but what? With only a single object, there is absolutely no way to know. It would be like finding a fossilized skeleton of a T. Rex and trying to infer the history of the dinosaurs. If you had just that one skeleton you would know just what to do: head back out into the desert and start digging. When we found Sedna, we, too, knew what was next: head back out into the night and keep looking. Until we found more, we wouldn’t know what this profound bit of the solar system was trying to scream so loudly in our ears.

Next week: The search for more Sednas.

There is something out there — part 1

Is it real, or is it cat hair?

Seven years ago this week I was preparing one of my favorite lectures for The Formation and Evolution of Planetary Systems, a class I frequently teach at Caltech. “Preparing” is probably the wrong word here, because this lecture, called The Edge of the Solar System, was one I could give even if instantly wakened from a cold deep sleep and immediately put on stage with bright lights in my eyes and an audience of thousands and no coffee anywhere in sight. The lecture explored what was known about the edge of our main planetary system and the ragged belt of debris called the Kuiper belt that quickly faded to empty space not that much beyond Neptune. Conveniently, one of my most active areas of research at that time was trying to figure out precisely why this ragged belt of debris had such an edge to it and why there appeared to be nothing at all beyond that edge. I could wing it. So instead of preparing the lecture, I really spent that morning doing what I did whenever I had a few spare moments: staring at dozens of little postage-stamp cutouts of pictures of the sky that my telescope had taken the night before and my computer had flagged as potentially interesting. Interesting, to my computer, and to me, meant that in the middle of the postage stamp was something that was moving across the sky at the right rate to mark it as part of the Kuiper belt. I was not just lecturing about this debris at the edge of the solar system, I was looking for more of it, too.

I didn’t find more objects in the Kuiper belt every morning I looked, but that previous night seven years ago had been a good one. I quickly found two of the typical debris chunks moving slowly across the sky, and I was about ready to walk over to give my lecture, when, with only about a minute to spare, the outer solar system seemed to change before my eyes.
There, on my computer screen, was a faint object moving so slowly it could only have been something far more distant than what I was just going to walk into the classroom and declare to be the edge of the solar system. Maybe. The object was so faint that I didn’t know whether to believe it was real or not. If you look at enough sky – and, really, I had – you are bound to find some chance alignment of blips of noise or variable stars or cat hairs that looks just like something real.
I went into the classroom, delivered the lecture as I knew it, but stopped short at the end.
“Here is the way I was going to end this lecture,” I told them.
I proceeded to talk about how nothing existed beyond the edge of the Kuiper belt (yes, yes, you sticklers, the Oort cloud is way out there, but that is not supposed to start up until 100 or 200 times further out than the edge of the Kuiper belt).
“But I’m not sure I believe this anymore,” I said.
I told them about that morning’s blip. I couldn’t promise them that it was real, but I told them that if it was, the solar system might be a very different place than I was just telling them.
That little blip, far more distant than what was supposed to have been the edge of the solar system, was indeed real. It was Sedna.
Sedna is the Inuit goddess of the sea, often depicted with the body of a seal, long hair, and no fingers.
A few weeks later, after confirming that Sedna was real and determining its unprecedentedly strange orbit around the sun, I came back, told the class all about it, and wrote down a few simple equations on the blackboard to show just how strange the orbit is and also the many different ways it might have gotten that way.
“Come back and take my class again next year, and I’ll have it all figured out,” I confidently told them.
That was seven years ago. Any poor student taking my advice would have sat through the last six years of lectures and still not learned what put Sedna where it is, since I still don’t know the answer.
What makes Sedna’s orbit so strange?
Sedna takes 12,000 years to go around the sun on its elongated orbit, and it never comes close to any of the planets.
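That 12,000-year period translates directly into a distance via Kepler’s third law: for orbits around the sun, P² = a³ when the period P is in years and the semi-major axis a is in astronomical units. A quick sketch (Sedna’s period is from the text; Neptune’s ~165-year period is my own addition for comparison):

```python
# Kepler's third law for solar orbits: P^2 = a^3,
# with P in years and a in astronomical units (AU).

def semi_major_axis_au(period_years: float) -> float:
    """Semi-major axis (AU) of a solar orbit with the given period."""
    return period_years ** (2.0 / 3.0)

a_sedna = semi_major_axis_au(12_000)   # Sedna's ~12,000-year period
a_neptune = semi_major_axis_au(165)    # Neptune, for comparison

print(f"Sedna:   a = {a_sedna:.0f} AU")    # roughly 520 AU
print(f"Neptune: a = {a_neptune:.0f} AU")  # roughly 30 AU
```

So Sedna’s average distance is more than fifteen times Neptune’s, even though its closest approach stays far outside the planets.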

Many objects out in the Kuiper belt have shockingly elongated orbits like Sedna. For almost all of these objects, this characteristic makes sense. These small leftover pieces of debris have been kicked around by planets throughout their existence. Whenever they come too close to one of the planets (usually Neptune, since it is the closest to these objects), they get a gravitational kick that can send them on a looping orbit to the distant outskirts of the solar system. But – and this is the key part here – unless they get kicked all the way out of the solar system, they always come back to where they were kicked. If you get kicked by Neptune, you can go zooming off into the uncharted regions far beyond the Kuiper belt, but you will come back to see Neptune again. When we look at the Kuiper belt, we see the results of all of this kicking clearly: the Kuiper belt objects that come closest to Neptune are on the most elongated orbits. Those far away are more free to go about their circular orbiting lives.
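The “you always come back to where you were kicked” rule can be checked with the vis-viva equation. Below is a rough numerical sketch, not anything from the original post: the 30 AU starting distance and the size of the kick are illustrative choices. An object on a circular orbit near Neptune’s distance gets a sudden tangential speed boost; its new orbit is far more elongated, yet its closest approach to the sun remains the spot where the kick happened.

```python
import math

# Units: AU, years, solar masses. In these units GM_sun = 4 * pi^2.
GM = 4.0 * math.pi ** 2

def orbit_after_tangential_kick(r: float, speed: float):
    """Orbital elements (a, e, perihelion) for an orbit passing through
    radius r with purely tangential speed, via the vis-viva equation."""
    a = 1.0 / (2.0 / r - speed ** 2 / GM)            # vis-viva
    h = r * speed                                     # specific angular momentum
    e = math.sqrt(max(0.0, 1.0 - h ** 2 / (GM * a)))  # eccentricity
    return a, e, a * (1.0 - e)

r_kick = 30.0                          # start near Neptune's distance (AU)
v_circ = math.sqrt(GM / r_kick)        # circular speed at 30 AU
# Boost the speed by 35% (still below the escape factor of sqrt(2)):
a, e, perihelion = orbit_after_tangential_kick(r_kick, 1.35 * v_circ)

# The orbit is now long and looping, but perihelion is still 30 AU:
print(f"a = {a:.0f} AU, e = {e:.2f}, perihelion = {perihelion:.1f} AU")
```

Pushing the multiplier past √2 ≈ 1.414 would exceed escape speed and unbind the orbit entirely, which is the “kicked all the way out of the solar system” case in the paragraph above. Anything short of that keeps coming back to the kick point, which is why Sedna’s high perihelion is so puzzling.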

The exception to this rule is, of course, Sedna. Sedna has one of the most elongated orbits around, but it never comes anywhere close to Neptune or to any other planet. Indeed, the earth comes closer to Neptune than Sedna ever does. And the earth is not in danger of being kicked out of its orbit by Neptune anytime soon.
Something had to have kicked Sedna to have given it its crazy orbit. But what?
The answer is: something large that is no longer there, or that is there, but we don’t know about yet.
This answer is astounding. The orbit of every single other object in the entire solar system can be explained, at least in principle, by some interaction with the known planets (and, again, for you Oort cloud sticklers out there, the known galactic environment). Sedna alone requires Something Else Out There.
What is it? Seven years out, we still don’t know. The hypothesized culprits have included passing stars, hidden planets, Oort cloud brown dwarfs, and, of course, Sumerian-inspired alien conspiracy theories. Whatever it is, it is bound to answer profound questions about the origin and evolution of the solar system, as well as inspire many new questions we had never known to ask.

(Read part 2)

Rail~Volution

portland streetcar

Rail~Volution is a conference for passionate people from all perspectives who believe strongly in the role of land use and transit as equal partners in the quest for greater livability and greater communities.

Rail~Volution started in 1989 as a series of outreach and advocacy events geared towards developing real advocates for the Portland metropolitan region’s MAX Light Rail System. At the conference in 1994, Congressman Earl Blumenauer (District 3, Oregon) announced that in 1995, Rail~Volution would become a national conference. From this point, Rail~Volution acted as a loose federation of sponsoring Partners and Affiliates, united by common interests and dedication. In the year 2000, the National Steering Committee realized the need for a more formal organization, and a strategic planner was brought in to assist this process. The National Steering Committee decided Rail~Volution should develop into a 501(c)(3) non-profit charitable organization.

Rail~Volution has recently wrapped up here in Portland, Oregon. The excitement was marked across the blogosphere with pictures of Portland Streetcars and light-rail MAX trains. Klaus Philipsen, an urban designer, architect, and planner, blogged the following:

Railvolution has preached the transit land use nexus for twenty years and finally everybody seems to be in tune. Obama instructed the Department of Housing and Community Development to collaborate with the Department of Transportation and the Department of Energy in Livable Communities.

In addition to a number of mobile tours exploring some of Portland’s successes in transportation, such as a visit to the United Streetcar factory and a walk along the transit Mall, there are a series of talks addressing the many challenges we are working to address. Presenters include transportation and planning leaders both local and national, including representatives from Dallas, Atlanta, San Francisco, and Vancouver BC.

One common theme is that transportation infrastructure investment in the Portland Metro region has been an effective strategy for creating dense, livable urban neighborhoods and districts. Transit-oriented development has been the tool of partnerships between the city of Portland and private investors. In the case of the Portland Streetcar, according to Sam Adams (mayor of Portland), $125 million in investment from the city has spurred about $3.5 billion in private investment along the streetcar corridor since inception. Check out video of Mayor Sam Adams at the Rail~Volution conference below.

Thoughts, Comments, Questions…

Green Streets Initiative

Green Streets Concept | Portland, Oregon

The Green Streets Initiative is a program developed by the city of Portland to implement sustainable management of stormwater runoff. Green streets also beautify inner-city neighborhoods and, in the case of some bioswales, provide a safe and predictable pathway for bicycles and pedestrians.

The use of green streets meets regulatory code to manage runoff in a natural way. Plants and shrubs absorb runoff water and treat toxics naturally before the water re-enters the watershed. By allowing the plants to treat the water and keep polluted runoff out of the sewer system, the city helps prevent sewer backups and basement floods and improves air quality. Green Streets convert stormwater from a waste directed into a pipe to a resource that replenishes groundwater supplies. They also create attractive streetscapes and urban green spaces, provide natural habitat, and help connect neighborhoods, schools, parks, and business districts.

The City of Portland is committed to green development practices and sustainable stormwater management. Green Streets are an innovative, effective way to restore watershed health. They protect water quality in rivers and streams, manage stormwater from impervious surfaces, and can be more cost efficient than new sewer pipes. Green Streets offer many benefits that sewer pipes can’t.

An EcoDistrict summit, hosted by the Portland Sustainability Institute, is just finishing up at Portland State University. There is a lot of buzz in Portland over these newfound programs to further bolster Portland’s sustainable development and leadership in urban planning. Most of these summits and conferences are tailored for policy makers, but they are open to any interested parties. A nexus of sustainable and green visionaries is truly creating a hub for the future.

Portland Green Streets from Mayor Sam Adams on Vimeo.

Thoughts, Comments, Questions…

Biomass is not Oregon’s clean-energy future as currently promoted

Also read Biomass Energy Generation Myths

woody biomass
The federal Environmental Protection Agency has proposed that biomass incinerators be required to report greenhouse gas emissions when the government starts regulating carbon next year. But The Oregonian’s editorial board argues that this will “shackle” the biomass industry “with hobbling costs.” Is the fear that greenhouse gas reporting will expose the heavy carbon burden of burning wood to make energy?

The Clean Air Act requires that facilities measure, report and minimize air pollution and climate-altering greenhouse gases. Biomass plants should be no different in this regard than other industrial processes. The EPA decision denying the industry’s request for an exemption from the Clean Air Act is based on the evidence that burning trees to generate energy can actually increase rather than help curb greenhouse gas emissions.

The EPA isn’t the only agency casting doubts on the wisdom of burning biomass for energy. The state of Massachusetts’ Department of Energy Resources published a decision in July to require that biomass plants report their greenhouse gas profile. Reporting will be required so that the state can meet its renewable energy standard and carbon reduction goals. Massachusetts will require, for example, that biomass energy production demonstrate maximum energy-efficiency standards, a 50% reduction in GHG over a 20-year cycle, measurably sustainable forest practices, and a limit on the total timber per acre eligible to be harvested for biomass fuels.

The Department of Energy was convinced by a Massachusetts study that concluded that burning forest biomass creates a “carbon debt.” The debt occurs when we outpace the earth’s ability to absorb carbon dioxide. The carbon debt increases as trees are removed from forests, because their ability to absorb carbon from the atmosphere is diminished and the carbon naturally stored in their woody tissue is prematurely released by burning them in an incinerator. According to another study, this significant carbon debt can take more than two and a half centuries to repay if biomass is used as a fossil fuel replacement.

Burning biomass is also a dirty air problem. Even with air pollution controls, these plants will collectively pump ton after ton of toxins into the air every day: chemicals that will rain down on the neighborhoods closest to the plant. A number of professional medical societies are warning the public that breathing the sooty particulate emissions from biomass incinerators is among the most dangerous forms of air pollution and a significant health risk. The Oregon Chapter of the American Lung Association predicts that patients, particularly children with asthma and respiratory and cardiac ailments, will experience increases in the incidence of respiratory problems. These diseases can be worsened by fine particulate pollutants, the type of pollution that will increase with the proliferation of biomass plants in Oregon.

The environmentalist Aldo Leopold reminded us that the first rule of intelligent tinkering was to “keep all the pieces,” not burn them.

Burning biomass, a process that depletes natural resources and pollutes our neighborhoods, is not the renewable and clean-energy panacea that commercial timber companies would have us believe. If we are to go down this path, Oregon residents must call upon our elected officials to require reasonable safeguards, starting with a complete state environmental impact report, carbon life-cycle accounting, and compliance with future, tighter Clean Air Act mandates.

Lisa Arkin is executive director of the Oregon Toxics Alliance.
Editorial originally published in The Oregonian

Thoughts, Comments, Questions…