Spin City: Using placebos to evaluate objective and subjective responses in asthma

As I type this, I’m on an airplane flying home from The Amazing Meeting 9 in Las Vegas. Sadly, I couldn’t stay for Sunday; my day job calls, as I’ll be hosting a visiting professor. However, I can say—and with considerable justification, I believe—that our little portion of TAM mirrored the bigger picture in that it was a big success. Attendance at both our workshop on Thursday and our panel discussion on placebos on Saturday was fantastic, beyond our most optimistic expectations. There was also a bit of truly amazing serendipity that helped make our panel discussion on placebo medicine an even bigger success.

If there’s one thing about going away to a meeting, be it TAM or a professional meeting, it’s that it suddenly becomes very difficult for me to keep track of all the medical and blog news that I normally follow and nearly impossible to keep up with the medical literature. That’s the likely explanation for why I had been unaware of a study published in the New England Journal of Medicine (NEJM) on Thursday that was so relevant to our discussion and illustrated our points so perfectly that it was hard to believe that some divine force didn’t give it to us in order to make our panel a total success.

Just kidding. It was TAM, after all. It was, however, embarrassing that I didn’t see the study until the morning of our panel, when Kimball Atwood showed it to me.

Before I get to the meat of this study and why it fit into our nefarious plans for world domination (or at least the domination of medicine by science-based treatments), a brief recap of the panel discussion would seem to be in order. First, for the most part, we all more or less agreed that the term “placebo effect” is a misnomer and somewhat deceptive because it implies that there is a true physiologic effect caused by an inert intervention. “Placebo response” or “placebo responses” seemed to us a better term, because what we are observing with a placebo is in reality a patient’s subjective response to thinking that he is having something active done. In general, we do not see placebo responses resulting in improvement in objective outcomes (e.g., prolonged survival in cancer). The relative contributions of the components of this response are still debated, be they expectancy effects (if you expect to feel better, you likely will feel better), conditioning, or one that is frequently dismissed or downplayed: artifacts of randomized clinical trial design and even subtle (or not-so-subtle) biases in how trials are constructed. This issue of placebo responses being observed only in subjective, patient-reported clinical outcomes (pain, anxiety, and the like) and not in objectively measured outcomes is an important one, and it goes to the heart of the NEJM study that so serendipitously manifested itself to us. As Mark Crislip so humorously pointed out, the placebo response is the beer goggles of medicine (this is not a spoiler or stealing Mark’s line; several TAM attendees have already tweeted it), and much of what is being observed are changes in the patient’s perception of his symptoms rather than true changes in the underlying pathophysiology. This study drove the point home better than we could.

Another point discussed by the panel is also quite relevant. More and more studies demonstrate very convincingly that “complementary and alternative medicine” (CAM) or “integrative medicine” (IM) therapies do not produce improvements in symptoms greater than placebo. Moreover, multiple studies, including a famous NEJM meta-analysis and a recently updated Cochrane review, demonstrate that placebo responses probably do not constitute clinically meaningful responses. In light of these findings, CAM apologists, driven by ideology rather than science and masters of spin, have begun to admit grudgingly that, yes, in essence their treatments are elaborate placebos. Not to be deterred, instead of simply concluding that their CAM interventions do not work, they’ve moved the goalposts and started to argue that it doesn’t matter that CAM effects are placebo effects, because placebos are “powerful” and good and—oh, yes, by the way—there are a lot of treatments in science-based medicine that do little better than placebos. In other words, CAM advocates elevate the subjective above the objective and sell the subjective, and that’s exactly what they are doing with this study.

Perception versus physiology

The study in question was performed at Harvard, with Michael E. Wechsler as its first author and Ted Kaptchuk as its senior author. Studies from groups including Ted Kaptchuk have presented us here at SBM with copious blog fodder before, all designed to promote placebo medicine, whether by making an argumentum ad populum, by claiming in a truly Humpty Dumpty moment that it is possible to have placebo effects without deceiving the patient, or by rebranding exercise as “alternative” in the NEJM last year.

The current study is entitled Active Albuterol or Placebo, Sham Acupuncture, or No Intervention in Asthma. Personally, I like this title. It’s a fine title, as it tells the reader in essence what the trial design is in only a few words. And it’s actually a reasonably good pilot study. It’s not so much the trial design, of course, that goes disastrously awry. Rather, it’s the interpretation of the results of the RCT that devolves into propaganda for quackademic medicine, in which subjective improvement is used to argue that placebo medicine is good even when no objective improvement is observed in a disease for which we have good drugs that produce objective as well as subjective improvements.

This study basically compared four different interventions:

  • Treatment with albuterol
  • Sham acupuncture using the classic retractable needle (note that this was only single-blinded)
  • Placebo inhaler
  • No treatment at all

Inclusion criteria were as follows:

  • Men and women aged 18 or older with a diagnosis of asthma
  • Meet American Thoracic Society diagnostic criteria for asthma
  • Currently using a stable asthma regimen (no medication changes for 4 weeks)
  • Ability to withhold short-acting bronchodilators for 6 hours prior to each visit (see Spirometry description)
  • Ability to withhold long-acting bronchodilators for 48 hours prior to each visit (see Spirometry description)
  • Presence of reversible airflow obstruction as demonstrated by an improvement in FEV1 of at least 12% following the inhalation of a β-agonist after 10 a.m. at the screening visit

Exclusion criteria were straightforward:

  • Lung disease other than asthma
  • Respiratory tract infection within the last month
  • Active tobacco use
  • Asthma exacerbation requiring the use of systemic corticosteroids within the past 6 weeks
  • Prior experience with acupuncture

These criteria guaranteed that the patients selected had only mild to moderate asthma with no complications, such as pneumonia or pulmonary fibrosis. Of course, it would be highly unethical to take people with severe asthma off their bronchodilators, so medical ethics pretty much precludes testing placebos on people with more severe disease. Still, I can’t help but wonder whether the results reported would have been different in more severe asthma and whether the subjective improvement would have been nearly as great. In any case, this study ended up including 39 patients: 79 were screened, 46 underwent randomization, and 7 dropped out during the protocol. Patients who completed the protocol underwent the following procedure:

These patients returned within a week and were assigned to a randomly ordered series of four interventions — active albuterol inhaler, placebo inhaler, sham acupuncture, or no-intervention control — administered on four separate occasions, 3 to 7 days apart (block 1) (Figure 2). This procedure was repeated in two more blocks of four visits each (blocks 2 and 3), during which the interventions were again randomly ordered and administered. Thus, each subject received a total of 12 interventions. Albuterol and the placebo inhaler were administered in a double-blind fashion and sham acupuncture in a single-blind fashion, and the no-intervention control was not blinded. As before, short-acting and long-acting bronchodilator therapy was withheld for 8 hours and 24 hours, respectively, before each intervention. The no-intervention control condition differs from the natural history of asthma, since it controls for nonspecific factors such as attention from study staff, responses to repeated spirometry, regression to the mean, natural physiological variation, and any effects arising from the hospital setting. Nonetheless, no-intervention controls are the best approximation of no treatment in an experimental design. The study was conducted in accordance with the protocol (available at NEJM.org).

I’m not entirely sure why Kaptchuk thought he had to place a comment in there about no-intervention controls being only an approximation of no treatment in an experimental design. After all, that’s the sort of thing that clinicians and clinical researchers simply know; it does not need to be pointed out to them, much as it shouldn’t need to be pointed out that an RCT is an intentionally artificial method designed to remove as many biases as possible. Be that as it may, one thing that is clear is that these patients could not have had truly severe asthma. Ruling out anyone requiring steroids for an acute exacerbation in the recent past and including only patients who could be off their long-acting bronchodilators for 48 hours and their short-acting bronchodilators for 6 hours pretty much guaranteed that.
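To make the crossover design described in the excerpt above a bit more concrete, here is a minimal sketch in Python of what such a block-randomized schedule might look like. This is my own illustration, not code from the study protocol, and all names in it are hypothetical.

```python
import random

# The four study arms described in the quoted protocol excerpt.
INTERVENTIONS = ["albuterol inhaler", "placebo inhaler",
                 "sham acupuncture", "no intervention"]

def subject_schedule(n_blocks=3, seed=None):
    """Build one subject's visit schedule: each block is a random
    permutation of the four interventions, so over three blocks a
    subject receives 12 interventions in total (4 per block)."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks):
        block = INTERVENTIONS[:]   # copy so the master list stays intact
        rng.shuffle(block)         # random order within this block
        schedule.extend(block)
    return schedule

if __name__ == "__main__":
    for visit, intervention in enumerate(subject_schedule(seed=1), start=1):
        print(f"Visit {visit:2d}: {intervention}")
```

The point of randomizing within each block, rather than shuffling all 12 visits at once, is that every subject is guaranteed to receive each intervention exactly once per block, which keeps the within-subject comparisons balanced.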

Everyone’s heard the old cliché that a picture is worth a thousand words, and this is exactly the sort of situation where that’s true. All I need to do is show you two graphs, and instead of one of my usual 5,000-word blog posts, you can have a 4,000-word post. Funny how that works. In any case, for your edification, here is a graph of the objective results of this study, namely the FEV1 for the four groups:


(Figure: change in FEV1 over time for the albuterol, placebo inhaler, sham acupuncture, and no-intervention arms. Click to embiggen.)

Not surprisingly, a known, effective bronchodilator had a very strong effect on the actual, objectively measured lung function of these patients. However, it should be noted that all groups improved, even the no-treatment group; it just improved much less than the albuterol group, and the sham acupuncture and placebo albuterol groups were indistinguishable from the no-treatment arm. In fact, the supplemental data also include a table showing that exhaled nitric oxide (FENO) was measured in 32 of the patients, with results that followed the same pattern: immediately after treatment, FENO increased by 5.9% in patients treated with double-blind albuterol, whereas patients treated with the placebo inhaler, placebo acupuncture, and no treatment demonstrated no significant change in FENO. This graph is about as clear and compelling evidence as there can be, within the limits of a relatively small trial, that placebo responses do not change the underlying physiology of asthma or produce any objectively measurable improvement in lung function the way that real medicine does.

Now, for your edification and comparison, here is a graph of the self-reported subjective improvements.


(Figure: patient-reported improvement on the visual analog scale for the four arms. Click to embiggen.)

The results are pretty striking, aren’t they? They were so striking that Steve couldn’t resist flipping back and forth between these two graphs for several seconds in order to drive home the point to the audience. The albuterol, sham acupuncture, and placebo albuterol groups all demonstrated a significant improvement in symptoms, while the no-intervention control did not improve nearly as much. However, here’s an important point. The scale used was a visual analog scale from 0 to 10, in which 0 means no improvement and 10 means complete resolution. So, again, even though the albuterol, sham acupuncture, and placebo albuterol groups all demonstrated subjective improvement, so did the no-treatment control arm, just less. In other words, all groups reported improvement, even those who received no treatment.

There’s another graph buried in the back of the supplemental data that I now wish we had also shown. Basically, it’s a look at how many patients responded objectively to treatment, as defined by an improvement in FEV1 of 12% or more, at each of the three sessions for each intervention. The results and pattern are striking:


(Figure: number of patients responding objectively, defined as an FEV1 improvement of 12% or more, at each of the three sessions, by intervention. Click to embiggen.)

Notice that, as expected, the vast majority of the patients responded to the albuterol at each session (3/3 sessions). In contrast, only 3% of patients responded 3/3 times to placebo, sham acupuncture, or no treatment. In fact, what’s striking is how similar those three graphs look and how different they look from the graph of patient responses to albuterol. Again, the message is very clear: Real medicine produces real, objectively measurable changes in physiology toward a more normally functioning state. Placebo medicine does not. In any rational, science-based discussion, this would be the end of the story. Placebos don’t work in asthma.
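For readers who like to see the arithmetic spelled out, here is a small, purely illustrative Python sketch of the responder criterion: a visit counts as an objective response when FEV1 improves by at least 12%, and a patient can therefore respond at 0, 1, 2, or 3 of the three sessions. The numbers below are made up for illustration and are not data from the study.

```python
# Illustrative only: classify "objective responders" using the >= 12%
# FEV1 improvement threshold discussed above. All numbers are invented.

RESPONSE_THRESHOLD = 0.12  # 12% improvement in FEV1

def fractional_change(baseline_fev1, post_fev1):
    """Fractional change in FEV1 from pre- to post-intervention."""
    return (post_fev1 - baseline_fev1) / baseline_fev1

def sessions_with_response(visits):
    """Count the visits (out of three) at which FEV1 improved by >= 12%.
    `visits` is a list of (baseline_fev1, post_fev1) tuples in liters."""
    return sum(
        1 for baseline, post in visits
        if fractional_change(baseline, post) >= RESPONSE_THRESHOLD
    )

# Hypothetical patients, three visits each (baseline FEV1, post FEV1).
albuterol_visits = [(2.10, 2.45), (2.05, 2.40), (2.20, 2.55)]
placebo_visits   = [(2.10, 2.15), (2.05, 2.10), (2.20, 2.18)]

print(sessions_with_response(albuterol_visits))  # 3 -> responded at 3/3 sessions
print(sessions_with_response(placebo_visits))    # 0 -> no objective response
```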

But that’s not the message that was being spread about this study, and here’s where the NEJM, less than a year after its massive fail in publishing a credulous Brian Berman acupuncture article and a clever bait-and-switch article looking at Tai Chi in fibromyalgia, allowed quackademic language to try to make left into right, up into down, and a negative result into an indication that placebo medicine is a good thing.

Spin, spin, spin, spin

As I read the discussion of this paper, I could almost hear the cracking of bones as Kaptchuk went into major contortions to try to explain his negative result. Even though nowhere did the authors explicitly state their real hypothesis, the design of the study makes it painfully clear to anyone who understands clinical research that their hypothesis going in was that placebo responses would result in changes in objectively measured lung function in asthma. They were sorely disappointed, and the contortions of language in the discussion are plain to see. The authors implied that the fault might lie with their use of a new, not really validated, patient-reported measure of asthma improvement. Or maybe, they argue, FEV1 isn’t a good measure of the severity of airway constriction in asthma, even though spirometry has been a reliable, well-validated test of asthma severity for decades, especially in an academic medical center with a lot of pulmonary specialists. While spirometry can be unreliable in primary care settings and other settings where there isn’t much experience performing it, that description does not apply to Harvard-affiliated hospitals. At least I would hope not.

Overall, the spin on this study is not that placebos don’t result in objectively measurable improvements, which is the correct conclusion. Rather, the spin is that subjective symptoms are as important as or more important than objective measures; so let’s use placebos. In the paper itself, Kaptchuk doesn’t quite say that. He first makes a perfectly reasonable point that, if subjective and objective findings don’t correlate, you should go with the objective findings. Then he does some handwaving:

Indeed, although improvement in objective measures of lung function would be expected to correlate with subjective measures, our study suggests that in clinical trials, reliance solely on subjective outcomes may be inherently unreliable, since they may be significantly influenced by placebo effects. However, even though objective physiological measures (e.g., FEV1) are important, other outcomes such as emergency room visits and quality-of-life metrics may be more clinically relevant to patients and physicians.

My jaw dropped when I read this. “Other outcomes” besides objective measures of disease severity may be “more clinically relevant”? The spin goes way beyond that, though. I have to think that the reviewers kept the authors from getting too frisky in their desire to advocate placebo medicine and to promote subjective outcomes as being more important than objective outcomes. No such restraint seemed to inhibit the author of the accompanying editorial, Daniel E. Moerman, Ph.D., who, alas, appears to be based practically in my back yard at the University of Michigan-Dearborn. I had never heard of him before, so I did what all bloggers do when they encounter an unknown: I Googled him. His CV is here, and this is what I found:

Daniel E. Moerman is the William E. Stirton Professor of Anthropology at the University of Michigan — Dearborn, so recognized for his distinguished scholarship, teaching, and professional accomplishments. Because of his work in the field of Native American ethnobotany, Professor Moerman often receives calls from the American Indian community, such as an inquiry from the Menominee in Wisconsin, asking him what kinds of plants they should include in the restoration of their indigenous ecosystem. He acknowledges that we are deeply indebted “to those predecessors of ours on the North American continent who, through glacial cold in a world populated by mammoths and saber-toothed tigers, seriously, deliberately, and thoughtfully studied the flora of a new world, learned its secrets, and encouraged the next generations to study closer and to learn more. Their diligence and energy, their insight and creativity, these are the marks of true scientists, dedicated to gaining meaningful and useful knowledge from a complex and confusing world.”

He’s also known for having written a book entitled Medicine, Meaning and the “Placebo Effect,” part of which can be found here, in particular this doozy of a quote:

There is much objection among physicians to the very existence of something called the placebo effect. It often seems to bother doctors enormously that the fact of receiving medical treatment (rather than the content of medical treatment) can initiate a healing process. Why? I think it is because medicine is rich in a particular kind of science. Medical education is filled with science. In the US, all students must score high on the “Medical College Admission Test” in order to be admitted to medical school. Students are allowed a total of 345 minutes to complete the exam. Eighty-five minutes are devoted to “verbal reasoning,” and 60 minutes to “writing sample.” The remaining 200 minutes (58.5%) are split evenly between “physical sciences” and “biological sciences.” It is apparently important that physicians understand levers, inclined planes, the acceleration of falling bodies, the life cycle of insects, and the process of photosynthesis. The kind of science that doctors have to learn is the simpler sort of science, the mechanical kind. Physicists worked out the mechanics of simple machines (levers, planes) in the seventeenth century. In our times, they have been working on much slipperier subjects: quarks, chaos, the “weak force,” and the oddest of quantum phenomena. Cause and effect are far less easy to detect in these matters than in the study of falling bodies…But it is the latter, not the former, in which physicians are schooled. And there is very little social science in medical education where one must address the complexities and subtleties of, say, emotion, or ritual, or culture.

If you detect shades of Deepak Chopra in there, you are correct, all with a dollop of utter contempt for Newtonian physics, which, I will remind you, is still accurate enough for most real-world purposes, given that few things we do reach relativistic speeds. Instead, Moerman invokes quarks, quantum theory, and other complexities and contrasts them with the “simpler” sciences that physicians apparently learn. One can almost feel the contempt for us poor, deluded physicians. Perhaps if I had known a bit about Professor Moerman, my jaw wouldn’t have dropped so far when I read this in the editorial accompanying the NEJM study:

What do we learn from this study? The authors conclude that the patient reports were “unreliable,” since they reported improvement when there was none — that is, the subjective experiences were simply wrong because they ignored the objective facts as measured by FEV1. But is this the right interpretation? It is the subjective symptoms that brought these patients to medical care in the first place. They came because they were wheezing and felt suffocated, not because they had a reduced FEV1. The fact that they felt improved even when their FEV1 had not increased begs the question, What is the more important outcome in medicine: the objective or the subjective, the doctor’s or the patient’s perception? This distinction is important, since it should direct us as to when patient-centered versus doctor-directed care should take place.

Apparently Moerman thinks that patient-centered care means inducing a patient through placebo responses to think that he feels better when in actuality the disease-impaired function of his organ (in this case, the lungs) puts him at risk for serious complications. He then goes on to write:

For subjective and functional conditions — for example, migraine, schizophrenia, back pain, depression, asthma, post-traumatic stress disorder, neurologic disorders such as Parkinson’s disease, inflammatory bowel disease and many other autoimmune disorders, any condition defined by symptoms, and anything idiopathic — a patient-centered approach requires that patient-preferred outcomes trump the judgment of the physician. Under these conditions, inert pills can be as useful as “real” ones; two inert pills can work better than one; colorful inert pills can work better than plain ones; and injections can work better than pills.

I find it hard not to notice that Moerman has cast a very wide net; virtually any condition outside of trauma could fit into his definition. I can’t help but think that, if I, for instance, had asthma and the severity of my symptoms didn’t correlate well with my objectively measured lung function as estimated by FEV1, then I would want my lung function tuned up. And if I didn’t want my lung function to be improved, I would hope that my doctor would be able to educate me as to why it is important to make my lungs function better, even though I feel OK. Moerman would seem to advocate telling me, “Oh, no, Dr. Gorski, don’t worry about those blue lips you have. That’s just an ‘objective’ finding. You feel OK, and, since I practice ‘patient-centered’ care, which teaches, among other things, that symptoms are the most important thing and the reason why you come to a doctor in the first place, your feeling better is all that matters!”

I’ll give you another example. Consider an epidural hematoma. If you crack your head hard enough, you can shear or damage one of the epidural arteries. The typical clinical course is that the patient is knocked unconscious by the head trauma. Later, he regains consciousness and experiences what is known in the biz as a “lucid interval” that can last several hours. What’s happening during that “lucid interval” is that blood is still accumulating, but the hematoma hasn’t yet reached a size large enough to cause damage; when it does, the patient deteriorates rapidly. Frequently, one of those “objective findings” is a CT scan that shows a small epidural hematoma, which may or may not blossom into a life-threatening hematoma that squashes the brain against the inside of the skull. That’s an “objective” finding. Even though the patient feels well, that hematoma could expand and kill him in a few hours.

No doubt Professor Moerman or Ted Kaptchuk would claim that these are ridiculous and unfair examples. No doubt they would say that this is not what they’re talking about, and that’s probably true. I’ll even concede that the epidural hematoma example was a bit over the top, but that was intentional.
However, whether they realize it or not, by elevating the subjective above the objective, and then offering placebo medicine for the subjective, these are exactly the sorts of arguments they are making when you strip them to their essence. No doubt Moerman or Kaptchuk would like to think that they would never, ever use such an approach for diseases with such potentially bad outcomes, but where do they draw the line? When, exactly, do we decide that subjective improvement is more important than objective improvement, and by what criteria? Moerman makes a great show of saying, “First, do no harm”:

Do we need to control for all meaning in order to show that a treatment is specifically effective? Maybe it is sufficient simply to show that a treatment yields significant improvement for the patients, has reasonable cost, and has no negative effects over the short or long term. This is, after all, the first tenet of medicine: “Do no harm.”

Clearly implicit in Moerman’s statement is the assumption that not intervening in the abnormal physiology of some diseases (for instance, asthma) doesn’t do harm. He’s wrong. Sometimes doing nothing is harmful, as it allows the disease to continue unchecked, possibly resulting in permanent end organ damage or even the death of the patient, and placebo medicine does nothing to prevent that.

Let’s return to asthma, since that is the disease this study examined. Even if a person with asthma seems to feel fine with a lowered FEV1, there is a price to be paid for leaving asthma untreated, which, let’s face it, is what placebo medicine amounts to: leaving the underlying disorder untreated. For instance, there is evidence that early treatment after diagnosis can prevent the airway remodeling that occurs in chronic asthma, in which airway constriction and inflammation lead to further narrowing of the airway and further functional decline. Moreover, if a case of asthma is severe enough, a patient could be walking the proverbial tightrope, where all it would take is a small insult to push him over into a life-threatening asthma exacerbation or pneumonia, whereas if lung function in an asthmatic is tuned up as well as it can be, he has a lot farther to deteriorate before reaching that dangerous point. Let’s also not forget: asthma can and does kill, causing some 250,000 deaths per year worldwide. Choosing alternative medicine over effective asthma treatment because placebo responses lead to feeling better without altering the underlying illness could very well lead to preventable asthma deaths.

In the end, I’m a bit torn about this study. On the one hand, it irritates me to no end how it is being sold to the public as evidence of “powerful” placebo effects and as evidence that we physicians should be doing more placebo medicine. On the other hand, the fact that CAM advocates are reduced to spinning studies like this one the way they are is pretty darned conclusive evidence that they now know that, from the standpoint of therapy, the vast majority of CAM modalities do nothing and are in fact placebo medicine. The problem is that in some diseases, such as asthma, placebos run the risk of allowing serious harm from the lack of effective intervention that actually alters the course of the disease. If the therapeutic relationship is so damaged in the U.S. that the beneficial effects of provider-patient interactions are not being realized, whether you want to refer to those effects as the “placebo response” or something else, the answer is to fix medicine to make it easier and more rewarding for physicians to spend that time with patients. The answer is not to embrace magical thinking like that behind acupuncture, homeopathy, and huge swaths of CAM. To argue otherwise is a false dichotomy.
