Answering a criticism of science-based medicine

Attacks on science-based medicine (SBM) come in many forms. There are the loony forms that we see daily from the anti-vaccine movement, quackery promoters like Mike Adams and Joe Mercola, those who engage in “quackademic medicine,” and postmodernists who view science as “just another narrative,” as valid as any other, or who even view science- and evidence-based medicine as “microfascism.” Sometimes these complaints come from self-proclaimed champions of evidence-based medicine (EBM) who, their self-characterization notwithstanding, show signs of having a bit of a soft spot for the ol’ woo. Then sometimes there are thoughtful, serious criticisms of some of the assumptions that underlie SBM.

The criticism I am about to address tries to be one of these but ultimately fails because it attacks a straw man version of SBM.

True, the criticism of SBM I’m about to address does come from someone named Steve Simon, who vocally supports EBM but doesn’t like the criticism of EBM implicit in the very creation of the concept of SBM. Simon has even written a very good deconstruction of postmodern attacks on EBM himself, as well as quite a few other good discussions of medicine and statistics. Unfortunately, in his criticism, Simon appears to have completely missed the point about the difference between SBM and EBM. As a result, his criticisms of SBM wind up being mostly the application of a flamethrower to a Burning Man-sized straw man representing what he thinks SBM to be. It makes for a fun fireworks show but is ultimately misdirected, a lot of heat but little light. For a bit of background, Simon’s post first piqued my curiosity because of its title, Is there something better than Evidence Based Medicine out there? The other reason that it caught my attention was the extreme naiveté revealed in the arguments used. In fact, Simon’s naiveté reminds me very much of my very own naiveté about three years ago.

Here’s the point where I tell you a secret about the very creation of this blog. Shortly after Steve Novella invited me to join, the founding members of SBM engaged in several frank and free-wheeling e-mail exchanges about what the blog should be like, what topics we wanted to cover, and what our philosophy should be. One of these exchanges was about the very nature of SBM and how it is distinguished from EBM, the latter of which I viewed as the best way to practice medicine. During that exchange, I made arguments that, in retrospect, were eerily similar to the ones by Simon that I’m about to address right now. Oh, how epic these arguments were! In retrospect, I can but shake my head at my own extreme naiveté, which I now see mirrored in Simon’s criticism of SBM. Yes, I was converted, so to speak (if you’ll forgive the religious terminology), which is why I see in Simon’s article a lot of my former self, at least in terms of how I used to view evidence in medicine.

The main gist of Simon’s complaint comes right at the beginning of his article:

Someone asked me about a claim made on an interesting blog, Science Based Medicine. The blog claims that Science Based Medicine (SBM) tries to draw a distinction between that practice and Evidence Based Medicine (EBM). SBM is better because “EBM, in a nutshell, ignores prior probability (unless there is no other available evidence) and falls for the p-value fallacy; SBM does not.” Here’s what I wrote.

No. The gist of the science based medicine blog appears to be that we should not encourage research into medical therapies that have no plausible scientific mechanism. That’s quite a different message, in my opinion, than the message promoted by the p-value fallacy article by Goodman.

First off, Simon’s complaint makes me wonder if he actually read Dr. Atwood’s entire post. To show you what I mean, I present here the whole quote from Dr. Atwood in context:

EBM, in a nutshell, ignores prior probability† (unless there is no other available evidence) and falls for the “p-value fallacy”; SBM does not. Please don’t bicker about this if you haven’t read the links above and some of their own references, particularly the EBM Levels of Evidence scheme and two articles by Steven Goodman (here and here). Also, note that it is not necessary to agree with Ioannidis that “most published research findings are false” to agree with his assertion, quoted above, about what determines the probability that a research finding is true.

Simon, unfortunately, decides to bicker. In doing so, he builds a massive straw man. I’m going to jump ahead to the passage that most reveals Simon’s extreme naiveté:

No thoughtful practitioner of EBM, to my knowledge, has suggested that EBM ignore scientific mechanisms.

Talk about a “no true Scotsman” fallacy!

You know, about three years ago I can recall writing almost exactly the same thing in the aforementioned epic e-mail exchange arguing the very nature of EBM versus SBM. The problem, of course, is not that EBM completely ignores scientific mechanisms. That’s every bit as much of a straw man characterization of SBM as the characterization that Simon skewered of EBM being only about randomized clinical trials (RCTs). The problem with EBM is, rather, that it ranks basic science principles on either the very lowest rung or the second-lowest rung of the various hierarchies of evidence that EBM promulgates as the way to evaluate the reliability of scientific evidence to be used in deciding which therapies work. The most well-known of these is the one published by the Centre for Evidence-Based Medicine, but there are others. Eddie Lang, for instance, places basic research second from the bottom, just above anecdotal clinical experience of the sort favored by Dr. Jay Gordon (see Figure 2). Duke University doesn’t even really mention basic science; rather, it appears to lump it together at the very bottom of the evidence pyramid under “background information.” When I first started to appreciate the difference between EBM and SBM, I basically had to be dragged, kicking and screaming, by Steve and Kimball to look at these charts and realize that, yes, in the formal hierarchies of evidence used by the major centers for EBM, basic science and plausible scientific mechanisms do rank at or near the bottom. I didn’t want to accept that it was true. I really didn’t. I didn’t want to believe that SBM is not synonymous with EBM, which it would be in an ideal world. Simon apparently doesn’t either:

Everybody seems to criticize EBM for an exclusive reliance on randomized clinical trials (RCTs). The blog uses the term “methodolatry” in this context. A group of nurses who advocate a post-modern philosophical approach to medical care also criticized EBM and used an even stronger term, micro-fascism, to describe the tendency of EBM to rely exclusively on RCTs.

But I have not seen any serious evidence of EBM relying exclusively on RCTs. That’s certainly not what David Sackett was proposing in the 1996 BMJ editorial “Evidence based medicine: what it is and what it isn’t”. Trish Greenhalgh elaborates quite clearly in her book “How to Read a Paper: The Basics of Evidence Based Medicine” that EBM is much more than relying on the best clinical trial. There is, perhaps, too great a tendency for EBM proponents to rely on checklists, but that is an understandable and forgivable excess.

I must admit to considerable puzzlement here. EBM lists randomized clinical trials (RCTs) and meta-analyses or systematic reviews of RCTs as the highest form of evidence, yet Simon says he sees no serious evidence of EBM relying exclusively on RCTs. I suppose that’s true in a trivial sort of way, given that there are conditions and questions for which there are few or no good RCTs. When that is the case, one has no option but to rely on “lower” forms of evidence. However, the impetus behind EBM is to use RCTs wherever possible in order to decide which therapies are best. If that weren’t true, why elevate RCTs to the very top of the evidence hierarchy? Simon is basically misstating the complaint anyway. We do not criticize EBM for an “exclusive” reliance on RCTs but rather for an overreliance on RCTs devoid of scientific context.

Simon then decides to try to turn the charge of “methodolatry,” or, as revere once famously called it, the profane worship of the randomized clinical trial as the only valid method of investigation, against us. This misinterpretation of what SBM is leads Simon, after having accused SBM of leveling straw man attacks against EBM, to build up that aforementioned Burning Man-sized straw man himself, which he then lights on fire with gusto:

I would argue further that it is a form of methodolatry to insist on a plausible scientific mechanism as a pre-requisite for ANY research for a medical intervention. It should be a strong consideration, but we need to remember that many medical discoveries preceded the identification of a plausible scientific mechanism.

While this is mostly true, one might point out that, once the mechanisms behind such discoveries were identified, all of them had a degree of plausibility in that they did not require the overthrow of huge swaths of well-settled science in order to be accepted as valid. Let’s take the example of homeopathy. I use homeopathy a lot because it is, quite literally, water and because its proposed mechanism of action goes against huge swaths of science that have been well characterized for centuries. I’m not just talking one scientific discipline, either. For homeopathy to be true, much of what we currently understand about physics, chemistry, and biology would have to be, as I am wont to say, not just wrong, but spectacularly wrong. That is more than just lacking prior plausibility. It’s about as close to being impossible as one can imagine in science. Now, I suppose there is a possibility that scientists could be spectacularly wrong about so much settled science at once. If they are, however, it would take compelling evidence on the order of the mass of evidence that supports the impossibility of homeopathy to make that possibility worth taking seriously. Extraordinary claims require extraordinary evidence. RCTs showing barely statistically significant effects do not constitute extraordinary evidence, given that chance alone guarantees that some RCTs will be positive even in the absence of an effect, to say nothing of the biases and deficiencies to which even RCTs are prone. Kimball explains this concept quite well:

When this sort of evidence [the abundant basic science evidence demonstrating homeopathy to be incredibly implausible] is weighed against the equivocal clinical trial literature, it is abundantly clear that homeopathic “remedies” have no specific, biological effects. Yet EBM relegates such evidence to “Level 5”: the lowest in the scheme. How persuasive is the evidence that EBM dismisses? The “infinitesimals” claim alone is the equivalent of a proposal for a perpetual motion machine. The same medical academics who call for more studies of homeopathy would be embarrassed, one hopes, to be found insisting upon “studies” of perpetual motion machines. Basic chemistry is still a prerequisite for medical school, as far as I’m aware.

Yes, Simon is indeed tearing down a straw man. As Kimball himself would no doubt agree, even the most hardcore SBM aficionado does not insist on a plausible scientific mechanism as a “pre-requisite” for “ANY” research, as Simon claims. Rather, what we insist on is that the range of potential mechanisms proposed not require breaking the laws of physics or, failing that, that there be highly compelling evidence that the therapy under study actually has some sort of effect, evidence sufficient to make us doubt our understanding of the biology involved.
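The role prior plausibility plays in interpreting a “positive” trial can be made concrete with a back-of-the-envelope Bayesian sketch. The numbers below (the priors, the significance threshold, the statistical power) are purely illustrative assumptions, not figures from any actual trial:

```python
# Back-of-the-envelope Bayes: what does a "positive" RCT (p < 0.05)
# do to the probability that a therapy actually works?

def posterior_given_positive(prior, alpha=0.05, power=0.80):
    """P(effect is real | trial came up 'positive'), by Bayes' theorem.

    prior: probability, before the trial, that the effect is real
    alpha: false-positive rate (trial positive by chance alone)
    power: probability the trial detects a real effect
    """
    true_pos = prior * power           # real effect, and the trial detects it
    false_pos = (1 - prior) * alpha    # no effect, but the trial is positive anyway
    return true_pos / (true_pos + false_pos)

# A conventional drug with decent preclinical support (illustrative prior):
print(posterior_given_positive(0.5))     # ~0.94

# A therapy whose mechanism would require overturning settled physics
# and chemistry (illustrative, and arguably still far too generous):
print(posterior_given_positive(1e-6))    # still well under 1 in 10,000
```

Even a flawless p < 0.05 result barely moves a vanishingly small prior; that, in a nutshell, is the argument for weighing basic science alongside the trial literature rather than beneath it.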

Simon then appeals to there being some sort of “societal value” to test interventions that are widely used in society even when those interventions have no plausible mechanism. I might agree with him, except for two considerations. First, no amount of studies will convince, for example, homeopaths that homeopathy doesn’t work. Witness Dana Ullman if you don’t believe me. Second, research funds are scarce and likely to become even more so over the next few years. From a societal perspective, it’s very hard to justify allocating scarce research dollars to the study of incredibly implausible therapies like homeopathy, reiki, or therapeutic touch. (After all, reiki is nothing more than faith healing based on Eastern mystic religious beliefs rather than Christianity.) Given that, for the foreseeable future, research funding will be a zero sum game, it would be incredibly irresponsible to allocate funds to studies of magic and fairy dust like homeopathy, knowing that those are funds that won’t be going to treatment modalities that might actually work.

When it all comes down to it, I think that Simon is, as I was, in denial. When confronted with the whole concept of SBM compared to EBM, I denied what I didn’t want to believe. To me, it seemed so utterly obvious that the scientific plausibility of the hypothesis under study has to be taken into account in evaluating the evidence. I just couldn’t imagine that any system of evaluating evidence could be otherwise; it made no sense to me. So I imposed this common-sense view on EBM, and I rather suspect that many other advocates of EBM like Simon labor under the same delusion I did. The problem is, though, that critics of EBM are basically correct on this score. Still, realizing it or admitting it did not come easy. For me to accept that EBM had a blind spot when it came to basic science, it took having my face rubbed in unethical and scientifically dubious trials like that of the Gonzalez therapy for pancreatic cancer or chelation therapy for cardiovascular disease. Let’s put it this way. A willingness to waste money studying something that is nothing but water, and whose “scientific basis” is a hypothesis equivalent to claiming that a perpetual motion machine can be constructed, tells me that, under EBM, basic science counts for close to nothing. Ditto wasting money on studying a therapy whose major component is coffee enemas used to treat a deadly cancer. Simon cheekily suggests at the end of his post that “maybe we should distinguish between EBM and PIEBM (poorly Implemented Evidence Based Medicine).” The problem is, trials of therapies like the Gonzalez regimen, homeopathy, and reiki are a feature of, not a bug in, EBM. In fact, I challenge Simon to provide a rationale, under EBM as it is currently constituted, for not having to do a clinical trial of these therapies. There is none.

I realize that others have said it before here (and probably said it better than I), but we at SBM are not hostile to EBM at all. Rather, we view EBM as incomplete, a subset of SBM. It’s also too easily corrupted to provide an air of scientific legitimacy to fairy dust like homeopathy and reiki. These problems, we argue, can be ameliorated by expanding EBM into SBM. Personally, I suspect that the originators of EBM, like Simon and my former self, never thought of the possibility of EBM being applied to hypotheses as awe-inspiringly implausible as those of CAM. It simply never occurred to them; they probably assumed that any hypothesis that reaches the clinical trial stage must have good preclinical (i.e., basic science) evidence to support its efficacy. But we know now that this isn’t the case. I can’t speak for everyone else here, but after agreeing with Kimball that EBM ought to be synonymous with SBM, I also express the hope that one day there will be no distinction between SBM and EBM. Unfortunately, we aren’t there yet.

NOTE: There will be one more post later today; so don’t go away just yet.

Integrating patient experience into research and clinical medicine: Towards true “personalized medicine”

We advocate science-based medicine (SBM) on this blog. However, from time to time, I feel it necessary to point out that science-based medicine is not the same thing as turning medicine into a science. Rather, we argue that what we do as clinicians should be based in science. This is not a distinction without a difference. If we were practicing pure science, we would in theory be able to create algorithms and flowcharts telling us how to care for patients with any given condition, and we would never deviate from them. It is true that we do have algorithms and flowcharts suggesting guidelines for care for a wide variety of conditions, but there is wide latitude in them, and often a physician’s “judgment” still ends up trumping the guidelines. It is also true that physicians sometimes have an overinflated view of the quality of their own “clinical judgment,” sometimes to the point of rejecting well-established science, as Dr. Jay Gordon frequently does. Still, what I consider a physician’s judgment is knowing how to apply existing medical science to individual patients based on their circumstances and, yes, even their desires and values.

Indeed, if there’s one area where SBM has all too often fallen short in the past, it’s in taking into account the patient’s experience with various treatments. What got me thinking (again) about this issue was an article by Dr. Pauline Chen in the New York Times last Thursday entitled Listening to Patients Living With Illness. She begins her article with an anecdote:

Wiry, fair-haired and in his 60s, the patient had received a prostate cancer diagnosis a year earlier. When his doctors told him that surgery and radiation therapy were equally effective and that it was up to him to decide, he chose radiation with little hesitation.

But one afternoon a month after completing his treatment, the patient was shocked to see red urine collecting in the urinal. After his doctors performed a series of tests and bladder irrigations through a pencil-size catheter, he learned that the bleeding was a complication of the radiation treatment.

He recalled briefly hearing about this side effect three months earlier, but none of the reports he had been given or collected mentioned it, and once he had recovered from the angst of the emergency room and the doctor’s office visits and the discomfort of the clinical work-up, he didn’t give it more thought — until a few weeks later, when he started bleeding again.

By the time I met him, he was in the middle of his third visit to the hospital. “I feel like I’m tied to this place,” he said. He showed me a plastic jug partly filled with urine the color of fruit punch, and he described a post-treatment life marked by fear of going to the bathroom and discovering blood. “If I had known that my life would be like this after radiation,” he sighed, “I would have chosen the surgery.”

To this, I’ll add a little random bit of personal experience of my own. No, I wasn’t a patient who had to face something like this patient, but I do see something similar in my patients. Back when I was in my surgical oncology fellowship — and before that, in my general surgery residency — I was always taught that lumpectomy was preferable to mastectomy because it saves the breast and most women want to save their breasts. After all, lumpectomy plus radiation therapy results in the same chance of survival as mastectomy; so we should offer lumpectomy whenever tumor characteristics (the main one being size relative to the rest of the breast) permit it. Yet this assessment often neglects to acknowledge that, for some women, undergoing six or seven weeks of radiation is horribly inconvenient, and that there are often complications. It also often neglects to acknowledge that there is a price for saving the breast besides having to undergo radiation therapy: there’s the possibility of more surgery to achieve clear surgical margins, not to mention a higher risk of local recurrence in the breast. For some women, this latter possibility is a deal-breaker. Even though they acknowledge that their chances of survival would be the same with lumpectomy or mastectomy, the thought of an approximately 8% local recurrence rate eats at them to the point that they opt for mastectomy.

Then there is the issue of chemotherapy. We frequently recommend cytotoxic chemotherapy for women with relatively early stage breast cancer, even though the addition of chemotherapy in such patients only increases the chance of survival by perhaps 2–3% on an absolute basis, depending upon the tumor. Of course, as I’ve pointed out before, the benefits of chemotherapy are more marked in more advanced operable tumors, but in early stage tumors they are rather modest. This is therapy that causes hair loss, increased risk of infections, and can cause damage to the heart, but it is the standard of care. Most women are willing to undergo this sort of therapy, too; I can’t locate the study, but I’ve seen one survey where women respond that they would be willing to undergo chemotherapy for a 1% increased chance of survival.

The point is that these sorts of questions are value judgments that often depend upon what patients consider important. The patient described by Dr. Chen, for instance, would apparently have preferred the risks of surgery to peeing blood all the time and having to go back to the doctor’s office and hospital time and time again for this problem. Science can tell a physician and patient like this that radiation or surgery will produce an equivalent chance of surviving his cancer. It can tell them what the complications of each choice are likely to be, and what the odds are of each complication. That’s part of what I mean when I refer to science-based medicine. What it can’t tell the patient and doctor is which constellation of risks would be more easily bearable by the patient. The same is true for whether to choose mastectomy or lumpectomy plus radiation, or whether to opt for chemotherapy after breast cancer surgery. Science provides the numbers and the “price” of each choice, but it can’t — nor should it — tell the patient what to value. Moreover, what the patient values may not be what the physician values. As Dr. Chen points out:

Whether conducted at a laboratory bench or in clinical trials, medical research has long been driven by a single overriding goal — the need to find a cure. Usually referred to more modestly as a search for “the most effective treatment,” this standard has served as both a barometer of success and a major criterion for funding. Most published studies are marked by a preponderance of data documenting even minor blips in laboratory values or changes in the size of a spot of cancer or area of heart muscle damage on specialized X-rays. Some studies bolster the apparent success of their results with additional data on societal effects like treatment costs or numbers of workdays missed.

Few studies, however, focus on the patient experience.

She then refers to a study published in the journal Health Affairs entitled Adding The Patient Perspective To Comparative Effectiveness Research, whose lead author, Dr. Albert W. Wu, is a general internist and professor of health policy and management at the Johns Hopkins Bloomberg School of Public Health in Baltimore. In this study, Wu et al argue for the inclusion of the patient’s perspective in comparative effectiveness research. What this involves is patient-reported outcomes. To illustrate the concept, Wu et al use this chart for patients with chronic obstructive pulmonary disease (COPD).

These sorts of measures are particularly appropriate for comparative effectiveness research (CER). To see why, consider what CER is: basically, CER compares existing treatment modalities already determined to be effective in prior clinical trials in order to determine which is more effective. Other important measures include cost-effectiveness. However, although some effort goes into assessing patient-reported quality of life outcomes of the sort listed above, all too often it’s hit-or-miss whether these sorts of measurements are included in clinical trials. One initiative that this article describes is the Patient-Centered Outcomes Research Institute (PCORI), whose mandate is to:

  • Establish an objective research agenda;
  • Develop research methodological standards;
  • Contract with eligible entities to conduct the research;
  • Ensure transparency by requesting public input; and
  • Disseminate the results to patients and healthcare providers.

Wu et al suggest that the PCORI can only realize its potential if it supports initiatives that integrate measures of patient experience into not just research but into routine clinical care. A number of possibilities are suggested, including how to integrate general and disease-specific tools into clinical trials in order to measure patient-reported outcomes. Also suggested are various means of integrating these tools not just into clinical research but into routine clinical care, including using them in administrative claims data, linking this data to electronic medical records, and even promoting the collection of such data as being required for reimbursement.

One problem I can perceive immediately in trying to use the PCORI is that it has no real power. In fact, the health insurance reform bill known as the Patient Protection and Affordable Care Act (PPACA), which mandated the creation of the PCORI, grants it no such power. Indeed, its main charge is to assess “relative health outcomes, clinical effectiveness, and appropriateness” of different medical treatments, both by evaluating existing studies and conducting its own. Even given that huge mandate, the law also states that the PCORI does not have the power to mandate, or even endorse, coverage rules or reimbursement for any particular treatment. Indeed, so toothless is the PCORI, at least in its present form, that it has been disdainfully described as being like the UK’s NICE but without any teeth, which is all too true. Basically, the law says that Medicare may take the institute’s research into account when deciding what procedures it will cover, as long as the new research is not the sole justification and the agency allows for public input. Moreover, the political reaction to the USPSTF’s revision of the guidelines for mammographic screening last year suggests that, if politicians don’t like a PCORI recommendation, they’ll behave similarly. After all the ranting about “rationing” that was used to attack the PPACA, it was not politically feasible to make the PCORI a government agency or to imbue it with any real authority.

Politics aside, let’s get back to the sorts of initiatives suggested by Wu et al. One that in particular interests me is the concept of using patient portals to collect this information. Patient portals are websites that offer a variety of services to patients, including secure e-mail communication with the clinician, the ability to schedule appointments and request prescription refills, as well as the opportunity to complete intake and other forms that used to be completed on paper in the office. The authors propose using such portals to collect patient-centered quality of life measurements and give an example of how this might be done in the case of a hypothetical breast cancer patient:

In one possible scenario, a woman with breast cancer is being followed by an oncologist who would like to know how she is doing on the chemotherapy regimen she is receiving. The oncologist logs on to PatientViewpoint.org, enters the patient’s number, and orders the BR-23 Breast Cancer–Specific Quality of Life Questionnaire for her to complete online before her next visit. The patient receives an e-mail notification to do this, logs on to PatientViewpoint.org, and completes the survey.

The patient’s results are automatically calculated and are made available both on the website and within the hospital’s electronic health record alongside all of her other laboratory test results. At the visit, the oncologist pulls up the results and asks the patient about an increase in her depression scores. It would also be possible to aggregate all of the patient’s questionnaire results with those of other patients receiving chemotherapy for similar breast cancer cases and to use these data to help compare the effectiveness of different regimens.

Dr. Wu’s site is currently only set up to accommodate breast and prostate cancer patients, but it could be expanded. There now exist a large number of tools like the BR-23 to assess quality of life, and, with what appears to be the nigh inevitable infiltration of the electronic medical record into medicine over the next several years, integrating such tools into routine clinical care should become increasingly easy and inexpensive. On the other hand, one problem with such tools is that clinicians are already buried in “information overload.” Whether they would actually read and use the results of such studies outside the context of clinical trials is not assured, at least not if there is no incentive to do so. If this sort of approach is going to work, the government and insurance companies are going to have to pony up. Another problem is that a lot of doctors don’t like this sort of measurement. They consider it unscientific and “squishy” or they don’t know what to do with the information. Whether these attitudes will change or not as CER becomes increasingly embedded in clinical research is impossible to say.

Dr. Wu’s article leads me to reflect upon two things. First, it’s important to remember that the reason these “softer,” “squishier” measures are becoming more important is precisely because SBM has been so successful. Diseases that were once fatal are now chronic. A prime example is HIV/AIDS. Back when I was in medical school, HIV was invariably fatal. AIDS patients died rapidly — and in most unpleasant ways. Thanks to SBM, which developed the cocktails of antiretroviral drugs, HIV/AIDS has become a chronic disease, so much so that babies born with HIV are now approaching adulthood. What this success means is that, by and large (although not completely), mortality is no longer the be-all and end-all of HIV treatment. Now, we are seeing quality of life issues coming to the fore. The same is true for some cancers, and it’s certainly true for diabetes and heart disease. As Wu et al point out:

Patient-reported outcomes directly support the primary goal of much of health care: to improve health-related quality of life, particularly for people with chronic illnesses. No one can judge this better than the patient. For example, the main objective of hip replacement surgery is to reduce pain and improve the capacity to get around. The main goal of cataract extraction is to improve visual functioning—that is, the ability to perform activities that require eyesight, such as reading, walking without falls, and working on a computer.

In addition, there are often trade-offs between the length and quality of life. Important considerations are the side effects of treatment of HIV disease, the temporary diminution of functioning after coronary bypass surgery, or fatigue resulting from cancer chemotherapy. Even for life-saving treatments, this kind of trade-off can influence a patient’s decision making among alternative courses of care.

Once again, these decisions and the trade-offs patients decide to accept should be informed by the science. The options presented to the patient, along with their cost in terms of potential complications and their impact on the patient’s ability to go about his daily activities and, in essence, live his life, must be based on science. However, that does not mean that the final determination will always be based purely on estimates of efficacy. If the patient decides, for instance, that the survival advantage that chemotherapy will provide after her breast cancer surgery is not sufficient to be worth months of hair loss, fatigue, and the risk of heart damage, then that is her choice. The key is that we as clinicians must make sure that she has accurate, science-based information upon which to base that choice. Informed consent must be based on sound, scientifically verified information. Anything else, such as the sorts of “informed consent” advocated by “health freedom” groups, is in reality misinformed consent. It is our responsibility as science-based practitioners to do our best to make sure that the treatments we offer our patients are based in science and that the information about the relative benefits, risks, and costs of these treatments is also based in science.

The second thing that comes to my mind is the complete contrast between the sorts of efforts that Wu et al are undertaking and what purveyors of unscientific so-called “complementary and alternative medicine” (CAM) do. Through CER, SBM is undertaking systematic measurement of quality of life and adopting genetic tests that provide information about prognosis and predict response to therapy, taking its first real steps towards truly “personalized” medicine. Yes, these steps are halting — stumbling at times, even — but they are steps towards the day when SBM can offer patients treatment options based on science and personalized to the characteristics of the biology of their disease that are unique to them, all while taking the patients’ own values and desires into account. Contrast that to so-called CAM, where “personalized medicine” basically means making it up as the practitioner goes along, and I think you’ll see what I mean. Whatever the deficiencies and faults of SBM (and it’s impossible not to concede that there are many), SBM is far closer to true “personalized medicine” than any CAM, and it is using CER to come even closer still. CAM has nothing to compare.

CAM and the Law, Part 1: Introduction to the issues

When I write or talk about the scientific evidence against particular alternative medical approaches, I am frequently asked the question, “So, if it doesn’t work, why is it legal?” Believers in CAM ask this to show that there must be something to what they are promoting or, presumably, the government wouldn’t let them sell it. And skeptics raise the question often out of sheer incredulity that anyone would be allowed to make money selling a medical therapy that doesn’t work. It turns out that the answer to this question is a complex, multilayered story involving science, history, politics, religion, and culture. 

While we science types tend to be primarily interested in what is true and what isn’t, that is sometimes a surprisingly minor factor in the process of constructing laws and regulations concerning medicine. What I hope to do in this series of essays is look at some of the major themes involved in the regulation of medical practice, particularly as they relate to alternative medicine. I will begin by touching on some of the general philosophical and legal issues that have defined the debate among the politicians and lawyers responsible for shaping the legal environment in which medicine is practiced. Then I will review some of the specific domains within this environment, including: medical licensure and scope-of-practice laws; malpractice law; FDA regulation of drugs, homeopathic remedies, and dietary supplements; truth-in-advertising law; and anti-trust law.

But first…

The Disclaimer

Obviously, an exhaustive and comprehensive look at the Byzantine and unstable landscape of medical law is beyond the scope of both this blog and my own knowledge and expertise. I am no lawyer, and for the details of the laws and judicial opinions concerning this subject I must rely on sources whose accuracy I am not qualified to verify independently. Much of the published material I have found on CAM and the law seems written from a political and ideological perspective sympathetic to the postmodernist notion of multiple equally legitimate “ways of knowing,” and also to a laissez-faire approach to regulation generally. So clearly the details provided and the interpretations given in such writings may not fairly represent the legal or regulatory environment. In any case, while I hope to provide some useful insight into how CAM fits into the system of medical law and regulation in the United States, nothing I say should be taken as the definitive word on the law or as legal advice.

Caveat Emptor v. Caveat Venditor

There is a deep ideological divide in America on the subject of who is responsible for ensuring that the products we buy are safe and perform as advertised, and the area of medicine is not exempt from this political debate. On one extreme is the self-identified “health freedom” lobby, which argues that the consumer and the market should be the only forces to regulate healthcare products and services. As an example, economist Randall Holcombe has written:

An auto mechanic does not have to be a medical expert to use market information to find good health care, any more than a doctor has to be an automobile expert to find a good car…Deregulation not only provides incentives for patients to look for, and physicians to offer, better care, it permits all parties concerned the freedom to decide what better care is. For instance, in the debate over alternative medicine, such as herbal treatments, chiropractors, acupuncture, and so on, the question is not only whether alternative medicine is effective, but whether people should be allowed to use these alternatives even if their physical health may not improve or may even suffer….In a free country, people should be free to choose whatever health care options they want for whatever purpose…even if healthcare professionals believe that care is substandard.1

Those more sympathetic to laws and regulations intended to protect consumers from unsafe and ineffective therapies argue against this concept of “medical anarchism:”

Why not let the market decide? Why not trust the citizenry to sort out what works from what doesn’t work in medicine as we do in other aspects of life?

The answer has to do with knowledge and risk. People do let the market decide with regard to goods like ice cream cones and baseball bats, and services like travel booking. If the ice cream is not good, people won’t buy it; if the service is defective, people will go elsewhere. However, in such situations, people are easily able to evaluate the quality and value of the goods and services they receive…Nor are such services administered under duress, nor are they represented as necessary for one’s health or well-being…

But in the area of medicine, too much is at stake. If one chooses the wrong therapeutic modality, one can lose health, life, and limb. Furthermore, few individuals are sufficiently wealthy, educated, or possessed of the resources to test putative medical therapies. In fact, there are so many putative therapies that it is impossible for an individual to try them all. When people are ill, they do not have time to test even a handful.2

These arguments tend to run in parallel, and to be only tenuously connected, with the usual focus of this blog: the question of how one evaluates medical therapies and what the evaluation indicates about safety and efficacy. Of course, many proponents of CAM who invoke the “health freedom” position do actually believe the therapies they promote are beneficial. But the fundamental position itself does not hinge on this, since from a perspective such as Dr. Holcombe’s people should be free to choose even therapies that are ineffective or harmful without “burdensome” government regulatory interference. The self-evident notion that it is the role of government to protect the public from quackery turns out not to be self-evident to many Americans, and thus demonstrating that a given approach is quackery may not be sufficient to convince them that it should be prohibited or even officially discouraged.

The Right to Privacy v. State Police Powers

In the legal arena, the political conflict between those favoring or opposing aggressive consumer protection regulations in the area of healthcare takes the form of statutes and judicial opinions balancing the competing constitutional principles of an individual right to privacy and a governmental authority, or even mandate, to protect the public health. Neither a right to privacy or absolute authority over one’s own body, nor a government role in regulating healthcare, is specifically mentioned in the U.S. Constitution, but both are held to exist by long-standing interpretation. A right to privacy, including control over one’s own body and the care of it, is generally believed to be established by a broad reading of the 14th Amendment, though there is some controversy about this as about most areas of constitutional law. The authority of the state to abrogate this right in the process of protecting the public health is usually understood to be based in the “police powers” established by the 10th Amendment.

In 1824, the Supreme Court made reference to “health laws of every description” as encompassed within the “state police powers,” those powers not specifically delegated to the federal government nor prohibited to the States which are thus held, under the Tenth Amendment, to be the prerogative of the individual states.3 The court cited and expanded this opinion in a subsequent case in 1905, in which a state mandate to protect the public health was held to override, at least in some circumstances, the individual right to control one’s own body. The case involved a man prosecuted for refusing a mandatory smallpox vaccination. The opinion stated:

The authority of the state to enact this statute is to be referred to what is commonly called the police power…this court …distinctly recognized the authority of a state to enact quarantine laws and “health laws of every description…”

The defendant insists that his liberty is invaded when the state subjects him to fine or imprisonment for neglecting or refusing to submit to vaccination…and that the execution of such a law…is nothing short of an assault upon his person. But the liberty secured by the Constitution of the United States…does not import an absolute right in each person to be, at all times and in all circumstances, wholly freed from restraint. There are manifold restraints to which every person is necessarily subject for the common good.4

The court went on to specifically balance the “liberty secured by the 14th Amendment,” including “the control of one’s body,” against “the power of the public to guard itself against imminent danger” and concluded that under at least some circumstances the authority to protect the public health trumps the right of an individual to control his or her own body.

This precedent was further developed and expanded in subsequent cases to validate the state’s authority to define and regulate medical practices, to control what practices could be offered and by whom via licensing and scope-of-practice laws, and to prohibit individuals from choosing specific medical treatments if these were considered to be ineffective or dangerous. I will discuss the specifics of these cases in subsequent posts. But for now I simply want to illustrate that the legal basis for the regulations of medical practice which today pertain to CAM, as well as scientific medicine, is generally seen by the courts as a balance between the individual right to privacy and the state authority to protect public health.1,5

Just the Facts, Ma’am?*

I feel it is important to emphasize again that the question of the medical facts in such cases, and how these are established, is not always seen by the courts to be as relevant as the legal or political issues. For example, in Jacobson v. Massachusetts the court specifically addressed the factual claims by the defendant that the vaccine was ineffective and unsafe. The court’s reasoning will seem familiar, and disturbing, to those of us dealing with the anti-vaccination movement today:

The appellant claims that vaccination does not tend to prevent smallpox, but tends to bring about other diseases, and that it does much harm, with no good. It must be conceded that some laymen, both learned and unlearned, and some physicians of great skill and repute, do not believe that vaccination is a preventative of smallpox. The common belief, however, is that it has a decided tendency to prevent the spread of this fearful disease…While not accepted by all, it is accepted by the mass of the people, as well as by most members of the medical profession…A common belief, like common knowledge, does not require evidence to establish its existence, but may be acted upon without proof by the legislature and the courts…The fact that the belief may be wrong, and that science may yet show it to be wrong, is not conclusive; for the legislature has the right to pass laws which, according to the common belief of the people, are adapted to prevent the spread of contagious diseases. In a free country, where the government is by the people, through their chosen representatives, practical legislation admits of no other standard of action, for what the people believe is for the common welfare must be accepted as tending to promote the common welfare, whether it does in fact or not. Any other basis would conflict with the spirit of the Constitution, and would sanction measures opposed to a Republican form of government.4

While the decision in this case, to support the authority of the state to enforce mandatory vaccination as a public health measure, might be welcomed by supporters of science-based public health policy, the decision itself was by no means based in science or scientific reasoning. 

The laws and judicial opinions which govern the practice of medicine may sometimes support and sometimes oppose legitimate science- and evidence-based medicine. But the legislators, lawyers, and judges responsible for these laws and opinions are not scientists, and their reasoning about scientific and medical issues often has a philosophical and epistemological basis incompatible with the scientific approach. Such policy mistakes as DSHEA and NCCAM are much easier to understand, and hopefully prevent, if we clearly understand this.

If we are to be effective at promoting scientific medicine and containing unscientific approaches and ineffective or unsafe therapies, we must be aware of the limitations of scientific and fact-based arguments in persuading legislators and judges, as well as the general public. Though science and facts derived from scientific knowledge and investigation must be the foundation of our medical approach, they are not always the most effective means of making the case for this approach, even with our colleagues, much less with the citizens, politicians, and legal professionals who ultimately control what sort of influence and oversight government has on medicine. Non-scientists tend to view debates about regulation of CAM in terms of individual rights, consumer protection, truth-in-advertising, fair competition in the marketplace, and other such political and philosophical frames, which are as important, or even more important, to them as the issue of what is factually true about CAM and whether particular therapies help or harm.

In this series of essays, I will look at laws and regulations concerning CAM primarily from these perspectives. The kinds of questions that arise in this process may initially seem odd to those of us accustomed to a straightforward emphasis on the relevant facts and evidence. Are doctors allowed to offer unproven or even clearly bogus therapies? Are they required to offer them if a patient wants them? Can a mainstream doctor be sued for providing, or failing to provide, an alternative therapy? Can an alternative practitioner be sued for providing, or failing to provide, mainstream scientific medical care? Can and should patients have whatever care they want regardless of whether science supports it? And from my perspective as a veterinarian, since pets are legally property, not persons, is there any legal or regulatory control over alternative veterinary medicine at all? Such questions, and the reasoning behind asking and answering them, shape the landscape within which we operate as healthcare providers and advocates for science-based medicine, so I hope an examination of them will be interesting and useful.

* Our friends at snopes.com tell me that Joe Friday never actually said this, but due to its cultural resonance I choose to invoke the phrase anyway. Oh, I hope all this exposure to legal argument and reasoning hasn’t damaged my respect for actual facts! Return to text.

References

  1. Jesson LE, Tovino SA. Complementary and alternative medicine and the law. Durham (NC), USA: Carolina Academic Press, 2010. p. 279. Return to text.
  2. Ramey DW, Rollin BE. Untested therapies and medical anarchism. In: Complementary and alternative veterinary medicine considered. Ames (IA), USA: Iowa State Press, 2004. p.168-9. Return to text.
  3. Gibbons v. Ogden, 22 U.S. 1, 78 (1824). cited in Jesson LE, Tovino SA. Complementary and alternative medicine and the law. Durham (NC), USA: Carolina Academic Press, 2010. p. 26. Return to text.
  4. Jacobson v. Massachusetts, 197 U.S. 11 (1905). cited in Jesson LE, Tovino SA. Complementary and alternative medicine and the law. Durham (NC), USA: Carolina Academic Press, 2010. p. 26-29. Return to text.
  5. Cohen MH. Legal issues in alternative medicine: A guide for clinicians, hospitals, and patients. Victoria (BC), Canada: Trafford Publishing, 2003. Return to text.

Dwarf planets are crazy

[yet another Eridian digression in lieu of continuing the Sedna story. Sorry! Plus this one is written while staying up all night and trying to run a telescope at the same time. Not the best for coherent writing or fixing typos, but I am losing sleep pondering the strange results from last week’s Eris occultation. And I have not been getting enough sleep to be able to afford the loss.]
I’m in Hawaii for a few precious nights to point the Keck telescope – one of the largest in the world – at dwarf planet Eris – one of the largest in the solar system. A week ago I would have just said “the largest in the solar system,” but as of last weekend I’m less sure.
The news that Pluto might indeed remain the largest dwarf planet is big, and, even though it will have no impact on the well-settled question of whether Pluto is a planet or not (still: no), it should make fans of the former planet smile a little bit. At least until they remember that Eris is substantially more massive, and in a head-on collision Pluto would suffer the brunt of the damage.
To me, though, Eris has suddenly become substantially more interesting than it was a week ago.
I will admit to you a somewhat embarrassing secret of mine: when I go around the world giving scientific talks about all the exciting scientific insights coming from the new discoveries of the dwarf planets, I often spend almost no time on Eris itself. I talk about the strange orbit of Sedna, the rapid rotation and collisional family of Haumea, and the irradiated surface of Makemake. But when it comes to Eris there has not been much more to say than “Yeah, it has a surface composition like Pluto and is a little bigger and more massive than Pluto, but, basically, it is an ever-so-slightly-larger Pluto twin.” Scientifically, finding a twin of Pluto doesn’t teach you much new, except that whatever made Pluto did it twice. Good to know, but not a huge scientific insight.
Now it seems that Eris is far from being a Pluto twin. The only plausible way for Pluto and Eris to be essentially the same size but for Eris to be 27% more massive is if Eris contains substantially more rock in its interior than Pluto. In fact, the amount of extra rock that Eris contains is about equal to the mass of the entire asteroid belt put together. That counts as a pretty big difference.
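That comparison can be checked on the back of an envelope. The sketch below uses an approximate value for Pluto's mass and the 27% figure from above; the numbers are illustrative round figures, not results from the occultation data:

```python
# Back-of-envelope check of the "extra rock" claim, using rough
# illustrative numbers (approximate Pluto mass, the ~27% figure above).
PLUTO_MASS_KG = 1.3e22      # approximate mass of Pluto
MASS_RATIO = 1.27           # Eris / Pluto, as quoted above

# If the two bodies are essentially the same size, the extra ~27% of
# mass must be additional rock in the interior.
extra_mass_kg = PLUTO_MASS_KG * (MASS_RATIO - 1)
print(f"{extra_mass_kg:.2e} kg")   # ~3.5e+21 kg
```

A few times 10^21 kg is indeed about the commonly quoted total mass of the asteroid belt, which is what makes the comparison in the text so striking.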
Now, crazy dwarf planet, what are you trying to tell us?
First, why is this difference in composition so surprising? It’s because Pluto and Eris should be identical. Here’s the standard scenario: About 4.5 billion years ago, the outer solar system was a swirling cloud of ice and rocky dust. Over time the ice and rock began sticking together to make small bodies, which in turn grew to bigger and bigger bodies until things the size of Pluto and Eris finally formed. Making Pluto or Eris requires sweeping up so much material from around the ice and dust cloud that any small local variations should be averaged out. Even if some of the smaller objects that initially coagulated differed from each other in composition, by the time objects got to the size of Pluto or Eris they would all be more or less the same.
This scenario even had good evidence to support it. For years, the only two icy objects known in the outer solar system were Pluto and the much larger Triton, the moon of Neptune which is thought to be a captured Kuiper belt object. When the masses and sizes of these two objects were finally measured and their densities and interior compositions computed it was realized that, as predicted, they are indeed nearly identical. Entire theories were constructed out of the fact that Pluto and Triton gave us the most pristine measurement of the composition of the entire early cloud of ice and gas and dust that led to the whole solar system and planets and life. The only thing left to do was to book the flight to Stockholm.
Things started getting a little more confusing about 5 years ago as we finally were able to start to measure the interior compositions of some of the newly discovered Kuiper belt objects. Eris, one of the early measurements, was, at least, consistent with having the same composition as Pluto and Triton (and thus we assumed that it probably did, because, well, that just made sense), but then there were objects like Haumea which is mostly rock with a thin layer of ice, and small Kuiper belt objects made almost entirely of ice, and a new measurement of Charon showing it has more ice than Pluto, and the word this summer that Quaoar has essentially no ice in it whatsoever.  Orcus, just to keep things interesting, is somewhere in the middle.
Interiors of all Kuiper belt objects with known densities. Yellow/brown is rock, blue is ice. Objects range from 100% ice to 100% rock, with everything in between. The largest object is Triton, the next two are Pluto and Eris. The elongated body is, of course, Haumea. Quaoar, Charon, Orcus, and smaller objects also appear. Truly, the dwarf planets are crazy.
So Eris is not the only crazy dwarf planet. They all are. I have absolutely no faith that any dwarf planet out there gives you a confident measure of what the early solar system was like. They are all thoroughly different. How could this possibly be?
No answer is immediately obvious, but it is immediately obvious that one or more of the assumptions of the standard scenario are going to have to be discarded. Earlier this summer I had constructed a new hypothesis that did an adequate (though, frustratingly, not great) job of explaining some of the crazy variability in the Kuiper belt as being due to a random series of giant collisions which knocked the ice off of some objects, leaving just the rocky cores. I gave a couple of talks on the hypothesis, and even wrote the first draft of a scientific paper describing the details. But I fear now that the draft is going to have to go to the recycle bin. Even in my hypothesis, once things grow to a certain size, they should be more or less the same. Eris and Pluto are just too big to be different. So what happened instead? Did they form in different places? In different solar systems? Did Eris spend time close to the sun? None of these hypotheses is immediately appealing, but somewhere in there there must be a kernel of what really happened. Pluto and Eris and all of the rest of the dwarf planets must have a widely divergent set of histories of formation or evolution or interaction or all of the above.
And, so, in just a week, Eris has gone from not really teaching us much new about the solar system to potentially demanding that we throw out some of our most cherished assumptions. Eris, goddess of discord and strife, indeed.
These two nights studying Eris at the Keck telescope are not turning out to be terribly effective. Clouds, instrument problems, and, now, fog. But we’ll be back. We’ve now got an entire solar system to figure out, and Eris might just be that puzzle piece that someday makes all of it finally make sense.

The shadowy hand of Eris

[sorry: a brief interruption in the ongoing Sedna story for some late breaking news]

Eris, the goddess of discord and strife and the most massive dwarf planet, is up to her usual tricks.

On Friday night Eris was predicted to pass directly in front of a relatively faint star in the constellation of Cetus. You might think that this sort of thing happens all of the time, but you’d be wrong. Eris is so small in the sky and stars are such tiny points of light that, though they get close frequently, their actual intersections are rare. When they do intersect, though, something amazing happens: the star disappears. And since we know how fast Eris is moving across the sky, seeing how long the star disappears gives us a very precise measure of the size of Eris. Or, to be more exact, a very precise measure of a single chord passing through the body.
Predicting that such an event is going to take place is a lot of hard work. Teams of astronomers around the world continuously measure and refine the orbit of Eris (and other objects out in the Kuiper belt) and carefully pinpoint the positions of stars potentially in its path. When a collision (ok, “occultation” is the right word here) looks possible, more and more effort is put into better understanding the precise location of the star and alerts go out throughout the world to try to watch a disappearing star.
The predicted size and path of the shadow of Eris. South America was the most likely location to make a good detection, but the uncertainties in the position of Eris and the star might have put the shadow as far north as N. America or even down in Argentina.

There is one important catch. Even if every single telescope in the world were watching, most would not see a thing. That’s because Eris is so small that if it is blocking the star from one spot on the earth it is not blocking it from most others. The easiest way to think of this is to imagine that that one star being occulted is the only star in the entire sky and it is super bright. As Eris moves in front of the star it makes a shadow on the earth, and that shadow is the size of Eris itself. Eris has a diameter about 5 times smaller than that of the earth, so the shadow covers an area something like 5 times 5 = 25 times smaller than the earth itself. It’s not quite that bad, though, because, like a lunar eclipse, the shadow of Eris sweeps across the face of the earth, making a track that looks something like the picture here.
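The area arithmetic above can be made explicit. Here is a short sketch using the rough round numbers from the text (nothing here is a precise measurement):

```python
# The shadow Eris casts is about the size of Eris itself, and Eris's
# diameter is roughly 5 times smaller than Earth's.
EARTH_DIAMETER_KM = 12742
ERIS_DIAMETER_KM = EARTH_DIAMETER_KM / 5   # "about 5 times smaller"

# Cross-sectional area scales as diameter squared, so the shadow's
# footprint is about 5 * 5 = 25 times smaller than Earth's disk.
area_ratio = (EARTH_DIAMETER_KM / ERIS_DIAMETER_KM) ** 2
print(f"shadow is ~{area_ratio:.0f}x smaller in area than Earth")
```

That factor of 25 is why even dozens of telescopes, spread over three continents, can easily all sit outside the shadow track.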

It’s hard enough knowing that Eris is going to occult the star. Knowing precisely where on earth to be to see the occultation is even harder. So on Friday night, dozens of astronomers from Europe, South America and North America all watched one little spot in the sky to see if a faint star would disappear. I gave it a try myself from my robotically controlled 0.6 meter telescope at Palomar Observatory (which I operated remotely while making dinner for me and Lilah), though the star was so low in a still-bright evening sky at the time of the predicted occultation that I wouldn’t have been able to tell one way or another if anything happened. Dozens might seem like a large number of astronomers, but it’s not enough to blanket the entire earth; there were gaps between telescopes where the shadow could pass and we would never know. We would require some luck. And, happily, we got lucky.

Watch the disappearing star! (Atacama Celestial Explorations Observatory)

The first positive report came from Sebastian Sarabia, Alain Maury and Caisey Harlingten at the San Pedro de Atacama Celestial Explorations Observatory in Chile, who saw the star disappear for 76 seconds. Later it was reported that Emmanuel Jehin at the TRAPPIST telescope, about 700 km south at La Silla, Chile, also saw the star disappear. This means we’re in business. While each single detection gives you only one chord across the body, it only takes two different chords to precisely define the size of a circle. And since we are pretty certain Eris is massive enough to be spherical (Pluto, only 80% the mass of Eris, is spherical), that means a size can be measured.
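The "two chords define a circle" step is simple geometry: a chord of half-length h at perpendicular offset y from the circle's center satisfies h² + y² = r², and two parallel chords with a known separation give two such equations in two unknowns. A sketch, with purely hypothetical chord lengths and track separation (not the actual occultation data):

```python
import math

def radius_from_two_chords(chord1_km, chord2_km, separation_km):
    """Radius of a circle from two parallel chords and the perpendicular
    distance between them. A chord at offset y satisfies (c/2)^2 + y^2 = r^2;
    placing chord 1 at y1 and chord 2 at y1 - s gives one equation for y1."""
    h1, h2 = chord1_km / 2, chord2_km / 2
    s = separation_km
    # h1^2 + y1^2 = h2^2 + (y1 - s)^2  =>  solve the linear equation for y1.
    y1 = (h2**2 - h1**2 + s**2) / (2 * s)
    return math.sqrt(h1**2 + y1**2)

# Hypothetical example: chords of 2200 km and 1400 km whose tracks run
# 800 km apart in the shadow plane.
print(round(radius_from_two_chords(2200, 1400, 800)))  # 1101 (km)
```

In practice each timed disappearance is first converted to a chord length using the known shadow velocity, and the real analysis also fits for where the shadow's center passed, but this two-equation idea is the heart of it.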

Yes! This is huge! Most of the ways we have of measuring the sizes of objects in the outer solar system are fraught with difficulties. But precisely timed occultations like these have the potential to provide incredibly precise answers. The earliest measurement came from trying to infer the size of Eris by measuring the total amount of heat coming from it, kind of like closing your eyes and holding your hand out and trying to tell the difference between a small flame and a huge bonfire next to you. Those early results – as you might guess – had large uncertainties but suggested a diameter of 3000 km with an uncertainty of 400 km, making it comfortably larger than Pluto, with a diameter of about 2300 km. Soon after, my students and I obtained some beautiful images with the (now, sadly, defunct) High Resolution Camera on the Hubble Space Telescope. These images allowed us to (just barely) measure the size of the tiny disk of Eris. We found that our best measurement gave Eris a diameter of 2400 km with an uncertainty of 100 km. This means that Eris is, within the uncertainties, more or less the same size as Pluto! Later measurement of the orbit of Dysnomia showed that Eris is 25% more massive than Pluto – so, still the more substantial body – but their sizes could be remarkably similar. Of course, if they are the same size but Eris is more massive, that must mean that Eris contains considerably more rock in its interior than Pluto. I can’t think of any good reason for this to be true, so my best guess for the past few years has been that Pluto and Eris have similar interiors and that Eris is 25% more massive because it is 7% larger. That would suggest a diameter for Eris around 2480 km, which is well within our measurement uncertainties and not too far off of the thermal measurements. If you had asked me to bet on Eris’s true size a week ago, this is what my bet would have been. I would even have bet a lot of money.
Too bad you didn’t take that bet a week ago, because you could have won some money.
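The "25% more massive because it is 7% larger" guess above is just the cube law: at equal density, mass scales as diameter cubed, so the diameter ratio is the cube root of the mass ratio. A quick check with the numbers quoted in this post:

```python
# If Eris and Pluto had the same interior composition (same density),
# mass would scale as diameter cubed, so the diameter ratio is the
# cube root of the mass ratio.
PLUTO_DIAMETER_KM = 2300    # rough figure used above
MASS_RATIO = 1.25           # Eris / Pluto, from Dysnomia's orbit

diameter_ratio = MASS_RATIO ** (1 / 3)
eris_diameter_km = PLUTO_DIAMETER_KM * diameter_ratio
print(round(diameter_ratio - 1, 3))   # 0.077 -> about 7% larger
print(round(eris_diameter_km))        # 2478 -> close to the ~2480 km quoted
```

The occultation result of no more than 2320 km across is what breaks this equal-density guess, forcing the rockier-interior conclusion.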
The preliminary results from the two occultation detections suggest that Eris is on the smaller end of our uncertainty range. Indeed, Bruno Sicardy, who masterminded the entire worldwide attempt to detect these occultations, suspects that Eris can be no bigger than 2320 km across. Sadly, the uncertainties in these occultation measurements are larger than they might have been; the only detections of the occultations came from fairly small telescopes, which means that to detect the faint star they had to take long exposures. Long exposures mean that you don’t know as precisely when the star appeared and disappeared. With the success of these observations, though, bigger telescopes are now likely to try to get in on the action. One or two good occultations with big telescopes taking fast data, and we will know the diameter of Eris even better than we currently know the diameter of Pluto. I can’t wait.
Though the results from Friday are preliminary and not as precise as I would like, it is still fun to speculate on what they might mean. If these preliminary results stand up, Eris and Pluto are very different bodies. Though Eris is substantially more massive, they are essentially the same size. Eris must be made almost entirely of rock with a little coating of frost – which we see – on the outside. How could Eris and Pluto look so similar in size and exterior composition yet be totally unalike on the inside? As of today I have absolutely no idea. Two other large objects in the outer solar system – Haumea and Quaoar – also appear to be mostly rocky with a little ice on the outside. In the past we’ve been willing to make up special explanations for them. But Eris, too? Having to continue making up special explanations is becoming unpalatable. Something is going on in the outer solar system, and I don’t know what.
What’s next? We are all eagerly awaiting a more precise analysis of the results to see what they really show. Eris reveals her secrets slowly, but we already know so much more about that little world than we did at that moment in January 2005 when we first saw her crawling across the sky. And there will be more to come. We don’t yet know when or where, but, once again, some astronomer will be watching a faint star in the constellation of Cetus suddenly blink out as the next shadow of Eris crosses the face of the earth, and the exploration of the most massive dwarf planet will continue.

Next up: What does all of this mean for the dwarf planets and the solar system?

Vestas’ New HQ in Portland Shoots for LEED Platinum

Wind turbine company Vestas-American Wind Technology is staying in Portland and will soon have a new headquarters in the Pearl District (home of The Environmental Blog as well). Vestas, the world’s largest wind turbine maker, announced that it will convert a former Portland, Oregon department store warehouse into its new North American headquarters. An impressive undertaking generously supported by the local government, the building, once the renovation is complete, will be home to the city’s largest array of solar panels and a gorgeous eco-garden terrace. Gerding Edlen Development, a leader in green architecture, has been chosen to oversee the conversion of the former Meier & Frank warehouse, and with five stories and 194,000 square feet in the pipeline, Vestas will finally be able to house its entire staff under one green roof. More than just an adaptive-reuse project, the new construction is shooting to achieve LEED Platinum certification.

Though Vestas had been looking in other states for potential sites, the company today announced it would renovate the old Meier & Frank warehouse at Northwest Everett Street and 14th Avenue into its headquarters [picture below]. Gerding Edlen Development purchased the building several years ago and has been waiting to redevelop it, according to Mark Edlen, president of Gerding Edlen.

Old Meier & Frank warehouse

The strategic objectives that helped the Portland Development Commission and the city of Portland decide to generously help Vestas stay in Portland are as follows:

• Anchors Portland’s Clean Tech cluster – a priority industry in the city’s Economic Development Strategy
• Attracts new firms to Portland and helps existing local firms access extensive supply chains
• Validates Portland’s position as the US renewable energy capital and our reputation as a global Clean Tech leader
• Deepens the city’s already extensive talent pool in the development and management of renewable energy systems

“This is a wonderful building built by one of Oregon’s iconic companies of the 20th century,” Edlen said. “Now it will be occupied by one of Portland’s iconic companies of the 21st century.”

Between the city of Portland and the state, $8 million of public money will go toward the project, according to Mayor Sam Adams. The city has negotiated a 15-year, interest-free loan with Vestas to keep it in Oregon.

“We had to compete for this,” Adams said. “Other states were offering deals. They put us through the wringer to get the best site.”

Currently at 400 employees, Vestas has made a deal with the state to hire at least 100 new employees in the coming years. Vestas-American Wind Technology president Martha Wyrsch said the company will hire people with backgrounds in engineering, finance and sales.

The $66 million project is designed by GBD Architects and Ankrom Moisan Associated Architects and will be built by Skanska USA. Other project consultants include KPFF Engineering, Harper Houf Peterson Righellis Inc., and Peter Meijer Architects. The project will break ground soon and will be finished in early 2012.

College Degrees to Get You in the Environmental Field

With the “green” movement currently at its peak and more and more people becoming interested in the environment every day, it’s fair to say that the world is in the midst of a veritable environmental renaissance. As a result, there’s never been a better time to go to school for an environmental degree. There are a wide variety of degree programs available, each with its own implications for one’s career path. The following are some of the top environmental college degrees.

Environmental Engineering
For those who are interested in improving the way the world interacts with the environment, environmental engineering colleges are perfect places to earn a degree. A degree program in environmental engineering will give you a well-rounded understanding of math, science and environmental politics that will help you work towards a job in the field. Environmental engineers often find themselves working at the state or federal level, and enjoy quite comfortable salaries once they have paid their dues. Perhaps the most satisfying aspect of being an environmental engineer, however, is the fact that engineers have a huge impact on both the environment and the people who live in it. With technology and science in constant flux, a career as an environmental engineer can be extremely rewarding.

Environmental Policy
If environmental engineers like to get their hands dirty, those who invest in environmental policy like to stretch their brains with a good argument. Environmental policy governs just about everything about the environment, and is extremely interesting to those who actively care about the world around them. A course of study in environmental policy will usually teach you the basics of political science, planning and policy analysis, as well as a variety of other skills that will make it easier for you to secure employment. A variety of positions can be secured after obtaining a degree in environmental policy, from policy analyst to planner.

Environmental Health Sciences
For many people, one of the major reasons for studying the environment is to attempt to understand the effect it has on human beings. A degree program in environmental health sciences will teach you how to assess potential risk factors in the environment. In general, these programs teach students a wide variety of health- and science-related information, including toxicology, environmental health hazards and more. A degree in environmental health sciences can help point you in the direction of a career as a toxicologist, public health director or other prestigious position.

Portland Federal Building Begins Green Makeover

The Edith Green – Wendell Wyatt Federal Building is a high-rise structure in downtown Portland, Oregon, United States. Opened in 1975, the 18-story tower is owned by the federal government. The International Style office building has more than 370,000 square feet of space. Designed by the Skidmore, Owings and Merrill architecture firm, the building is named after Wendell Wyatt and Edith Green, who both served in the United States House of Representatives.

Construction began Wednesday on the $139 million renovation of the Edith Green-Wendell Wyatt Federal Building in downtown Portland, according to Ross Buffington, spokesman for the U.S. General Services Administration.

Construction at the Edith Green – Wendell Wyatt Federal Building

As part of a plan to reduce energy use across its portfolio of federal buildings, the U.S. General Services Administration and Howard S. Wright Construction have begun construction to modernize the building to Leadership in Energy and Environmental Design (LEED) Platinum standards, the highest green building rating offered by the U.S. Green Building Council.

The renovation is expected to reduce lighting energy use by 40 percent and water use by 65 percent thanks to advanced lighting systems and low-flow fixtures. A solar array on the building’s roof is also expected to offset 6 percent of the building’s power.

The renovation was designed by SERA Architects of Portland and Cutler Anderson Architects of Bainbridge Island, Wash. The original design included a 250-foot-tall living wall to shade and insulate the structure for better energy performance. Concerns over the cost and maintenance of the vegetated façade, however, led designers in August to change the design to a shading system made of aluminum rods. Planters with climbing plants will still be added to the base of the building to shade its bottom three stories. The vegetated “green wall” had picked up notice from The New York Times. The new design is shown in the picture below, on the bottom right.

Edith Green - Wendell Wyatt Federal Building - Portland, OR

Initial plans to renovate the 18-story, 350,000-square-foot building were outlined three years ago, but never made it to the top of the government’s funding priority list. That changed when Congress approved the American Recovery and Reinvestment Act – the federal stimulus program.

This project is finally getting started and will provide hundreds of construction jobs, a boost to the local economy. Most notably, the project is yet another shining addition to Portland’s environmentally friendly portfolio and a boost to its downtown Eco District initiative.

Collaborative Learning and Land Use Tools to Support Community Based Ecosystem Management by Chris Feurt of the Wells National Estuarine Research Reserve

Date: 
Wednesday, December 1, 2010

Chris Feurt of the Wells National Estuarine Research Reserve will present this webinar on December 1 (2 pm EST/11 am PST/7 pm GMT). In an ideal world, all land use planners would be able to predict how development would change their community economically, ecologically, and aesthetically. In the real world, land use decisions are made by multiple stakeholders with divergent perspectives from different institutions—a situation that hinders the application of scientific findings and tools that could foster the adoption of an ecosystem-based approach to development. With CICEET support, a project team from the Wells National Estuarine Research Reserve has developed a method to overcome these barriers. They used Collaborative Learning—a process that facilitates environmental decision-making among diverse stakeholders—to apply geospatial and visualization tools to the development of a conservation and land use plan for Sanford, Maine. The plan used a green infrastructure approach based upon community-identified priorities for preserving ecosystem services. The team also piloted a regional training on the use of GIS, CommunityViz, and keypad polling for land use planning based upon EBM principles. This presentation will cover the Collaborative Learning methodology and, using the development of the Sanford Plan as a model, how Collaborative Learning can be used to facilitate community-based EBM dialogues. Learn more about the project and read the Sanford plan. Register for this webinar at https://www1.gotomeeting.com/register/796010449