Now that everything was dry, we did the silicone work. Later we went for a walk; at the information bay there are carved heritage bush poles crafted by members of the community. A beautiful sunset that evening.
Monthly Archives: April 2011
Day 4 Contiki New York
On the second day of our New York Contiki tour we pretty much get to choose what we do. We're thinking we'd like to take a Movie & TV Show tour and go to a Broadway show. We might also take a stroll across the Brooklyn Bridge and do some shopping. We need to pick up a few souvenirs for people back home.
Scribe met PA over students’ transfer – Times of India
Times of India – MUMBAI: Vishnu Patil (42), personal assistant of medical education minister Dr Vijaykumar Gavit, was arrested on Friday night for allegedly molesting a woman journalist working for a news channel in Chembur. Senior inspector of the Chembur police ...
Harvard names Stanford Medical School Professor Alan M. Garber as its next provost – Stanford Report
Stanford Report – "In his 25 years as a distinguished member of the Stanford University faculty, he has brought an interdisciplinary ethos and leadership to teaching, to the research community and to the medical profession. His service to the academy and to the nation ..." Related: Alan M. Garber Appointed Provost (Harvard Magazine)
Nazi medical experiments exhibit opens today in Boston – Boston Herald
Boston Herald – By AP. BOSTON -- The United States Holocaust Memorial Museum and Harvard Medical School are set to open a Boston exhibit on the effects of deadly Nazi medical experiments during the Holocaust. The traveling "Deadly Medicine: Creating the Master Race" is ... Related: Five Things to Know Today, April 14 (Patch.com)
GSK Tells BC and Goody’s to Take a Powder
After spending the first 21 years of life in New Jersey and Philadelphia, I ventured to the University of Florida for graduate school. For those who don’t know, UF is in the north-central Florida city of Gainesville – culturally much more like idyllic south Georgia than flashy south Florida.
It was in Gainesville – “Hogtown” to some – that I first encountered the analgesic powder. I believe it was BC Powder, first manufactured just over 100 years ago within a stone’s throw of the Durham, NC, baseball park made famous by the movie, Bull Durham. I remember sitting with my grad school buddy from Kansas City watching this TV commercial with hardy men possessing strong Southern accents enthusiastically espousing the benefits of BC. I looked at Roger – a registered pharmacist – and asked, “what in the hell is an analgesic powder?”
What I learned is that powders of analgesic compounds were one of the individual trademark products of Southern pharmacies during the early 1900s. Many of these powders became quite popular with mill and textile workers needing to calm headaches induced by long hot days with loud machinery. The original powders contained a precursor to acetaminophen called phenacetin. However, phenacetin was found to cause renal papillary necrosis, such as in this 1964 case report in Annals of Internal Medicine.
Today, most of these powders are composed of aspirin, acetaminophen, and caffeine. This combination has also been adopted outside the powder world, with Excedrin’s migraine product the most popular example. This 2010 review in the European Journal of Neurology covers the historical ground and tends to support greater efficacy of this combination in headache and migraine than monotherapy with any component alone. However, I have yet to find a convincing mechanistic explanation for the well-documented analgesic potentiation activity of caffeine.
Nevertheless, this combination as a powder is a cultural tradition of Southern pharmacy. Unfolding one of these packets in public in the northern US is a sure-fire way to attract suspicious eyes wondering if you are a cocaine addict. In the South, you mix the powder with water and slam it back – the idea being that the powder form is absorbed more quickly than a tablet or capsule that needs to disintegrate in the gastrointestinal tract. However, the bitter taste of these compounds reminds me of why they were first formulated into tablets with a very short residence time in the mouth.
In 1977, Goody’s powder became one of the first non-automotive sponsors of a NASCAR racer, beginning a long relationship with NC native and current resident Richard Petty that continues today. For fun, you have to read the page at Goody’s on how to take a powder:
How to take a powder
Let’s face it, Goody’s works incredibly fast because they’re powders, but that also means that they’re kind of different to take. There are three general approaches, but feel free to add your own personal touches.
The Dump and Chase
Probably the most popular technique. Open up the paper wrapper, fold it over, dump quickly on the back of your tongue and chase right down with your favorite drink. There, now that wasn’t so hard…
The Stir ‘N Sip
A technique preferred by the less adventurous. Just mix your Goody’s into a glass of water, juice or soda. Then drink up.
The Tough Guy
This is how The King does it. Very simple. Open it, fold, dump on your tongue and swallow. Then, very casually continue whatever you were doing.
How to take a powder tips
- The farther back on your tongue, the better.
- Fold wrapper so all the powder leaves the wrapper at once.
- Don’t inhale through your mouth with powder in there. It could get ugly.
- Beginners generally need a bit of coaching. Be gentle with them.
- Got a great technique? Shoot us an email and we may share it here.
Yes, folks, The King (Petty, not Elvis) does The Tough Guy.
Guess what? I tried to do The Tough Guy. I think I ended up with an esophageal erosion.
But why do I write this post other than because I’m a natural products pharmacologist living in the South?
Late yesterday afternoon, I learned from Raleigh News & Observer business editor, Alan Wolf, that GlaxoSmithKline is jettisoning 19 of their consumer health products, including Goody’s and BC.
From Wolf’s post:
“Individually, the brands to be divested have strong heritage and good prospects, but GSK has lacked sufficient critical mass in some product categories and certain brands have lacked focus due to other global priorities,” GSK wrote in a statement. “GSK therefore believes that other companies are better placed to maximise the potential they offer.”
I actually hadn’t known that GSK owned both powder brands. But it now makes sense to me. As recently as last summer, there had been a “Pick A Powder” battle online (at pickapowder.com, surprisingly) between Goody’s Richard Petty and BC’s country singer, Trace Adkins. Clever marketing: take your own products and pit them against one another with a fabricated battle between two celebrities that appeal to distinct but overlapping demographic groups.
But this rivalry isn’t all fun and games. Petty’s team raises money for Victory Junction Camp, established in honor of his son, Adam, to provide enriching experiences for kids with chronic or serious illnesses; Adkins raises money for the Wounded Warrior Project, a comprehensive non-profit program that serves our brave men and women injured in combat.
Hopefully, the companies that pick up these colorful, historic powders of Southern pharmacy will keep the rivalry and the public service going.
But my recommendation to all: no need to be “adventurous.” There’s no shame in “The Stir ‘N Sip.”
For further reading, The North Carolina History Project has separate entries for Goody’s and B.C. powders.
What does a new drug cost?
Despite the wide variety of health systems across countries, one feature is near-universal: we all depend on private industry to commercialize and market drug products. And because drugs are such an integral part of our health care system, that industry is generally heavily regulated. Yet despite this regulation, little is publicly known about drug development costs. Aggregate research and development (R&D) data are available, however, and the pharmaceutical industry spends billions per year.
A huge challenge facing consumers, insurers, and governments worldwide is the acquisition cost of drugs. On this point, the pharmaceutical industry makes a consistent argument: this is a risky business, and it costs a lot to bring a new drug to market. According to PhRMA, the U.S. pharmaceutical industry’s advocacy group, it costs $1.3 billion (in 2005 dollars) to bring a new drug to market. The industry argues that high acquisition costs are necessary to support the multi-year R&D investment, and the considerable risks, needed to meet the regulatory requirements demanded of new drugs.
But what goes into this $1.3 billion figure? To understand the cost of a new drug, we need to consider not only the costs of the drugs that reached the market but also the costs of the failures – those discontinued during development. While most pharmaceutical companies are publicly held, no company produces detailed breakdowns of “per marketed drug” R&D costs, or of the specific amounts spent on drugs that were later abandoned. Yet there have been attempts to estimate these values. The most detailed, and perhaps most controversial, is a 2003 paper from DiMasi et al entitled The Price of Innovation: New Estimates of Drug Development Costs. [PDF] DiMasi’s estimates have been subject to considerable criticism, most recently in a paper by Light and Warburton entitled Demythologizing the high costs of pharmaceutical research. They claim the median R&D cost is a fraction of DiMasi’s estimate: just $43.4 million. “Big Pharma lies about R&D to justify illicit profits” shouted Natural News. Who’s right?
Drug Development
Drugs can be developed in different ways, but the usual model describes a series of phases. The pre-clinical development stage consists of preliminary studies of chemicals that have been synthesized or isolated and are then screened. This process can take years: identifying promising leads, validating them, tweaking their chemical structures, and conducting endless in vitro studies. Only a fraction of drugs that show promise in pre-clinical studies will ever progress to clinical trials. Clinical trials are generally grouped into three stages, each one representing an important milestone in a drug’s development. Phase I studies are small studies in healthy volunteers designed to help understand the basic pharmacology and pharmacokinetics in humans: how a drug is absorbed, distributed, metabolized, and eliminated. It’s in Phase II that the drug is first tested in groups with the condition of interest. These trials are larger and may be randomized, with multiple arms, possibly evaluating different dosing regimens; endpoints are usually related to basic efficacy and safety parameters. Phase III studies are the largest, usually randomized and double-blind, and are designed to establish a drug’s efficacy against a given condition. Regulators like the FDA will usually require one or more Phase III trials to support an approval to market a drug. In cases where real outcomes need to be measured (like mortality or morbidity), Phase III studies can be massive. (Like this one, with over 18,000 participants!)
While the trial pathway is usually illustrated as a straight-line path, that’s a post hoc view: A tree may be a more appropriate model. Clinical trials may be conducted in different doses, treating different patient groups, using different protocols, in order to understand a drug’s effectiveness.
At any one time, multiple drugs may be in development, and only the most promising products move forward in the development pathway, since each subsequent phase means a significant increase in costs. A drug’s development can be discontinued at any point along the path. Developers may identify toxicity or a lack of effectiveness. Or clinical practice may change, and all of a sudden the clinical trials are measuring the wrong endpoints in the wrong patients. Decisions are always made in the face of uncertain evidence about efficacy and toxicity, and for every drug that moves forward only to eventually fail, there may be an effective drug that didn’t reveal itself as promising and went back on the shelf.
The development process is laborious and typically takes several years from discovery to clinical trials. The pharmaceutical industry estimates that it takes 10,000 molecules screened to bring a single drug to market. Without validating that number (a whole other post), it’s fair to say that the drugs that make it to market are a tiny fraction of the products identified or synthesized that enter initial screening. So there will be a substantial investment in drugs that never make it to market. Without including the cost of abandoned drugs in the costs of drugs that are marketed, we’d be underestimating the investment incurred. So any analysis needs to consider this cost, too.
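To make that arithmetic concrete, here’s a minimal back-of-the-envelope sketch in Python of how attrition inflates the cost per approved drug. All of the phase costs and success probabilities below are invented round numbers for illustration – they are not drawn from DiMasi or any other study.

```python
# Toy model: expected R&D spend per approved drug, counting failures.
# Phase costs and advance probabilities are illustrative round numbers.

phases = [
    # (name, cost per candidate entering the phase in $M, P(advance))
    ("Preclinical", 5, 0.10),
    ("Phase I",    15, 0.60),
    ("Phase II",   25, 0.35),
    ("Phase III",  90, 0.60),
]

expected_spend = 0.0  # expected spend per candidate entering preclinical
p_reach = 1.0         # probability a candidate reaches the current phase
for name, cost, p_advance in phases:
    expected_spend += p_reach * cost  # only candidates that got here pay
    p_reach *= p_advance              # attrition before the next phase

print(f"P(approval): {p_reach:.2%}")
print(f"Expected spend per candidate: ${expected_spend:.1f}M")
print(f"Cost per approved drug: ${expected_spend / p_reach:,.0f}M")
```

Even with these modest per-phase costs, the cost per approval lands in the hundreds of millions, because fewer than 2 in 100 candidates entering preclinical screening survive to approval in this toy model.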
The DiMasi Paper
The DiMasi paper, from the Tufts Center for the Study of Drug Development, is the most widely cited paper on drug development costs. While the methodology is described in detail, some essential information is unfortunately (though perhaps understandably) opaque. The authors used a sample of drug costs drawn from the ten companies (out of 24 asked) that were willing to provide R&D data on a per-chemical basis. Data were collected and stratified by development phase. Only the costs of self-originated drugs (i.e., drugs the companies developed themselves) were included. In total, data on 68 products were collected; the sample consisted mostly of small-molecule drugs but also included four recombinant proteins, two monoclonal antibodies, and one vaccine. No further information is provided, so there’s no way to know just how representative this sample is.
The methodology for the different calculations is fairly well detailed, but as I noted, the underlying data are not provided. Whether this basket of drugs studied represents a fair measure of the market is impossible to determine. The authors compiled actual costs wherever possible, broken down by development phase. A notable exception is the “preclinical” development period where it’s difficult to draw a direct link between expenditures and a specific drug that ends up being commercialized. For this segment, they inferred, using their own database, costs of $121 million per approved new drug.
To account for the costs of drugs that were abandoned (for any reason) during development, the authors used their own database of investigational drugs to estimate the odds any given new drug would reach a particular development milestone. Setting aside a detailed analysis of the methodology, let’s look at the two biggest drivers of the final figure that have been subject to repeated criticism: tax issues, and the cost of capital.
Tax Issues
A major criticism of the DiMasi paper has been that the preferential taxation provisions for R&D expenses were not factored into the analysis. Essentially, if R&D costs are given preferential tax treatment, this should reduce the net cost of R&D to the company. I have no particular insight into this issue other than to flag it as one that has caused controversy. Given that preferential treatment of R&D expenses isn’t unique to the pharmaceutical sector, the extent to which it biases this particular analysis isn’t clear to me. But I’m a pharmacist, not a tax expert.
The Cost of Capital
Probably the biggest criticism of the DiMasi paper is that the authors factor what’s called the cost of capital into the development cost. Looking at the calculations, DiMasi estimated the out-of-pocket costs per new drug at $403 million (2000 dollars). But this is then capitalized, based on the opportunity cost of that investment, at 11% – bringing the “total” cost up to $802 million. Adjust this cost to 2005 dollars and we’re at the $1.3 billion that PhRMA calls “the average cost to develop one new drug.”
The cost of capital can be a bit baffling to understand. If I’m going to invest my money in something now, with a possible payoff down the road, I need to factor in the opportunity cost of something else I could have invested in – but decided not to. It is a true cost, because by choosing to invest in one thing, you’re forgoing the investment in another.
DiMasi uses a cost of capital of 11% – that is, they assumed that the drug developers, by moving forward with the development of a drug, were forgoing investments that would be expected to yield 11%. Is 11% valid? From a personal investment perspective, 11% seems rich. But the cost of capital that companies use depends on the risk involved, and different industries have different business risks. The DiMasi paper bases the 11% estimate (in part) on historic returns in the industry. Given that half of the reported “cost” of a new drug comes from the cost of capital, the value we use has a massive influence on what the final “cost” of a new drug will be. But is 11% appropriate? Many argue no – that current returns don’t match past returns, and therefore the cost of capital should be lower. I took a look at a cost of capital table created by Aswath Damodaran, a Professor of Finance at the Stern School of Business at New York University. He calculates that pharma’s cost of capital is 8.59%. But there is no single “right” answer here. It’s an assumption that goes into our calculation.
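To see how capitalization alone can roughly double the reported cost, here’s a minimal sketch of the mechanics. The phase outlays and their timing below are invented round numbers that merely sum to DiMasi’s $403 million out-of-pocket figure – they are not DiMasi’s actual phase data. The point is only that each outlay gets compounded forward to the approval date.

```python
# Capitalize R&D outlays forward to approval at an assumed cost of capital.
# Outlay amounts and timings are illustrative, chosen to sum to $403M.

COST_OF_CAPITAL = 0.11  # DiMasi's assumption; try 0.0859 per Damodaran

outlays = [
    # (outlay in $M, years before approval the money was spent)
    (121, 12),  # "preclinical" per-approval estimate
    (60, 8),    # Phase I
    (90, 6),    # Phase II
    (132, 3),   # Phase III
]

out_of_pocket = sum(amount for amount, _ in outlays)
capitalized = sum(amount * (1 + COST_OF_CAPITAL) ** years
                  for amount, years in outlays)

print(f"Out-of-pocket: ${out_of_pocket}M")  # $403M
print(f"Capitalized at {COST_OF_CAPITAL:.0%}: ${capitalized:,.0f}M")
```

Re-run the same toy numbers at 8.59% and the capitalized total drops by roughly $150 million – which is exactly why the choice of cost of capital dominates the headline figure.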
Other Reviews
Other authors have made their own attempts at estimating the cost of a new drug. Paul Adams of the Federal Trade Commission, writing in Health Economics, estimates that the DiMasi figure is low and that the 2003 cost is closer to $1 billion per new drug, though he notes significant variation between products. A 2006 Congressional Budget Office report on drug development [PDF] largely supports the DiMasi estimates. Most recently, Light and Warburton argued that, “based on independent sources and reasonable arguments, R&D costs companies a median of $43.4 million per new drug, just as company supported analysts can conclude they are over 18 times larger, or $802 million.” This figure seems implausibly small, given that a single clinical trial can involve hundreds to thousands of patients. (For a more detailed critique of the Light and Warburton paper, I’ll refer the interested reader to Derek Lowe’s excellent In the Pipeline blog (and its comments), where it was dissected in detail here and here.) One of the best ways to contemplate the costs and calculations is to manipulate the numbers yourself: there’s a model developed by venture capitalist Bruce Booth where you can enter your own estimates and see what cost it spits out. I worked with the model for a while, and I couldn’t get it anywhere near $43 million – it was always in the hundreds of millions.
Other considerations
One important factor that isn’t considered in any of these analyses (from what I can see) is the cost of new indications for existing chemical entities. Consider cancer drugs, which are often approved first for the treatment of metastatic disease; only after efficacy is demonstrated there is a drug studied as a potential “adjuvant” treatment for earlier stages of disease. Additionally, the DiMasi analysis only looked at drugs developed solely in-house. Given the growing role of smaller biotech companies that develop, and then sell, promising drugs to pharmaceutical companies, the impact on costs isn’t clear. The cost of the “me-too” drugs that seem to fill the pharmaceutical marketplace isn’t discussed explicitly, either. When your new drug is a variation on a competitor’s (or your own) product, how does this influence overall R&D expense? Again, it’s not clear.
Conclusion
Is the $1.3 billion new drug a myth? New drugs could be hitting, or even exceeding this mark – it depends on what your assumptions are. When we try to summarize all the variables of drug development into a single number, accounting for the hits and the misses, we can end up with a number that sounds impressive. But is it meaningful? Without transparency, only the manufacturer will know what it cost for their own drugs. It’s probably more important to understand the key drivers of R&D costs, noting that there are a huge number of variables that may influence the final cost of bringing a new drug to market.
CAM In Medical Schools
A recent US News and World Report article on the incorporation of complementary and alternative medicine (CAM) into US medical schools credulously repeats the pro-CAM marketing hype. There is no evidence that the author, Meryl Davids Landau, spoke to a single critic of CAM, or is even aware that such criticism exists. The result looks more like marketing copy than serious journalism.
She begins:
Now that nearly 40 percent of American adults swear by some form of complementary and alternative medicine, or CAM—from nutrition and mental relaxation to acupuncture, magnet therapy, and foreign healing systems like traditional Chinese medicine and Indian ayurveda—a growing number of medical schools, too, are supplementing medication with meditation.
There is much to deconstruct in just this first paragraph. The entire article is an argument from popularity. This is a game the pro-CAM community has been playing for years: people use CAM because it’s popular; medical schools should teach it because people are using it; the government should research it because of all the interest in it; and CAM should be popular because it’s being researched and taught in medical schools. CAM is like Paris Hilton – famous for being famous.
What’s missing from this circular argument for popularity is evidence that any particular CAM modality actually works, or even has scientific plausibility or the potential to teach us something new about the human body and healing.
The argument is not only fallacious – it’s wrong, or at least highly deceptive. This stems from the core fallacy of CAM, and that’s the very concept of CAM itself. It is a false category, which does not describe any cohesive philosophy or approach to medicine but rather exists solely as a marketing ploy to carve out a double standard – to exempt certain modalities from the rigors of science, evidence, and logic. This false dichotomy results in lumping a wide variety of treatments under one umbrella, and then claiming that the entire category is popular.
When we look a little closer at the numbers, we find that the vast majority of so-called CAM use in the US is massage (16%), chiropractic or osteopathic manipulation (21.9%), or yoga (9.5%). (There is overlap in use, so you can’t simply add these percentages, but the vast majority of the mythical “40%” figure comes from these categories.) So some form of exercise, stretching, or muscle manipulation accounts for the vast majority of CAM use. Throw in some other modalities that are not really CAM, like relaxation and nutrition (since when has the science of nutrition been “alternative”?), and that accounts for even more.
What’s left for the real hardcore CAM modalities like homeopathy and acupuncture? Not much. These modalities have been languishing in the single digits and are not significantly increasing. But by lumping them in with relaxation and massage, you can generate the false impression that the whole category is popular. The entire exercise is intellectually sloppy and deceptive – by design. And this deception is being used to convince medical schools that “CAM” deserves access to the limited resources of the school, which is then used to convince patients that it’s legitimate (more circular reasoning).
And we’re just at the first paragraph. She continues:
Interest in teaching alternative approaches “has exploded, especially this last year,” says Laurie Hofmann, executive director of the Institute for Functional Medicine, which is based in Gig Harbor, Wash. The nonprofit institute educates healthcare professionals to look for underlying systemic imbalances as a cause of illness rather than focus on treating symptoms and, when possible, to correct with lifestyle changes and mind-body techniques.
No evidence is offered for this alleged “explosion” of interest. What’s missing from the article is any mention of the Bravewell Collaborative – a funding organization that pays medical schools to open CAM centers. This is part of a very deliberate “quiet revolution” that Wally Sampson has written about extensively – an attempt to change the practice of medicine by influencing medical education (rather than through compelling evidence).
Landau then repeats, without the slightest hint of journalistic skepticism, the claim that “integrative” medicine looks for underlying causes of illness, while mainstream medicine simply treats the symptoms. This is pure CAM marketing mythology, having no basis in reality. Science-based medicine is built upon a systematic attempt to understand the underlying cause of illness. Of course, when scientific medicine searches for underlying causes, this is denigrated by CAM proponents as “reductionist.” When they do it, it’s “holistic.”
The difference between the scientific approach and the typical CAM approach is that science is based in reality. It slowly builds a knowledge base that is internally consistent. Whereas most CAM modalities are philosophy-based – they are based on pre-scientific superstitious notions of health and illness that have not been subjected to any kind of systematic study, or that have been left behind by scientific advance (such as the notion of life energy). These philosophies are often mutually exclusive, which doesn’t seem to bother the “big tent” CAM movement. In the end, the alleged underlying “imbalances” sought for by CAM practitioners are illusory and not based in reality. That doesn’t stop them from being smug in their dismissal of scientific medicine.
What follows is a long list of medical schools integrating nonsense into their curricula – as if this is a good thing. Landau admits that many medical schools find it difficult to find time in their busy curricula for CAM teaching. This is because there is a large body of medical knowledge that needs to be crammed into four years of medical school. At Yale, where I teach, every department is clamoring for one more hour here or there to teach its material. There just isn’t enough time, and we have to be creative in maximizing classroom time for the students. This just highlights the importance of not wasting this limited resource teaching the fads of the day.
This gets to the deeper question not even addressed in the article: what is the responsibility of academic medicine in determining the standards that should be followed in medical education? Medical schools are being offered what are essentially bribes and told that CAM’s popularity is reason to spend precious time teaching (often, really, promoting) it. This is often accomplished without open debate and discussion, and many faculty members are shocked to find out what is going on in their own institution (quiet revolution indeed). But isn’t it the responsibility of medical schools to maintain high standards of science and academia, and to resist the forces of pseudoscience, sectarian belief, and popular culture? Perhaps I am being too idealistic.
In the end Landau’s article was devoid of any serious discussion of the actual issues. The result was a propaganda piece (intentional or not) for the sectarian beliefs and economic agenda of CAM, at the expense of academic integrity.
The Role of Experience in Science-Based Medicine
Before we had EBM (evidence-based medicine) we had another kind of EBM: experience-based medicine. Mark Crislip has said that the three most dangerous words in medicine are “In my experience.” I agree wholeheartedly. On the other hand, it would be a mistake to discount experience entirely. Dynamite is dangerous too, but when handled with proper safety precautions it can be very useful in mining, road-building, and other endeavors.
When I was in med school, the professor would say, “In my experience, drug A works better than drug B,” and we would take careful notes, follow his lead, and prescribe drug A unquestioningly. That is no longer acceptable. Today we ask for controlled studies that objectively compare drug A to drug B. That doesn’t mean the professor’s observations were entirely useless: experience, like anecdotes, can draw attention to things that are worth evaluating with the scientific method.
We don’t always have the pertinent scientific studies needed to make a clinical decision. When there is no hard evidence, a clinician’s experience may be all we have to go on. Knowing that a patient with disease X got better following treatment Y is a step above having no knowledge at all about X or Y. A small step, but arguably better than no step at all.
Experience is valuable in other ways. First, there’s the “been there, done that” phenomenon. Older doctors have seen more: they may recognize a diagnosis that less experienced doctors simply have never encountered. My dermatology professor in med school told us about a patient who had stumped him: she had an unusual dermatitis of her hands that was worst on her thumb and index finger. His father, also a doctor, asked her if she had geraniums at home. She did. She had been plucking off the dead leaves and was reacting to a chemical in the leaves. The older doctor had seen it before; his son hadn’t.
Then there’s what we loosely call “intuition.” It can be misleading, but it can also be a function of pattern recognition that has not risen to the level of conscious awareness. Experience can help us perceive that “something just isn’t right” about a patient or a working diagnosis. An experienced doctor may get a feeling that a patient might have a certain disease. He couldn’t justify his hunch to another doctor, but he has subconsciously recognized a constellation of findings that were present in other patients he has seen. Of course, he would still need to do appropriate tests to confirm the diagnosis, but he might do more tests and do them sooner than a less experienced doctor. This kind of pattern recognition has been called the “Aunt Tillie” phenomenon: you can spot your Aunt Tillie’s face in a crowd, but you couldn’t tell someone else how to do it. You just know Aunt Tillie when you see her. Computer face recognition is learning how to do this, but it uses measurements, not the gestalt method our brains use.
Then there’s the wisdom that (sometimes) comes with age. I’ve just been reading Marc Agronin’s book How We Age where he shows that old age is not all bad. As we get older, we are not able to accomplish mental tasks as fast, and our short-term memory declines; but there are compensations. We are more able to integrate thinking and feeling, less likely to get carried away by emotions, better able to see both sides of an issue, and better able to cope with ambiguity. We can develop more patience, acceptance, tolerance, and pragmatism in dealing with complex situations. We have a vast store of life experiences to bring to the table, helping us put things into a more realistic perspective. Wisdom is elusive: not every elder develops it. I’m sure you can all think of many counterexamples.
Medicine is an applied science, and the same science can be applied in different ways by different doctors. There are times when two science-based doctors can look at the same body of evidence and still disagree about what it really means or about what to do for a specific patient. There is room for disagreement and for different approaches. Scientific medicine is often criticized for focusing on the disease rather than on the person who has the disease. I have known patients who have turned to alternative providers because of a bad experience with a science-based doctor’s poor communication skills or “bedside manner.” We can aspire to a kinder, gentler, more personal science-based medicine where experience and improving people skills are integrated with science (a kind of “integrative medicine” that actually makes sense.)
It’s not clear whether you are better off with a young doctor or an older one. A young doctor is more likely to be up to date on the latest science; an older doctor might make better patient-centered decisions. A younger doctor might be better at tuning up your bodily vehicle; an older one might be better at helping you decide when to drive it, where to go, and how fast. A young doctor might offer the latest treatment; an older one might question whether it is really preferable to an older treatment for that particular individual, or even question whether any treatment is really necessary at all.
Conclusion:
In summary, while “in my experience” claims can be dangerous, experience does have a role to play in science-based medicine.
Disclaimer:
As an ORF (Old Retired… something) and a Medicare-card-carrying senior citizen, I am biased. I have a vested interest in thinking that I have improved with age and experience. This is an opinion piece and I can’t cite any controlled studies to support my opinions. I’m almost tempted to insert tongue firmly into cheek and say “Trust me; I’m a doctor.”
Hope and hype in genomics and “personalized medicine”
“Personalized medicine.” You’ve probably heard the term. It’s a bit of a buzzword these days and refers to a vision of future medicine in which therapies are much more tightly tailored to individual patients than they currently are. That’s not to say that as physicians we haven’t practiced personalized medicine before; certainly we have. However, it has only been in the last decade or so that our understanding of genomics, systems biology, and cell signaling has evolved to the point where the vision of personalized medicine based on each patient’s genome and biology might be achievable within my lifetime.
I was thinking about personalized medicine recently because of the confluence of several events. First, I remembered a post I wrote late last year about integrating patient values and experience into the decision process regarding treatment plans. Second, a couple of months ago, Skeptical Inquirer published an execrably nihilistic article by Dr. Reynold Spector in which he declared personalized medicine to be one of his “seven deadly medical hypotheses,” even though he never actually demonstrated why it is deadly or that it’s even really a hypothesis. Come to think of it, with maybe – and I’m being very generous here – one exception, that pretty much describes all of Dr. Spector’s “seven deadly medical hypotheses”: each is either not a hypothesis, not deadly, or neither. Third, this time last week I was attending the American Association for Cancer Research (AACR) meeting in Orlando. I don’t really like Orlando much (if you’re not into Disney and tourist traps, it’s not the greatest town to hang out in for four days), but I do love me some good cancer science. One thing that was immediately apparent to me from the first sessions on Sunday, and from perusing the educational sessions on Saturday, was that the primary wave in cancer research right now is all about harnessing the advances in genomics, proteomics, metabolomics, and systems and computational biology, as well as technologies such as next-generation sequencing (NGS), to understand the biology of each cancer and thereby target therapies more closely to the biological abnormalities that drive it. You can get an idea of this from the promotional video the AACR played between its plenary sessions:
Which is actually a fairly good short, optimistic version of my post Why haven’t we cured cancer yet? As I mentioned before, with this year being the 40th anniversary of the National Cancer Act, as December approaches expect a lot of articles and press stories asking that very question, and I’m sure this won’t be the last time I write about this this year.
“Personalized medicine” in CAM
In the meantime, before I discuss a couple of examples of how science-based medicine is moving ever closer to personalized medicine, I can’t help but note what partly inspired this bit of my typical blather: sitting in the audience at AACR, hearing about all these tour de force genomic analyses that begin to reveal the individuality and complexity of tumors and, more importantly, to suggest strategies to target the specific abnormalities driving the growth and metastasis of each cancer, while contrasting them in my mind with the claims of “personalized” or “individualized” medicine that practitioners of “complementary and alternative medicine” (CAM) and “integrative medicine” (IM) like to make. As I pointed out about a year ago, “individualized” treatment in CAM-world basically means “making it up as you go along.” Consider, for example, homeopathy, which postulates prescientific ideas about the cause of disease, claims that “like cures like,” and then uses unscientific “provings” to determine which remedies can be used to “treat” each condition. Never mind that homeopathy is water (does it really need to be repeated?), consisting of remedies serially diluted and succussed so many times that in many of them it is highly unlikely a single molecule of the original remedy remains. Not only that, but one of the most popular homeopathic remedies for flu consists of basically the ground-up liver and heart of a Muscovy duck.
A good example of this is a post on that wretched hive of scum and quackery, The Huffington Post, by Dr. Mark Hyman, he of “functional medicine” fame, where, under a section entitled “Treating individuals, not diseases,” he writes:
There is no effective known treatment for dementia. But we do know a lot about what affects brain function and brain aging: our nutrition, inflammation, environmental toxins, stress, exercise, and deficiencies of hormones, vitamins, and omega-3 fats.
It is not just one gene, but the interaction between many genes and the environment that puts someone at risk for a chronic disease such as dementia. And we know that many things affect how our genes function — our diet, vitamins and minerals, toxins, allergens, stress, lack of sleep and exercise, and more.
Hyman then goes on to describe an anecdote of a man with developing dementia. Typical of many CAM doctors, Dr. Hyman chased down all sorts of “abnormalities,” prescribed all sorts of supplements to “fix” them, and subjected the man to various “detox” regimens, including unspecified “medications that helped him overcome his genetic difficulties by getting rid of toxins.” As Steve Novella pointed out at the time, in reality what Dr. Hyman was doing was a “bait and switch”: extrapolating from preliminary results in real science to come up with proposed treatments that have not been validated for the purposes he uses them for. His evidence for success? Not science, not clinical trials, not even preclinical data, that’s for sure. Instead, Dr. Hyman presents a couple of anecdotes.
Indeed, much of this sort of “making it up as you go along” is on full display in the anti-vaccine movement. The anti-vaccine crank blog Age of Autism has numerous examples of just this sort of “personalization,” including the hijacking of mitochondrial disorders as a predisposing factor for “vaccine injury” causing autism, the serial use of all manner of “biomedical quackery” to “recover” children, up to and including stem cells, and de facto unethical experimentation on autistic children. Be it chelation therapy, supplements, hyperbaric oxygen, or dubious stem cell therapies, “biomedical” quacks “individualize” their treatments to each autistic child, using dubious labs like Doctor’s Data to come up with lab abnormalities to “treat.” Of course, there is a fundamental disconnect between the claims of DAN! doctors to “individualize” therapy to each patient and their unwavering belief that vaccines cause autism. No matter how much they try to hide it, vaccines remain The One True Cause of the conditions known as autism and autism spectrum disorders.
There are many, many more examples of this kind of “personalization” of medicine in CAM, based on either no science or unjustified extrapolation from existing science, dubious lab tests, practitioner biases, and a veritable panoply of One True Causes of disease. So let’s contrast this with evolving personalized medicine in science-based medicine.
Personalized medicine in SBM
Because I’m a cancer researcher and surgeon, I find that currently the most promising examples of how genomics can contribute to personalized medicine come from cancer. This should not be surprising, because cancer is not one disease. It’s hundreds, perhaps thousands, of diseases, and even cancers arising from the same cells in the same organ can have very different biology. Basically, as I put it before, stealing liberally from The Hitchhiker’s Guide to the Galaxy, Cancer is complicated. You just won’t believe how vastly, hugely, mind-bogglingly complicated it is. I mean, you may think it’s complicated to understand basic cell biology, but that’s just peanuts to cancer. This point was driven home in the AACR video above, and I see it driven home with each new study of cancer genomics or heterogeneity that comes out in high impact journals every month.
For example, a couple of months ago, I described a tour de force study of the changes that occur in the genome of prostate cancer cells compared to normal prostate. The study demonstrated a number of alterations in the genome affecting molecular pathways that drive growth and metastasis and that could potentially be targeted for therapy. Then, not long before going to the AACR meeting, I came across this article in my news feed, Lung Cancer Evolves With Treatment, Study Finds, which refers to this study from Harvard by Sequist et al, Genotypic and Histological Evolution of Lung Cancers Acquiring Resistance to EGFR Inhibitors. The study itself demonstrates something we’ve known for quite some time: tumor cells evolve under selection by the modalities used to treat them. In this case, the authors studied how non-small cell lung cancer (NSCLC) evolves resistance under treatment with drug therapy targeted to the specific mutation driving its growth. Here, the gene whose activating mutations turn it on is the epidermal growth factor receptor (EGFR), and the targeted therapy consists of EGFR inhibitors such as gefitinib (brand name: Iressa) or erlotinib (brand name: Tarceva). Basically, the investigators subjected tumor tissue from patients who had developed resistance to EGFR tyrosine kinase inhibitors to systematic genetic and histological analysis. What they found confirmed some known genetic mechanisms of resistance but also revealed some unexpected changes, including:
All drug-resistant tumors retained their original activating EGFR mutations, and some acquired known mechanisms of resistance including the EGFR T790M mutation or MET gene amplification. Some resistant cancers showed unexpected genetic changes including EGFR amplification and mutations in the PIK3CA gene, whereas others underwent a pronounced epithelial-to-mesenchymal transition. Surprisingly, five resistant tumors (14%) transformed from NSCLC into small cell lung cancer (SCLC) and were sensitive to standard SCLC treatments. In three patients, serial biopsies revealed that genetic mechanisms of resistance were lost in the absence of the continued selective pressure of EGFR inhibitor treatment, and such cancers were sensitive to a second round of treatment with EGFR inhibitors. Collectively, these results deepen our understanding of resistance to EGFR inhibitors and underscore the importance of repeatedly assessing cancers throughout the course of the disease.
Or, as Dr. Lecia Sequist, lead author of the study, put it:
“It is really remarkable how much we oncologists assume about a tumor based on a single biopsy taken at one time, usually the time of diagnosis,” lead author Dr. Lecia Sequist said in an MGH news release. “Many cancers can evolve in response to exposure to different therapies over time, and we may be blind to the implications of these changes simply because we haven’t been looking for them.”
“Our findings suggest that, when feasible, oncogene-driven cancers should be interrogated with repeat biopsies throughout the course of the disease,” Sequist said. “Doing so could both contribute to greater understanding of acquired resistance and give caregivers better information about whether resumption of targeted therapy or initiation of a standard therapy would be most appropriate for an individual patient.”
Now, that would be personalized medicine, based on science, in marked contrast to what passes for “personalized” medicine in CAM.
Another cancer for which new findings in genomics and systems biology hold great promise is the cancer I spend most of my time treating and researching: breast cancer. Current methods to predict prognosis and guide treatment are crude and include stage as measured by volume of the primary tumor; presence and number of lymph node metastases; presence or absence of distant metastases; tumor grade as measured histologically; expression or lack of expression of important hormone receptors such as estrogen receptor (ER) and progesterone receptor (PR); and amplification of ErbB2 (HER2). Used together, these factors allow, albeit roughly, a degree of prediction of prognosis, as well as of personalization of therapies such as hormonal treatments (tamoxifen or aromatase inhibitors) or agents targeted at HER2 (trastuzumab). Back in 2000, aromatase inhibitors had not yet become widely available, and Herceptin (trastuzumab) had been FDA-approved for women with HER2-positive metastatic cancer but not yet for the adjuvant therapy of women with earlier-stage HER2-positive breast cancer. (That did not come until 2006, and then only with chemotherapy.) That year, Perou et al published a classic paper in Nature that used then state-of-the-art cDNA microarrays to divide breast cancers into subtypes based on gene expression patterns. Based on this work and work done since, these subtypes currently include normal-like, basal-like, luminal (A and B), and HER2(+)/ER(-), and there is a growing body of literature (for example, this study) suggesting that the different subtypes respond differently to different chemotherapy and targeted agents.
We learned many things from this work, which has accelerated over the last decade. For example, we now know that there are several intrinsic groups of breast cancer based on the patterns of gene expression they exhibit, and these groups subdivide many of the “classic” divisions we have been using for at least two decades, such as ER(+) or ER(-). More importantly, there is one form of breast cancer that expresses none of these markers. Dubbed “triple negative breast cancer” (TNBC), this form of breast cancer is defined as expressing neither ER, PR, nor HER2. TNBC is a close relative of a type of breast cancer categorized a decade ago through gene expression profiling and dubbed “basal-like” (synonymous terms include “basal-type,” “basal-epithelial phenotype,” “basal breast cancer,” and “basaloid breast cancer”). For the most part, the same treatments are currently used for TNBC and basal-like breast cancer because the sine qua non of TNBC is that there are no known molecular targets in this subtype, while non-TNBC basal-like breast cancer tends to express HER2 and thus be susceptible to Herceptin. Because TNBCs do not respond to drugs targeting ER or HER2, cytotoxic chemotherapy is currently the only option for adjuvant or neoadjuvant therapy in women with operable TNBC or for systemic treatment of metastatic disease. Paradoxically, TNBCs are more sensitive than ER(+) luminal tumors to standard chemotherapy regimens, but unfortunately this increased chemosensitivity does not translate into prolonged overall or disease-free survival. Consequently, the identification of new molecular targets or oncogene signatures that can be targeted for therapy, whether with new agents and/or new synergistic combinations of old agents, is a critical problem to overcome for TNBC. Tantalizing hints of how this might be done arose at AACR – for example, this study in which the genomes of 50 breast cancers were sequenced.
Another area in which genomics might assist us as clinicians in breast cancer is answering a rather vexing question regarding racial disparities in cancer outcomes. For example, although the incidence of breast cancer among premenopausal African-American women is lower than among Caucasian women, African-American women are more likely to die from their disease, with a breast cancer-specific mortality of 33 per 100,000 and five-year survival of 78%, compared to 23.9 per 100,000 and 90%, respectively, for Caucasians. Breast cancer among African-American women tends to be characterized by higher grade, later stage at diagnosis, and worse survival, even after controlling for age and stage. Although the causes of these observed differences are likely to be multifactorial and include socioeconomic factors, such as differences in access to screening and treatment, there also appear to be biological differences in breast cancer in African-American women. Indeed, evidence supporting biological differences as a major part of the explanation for these observed racial disparities was reported from the Carolina Breast Cancer Study, which found that the TNBC/basal-like breast cancer subtype is nearly three times more common among premenopausal African-American women than among Caucasian women. In marked contrast, the HER2(+)/ER(-) subtype did not vary appreciably with race or menopausal status, and the less aggressive ER(+) luminal A subtype was less prevalent in premenopausal African-American women. These results leave two questions open: (1) whether there is a difference in breast cancer biology that drives the tendency of young African-American women to develop TNBC at a much higher rate than Caucasian women, and (2) whether these biological differences, if they exist, can be exploited therapeutically to develop personalized regimens targeted at patients’ individual tumors. These are the sorts of questions that genomics can potentially answer and personalized medicine can be built upon.
Don’t get me wrong. We are not yet near true personalized medicine for breast cancer. Indeed, if you want to get an idea of the challenges that remain, several of the talks I attended are available for free at the AACR website. Talks that are worth watching include:
- Harold Varmus, New Directions in Cancer Research
- Lynda Chin, Translating cancer genomics: From discovery to medicine (unfortunately, nearly none of the slides from this talk have been made available)
- Waun Ki Hong, The landscape of cancer prevention: Personalized approach in lung cancer
- Andrea Califano, A systems biology approach to integrative cancer genomics (Another one without slides.)
There are many others, but unfortunately most of the relevant talks are either not posted yet, require payment, or both. It’s annoying to me, but on the other hand I understand that it costs money to produce these and put them up on the web. Still, the sheer number of talks on The Cancer Genome Atlas, which goes by the annoyingly cutesy acronym TCGA, is telling. Every day, or so it seemed, there were multiple talks on TCGA: reporting findings, telling investigators how to access the data on its website, and discussing its progress. Basically, TCGA is becoming a repository of genome sequences of many cancers, a resource that can be mined. It’s not too hard to envision that one day, when it holds many thousands of cancer genomes from before and after treatment, its database might serve as the basis for computer algorithms that compare a patient’s tumor genome to the database and come up with a list of recommended treatments.
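In its simplest imaginable form, such an algorithm is just a lookup from a tumor’s mutated genes to agents known to target them. The sketch below is purely hypothetical – the gene-drug pairings are well-known examples (EGFR inhibitors and trastuzumab both appear elsewhere in this post), but no such turnkey TCGA-backed service exists.

```python
# Hypothetical sketch: map a tumor's mutated genes to known targeted agents.
# A real system would weigh variant type, co-mutations, tissue, and trials.

ACTIONABLE = {
    "EGFR": ["gefitinib", "erlotinib"],  # NSCLC, as discussed above
    "HER2": ["trastuzumab"],             # HER2-amplified breast cancer
    "BCR-ABL": ["imatinib"],             # chronic myeloid leukemia
}

def suggest_therapies(mutated_genes):
    """Return {gene: [candidate drugs]} for genes with a targeted agent."""
    return {g: ACTIONABLE[g] for g in mutated_genes if g in ACTIONABLE}

# A tumor profile like the resistant NSCLC cases described above:
print(suggest_therapies({"EGFR", "PIK3CA", "TP53"}))
# {'EGFR': ['gefitinib', 'erlotinib']}
```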
Unfortunately, the move towards personalized medicine is not without its share of opportunists and companies selling kits based on genetic tests that haven’t been sufficiently validated in clinical trials to support their clinical use. Consider Anne Wojcicki of 23andMe, whose pitch has a lot more in common with “health freedom” arguments than with actual scientifically validated uses of genomic data, complete with heavy promotion in various social media. It’s a trait shared with enthusiasts of direct-to-consumer genetic testing, whose language really does harken to that of the “health freedom” movement. For example, compare this post with this post by Mike Adams, and the main difference you’ll find in the arguments will be one of degree, not kind. It’s “health freedom” all around. Even while promoters, in a fit of cognitive dissonance, simultaneously accuse physicians of paternalism on this issue and admit that the burgeoning personal genomics industry needs to be “purged of scammers and bottom feeders,” the effort is designed to create an army of people who “will go nuts” at any attempt by legislators to restrict direct access to personal genomic data or by regulatory agencies to control direct-to-consumer genetic testing more tightly. As Harriet Hall correctly put it, when it comes to routine genomic testing, we’re not there yet, not least because, as Scott Gavura has pointed out, there are lots of problems with the testing. As I put it, much of the promotion of personal genomic testing is disturbingly similar to the promotion of various autism “biomedical” therapies by DAN! doctors, or to Dr. Hyman’s panoply of woo, in that it is an extrapolation from data too preliminary to justify widespread use. That may well change one day, but today is not that day, and saying so, to me at least, is akin to pointing out that the “do it yourself” use of unproven cancer therapies like dichloroacetate is usually not a good idea.
We are, however, making progress, and we’re making that progress not by speculative extrapolation from preliminary science or by accepting dubious science as true. I’ll close with an example that is now routinely used in the treatment of ER(+) breast cancer. As I mentioned above, ER(+) breast cancer tends not to be as sensitive to chemotherapy as ER(-) (and in particular triple negative) breast cancer, even though it has a better prognosis. Over the last decade, a 21-gene assay, the OncotypeDX® assay, has been developed for women with ER(+)/HER2(-) cancer that has not yet spread to the axillary lymph nodes. Based on the results of this assay, a recurrence score can be calculated. If it’s high, the tumor is likely to be sensitive to chemotherapy, which will improve the woman’s chances of survival. If it’s low, the tumor is likely to be insensitive to standard chemotherapy, and the recommendation is for the woman not to undergo anything other than anti-estrogen therapy. In this way, thousands of women will be spared chemotherapy that would not help them. Current trials are investigating the utility of this and other genetic prognostic tests in node-positive tumors or, in the case of Oncotype, in determining whether patients with intermediate scores can be spared chemotherapy, and, if so, which ones.
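For a rough sense of the shape of such an assay, here’s a toy sketch: combine normalized gene-expression values into a weighted score, then bin the result into risk groups. The gene groups, weights, and expression values below are invented; only the 0–100 scale and the low/intermediate/high cutoffs echo the published OncotypeDX scheme, and the real algorithm (16 cancer-related genes normalized against 5 reference genes) is considerably more involved.

```python
# Toy sketch of a gene-expression recurrence score: weighted sum of
# normalized expression values, clamped to 0-100, binned into risk groups.
# Genes, weights, and values are invented; only the scale and the
# <18 / 18-30 / >=31 cutoffs mirror the published OncotypeDX scheme.

# hypothetical reference-normalized expression values for one tumor
expression = {"proliferation": 7.2, "invasion": 5.1, "HER2": 4.0, "ER": 9.5}

# hypothetical weights (higher proliferation -> worse; higher ER -> better)
weights = {"proliferation": 3.0, "invasion": 1.5, "HER2": 1.0, "ER": -1.2}

raw = sum(weights[g] * expression[g] for g in weights)
recurrence_score = max(0, min(100, round(raw)))  # clamp to the 0-100 scale

if recurrence_score < 18:
    risk = "low: hormonal therapy alone"
elif recurrence_score < 31:
    risk = "intermediate: benefit of chemotherapy unclear"
else:
    risk = "high: chemotherapy likely to add benefit"

print(f"Recurrence score {recurrence_score}: {risk}")
```

Run on these made-up numbers, the tumor lands in the intermediate group – exactly the population the trials mentioned above are trying to sort out.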
The revolution in genomics has been likened to a flow of water. In the late 1990s, it was a trickle. Five years later, it was a firehose. Today, it’s Niagara Falls, with terabytes of data being produced every month. The cost of sequencing an entire genome has fallen from $100,000,000 ten years ago to under $30,000 per genome by late 2010, with the era of sub-$1,000 genome sequences in sight. There is definite promise in genomics to result in truly personalized medicine. The key will be to combine it with proteomics, metabolomics, and an understanding of environmental influences. Doing that will not be easy, and, despite the Niagara Falls of data currently deluging us, it will not be fast. Unlike the claims of personalized medicine that arise from CAM and IM, it will take science and clinical trials to tease out the true associations from the noise, to differentiate correlation from causation, to separate and quantify effects of environment from effects of genome, and to figure out the interactions between them all. Oddly enough, that’s why I find last year’s AACR promotional video to be a bit more realistic than this year’s, annoying techno background music aside:
There are successes, challenges and even failures, but always hope. It’s our time, but we have to stick to science-based medicine.
Lone Star Uke Fest 2011
When spring is in the air and the first breath of April showers blows across the plains, the mind turns to simple passions. Luckily, for the past three years, the first week in April has wrapped lovingly around the Lone Star Ukulele Festival in Dallas, happily just down the street from SoftLayer’s main office.
Now I know what half of you out there are thinking: “UKULELE festival?!? Isn’t the ukulele that kitschy instrument that every college kid in the 1930s carried around in the pockets of their raccoon coat? Isn’t that the twangy noise maker that Tiny Tim crooned to on variety shows in the late ’60s and early ’70s?”
The answer, of course, is yes to both questions. Nevertheless, as attested to in the excellent documentary film Mighty Uke, the ukulele is enjoying something of a renaissance these days. From television commercials to the Grammy-winning “Hey, Soul Sister” by Train, the ukulele has plucked its way back into popular culture across the globe.
The other half of you are thinking, “Ukulele FESTIVAL?!? What the heck is a ‘Ukulele Festival’?”
A Ukulele festival is an opportunity for ukulele players of all levels to gather together and learn. The Lone Star Ukulele Festival featured nationally recognized talent like Kimo Hussey, Pops Bayless, Michelle Kiba, Ukulele Bart, Four Strings of Swing, and Texas’ own middle American good times band, The Wahooligans. These professionals taught seminars in topics ranging from basic music theory, to songwriting, to advanced blues and jazz chord structure, to performance skills. Such an event cannot be all work and no play, however, so the festival also hosted an Open Mic night, a Songwriter’s Contest, and not just one, but two concerts, one of which was dedicated to classical music interpreted on the Ukulele. Vendors had a forum to sell their instruments and books and there was no shortage of “Jam sessions” where participants gathered in the hotel lobby just to enjoy each other’s company and play songs together.
In short, a Ukulele Festival is a fun, relaxed atmosphere where like-minded people can revel in a common interest, share their favorite beverages, meet new friends and generally have a ball.
Now somewhere out there between the half of you that were wondering about the ukulele part, and the other half that were wondering about the festival part, I sense there is a cross section that is wondering what in the heck this has to do with SoftLayer and hosting. For those folks, I can only say this: At SoftLayer we work hard. We deal with large, complex systems and the difficult problems that arise from keeping those systems up and running. Sometimes the sailing is relatively smooth, and sometimes the waters can be a bit choppy, but we like to move forward and to do that we have to keep rowing.
Every once in a while though, when one has the chance, it’s nice to take a break and let the current carry you. When that time comes, I pick up my trusty ukulele, and the Lone Star Ukulele Festival was a great place to rest and refresh.
My friends, we’ve all made it through winter and into spring. April’s showers bring May’s flowers, ready for someone to stop and smell them. Hard work and dedication are important; get your work done! But equally important, be sure to set aside time to find your own simple passions, a place to indulge them, and friends to sit at your side once in a while. When you find your place, perhaps you could even pick up your own uke and join them in a song.
-Scott
Technology Partner Spotlight: MigrationBox.com
Welcome to the first installment in a new blog series highlighting the companies in SoftLayer’s new Technology Partners Marketplace. These Partners have built their businesses on the SoftLayer Platform, and we’re excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.
- Paul Ford, SoftLayer’s VP of Community Development
This is a guest post from Eduardo Fernandez of MigrationBox, a SoftLayer Tech Marketplace Partner specializing in simplifying the process of transferring email and other types of data between services. To learn more about MigrationBox, visit MigrationBox.com.
Take Control of Your Cloud Data
Online services are great, but moving your data to the cloud and moving it between cloud services is very difficult and time-consuming. Think about all the data that you have online: email, contacts, calendars, documents … What happens when you want to switch to a different provider? Maybe your company changed names or is acquiring another company or you want to move to a cheaper or better email provider. It’s really difficult to move this data, especially when you’re talking about hundreds or thousands of accounts.
I first ran into this problem about a year ago. I was doing consulting work for a client who asked me to move the company’s email to Google Apps. I found out that it’s really hard to transfer email in bulk. I’m a hacker, so it didn’t take me too long to come up with a tool that did a pretty good job of transferring the accounts one-by-one. Then I thought I could make a product out of this tool so that other people could use it as well.
At that point, I found it wasn’t that easy.
Processing email at scale is challenging. You see problems like buggy protocol implementations, unreliable network connections and bandwidth throttling. I had to bring people onto the team, like our Chief Architect Carlos Cabañero, and it took us several months to come up with a scalable migration platform. The good news is that we made this platform service-agnostic, so it’s not only able to transfer email: it can transfer any type of data. We only have to write connectors to deal with the various services.
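To make the connector idea concrete, here is a minimal sketch of what such a service-agnostic abstraction could look like in PHP. The interface and method names are purely hypothetical illustrations, not MigrationBox’s actual internals:

<?php
// Hypothetical sketch of a service-agnostic connector contract.
// None of these names are MigrationBox's real code; they only
// illustrate the architecture described above.
interface ServiceConnector
{
    // Open an authenticated session with the service.
    public function connect(array $credentials);

    // Enumerate the identifiers of the items (messages, contacts,
    // documents ...) available for migration.
    public function listItems();

    // Fetch one item in a normalized, service-neutral format.
    public function fetchItem($itemId);

    // Write one normalized item into the destination service.
    public function putItem($item);
}

// With a contract like this in place, a migration engine can pump
// data between any two services without knowing their wire protocols.
function migrate(ServiceConnector $source, ServiceConnector $target)
{
    foreach ($source->listItems() as $itemId) {
        $target->putItem($source->fetchItem($itemId));
    }
}

The payoff of a design like this is that supporting a new service means writing one new connector, rather than one new migration tool per source/destination pair.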
At the moment, we’re focusing on email and the Google Apps suite, but we will be expanding our offering to support popular business applications like Microsoft Exchange and SharePoint, and consumer apps like Flickr and Delicious.
Vendor lock-in is a growing concern when companies move to the cloud. Our objective is to give you control of your data, so you are free to move it to another service. With MigrationBox, you are not locked in anymore.
When our customer base started to grow, we ran into scalability problems ourselves. Data migration is a bandwidth-intensive process that requires lots of RAM and computing power. Fortunately, with SoftLayer we have more raw server power and automation capabilities than we’ll ever need.
The wave of moving your data online is just getting started. The cloud is popular, but only 5% of enterprises have moved their email into the cloud so far. This is just the beginning, and email is just one service. Everything is moving to the cloud: CRM, storage, document management … Cloud migration problems are going to grow and grow over the next five years, and MigrationBox will be there to help.
-Eduardo Fernandez, MigrationBox
7 Keys to Startup Success
We recently announced a partnership with the Tech Wildcatters Incubator Program, a Dallas-based “microseed” fund and startup accelerator, and we couldn’t be happier with the results we’ve seen thus far. Much of the press coverage of the sponsorship focused on the $1,000/mo of cloud, dedicated or hybrid hosting solutions we offered the program’s startup companies, but the most exciting aspect of the relationship thus far has been getting to engage with the participating up-and-coming entrepreneurs.
Having been in their seats about six years ago when SoftLayer was born in a living room, the SoftLayer team is especially qualified to give insight about the struggles and successes of running a startup, and that aspect of our partnership is where we hope to provide the most value. Over the past few weeks, we’ve met with the current Tech Wildcatters participants and seen some of the amazing ideas they have in the works, and we’re pumped to see them succeed … By all accounts, we can’t really call SoftLayer a “startup” anymore, but our investment in this community reinvigorates the startup culture we’ve tried to maintain as the company has grown.
Recently, I had the chance to share a few “Keys to Success” with program participants, and since those thoughts might be interesting for other startups and small business users, I thought I’d share some of the highlights here. There are no “guaranteed win” formulas or “super-secret secrets to success” in business (regardless of what an infomercial at 3am on a Tuesday morning may tell you), but these ideas may help you position your business for success:
1. Hire people smarter than you.
Your goal should be to get people on your team who can handle specific responsibilities better than you can. Just because you’re running the business doesn’t mean you can’t learn from it, and the best people to learn from are brilliant people.
2. Hire a diverse group.
Different people think differently, and different perspectives lead to better conversations and better business decisions. Filling your organization with one kind of employee will lead to a lot of “That’s the best decision ever” moments, but whether or not that “best decision ever” decision is good for anyone else is a crap shoot.
3. Founders should put skin in the game.
With all of the startup company trials and tribulations, you want everyone on your team to have a vested interest in the business’s success. Clock-punchers and coasters need not apply.
4. Boot-strap the beginning.
Along the lines of the previous recommendation, if you’ve remortgaged your house or sold your car or maxed out your credit cards on a new business, you’re going to care a lot more if it fails. By boot-strapping your initial financing, you become even more accountable for your success.
5. Operate with financial sense, operational sense and common sense.
Balance your business responsibly. If you disregard any of those “senses,” your tenure as a small business owner may be relatively short-lived. When it comes to financial sense, I also recommend that you invest in professional accounting support and software to save you a ton of headache and heartache down the road when it’s time to go after “real money.”
6. CBNO – Challenging But Not Overwhelming
You can always do something more for the business. You and your team should be maximizing your efforts to grow the business but not at the expense of burning out. If you’ve got “skin in the game,” your threshold for what is overwhelming may increase, but you have to understand the need for balance.
7. Have fun and make money.
In that order. If you’re not having fun, it doesn’t matter how much money you make. Startups are run by passionate people, and the second you lose that passion, you lose a significant piece of what makes your business or idea great.
I touched on about a dozen more points when it comes to how to orient your business to your customers, but I’ll save that bit for later.
Thou Shalt Transcode
Deep in the depths of an ancient tomb of the great Abswalli, you and your team accidentally awaken the Terbshianaki ghost army. You’re disconnected from the supply caravan with the valuable resources that could not only sustain your journey but also save your team. As Zeliagh the Protesiann hunter fires his last arrow, you come to the sudden realization that continuing your quest is now hopeless. Alas, true terror was unknown before this moment as you come to the most surprising realization: The one thing you truly can’t live without is your trusty server that converts one type of media into another.
Fear not, great adventurer, for I, Phil of the SLAPI, have come, and I bear the gifts of empowerment, automation and integration. Freedom from the horror of your epiphany can be found in our complimentary media transcoding service.
Before we can begin, some preparation is required. First, you must venture to our customer portal and create a transcoding user: Private Network->Transcoding. As you know from the use of your other SoftLayer spoils, you shan’t be obligated to access this functionality from your web browser. You can summon the API wizardry bequeathed to you by coders of old in the SLDN scroll: SoftLayer_Network_Media_Transcode_Account::createTranscodeAccount.*
*For the sake of this blog, we’ll abbreviate “SoftLayer_Network_Media_Transcode_Account” as “SNMTA” from here forward … Shortening it helps with blog formatting.
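For those who prefer the incantation itself, a minimal sketch of that call might look like the following. I’m assuming here that createTranscodeAccount takes no parameters; consult SLDN for the authoritative signature:

<?php
// Sketch: create a transcoding account over the API instead of the
// portal. Assumption: createTranscodeAccount() takes no parameters --
// verify the signature against the SLDN documentation.
$client = SoftLayer_SoapClient::getClient('SoftLayer_Network_Media_Transcode_Account', null, $apiUsername, $apiKey);
$created = $client->createTranscodeAccount();
var_dump($created);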
You must then construct an object to represent a SoftLayer Network Media Transcode Job, like our SoftLayer_Network_Media_Transcode_Job template object. This template object will be built with a number of properties; your pursuit of relief from your aforementioned horror necessitates only the required ones.
You will need to decide in which format the final treasure will take form. You may find this information with the SNMTA::getPresets method.
<?php
// Fetch the catalog of available transcoding presets
$client = SoftLayer_SoapClient::getClient('SoftLayer_Network_Media_Transcode_Account', $transcodeAccountId, $apiUsername, $apiKey);
$transcodePresets = $client->getPresets();
print_r($transcodePresets);

Which returns something like:

Array
(
    [0] => stdClass Object
        (
            [GUID] => {9C3716B9-C931-4873-9FD1-03A17B0D3350}
            [category] => Application Specific
            [description] => MPEG2, Roku playback, 1920 x 1080, Interlaced, 29.97fps, 25Mbps, used with Component/VGA connection.
            [name] => MPEG2 - Roku - 1080i
        )
    [1] => stdClass Object
        (
            [GUID] => {03E81152-2A74-4FF3-BAD9-D1FF29973032}
            [category] => Application Specific
            [description] => MPEG2, Roku playback, 720 x 480, 29.97fps, 6.9Mbps, used with Component/S-Video connection.
            [name] => MPEG2 - Roku - 480i
        )
    [2] => stdClass Object
        (
            [GUID] => {75A264DB-7FBD-4976-A422-14FBB7950BD1}
            [category] => Application Specific
            [description] => MPEG2, Roku playback, 720 x 480, Progressive, 29.97fps, 6.9Mbps, used with Component/VGA connection.
            [name] => MPEG2 - Roku - 480p
        )
    .....
)
The freedom to use this power (the more you know!) is yours. In this instance, I scrolled through the list and let my intuition find the option that just felt right:
stdClass Object
(
    [GUID] => {21A33980-5D78-4010-B4EB-6EF15F5CD69F}
    [category] => Web\Flash
    [description] =>
    [name] => FLV 1296kbps 640x480 4x3 29.97fps
)
To decipher this language, we must know the following:
- The GUID is the unique identifier which we will use to reference our champion.
- The category is used to group like presets together; this will be useful for those whose journey leads down the path of GUI creation.
- A description of the preset, if one is available, will be listed under description.
- The name is simply a human-readable name for our preset.
You are nearly ready to restore your yearned-for transcoding service as the ghostly horde presses the defensive perimeter. We have but one more task of preparation: We must provide the transcoding service a file! Using your Wand of File Transference +3, or your favorite FTP client, enter the details for your transcode FTP account found on the Transcoding page of the IMS (or, of course, via SNMTA::getFtpAttributes) and choose the “in” directory as the destination for your source file. Should you lack a source file of your own, you may call upon Sheshura, a fairy sprite specializing in arcane documents, for a source video file: Epic Battle
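If your Wand of File Transference happens to be PHP itself, a minimal sketch of the upload using PHP’s built-in FTP functions might look like this. The hostname and credentials below are placeholders; in practice you would take them from the Transcoding page or from SNMTA::getFtpAttributes:

<?php
// Sketch: push a source video into the transcoding account's "in"
// directory with PHP's FTP extension. Host and credentials are
// placeholders -- pull the real values from the portal or from
// SNMTA::getFtpAttributes().
$conn = ftp_connect('ftp.transcode.example.com');  // hypothetical host
if (!$conn || !ftp_login($conn, $ftpUsername, $ftpPassword)) {
    die('Could not reach the transcoding FTP service.');
}
ftp_pasv($conn, true);  // passive mode plays nicer with firewalls
if (!ftp_put($conn, '/in/video.mov', 'video.mov', FTP_BINARY)) {
    die('Upload failed.');
}
ftp_close($conn);

Retrieving the finished file later is the mirror image of this: ftp_get() against the “out” directory.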
The battle rages around you as the Wahwatarian mercenaries protect your flank. The clicking of your laptop keys twists and merges in the air around your ears, only to transcend into a booming chorus of “Ride of the Valkyries” as you near transcoding Utopia. You strike:
<?php
// Create a transcoding client
$client = SoftLayer_SoapClient::getClient('SoftLayer_Network_Media_Transcode_Job', null, $apiUsername, $apiKey);

// Define our preset GUID and filename
$presetGUID = '{95861D24-9DF5-405E-A130-A40C6637332D}';
$inputFile = 'video.mov';

/*
 * The transcoding service will append the new file extension to the output file,
 * so we strip the extension here.
 */
$outputFile = substr($inputFile, 0, strrpos($inputFile, '.'));

try {
    // Create a SoftLayer_Network_Media_Transcode_Job template object with the required properties
    $transcodeJob = new stdClass();
    $transcodeJob->transcodePresetGuid = $presetGUID;
    $transcodeJob->inputFile = "/in/$inputFile";
    $transcodeJob->outputFile = "/out/$outputFile";

    // Call createObject() with our template object as a parameter
    $result = $client->createObject($transcodeJob);

    // $result will contain a SoftLayer_Network_Media_Transcode_Job object
    print_r($result);
} catch (Exception $e) {
    die($e->getMessage());
}
If your will did not waver nor did your focus break in the face of ever-closing ghouls pounding your resolve, your treasure will be waiting. Brandish your Wand of File Transference +3, or utilize your favorite FTP client to retrieve your reward: “out/video.flv”
If the gods be with thee, your resulting file should look like this: Epic Battle (in .flv)
With your victory fresh upon the tablets of history, you can now encode to any of our supported formats. Try using the process above to convert the video to .mp4 format so your resulting file output is Epic Battle (in .mp4)!
-Phil
P.S. If you’re going to take off your training wheels, the second example uses “[description] => MPEG4 file, 320x240, 15fps, 256kbps for download” for the bandwidth impaired.
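If you’d rather not eyeball the preset list for that second example, a short sketch can hunt down the GUID for you by filtering the same getPresets output used earlier. The search string is an assumption drawn from the description in the P.S.; adjust it if the preset catalog changes:

<?php
// Sketch: find the preset GUID for the .mp4 exercise by filtering
// getPresets(). The description fragment is taken from the P.S.
// above and may need adjusting if the catalog changes.
$client = SoftLayer_SoapClient::getClient('SoftLayer_Network_Media_Transcode_Account', $transcodeAccountId, $apiUsername, $apiKey);
foreach ($client->getPresets() as $preset) {
    if (strpos($preset->description, 'MPEG4 file, 320x240') !== false) {
        echo "Preset GUID: {$preset->GUID}\n";  // feed this to createObject() as shown above
    }
}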
Take Me Out to the Ball Game
If you didn’t read the title to this post in the singsong seventh-inning stretch tune, the rest of this post probably won’t be for you. For those of you who just got to “Buy me some peanuts and Cracker Jack,” as the song kept playing in your head, you’re going to love the news we have to share. We’ll wait for you to finish belting out “At the old ball game!” first, though.
[Pausing here for everyone to finish the song.]
Now that everyone’s back together, I want you to make sure you don’t lose any of that late-inning adrenaline because you might need it at the end of this post.
SoftLayer is all about customer experience. Just ask Skinman. If you’re a SoftLayer employee and you don’t have “the customer” in the top slot of your “work priorities” list, you’ll either need to update that list quickly or update your résumé. This post isn’t about THE SoftLayer customer experience, though … It’s about A SoftLayer customer experience.
THE SoftLayer customer experience is all about automation, efficiency, service and innovation. A SoftLayer customer experience uses the term in a much more general sense: It’s any opportunity we have to give back to our customers in the form of events, contests, and in this case, baseball tickets! If you’re a SoftLayer customer, you’re entitled to more fun than our competitors’ customers … And if that’s not in our terms of service, it probably should be.
Throughout the 2011 Major League Baseball season, SoftLayer will be giving away tickets to Texas Rangers home games in Arlington, Texas! We’re going to keep you guessing about how/when/where we’ll be giving them away, but if you keep your eye on the SoftLayer Blog, follow @SoftLayer on Twitter, subscribe to SoftLayerTube on YouTube and “Like” us on Facebook, you’ll be the first to hear.
We’re pretty sure customers in the DFW area are going to be the most excited, since they can root for the home team, but as the season progresses, the net may be cast significantly wider … Reaching out to customers in other parts of the country (world?) who love SoftLayer and want to catch a game while they’re in town for a data center tour. But let’s not get too far ahead of ourselves just yet. Let’s give away our first set of tickets!
Texas Rangers v. Anaheim Angels
- Date: Monday, April 18, 2011
- Time: 7:05pm
- Location: Rangers Ballpark in Arlington
- Seats: 2 – Section 26 (Lower Level, behind Home Plate!)
- Transportation: You’re responsible for transportation to/from the park
How to Enter
Since our first giveaway doesn’t include transportation to/from the game, the primary pool of participants will be customers who live within driving distance (or happen to be in the DFW area on April 18). Entry into the competition is simple: Comment on this post about why you love SoftLayer.
When you’re entering your email, please use a contact address associated with your SoftLayer account. Submissions will be accepted from now until 10 a.m. CDT on Thursday, April 14, so get to writing! We’ll hold a quick internal vote on all of the submissions after removing your contact information, so it’s obscured which account goes with which response. If your submission wins, we’ll email you on Thursday to arrange for ticket delivery … You’ll have the whole weekend to get excited about the game!
Play Ball!
So … What Does SoftLayer Do?
In the first quarter of 2011, SoftLayer presented in, exhibited at or sponsored at least thirty different conferences and events. We’ve globe-trotted to places like Orlando, San Francisco, New York, Las Vegas and Europa Park, Germany, to spread the word about who SoftLayer is and what we do. We’ve talked about data center pods over beers in Boston and showed some server skin at SxSW in Austin, and in the process, we got to share the SoftLayer story with literally thousands (if not tens of thousands) of people.
It turns out there might be a few billion people on the planet who haven’t heard of SoftLayer (yet), so every event we attend gives us an opportunity to meet more people and explain the value SoftLayer can bring to their business. Take this week’s Web 2.0 Expo for example: More than 500 people came by our booth to learn more about us, get some cool swag or grab a beer during the booth crawl, and a hundred of them lined up to try their hand at the Server Challenge.
Of the attendees who made it to the front of the pack to chat with one of us, it’s remarkable to note how consistent some of the conversations were. Since we haven’t really done a refresh to catch everyone up on what it is SoftLayer does (and since this blog is being presented in the sponsor section of Techmeme), a reintroduction may be in order.
Since the conversations I had at Web 2.0 Expo are so fresh in my mind, I thought I’d frame this little post around the most common questions we were asked by attendees learning about us for the first time (Warning: The responses to these general questions are SoftLayer’s value statements, so they’ll come off as very salesy … Leading you to believe we’d answer any other way would be disingenuous, though):
“What does SoftLayer do?”
Simply put, SoftLayer is a hosting provider. That generalization doesn’t do our business justice, though. We have ten data center facilities in Houston, Dallas, Washington D.C. and Seattle, and we host more than 80,000 servers for more than 25,000 customers worldwide. We offer cloud, dedicated and hybrid environments and resources that allow businesses to outsource their IT, so they can focus on their core competencies and give their hosting infrastructure to the experts.
“How are you different from <Hosting Competitor>?”
SoftLayer was built with a focus on a few core values: Innovation, Empowerment, Integration and Automation. Our hosting platform offers a true hybrid experience – dedicated and cloud services are seamlessly integrated – that can be accessed and controlled by customers in a number of ways. Each SoftLayer server supports three different kinds of network traffic (public, private and out-of-band management), and customers have complete access to their server via free KVM over IP included with every server. Everything you can do in our portal can be done with our API, and we’re an industry leader in product innovation.
Oh, and you can also spin up a cloud server on our platform in under 15 minutes and a dedicated server with your specs and operating system in 2-4 hours.
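To give one tiny taste of that portal/API parity, here’s a minimal sketch that pulls your account record over the API, in the same style as the transcoding examples elsewhere on this blog. SoftLayer_Account::getObject is a standard SLDN call, but treat the details as a sketch to verify against the SLDN documentation:

<?php
// Minimal sketch: fetch the account record over the SoftLayer API.
// Verify the exact service and method against SLDN before relying
// on this in production.
$client = SoftLayer_SoapClient::getClient('SoftLayer_Account', null, $apiUsername, $apiKey);
$account = $client->getObject();
echo $account->companyName . "\n";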
“Do you support <XYZ Technology>?”
If <XYZ Technology> is current, legal and useful, you can probably use it on our platform. If we don’t directly offer software or services you need, we have droves of customers and partners who probably do, and we’re happy to refer you to them. Given the unparalleled access you get to your SoftLayer hosting environment, the world is your oyster.
“Impressive sales pitch, but how do I know it’s more than just a pitch?”
Our business is designed around making our customers happy. Our services are offered on month-to-month contracts, so we have to continue to provide great service to you or we lose your business. We can share customer success stories until we’re blue in the face, but what really matters is what your experience is when you try us out.
Somewhere in there, you might have written me off as just some public relations guy, but I believe every word in those responses (as I’m sure my 550+ coworkers do). I’m not trying to claim that if you host with SoftLayer you’ll never have another problem or that people don’t make mistakes on our end, but you’d be hard-pressed to find a company with a more devoted group of people focused on providing the best experience in the business.
The Path to Hosting 19+ Million Domain Names
If you own a business, your goal is to be wildly successful. You might measure that by financial growth, operational efficiencies or customer satisfaction, but at the end of the day, you want to execute on your vision and keep that success going. With SoftLayer’s management team, company culture, innovative platform and focus on the customer experience, we’ve managed to become a phenomenally successful and fast-growing company.
I run the Market Intelligence group at SoftLayer, and my team is responsible for reviewing success metrics internally and in comparison with many of our competitors. We have a wealth of data at our fingertips, and one of the most interesting statistics I track is related to market switching data.
Today, I was looking closely at some of our most recent domain name data, and I came across some pretty amazing information. We have millions of data points instantly available for filtering and sorting, so we can produce some pretty insightful market intelligence that can help us make better business and customer decisions.
While reviewing that domain name information, I did a quick pivot exercise in Excel to see the number of domain names hosted by SoftLayer – not just DNS hosted by us, but a pretty comprehensive view of the number of domains hosted on our infrastructure. As of March 1, 2011, we had 19,164,117 domains. Yes, you read that correctly: More than 19 million domains are hosted by SoftLayer. To give that a little context, the total domain name pool was 282,602,796, so we host about 6.78% of all domain names on the Internet.
That’s impressive, but it’s not the end of the story.
The number of net new domains coming to SoftLayer on a monthly basis is even more remarkable … From October 2010 to March 2011 – a 6 month snapshot – the total number of domains hosted on SoftLayer infrastructure had compounded growth of 124%:
What will the next six months hold? You can bet I’ll be refreshing the data to keep an eye on it. Without extrapolating much other information, I’d say that the growth numbers are astounding and they’re indicative of an unwavering confidence from our customers.
-Todd
The Rise of the Geek
Whether fact or fiction, in business, sports, politics or the arts, everyone loves a triumphant underdog story, and who could be more of an underdog than a bookish, socially awkward geek? You know … the ones that were overlooked and under-appreciated (until they made their first million dollars). The history of the Internet is littered with geeks changing the way nearly every person in the developed world interacts with the people around them. In honor of these stereotypically statistical underdogs, May 25 – the premiere date of the first Star Wars film (among other geeky holidays) – has come to be known as Geek Pride Day.
With more than 80,000 active servers and 550+ employees, SoftLayer is essentially a Geek Think Tank of employees and a proving ground of sorts for customers. As I’m writing this, the faint hum of our generators and cooling systems reminds me that the next Facebook or Microsoft might be getting started in the data center pod right below my desk at our Dallas Alpha HQ.
Just considering that prospect reinforced to me that the geeks have really done it! The 2.0 millennium has been marked by the rise of YouSpaceTwitterWikiMyTube sites spurred on by textbook-definition nerdy underdogs … It’s right in line with Lance’s theory of world domination. No longer are geeks merely the Steve Urkels of the business world.
They’re successful, smart, savvy, innovative early adopters.
Let’s take a moment and explore some of the more polarizing geeks of our day – Geeks who made being a geek cool:
- Steve Jobs – 500,000 iPads sold by the end of the first week of release. Apple’s market cap exceeds that of Microsoft for the first time since 1989. Open Source application development and support is a key part of its ongoing software strategy.
- Bill Gates – Windows, Microsoft Office, Xbox and their new “To the cloud” focus.
- Mark Zuckerberg – The Founder of Facebook:
- More than 500 million active users
- Entrepreneurs and developers from more than 190 countries build with Facebook Platform
- Many of whom use SoftLayer as their Infrastructure host
- People spend over 700 billion minutes per month on Facebook
- Peter Parker – Spider-Man – Peter has a natural gift for the sciences and is considered by some a genius. After being bitten by a radioactive spider, Peter develops superhuman strength and agility, along with a sixth sense for danger.
- Dwight Schrute – Top salesman for the Dunder Mifflin Paper Company. Winner of numerous sales awards. One-time Assistant to the Regional Manager and beet farmer extraordinaire
Alright … I might be getting carried away lumping fictional characters into the mix, but you get my point.
As a member of the SLales department, I am forever “geeking out” over the new and exciting applications, products and tools our customers are coming up with. Although I don’t believe I can truly claim my geek badge of honor yet, I aspire to reach that rank.
-Arielle
PS: For the geeks out there, (without cheating) what year was the first Star Wars film released? Did you see it in the theater? If you weren’t alive when it was released, when did you first see it?
WorldHostingDays 2011
This week, Lance and I hopped over the pond to attend WorldHostingDays 2011 at Europa-Park in Rust, Germany. If you haven’t heard of WorldHostingDays, you may be a little more familiar with WebhostingDays, its more narrowly focused predecessor. Because many of the sessions and discussions at the event have evolved and grown significantly from the pure-play “web hosting” market, the name change was a good one … And it didn’t even require tweaking the WHD abbreviation.
Given the ambitious scope of WorldHostingDays, we weren’t sure what to expect from the sessions, but we were excited to hear fresh perspectives on the European-centric hosting market. We walked away from the sessions with a few new ideas to implement into SoftLayer’s business, and it was interesting to hear the (regionally accented) conversations focus on the same problems and questions the US hosting industry is tackling: Public and private clouds, IPv6, scalability, stability and security.
Many European companies that are relatively new to the hosting scene are experiencing phenomenal growth (similar to what we’ve seen at SoftLayer), and the opportunity is expanding even faster than they are as new markets emerge with fresh needs for quality infrastructure. In these developing markets, local events like WHD will be invaluable for educating local businesses on how this relatively new industry might change the face of their environment … And when those efforts carry into Asia, the sky is the limit on the opportunity.
We have some pretty huge global plans on the horizon, and we’re excited to position ourselves for worldwide recognition. When WorldHostingDays 2012 rolls around, you’re going to see an even bigger, badder and better SoftLayer.
3 Bars | 3 Questions: SoftLayer Channel Sales
In this week’s “3 Bars | 3 Questions” episode, I was nominated by Tom Blair to talk about SoftLayer’s Channel Sales team and the competitive advantages our three partner programs (strategic, referral and reseller) have over our competition.
As you’ll see in the video, we actually covered seven or eight questions, but the basic framework for the chat was these three:
- How does SoftLayer define the channel?
- What’s happening in the SoftLayer channel program?
- How does SoftLayer’s referral program differ from the programs offered by competitors?
Because we had quite a bit of ground to cover, the video runs about 15 minutes, but I hope it’s entertaining and informative throughout. Be sure to stick around through the end of the video to hear the best analogy I can think of for SoftLayer’s program.
To learn more about the new referral partner program I mention, email referral@softlayer.com, and we can fill you in.
Since we recently announced an awesome partnership with TechWildcatters, I’m looking forward to hearing what SoftLayer VP of Community Development Paul Ford has to say about what else is coming up. Paul, enjoy the hot seat!
-Drew