Lifespan and Care New England Plan Monopoly (Again)

For the second time in ten years, Lifespan and Care New England, Rhode Island's two large health systems, plan to merge into a single entity to be called Lifespan.

In 1998, the two entities applied for the regulatory approvals needed to merge, but pulled their applications in 2000.  If allowed to combine, the resulting entity would control nearly three-fourths of Rhode Island's hospital system.

Lifespan President and CEO George Vecchione expects the regulatory process to take only six to nine months and the merger to yield some efficiencies, chiefly in central-office operations and the alignment of system-wide services, but without substantial job cuts.

According to Lifespan, clinical enhancements that would occur under the merger include:

  • Butler Hospital will create the state’s first Brain Sciences Institute, which will support research, education and behavioral health treatment. In addition, the Butler campus would be sold or otherwise developed to fund a new Butler Hospital facility on or near the Rhode Island Hospital (RIH) campus
  • Kent Hospital will apply to become a level II trauma center and will also seek to create an emergency medicine residency program. Together, these improvements will enhance statewide disaster responsiveness
  • Women & Infants will retain its leadership role in neonatal and women’s reproductive health. There will also be a greater opportunity to develop services for conditions that disproportionately affect women and to maximize Women & Infants’ referral network and strong regional presence
  • Continuation of Care New England’s VNA under the Lifespan system

Responses to the merger plan have been mixed. Rhode Island Governor Donald L. Carcieri (R) notes that the creation "of such a dominant healthcare network" raises "a number of important concerns," while Lt. Governor Elizabeth H. Roberts states that she will "advocate for a focus on the core mission of hospitals to serve the public and recognize the importance of this proposal’s potential for economic growth in the state.”

Mass Governor Asks Blue Cross to Keep Higher Employer Contribution

At the request of Governor Deval L. Patrick (D-MA), Blue Cross and Blue Shield of Massachusetts, the state's largest health insurer with about 3 million members, scrapped a new policy that would have allowed owners of small businesses to contribute just one-third of the cost of their employees' health plan premiums.

Prior to 1 July, Blue Cross required a minimum 50 percent contribution to premiums from employers with 50 or fewer workers.  The average contribution by Massachusetts employers is about 75 percent.

On 1 July, Massachusetts' healthcare reform law took effect. Under the law, if a company does not offer health insurance, low-income workers can receive subsidized coverage under the state's Commonwealth Care plan.  They are ineligible for assistance, however, if their employer offers a company health plan, regardless of the company's contribution to premiums.

Companies that do not offer health insurance to their employees, or that contribute less than what the state deems "fair and reasonable" toward their employees' health plan premiums, are required to pay an annual fee of $295 per employee.

Harvard Pilgrim Health Care, the state's second-largest health insurer with about 1 million members, has said it will retain its 50 percent minimum contribution, after reviewing its own policies when Blue Cross lowered its minimum to 33 percent.

AMA Sounds the Alarm, Medicare Making Yet Another Attempt to Cut Reimbursement

The American Medical Association (AMA) must once again don its armor, this time preparing to go to battle on behalf of its approximately 240,000 members over pending cuts to Medicare reimbursement.  Physicians received below-inflation updates in 2004 and 2005 and zero percent updates in 2006 and 2007.

Without congressional action, Medicare physician payment rates will be reduced 10 percent effective 1 January 2008.  By 2016, the cuts will total about 40 percent, while practice costs are expected to increase by 20 percent.

In addition to steep pay cuts, the AMA charges that the Medicare physician payment update formula:

  • has kept average 2007 Medicare physician payment rates about the same as they were in 2001
  • prevents physicians from making needed investments in staff and health information technology to support quality measurement
  • punishes physicians for participating in initiatives that encourage greater use of preventive care in order to reduce hospitalizations
  • has led to severe shortfalls in Medicare’s budget for physician services that have driven Congress to enact short-term interventions with funding methods that have increased both the duration of the cuts and the cost of a long-term solution
  • hurts access to care for America’s military families, as payment rates in the Department of Defense’s TRICARE program are tied to Medicare rates

For more information, an AMA Physician Payment Action Kit is available; physicians can also join the AMA Physician Grassroots Network to receive updates on physician payment rate legislation.

The impacts of Medicare physician payment cuts in New England are significant:

  • New England physicians will lose $306 million for the care of elderly and disabled patients in 2008 due to the 10 percent cut in Medicare payments beginning 1 January.  The region's physicians will lose $12.1 billion for the care of elderly and disabled patients by 2016 due to eight years of cuts
  • 149,461 employees, 2,007,382 Medicare patients and 234,343 TRICARE patients in New England will be affected by these cuts
  • 42 percent of New England's practicing physicians are over 50, an age at which surveys have shown many physicians consider reducing their patient care activities

                      CT            ME            MA            NH            RI            VT
Losses in 2008        $92 million   $27 million   $137 million  $22 million   $18 million   $10 million
Losses by 2016        $3.7 billion  $1 billion    $5.4 billion  $860 million  $720 million  $380 million
Affected:
  Employees           39,803        13,671        63,187        14,144        11,613        7,043
  Medicare Patients   485,970       220,081       884,894       170,937       155,540       89,960
  TRICARE Patients    51,403        46,849        70,159        28,786        24,818        12,328
Physicians Aged 50+   42%           46%           38%           43%           37%           43%

  • Compared to the rest of the country, Connecticut, Massachusetts, Rhode Island, and Vermont, each at 14 percent, have an above-average proportion of Medicare patients
  • Maine, at 17 percent, has the second-highest proportion of Medicare patients in the country and, at 17 practicing physicians per 1,000 beneficiaries, has a below-average ratio of physicians to Medicare beneficiaries, even before the cuts take effect
  • In 2008, on top of the 10 percent cut nationwide, the "Southern Maine" Medicare payment area faces an additional 1.1 percent cut and the "Rest of Maine" payment area an additional 2.1 percent; New Hampshire faces an additional 1 percent cut; and Vermont faces an additional 1.7 percent cut

Countering the congressional inaction and the resulting 10 percent rate cut, the AMA is advocating a 1.7 percent increase in reimbursement in 2008, in line with the estimated practice cost increase; long-term, the AMA wants Congress to create a new reimbursement formula.

Overstepping their role as a payment mechanism and forgetting that they're not actually providers of medical care, the talking heads of the health insurance industry charge that physicians are partly to blame, contributing to costs by ordering unnecessary and expensive services.  Mohit Ghose, spokesman for the insurance trade association America's Health Insurance Plans, was even disingenuous enough to question whether physicians are always providing "appropriate services at the right setting at the right time."

BLOG Medicine must concur with the AMA's statement that "utilization of physician services is not the cause of the Medicare program's financial predicament, and cuts in physician payment rates are not the way to improve Medicare's financial sustainability."  Congress needs to bring up the house-lights and call a close to this "annual dance of death" -- it's time to pay the piper.

Pollyanna With a Pen: Maine Governor Signs 18 New Health Care Bills into Law

On Tuesday, 17 July, Governor John Baldacci (D-ME), joined by the state's legislative Democrats, signed into law 18 new health care bills meant to protect the health and welfare of the people of Maine.

You couldn't see the rose-colored glasses on his face, but Baldacci's "Pollyanna" was definitely showing in his prepared statement: "What all these have in common is that they provide further evidence that Maine is the leader in health care reform and in efforts to expand access to quality, affordable health care."

Maine, already heavily burdened with healthcare legislation, has added laws that require health insurers to extend coverage to policyholders' adult children until age 25 and to cover hearing aids, prohibit advertising of prescription drugs in software sold in Maine, ensure sterile supplies for needle exchange programs, and regulate access and screening for HIV and cancer.

Increasing health care costs, postpartum depression, eating disorders, and the role of dental hygienists are all to be reviewed by study groups.  November will be Lung Cancer Awareness Month, free health clinics will have lower taxes and, disturbingly, Dirigo Health -- widely viewed as an expensive failure that stopped accepting new enrollees as of 1 July due to cost concerns -- will now be allowed the even more expensive proposition of self-insurance.

Noticeably absent from Tuesday's "Glad Game" shenanigans were a resolution for the much-needed reform of MaineCare, Maine's overloaded and very broken Medicaid program, and a new, functional, self-supporting funding mechanism for Dirigo Health.

The Maine Legislative Documents signed into new law include:

LD 4 -- An Act to Amend the Prescription Privacy Law

LD 101 -- An Act to Enhance Screening for Breast Cancer

LD 144 -- An Act to Support Maine's Free Clinics

LD 243 -- An Act to Establish November as Lung Cancer Awareness Month

LD 429 -- An Act to Improve Access to HIV Testing in Health Care Settings

LD 431 -- An Act to Enable the Dirigo Health Program to be Self-Administered

LD 792 -- An Act Concerning Postpartum Mental Health Education

LD 807 -- An Act to Prevent Overcharging for Prescription Drug Copayments

LD 839 -- An Act to Establish a Prescription Drug Academic Detailing Program

LD 841 -- An Act to Extend Health Insurance Coverage for Dependent Children up to 25-Years of Age

LD 995 -- An Act to Reduce the Expense of Health Care Treatment and Protect the Health of Maine Citizens by Providing Early Screening, Detection and Prevention of Cancer

LD 1044 -- An Act to Address Eating Disorders in Maine

LD 1129 -- An Act to Increase Access to Oral Health Care

LD 1440 -- An Act to Prohibit Inappropriate Software Advertising of Prescription Drugs

LD 1514 -- An Act to Require Health Insurance Coverage for Hearing Aids

LD 1786 -- An Act to Reduce the Spread of Infectious Disease through Shared Hypodermic Apparatuses

LD 1812 -- Resolve, Regarding the Role of Local Regions in Maine's Emerging Public Health Infrastructure

LD 1849 -- An Act to Protect Consumers from Rising Health Care Costs.

For an Operator, Please Press…

We've all experienced it -- calling customer service only to be put on never-ending hold, or, worse, having to listen to the numerous prompts, pressing all the appropriate keys only to be disconnected.

Paul English, founder of Gethuman.com, figured out a better way.  He and his core group of supporters tracked down and have published the shortcuts that cut out the computerized telephone middle-man and get you to a human operator.

English's site allows you to jump to specific categories (e.g., Insurance) as well as sort individually through the more than 500 listed companies to find both toll-free telephone numbers and the shortcuts that get you off hold and connected to a live person.  The site also has a link if you prefer a printer-friendly format rather than the electronic version of the information.

In a corporate world dominated by impersonal, unhelpful, computerized interactive voice response, English's site is much-needed relief for an all-too-human frustration.

Health Insurance Benefit Costs by Region

According to March 2007 data released by the U.S. Bureau of Labor Statistics, among the four regions of the United States, the average cost per hour to employers for health insurance benefits ranges from $1.59 to $2.04.

[Figure: Employer costs per hour worked for health insurance by region, private industry, March 2007]

The Compensation Cost Trends program reports that the proportion of total compensation represented by health benefits was 6.7 percent in the West, 6.9 percent in the South and Northeast, and 7.8 percent in the Midwest.

Nationwide, the average cost for health benefits was $1.83 per hour worked, accounting for 7.1 percent of total compensation.

Boo Bash 2009 – Desktop Costume Included!

Since Halloween falls on a Saturday this year, The Planet’s annual Boo Bash is happening today. As you can see from our archives, there are a lot of creative people around here, and when a costume contest challenge is issued, you’re bound to get some interesting results. I’ve already seen a fully costumed Ghostbuster, a bumble bee, and about 45 people – including our CEO and CFO – dressed as Todd Mitchell. They say imitation is the sincerest form of flattery, so Todd must feel VERY flattered.

We will post our costumed competitors on The Planet Flickr for all to see, and you can post a comment here to vote for your favorites. Click the picture of “Todd” below to go directly to the Boo Bash 2009 album.

[Photo: “Todd”]

To let you share in today’s costuming, we’ve got a present for you. As a part of our fundraising efforts to support the American Heart Association, we printed shirts for employees who donate. The shirt design has been so popular internally that I made it into a few wallpapers that you can use:

[Image: “You Got Served” wallpaper]

Versions Available:
Dual-Monitor Setup (2560 x 1024)
Single Monitor – Server Only (1280 x 1024)
Single Monitor – “You Got Served” Only (1280 x 1024)

After you get your desktop suited up in its new costume, remember to vote for your favorite Boo Bash 2009 entrant in the comment section below.

Trick or Treat!

-Kevin


Why No One Will Talk About “Cloud Computing” in 10 Years

At the 2009 Cloud Computing Conference in Santa Clara, Calif., The Planet Director of Product Management Rob Walters was one of five experts invited to participate in a panel discussion about enterprise-level cloud computing – whether it’s a far-off dream or a present-day reality. Conference Chair Jeremy Geelan covered everything from whether the term “cloud” is too broad to be useful to whether private clouds and public clouds can coexist.

I caught up with Rob in the expo hall to have him weigh in on each of the questions for our loyal blog readers (you!):

[See post to watch the video]

I love the analogy he uses to explain why “the cloud” is such a difficult concept to explain. It seems to be a paradigm shift unlike any we’ve seen in recent memory, so the transition from hype and confusion to understanding and adoption should prove to be an interesting adventure over the next few years.

One of the most interesting questions asked of the panel was whether or not we’d be talking about cloud computing in 10 years. The unanimous answer: No. Why? The resounding sentiment is that the shift toward “the cloud” will be so pervasive that a given platform’s “cloudiness” will be implied. This opinion is shared by a group of experts at a “cloud computing conference,” so there may be a little bias here … What do you think? Will the cloud take over and become the de facto standard, or will demand for traditional IT remain in the midst of the cloud’s surge?

-Kevin


Lights-Out in the Data Centers

As an avid reader of The Planet Blog, you’ve probably noticed some consistency in the 164 articles published here since Doug’s inaugural “Welcome to The Planet’s blog… I think?” post on May 14, 2007. We focus on our company culture, support, data centers and network to help you step through the looking glass and get an inside perspective on our business. With a continuous stream of changes and improvements, it’s tough to feature even a fraction of the work our team is doing to improve our service, so we keep an eye out for opportunities to “show” what we’ve “told” you about in the past. This is one of those opportunities.

On September 2, 2008, we announced the results of our lights-out energy efficiency initiative. A few days ago, I was sorting through a batch of data center pictures, and I came across a few great examples of what this news looks like in practice:

[Photo: H1 Phase Two with the lights on]

This is Phase Two of our H1 data center. With all the posts you see from H2 and D6, you might be curious about what our other data centers look like, so hopefully the picture above doesn’t surprise you. We have extremely high standards for our data centers, and you should expect the same enterprise-level quality across the board.

If you took a guided tour through H1, you’d see it all lit up as it is above. If you walked in during a normal DC shift, you’d probably find it a little different:

[Photo: H1 Phase Two with the lights out]

When the data center is unoccupied, the lights are switched off to save energy. How much energy? Well, across the board, we estimate the program saves more than 1.4 million kilowatt hours in a given year – or about $140,000 in power bills. It’s no small change.
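Those two estimates also make for a quick sanity check: they imply a utility rate of about a dime per kilowatt-hour. A minimal Python sketch (the rate is inferred from the numbers above, not a separately published figure):

```python
kwh_saved_per_year = 1_400_000      # estimated annual savings from the lights-out program
dollars_saved_per_year = 140_000    # estimated annual utility savings

# The utility rate implied by the two estimates above.
implied_rate = dollars_saved_per_year / kwh_saved_per_year
print(f"Implied utility rate: ${implied_rate:.2f}/kWh")   # -> $0.10/kWh
```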

As you’ve seen in our other posts about data center innovation and operational efficiency, we take a common-sense approach to energy conservation. It’s incredible to see the significant impact such simple changes can make.

It’s also pretty cool to see servers glowing in the dark:

[Photo: servers glowing in the dark]

-Kevin


Disruptive Technologies: Virtualization and The Cloud

If you weren’t able to attend the cPanel Conference 2009 last week in Houston, you missed out on a great show. With all the networking events, educational sessions and vendor booths to visit, it was pretty tough to keep up as a participant, so the cPanel team deserves a high-five or two – physical or virtual – for having everything so well prepared.

As you may have heard, I led a session about “Disruptive Technologies: The Road from Disruptive to Sustaining.” Instead of copying the bullet points from my presentation into this blog post, we recorded the whole session on a Flip MinoHD. If you’ve got a little time and you’re interested to hear my take on the effects of the Cloud and Virtualization on hosting, go grab a bag of popcorn, turn up your computer speakers, sit back and enjoy:

[See post to watch the Flash video]

I opened the floor for Q&A during the session and for additional follow-up once we ran out of time, and I want to do the same for you: if you’ve got any questions when you watch the video, please post them in a comment below and I’ll be happy to respond.

-Todd


Know Thy Backups – Part I

More often than not, server backups are misunderstood. With dozens of hardware options and hundreds of software options, finding the right backup can be intimidating. To assuage some of those fears and clear up a bit of that confusion, let’s go over a few of the most common backup schemes. This list isn’t all-inclusive, and the options presented shouldn’t be mistaken for backup plans. A backup scheme is simply a method of creating backups. A backup plan (or disaster recovery plan) is a scheduled implementation of a backup scheme. As we evaluate each scheme, we’ll look at the requirements, costs and benefits, and by the end of our tour, you can decide which best fits your business.

Before we get too far into the specifics of the different schemes, we should define some fundamental terms that we’ll use throughout the comparison:

  • An archive is a set of data that is being preserved
  • A reference point is a single archive against which comparisons are made
  • A restore point is the most recent working backup

The key question a backup scheme answers is this: “If a server suffers a catastrophic failure, what is needed to resume operations with minimal downtime and data loss?” Again, the backup scheme is not a complete disaster recovery plan — its focus is the restoration of data.

The four basic backup schemes we’ll compare are full-server backups, simple incremental backups, multi-level incremental backups and differential incremental backups. The primary considerations in choosing among them are the server load generated by the backup process, the size of the backup files, and the speed with which a backup can be restored.

Full Server Backups

A full server backup is one of the simplest methods for a backup scheme. It takes only a single backup archive to create a restore point, which makes data restoration simple and fast. The drawbacks are the amount of time it takes to make the backup, the load it generates, and the total size of the backup. Each backup scheme we’re comparing uses a full backup of the server.

As we evaluate the other schemes, you’ll note they all start with a full backup as a reference point, and create their own restore points as they move forward.
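To make that concrete, a full backup can be as simple as one complete archive of the data set. A minimal Python sketch (the paths are placeholders, and a real plan would store the archive somewhere other than the server being protected):

```python
import tarfile

# A full backup is one complete archive of the data set. The archive is
# itself a restore point: restoring requires only this single file.
with tarfile.open("/backups/full-sunday.tar.gz", "w:gz") as tar:
    tar.add("/var/www")
```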

Simple Incremental Backups

A simple incremental backup attempts to resolve some of the issues with full backups, and it does a good job. With an incremental backup, a single full backup is made that serves as both a restore point and the initial reference point. Subsequent backups are a little more complex. Instead of making a new full backup each time, this scheme compares the current state of the server against the state of the server as it was at the reference point (initially, the first full backup). If it locates any changes, it backs up only those changes and generates a new snapshot of the drive as another reference point. This new reference point is then used for the next incremental backup.

This backup structure means the restore point on a server with this backup will consist of the initial reference point and all subsequent incremental backups that use this reference point. This dependency is the primary weakness in simple incremental backups: All of the backups — from the original reference point to the incremental additions recording changes from the reference point — must be uncorrupted and complete for the backup to fully restore the data. If any backup is missing, corrupt or incomplete, the restoration can’t be completed.

The server load created and storage space required for this type of backup is generally less than what you’ll see in a full backup scheme, especially when there aren’t many differences between the backup point and the reference point. On the other side of the spectrum, if the entire data set changes between backups, the storage requirements and server load will be the same as they were when full backups were being performed.
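Here is a minimal Python sketch of those mechanics. The modification-time test and the paths are illustrative assumptions; a production tool would also track deletions and verify each archive after writing it:

```python
import os
import tarfile
import time

def incremental_backup(source_dir: str, archive_path: str, reference_time: float) -> float:
    """Archive files changed since reference_time; return the new reference time."""
    snapshot_time = time.time()
    with tarfile.open(archive_path, "w:gz") as tar:
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                path = os.path.join(root, name)
                if os.path.getmtime(path) > reference_time:
                    tar.add(path)
    return snapshot_time

# Sunday: a reference time of 0 captures everything -- the initial full backup.
ref = incremental_backup("/var/www", "/backups/full-sunday.tar.gz", 0.0)
# Monday: only files modified since Sunday are archived, and the reference
# point advances, exactly as the scheme above describes.
ref = incremental_backup("/var/www", "/backups/incr-monday.tar.gz", ref)
```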

Example: Simple Incremental Backups

I am implementing incremental backups for a database that houses all of my users’ data. I decide I am going to start with a full backup each Sunday — the slowest day of the week for the database — and do an incremental backup on each subsequent day. This process starts over again every Sunday. On Friday, my server suffers a catastrophic hard drive failure. I am told by the technician who replaced the drive that the controller failed, and the heads were idly tapping the side of the drive cage. Everything on the drive is lost.

I gather my backups and begin to restore them on the new replacement drive. The backups from Sunday, Monday and Tuesday restore without a hitch, but Wednesday’s backup is corrupted and will not complete. This means I have lost all of the data from Wednesday and Thursday. Without Wednesday’s backup, the rest of my incremental backups are useless.

There are two incremental backup schemes that attempt to address this issue: the differential and the multi-level incremental backup schemes. In Part II of “Know Thy Backups,” we’ll explain the pros and cons of these methods, and you’ll be ready to plan your backup strategy.

-Ben


Know Thy Backups – Part II

In Know Thy Backups – Part I, we started discussing the most common strategies for backing up your data, and before we continue that discussion, I should clarify that we’re not talking about hardware configurations like RAID or backup products like Evault and Data Protection Servers. These backup schemes can be executed without spending a dime on additional equipment or resources. While there are best practices and recommendations for making backups and keeping them safe, if your budget is limited, you can protect and preserve your data using one of these schemes on your local workstation or on a secondary drive in your server.

When we looked at the full server and simple incremental backups in our previous post, we noticed a significant limitation: losing a single backup can be catastrophic to restoring data. In the next two schemes, we’ll evaluate solutions that protect us from this vulnerability.

Differential Incremental Backups

A differential scheme requires a full backup reference point and then makes a backup of all changes to the server from that reference point on each subsequent backup. This method requires more storage space than incremental backups but generally doesn’t need as much space as a full backup.

Depending on the volume of changes between the reference point (the first full backup) and the current backup, a differential incremental backup may require more server resources than a simple incremental backup. Simple and multi-level incremental backups continually advance the reference point with minimal load, while a differential scheme only updates its reference point by making a new full backup.
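The sketch from Part I adapts to a differential scheme with one change: the reference time never advances. (Paths and the modification-time test remain illustrative assumptions.)

```python
import os
import tarfile
import time

def backup_changes_since(source_dir: str, archive_path: str, reference_time: float) -> None:
    """Archive every file modified after reference_time (illustration only)."""
    with tarfile.open(archive_path, "w:gz") as tar:
        for root, _dirs, files in os.walk(source_dir):
            for name in files:
                path = os.path.join(root, name)
                if os.path.getmtime(path) > reference_time:
                    tar.add(path)

# Sunday: the full backup establishes the one fixed reference point.
full_time = time.time()
backup_changes_since("/var/www", "/backups/full-sunday.tar.gz", 0.0)

# Every later backup diffs against Sunday, so a restore needs only the full
# backup plus the newest readable differential.
backup_changes_since("/var/www", "/backups/diff-monday.tar.gz", full_time)
backup_changes_since("/var/www", "/backups/diff-tuesday.tar.gz", full_time)
```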

Example: Differential Incremental Backups

As in the previous example, I am using a schedule of backups that starts with a full backup on Sunday, with additional backups on the following days. This time, I’m using differentials. Let’s say that on Thursday I find some inconsistencies in the database when compared to the paper files I received from a vendor. After investigating, I find that my database is corrupted. I determine that I will not be able to recover the database as it is, so I review my backups.

Somehow, I cracked the DVD that my Tuesday backup was stored on, but all of the other discs are here. I start by restoring the Sunday backup and then the Wednesday backup, hoping the corruption occurred after the backup was made. Thankfully, the restoration works, and we are up and running again after losing minimal data. If I had been using simple incremental backups, I would have been able to restore only up to Monday because Tuesday’s backup disc was broken.

Multi-level Incremental Backups

There’s a more granular and robust backup scheme that is less vulnerable than simple incremental backups and less server-intensive than differential backups: the multi-level incremental backup. Multi-level increments assign a level to each backup and then make a comparison against the last lower-level backup made. Only the changes between that reference point and the current data are saved.

This arrangement allows you to design a backup scheme around your needs and the capabilities of your server, and you can decide how many backups you will need for a full restoration to the latest restore point. You will control the number of backups required for a given restore by determining the number of levels in the system. In the event of a disaster, you need a single backup of each level, and each higher level backup must use the lower level as its reference point.

Example: Multi-Level Incremental Backups

This time I am in charge of a Sendmail server that is always under heavy stress. Because this server is extremely important to my business, I need to ensure both its availability and responsiveness at all times. I also need to maintain archives of the e-mail on the server. To do this, I decide to implement a multi-level incremental backup scheme, since I need a more granular backup configuration that does not generate a great deal of load on the server. This scheme meets that need. It still retains the weakness of incremental backups, but I partially mitigate that weakness with scheduling.

At the first of every month, a full backup is scheduled. This is my Level 0 backup, and it is named level0.name of the month. The following day I run a Level 1 backup. This backup holds only the changes since the most recent Level 0 copy and is called level1.first.name of the month. On the subsequent days of that week, I create Level 2 backups called level2.day of the week.first.name of the month. This process continues until the Sunday after the first Level 2 backup.

On the next Sunday, I make another Level 1 backup called level1.second.name of the month. On the subsequent days of that week, I make Level 2 backups called level2.day of the week.second.name of the month. I continue in this vein, with every Sunday being a Level 1 backup and the rest of the week being Level 2 backups, until the end of the month. On the first day of the next month, I start all over with another Level 0 copy.

I make certain to save multiple copies of the files after I test the archive. I also check to be certain it’s not corrupted, to minimize the risk of data loss through a faulty archive. This scheme allows me to restore to any point within the month in just three steps, as long as all of the archived backups work.

If I need to restore the data from April 17, 2009, I would need the archives for level0.april, level1.third.april, and level2.friday.third.april. I would restore them in sequence from Level 0 to Level 1 to Level 2.
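The bookkeeping in that naming scheme is easy to automate. Here is a small Python sketch that, given a date, lists the three archives a restore would need; it assumes the schedule described above (Level 0 on the 1st, a new Level 1 every Sunday) and is only an illustration:

```python
from datetime import date

ORDINALS = ["first", "second", "third", "fourth", "fifth", "sixth"]

def archives_for_restore(target: date) -> list[str]:
    """List the Level 0/1/2 archives needed to restore to `target`."""
    month = target.strftime("%B").lower()   # e.g. "april"
    day = target.strftime("%A").lower()     # e.g. "friday"
    # Each Sunday after the Level 0 backup (on the 1st) starts a new
    # Level 1 "week" in the naming scheme above.
    sundays = sum(
        1
        for d in range(2, target.day + 1)
        if date(target.year, target.month, d).weekday() == 6
    )
    return [
        f"level0.{month}",
        f"level1.{ORDINALS[sundays]}.{month}",
        f"level2.{day}.{ORDINALS[sundays]}.{month}",
    ]

print(archives_for_restore(date(2009, 4, 17)))
# ['level0.april', 'level1.third.april', 'level2.friday.third.april']
```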

Choosing Your Backup Scheme

As I said in the beginning of this post, these backup schemes are available to you without the use of an additional server or any expensive backup management software. All of the above are viable options for making your backups; however, not every scheme is perfect for every situation. You should review your requirements and the available resources to determine which scheme best fits your needs.

-Ben


Showing You Your Servers

A few weeks ago, we ran a one-hour contest for avid blog readers and @ThePlanet Twitter followers who wanted a picture of one of their actual servers in our data centers, and the results were phenomenal. We had more than 50 people contribute on the blog and on their own Twitter streams, and about 35 thrill-seeking adventure junkies completed all three tasks required to qualify for their picture.

The DC operations crews in Houston and Dallas were great sports about adding this photography project to their normal responsibilities, and we had the pictures out to customers within 48 hours of the contest’s conclusion. Here are a few of the snapshots we took during the contest:

As I warned, some of the pictures didn’t come out as professional photography masterpieces, but that just adds to their authenticity. We couldn’t be happier with the community’s participation, and we’ve heard the repeated requests to rerun the contest. We’ll be offering another opportunity in the near future for customers who missed out on this one. We’ll be tweaking it a little to allow more people to get up close and personal with their servers … even if they live half a world away and happen to be sleeping during the Texas workday. :-)

Thanks to everyone who joined us in the inaugural #showmemyserver experiment! If you have any suggestions on other ways we can give you insight into our business, leave a comment below … We’re all ears.

-Kevin

P.S. If you have some time to kill, visit the #showmemyserver blog and click through to visit some of our customers’ sites in the comments section. The “My Web site is ______, and I’m powered by The Planet” list is a great snapshot of the diversity of our customer base and what they do with their dedicated servers.


Pick Your Partnership: Referral Partners, Resellers and Affiliates

If you haven’t already heard the news, we just launched The Planet Partner Plus Program!

This new program offers a combination of three distinct partner models, each designed to meet the needs of businesses that partner with The Planet in different ways. We’ve fine-tuned our Affiliate and Reseller programs for the Partner Plus launch, and in that process, we’ve spoken with many potential partners looking for a different type of relationship. Enter the new Referral Partner program.

Instead of just rattling off details, let’s put the Referral Partner model in context with the Reseller and Affiliate programs. That way, we can better explain which type of partnership will best benefit your business. The programs differ based on the discounts/commissions applied and how much a partner company is involved with the transaction. Here’s a high-level look:

Affiliate Program

  • Partner Involvement: Affiliates use specially coded hyperlinks to direct potential customer traffic to The Planet. Our system tracks users sent by those affiliate links, and every new customer order qualifies the affiliate for a commission payment.
  • Commission/Discount: 100% of the first month’s contract value.

Reseller Program

  • Partner Involvement: Resellers often build their business around marketing and selling Web hosting solutions. Whether those solutions are managed, shared, VPS or dedicated, the reseller is responsible for the day-to-day operations of their servers and their customers’ hosting-related support. We never interact directly with resellers’ end-customers because the resellers provide all service, support and billing.
  • Commission/Discount: Based on the volume of business they do with The Planet, a reseller partner will get monthly discounts on every server they order and maintain.

Referral Program

  • Partner Involvement: Referral Partners function in an advisory role for their customers, and they want us to perform the service, support and billing. Some Referral Partners may completely manage their customers’ environment and choose to outsource the day-to-day server maintenance responsibilities to a trusted partner. Others may simply generate and compare quotes for their customers’ infrastructure solutions. These partners work with our sales team to determine the right solution for their customer and help the customers transition to The Planet as a provider.
  • Commission/Discount: Based on the volume of business with The Planet, a Referral Partner receives a percentage of a referred account’s monthly recurring revenue.
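To see how the economics of the three models differ, here's a hypothetical side-by-side in Python; every dollar figure and rate below is invented for illustration and is not The Planet's actual pricing or commission schedule:

```python
monthly_fee = 200.0   # hypothetical server price, $/month
months = 12           # one year of service

# Affiliate: 100% of the first month's contract value, paid once.
affiliate_payout = 1.00 * monthly_fee

# Referral Partner: a percentage of monthly recurring revenue (rate invented).
referral_payout = 0.10 * monthly_fee * months

# Reseller: a volume-based discount on each server (rate invented).
reseller_savings = 0.15 * monthly_fee * months

print(affiliate_payout, referral_payout, reseller_savings)   # 200.0 240.0 360.0
```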

Which Is Right For You?

Each of the programs offers you a unique opportunity to build your business, and they aren’t necessarily mutually exclusive. If you provide a mix of hosting and consulting services, it may make sense for you to sign up for both the Reseller and Referral Partner programs. If you do a majority of your business as a Referral Partner while operating a tech blog for small business owners and entrepreneurs, you may want to include an affiliate link in your blog’s sidebar so you can earn commission on new servers ordered by your visitors … without having to lift a finger.

Our goal with The Planet Partner Plus Program is to provide you with a financial model that matches your business requirements, backed by marketing materials to help you grow. Check out the programs on our partner page at http://www.theplanet.com/partner-program, and use the online forms to apply or send us any questions. We want to help make you successful because that’s how we define being better than just a partner; we want to be your Partner Plus.

-Lewis

P.S. If you’re attending the Channel Partners Conference & Expo in Miami this week, stop by our booth and say hello!


Server Form Factors: Towers v. Rack-Mounts

If you’ve ever been on a tour of The Planet’s data centers, you’ve probably noticed a server segregation of sorts. In one aisle, you see big breadracks of tower servers that resemble desktop computers, and in the next, you find rack-mount servers stacked on top of each other in cabinets. Both form factors can connect to the same Internet with the same speed and performance … and they can even share identical hardware specifications. It may be confusing to see both up and running right next to each other. In fact, as a DC manager, I’m often asked why we elect to use one over the other. Because the explanation is pretty straightforward, I thought it would be a great topic to cover in my blog debut.

Tower Servers and Rack Servers

Quite a bit has changed in the way we’ve built data centers over the last four years. When we opened our H2 data center, we deployed only racks of tower servers, and in our newest data center phase, D6 Phase 3, we provision only rack-mount servers. You might take this shift to imply the complete dominance of rack-mount servers over their tower-chassis relatives. Let me suggest that you’d be making an incorrect assumption.

To understand when one form factor may be better than the other, let’s look at the hardware, flexibility, space requirements and costs of each. There are no umbrella claims to be made about rack-mount and tower servers because each comes in different sizes and variations. Tower servers generally share the same width, but their heights and depths can vary. Rack-mount servers, by contrast, are measured by their height in “rack units.” The rack-mount server we’ll compare is a 1U – a server that takes up one rack unit of height.

Tower Servers

[Photo: tower servers]
Hardware/Flexibility: Given the tower server size and layout, it can accommodate a greater number of large components like hard drives, RAID and network cards.
Space Requirements: The benefits of having more space for drives and components come at the cost of taking up more data center space. A breadrack of towers can hold 20 servers, while 30 1U rack-mount servers fill a cabinet less than half the width of the tower racks. There are fewer tower servers in a given square-foot area, so we say that the data center space is less dense. When a data center is dense, it requires more power and more cooling, so a data center with only tower servers will generally require less power and cooling.
Cost: In the early 2000s, rack-mount servers were nearly twice the price of tower servers, so the use of towers could have been a purely economical decision. Now that the rack-mount equivalent of a tower is available for only a few hundred dollars more, a data center’s use of the tower form factor will likely be based on one of the other differentiators.

Rack-Mount Servers

[Photo: rack-mount servers]
Space Requirements: As we noted, rack-mount servers can be installed more densely in a data center than their tower counterparts. To fit more servers in the same amount of space, the rack-mount servers offer less available interior real estate. Because the server uses less space, it tends to run hotter – the heat emitted from the processor and components is contained in a smaller area – so cooling and air-flow are critically important.
Hardware/Flexibility: A 1U rack-mount server’s decreased real estate often limits the types of components that fit in a given layout and the number of drives that can be installed … it’s not likely that the server above will be employed as a huge network storage repository.
Cost: While the difference in cost between form factors isn’t egregious, the cost of running a data center filled with one or the other is significant. That’s one of the main reasons why you see the focus on efficiency in D6 Phase 3. With more rack-mount servers in a given space, inefficient use of power and cooling means thousands of additional dollars in utility bills.
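For a rough feel of the density difference, here's a back-of-envelope comparison in Python using the figures above; the "half the width" footprint ratio is an approximation from this post, not a measured number:

```python
towers_per_breadrack = 20       # tower servers in one breadrack
rackmounts_per_cabinet = 30     # 1U servers in one cabinet
cabinet_footprint_ratio = 0.5   # a cabinet occupies about half a breadrack's floor space

tower_density = towers_per_breadrack                            # servers per breadrack footprint
rack_density = rackmounts_per_cabinet / cabinet_footprint_ratio

print(rack_density / tower_density)   # -> 3.0: roughly triple the servers per square foot
```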

When all is said and done, the form factor of the server you have with The Planet shouldn’t matter to you. You’re connected to the same network, in the same enterprise-class data centers, and you’re getting the same level of service and support regardless of what your server looks like. If you are interested in more of the nitty-gritty details from the data center operations side of our business, leave a comment and let me know what you want to see or learn more about, and I’ll do my best to cover it.

-Jon


P.S. on the problem with science

I should have, of course, provided the two papers in question so you can decide for yourself. I can't quite do that. I can give you the link to my paper, here:

http://www.gps.caltech.edu/~mbrown/papers/ps/vimsclouds_final.pdf

And I can even provide you with a link to their paper:
http://www.nature.com/nature/journal/v459/n7247/full/nature08014.html

But it's possible that you can't read theirs. (But wait: read the comments below; people found all of the parts of the article posted online in various locations, so you're in luck!) Why not? Because, even after $1B of taxpayer money went toward sending Cassini to Titan and getting these results, the copyright to the paper is now owned by Nature. And they say you're not allowed to read it unless you subscribe or pay. If you are logged in from an academic institution, you probably will get access through their subscription. But if you're elsewhere you are simply out of luck. Seems a bit crazy, huh?

If you do get the two papers, be sure to check out the supplementary information in the Nature paper: that is where all of the important details (like where there are and are not clouds) lie. At first glance the two papers look more or less like they say there are clouds in the same spots. It helps that the figures are all really really small so details are hard to discern. But when you blow them up and look carefully things just don't match up nearly as well as two papers using exactly the same data should.

How Big is 10 TB?

We’ve been talking about terabytes (TB) a lot – specifically with regard to our newest special, offering 10 terabytes of bandwidth at no additional cost. In fact, today through Aug. 31, we’re offering a deluxe version of the promo: 10 TB of free bandwidth on top of our discounted server prices and FREE setup.

We talk about how great a deal the 10 TB bandwidth promotion is, but what does 10 terabytes of data look like, anyway? We all know it’s a lot… but I decided to figure out just how much it would be in terms of other measurements.

After a little Googling, I learned that 10 terabytes is equivalent to:

  • 10,995,116,277,760 bytes
  • 87,960,930,222,080 bits
  • The data in 800,000 phone books
  • 4 billion single-spaced, typewritten pages
  • 16,000 audio CDs
  • The memory capacity of eight human brains (we’re not saying whose)
  • The entire Library of Congress

Those are interesting, but we wanted to come up with our own visual, so we enlisted our calculators: Picture a small craft bead (an 11 mm x 8 mm cylinder), and imagine that the bead represents one bit (1 b) of data. Eight beads would equal one byte (1 B); 8,192 beads would equal one kilobyte (1 KB); 8,388,608 beads would equal one megabyte (1 MB); etc.

To hold the equivalent of 10 terabytes worth of “bit beads,” you would need more than 1.75 billion 10-gallon tanks. If you piled the beads one foot deep, they would cover 84.7 square miles. If they were used to cover Houston’s 579.4 square miles, we’d have a bead carpet 1.75 inches deep within the city limits.

It’s incredible, right?

Let’s think of it in terms of servers: What is a real-world example of what you can do with 10 TB of bandwidth every month? I’m glad you asked.

You can make an MP3 of yourself singing to your dog and make the file available on your server. When it becomes the latest viral phenomenon, your 10 TB of bandwidth would cover about 3.5 million downloads. You’d be well on your way to your own reality show by the time you got your next month’s server bill … where your 10 TB promo server wasn’t charged a penny of bandwidth overages.
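If you want to check that math, the estimate implies a song-length MP3 of about 3 MB, which is our assumption here. A quick Python check:

```python
TB = 2 ** 40   # bytes per terabyte (binary)
MB = 2 ** 20   # bytes per megabyte

bandwidth = 10 * TB
mp3_size = 3 * MB   # assumed size of a song-length MP3

print(f"{bandwidth / mp3_size:,.0f} downloads")   # -> about 3.5 million
```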

No matter how you measure it … 10 terabytes is a lot.

Help us think big: How would you visualize and explain 10 terabytes?

-John


Fog! Titan! Titan Fog! (and a peer review experiment)

Look! Titan has fog at the south pole! All of those bright sparkly reddish white patches are fog banks hanging out at the surface in Titan's late southern summer.

I first realized this a year ago, but it took me until now to finally have the time to be able to put all of the pieces together into a scientific paper that is convincing enough that I can now go up to any person in the street and say: Titan has fog at the south pole!

I will admit that the average person in the street is likely to say hmph. Or yawn. Or ask where Titan is. So let me tell you why finding fog at the south pole of Titan has been the scientific highlight of my summer.

Titan is the only place in the solar system other than the earth that appears to have large quantities of liquid sitting on the surface. At both the north and south poles we see large lakes of something dark. Oddly, though, we don’t actually know what that dark stuff is. At least some of it must certainly be ethane (that’s C2H6, for all of you who have forgotten your high school chemistry). Ethane slowly drips out of the sky on Titan, sort of like soot after a fire, only liquid soot in this case. Over geological time, big ponds of ethane could accumulate into the things that look like lakes on Titan. Odd as they sound, big lakes of liquid ethane are, at least to me, the least interesting possibility. They are the least interesting because ethane is a one way street. Once the liquid ethane is on the ground, it can’t evaporate and is there pretty much forever, unless it somehow sinks into the interior.

Why does all of that ethane drip out of the sky? Because sunlight breaks down methane (CH4) to form ethane much the same way it breaks down car exhaust fumes to form smog in big cities. There’s plenty of methane in the atmosphere, so the supply of ethane is near endless. The dripping will not end soon.

But the methane is where all of the potential action is. Methane is to Titan what water is to the earth. It’s a common component in the atmosphere and, at the temperature of Titan, it can exist in solid, liquid, or gas form. Like water on the earth, it forms clouds in the sky. Like water on the earth, it probably even forms rain. But what we don’t know is whether or not that rain makes it to the surface and pools into ponds or streams or lakes which then evaporate back into the atmosphere to start the cycle over again. In short, we don’t know if Titan has an active methane atmosphere-surface hydrological cycle analogous to the water atmosphere-surface hydrological cycle on the earth.

Until now.

Because there is fog.

Fog – or clouds – or dew – or condensation in general – can form whenever air reaches about 100% humidity. There are two ways to get there. The first is obvious: add water (on Earth) or methane (on Titan) to the surrounding air. The second is much more common: make the air colder so it can hold less water and all of that excess needs to condense. This process is what makes your ice cold glass of water get condensation on the outside; the air gets too cold to hold the water that is in it, and it condenses on the side of your glass.

Terrestrial fog commonly forms from this process. That fog you often see at sunrise hugging the ground is caused by ground-level air cooling overnight and suddenly finding itself unable to hang on to all its water. As the sun rises and the air heats, the fog goes away. You can also get fog around here when warm wet air passes over cold ground; the air cools, the water condenses. And, of course, there is mountain fog, which is caused by air being pushed up a mountainside, where it cools and – you get the picture – can no longer hold on to all of its water, so it condenses.

Interestingly, none of this works on Titan.

It’s really really hard to make Titan air colder fast. If you were to turn the sun totally off, Titan’s atmosphere would still take something like 100 years to cool down. And even the coldest parts of the surface are much too warm to ever cause fog to condense.

What about mountain fog? A Titanian mountain would have to be ~15,000 feet high before the air would be cold enough to condense. But Titan’s crust, made mostly of ice, can’t support mountains more than about 3,000 feet high.

We’re left with that first process: add humidity.

On Titan, as on earth, the only way to add humidity is to evaporate liquid. On Titan this means liquid methane.

Liquid methane! There it is!

Evaporating methane means it must have rained. Rain means streams and pools and erosion and geology. Fog means that Titan has a currently active methane hydrological cycle doing who knows what on Titan.

But there’s one more twist. Even evaporating liquid methane on Titan is not sufficient to make fog, because if you ever made ground-level air 100% humid, the first thing it would do after turning into fog would be to rise up like a massive cumulus cloud. There’s only one way to make the fog stick around on the ground for any amount of time, and that is to both add humidity and cool the air just a little. And the way to cool the air just a little is to have it in contact with something cold: like a pool of evaporating liquid methane!

One final fun part of the story. The fog doesn’t appear to prefer hanging around the one big south polar lake or even around the other dark areas that people think might be lakes. It looks like it might be more or less everywhere at the south pole. My guess is that the southern summer polar rainy season we have witnessed over the past few years has deposited small pools of liquid methane all over the pole. It’s slowly evaporating back into the atmosphere, from which it will eventually drift to the northern pole where, I think, we can expect another stormy summer season. Stay tuned. Northern summer solstice is in 2016.


Our paper describing these results (written by me, Alex Smith and Clare Chen, two Caltech undergraduate students, and Mate Ádámkovics, a colleague at UC Berkeley) was recently submitted to the Astrophysical Journal Letters. The paper will shortly go out for peer review, which is an integral part of the scientific process where the paper is vetted by experts. Peer review, as implemented in the current world of over-stressed astronomers, has some serious flaws, though. One problem is that the peer review is performed by one person! Sometimes that one person is thoughtful and insightful and provides excellent insight and commentary. Sometimes that one person misses or misunderstands crucial points. It is rare, though, that any one person can be a broad enough expert in all of the topics in a scientific paper to provide adequate review of the whole thing. Plus people are busy.

What is the solution? I don’t know. There has been much talk recently about all of this, and even some interesting experiments done by scientific journals. I thought I would try an experiment of my own here. It goes like this: feel free to provide a review of my paper! I know this is not for everyone. Send it directly to me or comment here. I will take serious comments as seriously as those of the official reviewer and will incorporate changes into the final version of the paper before it is published.

What kinds of things would I look at closely if I were a reviewer of this paper? Probably things like: is the claim of discovery of something fog-like convincingly made? Is the fog-like feature really at the surface rather than simply a cloud? Is our argument of how fog must form convincing? Is it correct? These were, at least, the things I thought hardest about as I was writing the paper. Perhaps you will find more!

Millard Canyon Memories

The Station Fire started near JPL on Thursday and went crazy yesterday, expanding to 20,000 then 35,000 and now who-knows-how-many acres. Remarkably few structures have been lost. There is a good chance, though, that the little cabin I lived in when I first arrived at Caltech is now ash (it's NOT! I just got word from an old neighbor that the canyon was saved. So hard to imagine, looking at all of the destruction in the region). I might be wrong; in the major fires 15 years ago Millard Canyon was saved when fire skipped over the top of it. But from everything I can see, things don't look good. The firefighters started protecting structures in the real city, not crazy cabins up in the woods. The cabin was at least 100 years old and had survived the floods and fires that slowly got rid of the cabins throughout the rest of the San Gabriel Mountains.

It was a wonderful if somewhat eccentric place to live. I write about it in my forthcoming book (sadly, books take way too long, even after you finish writing them, so forthcoming means perhaps a year), and I wanted to give a little excerpt here, in memory of the little cabin that I fear met its doom yesterday or last night.

----

When I first started looking for planets, I lived in a little cabin in the mountains above Pasadena. Though I cannot prove it, I am willing to bet that I was the only professor at Caltech at the time who lacked indoor plumbing and, instead, used an outhouse on a daily (and nightly) basis. I worked long hours, and it was almost always dark, often past midnight, when I made my way back into the mountains to go home for the night. To get to my cabin, I had to drive up the winding mountain road into the forest, past the National Forest parking lot, down to the end of a dirt road, and finally walk along the side of a seasonal creek on a poorly maintained trail. For some time after I first moved in I tried to remember to bring a flashlight with me to light my way, but more often than not I forgot. Eventually I had no choice but to give up on flashlights entirely and, instead, navigate the trail by whatever light was available, or, sometimes, by no light whatsoever.

The time it took to get from the top of the trail to the bottom, where my cabin waited, depended almost entirely on the phase of the moon. When the moon was full it was almost like walking in the daylight, and I practically skipped down the trail. The darker quarter moon slowed me a bit, but my mind seemed to be able to continuously reconstruct its surroundings from the few glints and outlines that the weak moonlight showed. I could almost walk the trail with my eyes closed. I had memorized the positions of nearly all of the rocks that stuck up and of all of the trees and branches that hung down. I knew where to avoid the right side of the trail so as to not brush against the poison oak bush. I knew where to hug the left side of the trail so as to not fall off the twenty-foot embankment that we knew as “refrigerator hill” (named after a legendary incident when some previous inhabitants of the same cabin bought a refrigerator and had hauled it most of the way down the trail before losing it over the embankment and into the creek at that very spot; I never lost a major appliance, but I took extra care – and used ropes – one time when I had to get a hot water heater down the hill to install at the cabin; it was rough going, but the newfound ability to take hot showers was definitely worth it).

I had almost memorized the trail, but, every 28 days, I was reminded that, really, there is quite a big difference between memorization and almost-memorization.

Every 28 days the moon became new and entirely disappeared from the sky and I was almost lost. If by luck there were any clouds at all in the sky I could possibly get enough illumination from the reflected lights from Los Angeles, just a few miles away, to help me on my way, but on days with no moon and no clouds and only the stars and planets to light the way I would shuffle slowly down the trail, knowing that over here – somewhere – was a rock that stuck out – there! – and over here I had to reach out to feel a branch – here! It was a good thing that my skin does not react strongly to the touch of poison oak.

These days I live in a more normal suburban setting and drive my car right up to my house. I even have indoor plumbing. The moon has almost no direct effect on my day-to-day life, but, still, I consciously track its phases and its location in the sky and try to show my daughter every month when it comes around full. All of this, though, is just because I like the moon and find its motions and shapes fascinating. If I get busy, I can go for weeks without really noticing where it is in the sky. Back when I lived at the cabin, though, the moon mattered, and I couldn’t help but feel the monthly absences and the dark skies and my own slow shuffling down the trail.

Contrary to how it might sound, however, back then the moon was not my friend. The 2½-year-old daughter of one of my best friends – a girl who would, a few years later, be the flower girl at my wedding – would say, when asked about that bright object nearly full in the night sky: “That’s the moon. The moon is Mike’s nemesis.” And, indeed, the moon was my nemesis, because I was looking for planets.

----

The moon is nearing full tonight, but it's no longer my nemesis. That honor now goes to the Station Fire, which I fear has taken away that place I loved so well.
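(A small aside for the technically inclined: the phase-tracking that the trail once forced on me is just arithmetic, and you can sketch it in a few lines of code. Below is a minimal, hypothetical example in Python; it assumes a commonly cited new-moon epoch – 6 January 2000, 18:14 UTC – and the mean synodic month of about 29.53 days. A real ephemeris will disagree by some hours, but it is plenty good enough to decide whether to bring a flashlight.)

    from datetime import datetime

    SYNODIC_MONTH = 29.530588                          # mean lunar cycle, in days (assumed constant)
    REFERENCE_NEW_MOON = datetime(2000, 1, 6, 18, 14)  # a known new moon, UTC (assumed epoch)

    def moon_age_days(when):
        """Approximate days elapsed since the most recent new moon."""
        elapsed = (when - REFERENCE_NEW_MOON).total_seconds() / 86400.0
        return elapsed % SYNODIC_MONTH

    def phase_name(age):
        """Map the moon's age in days onto a rough phase label."""
        if age < 1.8 or age > SYNODIC_MONTH - 1.8:
            return "new -- bring a flashlight"
        if abs(age - SYNODIC_MONTH / 2) < 1.8:
            return "full -- skip down the trail"
        return "waxing" if age < SYNODIC_MONTH / 2 else "waning"

    age = moon_age_days(datetime.utcnow())
    print("Moon age: %.1f days (%s)" % (age, phase_name(age)))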

The problem with science

Science is a great system. You examine reality, come up with ideas about how it might work, test those ideas, keep the good ones, discard the bad ones, and move on. It’s got one big flaw, though: science is done by scientists, and scientists are people.

I have a whole slew of scientists mad at me this week – and I will admit that I am pretty irritated right back – because none of us cool, rational, analytical scientists can truly separate our emotions and our egos from the reality-based science that we do. In this current dispute, I get to claim the scientific high ground, at least. My paper, which just came out this week, unarguably demonstrates that their paper has some rather embarrassing errors. But, in the end, I suspect that even with that seemingly unassailable high ground, I lose the war.

The papers in question are both on the mundane side. Both are catalogs of where the Cassini spacecraft has and hasn’t seen clouds on Titan over the past four years. Papers like these, though never going to make headlines anywhere, are nonetheless important contributions to understanding what is going on (at least I think so, or I wouldn’t have taken the time to write one!). Without complete and accurate catalogs of things like where there are clouds on Titan, we cannot begin to understand the more profound questions of why there are clouds on Titan and what they tell us about the hydrological cycle on the moon. These papers don’t try to answer those questions, but they are necessary pieces of the puzzle.

You would think that two papers that examine the same set of pictures from the Cassini spacecraft to map clouds on Titan would come up with the same answers, but they don’t. And therein lies the root of the problem. When the main topic of a paper is where there are and aren’t clouds on Titan, and you sometimes say there are clouds where there aren’t and that there aren’t clouds where there are, well, then you have a problem. They have a problem, since theirs is the paper that makes the mistakes. So why are they mad at me? I think perhaps I know the answer, and perhaps they even have some justification. Let me see if I can sort it out with a little of the convoluted history.

I started writing my paper about 18 months ago. A few months later I realized the other team was writing the exact same paper. Rather than write two identical papers, I joined their team and the two papers merged. The problem was that as I worked with their team through the summer, it became clear that their analysis was not very reliable. I spent hours going over pictures in detail, showing them spots where there were or were not clouds in contradiction to their analysis. Finally I came to the conclusion that their method of finding clouds, and thus their overall paper, was unsalvageable. I politely withdrew my name from their paper and explained my reasons in detail to the senior members of the team overseeing the paper. I then invited them to join me in my analysis, done in a demonstrably more accurate way. The most senior member of the team agreed that it seemed unlikely their method was going to work, and he said they would discuss it and get back to me.

I felt pretty good about this. I had saved a team of people whom I genuinely liked from writing a paper that would have been an embarrassment to them, and I had done it – apparently – without alienating anyone. I remember, at the end of the summer, being proud of how adeptly I had navigated a potentially thorny field and come out with good science and good colleagues intact. Scientists are usually not so good at this sort of thing, so I was extra pleased.

I never did hear back from them about joining me, so when I wanted to present the results of the analysis at a conference in December, I contacted the team again and asked if they would like to be co-authors on my presentation in preparation for writing up the paper. I was told no; they had decided to do the paper on their own. Uh oh, I thought. Maybe things won’t end up so rosy after all.

Their paper came out first, in June of this year, in the prestigious journal Nature, of all places (it’s not hard to figure out the reason for the catty comment often heard in the hallways: “Just because it’s in Nature doesn’t necessarily mean that it is wrong.”). I was a bit shocked to see it; I think I had really not believed they would go ahead with such a flawed analysis after they had been shown so clearly how flawed it was (and don’t get me started about the refereeing at this point). Our paper came out only this week, but, since their paper was already published, one of the referees asked us to compare and comment on it. I had avoided reading their paper until then, I will admit, because I didn’t want to bias our own paper by knowing what their conclusions were, and because – I will also admit – I was pretty shocked that they had, to my mind, rushed out a paper that they knew to be wrong simply to beat me to publishing something. I hoped that perhaps they had figured out a way to correct their analysis, but when I read their paper and found most of the erroneous cloud detections and non-detections still there, I realized it was simply the same paper as before, known flaws and all.

So what did I do? In my paper I wrote one of the most direct statements you will ever read that someone else’s paper contains errors. Things like that are often said in couched terms to soften the blow, but, feeling that they had published something they knew to be wrong, I felt a more direct statement was in order.

And now they’re mad.

Reading all of that, I certainly hope you come to the conclusion that I am 100% right and they are 100% wrong. You’re supposed to come to that conclusion, because I wrote the whole thing from my own biased perspective. And I have my emotions and my ego in there. And I feel wronged.

So I’m going to try an experiment: tell the story from their point of view and see if I can spot where I went wrong and irritated them.

Last summer they kindly invited me to be part of their paper, and they shared their not-yet-public data with me (though neither analysis made use of it). They fixed many of the errors that I identified that summer and honestly believed the paper was then good enough. They knew that the analysis wasn’t perfect, but they felt they had invested significant resources in it and that the overall conclusions were correct. So they submitted the paper, it was accepted in Nature, and they were pretty proud of the effort. Then, out of the blue, my paper is published, saying in unusually direct words that their paper is not to be trusted.

Here are some reactions I can guess they might have had:

(1) Mike Brown’s complaints about our paper are simply sour grapes because our paper came out first and in a more prestigious journal. He is trying to attack our paper so that his paper, which lost the race, somehow seems relevant.

(2) Mike Brown is a nitpicker. If you look carefully you will find that while the details of the cloud maps differ between the two papers, the overall conclusions are largely the same. In the end, the conclusions matter, not details like these.

(3) Mike Brown is a betrayer. He learned about our analysis last summer and then tried to use what he learned against us.

(4) Mike Brown is an impolitic ass, and even if he had legitimate concerns about the paper, he aired them in an unkind way, and now we detest him.

And now I must, in the end, admit that one of those is actually true. I plead guilty to (4). (1) and (3) are factually incorrect. (2) is bad science (yes: the details matter, not just the conclusions). But (4)? Yeah. OK. Probably. That’s the problem with science. All of those scientists. And few scientists are renowned for their social skills. Even me.

So there are some things that we can all agree on, and some things about which we might disagree. Reality admits little room for differences of opinion. Interpretation of reality, though, is always more subjective.

Everyone should agree: The paper that was published in Nature this June is at times incorrect about where there are and are not clouds. This is simply reality and not open to much discussion (which doesn’t mean there won’t be much discussion).

In my opinion: These errors are fatal for a paper purporting to be about where there are and are not clouds. In their opinion: These errors are not significant and don’t affect the conclusions of the paper. In my opinion, my opinion is correct, but I am sure that in their opinion, their opinion is correct. It's unlikely we’ll ever come to a resolution on this one, as it is not about reality, but about the interpretation of reality. No analysis is 100% correct, and everyone has their own opinion about when an analysis crosses the threshold from mostly correct to fatally flawed. We obviously have differences of opinion about where this threshold sits.

In their opinion: The statements in my paper discussing the problems with theirs are disproportionately harsh. In my opinion: The statements in my paper discussing the problems with theirs are harsh, but proportionate to the flaws in the paper. I will admit, though, that this is the part I am most uncomfortable with. The statements in my paper are harsh. Maybe too harsh. Did I let too much emotion and pride come into play as I wrote them? Probably. But as I wrote those statements I was fairly appalled at what seemed to me a lack of concern with reality on the part of their paper. Everyone makes mistakes in scientific papers. Sometimes even big ones. But I had never before come across a paper where the mistakes were pointed out before the paper was submitted for publication and the authors had not fixed them. Again, though, my opinion is colored by the fact that I find their analysis fatally flawed. Their decision to go ahead is colored by the fact that they find their analysis good enough.

In their opinion: Mike Brown is a detestable ass. In my opinion: They are shooting the messenger for delivering a message that they already knew. But perhaps both opinions are correct.

Sadly, for me at least, I tried really, really, really hard to make this work. And to me, “make this work” meant making sure that any published papers describing clouds on Titan were factually correct, while at the same time not alienating my colleagues. I failed at both.

So I think we end with this:

The other team will probably always think I crossed a line by writing so harshly of their paper. I will probably always think that they crossed a line by publishing a paper they knew to have factual errors.

Who is right? Probably both of us. I suspect their egos and emotions allowed them to care more about publishing a paper in Nature than about whether that paper was correct. I suspect my ego and emotions caused me to write more harshly than I needed to. That’s the problem with science. It’s done by scientists. Scientists have all of those egos and emotions just like everyone else, and no one has figured out a way to leave them at the door when you walk into your lab or up to your telescope or wherever you sit down to write papers.

In the end, though, the only losers in this process are the scientists themselves. While all of us are sitting around feeling wronged, reality marches on. If you would like to know where clouds are or are not, you can go read an accurate account. But that’s probably the last paper you will read from me in this field, for I am bowing out. The study of Titan was always just my hobby, and a hobby that causes this much anguish is not a very good hobby. Time for a new one. I’ll miss Titan and trying to finally figure out what is going on with all of those clouds, but there are many other interesting things out there in the universe. Time to start exploring once again.