Season 4: The Sabbatical Years

Mike Brown’s Planets is back. After a long break at the conclusion of Season 3 (I define these Seasons after the fact: if I haven’t written anything in a while, I declare it to have been because, clearly, it was the end of the season), the writing will now resume. This season is destined to be the most exciting of all for the simple fact that it coincides with my current sabbatical, which started last week and lasts for the next six months.

My sabbatical will be a funny thing. While most people take the opportunity to take their families to glamorous places and work in exciting new labs, I am taking the opportunity to spend more time in my comfy green chair at home, writing. Diane refers to it as my staybbatical, which I guess is about right. And, after a few days of tidying up loose ends from my office, I am finally here, sitting in the green chair. Let Season 4 commence.

Here are some of the things I am working on:

  • Heading south: 5 years after the discovery of the last dwarf planets, the race to survey the southern sky has finally commenced. The 3 competing teams have familiar players. Who is going to win? I have predictions.
  • Guest posts: As an experiment I am conscripting some younger students and postdocs to write about what they do. First up: Amino acids on Titan. Stay tuned.
  • Where is Planet X hiding? Just in time for making your plans for 2012, I’ll critically review what might still be left lurking in the outskirts of the solar system, and I’ll tell you the probability that it will affect us in 2012. Well, OK, you can probably answer that one already.
  • Sedna is 7, and she still makes less sense than Lilah, who is only 5.
  • Why Pluto still matters. Nearly 5 years after no longer being a planet, Pluto still actually matters. You never thought you would hear me say that, did you?
  • Nobody wants to go to the moon anymore. So maybe that means I should go to the moon.

My staybbatical won’t be entirely in my green chair, though. I will admit to having volunteered to be a chaperone on almost all of Lilah’s kindergarten field trips. I even signed up to help with quilting for 5-year-olds (they insisted that all I need to know how to do is tie knots, which I do). But also look out for The Wacky Adventures of a Scientist on a Book Tour, coming soon to a city (possibly) near you. It should be a fun six months.

An Insider’s Perspective from The Planet Data Centers

Greetings!

My name is Jeff, and I am a data center technician here at The Planet. I support the servers hosted in our Dallas data centers. It’s not always an easy job, but it’s definitely interesting. That’s why I thought you’d like to know a little more about me and the other technicians who are the eyes and hands at the console. Since we are providing frontline support, the better you understand our jobs, the better we can serve you.

In case you were curious, this is MY office:

Jeff Reynolds Blog

I hail from Chicago, and prior to joining The Planet, I was enlisted in the U.S. Army as a combat engineer stationed in Baumholder, Germany. It’s a beautiful country with spectacular libations — or beer and brats to most of us. Germany was filled with historic sites, and my unit was stationed no more than 20 miles (or 32 kilometers) from three Medieval- to Renaissance-era castles. Despite not being much of a history buff, I was still amazed to be in the presence of structures that have remained standing through centuries of war and expansion. I was also able to see Rome, including the Colosseum and the Roman Forum, along with the Alps.

Jeff Reynolds Blog

Jeff Reynolds Blog

One other sight that really knocked my socks off (which didn’t have much to do with history or European culture) was a lovely young woman named Jacqueline. We got hitched in a small German courthouse and we’ve been going strong for four years. I’ve never regretted a moment, and I try my best to make sure she doesn’t either … every now and then, I get some looks that let me know when I need to step it up a notch or two in that department. We also have a little one named Tabitha. As you can see, she’s a big fan of snow:

Jeff Reynolds Blog

When my wife’s enlistment was up, we moved to Dallas and stayed with the in-laws. Things were a little dicey at first, but we hunkered down, and with the use of some military training, we made it through. At least one of us needed to become gainfully employed, and I was the lucky one. I got a call from Dallas DC managers Josh Daley and Doug Day about an opportunity to work “in the field” for The Planet. We spoke a while on the phone, and what I said must have impressed them enough, because I was invited to an interview a week later at the D2/D6 facility.

It wasn’t easy to sell myself during that little chat. While I was proud of my service, it’s hard to translate combat engineering and marching in cadence to the IT field. I’ve always had an interest in networking and server operating systems, though. I think it had a lot to do with the 1995 movie Hackers – and Angelina Jolie’s appearance in the film didn’t hurt.

I started out learning more about computer security, but my interests drifted once I began to learn more about the open-source community and this thing I’d never heard of: Linux. My first distribution was Red Hat Core 4, and I can admit that I spent at least 17 hours staring, dumbfounded, at my monitor before I was actually able to get it to work. I was only 12 years old back then, but something tells me that being older and having additional life experience wouldn’t have been much help in getting that OS to boot any sooner. Babies don’t sleep as well as I did that night, or that morning rather.

After serving in the Army, I finished college with a degree in Information Technologies. While in school, I was able to knock out a few certifications: CCNA, Network+ and Security+. Looking back, maybe it wasn’t such a hard sell for me to prove myself to The Planet. I was still plenty nervous, though.

Nervous, but excited.

Just walking into the Dallas facility made me know I wanted to be here. It may not have been impressive in the way the Alps were impressive, but I was still struck by it. My interview was conducted in a windowed room overlooking the D2 data center floor. It was the first time I had seen hundreds of racks containing live servers. Looking over the data center, I realized that this was it. This was the Internet. Rooms like these filled with thousands of computers, serving up whatever content was required of them.

Too often, people think of things in terms of how they view and use them. Sounds harmless enough, doesn’t it? Why wouldn’t you equate something with its interface? When a lot of people think of the Internet, they think of their web browser. More often than I’d like to admit, friends and family have come to me saying that “the Internet is broken,” when Firefox or Internet Explorer won’t load a page. But as I’m sure the people reading this know, the Internet is a far broader thing than can be contained in a web browser.

I took a moment to wonder if any of the websites I frequent were served from here, and whether I might glance at the hardware that hosted the pages I use to find the weather, traffic information, or news about what my old Army unit is up to in Afghanistan.

It may not be as scenic as the Alps, but it’s something to appreciate to say the least.

Needless to say, I was lucky enough to snag a job on the floor in the D6 facility after the interview process wrapped up. My basic job description calls for installing and troubleshooting server hardware, operating systems and data center components. I’m pretty well versed in Linux-based/POSIX-compliant operating systems, as well as Windows Server 2003 and 2008. Should your server ever become unreachable, just give me a call, and I’ll get it back online for you.

I wanted to post on The Planet Blog to let you get a glimpse of how things happen on the floor, how frontline issues happen, and how we resolve them. I want you to be confident that my fellow techs and I have you covered. Look for more to come — this is just a little intro so you’ll know who I am the next time you see me.

If you need me, come find me at the helm of my KVM on wheels. Forget desks and cubicles, the data center is MY office.

-Jeff


Data Centric 2.0

As you can see from our handy sidebar widget, Data Centric is The Planet Blog’s reigning “Most Popular Post.” While one could make the argument that visitors are all tuning in to read content written by a person many believe to be one of the most brilliant authors of his generation, let’s be honest … Everyone just wants to look at the pictures.

Given the title of this post, you can see that I’m not reinventing the wheel for my 100th official blog post. I convinced Todd to lug his fancy DSLR camera up to H2 for another data center tour a few months ago, and we’ve got a fresh photo tour of the facility for you. In some cases, you’ll notice that the DC looks a bit different than it did almost exactly three years ago — especially in Phase 3.

H2 Data Center Tour 2010

Since you’re already thinking of Phase 3, why don’t we start there? You probably don’t recognize this row of servers. The last time we posted a picture of this area in the data center, it was an expanse of floor tiles.

H2 Data Center Tour 2010

As you can see, we’re still obsessive about cable organization. Each color cable carries a different kind of traffic, and each individual cable is labeled on both ends. The yellow plastic conduit carries fiber between the transport cage – where the Internet “drops” into the data center – and the DC’s customer access routers.

H2 Data Center Tour 2010

In less colorful – but equally important – parts of our facility, the data center’s power rooms, generators and battery backup room are still awe-inspiring.

H2 Data Center Tour 2010

As we head back into Phase 3 to look at a “cold aisle” of rack-mount servers, it’s worth observing that you’re not greeted with the wave of 68-70 degree air you might have experienced a few years ago. We’ve improved our cold air distribution in the data center and increased the ambient temperature to keep the servers operating efficiently. The current ASHRAE standard temperature for a DC is now around 80 degrees. To allow for a little fluctuation up and down, we keep our facilities around 75 or 76 degrees. In the above picture, you can see the perforated tiles that allow the cold air from under the floor to enter the data center.

H2 Data Center Tour 2010

Swinging around the back of a rack-mount server rack, we get to admire more artistic cabling. As I mentioned, we label and run cables precisely from end to end. This precision looks fantastic, and it’s also entirely functional … If we had spools of slack on each side of the cables, it would be much more difficult to access cables and replace them if necessary.

H2 Data Center Tour 2010

Before you start thinking we’ve abandoned the tower server racks you saw three years ago, we’ll head back into Phase 1. In the first Data Centric post, I didn’t snap any pictures of a cold aisle between tower racks in H2, so I want to make sure I don’t omit that again. One of the first questions you might ask after seeing this picture is, “Why aren’t the servers aligned?” The answer is pretty straightforward: Because the size of a tower chassis can vary significantly, we have to choose whether to line up the front or the back. When our DC operations team works on servers, they generally access them from the hot aisle, so that’s where they are all aligned.

H2 Data Center Tour 2010

Does this picture look familiar? It should … it was taken from the exact same spot as this one. As you can see, the backs of all of the tower servers line up beautifully, and Todd’s camerawork is much better than mine was. If you’re looking for differences between the two, you’ll probably wonder why you can’t see the aisle on the other side of the CRAC unit in the new picture. That’s quite a keen eye you have there.

H2 Data Center Tour 2010

You can’t see the other row in the new picture because we’ve installed chimneys on all of our CRAC units. As you remember from physics class, warm air rises, so if we want to cool the facility most efficiently, we should pull the warmest air from the room. In the picture from three years ago, the air conditioning unit would pull colder air from lower in the room.

H2 Data Center Tour 2010

The white cabinets you see at the ends of the aisles here are power distribution units (PDUs). Power is sent from the power room into the data center under the raised floor. Each one of the PDUs in turn makes that power available to servers in its data center row.

H2 Data Center Tour 2010

If our data center operations team is responding to a ticket to help a customer from the floor, they’ll wheel around one of these bad boys to access the customer’s server directly. Three years ago, you might have seen a *gasp* CRT monitor.

H2 Data Center Tour 2010

I don’t have anything to say about this picture. It’s just a great shot of a row of rack-mount servers.

H2 Data Center Tour 2010

All good things must come to an end, so to close out the tour, we’ll walk you out of the data center just as we would if you came to visit in person. To the left, the data center operations team is keeping an eye on tickets, orders and DC stats.

If the dozen pictures we’ve included here don’t sate your appetite for data center goodness, head over to our Flickr photostream for more.

Did we miss anything? Is there anything else you want to see? Leave a comment below or let us know on Twitter or Facebook!

-Kevin


Doug Erwin in The Wall Street Journal

On July 20, our CEO Doug Erwin provided the lead quote in the lead story of the Marketplace section of The Wall Street Journal. On the heels of Intel’s strongest quarterly results in its 42-year history, WSJ’s Don Clark and Ben Worthen dove deeper into the chip maker’s success. While researching “Spending Soars on Internet’s Plumbing,” they chatted with our CEO about The Planet’s take on the latest technologies.

“We’ve been buying thousands of computers this year,” says Doug Erwin, chief executive of ThePlanet.com Internet Services Inc., a Houston-based company that runs data centers to offer computing services. ThePlanet says it now owns about 50,000 Dell Inc. servers.

Customers have responded, in many cases paying up for servers with high-end chips that command higher prices. Mr. Erwin of ThePlanet says it moved swiftly this year to Intel’s new technology, saving his company money on power and labor costs and providing greater performance to offer customers at a higher price.

If you didn’t get a chance to read the article when it was published, be sure to check it out when you have a chance. It paints a fantastic picture of the evolving landscape of the web, the need for newer, faster, more efficient technologies, and how companies like Dell, Google, Advanced Micro Devices and Hewlett-Packard are dealing with both.

-Kevin

Unrelated: When I played basketball in high school, I made my way into a few newspapers, and my family always managed to save a few copies of each so I could show them off for years to come. That was a local paper. This is the Wall Street Journal. Needless to say, we’ve got a few copies of that issue lying around the office. :-)


Making ‘Social Media’ Social

At HostingCon 2010, I joined (Curtis) R. Curtis, Nick Longo and Matt Ballek on a panel to discuss how hosters and entrepreneurs can optimize social media for branding, traffic and sales. When you start talking “social media,” you’re almost guaranteed a great turnout, and this session was no exception. The standing-room-only crowd asked some great questions of the panel, and despite my blog-audience-selected attire, many flagged me down to keep the conversation going.

The session allotted each presenter a few minutes to share some best practices from their experience with Twitter, Facebook and YouTube before the floor was opened to questions. Nick Longo led off with an excellent rundown of Rackspace’s high-level Twitter strategy. He discussed how the company approaches social media and the paramount importance of engaging employees and personal networks to connect with people to share a value-rich message. His personal strategy when it comes to posting on Twitter is to load the channel with value for his audience … even when that value isn’t directly related to his company. By focusing on his audience’s interests and injecting business content only where relevant, he’s built a fantastic following.

vidiSEO‘s Matt Ballek stepped up to the microphone next and brought the thunder by sharing some much-appreciated tips about how businesses can optimize their YouTube videos. If pictures are worth a thousand words, his “YouTube Video SEO – How to Optimize Your YouTube Video” interactive presentation is probably worth a few million, so I’d highly recommend you check it out to learn about the four pillars of optimizing your videos.

Following those future Hall of Famers, the pressure was on when I stepped up to the podium. Luckily, my presentation didn’t turn out to be a “Casey at the Bat” situation. I shared some of our social media successes with the crowd by explaining how and why #showmemyserver, the #500Club and The Planet Server Challenge worked as well as they did. If you’ve been around the neighborhood here for a while, you’re well versed in those campaigns, and if you’re unfamiliar, scroll down to the last paragraph to learn how you can earn the chance to watch a video of my presentation. In the meantime, take a look at the slides we covered:

(Curtis) R. Curtis batted cleanup on the panel by sharing a few tips and tricks for businesses on Facebook. He touched on the fact that the “personal” nature of the medium makes it tough to sell to users, but that shouldn’t dissuade businesses and entrepreneurs from building qualitative connections with customers by allowing them to connect and interact with the company and each other.

Given my presentation’s focus on user engagement, it seems only fitting that this blog have an opportunity for you to earn a bonus by becoming a part of the conversation. Leave a comment below with a few “words of wisdom” you’ve gleaned from your experience with social media, and I’ll email you a link to a video of my presentation. Along with my brilliant speech delivery, you’ll get a peek at “the hipster look.”

What else could you want?

-Kevin


Intel Guest Blog: Server Cloud Powered by Xeon 5500

As part of the data center team at Intel, I was proud to see Intel Xeon® 5500 processors in the server platform The Planet chose for their new Server Cloud offering. What was even more exciting was seeing a major hosting services provider move strongly and strategically into the cloud services business, an area of IT that is rapidly progressing. So on behalf of everyone at Intel, I hope this new offering and business model will be a huge success for The Planet and its customers.

Success in any venture is a function of multiple variables, and the alignment of hardware technology and platform architecture is of utmost importance. The focus on an open-source stack, the simplicity of the offering, and the server platform all work together to make Server Cloud a value-rich offering. Because I’m very familiar with the family of processors powering each Server Cloud instance, I’d like to discuss a few attributes of the hardware technology that customers will benefit from.

In my previous blog, I talked about the performance benefits of our newest processors. CPU performance is important because it directly impacts how you can use the server and to what extent. Because cloud offerings are expected to scale with utilization and performance needs, the underlying platform architecture needs to support those capabilities. Intel Xeon processors have unique “intelligence” that our marketing team has chosen to call Intel Turbo Boost. It increases CPU performance when needed and scales back when it’s not needed. Server Cloud users get immediate access to the performance of their vCPU(s) when they need it, and The Planet saves money when CPU utilization (and subsequently power utilization) is scaled back during off-peak application usage. Why do you care if The Planet saves money? Check out their pricing. For you to get the latest technology at that cost, you better believe the guys behind the curtain are doing their best to run the most efficient data center possible.

The high-performance I/O and memory architecture of the Xeon 5500 platform also allows you to rapidly access your data, whether it’s on the platform’s local hard drive, on a SAN or in the remote Cloud Storage offering. The Xeon 5500 platform can support up to 144 GB of memory via the super-fast QuickPath Interconnect (QPI) and the memory controller integrated into the CPU. Best of all, like the CPU, the memory modules and QPI links will also go into a low-power state when not being used … which also translates into lower operating costs and The Planet’s terrific prices.

As we continue building more powerful and more efficient processors, we’re excited to see The Planet incorporating them in innovative ways. The Planet has masterfully tied the Xeon 5500 series into their cloud hosting platform, and because they are committed to adopting the latest technologies when new processors are released, I am looking forward to seeing how the Server Cloud offering grows and evolves to maintain its performance dominance.

Thanks to The Planet for delivering a great product to the IT community – best of luck to all!

Cheers,

-Adarsh Sogal
Intel Corporation

About the Author: Adarsh Sogal is the marketing manager for Cloud Service Providers in Intel’s Data Center Group. He has been with Intel for 10 years, focused on serving the needs of service providers in the telco, IT outsourcing and hosting market segments around the world.


Resellers in the Server Cloud

Yesterday, we shared the news about SiteGround standardizing their cloud hosting offering on our Server Cloud, so we thought you might be interested to know why that’s such a big deal.

As Carl mentioned in his introductory Server Cloud – Now Available blog post, the new cloud hosting platform was designed to meet the immediate needs of our customers. SiteGround was one of our beta program participants, and they provided a ton of feedback on tweaking the offering for its initial release. With their help and the suggestions from hundreds of other beta customers, we pinpointed a few key Server Cloud differentiations that will benefit hosting resellers:

Transparency

Unlike most other cloud hosting providers, we’re completely transparent about the cutting-edge infrastructure that powers our products.

Reseller Benefit: You’re investing in “the cloud,” and given the incessant confusion around that term, being able to point at hardware to say “this is the cloud infrastructure and I trust it” is huge when deciding where to place your trust (and your business).


Technology

Server Cloud is built on the KVM hypervisor and is powered by Intel Xeon 5520 processors, Sun SAN data protection and a network maintained by Cisco and Juniper devices.

Reseller Benefit: The cloud doesn’t have to be a mystery. You and your customers should know what kind of processing power the platform provides, and you should be confident that your data is safe.


Dedicated Resources

Simply having the hardware, software and network available isn’t enough. Each Server Cloud instance is assigned dedicated resources to guarantee you have full access to that amazing technology.

Reseller Benefit: Server Cloud is designed so you have access to 100% of your resources 100% of the time. Pushing the upper limits on your installation? You can flip the switch and spin up a bigger instance – also with guaranteed resources – in seconds.


Bandwidth

Every Server Cloud instance is bundled with 1 TB of bandwidth at no additional cost, and additional bandwidth is only 10¢/GB.

Reseller Benefit: If your customer is hosting a website on your cloud hosting platform, they’ll need to access it, and they’ll want other people to access it. By including a terabyte of bandwidth with each Server Cloud instance, we give you a sizeable buffer before you are charged for incremental bandwidth.


Provisioning Speed

New Server Cloud instances can be spun up in as little as five minutes.

Reseller Benefit: You don’t need to carry a large inventory of servers or cloud instances and pay for them when they’re not being used. When you get an order, you can place that order with us and turn it around to your customer in a matter of minutes.

The Bottom Line

By any calculation, Server Cloud is a great value and is very competitively priced. When you factor in the new hardware performance and 1 TB bandwidth allocation, it’s almost unbelievable. Evaluate the cost of a competing platform with the same specs and any significant amount of bandwidth usage, and you’ll be amazed at the difference.

-Kevin


Audit Your MySQL Memory Usage

Ever wonder why your MySQL server runs out of memory or starts swapping like crazy? It could be that you are allowing too many connections or have a buffer that isn’t being used. Here are some simple formulas you can use to determine how much memory your MySQL server can use.

All of these variable values can be seen by using “SHOW GLOBAL VARIABLES” at the MySQL client prompt. They are given in bytes so you must convert to KB, MB or GB by dividing the value returned by 1024, 1048576 or 1073741824, respectively.

Remember: Each connection from your application is referred to as a thread by MySQL.

Per-Thread Memory Use (The amount of memory a single connection can use):

read_buffer_size + read_rnd_buffer_size + sort_buffer_size + thread_stack + join_buffer_size

Example: 1048576 + 2097152 + 1048576 + 262144 + 131072 = 4587520 bytes
Divide that by 1048576 to get the usage in MB, and you find that each thread can use up to 4.375 MB of memory.

Now, take that amount and multiply by your max_connections, and you’ll find the total potential memory usage. For this example, let’s set max_connections to 350. The example server could use 1,531.25 MB or 1.49 GB if all 350 connections were in use at a given time. If we have 4GB of RAM in the server, that accounts for almost one third of our available memory.
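If you want to make that arithmetic repeatable, it’s easy to script. The sketch below is a hypothetical helper, not an official tool: the variable values simply mirror the example numbers above, and on a live server you would read them from “SHOW GLOBAL VARIABLES” instead of hard-coding them.

```python
# Hypothetical per-thread memory sketch. Values mirror the example above;
# on a real server, read them with "SHOW GLOBAL VARIABLES".

PER_THREAD_BUFFERS = {
    "read_buffer_size": 1048576,
    "read_rnd_buffer_size": 2097152,
    "sort_buffer_size": 1048576,
    "thread_stack": 262144,
    "join_buffer_size": 131072,
}

def per_thread_bytes(buffers):
    """Memory a single connection (thread) can allocate, in bytes."""
    return sum(buffers.values())

def worst_case_connection_bytes(buffers, max_connections):
    """Worst case: every allowed connection allocates its full buffers."""
    return per_thread_bytes(buffers) * max_connections

per_thread = per_thread_bytes(PER_THREAD_BUFFERS)
print(per_thread / 1048576)  # 4.375 MB per connection
print(worst_case_connection_bytes(PER_THREAD_BUFFERS, 350) / 1048576)  # 1531.25 MB
```

Swap in your own values and max_connections setting, and you get the same per-thread and worst-case totals described above.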

And that’s not all! There are several other “global buffers” that MySQL creates depending on which table engines you are using. The formula below assumes you have a mix of MyISAM and InnoDB tables and you are using the query cache:

Base MySQL Memory Usage

key_buffer_size + max_heap_table_size + innodb_buffer_pool_size + innodb_additional_mem_pool_size + innodb_log_buffer_size + query_cache_size

Example: 1073741824 + 33554432 + 2147483648 + 134217728 + 10485760 + 536870912 = 3936354304 bytes
To get the usage in GB, we divide by 1073741824 and see that 3.66 GB is being used!

Again, in a 4GB server, base MySQL – with no connections – can use 3.66GB, or about 90 percent of my server’s physical memory. Yikes! When you see that, your first move should be to contact your sales rep to get more memory installed. We advise, for system stability, never to assign more than 80 percent of overall system memory to MySQL (or any process for that matter).
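The 80 percent guideline is also easy to check mechanically. This is a hypothetical sketch (not an official tool) that plugs in the example values from this post: 4,587,520 bytes per thread, 350 connections and 4 GB of RAM.

```python
# Hypothetical sketch: flag a MySQL configuration whose worst-case memory
# use exceeds 80% of physical RAM. Values mirror the example numbers above.

GLOBAL_BUFFERS = {
    "key_buffer_size": 1073741824,
    "max_heap_table_size": 33554432,
    "innodb_buffer_pool_size": 2147483648,
    "innodb_additional_mem_pool_size": 134217728,
    "innodb_log_buffer_size": 10485760,
    "query_cache_size": 536870912,
}

def audit(global_buffers, thread_bytes, max_connections, ram_bytes):
    """Return (worst-case bytes, True if within the 80% guideline)."""
    worst_case = sum(global_buffers.values()) + thread_bytes * max_connections
    return worst_case, worst_case <= 0.8 * ram_bytes

worst, ok = audit(GLOBAL_BUFFERS, 4587520, 350, 4 * 1073741824)
print(round(worst / 1073741824, 2), ok)  # roughly 5.16 GB worst case: over budget
```

In this example the worst case exceeds the server’s physical memory entirely, which is exactly the situation the 80 percent rule is meant to catch before MySQL starts swapping.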

One other area to check for memory reduction is your client application code. While the PHP manual says you don’t have to explicitly call mysql_close or mysqli_close, we highly recommend it as a best practice because there’s a known PHP bug that prevents connections from being properly closed on script termination. And if each connection to MySQL eats up a base amount of memory whether it’s being used or not, your server will suffer for no reason.

-Matthew, CMDBA 5.0


Structure Panel: Different Clouds, Different Purposes

Kevin HazardAs you may recall, The Planet made an appearance at Structure 2010, and our very own Carl Meadows participated in the conference’s closing panel, “Different Clouds, Different Purposes: A Taxonomy of Clouds.”

Moderated by GigaOm’s Stacey Higginbotham, the panel was designed to evaluate the benefits of moving to the cloud, whether different cloud platforms will or should be interoperable, and whether businesses should build or buy cloud platforms. Executives from NetApp, Verizon Business, Yahoo! and StrataScale joined Carl on the panel, and each brought a different set of experiences and perspectives.

GigaOm has published the session on LiveStream, so you can watch the half-hour discussion if you weren’t able to join us in San Francisco. If you’re like me (a proud member of The Official Carl Meadows Fan Club), you can skip to “the good parts” to hear The Planet’s perspective from Carl at 6:50, 11:50, 20:00 and 25:30.

What are your thoughts on the panel topics? Any questions about Carl’s responses?

-Kevin


The Planet @ HostingCon 2010

We are four days away from HostingCon 2010 in Austin, and you can feel the excitement in the air here. We’ve got so many fun things planned – from showing off our new booth design to featuring some of our smart people in the conference sessions.

Monday, July 19

Around here we love metrics … If we can’t measure it, we don’t do it. Though it gets a little tricky when we start benchmarking how witty our employees are – especially in blog posts. Our Chairman and CEO Doug Erwin will discuss a few of the measurements we’ve found to be beneficial in running a hosting business Monday morning at 9:00 on the “Adding Value to Your Business With Financial and Operating Metrics” panel. I know Doug well, so I can guarantee he’ll find a way to make a complicated topic interesting and relevant.

If you miss him in the morning, don’t worry: You have a second chance to see him at 1 p.m., when he’ll be sharing his thoughts on “Opportunities and Advantages for Hosting Resellers.” Many of our most successful customers are reseller partners, so we’ll bring some great perspective on strategies that will help you grow your business if you’re a reseller.

If you haven’t met Doug, take my word for it – you should stop by one of his sessions. I might be a bit biased (since technically I work for the man), but he’s a blast. He is wicked smart, charismatic and passionate about the business.

Tuesday, July 20

That was just the first day? I’m already tired. Actually, I am really jazzed about Day 2. The exhibit hall opens at 11 a.m., and we are excited for you to see our new look. In booth 124, we’ll be showing off our data centers, people and products. If you’re up for it, we’re bringing back the Server Challenge and giving away a netbook for the fastest time in rebuilding a Pentium 1950 server.

In case you were wondering, my personal best is 32 seconds … See if you can top that!

While the booth should be a blast, I am most excited about the 9 a.m. panel: “Optimizing Media for Branding, Traffic and Sales.” “Why?” you might ask … Because my colleague, office-mate, and travel companion to all these shows – Kevin W. Hazard, Jr. – will be speaking. Kevin is one of the most visible personalities at The Planet. He is our Evangelist. When you talk to us on Twitter or post in our forums, the response you usually get is from Kevin. He breathes a lot of life into our brand every day through our social media channels. If you’ve ever wondered how you can use social media to build your brand, you’ll get a lot out of this session.

#EvangelistAttire

Kevin is well known around the office for his T-shirts, love of Halloween costumes and constantly changing facial hair. My colleague Katie and I have dared Kevin to show off his unique style at the panel, so we want you to help us choose our Evangelist’s Attire. This should all shake out as a quick Twitter contest where you vote on what Kevin should wear during the panel.

The five styles we’ve narrowed down for K. Hazard to sport are:

  1. A repeat of last year’s Halloween “Ron Burgundy” look – complete with pinky ring and mustache
  2. The hipster look with skinny jeans, V-neck T-shirt, zip-up hoodie and Chuck Taylors
  3. The clean-shaven “prepster in seersucker” look (my personal favorite)
  4. A tuxedo T-shirt with Lemmy-esque facial hair
  5. Let Kevin surprise us

It’s up to you, Citizens of The Planet.

Two ways to make your voice heard:

  • Leave a comment on this post with your vote.
  • Post a Tweet with the number of your vote and hashtag #EvangelistAttire.

The “polls” will open when this blog is posted and will close at 5 p.m. CDT on Friday, July 16.

We can’t wait to see you in Austin – in one of the sessions, at our booth or even on Sixth Street. If you haven’t registered to attend, we can save you a little cash: Register online at hostingcon.com, and use the promo code ThePlanet2010 for $10 off your exhibit hall pass.

Don’t forget to place your #EvangelistAttire vote by Friday at 5 p.m.!

- Sherry


It’s Your Planet. Host it Your Way.

Kyle Smith

If you visited our cloud hosting page when it officially launched on June 28, you got a sneak peek of our new website design. Several of our followers on Twitter quickly commented about how much they liked the new look, so we’ve been excitedly working to transition the full site.

We could have taken the “easy” route of re-skinning our previous layout with new images and styles, but we wanted to incorporate the usability feedback we’ve gathered to make our site the easiest to use and most value-rich in the industry. We identified choke points in the previous design, A/B tested various new styles, redefined our site structure, created new content and completely redesigned our shopping cart.

Needless to say, our team has been working pretty long hours over the past few months to get everything done so quickly.

The payoff came on Saturday evening when we officially flipped the switch and brought the new site to life:

The Planet Site Redesign

One of the biggest challenges in redesigning such a large site is finding an aesthetically pleasing way to show the full breadth and depth of The Planet’s products and services without over-complicating the user experience. If a user hears about Server Cloud from one of their friends, they should be able to visit our homepage and get to the content they need quickly. In that example, the user would click “Cloud Hosting” in the primary navigation to find links and content pertaining to Server Cloud:

The Planet Cloud Hosting

As one of the few hosts in the industry to offer everything from colocation up through fully managed hosting, we also faced the challenge of helping customers get a high-level understanding of our product and service spectrum so they can decide which hosting solutions best meet their needs. We had a good foundation with our “Power to Choose” graphic, but it was missing something. As a buyer, I always want to know what I’m responsible for and what I can expect from my service provider. So we worked that concept into a graphic that also illustrates the breadth of our service offerings:

Product Comparison

One of our biggest goals was to be as transparent and informative as possible without needlessly adding content for content’s sake. Our website is a resource for prospective customers and current customers alike. We made our navigation more intuitive. We also provided more technical content and consolidated that content on about 40% fewer pages. To simplify contacting us — whether you have a technical, billing or sales question — we’ve incorporated live chat and phone options throughout the site.

When a new customer comes to our site and decides to place an order, they’ll get to use our redesigned shopping cart to customize their solution:

The Planet Site Redesign

To revitalize the checkout experience, we made changes to more clearly display available options; provided additional information via help buttons; and tied together the entire process with clear review and checkout pages to help users ensure they are ordering exactly what they want.

The Planet Site Redesign

If you have a few minutes, click around the new site and come back here to leave a comment with your thoughts. What are your favorite changes? What do you wish we would have maintained from the previous site? How can we make the new site even better?

-Kyle


The Planet Server Cloud – Now Available!

Carl Meadows

The Planet Server Cloud is now available.

It’s refreshing to be able to say that.

Our team has been working for months to build a production-ready cloud server product. We’ve conducted extensive surveys with our customers about the features that are most important to them, and accordingly, we’ve spared no expense in developing a platform that’s purpose-built for web-based businesses.

The industry is rife with cloud services hindered by inconsistency and a lack of redundancy. By contrast, our customers count on us for high availability, rapid provisioning, seamless scalability and generous bandwidth, so we took an entirely new approach. The most obvious differentiators are the hardware devices that support the platform: Sun storage area networks, Intel Nehalem processors, a Cisco- and Juniper-powered network, and Dell servers running KVM – Kernel-based Virtual Machine.

With the feedback we received from our customers and beta testers, Server Cloud now features:

Dedicated Resources

Each Server Cloud instance includes completely dedicated CPU, RAM, storage and network capacity. This ensures your Server Cloud performance will never be negatively affected by another customer’s resource usage.

Redundant Storage

Server Cloud disks are powered by a high-performance, high-availability storage area network (SAN). Many cloud/VPS solutions operate on local disks from the host system, which means any compute or storage failure can result in downtime and potential data loss. By contrast, at The Planet your Server Cloud provides higher availability since computing and storage resources are separate, independent and redundant. In the event of a complete host server failure, your Server Cloud data is always protected.

Rapid Provisioning

Once your order has been placed and verified, provisioning is completely automated. Your server can be ready for login in as little as five minutes.

Cutting-Edge Technology

Server Cloud is built on the KVM virtualization platform, directly integrated into the mainline Linux kernel. It offers multiple benefits compared with legacy virtualization technologies like Xen:

  • Primary platform for the open-source development community: Backed by the Linux Foundation and the top Linux distributions, including Red Hat and Canonical, KVM is the platform of choice and the future for Linux-based virtualization.
  • High-performance, feature-rich platform: KVM supports the latest in virtualization technologies, including live migration, RAM de-duplication, para-virtualized storage and network capabilities, coupled with stellar performance and efficiency.
  • Interoperable and portable to physical servers: No operating system guest modifications are required by KVM, which makes it the ideal platform for a truly portable hybrid hosting infrastructure.
  • More up-to-date technologies: As part of the mainline kernel, KVM offers direct access to all Linux updates and security patches as they are released.
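
Since KVM relies on hardware virtualization extensions, a quick way to see whether any given Linux box could host KVM guests is to check the CPU flags and the loaded kernel modules. A minimal sketch (Intel VT-x shows up as the vmx flag, AMD-V as svm; output naturally varies by host):

```shell
#!/bin/sh
# Check for the hardware virtualization extensions KVM relies on.
if grep -q -E '(vmx|svm)' /proc/cpuinfo 2>/dev/null; then
    echo "CPU supports hardware virtualization"
else
    echo "no VT-x/AMD-V flags found (or not a Linux host)"
fi

# List any loaded KVM kernel modules (kvm plus kvm_intel or kvm_amd).
lsmod 2>/dev/null | grep kvm || echo "no KVM modules loaded"
```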

Seamless Upgrades

Need some more RAM? Want to add a CPU? Because your data is hosted outside the host server, once you decide on an upgrade, all we do is reboot your instance with the new resource allocations. If there isn’t enough capacity available on your current host server, you’ll be automatically moved to another host server that has it … in the same amount of time.

I could probably talk your ear off about why we designed the product this way or that way, but I’ll save some of that content for a follow-up post. All you need to know now is that The Planet Server Cloud is amazing, and you can take advantage of it right now.

-Carl


The Planet Server Cloud Infrastructure

Kevin Hazard

“The Cloud.” It’s hyped: We’ve talked about it. Everyone else is talking about it.

It’s also a mystery.

Not so much in the sense that we don’t know what it can do, but more in the sense that no one lets you see what it looks like. Is it literally an Ethernet cord thrown into a magical mass of visible water droplets?

To coincide with the launch of our new Server Cloud product line, we’re taking you behind the scenes to see the actual hardware on which your cloud server is hosted. Because we’ve invested such a significant amount of time, effort and capital into building our custom cloud offering on the KVM virtualization platform, we want to show off the infrastructure. If you’ve read our releases or any of the other blogs about our Server Cloud offering, you know what you’ll see: Intel Nehalem (or newer) processors running on Dell servers, attached to a high-availability SAN. Reading about those fancy pieces of technology and seeing them in action are two entirely different things, though.

Without further ado, let’s head into D6 Phase 3 to see The Planet Server Cloud infrastructure:

The Planet Server Cloud Infrastructure (photo captions):

  • Meet “The Cloud.”
  • Server Cloud host servers with Intel Xeon 5520s
  • A glimpse into a host server cabinet
  • Host servers connect to the rest of the Server Cloud and to the Internet
  • Data is stored on a Sun Microsystems SAN
  • SAN storage ensures data reliability and speed
  • Sun SAN connectivity
  • Network devices from Cisco, Foundry and Juniper
  • Another Server Cloud data center row

As of today, you can get your piece of The Planet Server Cloud. If you’re still itching to see more, check out our Server Cloud Infrastructure Flickr set. If you’re not wholly convinced that grabbing a dedicated piece of this architecture for $49/month is worth it, you may never be. :-)

-Kevin


The Planet @ Structure 2010

Kevin Hazard

Have you managed to get the “Grease”-inspired “Summer Shows” tune out of your head yet? I can attest that it is far from forgotten at our office … both because it is painfully catchy and because we’re in the midst of executing on the schedule referenced in it.

This week, we return to the beautiful city of San Francisco to attend Structure 2010. If you’re unfamiliar with Structure, it’s designed to bring together industry leaders to discuss the future of cloud computing and make predictions as to where the industry is headed. Discussions center around which technologies are emerging as standards, and conference speakers discuss best practices for implementing cloud infrastructures in the enterprise. With our investment in cloud innovation, it’s the perfect place to talk about the future of the cloud.

If you’re interested in learning more about The Planet’s contrarian approach to the cloud, swing by our booth in the sponsor room. You’ll be able to find us by looking for the crowd of people gathered around our new interactive display:

Structure 2010 Booth

One of our goals at every trade show is to engage attendees, and this screen is our latest brainchild. Instead of cycling through a slide show or showing off our website, we wanted to empower our booth’s visitors to explore our enterprise hosting environment very simply. Step up to the screen, pull up pictures, move them around, expand and shrink them, ask questions … We want you to “experience” The Planet.

Conference Panel

In addition to our event sponsorship and booth activities, Carl Meadows, The Planet’s senior product manager for cloud services, will participate in a conference panel tomorrow (June 24) titled, “Different Cloud, Different Purposes: A Taxonomy of Clouds.” The panel begins at 4:35 p.m. and will also feature executives from Yahoo!, Verizon, StrataScale and NetApp. The content of the session will be spectacular, and if you have questions for Carl, he will be available to chat at our booth after the panel.

If you happen to be at the UCSF Mission Bay conference center, stop by our booth and interact with our data centers. If you can’t make it, don’t worry … We’ll be posting the new infrastructure pictures on Flickr soon. As you can see in the right-hand sidebar, our DC tour blogs are some of our most popular posts, and they’re due for a refresh.

-Kevin


Another Newbie Drawn in by The Planet’s Gravity

Subrata Mukherjee

In April, I became the newest product manager at The Planet. As you can see from other newbie Planeteer posts this year, we are hiring. :-) The combination of the people I met during the interview process, the job description and the benefits made it an easy choice. I knew this was the right opportunity … The Planet’s gravitational pull was strong.

In my short tenure, these are the things that stand out most so far:

  • There are a lot of people here with a TON of experience in hosting. As someone who has worked in roles in engineering and product management across various tech sectors, ranging from semiconductors (processors and memory) to network and cable test equipment, this is exactly the kind of environment I needed to learn about the industry. As I learn the subtle industry differences, I hope to provide value by applying product management concepts to our products and services to create solutions to meet our customers’ needs.
  • People here are passionate about their work. I see it in everyone from engineers who drop by my cubicle to make a case for various technologies and vendors to Kevin and Tomy engaging with customers in the wild world of social media.

So, about the move …

After spending all of my life as a Northerner and living mostly in cities known for being rainy – yes, the last city was Seattle – or cold, I packed my bags and ventured down to Houston. It’s amazing how huge this country is and how different life can be in other regions. It remains to be seen whether I’m fully prepared for a Texas summer.

The move wasn’t as simple as packing bags, and I reached Houston later than The Planet’s gravitational pull (g = 9.8 m/s²) would have predicted. I considered various movers with widely varying reputations to get my furniture and vehicle from one side of the country to the other. I also evaluated hundreds of possible places to live without really knowing the Houston neighborhoods yet. As you’d expect, I spent more than my fair share of time hunkered over a search engine to get reviews, and my soon-to-be-coworkers were happy to provide feedback.

We make a lot of choices and have a lot of options available in life that affect our future.

For me, my choices affected the security of my possessions and where I’d live in this city. For you as a hosting customer, we know your choices are around your site’s security and how your hosted environment can affect the success of your business. As a product manager here, one of my personal goals is to give you reasons to love your decision to host with The Planet – which you’ll have the ability to share with your peers.

As I continue learning the ins and outs of hosting, I’m looking forward to chatting with our customers and potential customers about the kinds of products we can provide to create a better hosting experience. If we’re missing a product or service you think we should have, drop it in a comment below!

-Subrata


UNIX Sysadmin Boot Camp: Passwords

Ryan Robson

Are you still with me? Have you kept up with your sysadmin exercises? Are you starting to get comfortable with SSH, bash and your logs? Good. Now I have an important message for you:

Your password isn’t good enough.

Yeah, that’s a blanket statement, but it’s shocking how many people are perfectly fine with a six- or eight-character password made up of lowercase letters. Your approach to server passwords should be twofold: stick with it, and be organized.

Remembering a 21-character password like ^@#*!sgsDAtg5t#ghb%!^ may seem daunting, but you really don’t have to remember it. For a server, secure passwords are just as vital as any other form of security. You need to get in the habit of documenting every username and password you use and what they apply to. For the sake of everything holy, keep that information in a safe place. Folding it up and shoving it in your socks is not advised. (See: blisters.)
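
If inventing a string like that on your own feels awkward, let the server do it. A minimal sketch using the kernel’s random source (available on any Linux box; the character set and length here are just one reasonable choice, so adjust to taste):

```shell
#!/bin/sh
# Draw random bytes from /dev/urandom, keep only characters from the
# allowed set, and cut the result off at 21 characters.
tr -dc 'A-Za-z0-9!@#$%^&*' < /dev/urandom | head -c 21
echo    # trailing newline so the password doesn't run into your prompt
```

Paste the result straight into your documentation (the safe kind, not the sock kind).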

Want to make your approach to password security even better? Change your passwords every few months, and make sure you and at least one other trusted colleague or friend know where to find them. You’re dealing with sensitive material, but you can never guarantee that you will be available to respond to a server-based emergency. In these cases, your friends and co-workers end up scrambling through bookshelves and computer files to find any trace of useful information.

Having been one of the abovementioned co-workers in this situation, I can attest that it is nearly impossible to convince customer service that you are indeed a representative of the company when you have no verification information or passwords to provide.

Coming soon: Now you’ve got some of the basics, what about the not-so-basics? I’ll start drafting some slightly more advanced tips for the slightly more advanced administrator. If you have any topics you’d like us to cover, don’t hesitate to let us know in a comment below.

-Ryan

P.S. If you remember Laurence’s fourth Tech Tip from the Trenches, you’re probably already on top of this. It’s definitely a point worth reiterating, though.


Application Services and APIs

Duke Skarda

There’s an old saying that what’s down in the well comes out in the water. It just means that if the inside of something is good, then the outside will also be good and vice versa. That’s true for a web portal as well. If the machinery and systems behind the portal aren’t strong, then the portal will never be great. After all, a portal is just a window into the systems behind the curtain. The functionality and design of the portal are limited by the information it can access and the automated services it can utilize.

Let’s bring this back to The Planet: What does Orbit have access to? A lot. The Planet has spent years automating systems and refining our databases. We now have quite a collection of automated services and a strong data warehouse. Our current Orbit 2.0 platform offers a lot of automated functionality. With the iPhone web app, you can easily access the key functionality and data on your phone.

We’re working on a new way to leverage all of that automation and information. As we started looking at how we can take our portal to the next level, we took a long look at our software architecture and infrastructure. What we found was some great functionality built on a sort of mish-mash of technologies and architectures. This isn’t uncommon in systems that are 10 years old, but it definitely makes the job of upgrading the portal more time consuming.

We have started implementing a Service-Oriented Architecture (SOA) from the ground up. This will provide us with a great foundation for an extensible internal API, as well as a robust external API. But first, we have a lot of work in front of us to untangle and normalize internal services and functional responsibilities. We are prioritizing that work according to the results of customer surveys and visits. All of this will result in more dependable portal functions and a much more powerful API.

What you will see is a pipeline of functionality: New internal services will be followed by portal improvements, which will be followed by new APIs.

SOA Services (Internal) » Portal Improvements » External APIs

Right now, we are in beta on some new RESTful APIs. We want to prove out our infrastructure and approach. Our SOA rollout and beta should be happening around the end of Q2/early Q3.

We’re working in a lot of directions at once right now. In my next blog, I’ll talk about the changes we’re making in all of our areas that will provide the foundation for this new and improved collection of services and APIs.

-Duke


UNIX Sysadmin Boot Camp: Your Logs and You

Ryan Robson

We’re a few exercises into UNIX Sysadmin Boot Camp, and if you’re keeping up, you’ve learned about SSH and bash. In those sessions, our focus was to tell the server what we wanted it to do. In this session, we’re going to look at the logs of what the server has done.

Logs are like an overbearing mother who sneakily follows her teenage son around and writes down the addresses of each house he visits. When he realizes he lost a really important piece of baseball history at one of those houses, he’ll be glad he has that list so he can go desperately search for the soon-to-be-noticed missing bat. Ahem.

MAKE BEST FRIENDS WITH THIS DIRECTORY: /var/log/

When something goes wrong – when there’s a hitch in the flux capacitor or too many gigawatts in the main reactor – your logs will be there to let you know what’s going on, and you can pinpoint the error with educated vengeance. So treat your logs with respect.

One of the best places to start harnessing this logged goodness is /var/log/messages. This log file reports general errors with network and media, among other things. As you install applications and learn your server’s command-line environment, you’ll see application-specific logs as well, so it’s a very good idea to keep a keen eye on these. They just might save your life … or server.

Some of the most commonly used logs (may vary with different Linux distributions):

  • /var/log/messages – General message- and system-related info
  • /var/log/cron.log – Cron job logs
  • /var/log/maillog – Mail server logs
  • /var/log/kern.log – Kernel logs
  • /var/log/httpd/ – Apache access and error logs
  • /var/log/boot.log – System boot logs
  • /var/log/mysqld.log – MySQL database server logs
  • /var/log/secure – SSH authentication logs
  • /var/log/auth.log – Authentication logs
  • /var/log/qmail/ – Qmail log directory (more files inside this directory)
  • /var/run/utmp or /var/log/wtmp – Binary login records (view with the who or last commands)
  • /var/log/yum.log – Yum log files
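
Two commands will carry you a long way through all of these files: tail for watching the most recent entries and grep for searching them. A quick sketch (as noted, exact paths vary by distribution; on Debian/Ubuntu, for example, the general log is /var/log/syslog and SSH failures land in /var/log/auth.log):

```shell
#!/bin/sh
# Show the 50 most recent general system messages.
tail -n 50 /var/log/messages

# Count failed SSH login attempts recorded in the auth log.
grep -c 'Failed password' /var/log/secure

# Follow a log live while you reproduce a problem (Ctrl-C to stop):
# tail -f /var/log/messages
```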

There are plenty more in-depth logs – particularly involving raw system components – and other tools that act similarly to logs but are a bit more active, like tcpdump. Those are a little more advanced to interpret, so I’ll save them for another guide and another day.

At this point in our UNIX workout series, you’re familiar with the command line, you know the basics of how to tell your server what to do and you just learned how to let the server tell you what it’s done. There’s still a bit of work to be done before you can call yourself a UNIX ninja, but you’re well on your way. In our next installment, we’re going to take a step back and talk about p455w0rd5.

Keep learning.

-Ryan


Hardware Geeks Rejoice: Quad Xeon 7550 in a Dell R910

Kevin Hazard

Four Intel Xeon 7550 Nehalem EX 2.0GHz eight-core processors with 18MB of cache and 1066MHz bus speed. Up to 15 SAS/SCSI hard drives. Up to 1TB of DDR3 RAM. Seven available RAID configurations: 0, 1, 5, 6, 10, 50 and 60. Virtual Rack connectivity. What else could you need in a server?

As you may have heard, The Planet was the first hosting provider to offer a Dell PowerEdge R910 server chassis with a four-socket configuration of Intel’s Nehalem EX processors. Because each processor has eight cores, the server operates 32 individual cores out of the box and 64 cores with hyper-threading enabled.

When I heard about this server, I had to see it. You could probably describe it the same way Sam Rockwell’s character describes his company’s smart bomb in Iron Man 2: “If it were any smarter, it would write a book … A book that would make Ulysses look like it was written in crayon … And it would read it to you.” Never mind the fact that it retails for more than 10 times the cost of my first car.*

Chances are that if you’re reading this post, you’ve already been distracted by the pictures I’ve posted below, so I’ll quickly get out of your way so you can ogle the majesty of this magnificent piece of hardware. Click on any picture for additional detail.

Dell R910 with Quad Xeon 7550 Nehalem EX processors (photo captions):

  • Dell PowerEdge R910 chassis
  • Opening the Quad Xeon 7550
  • Removing one of eight RAM risers
  • Up to eight 16GB RAM modules per riser
  • Six heavy-duty internal case fans
  • Heat sinks covering four Nehalem EX processors
  • Heat sink and Xeon 7550 processor removed
  • Xeon 7550 – Eight cores, 2.0GHz per core
  • Dell PowerEdge R910 connectivity

If you can’t get enough of these detailed images, visit The Planet Flickr photostream for more.

And if after seeing this server, you want to add one to your hosting infrastructure, we can help you out: Quad Xeon 7550.

-Kevin

*For those who are curious, my first car was a big, mostly white Dodge Ramcharger. :-)


UNIX Sysadmin Boot Camp: bash

Ryan Robson

Welcome back to UNIX Sysadmin Boot Camp. You’ve had a few days to get some “reps” in accessing your server via SSH, so it’s about time we add some weight to your exercise by teaching you some of the tools you will be using regularly to manage your server.

As we mentioned earlier in this series, customers with control panels like cPanel/WHM and Plesk might be tempted to rely solely on those graphical interfaces. They are much more user-friendly in terms of performing routine server administration tasks, but at some point, you’re going to need to get down and dirty on the command line. It’s inevitable. This is where you’ll use bash commands.

Here are the top 10 essential commands you should get to know and remember in bash. Click any of the commands to go to its official “manual” page.

  1. man – This command provides a manual for other bash commands. Want more info on a command? Type man commandname, and you’ll get more information about “commandname” than you probably wanted to know. It’s extremely useful if you need a quick reference for a command, and it’s often much more detailed and readable than a simple --help or -h flag.
  2. ls – This command lets you list results. I showed you an example of this above, but the number of options available to you with this command is worth looking into. Using the “manual” command above, run man ls and check out the possibilities. For example, if you’re in /etc, running ls -l /etc will get you a slightly more detailed list. My most commonly used list command is ls -hal. Pop quiz for you (where you can test your man skills): What does the -hal mean?
  3. cd – This command lets you change directories. Want to go to /etc/? cd /etc/ will take you there. Want to jump back a directory? cd .. does the trick.
  4. mv – This command enables you to move files and folders. The syntax is mv originalpath/to/file newpath/to/file. Simple! There are more options that you can check out with the man command.
  5. rm – This command enables you to remove a file or directory. In the same vein as the mv command, this is one of those basic commands that you just have to know. By running rm filename, you remove the “filename” file.
  6. cp – This command enables you to copy files from one place to another. Want to make a backup of a file before editing it? Run cp origfile.bla origfile.bak, and you have a backup in case your edit of origfile.bla goes horrendously wrong and makes babies cry. The syntax is simply: cp /source /destination. As with the above commands, check out the manual by running man cp for more options.
  7. tar – On its own, tar is a command to group a bunch of files together, uncompressed. The resulting archive can then be compressed into .tar.gz (gzip) format. The command can be used for creating or extracting, so it may be a good idea to familiarize yourself with the parameters, as you may find yourself using it quite often. For a GUI equivalent, think 7-Zip or WinRAR for Windows.
  8. wget – I love the simplicity of this little command. It enables you to “get” or download a target file. Yes, there are options, but all you need is a direct link to a file, and you just pull one of these: wget urlhere. Bam! That file starts downloading. Doesn’t matter what kind of file it is, it’s downloaded.
  9. top – This handy little binary will give you a live view of memory and CPU usage currently affecting your machine, and is useful for finding out where you need to optimize. It can also help you pinpoint what processes may be causing a slowdown or a load issue.
  10. chmod – This little sucker is vital to making your server both secure and usable, particularly when you’re going to be serving the public like you would with a web server. If you read about how I learned to stop worrying and love the bomb, er, permissions, you should be familiar with this command and its connotations. Combine good usage of permissions and iptables, and you have a locked-down server.

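To tie several of these together, here’s a quick practice session you can run in a scratch directory. Every command is from the list above, and the comment on the ls line happens to answer the pop quiz:

```shell
#!/bin/sh
mkdir -p demo && cd demo          # scratch directory to play in
printf 'hello\n' > notes.txt      # create a file to work with
cp notes.txt notes.bak            # back it up before "editing"
mv notes.bak backup.txt           # rename (move) the copy
ls -hal                           # long list (-l), all files (-a), human-readable sizes (-h)
tar -czf notes.tar.gz notes.txt   # bundle and gzip-compress
rm backup.txt                     # clean up the spare copy
```
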
When you understand how to use these tools, you can start to monitor and track what’s actually happening on your server. The more you know about your server, the more effective and efficient you can make it. In our next installment, we’ll touch on some of the most common server logs and what you can do with the information they provide.

Did I miss any of your “essential” bash commands in my top 10 list? Leave a comment below with your favorites along with a quick explanation of what they do.

-Ryan
