The Importance of Network Security

On Friday, April 27, 2011, I powered on my Sony PlayStation 3 and prepared to sit down for an enjoyable gaming session. As a Sony customer and a PlayStation Network (PSN) user, I expected my system to be able to connect to a service that I was told would be available. Because I had to sign an agreement to join the PSN, I expected my personal information to be secure. That morning, I logged in and had no idea that my personal security might be at risk due to lax security practices and possibly redundant storage of my information.

My many years of brand loyalty held strong as I was told repeatedly that the PSN was down as a result of maintenance. I understand that emergencies happen, and a professional company plans properly to shorten the duration of their impact. As it turned out, proper planning for this type of event seemed to have been lost on Sony. A malicious security cracker was able to infiltrate the network and gain access to numerous PSN customers’ sensitive personal information. This kind of blunder had every PSN customer wondering what could be done to prevent such an event from happening again.

You probably noticed that I used the word “cracker” as opposed to the more common “hacker.” A hacker is someone extremely knowledgeable about computers and programming who knows the ins and outs of systems … and that knowledge is completely legal. The typical misconception is that all “hackers” are engaged in illegal activity, which is not true. If a hacker decides to use those skills to circumvent security for the purpose of stealing, altering or damaging data (which is obviously illegal), then the hacker becomes a cracker. To put it simply: All crackers are hackers, but not all hackers are crackers.

When I started working at SoftLayer three years ago, I was told to pay very close attention to our company’s security policy. Each employee is reminded of this policy very regularly. Proper security practice is essential when dealing with private customer data, and with the advancement of technology comes the availability of even more advanced tools for cracking. As a trusted technology partner, it is our obligation to maintain the highest levels of security.

There is not a day at work that I am not reminded of this, and I completely understand why. Even at a personal level, I can imagine the detrimental consequences of having my information stolen; multiply that by thousands of customers, and it’s clear that good security practices are absolutely necessary. SoftLayer recognizes what is at stake when businesses trust us with their information, and that’s one of the big reasons I’m proud to work here. I’ve gone through the hassle and stress of having to cancel credit cards due to another company’s negligence, and as a result, I’m joining my team in making sure none of our customers have to go through the same thing.

-Jonathan

Global Expansion: An Early Look at Singapore

Based on the blog’s traffic analytics, customers are very interested in SoftLayer’s global expansion, and in my update from Tokyo, I promised a few sneak peeks into the progress of building out the Singapore data center. We’ve been talking about our move into Asia for a while now, but we haven’t shown much of that progress. The cynics in the audience will say, “I’ll believe it when I see it,” and to them, I say:

These pictures were actually taken a few weeks ago before our Server Build Technicians came on site, and it looks even more amazing now … But you’ll have to check back with us in the coming weeks to see that progress for yourself. Both the Singapore and Amsterdam facilities are on track to go live by the middle of Q4 2011, and we’re already starting to hear buzz from our customers as they prepare to snatch up their first SoftLayer server in Asia.

If you want to have a little fun, you should compare these build-out pictures with the ones we’ve posted from the completed San Jose facility and the under-construction Amsterdam data center. As we’ve mentioned in previous posts, SoftLayer uses a data center pod concept to create identical hosting environments in each of our locations. Even with the data centers’ varying floor plan layouts and sizes, the server room similarities are pretty remarkable.

Stay tuned for updates on the build-out process and for information about when you can start provisioning new servers in Singapore. If you have any questions about the build-out process, leave a comment below or hit us up on Twitter: @SoftLayer.

-@toddmitchell

Virtual-Q: Tech Partner Spotlight

Welcome to the next installment in our blog series highlighting the companies in SoftLayer’s new Technology Partners Marketplace. These Partners have built their businesses on the SoftLayer Platform, and we’re excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.
- Paul Ford, SoftLayer VP of Community Development

 

Scroll down to read a guest blog from Victor’s coworker, Sanjay Upadhyaya of Virtual-Q. Virtual-Q is a technology partner that delivers secure, scalable, and powerful cloud desktop computing from virtually any Internet-enabled device. To learn more about Virtual-Q, visit http://virtual-q.com/.

Taking Your Desktop to The Cloud

There’s good reason there’s so much awareness around cloud computing right now — it’s the fulfillment of an architecture we have all been awaiting: a platform with cloud resources that flexibly accommodate essential business services. Today, businesses are looking to cloud computing because they want fast time-to-market and to pay only for what they consume; that requires IT resources that adapt organically to the business and deliver high-performance computing.

Virtual-Q has developed a platform that surpasses the traditional virtualization technology available today. Virtual Desktop Infrastructure (VDI) presents several challenges that make it impossible to use VMware, Citrix or Microsoft VDI solutions, as delivered to the public, for a hosted VDI service. On its own, each respective vendor has an adequate solution for VDI; when combined, however, the resulting platform is far superior.

Using proprietary technology combined with the best technology available today, the Virtual-Q platform, better known as The Q, is setting a performance and scalability landmark across multiple industry sectors with several key benefits:

Extending the PC refresh cycle. The Q transfers the heavy processing from endpoint devices to the cloud. In the past, PCs lasted only three to four years because they couldn’t support the increased processing demands of new applications. Now that the heavy lifting has been offloaded, PCs can be used until they mechanically wear out after six to eight years. In new deployments, thin-client devices can be used instead, significantly reducing overhead.

Increasing data security. The Q enables organizations to remove all data from the users’ machines and instead host it in the Cloud. Now, a lost or stolen machine means little more than the cost of the device, instead of the potential data breach that used to keep IT folks up at night.

Increasing user productivity and improving employee satisfaction. The Q helps increase user productivity and employee satisfaction in a variety of ways. Using either hosted desktop or hosted application virtualization, your workers can access their desktops, or applications that they could previously only access on their PC, from any device and any location. This enables workers to be productive from places such as a home office or hotel kiosk, instead of only in the office on their corporate desktop. In addition, unmanaged PCs like an employee’s own laptop can run corporate applications; this has proven to lead to an increase in worker productivity as well as satisfaction.

Lower support costs. The Q virtually eliminates one of the most expensive areas of PC support: on-site visits, which can cost eight times as much as a phone-based support call. With The Q, IT staffers can fix desktop or application problems simply by logging into the server. For PC or thin-client problems, organizations are finding that it is less expensive to replace the old or low-cost hardware than to spend time troubleshooting. Local application virtualization can also reduce support costs because its isolation capabilities eliminate application conflicts. According to analysts at Forrester, local application virtualization decreases desktop application support costs by 80%.

Instant business continuity and disaster recovery. The Q allows workers to remotely access everything they need to continue working with minimal interruption from virtually any Internet-accessible device. The majority of workers will have the ability to work from home or from any other location while still having access to the applications they need to do their jobs.

Faster time to complete mergers and acquisitions. Mergers and acquisitions take a lot of time and resources — especially for those charged with onboarding the new employees. The Q enables IT to simply provide access to a worker’s applications or desktop, instead of the previous world of full desktop provisioning.

Support for contractors and other unmanaged workers. The Q provides unmanaged workers secure, managed access to corporate resources.

So now there’s only one question to ask: Is The Q right for you?

-Sanjay Upadhyaya, Virtual-Q

The Beauty of IPMI

Nowadays, it would be extremely difficult to find a household that does not store some form of media – whether it be movies, music, photos or documents – on their home computer. Understanding that, I can say with confidence that many of you have been away from home and suddenly had the desire (or need) to access the media for one reason or another.

Because the Internet has made content so much more accessible, it’s usually easy to log in remotely to your home PC using something like Remote Desktop, but what if your home computer is not powered on? You hope a family member is at home to turn on the computer when you call, but what if everyone is out of the house? In the past, most people like me would have just given up altogether, since there was no clear and immediate solution. Leaving your computer on all day could work, but what if you’re on an extended trip and you don’t want to run up your electricity bill? I’d probably start traveling with some portable storage device like a flash drive or portable hard drive to avoid the problem. This inelegant solution requires that I not forget the device, and the storage media would have to be large enough to contain all necessary files (and I’d also have to know ahead of time which ones I might need).

Given these alternatives, I usually found myself hoping for the best with the portable device, and as anticipated, there would still be some occasions where I didn’t happen to have the right files with me on that drive. When I started working for SoftLayer, I was introduced to a mind-blowing technology called IPMI, and my digital life has never been the same.

IPMI – Intelligent Platform Management Interface – is a standardized system interface that allows system administrators to manage and monitor a computer. Though this may be more than what the common person needs, I immediately found IPMI to be incredible because it allows a person to remotely power on any computer with that interface. I was ecstatic to realize that for my next computer build, I could pick a motherboard that has this feature to achieve total control over my home computer for whatever I needed. IPMI may be standard for all servers at SoftLayer, but that doesn’t mean it’s not a luxury feature.

If you’ve ever had the need to power on your computer and/or access its BIOS remotely, I highly suggest you look into IPMI. As I’ve learned more and more about IPMI technology, I’ve seen how it can be a critical feature for business purposes, so the fact that it’s a standard at SoftLayer suggests that we’ve got our eye out for state-of-the-art technologies that make life easier for our customers.

Now I don’t have to remember where I put that flash drive!

-Danny

Global Expansion: PoP into Asia – Japan

By the end of the year, SoftLayer’s global network will include points of presence (PoPs) and data centers throughout Europe and Asia. As George explained in Globalization and Hosting: The World Wide Web is Flat, the goal is to bring SoftLayer’s network within 40ms of everyone on the planet. One of the first steps in reaching that goal is to cross both of the “ponds” between our US facilities and our soon-to-open international facilities.

Global Network

The location and relative size of Europe and Asia on that map may not make them viable resources when planning travel (Seattle actually isn’t geographically closer to Tokyo than it is to San Jose), but they illustrate the connections we’ll make to extend our network advantages to Singapore and Amsterdam.

Since I’m currently on-site in Singapore, I can give you an inside look at our expansion into Asia. The data center is coming along very nicely, but before I show off pictures from that build-out, I thought I’d give you a glimpse of our first official network point of presence in Asia: Tokyo!

If you’re familiar with SoftLayer, you’re probably aware that we build our data centers in a pod concept for a number of reasons, and our network points of presence are no different. If you manage to sneak by about 15 levels of security at any of our network PoPs, you’d find identical hardware (with different labels).

By the time you get to this paragraph, you’ve probably spent a while geeking out on the hardware pictures, and by reading the labels, you’ve inferred correctly that we’ll have nameservers and VPN online in Tokyo, and the Juniper routers already look poised and ready to start passing traffic … The logical conclusion you should draw is that you need to get poised and ready to order your server in Singapore.

SoftLayer VP of Network Operations and Engineering Will Charnock is in Hong Kong to build out that PoP, and you might see a few (similar looking) pictures from there in the near future, and I’ll be sure to sneak a few shots of the Singapore DC progress for you too.

Sayonara!

-@toddmitchell

Changing the (YouTube) Channel

As one of the newest members of the SoftLayer family, let me make something clear: One of the biggest changes in SoftLayer’s social media presence is directly a result of me. Okay … well I might not have directly initiated the change, but I like to think that when you’re a new kid on the block, you have to stick together with the other new additions. My new BFF and partner in crime at SL is the SoftLayer channel on YouTube. It has replaced the SoftLayerTube channel (though I should be clear that I haven’t replaced anyone … I’ve just become a big help to our registered Social Media Ninja KHazard).

This blog is my first major contribution to the InnerLayer, and when I was asked to write it I must admit I was very excited. On literally my 6th day of work, my hope was to make a major impact or at least prove that a ninja-in-training (that would be me) can hold her own with a full-fledged ninja … but I digress. The real reason I’m here is to talk about our move from SoftLayerTube to SoftLayer. With a little YouTube wizardry and some help from our friends in Mountain View, CA, we’ve been able to take the helm of the better-branded /SoftLayer account.

Don’t worry, you are not going to lose any of your favorite SL videos … They’re just taking a permanent trip to the SoftLayer channel.

TL;DR Version
Old and busted: /SoftLayerTube


New Hotness: /SoftLayer


Subscribe!

-Rachel

SOAP API Application Development 101

Simple Object Access Protocol (SOAP) is built on server-to-server remote procedure calls over HTTP. The data is formatted as XML, which means well-formed data will be sent to and received from SoftLayer’s API. This may take a little more time to set up than the REST API, but it can be more scalable as you programmatically interface with it. SOAP’s ability to tunnel through existing protocols such as HTTP and its innate ability to work in an object-oriented structure make it an excellent choice for interaction with the SoftLayer API.

This post gets pretty technical and detailed, so it might not appeal to our entire audience. If you’ve always wondered how to get started with SOAP API development, this post might be a good jumping-off point.

Authentication
Before you start playing with the SoftLayer SOAP API, you will need to find your API authentication token. Go into your portal account, and click the “Manage API Access” link from the API page under the Support tab. At the bottom of the page you’ll see a drop down menu for you to “Generate a new API access key” for a user. After you select a user and click the “Generate API Key” button, you will see your username and your API key. Copy this API key, as you’ll need it to send commands to SoftLayer’s API.

PHP
PHP 5.0+ includes built-in classes for dealing with SOAP calls, which allows us to quickly create an object-oriented, server-side application for handling SOAP requests to SoftLayer’s API. This tutorial is going to focus on PHP 5.1+ as the server-side language for making SOAP function calls. If you haven’t already, you will need to install the SOAP client extension for PHP (the PHP manual has installation directions).

Model View Controller

Model-View-Controller or MVC is a software architecture commonly used in web development. This architecture simply provides separation between a data abstraction layer (model), the business logic (controller), and the resulting output and user interface (view). Below, I will describe each part of our MVC “hello world” web application and dissect the code so that you can understand each line.

To keep this entry a little smaller, the code snippets I reference will be posted on their own page: SOAP API Code Examples. Protip: Open the code snippet page in another window so you can seamlessly jump between this page and the code it’s referencing.

Model
The first entry on the API Code Examples page is “The Call Class,” a custom class for making basic SOAP calls to SoftLayer’s API. This class represents our model: The SOAP API Call. When building a model, you need to think about what properties that model has, for instance, a model of a person might have the properties: first name, height, weight, etc. Once you have properties, you need to create methods that use those properties.

Methods are verbs; they describe what a model can do. Our “person” model might have the methods: run, walk, stand, etc. Models need to be self-sustaining, that means we need to be able to set and get a property from multiple places without them getting jumbled up, so each model will have a “set” and “get” method for each of its properties. A model is a template for an object, and when you store a model in a variable you are instantiating an instance of that model, and the variable is the instantiated object.
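To make that concrete, here is a minimal sketch (my own illustration, not code from the examples page) of a “person” model with properties, set/get methods for each, a verb-like method, and an instantiation:

```php
<?php
// A tiny model: private properties, a public set/get pair for each property,
// and a method (a verb) that uses those properties.
class Person
{
    private $_firstName = '';
    private $_height = 0;

    public function setFirstName($name) { $this->_firstName = $name; }
    public function getFirstName() { return $this->_firstName; }

    public function setHeight($height) { $this->_height = $height; }
    public function getHeight() { return $this->_height; }

    // A method describes what the model can do.
    public function introduce()
    {
        return 'Hi, I am ' . $this->_firstName;
    }
}

// Storing the model in a variable instantiates an object of that model.
$person = new Person();
$person->setFirstName('Ada');
echo $person->introduce(); // prints "Hi, I am Ada"
```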

  • Properties and Permissions
    Our model has these properties: username, password (apiKey), service, method, initialization parameters, the service’s WSDL, SoftLayer’s type namespace, the SOAP API client object, options for instantiating that client, and a response value. The SOAP API client object is built into PHP 5.1+ (take a look at the “PHP” section above); as such, our model will instantiate a SOAP API object and use it to communicate with SoftLayer’s SOAP API.

    Each of our methods and properties is declared with certain permissions (protected, private, or public); these determine whether outside functions or extended classes can access those properties or methods. I “set” things using the “$this” variable; $this represents the immediate class that the method belongs to. I also use the arrow operator (->), which accesses a property or method (to the right of the arrow) that belongs to $this (or anything else to the left of the arrow). I gave as many of the properties default values as I could; this way, when we instantiate our model, we have a fully fleshed-out object without much work, which comes in handy if you are instantiating many different objects at once.

  • Methods
    I like to separate my methods into 4 different groups: Constructors, Actions, Sets, and Gets:

    • Sets and Gets
      Sets and Gets simply provide a place within the model to set and get the properties of that model. This is a standard of object-oriented programming and provides the model with a good bit of scalability: Rather than accessing the property itself, always refer to the method that gets or sets the property. This can prevent you from accidentally changing the value of the property when you are trying to access it. Lines 99 to the end of our Call class are where the sets and gets are located.

    • Constructors
      Constructors are methods dedicated to setting options in the model; lines 23-62 of the Call model are our constructors. The beauty of these three functions is that they can be copied into any model to perform the same function; just make sure you keep to the Zend coding standards.

      First, let’s take a look at the __construct method on line 24. This is a special magic PHP method that always runs immediately when the model is instantiated. We don’t want to actually process anything in this method, because if we use the default object we will not be passing any options to it, and unnecessary processing would slow response times. We pass the options in an array called $Setup. Notice that I am using type hinting and default parameters when declaring the function; this way I don’t have to pass anything to the model when instantiating it. If values were passed in the $Setup variable (which must be an array), then we run the “setOptions” method.

      Now take a look at the setOptions method on line 31. This method searches the model for a set method matching each option passed in the $Setup variable, using the built-in get_class_methods function. It then passes the value and name of that option to another magic method, the __set method.

      Finally, let’s take a look at the __set and __get methods on lines 45 and 54. These methods are used to create a kind of shorthand access to properties within the model; this is called overloading. Overloading allows the controller to access properties more quickly and efficiently.

    • Actions
      Actions are the traditional verbs that I mentioned earlier; they are the “run”, “walk”, “jump”, and “climb” of our person model. We have two actions in our model: the response action and the createHeaders action.

      The createHeaders action creates the SOAP headers that we will pass to the SoftLayer API; this is the most complicated method in the model. Understanding how SOAP is formed and how to get the correct output from PHP is the key to accessing SoftLayer’s API. On line 77, you will see an array called Headers; it will store the headers we are about to make so that we can easily pass them along to the API client.

      First we will need to create the initial headers to communicate with SoftLayer’s API. This is what they should look like:

      <authenticate xsi:type="slt:authenticate" xmlns:slt="http://api.service.softlayer.com/soap/v3/SLTypes/">
          <username xsi:type="xsd:string">MY_USERNAME</username>
          <apiKey xsi:type="xsd:string">MY_API_ACCESS_KEY</apiKey>
      </authenticate>
      <SoftLayer_API_METHODInitParameters xsi:type="v3:SoftLayer_API_METHODInitParameters" >
          <id xsi:type="xsd:int">INIT_PARAMETER</id>
      </SoftLayer_API_METHODInitParameters>

      In order to build this we will need a few saved properties from our instantiated object: our API username, API key, the service, initialization parameters, and the SoftLayer API type namespace. The API username and key will need to be set by the controller, or you can add yours to the model to use as a default. I will store mine in a separate file and include it in the controller, but on a production server you might want to store this info in a database and create a “user” model.

      First, we instantiate SoapVar objects for each authentication node that we need. Then we store the SoapVar objects in an array and create a new SoapVar object for the “authenticate” node; the data for the “authenticate” node is that array, and the encoding is type SOAP_ENC_OBJECT. Understanding how to nest SoapVar objects is the key to creating well-formed SOAP in PHP. Finally, we instantiate a new SoapHeader object and append it to the Headers array. The second header we create and add to the Headers array is for initialization parameters; these are needed to run certain methods within SoftLayer’s API, and they essentially identify objects within your account. The final command in this method (__setSoapHeaders) is the magic PHP method that saves the headers into our SoapClient object. Now take a look at how I access the method: Because I have stored the SoapClient object as a property of the current class, I can use the arrow operator to access methods of that object through the $_client property of our class, or through the getClient() method of our class, which returns the client.

      The Response method is the action that actually contacts SoftLayer’s API and sends our SOAP request. Take a look at how I tell PHP that the string stored in our $_method property is actually a method of our $_client property by adding parentheses to the end of the $Method variable on line 71.
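Taken together, the constructor, setOptions, overloading, and variable-method patterns above can be sketched in miniature. This is an illustrative reconstruction, not the actual code from the SOAP API Code Examples page; the names are simplified, and a stub client stands in for the real SoapClient so the control flow is visible without contacting the API:

```php
<?php
// Stands in for a SoapClient exposing an API method such as getObject().
class StubClient
{
    public function getObject()
    {
        return 'stub response';
    }
}

class Call
{
    protected $_client;
    protected $_method = 'getObject';

    // Type hinting plus a default parameter: instantiating with no
    // arguments skips option processing entirely.
    public function __construct(array $Setup = array())
    {
        $this->_client = new StubClient();
        if (!empty($Setup)) {
            $this->setOptions($Setup);
        }
    }

    // Match each passed option to a set method found via get_class_methods().
    public function setOptions(array $Setup)
    {
        $methods = get_class_methods($this);
        foreach ($Setup as $name => $value) {
            if (in_array('set' . ucfirst($name), $methods)) {
                $this->__set($name, $value);
            }
        }
    }

    // Overloading: shorthand access routed through the set/get methods.
    public function __set($name, $value)
    {
        $setter = 'set' . ucfirst($name);
        $this->$setter($value);
    }

    public function __get($name)
    {
        $getter = 'get' . ucfirst($name);
        return $this->$getter();
    }

    public function setMethod($method) { $this->_method = $method; }
    public function getMethod() { return $this->_method; }

    // The Response action: the string in $_method becomes a method call on
    // the client by adding parentheses to the variable.
    public function response()
    {
        $Method = $this->_method;
        return $this->_client->$Method();
    }
}

$call = new Call(array('method' => 'getObject'));
echo $call->response(); // prints "stub response"
```

In the real model, StubClient would be a SoapClient configured with SoftLayer’s WSDL, and $_method would name one of the API’s service methods.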

View
The view is what the user interprets; this is where we present our information and create a basic layout for the web page. Take a look at “The View” section on SOAP API Code Examples. Here I create a basic web page layout, display output information from the controller, and create a form for sending requests to the controller. Notice that the view is a mixture of HTML and PHP, so make sure to name it view.php so that the server knows to process the PHP before sending it to the client.
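For illustration only (the real markup lives on the code examples page, and the controller.php action target is a hypothetical file name), a view mixing HTML and PHP might look roughly like this, with $Result supplied by the controller:

```php
<?php
// Illustrative view sketch: render any controller output, then show the form
// that posts a service and method back to the controller.
function renderView($Result = '')
{
    $output = '<html><body>';
    if ($Result !== '') {
        // Display output information from the controller.
        $output .= '<pre>' . htmlspecialchars($Result) . '</pre>';
    }
    // Form for sending requests to the controller.
    $output .= '<form method="post" action="controller.php">'
             . '<input type="text" name="service" placeholder="Service">'
             . '<input type="text" name="method" placeholder="Method">'
             . '<input type="submit" value="Send">'
             . '</form></body></html>';
    return $output;
}

echo renderView('SoftLayer_Account::getObject result would appear here');
```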

Controller
The controller separates user interaction from business logic. It accepts information from the user and formats it for the model. It also receives information from the model and sends it to the view. Take a look at “The Controller” section on SOAP API Code Examples. I accept variables posted from the view and store them in an array to send to the model on lines 6-11. I then instantiate the $Call object with the parameters specified in the $Setup array, and on line 17 I store the response from the Response method as $Result for use by the view.
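A hedged sketch of that flow, with a stub model standing in for the real SOAP call class and hard-coded values standing in for the $_POST fields the view would supply:

```php
<?php
// Stub model: accepts a $Setup array like the real Call class would.
class StubCall
{
    private $_setup;

    public function __construct(array $Setup = array())
    {
        $this->_setup = $Setup;
    }

    public function response()
    {
        return 'Called ' . $this->_setup['service'];
    }
}

// In the real controller these values come from $_POST via the view's form.
$posted = array('service' => 'SoftLayer_Account', 'method' => 'getObject');

// Format user input for the model.
$Setup = array(
    'service' => $posted['service'],
    'method'  => $posted['method'],
);

$Call   = new StubCall($Setup);  // instantiate the model with the options
$Result = $Call->response();     // hand the model's response to the view
echo $Result; // prints "Called SoftLayer_Account"
```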

Have Fun!
Although this tutorial covers many different things, it just opens up the basic utilities of SoftLayer’s API. You should now have a working view to enter information and see what kind of data you will receive. The first service and method you should try is the SoftLayer_Account service with the getObject method; this will return your account information. Then try the SoftLayer_Account service with the getHardware method; it will return all of the information for all of your servers. Take the IDs from those servers and try out the SoftLayer_Hardware_Server service and the getObject method with an ID as the initialization parameter.

More examples to try: SoftLayer Account, SoftLayer DNS Domain, SoftLayer Hardware Server. Once you get the hang of it, try adding Object Masks and Result Limits to your model.

-Kevin

Free the dwarf planets!

Most people will probably think of tomorrow as the 5-year anniversary of the demotion of former planet Pluto. That seems fair; the Pluto demotion got all of the news, caused all of the fights, and promoted all of the discussion. But now that tempers have cooled and the world has come to terms with a new, more scientific eight-planet solar system, it is time to remember the other important thing:

The death of the 10th planet

A remembrance of 5 years ago, today, excerpted from How I Killed Pluto and Why It Had It Coming

As an astronomer, I have long had a professional aversion to waking up before dawn, preferring instead to see sunrises not as an early morning treat, but as the signal that the end of a long night of work has come, and it is finally time for overdue sleep. But in the pre-dawn of August 25th, 2005, I …

SLDN 2.0 – The Development Network Evolved

SoftLayer is in a constant state of change … It’s not that bad change we all fear; it’s the type of change that allows you to stretch the boundaries of your normal experience and run like a penguin … Because I got some strange looks when coworkers read “run like a penguin,” I should explain that I recently visited Moody Gardens in Galveston and saw penguins get crazy excited when they were about to get fed, so that’s the best visual I could come up with. Since I enjoy a challenge (and enjoy running around like a penguin), when I was asked to design the new version of SLDN, I was excited.

The goal was simple: Take our already amazing documentation software infrastructure and make it better. A large part of this was to collapse our multi-site approach down into a single unified user experience. Somewhere along the way, “When is the proposal going to be ready?” became “When is the site going to be ready?” At that point, I realized that all of the hurdles I had been trampling over in my cerebral site building were still there, standing, waiting for me on my second lap.

I recently had the honor of presenting our ideas and philosophy and sharing some insight into the technical details of the site at OSCON 2011, and KHazzy had the forethought to record it for all of you!

It’s a difficult balance to provide details and not bore the audience with tech specs, so I tried to keep the presentation relatively light to encourage attendees (and now viewers) to ask questions about areas they want a little more information about. If you’re looking at a similar project in the future, feel free to bounce ideas off me, and I’ll steer you clear of a few land mines I happened upon.

-Phil

SendGrid: Technology Partner Spotlight

Welcome to the next installment in our blog series highlighting the companies in SoftLayer’s new Technology Partners Marketplace. These Partners have built their businesses on the SoftLayer Platform, and we’re excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.
- Paul Ford, SoftLayer VP of Community Development

 

Scroll down to read the guest blog from Tim Falls of SendGrid, a technology partner that provides cloud-based email infrastructure for reliable delivery, scalability, real-time analytics and flexible APIs for customers who want to focus on driving their own growth and profitability. To learn more about SendGrid, visit http://sendgrid.com/.

Understanding the Value of [Email] Infrastructure Services

The Fall of DIY … As We Know It
Today more than ever before, businesses depend on third-party services to operate efficiently and achieve their objectives. As a business leader, you have countless web applications and software-as-a-service solutions at your fingertips, which collectively address just about any problem or demand imaginable. Examples include cloud-based file storage, cloud and dedicated web hosting, recurring billing applications, online HR management portals, APIs for telephony and geo-data, and managed email infrastructure and delivery services. Startups and established corporations alike can utilize these tools quickly and simply with a credit card and a few clicks on a trackpad.

So, what does this mean, and why is it worth recognizing and appreciating? Well, it means that your life is a lot easier than it was 10 years ago. And if you fail to recognize the opportunities and advantages that these resources offer, your competitors will soon leave you in their proverbial dust … if they haven’t already.

The gist:

  • You don’t have to do everything yourself anymore … So don’t!
  • Be the best at what you do, and rely on other experts to help with everything outside of your realm.

The Email Puzzle
Let’s face it. Email sucks. Not email in and of itself – obviously, it is an essential part of our lives and is arguably one of the most transformative communication tools in human history. But, from a business standpoint, the implementation and maintenance of an effective and efficient email system is truly a nightmare. If there is one thing that web developers across the world can agree upon, it may be this: Successfully integrating email into a web application just ain’t fun!

To better understand the challenges developers face when integrating email into their web applications, let’s look at an example (fresh from my imagination). Through this discussion, we’ll uncover the clear advantages of working with a partner in email infrastructure and delivery.

Let’s say you’re building PitLovabull.com — a social, online community for dog owners. Sound lame? Well, it’s not … because it’s “different.” As the clever name indicates, it’s specifically for pit bull owners and advocates. Community members interact with each other and your company in a number of ways: Forum discussions, photo sharing, commenting, direct messages, the “give a dog a bone” button (think “like”) and buying cool doggy stuff. Each of these features involves email notifications … “Sporty’s owner just responded to your forum post on Healthy Dog Diets.” “Barney’s owner just tagged your puppy Stella in a photo.” “Thanks for purchasing a new collar for Boss! We’ll notify you by email when your package has shipped!”

After six months of grassroots marketing, tens of thousands of passionate pit bull owners have joined your community, and your email volume has grown from 800/week to 8,000/day (that’s almost 250k/month!). As a budding bootstrapped startup, you cut costs wherever you can, and you choose to manage your own email servers. You quickly find out that server costs grow substantially as you send more mail, customers are complaining that they aren’t receiving their email notifications, and your support team is stretched thin dealing with confused and frustrated customers. The end result: Poor deliverability is directly (and negatively) affecting revenue! What’s more: You have no insight into what is happening to your emails – Are they being delivered? Opened? Are links within them being clicked? Have you been blacklisted by an ISP?

Upon deep reflection, you realize that your developers are spending more time on email than they spend building awesome features for the community! Plus, you find yourself, the CEO/Founder of the company, researching mundane crap like ISP rate limits, Sender Policy Framework, DKIM, and the CAN-SPAM Act of 2003 — a few of the less-than-interesting aspects of email that must be understood in order to achieve optimal deliverability of your notifications and newsletters.

Luckily, you just hired Joey, a fresh, young hacker who’s active in the developer ecosystem and always on top of the latest technologies. While exploring PitLovabull’s web hosting control panel on your SoftLayer servers, he discovers a better alternative: The SoftLayer Email Delivery Service – a hosted and managed email infrastructure that’s already built for you! Joey signs up with a credit card for $150/month (which covers a full 250k emails/month), changes a few settings on your web application, and within minutes all of your email is being relayed through SendGrid.

May All Your Email Dreams Come True
A few months go by … Email is in your customers’ inboxes. Deliverability is being tracked and displayed on your web dashboard, along with open and click rates, blocks, bounces, spam reports and unsubscribes. Customer Support receives fewer emails, calls, and IM chat requests. Engineering is busy implementing a backlog of feature requests (not doing email stuff). Sales are gradually increasing and overall customer satisfaction is higher than ever.

Empowering Developers
But wait, it gets better! After researching SendGrid’s APIs, you recognize the potential for extreme customization, in the form of internal and external features. Internally, the SMTP API allows you to assign a “category” to each of your emails (password reminders, purchase confirmations, etc.) and in turn collect unique statistics for each category. Externally, the Parse API allows you to receive incoming emails to your web app. In a single day, Joey codes up a new feature, and now any community member can email a picture of their pup to post@pitlovabulls.com, include a caption in the subject line, and the picture and caption are automagically posted to that user’s profile!
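To make the “category” idea concrete, here is a minimal sketch of tagging an outgoing notification via SendGrid’s SMTP API, which reads a JSON-formatted X-SMTPAPI header on relayed mail. The addresses, credentials and category name are placeholders invented for this example, not anything from PitLovabull’s actual code:

```python
# Sketch: tag an outgoing notification with a SendGrid "category" so it
# shows up as its own line in the per-category statistics.
import json
import smtplib
from email.mime.text import MIMEText

msg = MIMEText("Sporty's owner just responded to your forum post.")
msg["Subject"] = "New reply on Healthy Dog Diets"
msg["From"] = "notifications@pitlovabulls.com"   # placeholder sender
msg["To"] = "member@example.com"                 # placeholder recipient

# SendGrid's SMTP API reads instructions from the X-SMTPAPI header;
# "category" groups this mail's stats separately from, say, receipts.
msg["X-SMTPAPI"] = json.dumps({"category": "forum_notification"})

# Relaying is commented out so the sketch stays self-contained; real code
# would authenticate against SendGrid's SMTP endpoint and send:
# with smtplib.SMTP("smtp.sendgrid.net", 587) as server:
#     server.starttls()
#     server.login("sendgrid_username", "sendgrid_password")
#     server.send_message(msg)

print(msg["X-SMTPAPI"])
```

Because the tag rides along in a header, the application’s existing mail-sending path barely changes: one extra line per message type.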

The New Meaning of Do-It-Yourself
We all know it’s difficult to trust a third party to handle the critical elements of any operation. With the help of proven SaaS models that employ advanced technology, cloud-based infrastructures and dedicated experts, companies can now feel more comfortable moving into a modern mode of doing-it-themselves: Pay a nominal monthly fee to a service that handles email (or recurring billing, or telephony), and let the service do the dirty work and liberate the brains of your brilliant developers so they can focus on innovating with the tools available to them.

I hope this story helps entrepreneurs and business leaders think smarter as they build their dream. The lessons illustrated in the context of email apply across the board. We’re in a fascinating time, where building an internet business has never required less capital and has never allowed for the laser focus that is afforded to companies today. Open your toolbox, work smart, and build something that people love!

-Tim Falls, SendGrid

Subtract Server. Add Humor.

Once in a blue moon, a SoftLayer customer has to cancel a server. Sometimes their business is growing and they’re moving up to more powerful hardware, sometimes they need to consolidate their equipment to cut their costs, and sometimes their reason can’t really be categorized. In this case, a happy customer with a few dozen servers decided he needed to shut one down, and the explanation he gave would clearly fall into the third category:

Initial Ticket

Customer
I would like to cancel this server on August 20th, 2011, but not before that date. Anytime on this date will be okay.

We no longer have a need for this server and would like to cancel it before our next billing period. Thank you for your help in this matter. Please send me an email when this server has been canceled on August 20th, 2011.

She’s been with us for a long time, but things just aren’t working out … She’s become a gold digger. It’s her, not me. Please let her down easy. I don’t like punking out and having someone do my dirty work, but I’m afraid she might be violent. Diamond rings hurt when you get hit with them.

SoftLayer
I’m sorry to hear things did not work out for the two of you. While your safety is important to us, I must ask that you end this relationship via official channels.

Please submit an official cancellation request by going to Sales –> Cancel Server and proceeding through the cancellation steps. The server will be reclaimed at the end of your billing cycle on August 22nd.

Please let us know if you have any questions.

Customer
She always tried to make it hard for me to break up with her. Done!
 
SoftLayer
Glad to hear things went smoothly. They don’t always go that way, but we knew you could pull through. :-)
 

Official Cancellation Request

Customer
Word to your moms I came to drop bombs, I got more rhymes than the Bible’s got Psalms.
 
SoftLayer
Thanks for your unique note, definitely was a nice break from the norm.

We’re glad to continue being part of your success!

Please contact us should future needs arise.

Customer
Thanks, it was a subtle reminder to get out your seat and jump around.
 

Let this be a lesson to all of you: Get out your seat and jump around.

-@khazard

SoftLayer at HostingCon 2011

In my “HostingCon, Here We Come!” blog post, I promised that SoftLayer would be Bigger, Better and Badder at HostingCon 2011, and we made some pretty ambitious plans to be sure that was the case: Six conference panels and speaking sessions, SoftLayer’s biggest expo hall presence ever, in-booth presentations about everything from Portal 4 to Social Media, our infamous Server Challenge, and the biggest party in HostingCon history … Heck, we even let PHIL attend to do some “research” for PHIL’s DC. We pulled out all the stops.

Now that the dust has settled and the sunburns have started to heal, I can share a glimpse into SoftLayer’s HostingCon experience with anyone who wasn’t able to make it to San Diego last week.

HostingCon Expo Hall

When you walked onto the conference floor, you saw SoftLayer, and if you managed to miss our 20′x40′ two-story booth or the commotion around it, you were probably in the wrong hall. Each person on our team had a chance to speak with hundreds of attendees, and at the end of every conversation, we gave some swag as parting gifts: Switch balls, foam rockets and limited-edition “Robot” T-shirts:

Robot Shirt

Our in-booth theater was the venue where Marc Jones showed off the private beta of our new Flex Images for dedicated servers, Jeff Reinis talked about how customers can take advantage of our international expansion, Stephen Johnson gave a tour of Portal 4, Kevin Hazard shared some tips and tricks to managing social media, and Phil Jackson dove into the API.

Take a virtual stroll around the conference center with us:

And as you can tell from the pictures, the Server Challenge was a big hit.

The Server Challenge

If you bring a cabinet of servers to a conference full of server geeks, you’re going to get some attention. Challenge them to a hardware competition, and you’ll be inundated with attendee traffic. If you aren’t familiar with the in-booth activity, Kevin’s blog about the Server Challenge at OSCON is a perfect place to get your crash course. If you already know all about it (and if you’ve competed in it), you’ll be even more interested in seeing some of the action from the show floor:

At 3:07 in that video, you can see the eventual winner of the HostingCon Server Challenge complete a run on Day 1. His iPad 2-winning time was 1:01.77, and he beat some pretty stiff competition for the title of Server Challenge Champ.

Geeks Gone Wild

Put SoftLayer, cPanel and Resell.biz in a room, and you have a party. Add free drinks, a thousand of our closest friends, The Dan Band and a legendary venue, and you’ve got yourself the biggest party in HostingCon history:

If you took part in any or all of the above shenanigans, thank you! We owe a great deal of our success at HostingCon to you. Once everyone finally catches up on the sleep they missed last week, we’ll get the wheels turning to figure out a way to go even bigger next year in Boston … Speaking of which, does anyone know where I can get a boat that was in the Boston Harbor on December 16, 1773?

-@gkdog

The Redemption of Snow White (Part 3 of 3)

(don't forget to read Part 1 and Part 2)

Snow White’s chance for redemption finally came last year.  I got an email from Adam Burgasser, an astronomer at UC San Diego, best known for his studies of brown dwarfs in the local universe (less well known, but perhaps more relevant in this case, is that I was his Ph.D. advisor a decade ago). Adam had just moved from MIT where he had helped design a

UNIX Sysadmin Boot Camp: An Intro to SSH

You’ve got a ‘nix box set up. For some reason, you feel completely lost and powerless. It happens. Many a UNIX-related sob has been cried by confused and frustrated sysadmins, and it needs to stop. As a techie on the front lines of support, I’ve seen firsthand the issues that new and curious sysadmins seem to have. We have a lot of customers who like to dive head-first into a new environment, and we even encourage it. But there’s quite a learning curve.

In my tenure at SoftLayer, I’ve come across a lot of customers who rely almost entirely on control panels provided by partners like cPanel and Parallels to administer their servers. While those panels simplify some fairly complex tasks to the touch of a button, we all know that one day you’re going to have to get down and dirty in that SSH (Secure Shell) interface that so many UNIX server newbies fear.

I’m here to tell you that SSH can be your friend, if you treat it right. Graphical user interfaces like the ones used in control panels have been around for quite a while now, and despite the fact that we are in “the future,” the raw power of a command line is still unmatched in its capabilities. It’s a force to be reckoned with.

If you’re accustomed to a UNIX-based interface, this may seem a little elementary, but you and I both know that as we get accustomed to something, we also tend to let those all-important “basics” slip from our minds. If you’re coming from a Windows background and are new to the environment, you’re in for a bit of a shell shock, no pun intended. The command line is fantastically powerful once you master it … It just takes a little time and effort to learn.

We’ll start slow and address some of the most common pain points for new sysadmins, and as we move forward, we’ll tackle advanced topics. Set your brain to “absorbent,” and visualize soaking up these UNIX tips like some kind of undersea, all-knowing, Yoda-like sea sponge.

SSH

SSH allows data to be exchanged securely between two networked devices, and when the “network” between your workstation and server is the Internet, the fact that it does so “securely” is significant. Before you can do any actual wielding of SSH, you’re going to need to know how to find this exotic “command line” we’ve talked so much about.

You can use a third-party client such as PuTTY or WinSCP if your workstation is Windows-based, or, if you’re on Linux or Mac, you can access SSH from your terminal application: ssh user@ipaddress. Once you’ve gotten into your server, you’ll probably want to find out where you are, so give the pwd command a try:

user@serv: ~$ pwd
/home/user
user@serv: ~$

It’s as easy as that. Now we know we’re in the /home/user directory. Most of the time, you’ll find yourself starting in your home directory. This is where you can put personal files and documents. It’s kind of like “My Documents” in Windows, just on your server.

Now that you know where you are, you’ll probably want to know what’s in there. Take a look at these commands (extracted from a RedHat environment, but also usable in CentOS and many other distributions):

    user@serv: /usr/src $ ls    
This will give you a basic listing of the current directory.

    user@serv: /usr/src $ ls /usr/src/redhat    
This will list the contents of another specified directory.

    user@serv: /usr/src $ ls ./redhat    
Using a “relative pathname,” this will perform the same action as above.

    user@serv: /usr/src $ ls redhat    
Most of the time, you’ll get the same results even without the “./” at the beginning.

    user@serv: /usr/src $ cd /usr/src/redhat/    
This is an example of using the cd command to change directories to an absolute pathname.

    user@serv: /usr/src $ cd redhat    
This is an example of using the cd command to change directories to a relative pathname.

    user@serv: /usr/src/redhat $ cd /usr/src    
To move back one directory from the working directory, you can use the destination’s absolute path.

    user@serv: /usr/src/redhat $ cd ..    
Or, since the desired directory is one level up, you can use two dots to move back.

You’ll notice many similarities to the typical Windows DOS prompts, so it helps if you’re familiar with navigating through that interface: dir, cd, cd .., cd /. Everything else, on the other hand, will prove to be a bit different.

Now that you’re able to access this soon-to-be-powerful-for-you tool, you need to start learning the language of the natives: bash. In our next installment, we’ll take a crash course in bash, and you’ll start to get comfortable navigating and manipulating content directly on your server.

Bookmark the SoftLayer Blog and come back regularly to get the latest installments in our “UNIX Sysadmin Boot Camp” series!

-Ryan

The redemption of Snow White (Part 2)

(read Part 1)

One of the nicest things about science is that, usually, when you’re wrong, you’re just wrong. There is no use sitting around arguing about it or trying to persuade someone to change his mind; you’re just plain wrong, and the universe has explained it to you. Game over. Thanks for playing. Try again later. Next?
Only there really was no “next.” Red? For the most part, colors of

The redemption of Snow White (Part 1)

Nearly four years ago, during the Ph.D. thesis research of my former graduate student Meg Schwamb, we discovered a distant bright Kuiper belt object. Our hope had been that something so distant would be like Sedna – far away, but part of an even more distant population. But it wasn’t. The object was more like Eris – far away, but on its way back in. The object got an official license plate

Blood, Sweat and Tears: The Server Challenge

When you’re walking down the aisles of an expo hall at a technical conference, what do you expect to see? Stacks of collateral? Maybe a few giveaway T-shirts? A fancy switch-ball or two? How about a crowd of people watching as a fellow attendee slams hard drive trays into a server enclosure and frantically plugs in network cables as a digital clock times them?

Cynical attendees might look at the Server Challenge and think of it as a gimmicky way to draw a crowd to our booth, but when you step up to the server enclosure to compete, you’re getting a crash course in SoftLayer’s business (along with an exciting tangible experience).

Before your first attempt, you’ll learn that SoftLayer is a hosting provider and that you’ll be reassembling a miniature version of the larger server racks we have filling data centers around the country (soon to be around the world). You see that one of SoftLayer’s biggest differentiators is our network configuration: A public network, a private network and an out-of-band management network connection to every SoftLayer server for free … And when the clock starts, we can share even more of the SoftLayer story.

Our goal is to let you experience SoftLayer while you’re just hearing about other companies. As it turns out, the experience draws people in:

One of the coolest parts of pulling together that time lapse video from OSCON was seeing the reactions on the faces of the participants when they finished. The challenge sparks a surge of adrenaline, so when competitors stop the clock, they expectantly check to see how they fare against the conference’s Top 10 times.

At the last conference alone, no fewer than five other companies (who don’t even have a connection to the hosting industry) approached us to ask how they could build their own Server Challenge. Needless to say, the Server Challenge is becoming a SoftLayer conference staple … And we’re looking forward to the hottest competition ever at HostingCon 2011 next week!

Between your study of server schematics and your dissection of the winning run’s strategy from the end of the OSCON video, make sure you click through to George’s HostingCon preview so you can learn where to find SoftLayer in San Diego.

-@khazard

P.S. Space is limited for the HostingCon Party, so if you’ll be in town, make sure to let us know so we can give you a promo code for free admission!


Technology Partner Spotlight: CyberlinkASP

Welcome to the next installment in our blog series highlighting the companies in SoftLayer’s new Technology Partners Marketplace. These Partners have built their businesses on the SoftLayer Platform, and we’re excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.
- Paul Ford, SoftLayer VP of Community Development

 

Scroll down to read the guest blog from Chris Lantrip, CEO of CyberlinkASP, an application service provider focused on hosting, upgrading and managing the industry’s best software. To learn more about CyberLinkASP, visit http://www.cyberlinkasp.com/.

The DesktopLayer from CyberlinkASP

Hosted virtual desktops – SoftLayer style.

In early 2006, we were introduced to SoftLayer. In 2007, they brought us StorageLayer, and in 2009, CloudLayer. Each of those solutions met a different kind of need in the Application Service Provider (ASP) world, and by integrating those platforms into our offering, DesktopLayer was born: The on-demand anytime, anywhere virtual desktop hosted on SoftLayer and powered by CyberlinkASP.

CyberlinkASP was originally established to instantly web-enable software applications that were not online in the past. Starting off as a Citrix integration firm in the early days, we were approached by multiple independent software vendors asking us to host, manage and deliver their applications from a centralized database platform to their users across multiple geographic locations. With the robust capabilities of Citrix, we were able to revolutionize application delivery and management for several ISVs.

Over time, more ISVs started showing up at our doorstep, and application delivery was becoming a bigger and bigger piece of our business. Our ability to provision users on a specific platform in minutes, delete them in minutes, perform updates and maintain hundreds of customers and thousands of users all at one time from a centralized platform was very attractive.

Our users began asking us, “Is it possible to put our payroll app on this platform too?” “What about Exchange and Office?” They loved the convenience of not managing the DBs for individual applications, and they obviously wanted more. Instead of providing one-off solutions for individual applications, we built the DesktopLayer, a hosted environment for virtual desktops.

We deliver a seamless and integrated user experience utilizing SoftLayer, Citrix XenApp and XenDesktop. When our users log in they see the same screen, the same applications and the same performance they received on their local machine. The Citrix experience takes over the entire desktop, and the look and feel is indistinguishable. It’s exactly what they are accustomed to.

Our services always include the Microsoft suite (Exchange, Office, SharePoint) and are available on any device, from your PC to your Mac to your iPad. To meet the needs of our customers, we also integrate all third-party apps and non-Microsoft software into the virtual desktop – if our customers are using Peachtree or QuickBooks for accounting and Kronos for HR, they are all seamlessly published to the users who access them, and unavailable to those who do not.

We hang our hat on our unique ability to tie all of a company’s applications into one centralized user experience and support it. Our Dallas-based call center is staffed with a team of knowledgeable engineers who are always ready to help troubleshoot and can add/delete and customize new users in minutes. We take care of everything … When someone needs help setting up a printer or they bought a new scanner, they call our helpdesk and we take it from there. Users can call us directly for support and leave the in-house IT team to focus on other areas, not desktop management.

With the revolution of cloud computing, many enterprises are trending toward the eradication of physical infrastructure in their IT environments. Every day, we see more and more demand from IT managers who want us to assume the day-to-day management of their end user’s entire desktop, and over the past few years, the application stack that we deliver to each of our end users has grown significantly.

As Citrix would say “the virtual desktop revolution is here.” The days of having to literally touch hundreds of devices at users’ workstations are over. Servers in the back closet are gone. End users have become much more unique and mobile … They want the same access, performance and capabilities regardless of geography. That’s what we provide. DesktopLayer, with instant computing resources available from SoftLayer, is the future.

I remember someone telling me in 2006 that it was time for the data center to “grow up”. It has. We now have hundreds of SMB clients and thousands of virtual desktops in the field today, and we love having a chance to share a little about how we see the IT landscape evolving. Thanks to our friends at SoftLayer, we get to tell that story and boast a little about what we’re up to!

- Chris M. Lantrip, Chief Executive, CyberlinkASP

Under the Hood of ‘The Cloud’

When we designed the CloudLayer Computing platform, our goal was to create an offering where customers would be able to customize and build cloud computing instances that specifically meet their needs: If you go to our site, you’re even presented with an opportunity to “Build Your Own Cloud.” The idea was to let users choose where they wanted their instance to reside as well as their own perfect mix of processor power, RAM and storage. Today, we’re taking the BYOC mantra one step farther by unveiling the local disk storage option for CloudLayer computing instances!

Local Disk

For those of you familiar with the CloudLayer platform, you might already understand the value of a local disk storage option, but for the uninitiated, this news presents a perfect opportunity to talk about the dynamics of the cloud and how we approach the cloud around here.

As the resident “tech guy” in my social circle, I often find myself helping friends and family understand everything from why their printer isn’t working to what value they can get from the latest and greatest buzzed-about technology. As you’d probably guess, the majority of the questions I’ve been getting recently revolve around ‘the cloud’ (thanks especially to huge marketing campaigns out of Redmond and Cupertino). That abstract term effectively conveys the intentional sentiment that users shouldn’t have to worry about the mechanics of how the cloud works … just that it works. The problem is that as the world of technology has pursued that sentiment, the generalization of the cloud has abstracted it to the point where this is how large companies are depicting the cloud:

Cloud

As it turns out, that image doesn’t exactly elicit the, “Aha! Now I get it!” epiphany of users actually understanding how clouds (in the technology sense) work. See how I pluralized “clouds” in that last sentence? ‘The Cloud’ at SoftLayer isn’t the same as ‘The Cloud’ in Redmond or ‘The Cloud’ in Cupertino. They may all be similar in the sense that each cloud technology incorporates hardware abstraction, on-demand scalability and utility billing, but they’re not created in the same way.

If only there were a cloud-specific Declaration of Independence …

We hold these truths to be self-evident, that all clouds are not equal, that they are endowed by their creators with certain distinct characteristics, that among these are storage, processing power and the ability to serve content. That to secure these characteristics, information should be given to users, expressed clearly to meet the needs of the cloud’s users;

The Ability to Serve Content
Let’s unpack that Jeffersonian statement a little by looking at the distinct characteristics of every cloud, starting with the third (“the ability to serve content”) and working backwards. Every cloud lives on hardware. The extent to which a given cloud relies on that hardware can vary, but at the end of the day, you – as a user – are not simply connecting to water droplets in the ether. I’ll use SoftLayer’s CloudLayer platform as a specific example of what a cloud actually looks like: We have racks of uniform servers – designated as part of our cloud infrastructure – installed in rows in our data centers. All of those servers are networked together, and we worked with our friends at Citrix to use the XenServer platform to tie all of those servers together and virtualize the resources (or more simply: to make each piece of hardware accessible independently of the rest of the physical server it might be built into). With that infrastructure as a foundation, ordering a cloud server on the CloudLayer platform simply involves reserving a small piece of that cloud where you can install your own operating system and manage it like an independent server or instance to serve your content.

Processing Power
Understanding the hardware architecture upon which a cloud is built, the second distinct characteristic of every cloud (“processing power”) is fairly logical: The more powerful the hardware used for a given cloud, the better processing performance you’ll get in an instance using a piece of that hardware.

You can argue about what software uses the least resources in the process of virtualizing, but apples-to-apples, processing power is going to be determined by the power of the underlying hardware. Some providers try to obfuscate the types of servers/processors available to their cloud users (sometimes because they are using legacy hardware that they wouldn’t be able to sell/rent otherwise), but because we know how important consistent power is to users, we guarantee that CloudLayer instances are based on 2.0GHz (or faster) processors.

Storage
We walked backward through the distinct characteristics included in my cloud-specific Declaration of Independence because of today’s CloudLayer Computing storage announcement, but before I get into the details of that new option, let’s talk about storage in general.

If the primary goal of a cloud platform is to give users the ability to scale instantly from 1 CPU of power to 16 CPUs of power, the underlying architecture has to be as flexible as possible. Let’s say your cloud computing instance resides on a server with only 10 CPUs available, so when you upgrade to a 16-CPU instance, your instance will be moved to a server with enough available resources to meet your need. To make that kind of quick change possible, most cloud platforms are connected to a SAN (storage area network) or other storage device via a back-end network to the cloud servers. The biggest pro of having this setup is that upgrading and downgrading CPU and RAM for a given cloud instance is relatively easy, but it introduces a challenge: The data lives on another device that is connected via switches and cables and is being used by other customers as well. Because your data has to be moved to your server to be processed when you call it, it’s a little slower than if a hard disk was sitting in the same server as the instance’s processor and RAM. For that reason, many users don’t feel comfortable moving to the cloud.

In response to the call for better-performing storage, there has been a push toward incorporating local disk storage for cloud computing instances. Because local disk storage is physically available to the CPU and RAM, the transfer of data is almost immediate and I/O (input/output) rates are generally much higher. The obvious benefit of this setup is that the storage will perform much better for I/O-intensive applications, while the tradeoff is that the setup loses the inherent redundancy of having the data replicated across multiple drives in a SAN (which is almost like its own cloud … but I won’t confuse you with that right now).
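You can see the difference for yourself with a rough sequential-write check. The sketch below is illustrative rather than a rigorous benchmark (a real comparison would control for caching, block sizes and concurrency); run it on a SAN-backed instance and on a local-disk instance and compare the throughput figures. The file path is an arbitrary scratch location:

```python
# Rough sequential-write throughput check (a sketch, not a benchmark).
import os
import time

path = "/tmp/io_test"          # arbitrary scratch file for this example
block = b"\0" * (1 << 20)      # one 1 MiB block of zeros
blocks = 64                    # write 64 MiB total

start = time.time()
with open(path, "wb") as f:
    for _ in range(blocks):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())       # force data to the device so we time real I/O
elapsed = time.time() - start

print(f"wrote {blocks} MiB in {elapsed:.2f}s ({blocks / elapsed:.1f} MB/s)")
os.remove(path)                # clean up the scratch file
```

The fsync call matters: without it you would mostly be timing the operating system’s page cache, and SAN and local disk would look deceptively similar.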

The CloudLayer Computing platform has always been built to take advantage of the immediate scalability enabled by storing files in a network storage device. We heard from users who want to use the cloud for other applications that they wanted us to incorporate another option, so today we’re happy to announce the availability of local disk storage for CloudLayer Computing! We’re looking forward to seeing how our customers are going to incorporate cloud computing instances with local disk storage into their existing environments with dedicated servers and cloud computing instances using SAN storage.

If you have questions about whether the SAN or local disk storage option would fit your application best, click the Live Chat icon on SoftLayer.com and consult with one of our sales reps about the benefits and trade-offs of each.

We want you to know exactly what you’re getting from SoftLayer, so we try to be as transparent as we can when rolling out new products. If you have any questions about CloudLayer or any of our other offerings, please let us know!

-@nday91