Lock-In: Why Your OS Choice Matters in the Cloud

Public Cloud Lock-In

I ran across an article last week about the fear of cloud lock-in being a “key concern of companies considering a cloud move”.  The article was spot on in pointing out that dependence upon some of the higher-level public cloud service features hinders a user’s ability to migrate to another cloud.  There is a real risk in being locked into a public cloud service, not only due to dependence on the vendor’s services, but also due to the complexity and cost of trying to move your data out.  The article concludes by stating that there “aren’t easy answers to this problem”, which I think is true…but I also think that by keeping two things in mind, a user can do a lot to mitigate the lock-in risk.

1. Choose an Independently Produced Operating System

Whatever solutions you decide to deploy, it’s absolutely critical that you choose an operating system not produced by the public cloud provider.  This recent fad of public cloud providers creating their own specific OS is just history repeating itself, with HP-UX, IRIX, Solaris, and AIX being replaced by the likes of GCEL and Amazon Linux.  Sure, the latter are Linux-based, but just like the proprietary UNIX operating systems of the past, they are developed internally, only support the infrastructure they’re designed for, and are only serviceable by the company that produces them.  Of course the attraction of using these operating systems is understandable, because the provider can offer them for “free” to users desiring a supported OS in the cloud.  They can even price services lower for customers who use their OS as an incentive and “benefit”, claiming it allows them to provide better and faster support.  It’s a perfect solution…at first.

However, once you’ve deployed your solution onto a public cloud vendor-specific OS, you have taken a huge first step towards lock-in.  Sure, the provider can say their OS is based on an independently produced operating system, but that means nothing once the two have diverged due to security updates and fixes, not to mention release schedules and added features.  There’s no way the public cloud vendor OS can keep up, and they really have no incentive to, because they’ve already got you…the longer you stay on their OS, the more you will depend on their application and library versions, and thus the deeper you get.  A year or two down the road, another public cloud provider pops up with better service and/or prices, but you can’t move without risking extended downtime and/or loss of data, in addition to the cost of paying your IT team the overtime it will take to architect such a migration.  We’ve all been here before with proprietary UNIX, and luckily Linux arrived on the scene just in time to save us.

2. Choose an Operating System with Service Orchestration Support

Most of the lock-in features provided by public clouds are simply “Services as a Service”, be it a database service, a big data/MapReduce service, or a development platform service like Rails or Node.  All of these services are just applications that can be easily deployed, scaled, and connected to existing solutions.  Of course it’s easy to understand the attraction of using these public cloud provider services, because it means no setup, no maintenance, and someone else to blame if s**t goes sideways with the given service.  However, by accepting these services, you are also accepting a level of lock-in.  By creating or adapting your solution(s) to use the provider’s load balancing, monitoring, and/or database service, you are making them less portable, and thus harder and costlier to migrate.  I can’t blame the providers for doing this, because it makes *perfect* sense from a business perspective:

I’m providing a service that is commoditized…I can only play price wars for so long…so how can I keep my customers once that happens…services!  And what’s more, I don’t want them to easily use another cloud, so I’ll make sure my services require them to utilize my API…and possibly even provide a better experience on my own OS.

Now I’m not saying you shouldn’t use these services, but you should be careful of how much of them you consume and depend on.  If you ever intend or need to migrate, you will want a solution that covers the scenario of the next cloud provider not having the same service…or the service being priced at a higher rate than you can afford…or the service quality/performance not being as good.  This is where having a good service orchestration solution becomes critical, and if you don’t want to believe me…just ask the folks at IBM or OASIS.  And for the record, service orchestration is not configuration management…and you can’t get there by simply placing a configuration management tool in the cloud.  Trying to get configuration management tools to do service orchestration is like trying to teach a child to drive a car.  Sure, it can be done pretty well in a controlled, empty parking lot…on a clear day.  However, once you add unpredictable weather, pedestrians, and traffic, it gets real bad, real quick.  Why?  Because just like your typical configuration management tool, a child lacks the intelligence to react and adapt to the changing conditions in the environment.

Choose Ubuntu Server

Obviously I’m going to encourage the use of Ubuntu Server, not just because I work for Canonical and am an Ubuntu community member, but because I actually believe it’s currently the best option around.  Canonical and Ubuntu Server community members have put countless hours of effort into ensuring Ubuntu Server runs well in the cloud, and Canonical is working extremely hard with public cloud providers to ensure our users can depend on our images and public cloud infrastructure to get the fastest, cheapest, and most efficient cloud experience possible.  There’s much more to running well in the cloud than just putting up an image and saying “go!”.  Just to name a few examples: ensuring all instance sizes are supported; adding in-cloud mirrors across regions and zones for faster, cheaper updates; natively packaging API tools and hosting them in the archives; updating images with SRUs to avoid costly time spent applying updates at first boot; making daily development images available; and ensuring Juju works within the cloud to allow for service orchestration and migration to other supported public clouds.
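To make that last point concrete, here’s a minimal sketch of what a basic Juju workflow looks like on a public cloud.  The charm names (mysql, wordpress) are just illustrative examples, and exact flags can vary a bit between Juju releases:

    # Bootstrap an environment on the cloud defined in ~/.juju/environments.yaml
    juju bootstrap

    # Deploy services from charms and relate them to each other
    juju deploy mysql
    juju deploy wordpress
    juju add-relation wordpress mysql

    # Expose the front end to the world and scale it out
    juju expose wordpress
    juju add-unit wordpress

    # See how everything is doing
    juju status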

Speaking of Juju, we’ve also invested years (not months…YEARS) into our service orchestration project, and I can promise you that no one else, right now, has anything that comes close to what it can do.  Sure, there are plenty of people talking about service orchestration…writing about service orchestration…and some might even have a prototype or beta of a service orchestration tool, but no one comes close to what we have in Juju…no one has the community engagement behind their toolset, and ours is growing every day.  I’m not saying Juju is perfect by any means, but it’s the best you’re going to find if you are really serious about doing service orchestration in the cloud or even on the metal.
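That portability claim isn’t hand-waving, either.  Because charms don’t care which substrate they run on, moving a deployment to another supported cloud is largely a matter of pointing Juju at a different environment.  A rough sketch, assuming you’ve defined a second environment (called other-cloud here, purely a hypothetical name) in ~/.juju/environments.yaml with the new provider’s credentials:

    # Bootstrap and redeploy the same stack on a different provider
    juju bootstrap -e other-cloud
    juju deploy -e other-cloud mysql
    juju deploy -e other-cloud wordpress
    juju add-relation -e other-cloud wordpress mysql
    juju expose -e other-cloud wordpress

Your data still has to be migrated, of course, but the service topology itself comes along for free…which is exactly the kind of escape hatch you want when a provider’s prices or quality stop making sense.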

Over the next 12 months, you will see Ubuntu continue to push the limits of what users can expect from their operating system when it comes to scale-out computing.  You have already seen what the power of the Ubuntu community can do with a phone and tablet….just watch what we do for the cloud.
