Ubuntu is the OS for the Cloud, and here’s why…

Over the last few days, I’ve felt compelled to explain why I think Ubuntu is the best operating system for the cloud. In my mind, it comes down to three key differentiators that benefit both users and the overall advancement of the cloud.

1: Ubuntu Supports the Latest Technologies

Cloud computing and the technologies surrounding it are advancing at an absolutely incredible pace. Consider how fast OpenStack has matured in the last year, the recent explosion of Hadoop solutions, and the entire movement around Open Compute. Legacy “enterprise” Linux solutions simply cannot keep up, given their existing release processes. Users of the cloud and other scale-out technologies can’t afford to wait years for the next supported release, especially when that release is destined to be out of date the day it ships, thanks to the slow-moving technology transition model these distributions use: open source project foo releases at time A; it lands in the “community” version of the distribution at time B, six or more months later; then it *might* make it into the enterprise version at a much later time C, often years on.

If you ask these legacy distributions why they move so slowly, they’ll undoubtedly say it’s because they are aligning with the hardware release cycles of most server OEMs, which is absolutely true. This is why I’m so excited by the Open Compute Project and its potential to reduce what Andreas “Andy” Bechtolsheim recently called gratuitous differentiation in a keynote discussion at this year’s Open Compute Summit in NYC. In short, most OEMs have traditionally introduced features that are more about customer lock-in than about really answering their customers’ needs, e.g. releasing a new blade that requires a new bladecenter, that won’t work with the older model or in another OEM’s bladecenter…or even worse, building special server racks to match their servers that won’t work with anyone else’s…insane! The only benefit I’ve seen from gratuitous server technology differentiation is that it’s probably a big reason why so many businesses have jumped to the cloud…where they don’t have to worry about this stuff anymore. Hopefully, we can avoid different APIs and custom Linux distributions from each cloud service provider, as I feel these are just more attempts at customer lock-in, and don’t really provide that much value to the users themselves.

Legacy Linux distributions also like to tout the ABI compatibility they enforce for the benefit of their customers and ISV partners. The logic is that by guaranteeing the ABI at the kernel and plumbing layer throughout a given release and its updates, ISVs and their customers are assured that their applications (assuming they don’t change) will work for the life of the release. Besides again fitting the slow-paced legacy OEM server release model, this makes perfect sense in a legacy server software world too: an ISV can build a release once, then issue fixes thereafter, until the next major release a year or so later. As we move toward a faster-paced, continuous-integration, scale-out computing world, however, ABI compatibility becomes more of a hindrance than an advantage for users. The rate of innovation is now so fast that even packaging certain webscale applications is frowned upon by the upstreams that provide them, because they don’t want their users’ experience limited to a distribution’s release cycle. It also becomes difficult, sometimes impossible, for most of the legacy Linux distributions to introduce new hardware architectures, e.g. ARM server support, post-release. Server OEMs are forced to either go through the pain of backporting huge amounts of code into a forked kernel (that receives little outside testing), slip their own hardware roadmaps to match the distribution release cycle, or try to convince (usually with money) the legacy Linux distributor to issue some “special” release to accommodate them.
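To make the ABI argument concrete, here’s a minimal sketch using Python’s ctypes. The struct is hypothetical (no real kernel or library defines it); the point is simply that inserting a field into a shared structure shifts the offsets of everything after it, so binaries built against the old layout silently read the wrong bytes…exactly the breakage an ABI guarantee prevents, and exactly the kind of change a fast-moving project may need to make.

```python
# A hypothetical shared struct, before and after a new field is inserted.
import ctypes

class EventV1(ctypes.Structure):   # the layout applications were built against
    _fields_ = [("type",    ctypes.c_uint32),
                ("payload", ctypes.c_uint32)]

class EventV2(ctypes.Structure):   # a later release inserts a flags field
    _fields_ = [("type",    ctypes.c_uint32),
                ("flags",   ctypes.c_uint32),
                ("payload", ctypes.c_uint32)]

print(EventV1.payload.offset)   # 4 -- where an old binary looks for payload
print(EventV2.payload.offset)   # 8 -- where the data actually lives now
```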

2: Canonical’s Ubuntu Support Model is Scale-out Friendly

Ubuntu is free, and Canonical has made the promise that it always will be. By free, we mean no license fees or paid subscriptions to receive updates. Around 10 years ago, when the first legacy Linux distributions were coming about, the move to a subscription-based model was seen as a revolutionary change in the software business. Instead of charging license fees on a per-user basis, which (along with support contracts) was the accepted model for operating systems and software as a whole at the time, these companies had the ingenious approach of giving away the software and creating an updates subscription model. Realizing that software requires updates, and that most (but not all) users will want them, they created a system that gave them dependable, consistent revenue per installation, while giving customers the freedom to have as many users on the system as they needed, as well as machines that simply sit and do their job, never needing an update (think a mail or DNS server). Later on, they partnered with server OEMs and brilliantly started to differentiate these subscription costs based on the architectures and CPU cores of the hardware…learning tricks the OEMs had played with their own proprietary operating systems of the day.

The subscription + support model has done well…extremely well over the past decade, but in the cloud…in scale-out computing, the model begins to hurt…badly in some cases. One of the main benefits of cloud computing is the ability to scale on demand. A given deployment can have a guest instance count in the low 10s for 6 months, then need to scale out to the 100s or 1000s for another 4, returning to original levels after peak demand has subsided, e.g. demand on online retail infrastructure increases dramatically during the holidays and subsides soon after. Under a subscription-based model, this means customers must budget for an increase in fees to account for the scaling, and if they underestimate, their own profits take the hit. Furthermore, making someone pay for fixes and security updates just seems wrong to me…if Google or Mozilla started charging people for fixes and security updates to their web browsers, people would lose their minds. Finally, because applications (especially scale-out/webscale ones) are innovating so fast now, adopting development methodologies like continuous integration, it’s unthinkable that someone would deploy software and never want the updates. Charging someone for fixes and updates is now as archaic as charging them by the number of users.
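To put rough numbers on that budgeting problem, here’s a back-of-the-envelope sketch in Python. The per-instance fee and the instance counts are hypothetical, not any vendor’s actual pricing:

```python
# Back-of-the-envelope: per-instance subscription fees under bursty demand.
# All numbers are hypothetical, not any vendor's actual pricing.
MONTHLY_FEE = 25            # assumed subscription cost per instance per month
BASELINE, PEAK = 20, 1000   # instance counts: 8 quiet months, 4 peak months

steady_year = MONTHLY_FEE * BASELINE * 12
bursty_year = MONTHLY_FEE * (BASELINE * 8 + PEAK * 4)

print(f"steady workload: ${steady_year:,}/year")   # steady workload: $6,000/year
print(f"bursty workload: ${bursty_year:,}/year")   # bursty workload: $104,000/year
```

Guess the peak wrong in either direction and you’ve either paid for subscriptions you never used or blown the budget your margins were planned around.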

The service model is the next evolutionary step away from the subscription model. It recognizes that a Linux distributor’s real value to the customer is the expertise they have from producing the distribution, having the upstream relationships, and knowing the integrated technologies inside and out. Thus, the business model is built around the support and services they are able to provide because of this unique position, not the bug fixes and security updates that users should expect to get for the same cost as they received the original software…free.

3: Ubuntu’s Release Process is Dependable and Transparent

To the average consumer, I suspect the Ubuntu release cadence is not much more than a nice thing to have. There’s no need to speculate on when the next release will come, or what it will contain, because we plan transparently. While we always deliver on a 6-month cadence, users aren’t forced to upgrade that often, as we support each release for 18 months…and up to 5 years for the LTS that comes every 2 years. And yet, despite having such a predictable release cycle, we still manage to generate growing excitement for each one (personally, that’s just amazing to me).

Now if you’re someone deploying a private cloud, a solution into a cloud, or even releasing hardware focused on the cloud, the cadence becomes less of a “nice thing” and more of a necessity. Whether you’re planning a hardware or software release, being able to depend on an operating system release schedule that won’t slip is a huge benefit and relief. Any significant software or hardware release project has enough internal moving parts; add the rapid pace of cloud innovation, and no one wants to worry that their entire business plan could be jeopardized by the OS vendor slipping its release schedule…to accommodate a partner, possibly even a direct competitor.

A dependable, transparent release process not only provides peace of mind, it allows for the best possible collaboration. Transparency allows users, partners, and upstreams alike to observe and influence the direction of each Ubuntu release. There’s no waiting for the first pre-release ISO to see if your feature made it in, or if the next ISO boots on your new hardware, because you can track every bug and feature work item. As part of our transparent and dependable process, we produce pre-release Ubuntu ISOs and cloud images daily. While each daily isn’t guaranteed to be installable, bootable, or tested to the level of an alpha or beta release, it’s usually good enough to give users and partners something to sniff out and provide feedback on…giving them confidence that the cloud solution depending on our OS won’t be in jeopardy at release. You won’t find this with legacy Linux distributions…not even their closest business partners get this level of access.

We’re Not Perfect…

As I’ve said in the past, Canonical’s investment in Ubuntu Server is focused on cloud computing. So, to be clear: while we have a tremendous community looking after the quality of support for traditional server workloads, and a solid inheritance of dependability and stability from Debian, I would be lying to you if I said Ubuntu is the best choice for every type of server deployment. Hell, I challenge anyone to name one operating system that really is. All I’m saying is that Ubuntu is the best operating system for cloud computing…and Canonical will continue to focus our innovation to ensure it stays that way.

Smart != Success

The Back Story

Having been in the technology field all my adult life, as both a student and a professional, I’m used to working with extremely bright people…what most would consider people with “high I.Q.s”. I have friends and family who often characterize me as “smart” or a “genius”, to which I usually respond with a smile…and then let them know right away that plenty of the people I know and work with are much smarter than I am. I take great pride in my ignorance, as it keeps me humble and hungry for improvement and knowledge. In my quest for “less ignorance”, I recently decided to re-read a book I picked up years back, when I was studying for my master’s degree in Engineering Management…recent events had me questioning certain truths I hold dear, so I figured I should re-evaluate them. The book is called “Working with Emotional Intelligence”, by Daniel Goleman, PhD. Dr. Goleman is an accepted expert on behavioral and brain sciences, and has a series of books on the subject of emotional intelligence. I stumbled across his earlier book, simply called “Emotional Intelligence”, in a bookstore, and just the book jacket synopsis was enough for me to buy it…I’ve been a true believer in the concept ever since. Basically, he argues that how we’ve typically defined and measured “intelligence” has been far too narrow, ignoring a critical range of abilities that matter tremendously in determining how we succeed in life. How do we explain why some people with high IQs fail in life, while those with average traditional IQ scores succeed amazingly well? He suggests that factors such as self-awareness, self-discipline, and empathy are incorrectly left out of typical I.Q. measurements, and that they should be included when evaluating an individual’s capacity…that emotional IQ is more important to success than most imagine.

Trained Emotional Incapacity

In Dr. Goleman’s book, there’s a short section called “The Computer Nerd: Trained Incapacity” that resonated so much with me that I felt the need to share it. Goleman starts out with what most have observed: many people in IT with a high level of technical skill often have a hard time dealing with people. He states that he used to think this was just a negative stereotype, or “cultural misperception”, because he assumed one’s emotional intelligence and traditional IQ were independent of each other. However, he goes on to describe how a colleague of his at MIT observed that people with extremely high IQs often lacked social skills…that the smarter they are, the less competent they seem to be emotionally and socially. “It’s as though the IQ muscle strengthened itself at the expense of muscles for personal and social competence.” He writes about how the mastery of technical pursuits demands long hours…often spent working alone…starting early in childhood or the teenage years…a critical period in emotional development. He also states that self-selection plays a role, in that people lacking sufficient emotional intelligence are probably drawn to fields of study such as computer science or engineering…because cognitive excellence is stressed over everything else.

The Secret to Success

To be clear, Goleman is not implying that all high-IQ scientists are socially incompetent; that would be stupid. What he is suggesting is that people with good emotional intelligence in a technology field are in high demand, i.e. someone with “high science skills and high social skills” has the potential to be highly successful in an engineering or technical organization. Dr. Goleman goes further to cite a UC Berkeley study from the 1950s, in which 80 PhD students in science were tested for IQ and personality competence, and given extensive interviews with psychologists, all to measure emotional balance and maturity, integrity, and interpersonal effectiveness. Forty years later, researchers tracked down the surviving students and estimated their career success based on resumes, evaluations by experts in their fields, and credible scientific publications. The result was that emotional intelligence abilities were four times more important in determining professional success and prestige than traditional IQ.

The Bottom Line

If you are in a technology field and interested in management or even team leadership, don’t assume that just because you can code the best or solve the most technically challenging problems the fastest, you can lead or manage people. On the flip side, if you find yourself sometimes struggling to keep up with the other engineers on your team, or just not picking things up as quickly, don’t let that deter you from pursuing a leadership position. There is more to succeeding as a leader than simply being the smartest person in the room. 😉