Priorities & Perseverance

[Image: weight-tracking chart]

This is not a stock ticker, rather a health ticker…and unlike with a stock price, a downward trend is good.  Over the last 3 years or so, I’ve been on a personal mission of improving my health.  As you can see, it wasn’t perfect, but I managed to lose a good amount of weight.

So why did I do it…what was the motivation…it’s easy: I decided in 2011 that I needed to put me first.  This was me from 2009:

[Photo: me in 2009]

At my biggest, I was pushing 270lbs.  I was so busy trying to do for others, be it work, family, or friends, that I was constantly putting my own needs, i.e. exercise and healthy eating, last.  You see, I actually like to exercise, and healthy eating isn’t a hard thing for me, but when you start putting those things last on your priorities, it becomes easy to justify skipping the exercise or grabbing junk food because you’re short on time or exhausted from being the “hero”.

Now I have battled weight issues most of my life.  Given how I looked as a baby, this shouldn’t come as a surprise. LOL

[Photo: me as a baby]

But I did thin out as a child.

[Photo: me as a child]

To only get bigger again

[Photo]

And even bigger again

[Photo]

But then I got lucky.  My metabolism kicked into high gear around 20, I grew about 5 inches, and since I was playing a ton of basketball daily, I could eat anything I wanted and still stay skinny.

[Photo: me around 20]

I remained so up until I had my first child, then the pounds began to come on.  Many parents will tell you that the first time is always more than you expected, so it’s not surprising that, with the sleep deprivation and stress, you gain weight.  To make it even more fun, I had decided to start a new job and buy a new house a few years later, right when my second child came…even more “fun”.

[Photo]

To be clear, I’m not blaming any of my weight gain on these events, however they became easy crutches to justify putting myself last.  And here’s the crazy part: by doing all this, I actually ended up doing less for those I cared about in the long run, because I was physically exhausted, mentally fatigued, and emotionally spent a lot of the time.

So, around October of 2012 I made a decision.  In order for me to be the man I wanted to be for my family, friends, and even colleagues, I had to put myself first.  While it sounds selfish, it’s the complete opposite.  In order to be the best I could be for others, I realized I had to get myself together first.  For those of you who followed me on Facebook then, you already know what it took…a combination of MyFitnessPal calorie tracking and a little known workout program called Insanity:

[Image: Insanity workout program]

Me and my boy, Shaun T, worked out religiously…every day…sometimes mornings…sometimes afternoons…sometimes evenings.  I carried him with me on all my work travel, on my laptop and phone…doing Insanity videos in hotel rooms around the world.  I did the 60-day program about 4 times through (with breaks in between cycles)…adding in some weight workouts towards the end.  The results were great, as you can see in the first graphic starting around October 2012.  By staying focused and consistent, I dropped from about 255lbs to 226lbs at my lowest in July 2013.  I got rid of a lot of XXL shirts and 42in-waist pants/shorts, and got to a point where I didn’t always feel the need to swim with a shirt on….if ya know what I mean😉.  So August rolled around, and while I was feeling good about myself, I didn’t feel great, because I knew that while I was lighter and healthier, I wasn’t necessarily that much stronger.  I knew that if I wanted to really be healthy and keep this weight off, I’d need more muscle mass…plus I’d look better too😛.

So the Crossfit journey began.

Now I’ll be honest, it wasn’t my first thought.  I had read all the horror stories about injuries and seen some of the cult-like stuff about it.  However, a good friend of mine from college was a coach, and pretty much called me out on it…she was right…I was judging something based on others’ opinions and not my own (which is WAY outta character for me).  So…I went to my first Crossfit event…the Women’s Throwdown in Austin, TX (where I live), held by Woodward Crossfit in July of 2013.  It was pretty awesome…it wasn’t full of muscle heads yelling at each other or insane paleo-eating nut jobs trying to outshine one another…it was just hardworking athletes pushing themselves as hard as they could…for a great cause (it’s a charity event)…and having a lot of fun.  I planned to only stay for a little bit, but ended up staying the whole damn day! Long story short…I joined Woodward Crossfit a few weeks after (the delay was because I was determined to complete my last Insanity round, plus I had to go on a business trip), which was around the week of my birthday (Aug 22).



Fast forward a little over a year, with a recently added 21-day Fitness Challenge by David King (who also goes to the same gym), and as of today I’m down about 43lbs (212), with a huge reduction in body fat percentage.  I don’t have the starting or current percentage, but let’s just say all 43lbs lost was fat, and I’ve gained a good amount of muscle in the last year as well…which is why the line flattened a bit before I kicked it up another notch with the 21-Day last month.

Now I’m not posting any more pictures, because that’s not the point of this post (but trust me…I look goooood :P).  My purpose is exactly what the subject says: priorities & perseverance.  What are you prioritizing in your life?  Are you putting too many people’s needs ahead of your own?  Are you happy as a result?  If you’re like me, I already know the answer…but you don’t have to stay that way.  You only get one chance at this life, so make the most out of it.  Make the choice to put your happiness first, and I don’t mean selfishly…that’s called pleasure.  You’re happier when your loved ones are doing well and are happy…you’re happier when you have friends who like you and whom you can depend on…you’re happier when you kick ass at work…you’re happier when you kill it on the basketball court (or whatever activity you like).  Make the decision to be happy, set your goals, then persevere until you attain them…you will stumble along the way…and there will be those around you who either purposely or unknowingly discourage you, but stay focused…it’s not their life…it’s yours.  And when it gets really hard…just remember the wise words of Stuart Smalley:

Canonical’s Office of The CDO: A 5 Year Journey in DevOps

I’m often asked what being the Vice President of Cloud Development and Operations means, when I’m introduced for a talk or meeting, or when someone happens to run across my LinkedIn profile or business card.

The office of the CDO has been around in Canonical for so long, I forget that the approach we’ve taken to IT and development is either foreign or relatively new to a lot of IT organizations, especially in the commonly thought of “enterprise” space. I was reminded of this when I gave a presentation at an OpenStack Developer Summit entitled “OpenStack in Production: The Good, the Bad, & the Ugly” a year ago in Portland, Oregon. Many in the audience were surprised by the fact that Canonical not only uses OpenStack in production, but uses our own tools, Juju and MAAS, created to manage these cloud deployments. Furthermore, some attendees were floored by how well our IT and engineering teams actually worked together, leveraging these deployments to run globally accessible and extensively used production services.

Before going into what the CDO is today, I want to briefly cover how it came to be. The story of the CDO goes back to 2009, when our CEO, Jane Silber, and Founder, Mark Shuttleworth, were trying to figure out how our IT operations team and web services teams could work better…smarter together. At the same time our engineering teams had been experimenting with cloud technologies for about a year, going so far as to provide the ability to deploy a private cloud in our 9.04 release of Ubuntu Server.

[Image: Ubuntu Enterprise Cloud]

It was clear to us then, that cloud computing would revolutionize the way in which IT departments and developers interact and deploy solutions, and if we were going to be serious players in this new ecosystem, we’d need to understand it at the core. The first step to streamlining our development and operations activities was to merge our IT team, who provided all global IT services to both Canonical and the Ubuntu community, with our Launchpad team, who developed, maintained, and serviced Launchpad.net, the core infrastructure for hosting and building Ubuntu. We then added our Online Services team, who drove our Ubuntu One related services, and this new organization was called Core DevOps…thus the CDO was born.

Soon after the formation of the CDO, I was transitioning between roles within Canonical, going from acting CTO to Release Manager (10.10 on 10.10.10…perfection!🙂 ), then landing as the new manager for the Ubuntu Server and Security teams.  Our server engineering efforts continued to become more and more focused on cloud, and we had also begun working on a small, yet potentially revolutionary, internal project called Ensemble, which was focused on solving the operational challenges system administrators, solution architects, and developers would face in the cloud, when one went from managing 100s of machines and associated services to 1000s.

All of this led to a pivotal engineering meeting in Cape Town, South Africa in early 2011, where management and technical leaders representing all parts of the CDO and Ubuntu Server engineering met with Mark Shuttleworth, along with the small team working on Project Ensemble, to determine the direction Canonical would take with our server product.


Until this moment in time, while we had been dabbling in cloud computing technologies with projects like our own cloud-init and the Amazon EC2 AMI Locator, Ubuntu Server was still playing second fiddle to Ubuntu for the desktop. While being derived from Debian (the world’s most widely deployed and dependable Linux web hosting server OS) certainly gave us credibility as a server OS, the truth was that most people thought of desktops when you mentioned Ubuntu the OS. Canonical’s engineering investments were still primarily client focused, and Ubuntu Server was nothing much more than new Debian-derived releases at a predictable cadence, with a bit of cloud technology thrown in to test the waters. But this weeklong engineering sprint was where it all changed. After hours and hours of technical debates, presentations, demonstrations, and meetings, two major decisions were made that week that would catapult Canonical and Ubuntu Server to the forefront of cloud computing as an operating system.

The first decision was that OpenStack was the way forward. The project was still in its early days, but it had already piqued many of our engineers’ interest, not only because it was being led by friends of Ubuntu and former colleagues of Canonical, Rick Clark, Thierry Carrez, and Soren Hansen, but also because the development methods, project organization, and community were derived from Ubuntu, and thus it was something we knew had the potential to grow and sustain itself as an opensource project. While we still had to do our due diligence on the code, and discuss the decision at UDS, it was clear to many then that we’d inevitably go that direction.

The second decision was that Project Ensemble would be our main technical contribution to cloud computing, and more importantly, the key differentiator we needed to break through as the operating system for the cloud. While many in our industry were still focused on scale-up, legacy enterprise computing and the associated tools and technologies for things like configuration and virtual machine management, we knew orchestrating services and managing the cloud were the challenges cloud adopters would need help with going forward. Project Ensemble was going to be our answer.

Fast forward a year to early 2012. Project Ensemble had been publicly unveiled as Juju, the Ubuntu Server team had fully adopted OpenStack and plans for the hugely popular Ubuntu Cloud Archive were in the works, and my role had expanded to Director of Ubuntu Server, covering the engineering activities of multiple teams working on Ubuntu Server, OpenStack, and Juju. The CDO was still covering IT operations, Launchpad, and Online Services, but now we had started discussing plans to transition our own internal IT infrastructure over to an internal cloud computing model, essentially using the very same technologies we expected our users, and Canonical customers, to depend on.

As part of the conversation on deploying cloud internally, our Ubuntu Server engineering teams started looking at tools to adopt that would provide our internal IT teams and the wider Ubuntu community the ability to deploy and manage large numbers of machines installed with Ubuntu Server. Originally, we landed on creating a tool based on Fedora’s Cobbler project, combined with Puppet scripts, and called it Ubuntu Orchestra. It was perfect for doing large-scale, coordinated installations of the OS and software, such as OpenStack, however it quickly became clear that doing this install was just the beginning…and unfortunately, the easy part.  Managing and scaling the deployment was the hard part. While we had called it Orchestra, it wasn’t able to orchestrate much beyond machine and application install. Intelligently and automatically controlling the interconnected services of OpenStack or Hadoop in a way that allowed for growth and adaptability was the challenge.  Furthermore, the ways in which you had to describe the deployments were restricted to Puppet and its scripting language and approach to configuration management…what about users wanting Chef?…or CFEngine?…or the next foobar configuration management tool to come about?  If we only had a tool for orchestrating services that ran on bare metal, we’d be golden…and thus Metal as a Service (MAAS) was born.

MAAS was created for the sole purpose of providing Juju a way to orchestrate physical machines the same way Juju managed instances in the cloud.  The easiest way to do this was to create something that gave cloud deployment architects the tools needed to manage pools of servers like a cloud.  Once we began this project, we quickly realized that it was good enough to stand on its own, i.e. as a management tool for hardware, and so we expanded it to a full-fledged project.  MAAS grew a full API and a user-tested GUI, so that Juju, Ubuntu Server deployment, and Canonical’s Landscape product could all leverage the same tool for managing hardware…allowing all three to benefit from the learnings and experiences of a shared codebase.
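To make the Juju/MAAS relationship a bit more concrete, here is a minimal sketch of pointing Juju at a MAAS region so physical machines get allocated the same way cloud instances would. The key names follow the Juju 1.x environments.yaml layout and may differ in other versions, and the MAAS URL and API key are placeholders, so treat it as illustrative rather than a setup guide.

    # Add a MAAS environment under the existing "environments:" section of
    # ~/.juju/environments.yaml (Juju 1.x layout; names may differ elsewhere).
    cat >> ~/.juju/environments.yaml <<'EOF'
      maas:
        type: maas
        maas-server: 'http://my-maas-server/MAAS/'
        maas-oauth: '<MAAS-API-KEY>'
        default-series: precise
    EOF

    juju bootstrap -e maas    # MAAS powers on and provisions a machine for the state server
    juju deploy -e maas mysql # each deploy allocates another physical machine from the pool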

The CDO Evolves

In the middle of 2012, the then VP of the CDO decided to seek new opportunities elsewhere.  Senior management took this opportunity to look at the organizational structure of Core DevOps, and adjust/adapt according to both what we had learned over the past 3 1/2 years and where we saw the evolution of IT and server/cloud development heading.  The decision was made to focus the CDO more on cloud and scale-out server technologies, so the Online Services team was moved over to a more client-focused engineering unit. This left Launchpad and internal IT in the CDO, however the decision was also made to move all server and cloud related project engineering teams and activities into the organization. The reasoning was pretty straightforward: put all of server dev and ops into the same team to eliminate “us vs. them” siloed conversations, and streamline the feedback loop between engineering and internal users to accelerate both code quality and internal adoption.  I made a career-growth decision to apply for the chance to lead the CDO, was fortunate enough to get it, and thus became the new Vice President of Core DevOps.

My first decision as the new lead of the CDO was to change the name.  It might seem trivial, but while I felt it was key to keep to our roots in DevOps, the name Core DevOps no longer applied to our organization because of the addition of so much more server and cloud/scale-out focused engineering.  We had also decided to scale back internal feature development on Launchpad, focusing more on maintenance and reviewing/accepting outside contributions.  Out of a pure desire to reduce the overhead that department name changes usually cause in a company, I decided to keep the acronym and go with Cloud and DevOps at first. However, that name (and quite honestly the job title itself) seemed a little too vague…I mean, what does VP of Cloud or VP of DevOps really mean?  I felt like it would have been analogous to being the VP of Internet and Agile Development…heavy on buzzwords and light on actual meaning.  So I made a minor tweak to “Cloud Development and Operations“, and while arguably still abstract, it at least covered everything we did within the organization at a high level.

At the end of 2012, we internally gathered representation from every team in the “new and improved” CDO for a week-long strategy session on how we’d take advantage of the reorganization. We reviewed team layouts, workflows, interactions, tooling, processes, development models, and even which teams individuals were on.  Our goal was to ensure we didn’t duplicate effort unnecessarily, share best practices, eliminate unnecessary processes, break down communication silos, and generally come together as one true team. The outcome: some teams were broken apart, others newly formed, processes adapted, missions changed, and some people lost because they no longer felt they fit.

Entering into 2013, the goal was to simply get work done:

  • Work to deploy, expand, and transition developers and production-level services to our internal OpenStack clouds: CanoniStack and ProdStack.
  • Work to make MAAS and Juju more functional, reliable, and scalable.
  • Work to make Ubuntu Server better suited for OpenStack, more easily consumable in the public cloud, and faster to bring up for use in all scale-out focused hardware deployments.
  • Work to make Canonical’s Landscape product more relevant in the cloud space, while continuing to be true to its roots of server management.

All this work was in preparation for the 14.04 LTS release, i.e. the Trusty Tahr. Our feeling was (and still is) that this had to be the release when it all came together into a single integrated solution for use in *any* scale-out computing scenario…cloud…hyperscale…big data…high-performance computing…etc.  If a computing solution involved large numbers of computational machines (physical or virtual) and massively scalable workloads, we wanted Ubuntu Server to be the de facto OS of choice.  By the end of last year, we had achieved a lot of the IT and engineering goals we set, and felt pretty good about ourselves.  However, as a company we quickly discovered there was one thing we had left out of our grand plan to better align and streamline our efforts around scale-out technologies…professional delivery and support of these technologies.

To be clear, Canonical had not forgotten about growing or developing our teams of engineers and architects responsible for delivering solutions and support to customers. We had just left them out of our “how can we do this better” thinking when aligning the CDO. We were initially focused on improving how we developed and deployed, and we were benefiting from the changes made.  However, as we began growing our scale-out computing customer base in hyperscale and cloud (both below and above), we began to see that the same optimizations made between Dev and Ops needed to be made with delivery. So in December of last year, we moved all hardware enablement and certification efforts for servers, along with the technical support and cloud consultancy teams, into the CDO.  The goal was to strengthen the product feedback loop, remove more “us vs. them” silos, and improve the response times to customer issues found in the field.  We were basically becoming a global team of scale-out technology superheroes.

[Image: Team CDO]

It’s been only 3 months since our server and cloud enablement and delivery/support teams joined the CDO, and there are already signs of improvement in responsiveness to support issues and collaboration on technical design.  I won’t lie and say it’s all been butterflies and roses, nor will I say we’re done and running like a smooth, well-oiled machine, because you simply can’t do that in 3 months, but I know we’ll get there with time and focus.

So there you have it.

The Cloud Development and Operations organization in Canonical is now 5 years strong.  We deliver global, 24×7 IT services to Canonical, our customers, and the Ubuntu community.  We have engineering teams creating server, cloud, hyperscale, and scale-out software technologies and solutions to problems some have still yet to even consider.  We deliver these technologies and provide customer support for Canonical across a wide range of products, including Ubuntu Server and Ubuntu Cloud.  This end-to-end integration of development, operations, and delivery is why Ubuntu Server 14.04 LTS, aka the Trusty Tahr, will be the most robust, technically innovative release of Ubuntu for the server and cloud to date.

Screw the Ubuntu Edge…We’re Giving Away $30,000!!!

[Image: Ubuntu Edge]

So I’m partially kidding…the Ubuntu Edge is quickly becoming a crowdfunding phenomenon, and everyone should support it if they can.  If we succeed, it will be a historic moment for Ubuntu, crowdfunding, and the global phone industry as well.

But I Don’t Wanna Talk About That Right Now

While I’m definitely a fan of the phone stuff, I’m a cloud and server guy at heart, and what’s gotten me really excited this past month are two significant (and freaking awesome) announcements.

#1 The Juju Charm Championship

[Image: Easy Money board game]

First off, if you still don’t know about Juju, it’s essentially our attempt at making Cloud Computing for Human Beings.  Juju allows you to deploy, connect, manage, and scale web services and applications quickly and easily…again…and again…AND AGAIN!  These services are captured in what we call charms, which contain the knowledge of how to properly deploy, configure, connect, and scale the services and applications you will want to deploy in the cloud.  We have hundreds of charms for every popular and well-known web service and application in use in the cloud today.  They’ve been authored and maintained by the experts, so you don’t have to waste your time trying to become one.  Just as Ubuntu depends on a community of packagers and developers, so does Juju.  Juju goes only as far as our Charm Community will take us, and this is why the Charm Championship is so important to us.
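If you’ve never seen that workflow, here’s a quick sketch of what “deploy, connect, manage, and scale” looks like from the command line, assuming you already have a cloud environment configured for Juju; the wordpress and mysql charms are just familiar examples.

    juju bootstrap                      # stand up the environment
    juju deploy mysql                   # deploy services straight from their charms
    juju deploy wordpress
    juju add-relation wordpress mysql   # connect them; the charms handle the configuration
    juju expose wordpress               # make the front end publicly reachable
    juju add-unit -n 4 wordpress        # ...and scale it, again and again
    juju status                         # watch it all come together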

So….what is this Charm Championship all about?  We took notice of the fantastic response to the Cloud Prize contest run by our good friends (and Ubuntu Server users) over at Netflix.  So we thought we could do something similar to boost the number of full-service solutions deployable by Juju, i.e. Charm Bundles.  If charms are the APT packages of the cloud, bundles are effectively the package seeds, allowing you to deploy groups of services, configured and interconnected, all at once.  We’ve chosen this approach to increase our bundle count because we know from our experience with Ubuntu that the best approach for growth is harvesting and cultivating the expertise and experience of the experts regularly developing and deploying these solutions.  For example, we at Canonical maintain and regularly deploy an OpenStack bundle to allow us to quickly get our clouds up for both internal use and for our Ubuntu Advantage customers.  We have master-level expertise in OpenStack cloud deployments, and thus have codified this into our charms so that others are able to benefit.  The Charm Championship is our attempt to replicate this sharing of similar master-level expertise across more service/application bundles…BY OFFERING $30,000 USD IN PRIZE MONEY! Think of how many Ubuntu Edge phones that could buy you…well, unless you desperately need to have one of the first 50🙂.

#2 JujuCharms.com

[Image: Iron Man’s JARVIS interface]

From the very moment we began thinking about creating Juju years ago…we always envisioned eventually creating an interface that gives solution architects the ability to graphically create, deploy, and interact with services…replicating the whiteboard planning commonly employed in the planning phase of such solutions.

The new Juju GUI, now integrated into JujuCharms.com, is the realization of our vision, and I’m excited as hell at the possibilities opened and the technical roadblocks removed by the release of this tool.  We’ve even charmed it, so you can ‘juju deploy juju-gui’ into any supported cloud, bare metal (MAAS), or local workstation (via LXC) environment.  Below is a video of deploying OpenStack via our new GUI, and a perfect example of the possibilities that are opened up now that we’ve released this innovative and f*cking awesome tool:

The best part here is that you can play with the new GUI RIGHT NOW by selecting the “Build” option on jujucharms.com…so go ahead and give it a try!
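And if you’d rather poke at the GUI inside your own environment instead of the hosted sandbox, a minimal sketch of the charmed route looks like this (assuming you already have an environment bootstrapped):

    juju deploy juju-gui
    juju expose juju-gui
    juju status   # find the juju-gui unit's public address, then browse to it over HTTPS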

Join the Championship…Play with the GUI…then Buy the Phone

Cause I will definitely admit…it’s a damn sexy piece of hardware.😉

Keep Calm, Juju is still F*cking Awesome!


Doing cloud since 2008


Lock-In: Why Your OS Choice Matters in the Cloud

Public Cloud Lock-In

I ran across an article last week about the fear of cloud lock-in being a “key concern of companies considering a cloud move“.  The article was spot on in pointing out that dependence upon some of the higher-level public cloud service features hinders a user’s ability to migrate to another cloud.  There is a real risk in being locked into a public cloud service, not only due to dependence on the vendor’s services, but also the complexity and costs of trying to move your data out.  The article concludes by stating that there “aren’t easy answers to this problem“, which I think is true…but I also think that by simply keeping two things in mind, a user can do a lot to mitigate the lock-in risk.

1. Choose an Independently Produced Operating System

Whatever solutions you decide to deploy, it’s absolutely critical that you choose an operating system not produced by the public cloud provider.  This recent fad of public cloud providers creating their own specific OS is just history repeating itself, where HP-UX, IRIX, Solaris, and AIX are being replaced with the likes of GCEL and Amazon Linux.  Sure, the latter are Linux-based, but just like the proprietary UNIX operating systems of the past, they are developed internally, only support the infrastructure they’re designed for, and are only serviceable by the company that produces them.  Of course the attraction to using these operating systems is understandable, because the provider can offer them for “free” to users desiring a supported OS in the cloud.  They can even price services lower to customers who use their OS as an incentive and “benefit”, with the claim that it allows them to provide better and faster support.   It’s a perfect solution…at first.  However, once you’ve deployed your solution to a public cloud vendor-specific OS, you have taken a huge first step towards lock-in.  Sure, the provider can say their OS is based on an independently produced operating system, but that means nothing once the two have diverged due to security updates and fixes, not to mention release schedules and added features.  There’s no way the public cloud vendor OS can keep up, and they really have no incentive to, because they’ve already got you…the longer you stay on their OS, the more you depend on their application and library versions, and the deeper you get.  A year or two down the road, another public cloud provider pops up with better service and/or prices, but you can’t move without the risk of extended downtime and/or loss of data, in addition to the costs of paying your IT team the overtime it will take to architect such a migration.  We’ve all been here before with proprietary UNIX, and luckily Linux arrived on the scene just in time to save us.

2. Choose an Operating System with Service Orchestration Support

Most of the lock-in features provided by public clouds are simply “Services as a Service”, be it a database service, a big data/MapReduce service, or a development platform service like Rails or Node.  All of these services are just applications easily deployed, scaled, and connected to existing solutions.  Of course it’s easy to understand the attraction to using these public cloud provider services, because it means no setup, no maintenance, and someone else to blame if s**t goes sideways with the given service.  However, again, by accepting these services, you are also accepting a level of lock-in.  By creating/adapting your solution(s) to use the provider’s load balancing, monitoring, and/or database service, you are making them less portable and thus harder/costlier for you to migrate.  I can’t blame the providers for doing this, because it makes *perfect* sense from a business perspective:

I’m providing a service that is commoditized…I can only play price wars for so long…so how can I keep my customers once that happens…services!  And what’s more, I don’t want them to easily use another cloud, so I’ll make sure my services require them to utilize my API…possibly even provide a better experience on my own OS.

Now I’m not saying you shouldn’t use these services, but you should be careful about how much of them you consume and depend on.  If you ever intend or need to migrate, you will want a solution that covers the scenario of the next cloud provider not having the same service…or the service being priced at a higher rate than you can afford…or the service quality/performance not being as good.  This is where having a good service orchestration solution becomes critical, and if you don’t want to believe me…just ask the folks at IBM or OASIS.  And for the record, service orchestration is not configuration management…and you can’t get there by placing a configuration management tool in the cloud.  Trying to get configuration management tools to do service orchestration is like trying to teach a child to drive a car.  Sure, it can be done pretty well in a controlled empty parking lot…on a clear day.  However, once you add unpredictable weather, pedestrians, and traffic, it gets real bad, real quick.  Why?  Because just like your typical configuration management tool, a child lacks the intelligence to react and adapt to the changing conditions in the environment.
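To make the portability point a bit more concrete, here is a minimal sketch of what provider-portable orchestration looks like with a tool like Juju. It assumes two environments, “aws” and “other-cloud”, are already defined in your environments.yaml; the charm names are just familiar examples, not a prescription.

    # Stand up the stack on the first provider...
    juju bootstrap -e aws
    juju deploy -e aws mysql
    juju deploy -e aws mediawiki
    juju add-relation -e aws mediawiki:db mysql
    juju expose -e aws mediawiki

    # ...and the same model, unchanged, on a second provider if you ever need to move.
    juju bootstrap -e other-cloud
    juju deploy -e other-cloud mysql
    juju deploy -e other-cloud mediawiki
    juju add-relation -e other-cloud mediawiki:db mysql
    juju expose -e other-cloud mediawiki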

Choose Ubuntu Server

Obviously I’m going to encourage the use of Ubuntu Server, not just because I work for Canonical or am an Ubuntu community member, but because I actually believe it’s currently the best option around.  Canonical and Ubuntu Server community members have put countless hours and effort into ensuring Ubuntu Server runs well in the cloud, and Canonical is working extremely hard with public cloud providers to ensure our users can depend on our images and public cloud infrastructure to get the fastest, cheapest, and most efficient cloud experience possible.   There’s much more to running well in the cloud than just putting up an image and saying “go!”.   Just to name a few examples: there’s ensuring all instance sizes are supported, adding in-cloud mirrors across regions and zones to ensure faster/cheaper updates, natively packaging API tools and hosting them in the archives, updating images with SRUs to avoid costly time spent updating at first boot, making daily development images available, and ensuring Juju works within the cloud to allow for service orchestration and migration to other supported public clouds.

Speaking of Juju, we’ve also invested years (not months…YEARS) into our service orchestration project, and I can promise you that no one else, right now, has anything that comes close to what it can do.  Sure, there are plenty of people talking about service orchestration…writing about service orchestration…and some might even have a prototype or beta of a service orchestration tool, but no one comes close to what we have in Juju…no one has the community engagement behind their toolset…and that community is growing every day.  I’m not saying Juju is perfect by any means, but it’s the best you’re going to find if you are really serious about doing service orchestration in the cloud or even on the metal.

Over the next 12 months, you will see Ubuntu continue to push the limits of what users can expect from their operating system when it comes to scale-out computing.  You have already seen what the power of the Ubuntu community can do with a phone and tablet….just watch what we do for the cloud.

Can Ubuntu Server Roll Too?

Wow…I just realized how long it’s been since I did a blog post, so apologies for that first off.  FWIW, it’s not that I haven’t had any good things to say or write about, it’s just that I haven’t made the time to sit down and type them out….I need a blog thought transfer device or something🙂.  Anyway, with all the talk about Ubuntu doing a rolling release, I’ve been thinking about how that would affect Ubuntu Server releases, and more importantly….could Ubuntu Server roll as well?  In answering this question, I think it comes down to two main points of consideration (beyond what the client flavors would already have to consider).

 

How Would This Affect Ubuntu Server Users?

We have a lot of anecdotal data and some survey evidence that most Ubuntu Server users mainly deploy the LTS.  I doubt this surprises people, given the support life for an LTS Ubuntu Server release is 5 years, versus only 18 months for a non-LTS Ubuntu Server release.  Your average sysadmin is extremely risk-averse (for good reason), and thus wants to minimize any risk of unwanted change in his/her infrastructure.  In fact, most production deployments also don’t even pull packages from the main archives; instead they mirror them internally to allow for control of exactly what and when updates and fixes roll out to internal client and/or server machines.  Using a server operating system that requires you to upgrade every 18 months, to continue getting fixes and security updates, just doesn’t work in environments where the systems are expected to support 100s to 1000s of users for multiple years, often without significant downtime.

With that said, I think there are valid uses of non-LTS releases of Ubuntu Server, with most falling into two main categories: pre-production test/dev or start-ups, with the reasons actually being the same.  The non-LTS version is perfect for those looking to roll out products or solutions intended to be production ready in the future.  These releases provide users a mechanism to continually test out what their product/solution will eventually look like in the LTS, as the versions of the software they depend upon are updated along the way.  That is, they’re not stuck having to develop against the old LTS and hope things don’t change too much in two years, or use some “feeder” OS, where there’s no guarantee the forked and backported enterprise version will behave the same or contain the same versions of the software they depend on.  In both of these scenarios, the non-LTS is used because it’s fluid, and going to a rolling release only makes this easier…and a little better, I dare say.  For one, if the release is rolling, there’s no huge release-to-release jump during your test/dev cycle; you just continue to accept updates when ready.  In my opinion, this is actually easier in terms of rolling back as well, in that you have fewer parts moving all at once to roll back if needed.  The second thing is that the process for getting a fix from upstream or a new feature is much less involved, because there’s no SRU patch backporting, just the new release with the new stuff.  Now admittedly, this also means the possibility of new bugs and/or regressions, however given these versions (or ones built subsequently) are destined to be in the next LTS anyway, the faster the bugs are found and sorted, the better for the user in the long term.  If your solution can’t handle the churn, you either don’t upgrade and accept the security risk, or you smoke test your solution with the new package versions in a duplicate environment.  In either case, you’re not running in production, so in theory…a bug or regression shouldn’t be the end of the world.  It’s also worth calling out that from a quality and support perspective, a rolling Ubuntu Server means Ubuntu developers and Canonical engineering staff who normally spend a lot of time doing SRUs on non-LTS Ubuntu Server releases can now focus efforts on the Ubuntu Server LTS release…where we have the majority of users and deployments.

 

How Would This Affect Juju Users?

In terms of Juju, a move to a rolling release tremendously simplifies some things and mildly complicates others.  From the point of view of a charm author, this makes life much easier.  Instead of writing a charm to use a package in one release, then continuously duplicating and updating it to work with subsequent releases that have newer packages, you only maintain two charms…a maximum of three if you want to include options for running code from upstream.  The idea is that every charm in the collection would default to using packages from the latest Ubuntu Server LTS, with options to use the packages in the rolling release, and possibly an extra option to pull and deploy directly from upstream.  We already do some of this now, but it varies from charm to charm…a rolling server policy would demand we make this mandatory for all accepted charms.  The only place where the rules would be slightly different is the Ubuntu Cloud Archive, where the packages don’t roll; instead, new archive pockets are created for each OpenStack release.  From a user’s perspective, a rolling release is good, yet also complicated unless we help…and we will.  In terms of the good, users will know every charmed service works and only have to decide between LTS and rolling as the deployment OS, whereas now, they have to choose a release, then hope the charm has been updated to support that release.  The reduction in charm-to-release complexity also allows us to do better testing of charms, because we don’t have to test every charm against oneiric, precise, raring, “s”, etc., just precise and the rolling release…giving us more time to improve and deepen our test suites.

With all that said, a move to a rolling Ubuntu Server release for non-LTS also adds the danger of inconsistent package versions for a single service in a deployment.  For example, you could deploy a solution with 5 instances of wordpress 3.5.1 running, we update the archive to wordpress 3.6, then you decide to add 3 more units, thus giving you a wordpress service of mixed versions…this is bad.  So how do we solve this?  It’s actually not that hard.  First, we would need to ensure that Juju never automatically adds units to an existing service if there’s a mismatch in the version of binaries between the currently deployed instances and the new ones about to be deployed.  If Juju detected the binary inconsistency, it would need to return an error, optionally asking the user if he/she wanted it to upgrade the currently running instances to match the new binary versions.  We could also add some sort of --I-know-what-I-am-doing option to give the freedom to those users who don’t care about having version mismatches.  Secondly, we should ensure an existing deployment can always grow itself without requiring a service upgrade.  My current thinking around this is that we’d create a package caching charm that can be deployed against any existing Juju deployment.  The idea is much like squid-deb-proxy (except the cache never expires or renews), where the caching instance acts as the archive mirror for the other instances in the deployment, providing the same cached packages deployed in that given solution.  The package cache should be run in a separate instance with persistent storage, so that even if the service completely goes down, it can be restored with the same packages in the cache.
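None of the above exists today, but the underlying mechanics are already within every charm’s reach. Here is a minimal sketch of how a charm’s install hook could pin a version so that units added later match the units already running; the “app-version” config option and the wordpress package are purely illustrative, not part of any existing charm.

    #!/bin/bash
    # hooks/install -- hypothetical sketch: keep package versions consistent across units.
    set -e

    VERSION=$(config-get app-version)   # config-get is a standard charm hook tool

    if [ -n "$VERSION" ]; then
        # Install the exact version every unit of this service should run...
        apt-get install -y "wordpress=${VERSION}"
        # ...and hold it so background upgrades can't introduce version skew.
        apt-mark hold wordpress
    else
        juju-log "No version pinned; installing whatever the archive currently carries."
        apt-get install -y wordpress
    fi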

 

So…Can Ubuntu Server Roll?

Yes We Can!

I honestly think we can and should consider it, but I’d also like to hear the concerns of folks who think we shouldn’t.

PSA: Is your Ubuntu Server IaaS Guest Image Authentic?

The amount of uptake seen with Ubuntu Server over the past year has been extremely rewarding and simply amazing.  Infrastructure as a Service (IaaS), a.k.a. Public Cloud, providers are popping up left and right, all wanting to provide Ubuntu Server…all helping to further cement Ubuntu Server’s position as the OS for the cloud.

With that said, I’ve started to become concerned about the way in which some of these IaaS providers distribute Ubuntu.  Ubuntu developers create, publish, and regularly update images on Amazon Web Services and Microsoft Azure.  Canonical hosts and maintains internal archive mirrors in these clouds to provide a low-latency, low-cost update mechanism to users.  Finally, Canonical engineers purposely designed a pluggable cloud provider API approach into Ubuntu’s service orchestration application, Juju, to lower the operational barriers that often place limitations on cross-cloud workload and service migrations.  We do all this to help ensure cross-platform consistency for Ubuntu Server users, i.e. workloads and applications run on Ubuntu Server behave in the same manner on bare metal machines and across IaaS providers.

Some IaaS providers and users have decided to produce and host their own Ubuntu Server images without the involvement of the Ubuntu Project or Canonical.  I won’t go into the legal aspects of this, because I’m no lawyer.  However, I believe there is a real risk to users when these images are modified in some way, but still presented as “official” Ubuntu Server images.  Whether the changes are minor, like redirecting fixes and security updates to internal unofficial mirrors, or major, like making changes to the OS and/or applications provided in the images themselves, labeling the images as “official” Ubuntu Server is a misrepresentation of the project and the product.  There is a real and legitimate risk of users losing out on the cross-platform assurance that the Ubuntu project and Canonical work so hard to provide, due to the images having untested code or simply being out of sync on fixes and updates.  Furthermore, there’s no guarantee that bug fixes made to these modified images will ever make it into the official distro, thus creating a further fork between expected behavior across both bare metal and cloud platforms.  All of this has the potential to lead to a poor user experience that’s very damaging to the reputation of Ubuntu the project and product, not to mention Canonical as its sponsor.

We, within the Ubuntu Server team, work extremely hard to ensure our community can depend on having the same user experience and application execution results across all supported platforms, bare metal or cloud.  So…if you are an IaaS provider, and you elect to produce and distribute modified Ubuntu Server images, please…please ensure your users are aware of this by labeling them as customized derivatives.  Let them know that by using these modified images they potentially run the risk of being delayed in getting bug fixes and security updates…and that differences in OS and application behavior from your changes can lead to higher levels of complexity if/when they have a need to move workloads and services to/from other official Ubuntu Server deployments.

Thanks…we now return you to your regularly scheduled program.😉

OpenStack in Ubuntu Server 12.04 LTS

With the release of Ubuntu Server 12.04 LTS quickly approaching, the Ubuntu Server Team has been working extremely hard on ensuring OpenStack Essex will be of high quality and tightly integrated into Ubuntu Cloud.  As with prior Long Term Support releases, Canonical commits to maintaining Ubuntu Server 12.04 LTS for five years, which means users receive five years of maintenance for the OpenStack Essex packages we provide in main.   With that said, we recognize that OpenStack is still a relatively young project moving at a tremendous rate of innovation right now, with features and fixes already planned for Folsom that some users require for their production deployments.  In the past, these users would have to upgrade off the LTS in order to get maintenance for the OpenStack release they need on Ubuntu Server…thus forgoing the five-year maintenance they want and need for their production deployment.  We wholeheartedly believe there are situations where moving to the next release of Ubuntu (12.10, 13.04, etc.) for newer OpenStack releases works just fine, especially for test/dev deployments.  However, we also know there will be many situations where users cannot afford the risk and/or the cost of upgrading their entire cloud infrastructure just to get the benefits of a newer OpenStack release, and we need to have a solution that fits their needs. After thinking about what users want and where most people expect OpenStack to go in terms of continued innovation and stability, we have decided to provide Ubuntu users with two options for maintenance and support in 12.04 LTS.

The first option is that users can stay with the shipped version of OpenStack (Essex) and remain with it for the full life of the LTS.  As per the Ubuntu LTS policy, we commit to maintaining and supporting the Essex release for 5 years.  The point releases will also ship the Essex version of OpenStack, along with any bug fixes or security updates made available since its release.

Introducing the Ubuntu Cloud Archive

The second option involves Canonical’s Ubuntu Cloud archive, which we are officially announcing today.  Users can elect to enable this archive, and install newer releases of OpenStack (and the dependencies) as they become available up through the next Ubuntu LTS release (presumably 14.04).  Bug processing and patch contributions will follow standard Ubuntu practice and policy where applicable.  Canonical commits to maintaining and supporting new OpenStack releases for Ubuntu Server 12.04 LTS in our Ubuntu Cloud archive for at least 18 months after they release.  Canonical will stop introducing new releases of OpenStack for Ubuntu Server 12.04 LTS into the Ubuntu Cloud archive with the version shipped in the next Ubuntu Server LTS release (presumably 14.04).  We will maintain and support this last updated release of OpenStack in the Ubuntu Cloud archive for 3 years, i.e. until the end of the Ubuntu 12.04 LTS lifecycle.
In order to allow for relatively easy upgrades, and still adhere to Ubuntu processes and policy, we have elected to have archive.canonical.com be the home of the Ubuntu Cloud archive.  We will enable update paths for each OpenStack release.

  • e.g. Enabling “precise-folsom” in the archive will provide access to all OpenStack Folsom packages built for Ubuntu Server 12.04 LTS (binary and source), any updated dependencies required, and bug/security fixes made after release.
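For illustration only (the archive location and pocket names above are as described in this post, but the exact commands are a sketch, not an announced interface), enabling a pocket on a 12.04 LTS system might look roughly like this:

    # Illustrative sketch -- the URL and pocket name are placeholders, not final values.
    echo "deb http://archive.canonical.com/ubuntu precise-folsom main" | \
        sudo tee /etc/apt/sources.list.d/ubuntu-cloud-archive.list
    sudo apt-get update
    sudo apt-get install nova-compute   # now pulled from the Folsom pocket, with its updated dependencies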

As of now, we have no plans to build or host OpenStack packages for non-LTS releases of Ubuntu Server in the Ubuntu Cloud archive.  We have created the chart below to help better explain the options.

Q&A

Why Not Use Stable Release Updates?

Ubuntu’s release policy states that once an Ubuntu release has been published, updates must follow a special procedure called a stable release update, or SRU, and are delivered via the -updates archive.  These updates are restricted to a specific set of characteristics:

  • severe regression bugs
  • security vulnerabilities (via the -security archive)
  • bugs causing loss of user data
  • “safe” application layer bugs
  • hardware enablement
  • partner archive updates

Exceptions to the SRU policy are possible. However, for this to occur the Ubuntu Technical Board must approve the exception, which must meet their guidelines:

  1. Updates to new upstream versions of packages must be forced or substantially impelled by changes in the external environment, i.e. changes must be outside anything that could reasonably be encapsulated in a stable release of Ubuntu. Changes internal to the operating system we ship (i.e. the Ubuntu archive), or simple bugs or new features, would not normally qualify.
  2. A new upstream version must be the best way to solve the problem.  For example, if a new upstream version includes a small protocol compatibility fix and a large set of user interface changes, then, without any judgement required as to the benefits of the user interface changes, we will normally prefer to backport the protocol compatibility fix to the version currently in Ubuntu.
  3. The upstream developers must be willing to work with Ubuntu.  A responsive upstream who understands Ubuntu’s requirements and is willing to work within them can make things very much easier for us.
  4. The upstream code must be well-tested (in terms of unit and system tests).  It must also be straightforward to run those tests on the actual packages proposed for deployment to Ubuntu users.
  5. Where possible, the package must have minimal interaction with other packages in Ubuntu.  Ensuring that there are no regressions in a library package that requires changes in several of its reverse-dependencies, for example, is significantly harder than ensuring that there are no regressions in a package with a straightforward standalone interface that can simply be tested in isolation. We would not normally accept the former, but might  consider the latter.

Once approved by the Tech Board, the exception must have a documented update policy, e.g. http://wiki.ubuntu.com/LandscapeUpdates.  Based on these guidelines and the core functionality OpenStack serves in Ubuntu Cloud, the Ubuntu Server team did not feel it was in the best interest of its users, or of Ubuntu in general, to pursue an SRU exception.

What about using Ubuntu Backports?

The Ubuntu Backports process (which excludes the kernel) provides a mechanism for releasing package updates for stable releases that provide new features or functionality.  Changes were recently made to `apt` in Ubuntu 11.10, whereby it now only installs packages from Backports when they are explicitly requested.  Prior to 11.10, `apt` would install everything from Backports once it was enabled, which led to packages being unintentionally upgraded to newer versions.  The primary drawbacks with using the Backports archive are that the Ubuntu Security team does not provide updates for the archive, it’s a bit of a hassle to enable per-package updates, and Canonical doesn’t traditionally offer support services for the packages hosted there.  Furthermore, with each new release of OpenStack, there are other applications that OpenStack depends on that also must be at certain levels.  By having more than one version of OpenStack in the same Backports archive, we run a huge risk of having backward-compatibility issues with these dependencies.
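As a small sketch of the post-11.10 behavior described above: nothing is pulled from Backports unless you ask for it explicitly, for example by naming the target release for a single install (the package name here is just a placeholder).

    sudo apt-get install -t precise-backports some-package   # opt in to the backported version of one package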

How Will You Ensure Stability and Quality?

In order for us to ensure users have a safe and reliable upgrade path, we will establish a QA policy where all new versions and updated dependencies are required to pass a specific set of regression tests with a 100% success rate.  In addition:

  • Unit testing must cover a minimum set of functionality and APIs
  • System test scenarios must be executed for 24, 48 and 72 hours uninterrupted.
  • Package testing must cover: initial installation, upgrades from the previous OpenStack release, and upgrades from the previous LTS and non-LTS Ubuntu release.
  • All test failures must be documented as bugs in Launchpad, with regressions marked Fix Released before the packages are allowed to exit QA.
  • Test results are posted publicly and announced via a mailing list specifically created for this effort only.

Only upon successfully exiting QA will packages be pushed into the Ubuntu Cloud archive.

What Happens With OpenStack Support and Maintenance in 14.04?

Good question.  The cycle could repeat itself; however, at this point Canonical is not making such a commitment.  If the rate of innovation and growth of the OpenStack project matures to a point where users become less likely to need the next release for its improved stability and/or quality, and instead just want it for a new feature, then we would likely return to our traditional LTS maintenance and support model.

Ubuntu Server is No Longer the Best OS for Cloud Computing.

Okay, so now that I got your attention….let me explain.

Over this past year and a half (maybe a little longer), I’ve seen Ubuntu Server explode in the number and types of deployments, specifically around areas involving cloud computing, but also in situations involving big data and ARM server deployments.  This has all occurred at a time when people and organizations are having to do more with less…less lab space…less power…less people, which of course all leads to the real desire of operating at less financial cost.  I’ve come to the conclusion that when I said at the 11.10 UDS that we should focus Ubuntu Server on being the best OS for cloud computing, I was aiming too low.  It’s awesome that we’ve essentially done this with our OpenStack integration efforts for Ubuntu Cloud, but we can do more…we can do better.  I now believe that for 12.04 LTS and beyond, what Ubuntu Server should actually drive towards is being the best OS for scale-out computing.

Scale-Out is Better than Scale-Up

Scale-out computing is the next evolutionary step in enterprise server computing. It used to be that if you needed an enterprise-worthy server, you had to buy a machine with a bunch of memory, a high-end CPU configuration, and a lot of fast storage. You also needed to plan ahead to ensure what you purchased had enough open CPU and memory slots, as well as drive bays, to make sure you could upgrade when demand required it.  When the capacity limit (CPU, memory, and/or storage) of this server was hit, you had to replace it with a newer, often more expensive one, again planning for upgrades down the road.  Finally, to ensure high availability, you had to have one or two more of these servers with the same configuration.  Companies like Google, Amazon, and Facebook then came along and recognized that they could use low-cost, commodity hardware to build “pizza box” servers to do the same job, instead of relying on expensive, mainframe-like servers that needed costly redundancy built into every deployment.  These organizations realized that they could rely on a lot of cheap, easy-to-find (and replace) servers to effectively do the job a few scaled-up, high-end (and high-cost) servers could tackle.  More work could be accomplished, with a reduced risk of failure, by exploiting the advantages a scale-out solution provided. If a machine were to die in a comparable scale-up configuration, it would be very costly in both time and money to repair or replace it.  The scale-out approach allowed them to use only what they needed and quickly/easily replace systems when they went down.

Fast forward to today, and we have an explosion of service and infrastructure applications, like Hadoop, Ceph, and OpenStack, architected and built for scale-out deployments.  We even have the Open Compute Project focused on designing servers, racks, and even datacenters to specifically meet the needs of scale-out computing.  It’s clear that scale-out computing is overtaking scale-up as the preferred approach to most of today’s computational challenges.

With Great Scale, Comes Great Management Complexity

It’s not all rainbows and unicorns though…scale-out comes with its own inherent problems.  There’s a great paper published by IBM Research called “Scale-up x Scale-out: A Case Study using Nutch/Lucene”, where the researchers set out to measure and compare the performance of a scale-up versus scale-out approach to running a combined Nutch/Lucene workload.  Nutch/Lucene is an opensource framework written in Java for implementing search applications consisting of three major components: crawling, indexing, and query.  Their results indicated that “scale-out solutions have an indisputable performance and price/performance advantage over scale-up”, and that “even within a scale-up system, it was more effective to adopt a “scale-out-in-a-box” approach than a pure scale-up to utilize its processors efficiently”, i.e. use virtualization technologies like KVM.  However, they also go on to conclude that

“scale-out systems are still in a significant disadvantage with respect to scale-up when it comes to systems management. Using the traditional concept of management cost being proportional to the number of images, it is clear that a scale-out solution will have a higher management cost than a scale-up one.”
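
For readers who want to experiment with that “scale-out-in-a-box” approach on Ubuntu, here’s a minimal sketch of checking for and installing KVM. The package names are the ones from the 12.04-era archive, so adjust for your release:

    # Verify the CPU exposes hardware virtualization extensions
    sudo apt-get install cpu-checker
    kvm-ok

    # Install the KVM hypervisor and libvirt management tools
    sudo apt-get install qemu-kvm libvirt-bin

    # Confirm the hypervisor is reachable; a fresh install lists no guests
    sudo virsh list --all

From there, each VM you carve out of the box becomes another image to manage, which is exactly the management-cost trade-off the quote above is getting at.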

These disadvantages are precisely what I see Ubuntu Server attempting to account for over the next few years.  I believe that in Ubuntu Server 12.04 LTS, we have already started to address these issues in several specific ways.

Power Consumption

One obvious issue with scale-out computing is the need for space to store your servers and provide enough power to run/cool them.  We haven’t figured out how to shrink the size of your server through code, so we can’t help with the space constraints.  However, we have started to develop solutions that can help administrators use less power to run their deployments. For example, we created PowerNap, which is a configurable daemon that can bring a running server to a lower power state according to a set of configuration preferences and triggers.
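
As a rough sketch of what using PowerNap looks like in practice, the steps below install the daemon and point you at its configuration. The config file path is the packaged default, but the exact option names vary by version, so treat them as something to look up rather than copy:

    # Install the PowerNap daemon from the Ubuntu archive
    sudo apt-get install powernap

    # Review the packaged configuration, which defines the monitors
    # (what counts as "activity") and the low-power action to take when
    # the server goes quiet; option names vary by version
    sudo ${EDITOR:-nano} /etc/powernap/config

    # Restart the daemon so it picks up the new settings
    # (assumes the packaged init script is named "powernap")
    sudo service powernap restart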

As a company, Canonical also began investing in supporting processor technologies focused on delivering a high rate of operations at low power consumption.  ARM has a long-standing history of providing processors that use very little power; for server applications, that meant you could drive processor density up and still keep power consumption relatively low. With this greater density, server manufacturers started to see opportunities for building very high-speed interconnects that allow these processors to share data and cooperate quickly and easily. ARM server technology companies such as Calxeda can now build computing grids that don’t require water cooling or an in-house backup generator running when you turn them on.  With the Cortex-A9 and Cortex-A15 processors in particular, the performance differential between ARM processors and x86 is starting to shrink significantly.  We are also getting closer to full 64-bit support in the coming ARMv8 processors, which will still retain the low-power and low-cost heritage of the ARM processor.  Enterprise server manufacturers are already planning to put ARM processors into very low-cost, very dense, and very robust systems to provide the kind of functionality, interconnectivity, and compute power that used to only be possible in mainframes.  Ubuntu Server 12.04 LTS will support ARM, specifically the hard-float compilation configuration (armhf).  With our pre-releases already receiving such good performance reviews, we are excited about the possibilities.  If you want to know more about what we’ve done with ARM for Ubuntu Server, I recommend you start with a great FAQ posted on our wiki.
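
If you’re ever unsure whether a given install is actually the hard-float port, two quick commands will tell you:

    # The architecture the packages were built for; an ARM hard-float
    # install of Ubuntu Server 12.04 LTS reports "armhf"
    dpkg --print-architecture

    # The kernel's view of the hardware, e.g. "armv7l" on Cortex-A9/A15
    uname -m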

Support Pricing

Traditional license and subscription support models are built for scale-up solutions, not scale-out.  These offerings price either by the number of users or by the number of cores per machine, which is within reason when deploying onto a small number of machines, i.e. under 100…maybe a bit higher depending on the size of the organization.  The base price gets you access to security updates and bug fixes, and you have to pay more to get more, i.e. someone on the phone, email support, custom fixes, etc.  This is still acceptable to most users in a scale-up model.

However, when the solution is scale-out, i.e. thousands of machines or more, this pricing gets way out of control.  Many of the license and subscription vendors have recently wised up to this and offer cluster-based pricing, which isn’t necessarily cheap, but is certainly much less costly than the per-socket/CPU/user approach.  The idea is that you pay for the master or head node, and then can add as many slave nodes as you want for free.

Ubuntu Server provides security updates and maintenance for the life of the release…for free.  That means for an LTS release of Ubuntu Server, users get five years of free maintenance.  If you need someone to call or custom solutions, you can pay Canonical for that…but if you don’t…you pay nothing.  It doesn’t matter if you have a few machines or over a thousand, security updates and maintenance for the set of supported packages shipped in Ubuntu are free.

Services Management

Deploying interconnected services across a scale-out deployment is a PITA. After procuring the necessary hardware and finding lab space, you have to physically set up the machines, install the OS and required applications, and then configure and connect the various applications on each machine to provide the desired services. Once you’ve deployed the entire solution, upgrading or replacing the service applications, modifying the connections between them, scaling out to account for higher load, and/or writing custom scripts for re-deployment elsewhere requires even more time…and pain.

Juju is our answer to this problem.  It focuses on managing the services you need to deliver a single solution, above simply configuring the machines or cloud instances needed to run them.  It was specifically designed, and built from the ground up, for service orchestration. Through the use of charms, Juju provides you with shareable, re-usable, and repeatable expressions of DevOps best practices. You can use them unmodified, or easily change and connect them to fit your needs. Deploying a charm is similar to installing a package on Ubuntu: ask for it and it’s there, remove it and it’s completely gone.  We’ve dramatically improved Juju for Ubuntu Server 12.04 LTS, from integrating our charm collection into the client (removing the need for bzr branches) to rolling out a load of new charms for all the services you need…and probably some you didn’t know you wanted.  As my good friend Jorge Castro says, the Juju Charm Store Will Change the Way You Use Ubuntu Server.
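
To make the “like installing a package” comparison concrete, here’s a minimal sketch using the 12.04-era juju client and two of the stock charms. The mysql/wordpress pair is just the standard demo; substitute the services you actually care about:

    # Stand up the environment; machine 0 hosts the Juju state service
    juju bootstrap

    # Deploy two services straight from their charms
    juju deploy mysql
    juju deploy wordpress

    # Relate them; the charm hooks handle the actual wiring and config
    juju add-relation wordpress mysql

    # Open wordpress to outside traffic and watch the deployment converge
    juju expose wordpress
    juju status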

Deployment Tools

In terms of deployment, we recognized this hole in our offering last cycle and rolled out Orchestra as a first step, to see what the uptake would be.  Orchestra wasn’t an actual tool or product, but a meta-package pointing to existing technologies, like cobbler, already in our archive.  We simply ensured the tools we recommended worked together, so that in 11.10 you could deploy Ubuntu Server across a cluster of machines easily.

After 11.10 was released, we realized we could extend the idea from simple, multi-node OS install and deployment to a more complex offering of multi-node service install and deployment.  This effort would require us to do more than just integrate existing projects, so we decided to create our own project called MAAS (Metal as a Service), which would be tied into Juju, our service orchestration tool.

Ubuntu 12.04 LTS will include Canonical’s MAAS solution, making it trivial to deploy services such as OpenStack, Hadoop, and Cloud Foundry on your servers. Nodes can be allocated directly to a managed service, or simply have Ubuntu installed for manual configuration and setup.  MAAS lets you treat farms of servers as a malleable resource for allocation to specific problems, and re-allocation on a dynamic basis.  Using a pretty slick user interface, administrators can connect, commission, and deploy physical servers in record time, re-allocate nodes between services dynamically, and keep them all up to date and, in due course, retire them from use.
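
To give a feel for how MAAS and Juju fit together, here’s a sketch of pointing a Juju environment at a MAAS region and bootstrapping from it. The environments.yaml keys shown are the ones from the 12.04-era MAAS provider as I remember them, so treat the field names and URL as illustrative and check the MAAS/Juju documentation for your release:

    # ~/.juju/environments.yaml needs a MAAS section along these lines
    # (field names illustrative; check the docs for your release):
    #
    #   environments:
    #     maas:
    #       type: maas
    #       maas-server: 'http://<your-maas-server>/MAAS'
    #       maas-oauth: '<API key from your MAAS account preferences>'
    #       admin-secret: '<any shared secret>'
    #       default-series: precise

    # With that in place, bootstrap pulls a commissioned node from MAAS,
    # and charms deploy onto further nodes just as they would onto cloud
    # instances
    juju bootstrap -e maas
    juju status -e maas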

We’ve Come a Long Way, But

There’s a lot more we need to do.  What if the MAAS commissioning process included hardware configuration, for example RAID setup and firmware updates?  What if you could deploy and orchestrate your services by mouse click or touch…never touching a keyboard?  What if your services were allocated to machines based on their power footprint?  What if your bare-metal deployment could also be aware of the Canonical hardware certification database for systems and components, allowing you to quickly identify systems that are fully certified or might have potentially problematic components?  What if your services auto-scaled based on load without you having to be involved?  What if you could have a true hybrid cloud solution, bursting up to a public cloud (or clouds) of your choosing without ever having to rewrite or rearchitect your services?  These types of questions are just some of the challenges we look to take on over the next few releases, and if any of it interests you…I encourage you to join us.
