This post was written by Rob Thatcher, co-author of Team Guide to Software Operability.
We attended the OpenStack London event on 4th June; the day was insightful, and highlighted perspectives from many different participants in the OpenStack community. There were several interesting talks and demonstrations of both OpenStack components (Cinder, Swift, etc.) and of some new features and products being built around OpenStack by Ubuntu (Juju, Cloud-init), Mirantis, Inktank (Ceph) and others.
A consistent theme that emerged from the talks was that distributed, ‘cloud’-based systems need a new approach from IT teams: “iteration is the only effective way to understand the right architecture”; “really understand the skills budget you need before you move to cloud”; the challenges [and opportunities] brought by DevOps; the need for a focus on operability.
It was a day of people called ‘Mark’ (!), with keynotes from Mark Collier of OpenStack, Mark Shuttleworth of Canonical, and several others. We were live-blogging from the hall, and we’ve included some tweets from the day below.
Proceedings kicked off with an informative and enjoyable keynote from Mark Collier, COO at the OpenStack Foundation, who gave an overview of what has been achieved for and in the build-up to the 'Icehouse' release of OpenStack. Mark was keen to emphasise that a particular focus has been on code quality, with 53 third-party systems doing CI testing in a concerted push for quality. A reliable, software-driven cloud system like OpenStack allows us to drive "expense down, experimentation & innovation up, [and to] fail cheaply", Mark explained. However, in order to take full advantage of the speed possible with OpenStack, "organisations need to change their culture for building software systems", a point emphasised by other speakers.
Our second Mark of the day, a certain Mr Shuttleworth, outlined how Ubuntu have been part of the OpenStack community from the very early days, and have a significant interest; we were told "7 out of 8 'Superusers' in the cloud are using Ubuntu", and that users will "Always [be able to] deploy OpenStack on the latest enterprise Ubuntu". Mark also gave a highly interesting demo of Ubuntu's 'Orange Box' concept, and noted that a good deal of effort in the community had gone into enhancing the "install on bare metal" capabilities for the 'Icehouse' release. Orange Box looks to be effectively a small OpenStack cloud in a portable box, ideal for training, making pitches, and of course demonstrating the latest Juju code for orchestrating OpenStack components on MaaS ('Metal-as-a-Service').
The basic premise for Juju looks a bit like this:
YAML definition files –> Juju –> Puppet/Chef/Shell (etc.) –> Bare Metal (MaaS).
Have a look at https://juju.ubuntu.com and http://jujucharms.com for more information.
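To make the "YAML in, orchestration out" idea concrete, here is a minimal sketch of what a Juju bundle definition might look like; the charm names, unit counts, and relation are illustrative assumptions, not taken from the talk:

```yaml
# Hypothetical Juju bundle: declares what to deploy and how the
# services relate; Juju then drives the provisioning (e.g. onto MaaS).
services:
  wordpress:
    charm: "cs:precise/wordpress"   # illustrative charm name
    num_units: 2                    # scale out the front end
  mysql:
    charm: "cs:precise/mysql"       # illustrative charm name
    num_units: 1
relations:
  - ["wordpress:db", "mysql:db"]    # wire the blog to its database
```

The appeal is that the operator declares the desired topology, and Juju (via charms, which may themselves call Puppet, Chef, or shell) handles the ordering, configuration, and placement on bare metal.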
Providing some more general background around the areas in which Ubuntu are active, Mark gave us some insight into how they are selecting components from the OpenStack Interoperability Lab (OIL) and building out clouds more than 100 times per day, also noting that Telco customers are ‘breaking the hard rocks early’, as a prerequisite to OpenStack powering the next generation networks.
In addition to work on OpenStack, Ubuntu is pioneering and supporting KVM, Live Migration, Cloud-init (auditable base images), LXC and Ceph, although it remains to be seen to what extent that will continue following the Red Hat/Inktank acquisition announcement.
Keeping the cost of cloud lean avoids building a white elephant; in support of that ethos, Ubuntu are offering a "$15 per server, per day, fully managed OpenStack with SLA". So if you're running fewer than the proposed 200-node 'magic number', at which point you should have someone else build and deliver your cloud, this offer may well be for you.
Mark Shuttleworth emphasised the importance of agility when designing infrastructure: “iteration is the only way to understand the right [infrastructure] architecture for your needs”.
We were also reminded that modern, distributed software systems are complex: “operating distributed systems is *hard* and requires a focus on operability”.
We next heard from John Griffith of SolidFire, who specialise in storage array technology. John observed that the Public Cloud offerings currently available have really raised the bar for expectations, and he gave an interesting view of how SolidFire sees the datacentre as a computer and OpenStack as its OS:
Timothy Eades from vArmour gave a lightning tour of their distributed intelligent firewall product suite, guiding us through how vArmour sees 'east-west' traffic as the big control area, with the revolutionary thinking being the placement of the firewall control plane technology right alongside the data and compute assets.
Chris Jackson (@chriswiggy) from Rackspace delivered an interesting talk using the hunt for the Higgs boson as the metaphor for describing how he sees the marketplace for cloud today. Looking at cloud services and technology, his observations ran along the lines of:
- Confusion – What is cloud to whom? Dropbox, OpenStack, Amazon?
- Hype – Every man and his dog is offering 'cloud'-related products and services.
- Impending disaster – Fear of lacking a strategy or understanding.
Chris outlined how DevOps and ITIL can pull together to produce anti-fragile systems, but that this needs a change in mindset for many in a traditional Operations role. With the thought in mind that Operations is about maximising flow rate from idea creation to value realisation (the “time to deliver value”), Chris asked for a show of hands in relation to which of the following could be the missing “cloud Higgs boson” (silver bullet):
There were just a few hands for any one of those three; a blend of Applications, Integration, and Operations – specific to each organisation – is the secret to success with cloud technologies.
Chris’s slides are on Slideshare here: http://www.slideshare.net/ChrisJackson11/the-search-for-clouds-god-particle
Closing the day's formal proceedings, Monty Taylor from HP gave a characteristically amusing and informative talk. His characterisation of software applications & workloads was useful:
- Monolithic (“just sits there and Java’s at us”): not designed for rapid or easy scaling, and does not need to be changed – e.g. Gerrit
- Scalable: can be scaled out, but the process is fairly manual – e.g. Jenkins slaves
- Elastic: designed for scaling out with a high degree of automation – e.g. bespoke web front end application
Monty pointed out that there is no ‘one size fits all’ for cloud technology, and some workloads or organisations would work better with one cloud (e.g. Amazon) whereas other situations demand other cloud types (perhaps focused on speed, compute power, or other metric):
A related point was that the 'biodiversity' of OpenStack implementations is actually sound engineering practice: it allows for dual or even triple redundancy in cloud suppliers, reducing the likelihood that all suppliers will exhibit the same failure mode (although in reality you'd probably want to bridge into AWS as well, to guard against a common failure mode in OpenStack itself).
Monty’s slides are worth a look: http://www.slideshare.net/openstackil/hybrid-cloud-workloads-monty-taylor
Towards the end of the day we had an engaging discussion with members of the Rackspace DevOps Automation Service team, who are doing some very interesting work reaching out from ‘hosting’ towards clients who have limited experience with infrastructure automation.
My colleague Matthew wrote about this in 2013, calling it Type 4 ‘DevOps-as-a-Service’, and it’s good to see this model being espoused by Rackspace.