DevOps, Automation & The Race to the CLI. A New Cycle?

DevOps and automation have certainly taken some mind share in the IT community, and it seems to be becoming a universally accepted truth that we need to automate operations in order to keep up with the rapid pace of development in the data center. There is a clear trend of moving away from GUI-based configuration towards the CLI (Command Line Interface), scripting and agile programming in order to achieve operational objectives in our environments. This is also evidenced in a seemingly ubiquitous substitution of job descriptions: the “System Administrator” role appears to be disappearing, with a new “DevOps Engineer” role supplanting it in many places. What’s unusual is that, job title aside, the job descriptions look very much the same, with additional scripting skills coming to the fore.

Minority Report UI

Even the King of the GUI, Microsoft, has seen this trend and, with Windows Server 2012, moved away from the full-fat GUI approach in favour of PowerShell as the primary point of interaction with the OS. Windows Server now installs as the Core version (no GUI) by default, and using a GUI is expected to be the exception rather than the norm. I have to say that’s not necessarily a bad thing: PowerShell is probably one of the initiatives Microsoft has got right in recent years, and those seem to be few and far between.

There are many obvious benefits to these text-based configuration approaches, and it is inevitable things will continue in that direction. As workloads in the data center become more transient, with instances spun up and discarded frequently, performing the same repeatable operations across many similar objects will only be practical with scripting or comparable tooling.
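To make that concrete, here is a minimal sketch of what “repeatable operations by script” looks like in practice: a loop that provisions many identical instances through an API instead of a GUI wizard. The endpoint, token and payload are hypothetical placeholders rather than any specific vendor’s interface.

# Provision many identical, disposable instances from a loop.
# The URL, token and payload below are hypothetical placeholders.
import requests

API = "https://cloud.example.com/api/v1/instances"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}        # hypothetical token

def provision(name, flavour="small", image="ubuntu-lts"):
    """Request one instance; call repeatedly for identical workloads."""
    payload = {"name": name, "flavour": flavour, "image": image}
    resp = requests.post(API, json=payload, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    print(f"requested {name}: {resp.status_code}")

if __name__ == "__main__":
    for i in range(10):            # ten identical web workers
        provision(f"web-{i:02d}")  # tearing them down is a similar loop

Running the same loop with a different range or payload is the whole point: the operation is identical whether you need ten instances or a thousand.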

Having been around IT as long as I have, though, I can’t help but wonder if this is just another “cycle”. It has taken us 30 years to move away from the centralised, text-driven mainframes of the last century, but we are definitely heading back in that direction. IT tends to be cyclical in nature, and I’d hazard a guess that once we’ve all got to grips with DevOps, there will be a new generation of graphical tools in the distant but imaginable future. We are, after all, a primarily visual species. If and when DevOps fully takes hold, is it here to stay or just the returning curve of a technology cycle?

VMware is Goldmember in OpenStack

It’s been a little surprising to me that there’s a decent amount of buzz in the market surrounding OpenStack, yet few people are clear about what it actually is or about VMware’s involvement in it. In some discussions it is occasionally wielded as the “Deathbringer” for all things VMware and a work-in-progress alternative to everything VMware does. More often than not, there is surprise in those discussions when it comes to light that VMware is a member of the OpenStack Foundation. Furthermore, it contributes enough resources, funds and activity to be a Gold Member of the Foundation.

OpenStack Gold Member

So how does that work?

Well, first and foremost, when compared to a product suite such as VMware’s vCloud Suite, it should be understood that OpenStack is not a fully featured product stack that caters for all of the functionality required to operate a private, hybrid or public cloud. OpenStack is a plug-and-play framework that defines a common set of APIs and interfaces to enable the provisioning and operation of cloud capabilities. The key word here is framework. This framework provides a definition and set of rules for how the components in a cloud should communicate and serve each other. OpenStack doesn’t, for example, provide compute virtualisation, or network and storage virtualisation for that matter. Yes, you still need a hypervisor in an OpenStack implementation. There is definitely some confusion over this point, and OpenStack (open source cloud management) is often mentally bundled together with KVM (one open source hypervisor). This is of course incorrect: KVM is not OpenStack and vice versa. The hypervisor could be any number of those on the market today; remember, it’s plug-and-play. This is one example of where VMware has significant relevance to OpenStack: you can use vSphere as the hypervisor in an OpenStack deployment.
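As a small illustration of that plug-and-play principle, the sketch below uses the openstacksdk Python library to boot an instance through the standard Compute (Nova) API. The caller never needs to know whether the hypervisor behind Nova is KVM or vSphere. The cloud name and the image, flavour and network IDs are placeholders for values from your own environment.

# A minimal sketch, assuming openstacksdk is installed and a clouds.yaml
# entry named "mycloud" exists. All IDs below are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

server = conn.compute.create_server(
    name="demo-instance",
    image_id="<image-uuid>",                # placeholder
    flavor_id="<flavor-uuid>",              # placeholder
    networks=[{"uuid": "<network-uuid>"}],  # placeholder
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)

Whether Nova schedules that instance onto KVM or onto a vSphere cluster is a deployment decision made behind the API, not something the consumer of the API has to care about.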

Alongside the compute virtualisation provided by vSphere, it’s also possible to use VMware technologies such as VSAN (Virtual SAN) to serve up storage, along with NSX for network functionality. In fact, after its acquisition of Nicira, VMware became extremely important to the development of OpenStack’s networking projects. So it is clear there are many areas of collaboration between VMware and OpenStack. It would be dismissive not to acknowledge that there is competitive overlap between some VMware products and some OpenStack projects, but this isn’t an all-encompassing “Us vs Them” discussion. VMware’s approach to OpenStack is very much one of being a good neighbour in a growing ecosystem. If every element of the stack is to be plug-and-play, VMware will make best efforts to ensure that its own components adhere to the API specifications and provide the richest set of functionality available to the market.
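The same principle applies on the networking side. The short sketch below (again using openstacksdk, with “mycloud” as an assumed clouds.yaml entry) lists networks through the standard Networking (Neutron) API; whether the backend plugin is NSX or something else entirely is invisible to the caller.

# List networks via the Neutron API; the backend plugin is abstracted away.
import openstack

conn = openstack.connect(cloud="mycloud")  # assumed clouds.yaml entry
for net in conn.network.networks():
    print(net.id, net.name)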

VMware’s OpenStack membership is established and gaining momentum. VMware is a Gold Member of the OpenStack Foundation and continues to increase its activity and contributions across all relevant projects. Although a relatively late arrival to the Foundation, VMware now sits in the top 10 contributing companies for the project as a whole (rankings based on source code commits).

OpenStack Commits by Company

If you would like to know more about getting started with VMware and OpenStack, please read the following whitepaper:

http://www.vmware.com/files/pdf/techpaper/VMWare-Getting-Started-with-OpenStack-and-vSphere.pdf

Software Defined X – Automation: Job Stealer or Job Enabler?

I’ve had many conversations in recent weeks about the commoditization of the data center, with many people concerned about the diminishing need for specialist hardware and the effect of greater automation through software. More specifically, how these might affect the job prospects of administrators and other technical roles in the modern IT environment.

We are in an era of rapid evolutionary change and, as change so often is, it can be unsettling for many. There seems to be a wide variety of reactions to these changes. At one end there is complete denial and a desire to retain the status quo, with an expectation that these industry changes may never occur. In the middle, we have those who tip their hat in recognition of the general direction of the trends but expect things to happen more gradually. And at the other end there are those who embrace the change, expecting to gain some level of competitive advantage by being a first mover. If there is one thing that is certain, it is that if you find yourself at the wrong end of that spectrum, you will most definitely find yourself in difficulty.

No Change Around Here

The change is happening, and happening more quickly than most expect. The automation of data center operations and a focus on innovation are key objectives for most organisations at the moment. “Keeping the lights on” tasks are becoming less relevant in this world.

Casting Off the Shackles of Hardware

Developing custom, hardware-based intelligence is complex. It often involves the research and production of custom chipsets for these devices, and because of the research, prototyping and production requirements of this type of operation, we are usually working to a 2-3 year development and release cycle. In fact, most organisations have been used to a similar procurement cycle, executing a hardware refresh every 3-5 years.

This has worked historically, but today there are new kids on the block, and they are eating the market with a new approach to developing and delivering services. Pioneers like Facebook, Google and Netflix have fundamentally changed how service delivery works. These operations have decoupled their software intelligence from the hardware and deliver their services on inexpensive commodity hardware. This not only reduces their capital outlay, it also provides them with a platform to rapidly deliver agile software services. In these types of environments, it is not uncommon to see software releases move from an 18-24 month cycle to a daily or weekly cycle. Strategically, they can pivot at a moment’s notice, and they can scale or contract operations at very low cost. As you might imagine, this kind of agility has become very challenging from a competitive standpoint for companies like Microsoft, which have had 3-4 year major release cycles baked into the fibre of their operational approach (e.g. Exchange, Windows Server, etc.).

What About Automation?

The more we move towards software-controlled infrastructures, the more easily they can be automated. Most solutions today are built with some kind of API (Application Programming Interface) to enable other applications to programmatically control or manage them in some way. In this decade, the industry has moved firmly away from proprietary API technologies towards standardised ones, more often than not based on the RESTful architectural style. Alongside this, we are seeing the rise of DevOps tools such as Puppet and Chef, which help bridge the gap between IT operations and the developers actually creating the applications that organisations rely on.
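Tools like Puppet and Chef are declarative and convergent: you describe the desired state of a system, and the tool changes only what differs from that description, so runs are safe to repeat. The sketch below is not their DSL, just that convergence idea expressed in plain Python against a throwaway config file.

# Desired-state convergence in miniature: compare actual state with the
# declared state and apply only the difference. Idempotent by construction.
import os

DESIRED = {
    "/tmp/demo-app.conf": "listen_port = 8080\nlog_level = info\n",
}

def converge(desired):
    for path, content in desired.items():
        current = None
        if os.path.exists(path):
            with open(path) as f:
                current = f.read()
        if current != content:
            with open(path, "w") as f:  # correct the drift
                f.write(content)
            print(f"corrected drift in {path}")
        else:
            print(f"{path} already in desired state")

if __name__ == "__main__":
    converge(DESIRED)

Run it twice and the second run changes nothing, which is exactly the behaviour that makes this style of tooling practical for automating large estates.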

So What Does This Mean For the Modern IT Professional?

As these tools and API interoperability develop, IT operations roles will undoubtedly have to evolve too. This does not mean that there will be fewer jobs in IT. In fact, IT skills are more relevant than ever, but those skills have to change a little. It is time to start moving up the stack, putting more focus on innovation at the application and service level rather than on keeping the lights on down in the bits and bytes of the infrastructure. Approached that way, these industry changes become a massive job and career enabler, not a cause of suspicion and concern for job security.

I had a chat with a family member this week which summed this up really well for me. We were discussing the Luddites, a 19th century movement in my home region, the North of England. The Luddites were a group of textile workers who protested against the mechanisation of garment production. They did so violently, under the banner of “those machines are taking our jobs, we’ll have nothing to do and we’ll all starve”. A couple of hundred years on, we can see that starvation didn’t happen and those same people survived by finding new ways to innovate. On a side note, I once received a letter from a CBE who had seen me on TV discussing an environmental issue and called me a Luddite. I found this most amusing given the industry I work in and my lust for technological progress. In the same conversation with the family member, I mentioned that I was looking forward to the introduction of robot-taxis (e.g. self-driving Google cars) because of the efficiencies and reduced cost of car sharing. They replied, “but that could be 20,000 taxi drivers losing their jobs in Manchester alone”. I replied, “Yes, but that’s also 20,000 people who could alternatively be working on curing cancer, pioneering space travel or solving the world’s energy problems”.

Conclusion – Software Defined X – Automation: Job Stealer or Job Enabler?

For me, I see it as a Job Enabler. My advice… embrace the change, relish the opportunity to innovate and change the world for the better… one step at a time.