Networking Primer – Part 1: Introduction

The world of networking has been fairly static for many years. It has historically been characterised by fixed infrastructures that require only infrequent changes. Those configuration changes were performed via command line interfaces by network engineers, usually sitting with a laptop and a cable plugged directly into a piece of networking hardware. Activities were manual, repeated for every individual device and extremely error prone due to the non-human-readable nature of network configuration information.

The workloads running in the modern datacenter have most definitely changed in recent years, and it has become apparent that the capabilities of current networking devices and operational approaches simply cannot keep up with the pace of change. The rapid and overwhelming success of server virtualisation has fundamentally changed the way applications consume resources, and the network has become something of a bottleneck in providing an agile, reliable and cost-effective means of delivering new applications. In addition to the shortcomings of existing technology, operational processes and a tendency to silo server, storage and networking departments have also become major blockers to any significant progress in dealing with these challenges.

In the last 2-3 years the industry has recognised that these challenges need to be addressed, and there has been a marked shift in strategy. There is wide realisation that the boundaries need to break down and that siloed teams need to converge into singular, collaborative, multi-skilled teams delivering IT in a more integrated manner. The technology also needed to change, and the Software Defined Networking (SDN) movement has been central to this shift.

Some time ago I worked in the military messaging field and had wide exposure to networking as it relates to battlefield communications protocols. The concepts and NATO protocols that underpin military messaging are not so different from those used in our datacenters, and I have spent the last six months or so getting to grips with the datacenter networking space. I'd like to share what I've learned and hopefully provide a reasonable learning resource for those administrators who are preparing themselves for the new converged infrastructure world. I'll be taking things right back to basics, explaining at a beginner level what networking fundamentally is and working through to how we are addressing the key challenges that organisations face today.

Next: Networking Primer – Part 2: Defining Networking with OSI and TCP/IP Suite

 

Software Defined X – Automation: Job Stealer or Job Enabler?

I've had many conversations in recent weeks about the commoditization of the data center, with many people concerned about the diminishing need for specialist hardware and the greater automation that software brings. More specifically, they worry about how that might affect the job prospects of administrators and other technical roles in the modern IT environment.

We are in an era of rapid evolutionary change, and this can be unsettling for many, as change often is. There seems to be a wide variety of reactions to these changes. At one end there is complete denial and a desire to retain the status quo, with an expectation that these industry changes may never occur. In the middle, there are those who tip their hat in recognition of the general direction of the trends but expect things to happen more gradually, and then there are those who embrace it, expecting to gain some level of competitive advantage by being a first mover. One thing is certain: if you find yourself at the wrong end of that spectrum, you will most definitely find yourself in difficulty.

No Change Around Here

The change is happening, and happening more quickly than most expect. The automation of data center operations and a focus on innovation are key objectives for most organisations at the moment. "Keeping the lights on" tasks are becoming less relevant in this world.

Casting Off the Shackles of Hardware

Development of custom hardware-based intelligence is complex, often involving the research and production of custom chipsets for these devices. Due to the research, prototyping and production requirements of this type of operation, we are usually working to a 2-3 year development and release cycle. In fact, most organisations have been used to a similar procurement cycle, executing a hardware refresh every 3-5 years.

This has worked historically, but today there are new kids on the block and they are eating the market with a new approach to developing and delivering services. Pioneers like Facebook, Google and Netflix have fundamentally changed how service delivery works. These operations have decoupled their software intelligence from hardware and deliver their services on inexpensive commodity hardware. This not only reduces their capital outlay, it also provides them with a platform to rapidly deliver agile software services. In these types of environments, it is not uncommon to see software releases move from an 18-24 month cycle to a daily or weekly cycle. Strategically they can pivot at a moment's notice, and they can easily scale or contract operations at very low cost. As you might imagine, this kind of agility has become very challenging from a competitive standpoint for companies like Microsoft, who have had 3-4 year major release cycles baked into the fibre of their operational approach (e.g. Exchange, Windows Server, etc.).

What About Automation?

The more we move towards software-controlled infrastructures, the more easily they can be automated. Most solutions today are built with some kind of API (application programming interface) to enable other applications to programmatically control or manage them in some way. In this decade, the industry has moved firmly away from proprietary API technologies towards standardised ones, more often than not based on the RESTful API architecture. Alongside this we are starting to see the rise of DevOps tools such as Puppet and Chef, which help bridge the gap between IT operations and the developers actually creating the applications that organisations rely on.
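As a rough illustration of what "programmatically control or manage" looks like in practice, here is a minimal Python sketch of a RESTful exchange with a hypothetical infrastructure management API. The endpoint, token and payload are invented purely for illustration; real products each define their own resources and authentication.

```python
# Minimal sketch of driving an infrastructure service over a RESTful API.
# The endpoint, token and payload below are hypothetical, for illustration only.
import requests

API = "https://infra.example.com/api/v1"            # hypothetical management endpoint
HEADERS = {
    "Authorization": "Bearer <api-token>",          # authentication is vendor-specific
    "Content-Type": "application/json",
}

# Query the current state of the virtual networks (GET)
networks = requests.get(f"{API}/networks", headers=HEADERS).json()
print("Existing networks:", len(networks))

# Create a new network segment programmatically (POST) instead of via a CLI session
payload = {"name": "web-tier", "vlan": 120, "subnet": "10.0.120.0/24"}
response = requests.post(f"{API}/networks", json=payload, headers=HEADERS)
response.raise_for_status()
print("Created network:", response.json()["id"])
```

Because the interface is just HTTP and JSON, the same calls can be wrapped by tools such as Puppet or Chef, scheduled, or version-controlled alongside application code, which is exactly what makes software-controlled infrastructure so much easier to automate.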

So What Does This Mean For the Modern IT Professional?

As the development of these tools and API interoperability progresses, IT operations roles will undoubtedly have to evolve. This does not mean that there will be fewer jobs in IT. In fact, IT skills have become more relevant than ever, but those skills have to change a little. It is time to start moving up the stack by putting more focus on innovation at the application and service level, rather than keeping the lights on down in the bits and bytes of the infrastructure. By doing this, these industry changes should become a massive job and career enabler, not a cause of suspicion and concern for job security.

I had a chat with a family member this week which summed this up really well for me. We were discussing the Luddites, a 19th century movement in my home region of the North of England. The Luddites were a group of textile workers who protested against the mechanisation of garment production. They did this violently, under the banner of "those machines are taking our jobs, we'll have nothing to do and we'll all starve". A couple of hundred years on, we can see that starvation didn't happen and those same people survived by finding new ways to innovate. On a side note, I once received a letter from a CBE who had seen me on TV discussing an environmental issue, calling me a Luddite. I found this most amusing given the industry I work in and my lust for technological progress. In the same conversation with the family member, I mentioned that I was looking forward to the introduction of robot-taxis (e.g. self-driving Google cars) due to the efficiencies and cost savings of car sharing. They replied, "But that could be 20,000 taxi drivers losing their jobs in Manchester alone." I replied, "Yes, but that's also 20,000 people who could alternatively be working on curing cancer, pioneering space travel or solving the world's energy problems."

Conclusion – Software Defined X – Automation: Job Stealer or Job Enabler?

For me, I see it as a job enabler. My advice… embrace the change, relish the opportunity to innovate and change the world for the better… one step at a time.

Help Me Choose a Charity

Due to the recent demise of my dear old car, I've decided to donate the remnants of its existence to charity. I found a really cool way to do this via the www.giveacar.co.uk website. They will pick the car up and then scrap or auction it to generate the largest possible donation. There are over 800 registered charities to choose from, and additional charities can register as needed.

Hani's old car

In this instance, I'm pretty open to where the donation goes, but would like it to go to one of three areas: charities that work with vulnerable children, cancer research, or human rights organisations. Please help me decide where to donate by voting for one of the following below:

  • UNICEF: the world's leading organisation working for children and their rights, with a presence in more than 190 countries and territories, reaching children on a scale like no other. They work with local communities, partners and governments to ensure every child's right to survive and thrive is upheld.
  • Cancer Research: the world's leading charity dedicated to beating cancer through research. They've saved millions of lives with their ground-breaking work into preventing, diagnosing and treating cancer.
  • Amnesty International: made up of ordinary people from across the world standing up for humanity and human rights. Their purpose is to protect individuals wherever justice, fairness, freedom and truth are denied.
  • ADDED: Neuroblastoma Children's Cancer Alliance: helps families of children suffering from neuroblastoma by providing financial assistance for the children's treatment.

Which charity should I donate my car to?

  • Neuroblastoma Children's Cancer Alliance (46%, 24 Votes)
  • Cancer Research (33%, 17 Votes)
  • UNICEF (13%, 7 Votes)
  • Amnesty International (8%, 4 Votes)

Total Voters: 52


Job Change: Heading To The Mothership

It is with great pleasure that I post about a change of role and company. I have had a phenomenal two years working as an SE at Veeam, but I am now making the move from VMware eco-system partner to VMware itself.

I'll be moving to VMware in the role of Senior SE, expanding the scope of my day-to-day work beyond just data protection. VMware has posted some fantastic results in recent weeks, including double-digit growth (almost unheard of for a company of its size). The new role will give me the opportunity to work with customers across a wide breadth of areas, with a killer product range covering Software Defined Data Center, Cloud and End User Computing.

If you've ever asked yourself the question "Why would you want to work at VMware?", here are some key facts. VMware is:

  • a $5 billion+ company currently posting double-digit growth.
  • the #1 preferred vendor for private cloud.
  • one of the Top 3 most innovative companies in 2013 (source: Forbes).

Perhaps the question you should be asking is why wouldn’t you want to work at VMware?

Monday will be my first day at the company and I can’t wait to get started. Expect more VMware focused blogs to appear in the near future.

Whitepaper: Virtual Backup Strategies: Using Storage Snapshots for Backups

Introduction
Effective data protection is a mandatory element in the modern IT environment. Historically, backup strategies were confined to the last few chapters in an administrator's manual and treated like an afterthought. Now they sit firmly at the forefront of every CIO's mind. The ability to continue business operations after a system failure and the need to fulfil stringent compliance requirements have made backup a necessity, not only for business continuity but also for business survival. The question organizations need to ask about data protection is not whether to back up their data, but how to back up their data.

IT systems are prone to rapid evolution and present a constantly shifting landscape, and the techniques used to protect those systems need to evolve as well. Perhaps one of the most significant changes in recent years has been the advent of virtualization. In the virtual world, legacy backup systems have become unfit for their purpose, causing backup windows to increase beyond a manageable scope. While this paradigm presents new challenges, it also creates new opportunities to improve efficiency, cut costs and reduce risks.

This paper will examine the use of storage snapshots as backups for virtual environments. We will evaluate their relative benefits and limitations while also considering where they fit into a holistic backup strategy when compared to a virtual disk-to-disk backup solution such as Veeam® Backup & Replication™.

Background
Pre-virtualization backup strategies were underpinned by operating system (OS) and application-level features. The typical implementation would involve installing a backup agent into an OS; the agent would be responsible for putting applications into a consistent state for backup, copying backup data across the network to a backup server and subsequently monitoring any ongoing changes.

While this worked well in the physical world, virtualization changed everything as operating systems began to share the same physical hardware. Instead of having one backup agent consuming resources from a physical host, there was an agent for each virtual machine (VM) on that host. This meant that ten agents (based on a 10:1 consolidation ratio) or even more could be contending for the host's CPU, RAM and disk resources. This contention was not only with each other, but also with the applications they were installed to protect. In addition, volumes of data increased to a level where it was no longer feasible to use standard transports to move it across the production network to the backup server. This situation clearly could not continue, as virtualization has become standard practice in datacenters worldwide.

Virtualized Layers

Where virtualization presented new challenges, it also presented new opportunities. The physical world consisted solely of the application/OS layer. The virtual world… Continue reading

PernixData Unbuttons Trench Coat at SFD3 and Reveals..

Flash Virtualization Platform.


It seems that in storage circles there is much discussion around using new technologies to cache data in faster, more accessible media. Flash is everywhere… but there are many choices for where and how you can deploy flash technology in order to alleviate the strain on storage systems, whose current SAS/SATA-based disk drives are struggling to keep up with the day-to-day IOPS requirements of many organisations.

PernixData believe they have the solution for VMware environments in the form of their Flash Virtualization Platform (FVP). They certainly have the credentials to be making such claims, with team members coming from many leading companies. This includes Satyam Vaghani, their CTO, who came from VMware and was responsible for the creation of VMware's VMFS filesystem.

FVP is a hardware-agnostic, server-side flash virtualization solution. It virtualizes PCIe flash cards and local SSDs at the server side. What I found particularly impressive is that it is software only and looks very easy to implement. It sits seamlessly between hypervisor and SAN without requiring any configuration changes to either the datastores at the hypervisor or the LUNs at the SAN. It's just an extension that is easily implemented on vSphere.

The clustered nature of the product also overcomes some of the current server-side flash device challenges. When flash caching is being used in a server, a footprint of hot, commonly accessed data is built up for the running workload. If a VM (and its associated workload) migrates to another host due to vMotion or some other reason, that footprint normally needs to be recreated from scratch. FVP resolves this by replicating copies of the footprint data to other hosts in the cluster, making it easy for a VM to pick up its cached footprint if it moves. There are also, obviously, data protection benefits to keeping multiple copies of the data in the event a server dies along with its cache.

In addition to easy implementation, the product also provides solid, easy-to-read stats on the results it has achieved: a sure-fire way to build a solid business case around the IOPS saved and the reduced requirement to scale up your SAN to deal with load.

What these new caching capabilities amount to is an entirely new storage tier between RAM and SAN. This new tier (or layer) will definitely come with challenges. One such challenge is ensuring consistent copies of data at the SAN for things like backup processes. If FVP caches the data at the server, some reads/writes never actually reach the SAN, so if you're backing up from the SAN you need a way to flush the data through in a consistent state. FVP does include a "write-through" mode, which should flush changes through to disk and stop the normal "write-back" caching. In order to achieve consistency, there will need to be careful orchestration from VSS (or pre-freeze/post-thaw scripts on Linux) to FVP to VMware snapshots and beyond. The product will have a PowerShell interface which could be used to switch between write modes for such an operation, but users should be aware that this is a requirement.
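To make that ordering concrete, here is a minimal Python sketch of the orchestration just described. Every function is a hypothetical stub; a real environment would drive VSS or pre-freeze/post-thaw scripts, the vSphere snapshot API and whatever write-mode interface FVP ultimately exposes (the planned PowerShell interface, for example). The point is only the sequence of steps, not any vendor API.

```python
# Sketch of a consistent SAN-side backup when a server-side write-back cache
# sits between the VM and the array. All functions below are hypothetical stubs.

def quiesce_applications(vm):
    print(f"{vm}: VSS freeze / pre-freeze scripts")        # application consistency

def unquiesce_applications(vm):
    print(f"{vm}: VSS thaw / post-thaw scripts")

def set_cache_mode(vm, mode):
    print(f"{vm}: cache mode -> {mode}")                    # e.g. via a vendor CLI/API

def wait_for_cache_flush(vm):
    print(f"{vm}: waiting for dirty blocks to reach the SAN")

def take_san_snapshot(lun):
    print(f"{lun}: SAN snapshot taken")
    return f"snap-of-{lun}"

def consistent_san_backup(vm, lun):
    quiesce_applications(vm)              # put applications into a consistent state
    set_cache_mode(vm, "write-through")   # stop write-back caching for this workload
    wait_for_cache_flush(vm)              # make sure no writes are stranded in the cache
    snapshot = take_san_snapshot(lun)     # the array-side copy is now consistent
    set_cache_mode(vm, "write-back")      # resume acceleration
    unquiesce_applications(vm)            # release the freeze
    return snapshot

snapshot = consistent_san_backup("exchange-vm01", "lun-07")
```

Whatever tooling is used, the essential point is that the cache must be flushed between quiescing the application and taking the array snapshot; otherwise the SAN copy can miss writes that only ever lived in the server-side cache.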

All in all, FVP looks like a great product that hits a lot of my hot buttons: easy to install, easy to use, transparent but powerful, with solid reporting on results. Although it is not publicly available as yet, it will be interesting to see the licensing model, TCO and ROI information PernixData provide. If they get that right, they could be very successful.

Here is the intro from Storage Field Day 3:

 

Echoes From the Past, The Mother of All Demos – Douglas Engelbart (1968)

We tend to envisage modern desktops with keyboards and mice as something that appeared in the early 1980s. Microsoft did a great job of commoditizing this for the masses, but they did of course get the idea (or copied the idea, depending on your point of view) from a Xerox research project that never quite got off the ground. Anything prior to the arrival of Windows stirs up visions of large mechanical devices with green text consoles and paper punch cards for input/output.

The reality is that the embryonic beginnings of the current desktop stretch back closer to 50 years, even before the Xerox project. I wasn't aware of this until I heard about a 1960s demo on a recent podcast (thanks, Speaking in Tech). The video below is of Douglas Engelbart, who unfortunately passed away in recent weeks. This is probably the first ever large-scale demo of this kind of technology. In the video, Douglas cuts a ghostly figure as he is superimposed on the film alongside the desktop he is operating. Working as a Sales Engineer, I do a lot of demos… you always like to feel you're on the cutting edge and showing your customer something new, but I think this video shows that although the technology changes, a lot of what drives demos is fundamentally the same. A great demo from a man who should be considered a predecessor to both Jobs and Gates.

Webinar: Disaster Recovery for Virtual Environments, One Simple Solution for Five Common SAN Replication Challenges

This is a replay of a webinar I ran last year; the associated whitepaper is linked below:

Whitepaper Available here: http://wp.me/p2ZZG3-fG

A new sister webinar/whitepaper focusing on the use of SAN snapshots in a holistic data protection strategy will be posted shortly.

Closing Our Doors

Hi All,

I think we are just about done. The residential application is progressing and all previous incinerator plans have been dropped.

There is still one more matter to deal with: we still have £1900 in the Campaign Fund. When we defined our constitution as a group back in 2010, we stated that on closure any remaining funds should be donated to a local community group or charity. This is the first time I have been heavily involved in community activity and I've been very happy with the outcome. I am, however, very aware that there is a group of people who contribute in equal measure, but do so on a weekly basis and have done for many, many years. While I will now go back to concentrating on career and family, I feel comfortable in the knowledge that this group will continue to develop, protect and work for the community.

I have discussed the matter with the Say No committee and we have agreed that before closing doors on the campaign, the remaining funds should be donated to the MVCA (Monton Village Community Association). I think all who have been involved in the campaign will understand what the MVCA has contributed to our victory.

Please feel free to contact me if you would like to discuss this final action which will be concluded this week.

Thank you to all who have stepped up in this time of need. It has taken a phenomenal amount of time, energy and dedication from the community as a whole, but I think we can finally put this one to rest. Stay vigilant, but enjoy this well-earned victory and what the future may bring for the area.

Best of luck,

Hani