Agile: It’s never too late? Or is it? Competing with Old and New.

Disclaimer: I work at Rubrik. Views are my own, etc.

I spent some time this weekend reading trade websites and watching video-streamed events to catch up on the competitive landscape for Rubrik. This is not something to which I pay a huge amount of attention, but it’s always worth understanding the key differences between your company’s and your competitors’ approaches to the problem you are solving.

What I found was as surprising as it was comforting and certainly solidified my belief that Rubrik is 100% the market leader in the new world of Data Management. This was equally true of new market entrants as it was of old guard vendors.

How Rubrik Does It

We are predominantly led by a laser-focused vision. We came to market to fix the data protection problem and leverage the resulting platform to deliver a world-class data management solution. This has always been the plan; we stuck to it and are delivering on it.

Rubrik’s software engineering function has the Agile Software Development methodology baked into its DNA. At its core, our engineering team is made up of experienced developers, many of whom were at the heart of the distributed systems revolution at Google, Facebook and other new wave companies. This means we can introduce capabilities quickly and iterate the cycle at a high frequency. We have absolutely nailed this approach and have consistently delivered significant payloads of functionality since v1.0 of the product.

New features are introduced as MVPs (Minimum Viable Products) and additional iterations to mature those features are delivered in rapid cycles. We have delivered 4 major CDM (Cloud Data Management) releases year-on-year and are positioned to accelerate this. Our Polaris SaaS platform is a living, organic entity and delivers new capability on a timescale of hours to days to weeks.

This is pro-active innovation aligned with a rapid and consistent timeline.

How The Old Guard Do It

When referring to the old guard, I’m referring to vendors who’ve been around for more than a decade. These organisations grew up in the era of the Waterfall Software Development Model. This model is much more linear and follows these stages: Gather Requirements -> Design -> Implementation -> Verification -> Maintenance. The cycle is documentation- and process-heavy. This is why most traditional vendors are only able to release major software versions in 18-24 month cycles.

These vendors are stuck between a rock and a hard place. They have revenue streams from ageing products to maintain, and historical code baggage (technical debt) to contend with that was never developed for the new world of cloud. More importantly, the mindset change needed to move to Agile is a mountain to climb in itself, and many long-tenured developers struggle to adapt. In short, the commercial pressure to retain revenue from existing products, technical debt, and people and process challenges leave them hamstrung and struggling to keep up. One such vendor announced a major version two years ago and to date has failed to deliver the release.

A by-product of this challenge is that many will use sticking-plaster marketecture to claim they can do the same things as Rubrik. This usually comes in the form of PowerPoint, vapourware and a new front-end UI to hide the legacy platform.

How The New Entrants Try To Do It

Rubrik has experienced significant success with our laser-focused approach to addressing data management challenges. It is only natural in a fair competitive market that other, newer companies will either launch or pivot their approach in an attempt to experience similar success. These companies also use Agile software development, may also have distributed systems backgrounds, and are able to iterate in a similar manner.

However, these organisations face a different set of challenges. If they did not build their base platform with the same vision, goal and focus, it will be hamstrung by many of the same underlying challenges the old guard experience. Common sense tells us that if you design a product to do one thing and then adapt it to do another, it is not purpose-built and won’t do the same job without significant trade-offs or difficulties.

A second and more important challenge is that they don’t have the same mature understanding of the vision and consequently will be late to release new features. They will always have a time lag behind the market leader, waiting for the market leader to deliver a feature and then starting the process of developing that same feature. I’m seeing some competitors announcing and demoing beta versions of functionality that Rubrik formally released one to two years ago. This is not pro-active innovation; it’s re-active mimicking.

Having experienced (in previous roles) those situations where your company hypes up a new “game-changing” proposition, only to feel massively deflated when it’s announced and you’re aware that the market leader has been providing it for over a year, I can tell you that it is not an awesome experience for employees.

This approach forces them to commoditize their solution, position it as “good enough” and command a significantly lower price as the value is ultimately not being delivered. It’s simply not sustainable in the medium or long term. When the funding begins to diminish, these companies become easy acquisitions for industry giants.

Conclusion

I have previously worked at both traditional vendors (e.g. Veeam) and new entrant vendors such as SimpliVity, which pivoted to challenge Nutanix in the HCI (Hyper-Converged Infrastructure) space. SimpliVity did not win that competition, was subsequently acquired by HPE, and the solution now receives much less focus as part of HPE’s large portfolio of products.

Whether new or old, with adapted or re-purposed solutions, competing with Rubrik on price is frequently the only option available. This creates a situation which is not optimal for either customer or vendor. The customer doesn’t get the outcome they wanted, and the vendor suffers in terms of available finance for ongoing operations as well as for investing in the development of their solution. If a company is offering 80% of the functionality for 60% of the price, then you will only get 50% of the result you planned. Furthermore, that company’s financials will most likely look like a cash-haemorrhaging horror story.

Rubrik is without a doubt the market leader in the Cloud Data Management space. The Old Guard are re-badging their solutions as their “Rubrik-Killer” and the new entrants are pivoting to follow while burning through their funding.  It’s going to be an interesting couple of years. Ancient giant trees will fall and new saplings will be plucked by the behemoths. Exciting times in the industry, watch this space!

Hyperconverged Breakout

Now for a little fun. Don’t take this too seriously. This is my homage to the ultimate start-up pioneer, Steve Jobs. A celebration of hyperconvergence.

DISCLAIMER: This game in no way represents the opinions of SimpliVity. Nor is it meant to provide any comment on SimpliVity’s competitors, their capabilities or how easy they are to explode.

Leaving VMware, Joining SimpliVity

Things move fast in the tech industry and, after a relatively brief time at VMware, I’ve been offered a new role that I simply cannot refuse. Today, I’ll be leaving VMware and moving to the hyper-converged infrastructure company, SimpliVity.

In recent months, I’ve been in the exceptionally lucky position of being able to choose between some phenomenal technology companies to decide where best to continue my career. Ultimately, the sadness of leaving the VMware team has been outweighed by the opportunity, belief in the technology and enthusiasm I feel for the SimpliVity proposition. It’s difficult to describe the thought process for making such a decision, but it’s best summed up here…


There’s an engineer racing across the ocean in a speedboat. He loves the speedboat: it has a great team and a quality engine, it’s fast, and it can change direction quickly. During the race the speedboat encounters a gigantic cruise ship. It’s heading in the same direction and is part of the same race. The engineer is in awe of the sheer size of this thing, and sailing alongside it, all he can see is the seemingly impenetrable glossy hull which scales all the way up into the sky. What’s inside the cruise ship is a mystery, but it looks awesome and the whole race has its eyes on it. While the engineer stares and ponders what could be inside this goliath, a gold-plated rope ladder unfurls down its side with a written invitation to join the crew tied to the bottom. The curiosity is too much to resist, so he heads up the ladder.

Cruise Ship with Speedboats

Once on board, the cruise ship team welcome the engineer. They’re really friendly and clearly all great people, but he instantly realizes that it’s going to take some time to figure out where he and all these people fit into the ship’s organisation. To use a militaristic parallel, it feels very much like moving from a four-man special forces team into the ranks of the general infantry. It’s an unusual experience, and with so many people, systems and different parts, there is no other option but to jump in with both feet to figure all this stuff out. Who is who? Who does what? Who is accountable for what? So many questions to answer.

After several months of on-boarding processes and training, he is finally, and thankfully, assigned his tasks. The initial task is to clean all the portholes on deck 37. The portholes give customers a narrow view into what’s available on the cruise ship, and if they like what they see, they’re invited aboard to check out the facilities in depth. On day one, he jumps straight into the job and cleans portholes 1 through 60. Once complete, he goes back to porthole 1 to rinse, repeat and do it all again. During this time, he’s chatting with the long-serving veteran engineer on deck 38. They’re looking out of their respective portholes and watching the speedboats zip around the surface of the ocean outside.

The veteran watches the speedboats with some trepidation as they jump and crash around on the waves. “I don’t think that’s very safe, do you?”, he says. The new engineer replies, “Yes, I can see that some of them just aren’t going to make it across the ocean. The waves are way too large. But did you see how high that one jumped? That’s got to be a world record. Looks like fun.” “Way too dangerous!”, the veteran concludes.

Some time passes and the new engineer quickly gets to grips with his task. He gets to know the rest of the team and settles in to enjoy his daily routine. He also starts to understand the inner workings of the ship. It’s complex: it actually has lots of different engines and lots of different teams, not always working to the same plan. He queries this with the veteran in an attempt to understand how they all work together.

The veteran explains, “Well, as with any large organisation, it’s not always easy to get everyone pushing in exactly the same direction, and that’s also true of the engines on this ship. Some are pushing in their own directions, but if the ship’s general direction of travel remains towards the right destination then that’s got to mean success, right?”. The new engineer agrees and believes that the correct destination will eventually be reached. He does however wonder at what speed this will happen.

As time moves on, the engineer dutifully completes his work, all the while watching the speedboats doing their thing. He continually weighs up the risks that the speedboats are taking against the rewards that they are reaping. He questions his position daily. Should he jump ship or should he stay and enjoy the cruise with everyone else?

On one particular day, he sees an unusual speedboat in the flotilla surrounding the area. This one appears to be running faster than the others. It dips in, out and across the waves with an agility that the other boats just don’t seem to have. He shouts down to the boat and asks, “How are you turning so quickly while moving so fast?”. A voice comes back from the speedboat, “Well, every speedboat here is using the same engine, but we’ve built a special accelerator component that gives our boat more speed, more agility and makes our fuel costs much lower than anyone else’s.”

Looking at the speedboat’s technology, the engineer’s brain bulb not only lights up, it explodes. A whole shelf of pennies drops as he realizes the potential. “Wow, that’s massively impressive!”, the engineer shouts.

“Wait until you see what’s in our roadmap and pipeline! Come and join the fun!”, shouts back the voice.

After much deliberation and soul searching, the engineer comes to the following conclusion. Joining the speedboat is by far the more challenging path, and it presents many risks, seen and unseen. However, with risk comes reward; without risk or challenge there is no reward to reap. While he can see a future in which he may want to put his feet up and join the cruise, right now is not the right time. He must grab the opportunity and work his socks off to help build something new, something worth building.


The moral of this story is that there is no good or bad boat. No right or wrong answer.  Just boats that offer different styles and opportunities for sailing the ocean.  If you have a tolerance for risk and hard work, you’re probably more suited to the speedboat and the rewards that may bring. If not, settle in for the long haul and enjoy the cruise.

Really looking forward to working at SimpliVity; I will approach the opportunity with the enthusiasm, excitement and rigour that the role requires.

Networking Primer – Part 8: Summary

Holy cow, when starting to write this series I in no way expected it to turn into a 12-part, 3-month process. I have covered so much, but there is so much more that could have been covered. It’s been a challenge to keep pulling back and remembering that this is just a primer. To recap what we’ve covered, post by post:

  • Part 1: Introduction – Introduction to series content and objectives.
  • Part 2: Defining Networking with OSI and TCP/IP Suite – Defining networking background, terminology and models.
  • Part 3: Application, Presentation and Session Layers – Describes the top three layers of the OSI model.
  • Part 4: Transport Layer, TCP and UDP – A dip into the world of connection-orientated vs connectionless protocols.
  • Part 5.1: Network Layer – IP Addressing – IP addresses, what they are and how they are used.
  • Part 5.2: Network Layer – DNS and DHCP – Translating between IP addresses and human-readable names, and handing addresses out on a network.
  • Part 5.3: Network Layer – IP Routing – Getting IP packets from one node to another.
  • Part 6.1: Data Link Layer, Ethernet and MAC – Ethernet frames for shifting local traffic.
  • Part 6.2: Media Access Control – CSMA/CD, CSMA/CA – Getting access to the wire, fibre or air.
  • Part 6.3: Layer 2 Switching – Loops, Spanning Tree and Topologies – Fitting LAN switches together.
  • Part 6.4: VLANs and other ANs (Area Networks) – Security and isolation with VLANs.
  • Part 7: Physical Layer, Electrons, Photons and All Things Quantum – Physical media and nerdiness.
  • Part 8: Summary – This summary.

I’ve really enjoyed refreshing my own knowledge of these basic concepts and hope you have too. Future series will focus more specifically on network virtualisation and other areas of the data centre.

Networking Primer – Part 7: Physical Layer, Electrons, Photons and All Things Quantum

Our 7th and final OSI layer is the Physical Layer. Unless you are planning to work with or design specialist hardware, there isn’t much interaction required with this layer from an administrative point of view. We’ll cover some of the basics here, but not in depth. This layer strays directly into the realms of science, more specifically physics, and even more specifically quantum physics. This is most definitely the geekiest post in this series.

Most modern computers store, process and transmit data in its simplest form: binary. It makes sense to encode data in binary, as even the most complex information can be broken down and represented as combinations of simple 0s and 1s. This simple representation maps very conveniently onto physical properties. The physical technologies we use today work on the premise of data being represented by one of two states: there or not there (on or off). The electronic components inside a computer are able to create and detect physical state. To simplify this to the highest level, electrical current is either present or not present inside a component on a circuit board. If it is present, that represents a 1; if it is not, that represents a 0.
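
To make this concrete, here’s a minimal Python sketch (purely illustrative, not tied to any real networking stack) showing how even a short text message reduces to the stream of 0s and 1s that the physical layer ultimately signals:

```python
# Purely illustrative: reduce a text message to the 0s and 1s the
# physical layer would signal, then recover it again.
message = "Hi"

# Each ASCII character becomes an 8-bit binary pattern.
bits = "".join(f"{byte:08b}" for byte in message.encode("ascii"))
print(bits)  # 0100100001101001

# The receiver decodes the bit stream back into characters.
decoded = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
print(decoded)  # Hi
```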

Signalling and Media

This is much easier to visualise in networking terms. If you have two nodes connected with a copper wire, the sending node is able to transmit electrical pulses down the wire and the receiving node is able to decode this as a stream of 0s and 1s. Another way of thinking of this is like the ships of World War 2 that used big lights to send messages to each other in the form of Morse code. This is referred to as signalling, and the whole focus of Layer 1 is to define the protocols for converting data to signals, along with sending and receiving them. Anything that is able to exist (and be detected) in one of two physical states can be used as a transmission device.

Morse Code Signalling Lamp

Our transmission media doesn’t necessarily have to be electrons flowing down an electrically conducting wire. It could just as well be photons of light travelling down a fibre optic cable, or photons travelling through the air as part of electromagnetic radio waves (i.e. WiFi). When implementing the physical network, consideration must be given to the properties of the physical media; each will have different speed, throughput, cost and flexibility attributes. For instance, fibre optic technologies generally cost more than their electron-shuffling equivalents, but today they represent the fastest method of moving signals from one place to another. They do this at close to the speed of light in a vacuum (299,792,458 metres per second), which is essentially the fastest anything can travel, being the cosmological speed limit of the entire universe (in glass fibre the light actually propagates at roughly two-thirds of that figure). Electrical signals still move down wires pretty fast, in fact almost at the speed of light, but slightly slower due to other physical factors that cause interference and resistance to the movement.
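
As a back-of-the-envelope illustration (my own rough numbers, assuming signals propagate at roughly two-thirds of c in glass fibre), the one-way propagation delay over a long fibre run works out as follows:

```python
# Rough one-way propagation delay over fibre, assuming a propagation
# speed of roughly two-thirds of c (typical for glass fibre).
SPEED_OF_LIGHT_MPS = 299_792_458   # metres per second, in a vacuum
FIBRE_FACTOR = 2 / 3               # approximate velocity factor in fibre

def propagation_delay_ms(distance_km: float) -> float:
    """One-way delay in milliseconds for a fibre run of distance_km."""
    return (distance_km * 1_000) / (SPEED_OF_LIGHT_MPS * FIBRE_FACTOR) * 1_000

# London to New York is roughly 5,570 km as the crow flies.
print(f"{propagation_delay_ms(5_570):.1f} ms")  # ~27.9 ms one way
```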

From a WiFi perspective, it’s strange to think that we have electromagnetic waves running through us and all around us at all times. The transmission element in a radio wave is of course also the photon; after all, light is just an electromagnetic wave oscillating at a different (and visible) frequency to radio. I particularly like the picture below, which visualises the electromagnetic waves of WiFi propagating across a city.

Here's What Wi-Fi Would Look Like If We Could See It

You can find more cool WiFi visualisations here:

Here’s What Wi-Fi Would Look Like If We Could See It

Wow, time to make our way out of the land of geek.  This is as much as we’ll cover here as this is a Primer and not a physics course.

DevOps, Automation & The Race to the CLI. A New Cycle?

DevOps and Automation have certainly taken some mind share in the IT community, and it seems to be becoming a universally accepted truth that we need to automate operations in order to keep up with the rapid pace of development in the data center. There is clearly a trend of moving away from GUI-based configuration towards using the CLI (Command Line Interface), scripting and agile programming in order to achieve operational objectives in our environments. This is also evidenced in a seemingly ubiquitous substitution of job titles. The “System Administrator” role appears to be disappearing and a new “DevOps Engineer” role is supplanting it in many places. What’s unusual is that, other than the job title, the job descriptions seem to be very much the same, with additional scripting skills coming to the fore.

Minority Report UI

Even the King of the GUI, Microsoft, has seen this trend and, with Windows Server 2012, dumped the full-fat GUI approach in favour of using PowerShell as the primary point of interaction with the OS. Windows Server now installs as the Core version (no GUI) by default, and using a GUI is expected to be the exception rather than the norm. I have to say that that’s not necessarily a bad thing; PowerShell is probably one of the initiatives that Microsoft has got right in recent years, and those seem to be few and far between.

There are many obvious benefits to these text-based configuration approaches and it is inevitable things will continue in that direction. As workloads in the data center continue to become more transient, with instances spun up and discarded frequently, it’s going to become a mandatory requirement to perform similar repeatable operations for many similar objects using scripting or similar tools.
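
As a hedged illustration of what that looks like in practice, here’s a small Python sketch; provision_vm() is a hypothetical stand-in for whatever API your platform actually exposes, not a real library call:

```python
# Hypothetical sketch: one repeatable, scripted operation applied to
# many similar objects. provision_vm() stands in for a real platform API.
VM_SPEC = {"cpus": 2, "memory_gb": 8, "network": "prod-vlan"}

def provision_vm(name: str, spec: dict) -> None:
    # A real version would call your hypervisor or cloud provider's API;
    # here we only log the intent.
    print(f"provisioning {name} with {spec}")

# Spin up ten identically configured, disposable instances.
for i in range(1, 11):
    provision_vm(f"web-{i:02d}", VM_SPEC)
```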

Being around IT as long as I have though, I can’t help but wonder if this is just another “cycle”. It’s taken us 30 years to move away from the centralised, text-driven mainframes of the last century, but we are definitely heading back in that direction. IT tends to be cyclical in nature and I’d hazard a guess that once we’ve all got to grips with DevOps, there will be a new generation of graphical tools in the distant but imaginable future. We are after all a primarily visual species. If and when DevOps fully takes hold, is it here to stay or just the returning curve of a technology cycle?

Networking Primer – Part 6.4: VLANs and other ANs (Area Networks)

Previous: Part 6.3: Layer 2 Switching – Loops, Spanning Tree and Topologies

I probably should have covered this a little earlier in the series; regardless, we’ll do it now. Networks are loosely categorised by the area they cover. This is usually compacted into a useful xAN acronym, where x stands for the scope, A stands for Area and N stands for Network. The following list covers the different scopes:

  • LAN – Local Area Network: Usually restricted to a single building, or even sub-parts of the building in some cases. This type of network is most relevant to everything we have discussed at Layer 2 of the OSI stack. Primarily related to wired network connectivity.
  • WLAN – Wireless Local Area Network: Very similar to a LAN, but focussed on wireless connectivity as opposed to wired, with the same single-building scope.
  • WAN – Wide Area Network: The largest scope of network, which could potentially span the entire globe.
  • MAN – Metropolitan Area Network: Still large, but restricted in size to a metropolitan area such as a city or large suburb.
  • CAN – Campus Area Network: Multi-building networks deployed across educational or similar institutional campuses.
  • PAN – Personal Area Network: Used for devices in your immediate personal space or within a few metres. Smartphones and other Bluetooth-driven devices sit in this category.

One acronym missing from the list above is VLAN – Virtual Local Area Network. Let’s put some focus on it now.

VLAN – Virtual Local Area Network
The reason I’ve left it out of the list is that a VLAN doesn’t really fit into a physical scope. It’s actually a logical segmentation construct that sits inside an existing Local Area Network (LAN).

Remember the importance of the port as a management entity, as stated in the previous post? This comes into play again here with VLANs. By assigning a VLAN to a port, we effectively segment it from the rest of the ports in the environment that aren’t assigned to the same VLAN. Without VLANs, every device connected to every switch in the network sits in the same broadcast domain. Once the switches have learned which ports are occupied by which MAC addresses, broadcasts are reduced, but they do still need to happen as network changes are made frequently. By assigning VLANs, we logically split the broadcast domain into multiple smaller broadcast domains. Another, more dynamic way to establish VLAN membership is by MAC address. This means that whichever port in the network a device is plugged into, it will always be recognised as a member of the correct VLAN.

So why would we want to segment at all? There are two reasons: security and network efficiency. From a security perspective, by creating this logical segmentation we stop nodes from receiving frames that they do not need to receive, as all broadcast traffic is isolated to the ports that belong to the correct VLAN. This can prevent eavesdropping or any other unwanted visibility of frames outside of the VLAN. We might want to segment different departments in this way. For example, the payroll department might sit on its own VLAN, as the data it transmits is financially sensitive. Do all the nodes in the other departments need to see those broadcasts? Probably not. Network efficiency is pretty straightforward too. By segmenting the traffic into VLANs we also reduce the amount of traffic each node receives. This reduces the amount of bandwidth used by the node, and also the amount of processing the node has to do to work out whether unwanted frames are intended for it before discarding them.
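
A toy sketch (my own illustration, nothing like a real switch implementation) makes the broadcast-domain point concrete: with a VLAN assigned per port, a broadcast frame floods only to ports sharing the sender’s VLAN:

```python
# Toy illustration: VLAN assignment shrinks the broadcast domain.
PORT_VLAN = {1: 10, 2: 10, 3: 20, 4: 20, 5: 10}  # port -> VLAN ID

def flood(ingress_port: int) -> list[int]:
    """Ports a broadcast frame is delivered to, respecting VLANs."""
    vlan = PORT_VLAN[ingress_port]
    return [p for p, v in PORT_VLAN.items() if v == vlan and p != ingress_port]

print(flood(1))  # [2, 5] - ports 3 and 4 (a different VLAN) never see the frame
```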

While VLANs are an excellent tool for subdividing broadcast domains, we can take this even further if required using PVLANs (Private VLANs). A detailed description of PVLANs is out of scope for this primer, but as a high-level summary we can say that they are used to subdivide VLANs into even smaller broadcast domains. We create some secondary VLANs and then implement rules to restrict which ports in the primary VLAN each subdivision can communicate with. A good example use case for this might be a hotel network, where we want all devices to be able to communicate with the internet-connected router, but not with each other. More details can be found here: Private VLANs.

VMware is Goldmember in Openstack

It’s been a little surprising to me that there’s a decent amount of buzz in the market surrounding Openstack, but not many people are clear about what it actually is and VMware’s involvement in it.  In some discussions, it is occasionally wielded as the “Deathbringer” for all things VMware and a work-in-progress alternative to everything VMware does. More often than not, there is a reaction of surprise in those discussions when it comes to light that VMware is a member of the Openstack Foundation. Furthermore, it contributes enough resources, funds and activity to be a Gold Member of the Foundation.

Openstack Gold Member

So how does that work?

Well, first and foremost, when compared to a product suite such as VMware’s vCloud Suite, it should be understood that Openstack is not a fully featured product stack that will cater for all of the functionality required to operate a private, hybrid or public cloud. Openstack is a plug-and-play framework that defines a common list of APIs and interfaces to enable the provisioning and operation of cloud capabilities. The key word here is framework. This framework provides a definition and set of rules for how the components in a cloud should communicate with and service each other. Openstack doesn’t, for example, provide compute virtualisation, or network and storage virtualisation for that matter. Yes, you still need a hypervisor in an Openstack implementation. There is definitely some confusion over this point, and Openstack (open source cloud management) is often mentally bundled together with KVM (one open source hypervisor). This is of course incorrect; KVM is not Openstack and vice versa. The hypervisor could be any number of those on the market today; remember, it’s plug-and-play. This is one example of where VMware has significant relevance to Openstack. You can use vSphere as the hypervisor in any Openstack system.

Alongside the compute virtualisation provided by vSphere, it’s also possible to use VMware technologies such as VSAN (Virtual SAN) to serve up storage, along with NSX for network functionality. In fact, after VMware’s acquisition of Nicira, the company became extremely important to the development of Openstack’s networking projects. So it is clear to see there are many areas of collaboration for VMware and Openstack. It would be dismissive to fail to acknowledge that there are elements of competitive overlap between some VMware products and some Openstack projects, but these don’t amount to an all-encompassing “Us vs Them” discussion. VMware’s approach to Openstack is very much one of being a good neighbour in a growing ecosystem. If every element of the stack is to be plug-and-play, VMware will make best efforts to ensure that its own components adhere to the API specifications and provide the richest set of functionality available to the market.

VMware’s Openstack membership is established and gaining momentum. VMware is a Gold Member of the Openstack Foundation and continues to increase activity and contributions in all relevant projects. Although a relatively late arrival to the foundation, VMware now sits in the Top 10 contributing companies for the whole project (rankings based on Source code commits).

Openstack Commit by Company

If you would like to know more about getting started with VMware and Openstack please read the following whitepaper:

http://www.vmware.com/files/pdf/techpaper/VMWare-Getting-Started-with-OpenStack-and-vSphere.pdf

Networking Primer – Part 6.3: Layer 2 Switching – Loops, Spanning Tree and Topologies

Previous: Networking Primer – Part 6.2: Media Access Control – CSMA/CD, CSMA/CA

We were briefly introduced to devices called network switches in the last post in this series. A switch essentially acts as a central connection point in a star topology for many network nodes. It is similar to a hub from a topological perspective, but whereas a hub will take a frame in from one port and broadcast it out on all of the other ports, a switch has some built-in intelligence so it may forward the frame only to those ports which should receive it. I like to think of a switch very much like its similar namesake, the switchboard, from the public telephony world.

Old Telephony Switchboard

In this older world, you picked up your phone to call the operator. When the operator at the other end answered, you would tell her/him who you would like to call; they would cross-reference the name with the relevant port number on the switchboard and plug in a cross-connecting wire between your incoming port and your call recipient’s outgoing port. A network switch operates in a similar fashion, although there are of course some notable differences.

Switch Ports

Ports are a very important entity in the switching process. Modern switches can contain 8, 16, 24 or even thousands of ports in large-scale enterprise implementations. Port occupancy on a network switch can be very transient, with desktops and laptops changing the port they are plugged into on a daily basis. To cope with this, the switch must be much more malleable and must have a mechanism for learning which device is occupying which port. It does this by maintaining a table of the source MAC addresses it receives from each port. It is worth being aware that if a switch doesn’t know which port a destination MAC address occupies, it will still broadcast to all the other ports in the same way a hub does.
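
To illustrate that learning behaviour, here’s a minimal sketch of my own (not how any particular switch is actually implemented):

```python
# Minimal sketch of MAC learning on an 8-port switch.
class LearningSwitch:
    PORTS = range(1, 9)

    def __init__(self) -> None:
        self.mac_table: dict[str, int] = {}  # source MAC -> port

    def receive(self, src_mac: str, dst_mac: str, ingress: int) -> list[int]:
        # Learn which port the sender occupies.
        self.mac_table[src_mac] = ingress
        # Forward to the known port, or flood like a hub if unknown.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.PORTS if p != ingress]

switch = LearningSwitch()
print(switch.receive("aa:aa", "bb:bb", 1))  # bb:bb unknown -> flood to 2-8
print(switch.receive("bb:bb", "aa:aa", 2))  # aa:aa learned -> [1]
```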

I can’t emphasise the following enough, so it is worth re-iterating: the port is a very important entity in the switching process and is not only a node’s physical access point into the network. It also represents a management construct that can be used to control the node’s security and resource permissions within the network. The port and its associated ID can be used to segment traffic as well as shape it (e.g. restrict bandwidth).

Switching Topologies

A single switch device connecting all the nodes in a network is a pretty simple architecture to visualise and understand. This kind of set-up is, however, only found in small office environments. In larger environments, it may become impossible to cable all of the nodes into the same switch due to geographical, redundancy or resiliency factors. In these environments, we need to introduce multiple interconnected switches. Luckily, most modern switches have the intelligence to connect to other switches in pretty much any configuration. We can daisy-chain them together, make circular loops or any other artistic creation we wish; all of these are possible:

Logical Topologies

When a switch is connected to another switch, it soon learns that the interconnecting port isn’t occupied by a single node and MAC address. It learns that there is another switch there and that the port is possibly the destination for many devices. Any source MAC addresses coming in from that port will be stored in the table, so that local nodes may send frames back to those devices via that port. Given this flexibility to connect switches together in any configuration, it is possible to find ourselves with the problem of circular switching loops.

Switching Loops and the Spanning Tree Protocol

As stated above, if a switch receives a frame on a port and hasn’t yet learned the forwarding port for its destination MAC address, it will broadcast it out on all of its ports with the exception of the one it received it from. This is called flooding an unknown unicast frame. A similar bulk multi-port forwarding operation may occur for general broadcast frames as well as multicast frames (frames for more than one destination node). These multi-port broadcasts have the potential to turn into infinite circular loops wherever there is a circular route to follow in an architecture.

Take the following example:

Switching Loops

A node connected to Switch B wants to communicate with a node connected to Switch C. It doesn’t know where the forwarding port for this node is, so Switch B broadcasts to all ports including the ports interconnecting A, C, D & E. Switch A will send it to C, D & E. The frame will reach its destination on Switch C, but it may receive two copies of the frame, one from B and one from A. Also, now that D is in the mix, it’s possible D could broadcast it back to B, who in turn will broadcast it back to A. This is just one example of a switching loop.

The problem with these loops is that they’re often difficult to spot. The frame does get where it’s going, but multiple copies of it are being looped. This only really becomes apparent when a switch’s CPU workload increases for no apparent reason. Enter STP, the Spanning Tree Protocol. In brief, STP learns the multiple possible routes a frame may take across the switching infrastructure. It then assesses those routes using an algorithm to select the best one and blocks the rest, thus preventing any looping.
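
The underlying idea can be sketched in a few lines (a toy illustration of my own using a breadth-first search; real STP elects a root bridge and exchanges BPDUs, which this ignores): compute one loop-free tree over the switch topology and block the redundant links.

```python
# Toy spanning-tree illustration: keep a loop-free set of links,
# block the rest. Real STP elects a root bridge via BPDU exchange;
# this sketch just runs a breadth-first search from a chosen root.
from collections import deque

LINKS = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "D"), ("C", "E"), ("D", "E")]

def spanning_tree(root: str) -> set[tuple[str, str]]:
    active: set[tuple[str, str]] = set()
    visited, queue = {root}, deque([root])
    while queue:
        node = queue.popleft()
        for a, b in LINKS:
            if node in (a, b):
                other = b if node == a else a
                if other not in visited:
                    visited.add(other)
                    active.add((a, b))
                    queue.append(other)
    return active

active = spanning_tree("A")
print("blocked:", set(LINKS) - active)  # {('B', 'D'), ('D', 'E')}
```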

The Hierarchical Network Model

The Hierarchical Network Model is a network design model created by Cisco. It’s a very simple layered model created for medium and large network environments. The layers are defined as follows:

  • Core Layer – composed of powerful, high-throughput switches and border routers that make up the backbone of the network.
  • Distribution Layer – a second-tier layer used for aggregating the lower-layer switches and connecting them through to the core.
  • Access Layer – the tier containing the front-end switches where the network devices/nodes gain access to the network.

The Hierarchical Network Model

This model is very widely deployed and has become something of a de facto standard. It is worth remembering these layers and where they sit. In future blogs, I intend to address network virtualisation and how it has shifted the dotted line and pulled some of the access layer into the hypervisor (more to come on this later).

 Next: Part 6.4: VLANs and other ANs (Area Networks)

Response To: Is Openstack Dead? at VirtualizationPractice.com

In recent months, I have heard a lot of buzz in the media about what’s happening with Openstack. Published today, this article on the Virtualization Practice website was an interesting and thought-provoking read. In it, they actively question the viability of Openstack and its long-term future:

http://www.virtualizationpractice.com/openstack-dead-26869/

The article questions, with some focus, the economic viability of the continued development of Openstack. It identifies that there appears to be a lack of a driving force behind the project. In other words, a lack of any major bankrolling entity that stands to benefit from the success of Openstack. This is a very bold article that will definitely shake up some of the proponents of Openstack. I can hear the heckling streaming across the blog-vines as we speak. While there are many valid points in the article, stating that Openstack is a “dead cloud walking” is perhaps extreme.

I personally don’t believe Openstack is dead. It does however have a significant number of challenges to overcome in the short to medium term. Integration, lack of compatibility and the increasing number of diverging distros are clearly becoming a problem. The many interested and involved parties pulling in their own unique directions may also hinder progress.

So why do I believe there is life in Openstack?

I would say that there is a driving force behind the project: the open source community. I would however question whether that driving force has the momentum, resources and capability to conquer the world of cloud in the short term. I’ve heard a lot of chatter from various Openstack community members and conferences drawing parallels between Openstack today and the early days of Linux.

There are two primary issues with this comparison that strike me as problematic, and they make me believe that it will take a very long time for Openstack to gain any kind of significant traction. The first is that the Linux project has had a very authoritative, dictatorial leadership model (in the form of Linus Torvalds). I believe this approach, and the lack of “leadership by committee”, were somewhat instrumental to the success it has achieved.

I think we can all agree that Linux has been very successful and is definitely here to stay, but what is the definition of success and how long are we willing to wait for it ?

Desktop OS Market Share 2014

Linux was originally developed as an alternative desktop operating system and the project started in 1991. Today, in 2014, 23 years later, it still retains less than 2% of that market. It has made significantly better progress in the web server market (30%) and has pretty much killed the mobile device market in the form of Android (80% of smartphone sales in 2013), but these later successes have only really come to pass in the last two to three years.

Ultimately, it’s taken Linux 20 years with that focussed, single-minded leadership to generate that success. Openstack is, to date, four years in the making. This is not to say that Openstack must experience the same 20-year battle. I do however find it difficult to believe that it will be ready to garner any significant traction in the private or public cloud space within the next 5 (perhaps even 10) years. It’s a complicated beast with many moving parts, and even if it were fully ready to deploy and had feature parity with the commercial alternatives, it would still require monumental shifts of mindset for organisations to buy in to such a platform. I’ll watch developments with interest.

Disclaimer: I’m a VMware employee and it should be clear that although VMware is an active Openstack community member there is also an element of competition between the respective stacks. These opinions are my own.