Cloud Blob Storage Race-to-Zero: Azure Reserved Pricing Is Here

If you thought Azure Blob Storage was inexpensive before, how would you like a further 38% discount just to be sure?

In November 2019, Microsoft announced reserved pricing for six more services, including Blob Storage in GPv2 accounts:

https://azure.microsoft.com/en-us/blog/save-more-on-azure-usage-announcing-reservations-for-six-more-services/

The pricing is also now available in the Azure Pricing Calculator:

What does this mean?

You can now reserve Blob Storage capacity in advance, in the same way you can reserve VM instances. The new model lets you reserve capacity for either 1 or 3 years, with the 3-year term offering a discount of around 38% off pay-as-you-go rates.
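As a rough illustration of what that discount means at the 100 TB minimum (the pay-as-you-go rate below is a placeholder assumption, not a quote; check the Azure Pricing Calculator for your region, tier and redundancy option):

```python
# Rough illustration of the 3-year reserved-capacity saving on 100 TB of blob storage.
# The $0.0184/GB/month pay-as-you-go rate is a placeholder assumption; real rates
# vary by region, access tier and redundancy option.

PAYG_RATE_PER_GB_MONTH = 0.0184   # assumed list price, $/GB/month
CAPACITY_GB = 100_000             # 100 TB reservation (decimal TB for simplicity)
RESERVED_DISCOUNT = 0.38          # ~38% off for the 3-year term

monthly_payg = PAYG_RATE_PER_GB_MONTH * CAPACITY_GB
monthly_reserved = monthly_payg * (1 - RESERVED_DISCOUNT)

print(f"Pay-as-you-go:   ${monthly_payg:,.2f}/month")
print(f"3-year reserved: ${monthly_reserved:,.2f}/month")
print(f"Saving:          ${monthly_payg - monthly_reserved:,.2f}/month")
```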

This is great for users with predictable consumption needs… such as backup users 🙂

What to look out for:

1. Ensure that your subscription type is eligible for the pricing. These types are currently covered:

“Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P): For an Enterprise subscription, the charges are deducted from the enrollment’s monetary commitment balance or charged as overage.

Individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P): For an individual subscription with pay-as-you-go rates, the charges are billed to the credit card or invoice payment method on the subscription.”

2. The pricing is available in minimum increments of 100 TB (i.e. don’t bother if you only have a couple of TB in Azure).

Full details are available here. Please review this in order to understand your particular circumstances:

https://docs.microsoft.com/en-gb/azure/storage/blobs/storage-blob-reserved-capacity

Cloud Blob Storage Trends – Time to use $/TB/month?

I love a good technology industry prediction, and especially calling the predictors out on them at a later date. In this case, I’m tipping my hat to Ikram Hawramani and his 2015 prediction of the continued decline in cloud blob storage costs (http://hawramani.com/aws-storage-historical-pricing-and-future-projections):

In 2015, he produced this trended prediction:

Four years later, in mid-2019, his trend appears to be extremely accurate. He predicted that by August 2019 the cost of cloud blob storage would be approximately $10/TB/month, or $0.01/GB/month in the units vendors usually quote.

A recent price review by Jay Chapel, CEO of ParkMyCloud (https://www.datacenterdynamics.com/opinions/review-cloud-storage-costs/), confirms this:

The introduction of new, cooler Blob Storage tiers is a relatively recent development that Ikram would not have had visibility of. These tiers are currently priced at fractions of a cent per GB. For example, Glacier Deep Archive comes in at $0.00099/GB, or $0.99/TB. Yes, that’s right, we’ve broken the $1/TB floor.
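The unit conversion is trivial, but spelling it out makes the headline numbers easier to compare (a decimal TB of 1,000 GB is assumed throughout):

```python
# Convert the per-GB prices vendors quote into $/TB/month
# (decimal TB assumed: 1 TB = 1,000 GB).

def per_tb(price_per_gb_month: float) -> float:
    return price_per_gb_month * 1000

print(per_tb(0.01))     # 10.0  -> the ~$10/TB/month predicted for August 2019
print(per_tb(0.00099))  # 0.99  -> Glacier Deep Archive, under the $1/TB floor
```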

As the race to zero continues, when will we be ready to price per TB rather than per GB? I think soon.

 

Agile: It’s never too late? Or is it? Competing with Old and New.

Disclaimer: I work at Rubrik. Views are my own, etc.

I spent some time this weekend reading trade websites and watching video-streamed events to catch up on the competitive landscape for Rubrik. This is not something I pay a huge amount of attention to, but it’s always worth understanding the key differences between your company’s approach and your competitors’ approaches to the problem you are solving.

What I found was as surprising as it was comforting, and it certainly solidified my belief that Rubrik is 100% the market leader in the new world of Data Management. That held equally true for the new market entrants and for the old guard vendors.

How Rubrik Does It

We are predominantly led by a laser-focused vision. We came to market to fix the data protection problem and to leverage the resulting platform to deliver a world-class data management solution. This has always been the plan; we stuck to it and we are delivering on it.

Rubrik’s software engineering function has the Agile software development methodology baked into its DNA. At its core, our engineering team is made up of experienced developers, many of whom were at the heart of the distributed systems revolution at Google, Facebook and other new-wave companies. This means we can introduce capabilities quickly and iterate at a high frequency. We have absolutely nailed this approach and have consistently delivered significant payloads of functionality since v1.0 of the product.

New features are introduced as MVPs (Minimum Viable Products), and additional iterations to mature those features are delivered in rapid cycles. We have delivered four major CDM (Cloud Data Management) releases year on year and are positioned to accelerate this. Our Polaris SaaS platform is a living, organic entity and delivers capability on a timescale of hours to days to weeks.

This is proactive innovation, delivered on a rapid and consistent timeline.

How The Old Guard Do It

By the old guard, I mean vendors who’ve been around for more than a decade. These organisations have grown up in the era of the Waterfall software development model. This model is much more linear and follows these stages: Gather Requirements -> Design -> Implementation -> Verification -> Maintenance. The cycle is documentation- and process-heavy, which is why most traditional vendors are only able to release major software versions in 18-24 month cycles.

These vendors are stuck between a rock and a hard place. They have revenue streams from ageing products to maintain, and historical code baggage (technical debt) to contend with that wasn’t developed for the new world of cloud. More importantly, the mindset change required to move to Agile is a mountain to climb in itself, and many long-tenured developers struggle to adapt. In short, the commercial pressure to retain revenue from existing products, the technical debt, and the people and process challenges leave them hamstrung and struggling to keep up. One such vendor announced a major version two years ago and to date has failed to deliver the release.

A by-product of this challenge is that many will use sticking-plaster marketecture to claim they can do the same things as Rubrik. This usually comes in the form of PowerPoint, vapourware and a new front-end UI to hide the legacy platform.

How The New Entrants Try To Do It

Rubrik has experienced significant success with our laser-focused approach to addressing data management challenges. It is only natural in a fair, competitive market that other, newer companies will launch or pivot their approach in an attempt to replicate that success. These companies also use Agile software development, may also have distributed systems backgrounds, and are able to iterate in a similar manner.

However, these organisations face a different set of challenges. If they did not build their base platform with the same vision, goal and focus, it will be hamstrung by many of the same underlying challenges the old guard experience. Common sense tells us that if you design a product to do one thing and then adapt it to do another, it is not purpose-built and won’t do the same job without significant trade-offs or difficulties.

A second and more important challenge is that they don’t have the same mature understanding of the vision and will consequently be late to release new features. They will always lag behind the market leader, waiting for it to deliver a feature and then starting the process of developing that same feature themselves. I’m seeing some competitors announcing and demoing beta versions of functionality that Rubrik formally released 1-2 years ago. This is not proactive innovation; it’s reactive mimicking.

I have experienced (in previous roles) those situations where your company hypes up a new “game-changing” proposition, only for everyone to feel massively deflated when it’s announced, because you know the market leader has been providing it for over a year. I can tell you it is not an awesome experience for employees.

This approach forces them to commoditize their solution, position it as “good enough” and charge a significantly lower price, as the full value is ultimately not being delivered. It’s simply not sustainable in the medium or long term. When the funding begins to diminish, these companies become easy acquisitions for industry giants.

Conclusion

I have previously worked at both traditional vendors (e.g. Veeam) and new-entrant vendors such as SimpliVity, which pivoted to challenge Nutanix in the HCI (Hyper-Converged Infrastructure) space. SimpliVity did not win that competition, was subsequently acquired by HPE, and the solution now receives much less focus as part of HPE’s large portfolio of products.

Whether new or old, with adapted or re-purposed solutions, competing with Rubrik on price is frequently the only option available. This creates a situation that is optimal for neither customer nor vendor. The customer doesn’t get the outcome they wanted, and the vendor suffers in terms of the finance available for ongoing operations and for investing in the development of its solution. If a company is offering 80% of the functionality for 60% of the price, then you will only get 50% of the result you planned. Furthermore, that company’s financials will most likely look like a cash-haemorrhaging horror story.

Rubrik is without a doubt the market leader in the Cloud Data Management space. The Old Guard are re-badging their solutions as “Rubrik killers” and the new entrants are pivoting to follow while burning through their funding. It’s going to be an interesting couple of years. Ancient giant trees will fall and new saplings will be plucked by the behemoths. Exciting times in the industry, watch this space!

Hyperconverged Breakout

Now for a little fun. Don’t take this too seriously. This is my homage to the ultimate start-up pioneer, Steve Jobs. A celebration of hyperconvergence.

DISCLAIMER: This game in no way represents the opinions of SimpliVity. Nor is it meant to provide any comment on SimpliVity’s competitors, their capabilities or how easily they explode.

Leaving VMware, Joining SimpliVity

Things move fast in the tech industry, and after a relatively brief time at VMware I’ve been offered a new role that I simply cannot refuse. Today I’ll be leaving VMware and moving to the hyper-converged infrastructure company SimpliVity.

In recent months, I’ve been in the exceptionally lucky position of being able to choose between some phenomenal technology companies when deciding where best to continue my career. Ultimately, the sadness of leaving the VMware team has been outweighed by the opportunity, my belief in the technology and the enthusiasm I feel for the SimpliVity proposition. It’s difficult to describe the thought process behind such a decision, but it’s best summed up here…


There’s an engineer racing across the ocean in a speedboat. He loves the speedboat: it has a great team and a quality engine, it’s fast and it can change direction quickly. During the race the speedboat encounters a gigantic cruise ship. It’s heading in the same direction and is part of the same race. The engineer is in awe of the sheer size of the thing and, sailing alongside it, all he can see is the seemingly impenetrable glossy hull scaling all the way up into the sky. What’s inside the cruise ship is a mystery, but it looks awesome and the whole race has its eyes on it. While the engineer stares and ponders what could be inside this goliath, a gold-plated rope ladder unfurls down its side with a written invitation to join the crew tied to the bottom. The curiosity is too much to resist, so he heads up the ladder.

Cruise Ship with Speedboats

Once on board, the cruise ship’s team welcomes the engineer. They’re really friendly and clearly all great people, but he instantly realizes that it’s going to take some time to figure out where he and all these people fit into the ship’s organisation. To use a militaristic parallel, it feels very much like moving from a four-man special forces team into the ranks of the general infantry. It’s an unusual experience, and with so many people, systems and different parts, there is no option but to jump in with both feet and figure it all out. Who is who? Who does what? Who is accountable for what? So many questions to answer.

After several months of on-boarding processes and training, he is finally, and thankfully, assigned his tasks. The initial task is to clean all the portholes on deck 37. The portholes give customers a narrow view into what’s available on the cruise ship, and if they like what they see, they’re invited aboard to check out the facilities in depth. On day one, he jumps straight into the job and cleans portholes 1 through 60. Once complete, he goes back to porthole 1 to rinse, repeat and do it all again. During this time, he’s chatting with the long-serving veteran engineer on deck 38. They’re looking out of their respective portholes, watching the speedboats zip around the surface of the ocean outside.

The veteran watches the speedboats with some trepidation as they jump and crash around on the waves. “I don’t think that’s very safe, do you?”, he says. The new engineer says in reply, “Yes, I can see that some of them just aren’t going to make it across the ocean. The waves are way too large. But, did you see how high that one jumped? That’s got to be a world record? Looks like fun.”. “Way too dangerous!”, the veteran concludes.

Some time passes and the new engineer quickly gets to grips with his task. He gets to know the rest of the team and settles in to enjoy his daily routine. He also starts to understand the inner workings of the ship. It’s complex: it actually has lots of different engines and lots of different teams, not always working to the same plan. He queries this with the veteran in an attempt to understand how they all work together.

The veteran explains, “Well, as with any large organisation, it’s not always easy to get everyone pushing in exactly the same direction, and that’s also true of the engines on this ship. Some are pushing in their own directions, but if the ship’s general direction of travel remains towards the right destination then that’s got to mean success, right?”. The new engineer agrees and believes that the correct destination will eventually be reached. He does, however, wonder at what speed this will happen.

As time moves on, the engineer dutifully completes his work, all the while watching the speedboats doing their thing. He continually weighs up the risks that the speedboats are taking against the rewards that they are reaping. He questions his position daily. Should he jump ship or should he stay and enjoy the cruise with everyone else?

On one particular day, he sees an unusual speedboat in the flotilla surrounding the area. This one appears to be running faster than the others. It dips in, out and across the waves with an agility that the other boats just don’t seem to have. He shouts down to the boat and asks, “How are you turning so quickly while moving so fast?”. A voice comes back from the speedboat, “Well, every speedboat here is using the same engine, but we’ve built a special accelerator component that gives our boat more speed, more agility and makes our fuel costs much lower than anyone else’s.”

Looking at the speedboat’s technology, the engineer’s brain bulb doesn’t just light up, it explodes. A whole shelf of pennies drops as he realizes the potential. “Wow, that’s massively impressive!”, the engineer shouts.

“Wait until you see what’s in our roadmap and pipeline! Come and join the fun!”, shouts back the voice.

After much deliberation and soul searching, the engineer comes to the following conclusion. Joining the speedboat is by far the more challenging path; it also presents many seen and unseen risks. However, with risk comes reward, and without risk or challenge there is no reward to reap. While he can see that in future he may want to put his feet up and join the cruise, right now is not the right time. He must grab the opportunity and work his socks off to help build something new, something worth building.


The moral of this story is that there is no good or bad boat. No right or wrong answer. Just boats that offer different styles and opportunities for sailing the ocean. If you have a tolerance for risk and hard work, you’re probably more suited to the speedboat and the rewards it may bring. If not, settle in for the long haul and enjoy the cruise.

I’m really looking forward to working at SimpliVity and will approach the opportunity with the enthusiasm, excitement and rigor that the role requires.

Networking Primer – Part 8: Summary

Holy cow. When I started writing this series, I in no way expected it to turn into a 12-part, 3-month process. I have covered so much, but there is so much more that could have been covered. It’s been a challenge to keep pulling back and remembering that this is just a primer. To recap what we’ve covered, post by post:

Part 1: Introduction. An introduction to the series content and objectives.
Part 2: Defining Networking with OSI and TCP/IP Suite. Networking background, terminology and models.
Part 3: Application, Presentation and Session Layers. Describes the top three layers of the OSI model.
Part 4: Transport Layer, TCP and UDP. A dip into the world of connection-orientated vs connectionless protocols.
Part 5.1: Network Layer – IP Addressing. IP addresses, what they are and how they are used.
Part 5.2: Network Layer – DNS and DHCP. Mapping IP addresses to names we can read, and handing addresses out on a network.
Part 5.3: Network Layer – IP Routing. Getting IP packets from one node to another.
Part 6.1: Data Link Layer, Ethernet and MAC. Ethernet frames for shifting local traffic.
Part 6.2: Media Access Control – CSMA/CD, CSMA/CA. Getting access to the wire, fibre or air.
Part 6.3: Layer 2 Switching – Loops, Spanning Tree and Topologies. Fitting LAN switches together.
Part 6.4: VLANs and other ANs (Area Networks). Security and isolation with VLANs.
Part 7: Physical Layer, Electrons, Photons and All Things Quantum. Physical media and nerdiness.
Part 8: Summary. This summary post.

I’ve really enjoyed refreshing my own knowledge of these basic concepts and hope you have too. Future series will be more specifically focussed on network virtualisation and other areas of the data centre.

Networking Primer – Part 7: Physical Layer, Electrons, Photons and All Things Quantum

Our seventh and final OSI layer is the Physical Layer (Layer 1). Unless you are planning to work with or design specialist hardware, there isn’t much interaction required with this layer from an administrative point of view, so we’ll cover some of the basics here but not go in-depth. This layer strays directly into the realms of science, more specifically physics, and even more specifically quantum physics. This is most definitely the geekiest post in the series.

Most modern computers store, process and transmit data in its simplest form: binary. It makes sense to encode data as binary because even the most complex information can be broken down and represented as combinations of simple 0s and 1s. This simple representation also maps very conveniently onto physical properties. The physical technologies we use today work on the premise of data being represented by one of two states: there or not there (on or off). The electronic components inside a computer are able to create and detect these physical states. To simplify to the highest level, electrical current is either present or not present inside a component on a circuit board. If it is present, that represents a 1; if it is not, that represents a 0.
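As a toy illustration of that idea (a sketch only, not any real line-coding scheme), the Python snippet below turns a short message into the stream of 0s and 1s a physical layer would ultimately have to signal, and back again:

```python
# Toy illustration: any data reduces to two symbols that a physical
# medium can represent as "present" or "not present".

def to_bits(message: str) -> str:
    """Encode a string as a stream of 0s and 1s (8 bits per byte)."""
    return "".join(f"{byte:08b}" for byte in message.encode("utf-8"))

def from_bits(bits: str) -> str:
    """Decode a stream of 0s and 1s back into the original string."""
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

bits = to_bits("Hi")
print(bits)             # 0100100001101001
print(from_bits(bits))  # Hi
```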

Signalling and Media

This is much easier to visualise in networking terms. If you have two nodes connected by a copper wire, the sending node is able to transmit electrical pulses down the wire and the receiving node is able to decode them as a stream of 0s and 1s. Another way of thinking about this is the ships of World War 2 that used big lights to send messages to each other in Morse code. This is referred to as signalling, and the whole focus of Layer 1 is to define the protocols for converting data to signals and for sending/receiving them. Anything that can exist in (and be detected in) one of two physical states can be used as a transmission medium.

Morse Code Signalling Lamp

Our transmission medium doesn’t necessarily have to be electrons flowing down an electrically conducting wire. It could just as well be photons of light travelling down a fibre optic cable, or photons travelling through the air as electromagnetic radio waves (i.e. WiFi). When implementing the physical network, consideration must be given to the properties of the physical media; each of them has different speed, throughput, cost and flexibility attributes. For instance, fibre optic technologies generally cost more than their electron-shuffling equivalents, but today they represent the fastest practical method of moving signals from one place to another. They do this at close to the speed of light in a vacuum (299,792,458 metres per second), which is essentially the fastest anything can travel, being the cosmological speed limit of the entire universe; in practice, light propagates through glass fibre at roughly two-thirds of that figure because of the refractive index of the glass. Electrical signals still move down copper wires pretty fast too, at a large fraction of the speed of light, but slightly slower due to physical factors that introduce interference and resistance to the movement.
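To make the speed consideration concrete, here’s a back-of-the-envelope sketch of one-way propagation delay over a long link (the 5,500 km distance and the two-thirds-of-c figure for fibre are illustrative assumptions):

```python
# Back-of-the-envelope one-way propagation delay over a long fibre link.
# The 5,500 km distance (roughly London to New York) and the ~2/3 c
# propagation speed in glass fibre are illustrative assumptions.

C_VACUUM = 299_792_458          # speed of light in a vacuum, m/s
FIBRE_SPEED = C_VACUUM * 2 / 3  # approximate propagation speed in fibre, m/s
DISTANCE_M = 5_500_000          # ~5,500 km

delay_vacuum_ms = DISTANCE_M / C_VACUUM * 1000
delay_fibre_ms = DISTANCE_M / FIBRE_SPEED * 1000

print(f"One-way delay at c:     {delay_vacuum_ms:.1f} ms")  # ~18.3 ms
print(f"One-way delay in fibre: {delay_fibre_ms:.1f} ms")   # ~27.5 ms
```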

From a WiFi perspective, it’s strange to think that we have electromagnetic waves running through us and all around us at all times. The transmission element of a radio wave is, of course, also the photon; after all, visible light is just an electromagnetic wave oscillating at a different (and visible) frequency from radio. I particularly like the picture below, which visualises the electromagnetic waves of WiFi propagating across a city.

Here's What Wi-Fi Would Look Like If We Could See It

You can find more cool WiFi visualisations here:

Here’s What Wi-Fi Would Look Like If We Could See It

Wow, time to make our way out of the land of geek. This is as much as we’ll cover, as this is a primer and not a physics course.

DevOps, Automation & The Race to the CLI. A New Cycle?

DevOps and automation have certainly taken mind share in the IT community, and it seems to be becoming a universally accepted truth that we need to automate operations in order to keep up with the rapid pace of development in the data center. There is clearly a trend of moving away from GUI-based configuration towards using the CLI (Command Line Interface), scripting and agile programming to achieve operational objectives in our environments. This is also evidenced in a seemingly ubiquitous substitution of job descriptions: the “System Administrator” role appears to be disappearing and a new “DevOps Engineer” role is supplanting it in many places. What’s unusual is that, other than the job title, the job descriptions seem to be very much the same, with additional scripting skills coming to the fore.

Minority Report UI

Even the king of the GUI, Microsoft, has seen this trend and, with Windows Server 2012, dumped the full-fat GUI approach in favour of using PowerShell as the primary point of interaction with the OS. Windows Server now installs as the Core version (no GUI) by default, and using a GUI is expected to be the exception rather than the norm. I have to say that’s not necessarily a bad thing; PowerShell is probably one of the initiatives Microsoft has got right in recent years, and those seem to be few and far between.

There are many obvious benefits to these text-based configuration approaches, and it is inevitable that things will continue in that direction. As workloads in the data center become more transient, with instances spun up and discarded frequently, it’s going to become mandatory to perform repeatable operations across many similar objects with scripting or similar tools.

Having been around IT as long as I have, though, I can’t help but wonder if this is just another “cycle”. It’s taken us 30 years to move away from the centralised, text-driven mainframes of last century, but we are definitely heading back in that direction. IT tends to be cyclical in nature, and I’d hazard a guess that once we’ve all got to grips with DevOps, there will be a new generation of graphical tools in the distant but imaginable future. We are, after all, a primarily visual species. If and when DevOps fully takes hold, is it here to stay, or is it just the returning curve of a technology cycle?

Networking Primer – Part 6.4: VLANs and other ANs (Area Networks)

Previous: Part 6.3: Layer 2 Switching – Loops, Spanning Tree and Topologies

I probably should have covered this a little earlier in the series; regardless, we’ll do it now. Networks are loosely categorised by the area they cover. This is usually compacted into a useful xAN acronym, where x stands for the scope, A stands for Area and N stands for Network. The different scopes are listed below:

LAN – Local Area Network: Usually restricted to a single building, or even sub-parts of a building in some cases. This type of network is most relevant to everything we have discussed at Layer 2 of the OSI stack, and is primarily related to wired network connectivity.
WLAN – Wireless Local Area Network: Very similar to a LAN, but focussed on wireless rather than wired connectivity. Again, usually restricted to a single building or sub-parts of it.
WAN – Wide Area Network: The largest scope of network, which could potentially span the entire globe.
MAN – Metropolitan Area Network: Still large, but restricted in size to a metropolitan area such as a city or large suburb.
CAN – Campus Area Network: Multi-building networks deployed across educational or similar institutional campuses.
PAN – Personal Area Network: Used for devices in your immediate personal space, or within a few metres. Smartphones and other Bluetooth-driven devices sit in this category.

One acronym missing from the list above is VLAN – Virtual Local Area Network. Let’s put some focus on it now.

VLAN – Virtual Local Area Network
The reason I’ve left it out of the list above is that a VLAN doesn’t really fit into a physical scope. It’s actually a logical segmentation construct that sits inside an existing Local Area Network, or LAN.

Remember the importance of the port as a management entity from the previous post? It comes into play again here with VLANs. By assigning a VLAN to a port, we effectively segment it from every other port in the environment that isn’t assigned to the same VLAN. Without VLANs, every device connected to every switch in the network sits in the same broadcast domain. Once the switches have learned which ports are occupied by which MAC addresses, broadcasts are reduced, but they still need to happen as network changes are made frequently. By assigning VLANs, we logically split the broadcast domain into multiple smaller broadcast domains. Another, more dynamic way to establish VLAN membership is by MAC address, meaning that whichever port a device is plugged into, it will always be recognised as a member of the correct VLAN.
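As a toy model of that port-based segmentation (the port-to-VLAN assignments below are made up purely for illustration; a real switch does this in hardware), the sketch shows a broadcast reaching only the ports in the sender’s VLAN:

```python
# Toy model of port-based VLAN segmentation on a single switch.
# The port-to-VLAN assignments are made up purely for illustration.

port_vlan = {
    1: 10, 2: 10, 3: 10,   # VLAN 10, e.g. payroll
    4: 20, 5: 20, 6: 20,   # VLAN 20, e.g. engineering
    7: 30, 8: 30,          # VLAN 30, e.g. guest
}

def broadcast(from_port: int) -> list[int]:
    """Return the ports that receive a broadcast frame sent from from_port."""
    vlan = port_vlan[from_port]
    return [p for p, v in port_vlan.items() if v == vlan and p != from_port]

print(broadcast(1))  # [2, 3]  -> only the other VLAN 10 ports see the frame
print(broadcast(4))  # [5, 6]  -> VLAN 20 ports only
```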

So why would we want to segment at all? There are two reasons: security and network efficiency. From a security perspective, this logical segmentation stops nodes from receiving frames they do not need to receive, as all broadcast traffic is isolated to the ports that belong to the correct VLAN. This can prevent eavesdropping or any other unwanted visibility of frames outside the VLAN. We might want to segment different departments in this way. For example, the payroll department might sit on its own VLAN, as the data it transmits is financially sensitive. Do all the nodes in the other departments need to see those broadcasts? Probably not. Network efficiency is pretty straightforward too. By segmenting the traffic into VLANs, we reduce the amount of traffic each node receives. This reduces the bandwidth used by the node and also the amount of processing it has to do to work out whether unwanted frames are intended for it before discarding them.

While VLANs are an excellent tool for subdividing broadcast domains, we can take this even further if required using PVLANs (Private VLANs). A detailed description of PVLANs is out of scope for this primer, but as a high-level summary, they are used to subdivide VLANs into even smaller broadcast domains. We create secondary VLANs and then implement rules to restrict which ports in the primary VLAN each subdivision can communicate with. A good example use case is a hotel network, where we want all devices to be able to communicate with the internet-connected router, but not with each other. More details can be found here: Private VLANs.

VMware is Goldmember in Openstack

It’s been a little surprising to me that there’s a decent amount of buzz in the market surrounding Openstack, yet not many people are clear about what it actually is or what VMware’s involvement in it looks like. In some discussions it is occasionally wielded as the “Deathbringer” for all things VMware, a work-in-progress alternative to everything VMware does. More often than not, there is a reaction of surprise in those discussions when it comes to light that VMware is a member of the Openstack Foundation. Furthermore, it contributes enough resources, funds and activity to be a Gold Member of the Foundation.

Openstack Gold Member

So how does that work?

Well, first and foremost, when compared to a product suite such as VMware’s vCloud Suite, it should be understood that Openstack is not a fully featured product stack catering for all of the functionality required to operate a private, hybrid or public cloud. Openstack is a plug-and-play framework that defines a common set of APIs and interfaces to enable the provisioning and operation of cloud capabilities. The key word here is framework. The framework provides a definition and set of rules for how the components in a cloud should communicate with and service each other. Openstack doesn’t, for example, provide compute virtualisation, or network and storage virtualisation for that matter. Yes, you still need a hypervisor in an Openstack implementation. There is definitely some confusion over this point, and Openstack (open source cloud management) is often mentally bundled together with KVM (one open source hypervisor). This is of course incorrect; KVM is not Openstack and vice versa. The hypervisor could be any number of those on the market today, remember it’s plug-and-play. This is one example of where VMware has significant relevance to Openstack: you can use vSphere as the hypervisor in any Openstack system.

Alongside the compute virtualisation provided by vSphere, it’s also possible to use VMware technologies such as VSAN (Virtual SAN) to serve up storage, along with NSX for network functionality. In fact, after VMware’s acquisition of Nicira, the company became extremely important to the development of Openstack’s networking projects. So it is clear there are many areas of collaboration between VMware and Openstack. It would be dismissive to fail to acknowledge that there are elements of competitive overlap between some VMware products and some Openstack projects, but this isn’t an all-encompassing “us vs them” discussion. VMware’s approach to Openstack is very much one of being a good neighbour in a growing ecosystem. If every element of the stack is to be plug-and-play, VMware will make best efforts to ensure that its own components adhere to the API specifications and provide the richest set of functionality available to the market.

VMware’s Openstack membership is established and gaining momentum. VMware is a Gold Member of the Openstack Foundation and continues to increase its activity and contributions across all relevant projects. Although a relatively late arrival to the Foundation, VMware now sits in the top 10 contributing companies for the whole project (ranked by source code commits).

Openstack Commit by Company

If you would like to know more about getting started with VMware and Openstack please read the following whitepaper:

http://www.vmware.com/files/pdf/techpaper/VMWare-Getting-Started-with-OpenStack-and-vSphere.pdf