Cloud Blob Storage Race-to-Zero: Azure Reserved Pricing Is Here

If you thought Azure Blob Storage was inexpensive before, how does a further 38% discount sound?

In November 2019, Microsoft announced that they would be introducing reserved-price discounting for six more services, including Blob Storage in GPv2 accounts:

https://azure.microsoft.com/en-us/blog/save-more-on-azure-usage-announcing-reservations-for-six-more-services/

The pricing is also now available in the Azure Pricing Calculator.

What does this mean?

You can now reserve capacity in advance, the same way you can reserve VM instances. The new model allows you to reserve capacity for either one or three years, with a 38% discount on your current pay-as-you-go costs when you commit for three years.
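As a back-of-the-envelope illustration, here is roughly how the saving works out over a three-year term. The pay-as-you-go rate below is a hypothetical placeholder, not a published Azure price, so plug in real figures from the Pricing Calculator:

```python
# Rough reserved-capacity savings estimate. The pay-as-you-go rate is a
# hypothetical placeholder, NOT a published Azure price -- check the Azure
# Pricing Calculator for real figures.
PAYG_PRICE_PER_GB_MONTH = 0.0184   # assumed pay-as-you-go rate, USD per GB/month
RESERVED_DISCOUNT = 0.38           # 3-year reservation discount from the announcement
CAPACITY_TB = 100                  # reservations come in 100 TB increments (see below)

gb = CAPACITY_TB * 1024
paygo_3yr = gb * PAYG_PRICE_PER_GB_MONTH * 36          # 36 months
reserved_3yr = paygo_3yr * (1 - RESERVED_DISCOUNT)

print(f"Pay-as-you-go over 3 years: ${paygo_3yr:,.0f}")
print(f"Reserved, 3-year term:      ${reserved_3yr:,.0f}")
print(f"Saving:                     ${paygo_3yr - reserved_3yr:,.0f}")
```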

This is great for users who have a predictable consumption need, such as backup users 🙂

What to look out for:

1. Ensure that your subscription type is eligible for the pricing. These types are currently covered:

“Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P): For an Enterprise subscription, the charges are deducted from the enrollment’s monetary commitment balance or charged as overage.

Individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P): For an individual subscription with pay-as-you-go rates, the charges are billed to the credit card or invoice payment method on the subscription.”

2. The pricing is available in minimum increments of 100 TB (i.e., don’t bother if you only have a couple of TBs in Azure).

Full details are available at the link below; please review them to understand your particular circumstances:

https://docs.microsoft.com/en-gb/azure/storage/blobs/storage-blob-reserved-capacity

Agile: It’s never too late? Or is it? Competing with Old and New.

Disclaimer: I work at Rubrik. Views are my own, etc.

I spent some time this weekend reading trade websites and watching video-streamed events to catch up on the competitive landscape for Rubrik. This is not something to which I pay a huge amount of attention, but it’s always worth understanding the key differences between your company’s and your competitors’ approaches to the problem you are solving.

What I found was as surprising as it was comforting, and it certainly solidified my belief that Rubrik is 100% the market leader in the new world of Data Management. This was as true of the new market entrants as it was of the old-guard vendors.

How Rubrik Does It

We are predominantly led by a laser-focused vision. We came to market to fix the data protection problem and leverage the resulting platform to deliver a world-class data management solution. This has always been the plan; we stuck to it and we are delivering on it.

Rubrik’s software engineering function has the Agile Software Development methodology baked into its DNA. At its core, our engineering team is made up of experienced developers, many of whom were at the heart of the distributed systems revolution at Google, Facebook and other new-wave companies. This means we can introduce capabilities quickly and iterate at a high frequency. We have absolutely nailed this approach and have consistently delivered significant payloads of functionality since v1.0 of the product.

New features are introduced as MVPs (Minimum Viable Products), and additional iterations to mature those features are delivered in rapid cycles. We have delivered four major CDM (Cloud Data Management) releases year-on-year and are positioned to accelerate this. Our Polaris SaaS platform is a living, organic entity and delivers new capability on a timescale of hours to days to weeks.

This is proactive innovation aligned with a rapid and consistent timeline.

How The Old Guard Do It

When referring to the old guard, I’m referring to vendors who’ve been around for more than a decade. These organisations grew up in the era of the Waterfall software development model. This model is much more linear and follows these stages: Gather Requirements -> Design -> Implementation -> Verification -> Maintenance. The cycle is documentation- and process-heavy, which is why most traditional vendors are only able to release major software versions in 18-24 month cycles.

These vendors are stuck between a rock and a hard place. They have revenue streams from ageing products to maintain, and historical code baggage (technical debt) to contend with that was never developed for the new world of cloud. More importantly, the mindset change required to move to Agile is a mountain to climb in itself, and many long-tenured developers struggle to adapt. In short, the commercial pressure to retain revenue from existing products, the technical debt, and the people and process challenges leave them hamstrung and struggling to keep up. One such vendor announced a major version two years ago and to date has failed to deliver the release.

A by-product of this challenge is that many will use sticking-plaster marketecture to claim they can do the same things as Rubrik. This usually comes in the form of PowerPoint, vapourware and a new front-end UI to hide the legacy platform.

How The New Entrants Try To Do It

Rubrik has experienced significant success with its laser-focused approach to addressing data management challenges. It is only natural in a fair, competitive market that other, newer companies will either launch or pivot their approach in an attempt to achieve similar success. These companies also use Agile software development, may also have distributed-systems backgrounds, and are able to iterate in a similar manner.

However, these organisations face a different set of challenges. If they did not build their base platform with the same vision, goal and focus, it will be hamstrung by underlying challenges similar to those the old guard face. Common sense tells us that if you design a product to do one thing and then adapt it to do another, it is not purpose-built and won’t do the same job without significant trade-offs or difficulties.

A second and more important challenge is that they don’t have the same mature understanding of the vision and consequently will be late to release new features. They will always lag behind the market leader, waiting for it to deliver a feature and then starting the process of developing that same feature themselves. I’m seeing some competitors announcing and demoing beta versions of functionality that Rubrik formally released one to two years ago. This is not proactive innovation; it’s reactive mimicry.

Having experienced (in previous roles) situations where your company hypes up a new “game-changing” proposition, only for you to feel massively deflated when it’s announced because the market leader has been providing it for over a year, I can tell you that it is not an awesome experience for employees.

This approach forces them to commoditize their solution, position it as “good enough,” and command a significantly lower price, as the value is ultimately not being delivered. It’s simply not sustainable in the medium or long term. When the funding begins to diminish, these companies become easy acquisitions for industry giants.

Conclusion

I have previously worked at both traditional vendors (e.g. Veeam) and new-entrant vendors such as SimpliVity, which pivoted to challenge Nutanix in the HCI (Hyper-Converged Infrastructure) space. SimpliVity did not win that competition, was subsequently acquired by HPE, and the solution now receives much less focus as part of HPE’s large portfolio of products.

Whether new or old, with adapted or re-purposed solutions, competing with Rubrik on price is frequently the only option available. This creates a situation which is optimal for neither customer nor vendor. The customer doesn’t get the outcome they wanted, and the vendor suffers in terms of available finance for ongoing operations as well as for investment in the development of its solution. If a company is offering 80% of the functionality for 60% of the price, then you will only get 50% of the result you planned. Furthermore, that company’s financials will most likely look like a cash-haemorrhaging horror story.

Rubrik is without a doubt the market leader in the Cloud Data Management space. The old guard are re-badging their solutions as “Rubrik-killers” and the new entrants are pivoting to follow while burning through their funding. It’s going to be an interesting couple of years: ancient giant trees will fall and new saplings will be plucked by the behemoths. Exciting times in the industry, watch this space!

Help Me Choose a Charity

Due to the recent demise of my dear old car, I’ve decided to donate the remnants of its existence to charity. I found a really cool way to do this via the www.giveacar.co.uk website. They will pick the car up and then scrap or auction it to generate the largest possible donation. There are over 800 registered charities to choose from, and new charities can register as needed.


In this instance, I’m pretty open to where the donation goes, but would like it to go to one of three areas: charities that work with vulnerable children, cancer research, or human rights organisations. Please help me decide where to donate by voting for one of the options below:

  • UNICEF: the world’s leading organisation working for children and their rights, with a presence in more than 190 countries and territories, reaching children on a scale like no other. They work with local communities, partners and governments to ensure every child’s right to survive and thrive is upheld.
  • Cancer Research: the world’s leading charity dedicated to beating cancer through research. They’ve saved millions of lives with their ground-breaking work into preventing, diagnosing and treating cancer.
  • Amnesty International: Amnesty is made up of ordinary people from across the world standing up for humanity and human rights. Their purpose is to protect individuals wherever justice, fairness, freedom and truth are denied.
  • ADDED: Neuroblastoma Children’s Cancer Alliance: helps families of children suffering from neuroblastoma by providing financial assistance for the children’s treatment.

Which charity should I donate my car to?

  • Neuroblastoma Children's Cancer Alliance (46%, 24 Votes)
  • Cancer Research (33%, 17 Votes)
  • UNICEF (13%, 7 Votes)
  • Amnesty International (8%, 4 Votes)

Total Voters: 52


Whitepaper: Virtual Backup Strategies: Using Storage Snapshots for Backups

Introduction
Effective data protection is a mandatory element in the modern IT environment. Historically, backup strategies were confined to the last few chapters in an administrator’s manual and treated like an afterthought. Now they sit firmly at the forefront of every CIO’s mind. The ability to continue business operations after a system failure and the need to fulfil stringent compliance requirements have made backup a necessity, not only for business continuity, but also for business survival. The question organizations need to ask about data protection is not whether to back up their data, but how to back it up.

IT systems evolve rapidly and present a constantly shifting landscape, and the techniques used to protect those systems need to evolve as well. Perhaps one of the most significant changes in recent years has been the advent of virtualization. In the virtual world, legacy backup systems have become unfit for purpose, causing backup windows to grow beyond a manageable scope. While this paradigm presents new challenges, it also creates new opportunities to improve efficiency, cut costs and reduce risks.

This paper will examine the use of storage snapshots as backups for virtual environments. We will evaluate their relative benefits and limitations, while also considering where they fit into a holistic backup strategy when compared to a virtual disk-to-disk backup solution such as Veeam® Backup & Replication™.

Background
Pre-virtualization backup strategies were underpinned by operating system (OS) and application-level features. The typical implementation involved installing a backup agent into an OS; the agent was responsible for putting applications into a consistent state for backup, copying backup data across the network to a backup server, and subsequently monitoring any ongoing changes.

While this worked well in the physical world, virtualization changed everything as operating systems began to share the same physical hardware. Instead of having one backup agent consuming resources from a physical host, there was an agent for each virtual machine (VM) on that host. This meant that ten agents (based on a 10:1 consolidation ratio) or even more could be contending for the host’s CPU, RAM and disk resources. This contention was not only with each other, but also with the applications they were installed to protect. In addition, volumes of data increased to a level where it was no longer feasible to use standard transports to move it across the production network to the backup server. This situation clearly could not continue, as virtualization became the standard practice of datacenters worldwide.

Virtualized Layers

Where virtualization presented new challenges, it also presented new opportunities. The physical world consisted solely of the application/OS layer. The virtual world, …

Webinar: Disaster Recovery for Virtual Environments, One Simple Solution for Five Common SAN Replication Challenges

This is a replay of a webinar I ran last year; the associated whitepaper is linked below:

Whitepaper Available here: http://wp.me/p2ZZG3-fG

A new sister webinar/whitepaper focusing on the use of SAN snapshots in a holistic data protection strategy will be posted shortly.

Closing Our Doors

Hi All,

I think we are just about done. The residential application is progressing, and all previous incinerator plans have been dropped.

There is still one more matter to deal with: we still have £1900 in the Campaign Fund. When we defined our constitution as a group back in 2010, we stated that on closure, any remaining funds would be donated to a local community group or charity. This is the first time I have been heavily involved in community activity, and I’ve been very happy with the outcome. I am, however, very aware that there is a group of people who contribute in equal measure, but do so on a weekly basis and have done for many, many years. While I will now go back to working on career and family, I feel comfortable in the knowledge that this group will continue to develop, protect and work for the community.

I have discussed the matter with the Say No committee and we have agreed that before closing doors on the campaign, the remaining funds should be donated to the MVCA (Monton Village Community Association). I think all who have been involved in the campaign will understand what the MVCA has contributed to our victory.

Please feel free to contact me if you would like to discuss this final action which will be concluded this week.

Thank you to all who have stepped up in this time of need. It’s taken a phenomenal amount of time, energy and dedication from the community as a whole, but I think we can finally put this one to rest. Stay vigilant, but enjoy this well earned victory and what the future may bring for the area.

Best of luck,

Hani

What’s been happening?

It’s been some time since the last update on this website and on the campaign as a whole.  Today we have news, but I’ll explain what’s been happening with the overall direction of the Green Lane site.

A short time after the incinerator appeal was dismissed at the inquiry, Sky Properties decided to challenge the decision in the High Court. At that point, the situation became very much about Sky challenging both the Planning Inspectorate and, in turn, Salford City Council over the legality of the decision made at the inquiry; this campaign group became an interested third party without direct involvement in any future court proceedings.

In parallel, the other stakeholders in the Green Lane site (not Mr. Hirsch) have been assessing and discussing the possibility of submitting an application for residential dwellings on the site. Many of you will have received a letter from the consultation company handling the residential application, Local Dialogue. There have been several months with no update since that letter. In that time, the residential application has encountered a few hurdles, including some reservations from the Highways Agency.

Hazel Blears has been in frequent contact with the new Green Lane representatives, pushing for regular updates on where we stand with both the court action and the residential application. Hazel has been back in contact this week with an update: we’re happy to let you know that the issues encountered with the Highways Agency appear to have been overcome. Also, although we are still waiting for confirmation, we expect the application for a court hearing to be withdrawn in the very near future.

It seems after all this time we may be back on track again. Let’s cross our fingers and hope that the court action is withdrawn sooner rather than later. Local Dialogue will also be starting the consultation process on the residential application soon, so let’s be mindful that we can’t sit on the sidelines and hope that this just gets through. I encourage you all to actively engage with Local Dialogue to contribute and assist in creating a useful development on the site, one which is beneficial not only for the community, but also for those who have invested in it.

Whitepaper: Disaster Recovery for Virtual Environments, One Simple Solution for Five Common SAN Replication Challenges

Introduction
It would be no overstatement of fact to say that in the last five years virtualization has radically changed the landscape of IT infrastructure for the better. Workloads encapsulated into standardized virtual machines have significantly increased our ability to optimize and use physical resources in a way that saves much time and money. In addition to these economic benefits, new avenues have opened up to tackle data protection and disaster recovery, allowing us to increase service uptime while also reducing business risk. This white paper focuses on some of the common challenges experienced while implementing and using SAN-based replication for disaster recovery and it examines an alternative approach to disaster recovery to help resolve these issues.

Background
Pre-virtualization disaster recovery plans were underpinned by application-level features hooking directly into specific hardware to achieve the required business recovery goals. To ensure that disaster recovery could be achieved, network infrastructure, hardware, software and application data were replicated to an offsite location, commonly referred to as the Disaster Recovery (DR) site. Depending on an application’s required Recovery Point Objective (RPO) and Recovery Time Objective (RTO), costs could spiral upwards to achieve small improvements in either. When you increase application uptime from 99.99% to 99.999%, the cost increase is not linear; it’s exponential. With the advent of virtualization, the infrastructure stack gained a new layer, enabling the movement of workloads between geographically dispersed locations. Importantly, this is achieved without requiring application-specific engineering, because workloads are compartmentalized and encapsulated into virtual machines. In a virtual machine, everything needed to support that workload can be encapsulated into a set of files in a folder and moved as a single contiguous entity.
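To put numbers on that claim, here is a quick, purely illustrative sketch of how little downtime each extra “nine” actually permits per year:

```python
# Allowed downtime per year at each availability level -- a quick way to
# see why each additional "nine" is disproportionately expensive to buy.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for availability in (0.999, 0.9999, 0.99999):
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime allows {downtime_minutes:7.1f} minutes of downtime/year")
```

Going from 99.99% to 99.999% shrinks the annual downtime budget from roughly 53 minutes to about 5, and every additional measure needed to close that gap adds cost.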

Scope and Definitions
The virtualization and storage layers are examined below; i.e., virtual machines (VMs), hypervisors and storage area networks (SANs). Application-level replication is beyond the scope of this document.

There are many potentially overlapping terms, which people often interpret differently. For the purposes of this paper, I will use “Continuous Data Protection (CDP),” “synchronous,” “fault tolerant,” “asynchronous” and “high availability” as defined below.

CDP consists of synchronous replication, which in turn involves double-writing to two different devices: an application (or hypervisor) only receives confirmation of a successful write to storage once both devices have acknowledged completion of the operation. CDP can help you achieve a zero RPO and RTO but requires strict hardware compatibility at both the source and destination sites. This allows you to deploy VMs in a cross-site, fault-tolerant configuration, so if you have an infrastructure problem, you can fail over to the DR site without any downtime.

Synchronous solutions are expensive and require a lot of network bandwidth but are appropriate for some mission-critical applications where no downtime or data loss can be tolerated. One issue with synchronous replication is that data is transferred to the DR site in real time. This means that if the disaster is driven by some kind of data corruption, malware or virus, then the problem that brings down the production site simultaneously does the same to the DR site. This is why synchronous implementations should always be combined with an asynchronous capability.

This paper is primarily concerned with asynchronous replication of virtual infrastructures for disaster recovery purposes. An asynchronous strategy takes a point-in-time copy of a portion of the production environment and transfers it to the DR site in a time frame that matches the required RPO/RTO goals; this may be “near real-time/near CDP” or scheduled (hourly, daily, etc.). This is more akin to high availability than fault tolerance. High availability in virtual environments refers primarily to having cold standby copies of VMs that can be powered on and booted if the live production VM is lost. This approach underpins most currently implemented DR strategies.
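To make the synchronous/asynchronous distinction concrete, here is a minimal conceptual sketch of the two write paths. This is illustrative Python, not any vendor’s replication API; all class and method names are invented for the example.

```python
import queue

class SynchronousReplicator:
    """Acknowledge a write only once BOTH sites have committed it (zero RPO)."""
    def __init__(self, primary_site, dr_site):
        self.primary_site, self.dr_site = primary_site, dr_site

    def write(self, block):
        self.primary_site.append(block)  # commit locally...
        self.dr_site.append(block)       # ...and remotely before acknowledging,
        return "ack"                     # so every write pays the round-trip cost

class AsynchronousReplicator:
    """Acknowledge immediately; ship changes to the DR site later (RPO > 0)."""
    def __init__(self, primary_site, dr_site):
        self.primary_site, self.dr_site = primary_site, dr_site
        self.pending = queue.Queue()

    def write(self, block):
        self.primary_site.append(block)
        self.pending.put(block)          # replication lags the production write
        return "ack"

    def replicate(self):
        # Runs on a schedule matching the RPO (near real-time, hourly, daily...).
        while not self.pending.empty():
            self.dr_site.append(self.pending.get())

# Usage: the asynchronous DR copy only catches up when replicate() runs.
primary, dr = [], []
repl = AsynchronousReplicator(primary, dr)
repl.write("block-1")
assert dr == []              # DR site is behind until the next cycle
repl.replicate()
assert dr == ["block-1"]
```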

The next section examines how SAN technologies approach asynchronous replication and the differences between SAN-level and VM-level strategies to achieve DR objectives.

SAN Replication Overview
SAN devices are typically engineered to aggregate disk resources to deal with large amounts of data. In recent years, additional processing power has been built into the devices to offload processing tasks from hosts serving up resources to the virtual environment. The basic unit of management for a SAN device is a Logical Unit Number (LUN). A LUN is a unit of storage, which may consist of several physical hard disks or a portion of a single disk. There are several considerations to balance when specifying a LUN configuration. One LUN intended to support VMs running Tier-1 applications may be backed by high-performance SSD disks, whereas another LUN may be backed by large, inexpensive disks and used primarily for test VMs. Once created, LUNs are made available to hypervisors, which in turn format them to create volumes; e.g., Virtual Machine File System (VMFS) on VMware vSphere and Cluster Shared Volume (CSV) on Microsoft Hyper-V. From this point on, I will use the terms “LUN” and “volume” interchangeably. A LUN can contain one or more VMs.

Figure: SAN LUN Configuration

For SAN devices, the basic mechanism for creating a point-in-time copy of VM disk data is the LUN snapshot. SANs are able to create LUN-level snapshots of the data they host. A LUN snapshot freezes the entire volume at the point it is taken, while ongoing read-write operations continue uninterrupted, redirected to another area of the array.
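As a toy model of the idea (conceptual Python only; real arrays implement this at the block and metadata level), a snapshot freezes a view of the volume’s block table while new writes land elsewhere:

```python
# Toy model of a LUN snapshot: the snapshot is a frozen view of the block
# table, and new writes are redirected so the snapshotted data stays intact.
class Lun:
    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))  # block number -> data

    def snapshot(self):
        # Near-instant: copies the reference table, not the data itself.
        return dict(self.blocks)

    def write(self, block_no, data):
        # New data occupies a fresh entry; existing snapshots keep the old view.
        self.blocks[block_no] = data

lun = Lun(["a", "b", "c"])
snap = lun.snapshot()
lun.write(1, "B")            # the production volume keeps changing...
assert snap[1] == "b"        # ...but the snapshot still reads the frozen data
assert lun.blocks[1] == "B"
```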