Cloud Blob Storage Race-to-Zero: Azure Reserved Pricing Is Here

If you thought Azure Blob Storage was inexpensive before, how would you like a further 38% discount on top?

In November 2019, Microsoft announced that they would be introducing reserved-pricing discounts for six more services, including Blob Storage in general-purpose v2 (GPv2) accounts:

https://azure.microsoft.com/en-us/blog/save-more-on-azure-usage-announcing-reservations-for-six-more-services/

The pricing is also now available in the Azure Pricing Calculator.

What does this mean?

You can now reserve capacity in advance, the same way you can reserve VM instances. The new model allows you to reserve capacity for either 1 or 3 years, with a discount of up to 38% off your current pay-as-you-go costs on a 3-year term.
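To put that in concrete terms, here is a quick back-of-the-envelope savings calculation. The per-GB rate below is a made-up placeholder, not an actual Azure price (look up the real rate for your region and tier in the Pricing Calculator); the 38% discount and the 100 TB minimum increment come from the announcement.

```python
# Rough savings sketch for reserved Blob Storage capacity.
# The per-GB rate is a placeholder, NOT an actual Azure price.

PAYG_RATE_PER_GB_MONTH = 0.0184   # hypothetical pay-as-you-go rate (USD per GB per month)
RESERVED_DISCOUNT = 0.38          # 3-year reserved-capacity discount from the announcement
CAPACITY_GB = 100 * 1024          # 100 TB, the minimum reservation increment
MONTHS = 36                       # 3-year term

payg_cost = PAYG_RATE_PER_GB_MONTH * CAPACITY_GB * MONTHS
reserved_cost = payg_cost * (1 - RESERVED_DISCOUNT)

print(f"Pay-as-you-go over 3 years:   ${payg_cost:,.2f}")
print(f"Reserved capacity over 3 years: ${reserved_cost:,.2f}")
print(f"Saving:                       ${payg_cost - reserved_cost:,.2f}")
```

Swap in the real rate for your storage tier and region and the arithmetic stays the same.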

This is great for users who have a predictable consumption need, such as backup users. 🙂

What to look out for:

1. Ensure that your subscription type is eligible for the pricing. The following types are currently covered:

“Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P): For an Enterprise subscription, the charges are deducted from the enrollment’s monetary commitment balance or charged as overage.

Individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P): For an individual subscription with pay-as-you-go rates, the charges are billed to the credit card or invoice payment method on the subscription.”

2. The pricing is available in minimum increments of 100 TB (so don't bother if you only have a couple of TB in Azure).

Full details are available here; please review them to understand your particular circumstances:

https://docs.microsoft.com/en-gb/azure/storage/blobs/storage-blob-reserved-capacity

Agile: It’s never too late? Or is it? Competing with Old and New.

Disclaimer: I work at Rubrik. Views are my own, etc.

I spent some time this weekend reading trade websites and watching video-streamed events to catch up on the competitive landscape for Rubrik. This is not something to which I pay a huge amount of attention, but it's always worth understanding the key differences between your company's approach and your competitors' approaches to the problem you are solving.

What I found was as surprising as it was comforting and certainly solidified my belief that Rubrik is 100% the market leader in the new world of Data Management. This was equally true of new market entrants as it was of old guard vendors.

How Rubrik Does it

We are predominantly led by a laser-focused vision. We came to market to fix the data protection problem and leverage the resulting platform to deliver a world-class data management solution. This has always been the plan; we stuck to it and are delivering on it.

Rubrik's software engineering function has the Agile software development methodology baked into its DNA. At its core, our engineering team is made up of experienced developers, many of whom were at the heart of the distributed systems revolution at Google, Facebook and other new-wave companies. This means we can introduce capabilities quickly and iterate the cycle at a high frequency. We have absolutely nailed this approach and have consistently delivered significant payloads of functionality since v1.0 of the product.

New features are introduced as MVPs (minimum viable products), and additional iterations to mature those features are delivered in rapid cycles. We have delivered four major CDM (Cloud Data Management) releases year-on-year and are positioned to accelerate this. Our Polaris SaaS platform is a living, organic entity and delivers capability in terms of hours-to-days-to-weeks.

This is proactive innovation aligned with a rapid and consistent timeline.

How The Old Guard Do It

By the old guard, I mean vendors who've been around for more than a decade. These organisations grew up in the era of the waterfall software development model. This model is much more linear and follows these stages: Gather Requirements -> Design -> Implementation -> Verification -> Maintenance. The cycle is documentation- and process-heavy, which is why most traditional vendors are only able to release major software versions in 18-24 month cycles.

These vendors are stuck between a rock and a hard place. They have revenue streams for ageing products to maintain and historical code baggage (technical debt) to contend with, code that wasn't developed for the new world of cloud. More importantly, the mindset change required to move to Agile is a mountain to climb in itself, and many long-tenured developers struggle to adapt. In short, the commercial pressure to retain revenue from existing products, technical debt, and people and process challenges leave them hamstrung and struggling to keep up. One such vendor announced a major version two years ago and to date has failed to deliver the release.

A by-product of this challenge is that many will use sticking-plaster "marketecture" to claim they can do the same things as Rubrik. This usually comes in the form of PowerPoint, vapourware and a new front-end UI to hide the legacy platform.

How The New Entrants Try To Do It

Rubrik has experienced significant success with our laser-focused approach to addressing data management challenges. It is only natural in a fair, competitive market that other, newer companies will either launch or pivot their approach in an attempt to achieve similar success. These companies also use Agile software development, may also have distributed systems backgrounds and are able to iterate in a similar manner.

However, these organisations face a different set of challenges. If they did not build their base platform with the same vision, goal and focus, it will essentially be hamstrung by underlying challenges similar to those the old guard experience. Common sense tells us that if you design a product to do one thing and then adapt it to do another, it is not purpose-built and won't do the same job without significant trade-offs or difficulties.

A second and more important challenge is that they don't have the same mature understanding of the vision and consequently will be late to release new features. They will always lag behind the market leader, waiting for it to deliver a feature and then starting the process of developing that same feature. I'm seeing some competitors announcing and demoing beta versions of functionality that Rubrik formally released 1-2 years ago. This is not proactive innovation; it's reactive mimicking.

Having experienced (in previous roles) those situations where your company hypes up a new "game-changing" proposition, only for everyone to feel massively deflated at announcement because the market leader has been providing it for over a year, I can tell you that it is not an awesome experience for employees.

This approach forces them to commoditize their solution, position it as “good enough” and command a significantly lower price as the value is ultimately not being delivered. It’s simply not sustainable in the medium or long term. When the funding begins to diminish, these companies become easy acquisitions for industry giants.

Conclusion

I have previously worked at both traditional vendors (e.g. Veeam) and new-entrant vendors such as SimpliVity, which pivoted to challenge Nutanix in the HCI (Hyper-Converged Infrastructure) space. SimpliVity did not win that competition, was subsequently acquired by HPE, and the solution now receives much less focus as part of HPE's large portfolio of products.

Whether new or old, with adapted or re-purposed solutions, competing with Rubrik on price is frequently the only option available. This creates a situation which is not optimal for either customer or vendor. The customer doesn't get the outcome they wanted, and the vendor suffers in terms of the finance available for ongoing operations as well as for investing in the development of their solution. If a company is offering 80% of the functionality for 60% of the price, then you will only get 50% of the result you planned. Furthermore, that company's financials will most likely look like a cash-haemorrhaging horror story.

Rubrik is without a doubt the market leader in the Cloud Data Management space. The Old Guard are re-badging their solutions as their “Rubrik-Killer” and the new entrants are pivoting to follow while burning through their funding.  It’s going to be an interesting couple of years. Ancient giant trees will fall and new saplings will be plucked by the behemoths. Exciting times in the industry, watch this space!

Whitepaper: Virtual Backup Strategies: Using Storage Snapshots for Backups

Introduction
Effective data protection is a mandatory element in the modern IT environment. Historically, backup strategies were confined to the last few chapters in an administrator's manual and treated like an afterthought. Now they sit firmly at the forefront of every CIO's mind. The ability to continue business operations after a system failure and the need to fulfil stringent compliance requirements have made backup a necessity, not only for business continuity but also for business survival. The question organizations need to ask about data protection is not whether to back up their data, but how to back up their data.

IT systems evolve rapidly and present a constantly shifting landscape, and the techniques used to protect those systems need to evolve as well. Perhaps one of the most significant changes in recent years has been the advent of virtualization. In the virtual world, legacy backup systems have become unfit for purpose, causing backup windows to grow beyond a manageable scope. While this paradigm presents new challenges, it also creates new opportunities to improve efficiency, cut costs and reduce risk.

This paper will examine the use of storage snapshots as backups for virtual environments. We will evaluate their relative benefits and limitations while also considering where they fit into a holistic backup strategy when compared to a virtual disk-to-disk backup solution such as Veeam® Backup & Replication™.

Background
Pre-virtualization backup strategies were underpinned by operating system (OS) and application-level features. The typical implementation would involve installing a backup agent into an OS; the agent would be responsible for putting applications into a consistent state for backup, copying backup data across the network to a backup server and subsequently monitoring any ongoing changes.

While this worked well in the physical world, virtualization changed everything as operating systems began to share the same physical hardware. Instead of having one backup agent consuming resources on a physical host, there was an agent for each virtual machine (VM) on that host. This meant that ten agents (based on a 10:1 consolidation ratio) or even more could be contending for the host's CPU, RAM and disk resources. This contention was not only with each other, but also with the applications they were installed to protect. In addition, volumes of data increased to a level where it was no longer feasible to use standard transports to move it across the production network to the backup server. This situation clearly could not continue, as virtualization became standard practice in datacenters worldwide.

Virtualized Layers

Where virtualization presented new challenges, it also presented new opportunities. The physical world consisted solely of the application/OS layer. The virtual world… Continue reading

Webinar: Disaster Recovery for Virtual Environments, One Simple Solution for Five Common SAN Replication Challenges

This is a replay of a webinar I ran last year; the associated whitepaper is linked below:

Whitepaper Available here: http://wp.me/p2ZZG3-fG

A new sister webinar/whitepaper focusing on using SAN snapshots in a holistic data protection strategy will be posted shortly.

Whitepaper: Disaster Recovery for Virtual Environments, One Simple Solution for Five Common SAN Replication Challenges

Introduction
It would be no overstatement of fact to say that in the last five years virtualization has radically changed the landscape of IT infrastructure for the better. Workloads encapsulated into standardized virtual machines have significantly increased our ability to optimize and use physical resources in a way that saves much time and money. In addition to these economic benefits, new avenues have opened up to tackle data protection and disaster recovery, allowing us to increase service uptime while also reducing business risk. This white paper focuses on some of the common challenges experienced while implementing and using SAN-based replication for disaster recovery and it examines an alternative approach to disaster recovery to help resolve these issues.

Background
Pre-virtualization disaster recovery plans were underpinned by application-level features hooking directly into specific hardware to achieve the required business recovery goals. To ensure that disaster recovery could be achieved, network infrastructure, hardware, software and application data were replicated to an offsite location, commonly referred to as the Disaster Recovery (DR) site. Depending on an application's required Recovery Point Objective (RPO) and Recovery Time Objective (RTO), costs could spiral upwards to achieve small improvements in both. When you increase application uptime from 99.99% to 99.999%, the cost increase is not linear; it's exponential. With the advent of virtualization, the infrastructure stack gained a new layer, enabling the movement of workloads between geographically dispersed locations. Importantly, this is achieved without requiring application-specific engineering, because workloads are compartmentalized and encapsulated into virtual machines. In a virtual machine, everything needed to support that workload can be encapsulated into a set of files in a folder and moved as a single contiguous entity.
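As a rough illustration of what those extra "nines" actually buy (and why the cost climbs so steeply), a couple of lines of arithmetic show the downtime each availability target permits per year:

```python
# Downtime allowed per year at each availability target.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.9999, 0.99999):  # "four nines" vs "five nines"
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime allows ~{downtime_minutes:.1f} minutes of downtime per year")
```

Roughly 53 minutes a year at 99.99% versus about 5 minutes a year at 99.999%; shaving off those last 48 minutes is where the spend escalates.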

Scope and Definitions
The virtualization and storage layers are examined below; i.e., virtual machines (VMs), hypervisors and storage area networks (SANs). Application-level replication is beyond the scope of this document.

There are many potentially overlapping terms, which people often interpret differently. For the purposes of this paper, I will use “Continuous Data Protection (CDP),” “synchronous,” “fault tolerant,” “asynchronous” and “high availability.”

CDP consists of synchronous replication, which involves double-writing to two different devices: an application (or hypervisor) only receives confirmation of a successful write to storage when both devices have acknowledged completion of the operation. CDP can help you achieve a zero RPO and RTO but requires strict hardware compatibility at both the source and destination sites. This allows you to deploy VMs in a cross-site, fault-tolerant configuration, so if you have an infrastructure problem, you can fail over to the DR site without any downtime.
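A minimal sketch of that double-write behaviour, with hypothetical in-memory "devices" standing in for real arrays (purely illustrative, not any vendor's API):

```python
# Synchronous replication sketch: the write is only acknowledged to the
# application once BOTH the local and the DR-site device have committed it.
class Device:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, block_id, data):
        self.blocks[block_id] = data
        return True  # acknowledgement from this device

def synchronous_write(primary, dr_copy, block_id, data):
    ack_primary = primary.write(block_id, data)
    ack_dr = dr_copy.write(block_id, data)
    # Only now does the application/hypervisor see the write as complete.
    return ack_primary and ack_dr

production = Device("production-array")
dr_site = Device("dr-array")
assert synchronous_write(production, dr_site, block_id=42, data=b"payload")
```

The latency of every write therefore includes the round trip to the DR site, which is why synchronous replication demands so much bandwidth and such low latency between sites.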

Synchronous solutions are expensive and require a lot of network bandwidth but are appropriate for some mission-critical applications where no downtime or data loss can be tolerated. One issue with synchronous replication is that data is transferred to the DR site in real time. This means that if the disaster is driven by some kind of data corruption, malware or virus, then the problem that brings down the production site simultaneously does the same to the DR site. This is why synchronous implementations should always be combined with an asynchronous capability.

This paper is primarily concerned with asynchronous replication of virtual infrastructures for disaster recovery purposes. An asynchronous strategy takes a point-in-time copy of a portion of the production environment and transfers it to the DR site in a time frame that matches the required RPO/RTO goals; this may be "near real-time/near CDP" or scheduled (hourly, daily, etc.). This is more akin to high availability than fault tolerance. High availability in virtual environments refers primarily to having cold standby copies of VMs that can be powered on and booted in the event that the live production VM is lost. This approach underpins most currently implemented DR strategies.
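By contrast, an asynchronous scheme ships point-in-time copies on a schedule, so the practical question becomes whether the newest copy at the DR site is still within the required RPO. A trivial, hypothetical check (names and times are illustrative):

```python
from datetime import datetime, timedelta

def rpo_met(last_replicated_copy: datetime, rpo: timedelta, now: datetime) -> bool:
    """True if the newest copy at the DR site is within the required RPO."""
    return (now - last_replicated_copy) <= rpo

now = datetime(2014, 1, 1, 12, 0)
last_copy = datetime(2014, 1, 1, 11, 10)                        # copy shipped 50 minutes ago
print(rpo_met(last_copy, rpo=timedelta(hours=1), now=now))      # True  - an hourly RPO is met
print(rpo_met(last_copy, rpo=timedelta(minutes=15), now=now))   # False - a 15-minute RPO is missed
```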

The next section examines how SAN technologies approach asynchronous replication and the differences between SAN-level and VM-level strategies to achieve DR objectives.

SAN Replication Overview
SAN devices are typically engineered to aggregate disk resources to deal with large amounts of data. In recent years, additional processing power has been built into the devices to offload processing tasks from hosts serving up resources to the virtual environment. The basic unit of management for a SAN device is a Logical Unit Number (LUN). A LUN is a unit of storage, which may consist of several physical hard disks or a portion of a single disk. There are several considerations to balance when specifying a LUN configuration. One LUN intended to support VMs running Tier-1 applications may be backed by high-performance SSD disks, whereas another LUN may be backed by large, inexpensive disks and used primarily for test VMs. Once created, LUNs are made available to hypervisors, which in turn format them to create volumes; e.g., Virtual Machine File System (VMFS) on VMware vSphere and Cluster Shared Volume (CSV) on Microsoft Hyper-V. From this point on, I will use the terms “LUN” and “volume” interchangeably. A LUN can contain one or more VMs.

 SAN LUN Configuration

For SAN devices, the basic mechanism for creating a point-in-time copy of VM disk data is the LUN snapshot. SANs are able to create LUN-level snapshots of the data they are hosting. A LUN snapshot freezes the entire volume at the point in time it is taken, while read/write operations continue without interruption, with new writes directed to another area of the array. Continue reading
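To make the mechanism concrete, here is a toy sketch of the idea: the snapshot freezes the volume's block map at a point in time, while subsequent writes land in the live copy and leave the frozen view untouched. This is conceptual only; it is not how any particular array implements snapshots.

```python
# Conceptual LUN snapshot: freeze the current block map, keep writing to the live volume.
class Lun:
    def __init__(self):
        self.blocks = {}      # block_id -> data (the "live" volume)
        self.snapshots = []   # frozen point-in-time block maps

    def write(self, block_id, data):
        self.blocks[block_id] = data   # the live volume keeps changing

    def snapshot(self):
        # Freeze the current view; cheap because only the block map is copied.
        self.snapshots.append(dict(self.blocks))
        return len(self.snapshots) - 1

lun = Lun()
lun.write(0, b"vm-config-v1")
snap_id = lun.snapshot()
lun.write(0, b"vm-config-v2")                          # live volume moves on...
assert lun.snapshots[snap_id][0] == b"vm-config-v1"    # ...the snapshot still shows the old data
```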

BBC Report Highlights Bad Spelling As Key Factor In Email Data Loss

A BBC report has highlighted misspelled email addresses as a key factor in the loss of sensitive data via email. Putting a dot in the wrong place or using slight misspellings in domain names has presented a security loophole that malicious attackers can exploit to steal data.

Click For BBC Report

Many large organisations use multiple subdomains to divide their various divisions, either by function or geographically. Email addresses in this type of environment can get pretty complex. For example, bank.com might use us.bank.com as the email subdomain for its US employees, so John Smith might have an address like "john.smith@us.bank.com". Data loss can occur when a user types the wrong email suffix, such as usbank.com. An email to this address would normally be bounced back to the sender with an error, as the domain wouldn't be recognized. It is, however, very easy for an attacker to set up the wrongly spelled email domain, putting them in a position where they receive all email for that domain. Researchers found that by doing this they managed to grab over 20GB of incorrectly addressed mail over a six-month period. The data grabbed included personal details, usernames, passwords and a bevy of other sensitive information.

This is a loophole often ignored by companies, but one that is easily mitigated. By using an information classification tool such as the Boldon James Email Classifier product, organisations can not only categorise their emails by level of sensitivity, they can also control which domains are allowed to receive emails from their employees. This is known as white-listing. If you would like to know more about email white-listing, please contact me or contact Boldon James directly at www.boldonjames.com
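A minimal sketch of the kind of outbound check a white-list enforces; the domains and function below are illustrative, not Boldon James's actual implementation:

```python
# Outbound white-list check: only deliver mail whose recipient domain is approved.
ALLOWED_DOMAINS = {"bank.com", "us.bank.com", "eu.bank.com"}  # illustrative allow-list

def recipient_allowed(address: str) -> bool:
    domain = address.rsplit("@", 1)[-1].lower()
    return domain in ALLOWED_DOMAINS

print(recipient_allowed("john.smith@us.bank.com"))  # True  -> deliver
print(recipient_allowed("john.smith@usbank.com"))   # False -> block: likely a typo-squatted domain
```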


ePrivacy Directive: EU to tighten up on Data Breach Notifications

You may be aware that the EU recently brought into force the updated ePrivacy Directive (2002/58/EC). As of May 2011, the use of cookies to track website visitor information without consent is strictly prohibited: cookies that were previously used to track visitor behaviour and personal details may now only be used with the express permission of the visitor. Interestingly, websites based outside of the EU do not have to operate under the same constraints. The enforcement and technical implementation of the directive may take some time to filter through to every cookie-using site on the web, and penalties for non-compliance are yet to be seen.

Work continues on the ePrivacy Directive in the coming months. One InfoSec concept the EU is looking to tighten control of through the directive is "disclosure". Whereas in the past companies or organisations may have been a little shy about publicising their information security breaches, it's soon going to become a strictly enforced legal requirement to do so. Under the ePrivacy Directive, disclosure requirements will be covered by Data Breach Notification rules. A public consultation is currently underway and is due to conclude in September:

ePrivacy Consultation

The consultation will cover the mechanisms for categorising, assessing and reporting breaches.

The hacker groups Anonymous and LulzSec have made a mockery of the security controls of some major organisations in recent months. Data loss and its prevention continue to be a major challenge for information security managers. It's time for organisations of all sizes to get serious about InfoSec, and this legislation could help push for that.

Phone Hacking How To: Hacking Voicemail

I've been asked in recent weeks how the News of the World private investigators were able to hack into the voicemail of the alleged 4,000 victims of the phone hacking scandal. While the details of all that activity are something for the police to worry about, we can explain the basic methodology of a simple attack, the one probably used in the majority of cases.

In the world of InfoSec there is such a thing as a spoofing attack. A spoofing attack is where you have your device (whether that be a phone, PC or laptop) send out network packets with the identity of someone else. In the IP world, communications are broken down into thousands of small packets of data. Each packet has a destination address and a source address. When carrying out a spoofing attack, we can use specialised software to send out packets with someone else's source address.
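To make the idea concrete: a packet carries its source address as a plain field, and nothing in the basic protocol verifies it, so any system that grants access purely on that claimed identity (as some carrier voicemail systems did with caller ID) can be fooled. The snippet below is a purely conceptual, hypothetical illustration of that flawed trust, not a working attack:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    source: str       # claimed identity of the sender - not verified by the basic protocol
    destination: str
    payload: str

def naive_voicemail_access(packet: Packet, mailbox_owner: str) -> bool:
    # Flawed check: trusts the unauthenticated source field alone.
    return packet.source == mailbox_owner

spoofed = Packet(source="+44 7700 900123", destination="voicemail-service", payload="play messages")
print(naive_voicemail_access(spoofed, mailbox_owner="+44 7700 900123"))  # True - access granted
```

The defence is equally simple in principle: never authenticate on the source identifier alone; require a secret such as a PIN.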

With the convergence of data and voice networks over the last 10 years, there has been a proliferation of technologies that allow data networks to connect to the older technologies traditionally used to provide voice services. This has come in the form of VoIP: technologies that provide voice over an IP data network. This has brought voice communications into the realm of the computing community, and also into the hands of the bad guys in that community: hackers. Hackers have produced software tools that allow them to control the data sent out over VoIP connections when calls are made and received.

Continue reading

Microsoft AD RMS: User Adoption Made Simple

What is Rights Management?

Rights management pertains directly to managing permissions for individuals to access specific information. Our two jargon-busting acronyms for this area are DRM (Digital Rights Management) and IRM (Information Rights Management). For the purposes of this article we will consider DRM and IRM one and the same.

Development of this area of technology has primarily been driven by copyright. Publishers of books, music and films have in recent years been increasingly motivated to try to protect their material in the face of the proliferation of internet use. The Internet has made it exponentially easier to share copyrighted materials with the click of a button, and not just with one person, but with hundreds of people, even ones the sharer has never met. The need to control who has the right to access, read, modify or even delete information has also become prominent in both government and commercial organisations.

Microsoft AD RMS – Active Directory Rights Management Services

Controlling content is at the heart of fulfilling those requirements, and Microsoft provides an Active Directory-integrated service, AD RMS, to do exactly that. The basis of the AD RMS service is that each document is automatically encrypted by an RMS client at the point of creation (the desktop). It is then, by default, protected from unauthorised individuals trying to access it. At creation, the creator is able to apply a list of permissions to the document, specifying who has what level of access to read or change it. These permissions are stored on the central AD RMS server, so when any other client tries to access the document, the server can be queried to see if the requested access should be permitted. Simple enough? Continue reading
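A very rough sketch of the flow just described. The class and method names here are invented for illustration and are not the real AD RMS API; the point is simply that the permission list lives on the central server and every access goes back to it for a decision.

```python
# Conceptual AD RMS-style flow: encrypt at creation, check permissions centrally on every access.
class RightsServer:
    def __init__(self):
        self.policies = {}   # document id -> {user: permission level}

    def register(self, doc_id, permissions):
        self.policies[doc_id] = permissions

    def check(self, doc_id, user, requested):
        levels = {"none": 0, "read": 1, "change": 2}
        granted = self.policies.get(doc_id, {}).get(user, "none")
        return levels[granted] >= levels[requested]

server = RightsServer()

# At creation, the client "encrypts" the document and registers who may do what.
server.register("quarterly-report.docx", {"alice": "change", "bob": "read"})

# At access time, the client asks the server before unlocking the content.
print(server.check("quarterly-report.docx", "bob", "read"))      # True
print(server.check("quarterly-report.docx", "bob", "change"))    # False
print(server.check("quarterly-report.docx", "mallory", "read"))  # False
```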