Claranet | Data Protection Bimodel World White Paper

Issue link: https://insight.claranet.co.uk/i/1019714

Data Protection White Paper

Introduction

Backup is broken – both EMC and Gartner agree on this point. Their view is that infrastructure should now protect itself using built-in intelligence, all part of the software-defined datacentre strategy for the new bimodal hybrid cloud world. In this white paper, we explore this concept by taking you through the construction of a robust data protection strategy, and how new technologies such as copy data virtualisation and object storage may change the way you protect data forever. Quite possibly, your next backup application is no backup application.

Data protection is an insurance policy. Certain processes, often transparent to the data owner, are put in place to protect data from the threat of loss or corruption. These threats are manifold in nature: user or operator error, system failure, datacentre loss, malicious intent or degenerative bugs. The speed of recovery, or the point in time from which to recover, will vary according to the data type and how critical the application is to the business.

As data grows, so too does the cost of protecting it. It's a capacity thing. According to research by IDC, 74% of organisations expect to spend more on data storage this year than they did last year, so the problem is getting bigger. IDC estimates that $931 million was spent on data protection software alone in 2015.

RTO, RPO: Scoping the Data Protection Service

Data protection services are measured by their ability to recover. The two most common metrics are Recovery Time Objective (RTO) and Recovery Point Objective (RPO). Together they form a data recovery policy and dictate the data protection process.

RTOs are measured in elapsed time (days, hours, minutes) – the time it takes to recover data so that an application service can begin to operate normally once more. The more important the application a data set supports, the more acute its RTO will be. For example, a trading system for an investment bank is likely to have a more acute RTO demand than the marketing team's systems.

RPOs are measured in calendar dates or points in time rather than elapsed time. These are markers in history from which a data set can be recovered in its original state. The frequency of these data set captures is determined by the criticality of the application that the data set supports; how long each image is kept is governed by compliance demands. An RPO therefore has two separate deliverables: frequency of capture and longevity of retention. A point-of-sale system for a retailer may have a very granular RPO, with a capture frequency of perhaps every 15 minutes but a short retention of perhaps 30 days, whereas HR records may be copied infrequently (e.g. every 24 hours) but carry retention demands that run into years.

Accurately defining a range of RPOs and RTOs is critical to ensuring the correct service class is aligned to any given data set. This definition and alignment process scopes the data protection service that the technology must support, helping to control cost and mitigate risk, and is essential to optimising the economics of any data protection service.
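The service-class definition and alignment process above can be sketched in code: express each class as an RTO plus the two RPO deliverables, then pick the cheapest class that satisfies a workload's requirements. This is a minimal illustration in Python; the tier names, thresholds and field names are hypothetical examples, not part of any Claranet service definition.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class RecoveryPolicy:
    """An RTO plus the two RPO deliverables: capture frequency and retention."""
    rto: timedelta            # maximum tolerable time to restore service
    rpo_frequency: timedelta  # how often a recovery point is captured
    retention: timedelta      # how long each recovery point is kept

# Hypothetical service classes; the thresholds are illustrative only.
SERVICE_CLASSES = {
    "platinum": RecoveryPolicy(rto=timedelta(minutes=15),
                               rpo_frequency=timedelta(minutes=15),
                               retention=timedelta(days=30)),
    "gold":     RecoveryPolicy(rto=timedelta(hours=4),
                               rpo_frequency=timedelta(hours=24),
                               retention=timedelta(days=3650)),
    "bronze":   RecoveryPolicy(rto=timedelta(days=2),
                               rpo_frequency=timedelta(days=7),
                               retention=timedelta(days=3650)),
}

def align_service_class(required: RecoveryPolicy) -> str:
    """Return the cheapest class whose RTO and capture frequency are at
    least as tight as required, and whose retention is at least as long."""
    for name in ("bronze", "gold", "platinum"):  # cheapest first
        policy = SERVICE_CLASSES[name]
        if (policy.rto <= required.rto
                and policy.rpo_frequency <= required.rpo_frequency
                and policy.retention >= required.retention):
            return name
    raise ValueError("no service class satisfies this recovery policy")

# A point-of-sale workload: 15-minute capture, 30-day retention, fast recovery.
pos = RecoveryPolicy(rto=timedelta(minutes=30),
                     rpo_frequency=timedelta(minutes=15),
                     retention=timedelta(days=30))
# HR records: daily capture, multi-year retention, relaxed recovery time.
hr = RecoveryPolicy(rto=timedelta(days=1),
                    rpo_frequency=timedelta(hours=24),
                    retention=timedelta(days=365 * 7))
```

Here `align_service_class(pos)` selects the most granular tier because of the tight RTO and capture frequency, while `align_service_class(hr)` lands on a cheaper tier whose long retention covers the compliance demand – mirroring how aligning each data set to the least expensive adequate class controls cost while mitigating risk.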
