In the age of remote replication, disaster recovery automation, and VM-centric protection policies, it can be hard to imagine that many businesses still do disaster recovery the old-fashioned way – shipping physical tapes to an offsite location. While tape is an economical approach that’s been around for ages, it’s certainly not the best option. Aside from the logistical headaches of tape rotation and shipping, recovering from tape-based backups can take hours or even days. With the cost of data center downtime approaching $9,000 per minute on average, being offline for that long could be, well, disastrous.
Believe it or not, tape remains the primary strategy for many organizations. Take one recent SimpliVity customer. Before moving to SimpliVity, this California school district was relying strictly on offsite tape backups for disaster recovery. Their challenges around data protection didn’t stop there. Backups were taking 60 hours to complete. They would kick off backup jobs on Friday afternoon and the backups wouldn’t complete until sometime Monday morning – bleeding into their production hours and impacting application performance. On top of that, backups only included the data, not the server operating systems. So a true disaster recovery scenario would first require a server rebuild, and then a restore from tape backups.
Does this scenario sound all too familiar? If so, you’re not alone. In fact, before deploying SimpliVity hyperconverged infrastructure, 70% of organizations rated sub-par disaster recovery as their number one data protection-related challenge. Similarly, a recent survey of enterprise IT leaders by 451 Research found that data growth and disaster recovery were both among the top five challenges facing their organizations.
The connection between data growth and disaster recovery is significant. As data volumes increase, the time to back up and restore data increases as well – 60 hours in the example noted above. At the same time, data has never been more critical. So while tolerance for downtime is shrinking, growing data volumes are making it harder to recover data in time. This gap was highlighted in a survey by ESG Research, which found that while 35% of respondents demanded recovery times in less than 15 minutes, only 6% actually achieved these recovery time objectives (RTOs).
So, how do we bridge the gap between downtime tolerance and the ability to recover in time? By addressing the challenge of data growth directly. SimpliVity reduces data volumes by deduplicating all data inline, globally, before it is ever written to disk. In the case of the California school district, a key driver of data growth was the fact that teachers were copying the same files to multiple folders, replicating the same file tens or even hundreds of times. After implementing SimpliVity, their data volumes shrank dramatically – from 424 TB of logical data across primary and backup storage, down to just 9 TB of used disk capacity, a 43:1 data efficiency ratio. And because SimpliVity’s native backup and restore capabilities take full advantage of this deduplication, they can back up an entire server VM in seconds, recover in minutes, and retain backups for up to 18 months.
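The intuition behind content-based deduplication is simple: identical data is detected by its hash and stored only once, no matter how many times it appears logically. The toy Python sketch below illustrates that idea – it is not SimpliVity’s actual implementation, and the function and variable names are hypothetical:

```python
import hashlib

def dedupe_store(blocks, store=None):
    """Keep only unique blocks, keyed by content hash (inline dedup sketch).

    Returns a list of hash references (the "logical" view) and the
    backing store of unique blocks (the "physical" view).
    """
    store = {} if store is None else store
    refs = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # write only if this content is new
        refs.append(digest)              # logical copy is just a reference
    return refs, store

# The same file copied into ten folders consumes physical storage once:
file_block = b"lesson-plan" * 372       # ~4 KB of identical content
copies = [file_block] * 10
refs, store = dedupe_store(copies)
# 10 logical copies, 1 unique block actually stored
```

In this toy model, ten logical copies of the same file resolve to a single stored block – the same effect that turned 424 TB of logical data into 9 TB of used capacity in the customer example above.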
Most importantly, they’ve solved their disaster recovery problem with a remote site about 40 miles away from their main data center. With bandwidth-efficient replication built into the SimpliVity platform, they can bring up their workloads at the offsite location in minutes. You can see why this customer was quoted as saying that SimpliVity was, “[t]he best purchase I have made as a Director of Technology.”
So how long would it take your organization to recover from a disaster? Minutes? Hours? Days? Odds are, you can’t afford to wait that long. SimpliVity now offers automated disaster recovery so you can be back online faster. Watch this video to learn more.