Last week, I was lucky enough to participate in ActualTech Media’s Data Protection Megacast. One thing that came through loud and clear during the event: current data protection methods aren’t solving modern data center challenges.
Nearly 280 attendees weighed in, and the poll results showed that the top three challenges in protecting data are “Disaster recovery planning,” “Data sprawl and ensuring it is all protected” and “Cost control.” In the subsequent poll question, respondents noted that their top three data protection concerns are “Cost,” “Performance – time to restore” and “Performance – time to backup.”
This real-time data backs up what we consistently hear from enterprises: data is growing faster than legacy data protection solutions can handle. As a result, backups and restores no longer complete within the timeframes the business demands, leaving organizations worried about their ability to recover data in the event of a disaster. And, as expected, all of this needs to be solved in a cost-conscious manner.
Several other great companies were featured on this Megacast, but SimpliVity’s advantage is our focus on managing data across its entire lifecycle, not just the backup or replication stage.
Protecting Production Data
As a hyperconverged platform, SimpliVity first provides the compute and storage infrastructure for customers’ production applications. As data is ingested from the hypervisor, we stage the VM data into DRAM on the OmniStack Accelerator Card across two of our nodes within a single datacenter. With the data now protected across multiple nodes (and with supercapacitor and flash storage protecting the DRAM on each OmniStack Accelerator Card), we acknowledge a successful write back to the VM, then deduplicate, compress and optimize the data to permanent storage on the Hard Disk Drives (HDDs) of both nodes. Once this process is complete, every VM in a SimpliVity datacenter can survive the loss of at least two HDDs in every node in the datacenter AND the loss of a full SimpliVity node.
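To make that write path concrete, here is a minimal sketch of the sequence just described: acknowledge only after the write is staged on two nodes, then deduplicate and compress to the HDDs asynchronously. The names and structures here are illustrative, not SimpliVity’s actual implementation.

```python
import hashlib
import zlib

class Node:
    def __init__(self, name):
        self.name = name
        self.dram_stage = []  # staged writes in DRAM (supercap/flash-protected in hardware)
        self.hdd_store = {}   # permanent store on HDD: content hash -> compressed block

    def stage(self, block):
        self.dram_stage.append(block)

    def destage(self):
        # Deduplicate (content-address by hash) and compress before writing to HDD.
        for block in self.dram_stage:
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.hdd_store:  # duplicate blocks cost no capacity
                self.hdd_store[digest] = zlib.compress(block)
        self.dram_stage.clear()

def write(block, primary, mirror):
    # Stage on two nodes before acknowledging, so the write survives the
    # loss of either node even before it reaches permanent storage.
    primary.stage(block)
    mirror.stage(block)
    return "ACK"  # acknowledged back to the VM; destaging happens asynchronously

node_a, node_b = Node("node-a"), Node("node-b")
assert write(b"vm block data", node_a, node_b) == "ACK"
node_a.destage()
node_b.destage()
```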
Now let’s imagine that the unthinkable happens and a full node goes offline due to a disaster. I like to refer to this as a VERY bad day in IT.
Backup and Restore
To protect data at specific points in time, SimpliVity designed backup and restore operations directly into the DNA of the SimpliVity OmniStack Data Virtualization Platform, enabled by our ability to dedupe, compress and optimize all VM data. The result is backups and restores that complete in seconds, which can help reduce Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs) while consuming almost no IOPS from the HDDs.
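One way to see why backups built on global deduplication can be this fast: once every block is content-addressed, a backup is just a copy of the VM’s block-reference map rather than of the data itself. A minimal sketch, using hypothetical names rather than SimpliVity’s actual data structures:

```python
import hashlib

store = {}  # datacenter-wide dedupe store: content hash -> block (HDDs in practice)

def write_vm(blocks):
    """Ingest VM data; the VM is represented as an ordered list of block references."""
    refs = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # each unique block is stored once
        refs.append(digest)
    return refs

def backup(vm_refs):
    # A backup is a copy of the reference map only: no block is read from or
    # written to the HDDs, so it completes in seconds regardless of VM size.
    return list(vm_refs)

def restore(backup_refs):
    # A restore presents the saved map as a new VM; equally metadata-only.
    return list(backup_refs)

vm = write_vm([b"block-1", b"block-2", b"block-1"])
snapshot = backup(vm)          # point-in-time copy, near-zero IOPS
recovered = restore(snapshot)  # new VM sharing the same deduplicated blocks
assert recovered == vm and len(store) == 2
```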
Replication and WAN Efficiency
When protecting data across datacenters, SimpliVity maintains awareness of data deduplication across the different sites. If a VM is configured to back up to a remote datacenter, the receiving datacenter determines which unique blocks it is missing, and the sending datacenter transports only those blocks across the WAN. (See the Lego Analogy post for more details.) This drastically reduces the WAN bandwidth needed between sites, allows more frequent backups to remote sites, and eliminates IOPS by reducing the amount of data that must be read from and written to the HDDs.
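The hash-exchange idea behind this can be sketched in a few lines. The exchange below is a simplification with hypothetical names, not the actual wire protocol:

```python
import hashlib

def replicate(vm_blocks, remote_store):
    """Back up a VM's blocks to a remote site, sending only the blocks it lacks."""
    offered = {hashlib.sha256(b).hexdigest(): b for b in vm_blocks}
    # The receiving datacenter compares the offered hashes against its own
    # dedupe store and asks only for the ones it is missing.
    missing = [h for h in offered if h not in remote_store]
    for h in missing:                # only unique blocks cross the WAN
        remote_store[h] = offered[h]
    return len(missing), len(offered)

# The remote site already holds one of the two blocks, so only one is sent.
remote = {hashlib.sha256(b"common block").hexdigest(): b"common block"}
sent, total = replicate([b"common block", b"new block"], remote)
print(f"sent {sent} of {total} blocks over the WAN")  # sent 1 of 2 blocks
```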
When “a VERY bad day in IT” turns into a true disaster and an entire datacenter must be restored in a new location, SimpliVity can help reduce the RTO. Since all the data is maintained in a deduplicated format, the restore process only requires creating and presenting a logical representation of each VM. No data needs to be read from backup media and written to production media at any point, so no IOPS are consumed to restore the data, which means both faster restores and no impact on running workloads.
One astute observer during the Megacast was quick to point out that RTO measures more than just the restoration of the VM. This is absolutely correct: this business metric should include ALL the steps required to make the application and data available to the end user. That generally starts with the restore process and should also include IP changes, DNS updates, re-establishment of communication paths between the tiers of an n-tier application stack, and other non-infrastructure items. This is why, over the last several months, we’ve focused on integrating with tools that can automate and orchestrate every step of the disaster recovery process, including pre-built packages of SimpliVity functionality within VMware’s vRealize Automation and Cisco’s UCS Director, and supporting partners building tools on top of SimpliVity APIs, like VM2020’s EZ-DR.
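To make that point concrete, here is a hedged sketch of such a runbook: the VM restore is only the first step in a chain, and the RTO clock runs until the last step succeeds. Every function below is a hypothetical stub, not a vRealize Automation, UCS Director, or SimpliVity API call.

```python
import time

# Hypothetical stubs; a real runbook would call the respective product APIs.
def restore_vms(vms):      print(f"restoring {vms} at the DR site")
def reassign_ips(vms):     print("re-IPing the restored VMs")
def update_dns(vms):       print("updating DNS records")
def relink_app_tiers(vms): print("reconnecting the n-tier application paths")
def validate_access(vms):  print("validating end-user access")

def run_dr_plan(vms):
    start = time.monotonic()
    for step in (restore_vms, reassign_ips, update_dns,
                 relink_app_tiers, validate_access):
        step(vms)  # a failure at any step extends the real-world RTO
    return time.monotonic() - start  # RTO ends when the last step succeeds

elapsed = run_dr_plan(["web-01", "app-01", "db-01"])
print(f"end-to-end recovery took {elapsed:.2f}s")
```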
Bringing together all of these features, built in from the ground up, SimpliVity provides a platform that offers very fast backup (#3 data protection concern) and restore (#2 data protection concern) operations, keeps up with the growth and protection of a modern data center (#2 data protection challenge), and integrates into an orchestrated disaster recovery plan (#1 data protection challenge). All of this is possible because of our ability to deduplicate, compress and optimize all data from inception and through every phase of its life. It also reduces the footprint within the datacenter (both fewer HDDs and fewer single-point-in-the-lifecycle appliances) and the number of IOPS consumed from the HDDs (further reducing the number of HDDs necessary), which makes for very nice cost savings in the end (#3 data protection challenge and #1 data protection concern).
Big congratulations to the team at ActualTech Media on the wildly successful Megacast property they’ve built. I encourage you to review our presentation from the Data Protection Megacast for more information about our capabilities.