What is the true measure of scalability in a data center platform? The number of building blocks, or nodes, that can be placed in a cluster is one obvious measure (and the one most often cited). But is it actually the best one?
Hyperconverged infrastructure changes the calculus: storage runs alongside the business applications on the same hardware. There are, therefore, other factors to consider beyond the number of nodes in a cluster.
CPU and Memory Resources Available To Business Applications
The first is the amount of CPU and memory available to business applications. VMs sharing a host server share its available resources; when one VM consumes more of a resource, the other VMs on that host can be “starved” of it. This contention can degrade application performance, and if it occurs consistently, it often forces IT organizations to relocate VMs to other hosts or clusters, or to provision more resources.
In hyperconverged infrastructure environments, the storage stack runs on the host servers, so it naturally takes resources away from the VMs cohabitating on the server, the very VMs housing the business applications the nodes are designed to run in the first place. Bolting on functions like backup, replication, acceleration, deduplication, and compression consumes even more resources, leaving even less for the applications/VMs.
Reducing the resources available to the applications/VMs means that fewer business applications can run per node. This reduced application load per hyperconverged infrastructure building block increases the number of nodes, licenses, network ports, and virtual storage controllers required, which in turn increases the share of acquired resources that cannot directly support business applications.
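The back-of-the-envelope sketch below illustrates the effect. Every figure in it is a hypothetical assumption chosen for illustration, not a measurement of any particular platform; it simply shows how reserving more cores and memory per node for the storage stack inflates the node count needed to host the same workload.

    # Rough, illustrative math: how per-node storage-stack overhead
    # inflates cluster size. All figures below are hypothetical.
    import math

    NODE_CORES = 40    # physical cores per node (assumed)
    NODE_RAM_GB = 512  # RAM per node in GB (assumed)
    VM_CORES = 2       # per-VM CPU demand (assumed)
    VM_RAM_GB = 16     # per-VM memory demand (assumed)
    TOTAL_VMS = 500    # total workload to host (assumed)

    def nodes_needed(storage_cores: int, storage_ram_gb: int) -> int:
        """Nodes required once the storage stack reserves its share."""
        usable_cores = NODE_CORES - storage_cores
        usable_ram = NODE_RAM_GB - storage_ram_gb
        vms_per_node = min(usable_cores // VM_CORES, usable_ram // VM_RAM_GB)
        return math.ceil(TOTAL_VMS / vms_per_node)

    # Light storage overhead vs. heavy "bolted-on" services overhead:
    print(nodes_needed(4, 32))    # 28 nodes
    print(nodes_needed(12, 128))  # 36 nodes for the same 500 VMs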
How SimpliVity Addresses CPU and Memory Resources
SimpliVity designed its hyperconverged infrastructure from the ground up to take advantage of a converged architecture and maximize data center efficiency. This approach offers significant benefits, including a higher consolidation ratio, better performance, and comprehensive data protection, all of which ultimately drive down total cost of ownership.
First, storage operations like backup, replication, deduplication, and compression are not bolted on after the fact; they are integral components of SimpliVity’s Data Virtualization Platform. All data coming into the SimpliVity hyperconverged infrastructure platform is deduplicated, compressed, and optimized before it is written to disk. Data operations on VMs, such as backup and replication, are in turn highly efficient because the data is already in a deduplicated, compressed, and optimized state. Copies of VMs for backup, and transfers of those copies between SimpliVity hyperconverged infrastructure nodes, are already storage- and bandwidth-optimized. No additional processing has to occur, and therefore no unnecessary resources are consumed.
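To make inline deduplication concrete, here is a minimal sketch of content-addressed, write-time deduplication and compression. It is a generic illustration of the technique, not SimpliVity’s actual Data Virtualization Platform; the block size, hash function, and store layout are all assumptions.

    # Minimal sketch of inline (write-time) dedup and compression.
    # Generic illustration only; block size, hash, and layout assumed.
    import hashlib
    import zlib

    BLOCK_SIZE = 8192  # fixed-size comparison blocks (assumed)

    class DedupStore:
        def __init__(self):
            self.blocks = {}    # content hash -> compressed block, stored once
            self.refcount = {}  # content hash -> number of references

        def write(self, data: bytes) -> list[str]:
            """Dedupe and compress at inception; return block fingerprints."""
            fingerprints = []
            for i in range(0, len(data), BLOCK_SIZE):
                block = data[i:i + BLOCK_SIZE]
                digest = hashlib.sha256(block).hexdigest()
                if digest not in self.blocks:
                    # New unique block: compress once, store once.
                    self.blocks[digest] = zlib.compress(block)
                self.refcount[digest] = self.refcount.get(digest, 0) + 1
                fingerprints.append(digest)
            return fingerprints

        def backup(self, fingerprints: list[str]) -> list[str]:
            """A 'backup' is just more references; no data is copied."""
            for digest in fingerprints:
                self.refcount[digest] += 1
            return list(fingerprints)

Because each unique block is compressed and stored exactly once, a backup in this model is just another set of references to existing blocks: it consumes almost no additional capacity and moves almost no additional data, which is the property described above.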
Second, SimpliVity’s hyperconverged infrastructure offloads the “heavy lifting” of storage processing to its OmniStack Accelerator Card (OAC), so the storage stack has far less impact on host resources. The accelerator card enables a SimpliVity hyperconverged infrastructure environment to offer built-in enterprise features like backup, replication, acceleration, deduplication, and compression with no impact on the performance of business applications, and at no additional cost.
Capacity Available to Business Applications
Another factor to consider when evaluating scale is capacity. For hyperconverged infrastructure systems, you can examine the raw and available capacity specifications, but what is often overlooked is the impact of data efficiency technologies on effective capacity. It’s not an exact science, but compression typically yields about a 2:1 ratio. Deduplication ratios depend on many factors, including the scope of the global data set that incoming data is compared against for redundancy, the deduplication algorithms used, and the granularity of the block size used for comparison. Depending on these and other factors, deduplication can yield data reduction ratios anywhere from 5:1 to 100:1. The higher end of that spectrum usually reflects a high degree of redundancy in the data set, for example, when production data and backup copies of that data are maintained in the same global data set. With data efficiency technologies driving down capacity requirements, more application (and backup) data can be stored per hyperconverged infrastructure node.
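As a rough sketch, effective capacity is approximately raw usable capacity multiplied by the combined data efficiency ratio. The ratios below are illustrative assumptions drawn from the ranges above, not guarantees:

    # Back-of-the-envelope effective-capacity math. The ratios are
    # illustrative assumptions, not guaranteed results.
    def effective_capacity_tb(usable_tb: float,
                              dedup_ratio: float,
                              compression_ratio: float = 2.0) -> float:
        """Logical data a node can hold after dedup and compression."""
        return usable_tb * dedup_ratio * compression_ratio

    # A node with 20 TB usable, at a conservative 5:1 dedup and 2:1 compression:
    print(effective_capacity_tb(20, 5))   # 200.0 TB of logical data
    # The same node with a backup-heavy, highly redundant data set at 40:1:
    print(effective_capacity_tb(20, 40))  # 1600.0 TB of logical data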
How SimpliVity Addresses Capacity Resources
As previously described, SimpliVity’s hyperconverged infrastructure deduplicates, compresses, and optimizes all data at inception, once and forever. Because all data is deduplicated and compressed, across both running production VMs and backups, a SimpliVity hyperconverged infrastructure node can manage a greater volume of application data per node. Most importantly, SimpliVity performs these processor-intensive operations on all data with no tradeoffs in performance. SimpliVity has customers who manage more than 3 PB of company data across only six SimpliVity hyperconverged infrastructure nodes.
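That works out to roughly 500 TB of logical data per node (3 PB divided by six nodes), consistent with the effective-capacity sketch above when deduplication and compression ratios run high.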
All this means that more workloads can be supported per node and, consequently, fewer nodes are required to host a company’s entire set of business applications and data. Fewer nodes means fewer licenses, less power, less rack space, less administrative overhead, less complexity, and lower cost.
So, what can a hyperconverged infrastructure solution optimized for higher consolidation ratios of business applications and data do for you? It delivers greater operational efficiency, higher utilization of the resources you’ve invested in, and, ultimately, more time and resources to devote to the work that matters.