Hyperconverged infrastructure is a disruptive force in the IT infrastructure market. As it drives a monumental transformation in data center technology, the metrics we use to measure the value of this technology need to change as well.
Oftentimes, technology evaluation comes with a bias. Because of its software-defined storage roots, some people view the merits of hyperconverged infrastructure through the lens of a storage administrator and tend to apply criteria such as nodes per cluster and terabytes per cluster. But since hyperconverged infrastructure shifts the paradigm from managing infrastructure components to managing VMs, the metrics used to measure it should shift as well.
Think of how metrics have had to evolve in other industries. The best example of a paradigm shift that forced a change in evaluation criteria is the electric car. Shopping for a Tesla vs. a combustion-powered Toyota can't rely on the familiar metric of miles per gallon (MPG). The industry had to establish a new one: MPGe (miles per gallon equivalent), which treats 33.7 kilowatt-hours of electricity as the energy equivalent of one gallon of gasoline to determine energy consumption.
In the fitness realm, wearables like Fitbit created a different unit of measurement for daily exercise: steps. For years, we were encouraged to get 30 minutes of activity a few days per week. Today, 10,000 steps a day is a rough equivalent of the Surgeon General's recommendation to accumulate 30 minutes of activity most days of the week and the CDC's recommendation of at least 150 minutes of moderate exercise per week. The American Heart Association and the World Health Organization have now adopted steps as an alternative to minutes of activity.
So it is with hyperconverged infrastructure. Many in the industry are still counting nodes and terabytes, or looking for storage-specific features rather than focusing on the real unit of measurement in a cloud and software-defined world: the VM. When you want to run a set of workloads in public cloud infrastructure, the conversation with the provider is more about the profile of the workload, its service level agreements (SLAs), and the cost of the subscription period. Would the conversation with the provider go to the level of how many nodes or terabytes (TB) power the cloud infrastructure?
But if nodes and TBs are no longer the metrics that matter for evaluating hyperconverged infrastructure, what should be?
In an informal survey of SimpliVity customers, we asked which key criteria they use to evaluate IT infrastructure. The top three were backup and recovery metrics, followed closely by a tie between cost metrics and operational metrics.
For those who listed backup and recovery metrics as a key criterion, the time it takes to back up data to an offsite disaster recovery location was an important consideration, along with recovery time and recovery point objectives (RTOs/RPOs).
Cost has always been an important consideration during evaluations. In particular, total cost of ownership (TCO) stood out: over seventy percent (70%) of respondents indicated that TCO was an important cost metric for their organization.
With backup and recovery metrics, cost metrics, and operational metrics becoming increasingly important criteria for evaluating data center solutions, it is clear that business outcomes matter most when evaluating IT. Those business outcomes, in turn, inform the features that are critical to the success of IT and the company.
When asked which units of measure are most important for establishing policies for workloads running on hyperconverged infrastructure, two-thirds of respondents said that the VM was king. Since VMs are mobile, it makes sense to have VM-centric (and VM-tethered) policies. Given this paradigm shift in the software-defined data center, some of the VM-centric metrics to consider adopting are listed below, followed by a quick sketch of how they might be calculated:
- Cost per VM
- Cost per fully-protected VM (includes data protection resources)
- Number of VMs per admin
- VMs per floor tile (in the data center)
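As a rough illustration of how these per-VM metrics reduce to simple arithmetic, here is a minimal sketch using entirely hypothetical figures (none of the numbers or variable names come from the survey; cost per fully-protected VM simply folds data protection spend into the same per-VM division):

```python
# Hypothetical, back-of-the-envelope illustration of the per-VM metrics above.
# All figures are made up for the example; none come from the survey.

total_solution_cost = 250_000.0   # hardware + software + support, in dollars
data_protection_cost = 40_000.0   # backup/DR capacity and licensing, in dollars
vm_count = 400                    # VMs running on the cluster
admin_count = 2                   # admins managing the environment
floor_tiles = 1                   # data center floor tiles occupied

cost_per_vm = total_solution_cost / vm_count
cost_per_protected_vm = (total_solution_cost + data_protection_cost) / vm_count
vms_per_admin = vm_count / admin_count
vms_per_floor_tile = vm_count / floor_tiles

print(f"Cost per VM:                 ${cost_per_vm:,.2f}")
print(f"Cost per fully-protected VM: ${cost_per_protected_vm:,.2f}")
print(f"VMs per admin:               {vms_per_admin:,.0f}")
print(f"VMs per floor tile:          {vms_per_floor_tile:,.0f}")
```

The point of metrics like these is that they normalize cost and operational effort to the unit the business actually cares about, the VM, rather than to the hardware underneath it.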
SimpliVity customers were also asked to rank the criteria within each category; using a weighted average of those rankings, the top three in each category were:
- Cost metrics: 1) ROI; 2) TCO; 3) OpEx Savings
- Operational metrics: 1) Time to deployment; 2) Number of VMs/admin; 3) Virtual hosts/admin
- SLA metrics: 1) Ability to sustain a device failure without data loss; 2) Availability; 3) Downtime/Uptime percentage
- Efficiency metrics: 1) Device consolidation; 2) VMs per floor tile; 3) Power usage effectiveness (PUE)
- Data protection metrics: 1) RTO; 2) RPO; 3) Time to disaster recovery
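Two of the metrics in the SLA and efficiency categories reduce to simple ratios. The sketch below uses hypothetical figures; availability is uptime divided by total time in the period, and PUE is total facility energy divided by IT equipment energy:

```python
# Quick sketch of two ratio-based metrics from the categories above,
# using hypothetical figures.

# Availability / uptime percentage: uptime divided by total time in the period.
uptime_hours = 8755.0             # hours the service was up this year (hypothetical)
downtime_hours = 5.0              # hours of outage this year (hypothetical)
availability_pct = uptime_hours / (uptime_hours + downtime_hours) * 100

# Power usage effectiveness (PUE): total facility energy over IT equipment energy.
total_facility_kwh = 1_500_000.0  # includes cooling, lighting, power distribution
it_equipment_kwh = 1_000_000.0    # servers, storage, network gear
pue = total_facility_kwh / it_equipment_kwh

print(f"Availability: {availability_pct:.3f}%")   # ~99.943%
print(f"PUE: {pue:.2f}")                          # 1.50
```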
Overall, a solution’s ability to match features with SLAs and business objectives will dictate its success. Nearly three-quarters (73%) of survey respondents say it is more important to evaluate hyperconverged infrastructure based on alignment with service delivery objectives than on individual features. As data centers are being transformed at such a rapid rate, shouldn’t the metrics toolkit we use to evaluate them evolve too?