I’ve been working within and studying the IT market for decades, and have seen many technology hype cycles. The noise surrounding the emerging market segment sometimes called “infrastructure convergence” seems historically high. Within just the last two years, every large infrastructure vendor, as well as a slew of early-stage companies, has launched products that claim to converge IT elements. And while many over-hyped technologies flame out before gaining significant market traction, this emerging market has staying power, because the underlying need driving the interest is real. What are the drivers?
Simply put, the tremendous interest in convergence is driven by the need for IT teams to address their most vexing problems: infrastructure cost and complexity. They need IT infrastructure that is simple and low-cost, both to purchase and to operate.
And we have great validation that the solution is attainable: Google and Amazon have opened our eyes to the fact that IT infrastructure TCO can be lower if we use commodity hardware to power a potent, multi-functional software architecture. See my blog post from last month (8/20/12), which describes the evolution toward an open and integrated IT infrastructure topology. The idea, taken to its logical conclusion, is to assimilate all the functionality of the IT infrastructure stack into a single unified architecture that runs efficiently atop shared, virtualized, commodity compute resources. Now the race is on to deliver a similar topology for deployment onsite in the data center.
Given the great need and customer demand, it is no surprise that so many vendors have jumped on the bandwagon, claiming that their solutions are in fact “converged,” or that they deliver software-defined this or that. In a crowded, noisy market characterized by hyperbole and confusion, what is needed is a clear set of metrics for differentiating and evaluating the proposed architectures.
To better understand this new market segment, and the offerings within it, I propose two metrics: Scope of Functionality and Degree of Efficiency.
Scope of Functionality measures the number of core infrastructure functions that the offering provides. A broader functional scope lets users get more of what they need from a single point of management. This can reduce acquisition cost as well as operating expenses, since fewer products need to be purchased and managed, and less power and space may be required.
Degree of Efficiency measures the amount of compute resources (processors, memory, flash, SSD, and HDD) required to deliver that functionality for a given workload. Highly efficient offerings do a better job of sharing the underlying compute resources amongst the various functions, and therefore offer significant acquisition cost savings. In contrast, offerings that merely bundle disparate products together without improving resource utilization score low on this metric. (Of course, sharing of resources is at the heart of server virtualization. A recent research paper by Leana Golubchik, Ph.D., Professor of Computer Science at USC, demonstrates that the concept can be extended to the sharing not only of server resources but also of the other elements that comprise today’s IT infrastructure. View the paper here.)
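To make the two metrics concrete, here is a minimal scoring sketch in Python. The core-function list, the example offering, and every number below are hypothetical stand-ins, not actual product data; the sketch simply shows one way each axis could be computed.

```python
# Hypothetical scoring sketch for the two metrics. The core-function list,
# the example offering, and all numbers are illustrative assumptions only.

CORE_FUNCTIONS = {
    "server", "storage", "caching", "scale-out", "deduplication",
    "compression", "backup", "replication", "snapshots",
    "WAN optimization", "cloud gateway", "unified management",
}  # stand-ins for the 12 core functions listed on the chart's X-axis

def scope_of_functionality(functions_provided: set) -> int:
    """Scope of Functionality: how many core functions the offering delivers."""
    return len(functions_provided & CORE_FUNCTIONS)

def degree_of_efficiency(resources_needed: float, resources_if_unshared: float) -> float:
    """Degree of Efficiency: how far resource sharing reduces the compute
    (CPU, memory, flash, SSD, HDD) needed for a given workload, from 0
    (a bundle of disjoint products) toward 1 (fully shared resources)."""
    return 1.0 - resources_needed / resources_if_unshared

# Example: an offering covering 4 core functions that needs 80% of the
# resources a loose bundle would need scores (4, 0.2) on the chart.
scope = scope_of_functionality({"server", "storage", "caching", "scale-out"})
efficiency = degree_of_efficiency(resources_needed=80.0, resources_if_unshared=100.0)
print(scope, round(efficiency, 2))  # -> 4 0.2
```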
These two metrics lend themselves to a simple graph in which efficiency is measured on the vertical axis and functional scope on the horizontal one. Note that the chart below lists, along the X-axis, the 12 core infrastructure functions that the modern data center demands.
Once the market framework is established, it becomes relatively easy to plot offerings along the two metrics, which we’ve done below using capital letters to depict actual products in the market today. When we do so, the chart reveals two clusters of offerings:
- In the lower left quadrant, characterized by limited functional scope and limited efficiency, we see a host of offerings from large vendors who have integrated a number of disparate products within a single frame and under a common GUI. These products all provide baseline storage and server functions, and some go further to include caching and some limited scale-out capabilities. They all score low on efficiency, since the underlying compute resources are not shared amongst the independent products bundled together.
- In the upper left quadrant we see a number of offerings, primarily from newer vendors. These products also deliver baseline server and storage services but do so within a single container, in which the underlying compute resources are indeed virtualized and shared. This sharing improves the efficiency score, but does not—on its own—enable them to deliver the advanced functionality on the right side of the chart.
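Once offerings have scores on both axes, clusters like the two above are easy to visualize. The following matplotlib sketch uses made-up letters and scores, not the actual products plotted in the chart, to show how such a graph is constructed:

```python
import matplotlib.pyplot as plt

# Hypothetical offerings as (Scope of Functionality, Degree of Efficiency).
# Letters and scores are illustrative, not the actual products in the chart.
offerings = {
    "A": (3, 0.15),   # bundled products, limited scope -> lower left
    "B": (4, 0.20),
    "C": (3, 0.70),   # shared, virtualized resources, limited scope -> upper left
    "D": (4, 0.75),
    "E": (12, 0.95),  # broad functionality on fully shared resources -> upper right
}

fig, ax = plt.subplots()
for letter, (scope, efficiency) in offerings.items():
    ax.text(scope, efficiency, letter, fontsize=14, ha="center", va="center")

ax.set_xlim(0, 13)
ax.set_ylim(0, 1)
ax.set_xlabel("Scope of Functionality (core infrastructure functions, of 12)")
ax.set_ylabel("Degree of Efficiency (resource sharing across functions)")
plt.show()
```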
Scoring high along both axes is not easy: it requires the solution not only to deliver a vast set of functionality (one that today spans 8 to 12 disparate products), but to do so while sharing all of the underlying resources amongst all of the functions. What is required, then, is an entirely new IT infrastructure stack, one developed from the ground up with that specific mission in mind. This is exactly what SimpliVity has done, placing our flagship product, OmniCube™, firmly in the top right of the chart.
In short, OmniCube is the first product to deliver on the promise of a multi-functional, truly converged IT infrastructure platform, enabling customers to radically simplify their IT environment and dramatically reduce costs in the process.