So you’re interested in hyperconvergence – what is it and why should you care about the “hyper” part?
Let’s take a step back and look at how we got here. The evolution of convergence has unfolded in three phases, beginning in 2009 in response to the glaring need for a simplified and consolidated datacenter infrastructure. As I discussed in my last blog, the collection of point products acquired in the datacenter over a decade creates several challenges. It became clear to industry experts that the ideals of cloud (i.e., agility, scalability, control, and self-service) could not be achieved without taking a new approach to IT infrastructure – and so began the phases of convergence.
Here’s the breakdown of the evolution of converged infrastructure:
First came Convergence 1.0, or “integrated systems”. Companies like VCE and NetApp were first to market in this space, putting existing storage, server, and virtualization products into one box with a single support number. The idea behind integrated systems was to make deployment easier and faster than with legacy IT infrastructure. Convergence 1.0 solutions took the first step toward consolidating some of the physical infrastructure; however, point products were still needed for data protection and WAN optimization. The underlying data architecture remained untouched with Convergence 1.0 technologies – the data problem and the I/O gap still existed.
Convergence 2.0, or “partially converged”, systems came out shortly after. In these solutions, server and storage were converged into a single shared resource pool for use cases like VDI. Convergence 2.0 solutions were another great step in the right direction; however, they are not VM-centric, they do not address the problems around efficiency or performance, and they include neither native data protection nor a global unified management function.
Finally, there is Convergence 3.0, also known as true hyperconvergence. SimpliVity offers the only truly hyperconverged solution that provides cloud economics with enterprise capabilities, including performance, data efficiency, native data protection, and global unified management. SimpliVity has an architectural advantage: others who claim to be “hyperconverged” simply combine server and storage on x86 hardware and do not address the I/O problem. SimpliVity’s OmniStack Data Virtualization Platform closes the I/O gap by deduplicating, compressing, and optimizing all data at inception, once and forever, across all stages of the data lifecycle. This increases efficiency and simultaneously improves performance – something Convergence 1.0 and 2.0 solutions are still missing.
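To make the idea of deduplicating and compressing data “at inception” concrete, here is a minimal conceptual sketch in Python. It is not SimpliVity’s implementation – OmniStack does this inline in the data path with hardware assistance – but it illustrates the principle: every incoming block is hashed, and only previously unseen blocks are compressed and stored, so duplicate data (such as a backup copy) consumes no additional capacity.

```python
import hashlib
import zlib

class DedupStore:
    """Toy content-addressed store: each unique block is hashed,
    compressed, and written exactly once at ingest time."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}   # sha256 digest -> compressed block (stored once)
        self.files = {}    # filename -> ordered list of block digests

    def write(self, name, data):
        digests = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:          # dedup: store new blocks only
                self.blocks[digest] = zlib.compress(block)
            digests.append(digest)
        self.files[name] = digests                 # file is just a recipe of digests

    def read(self, name):
        return b"".join(zlib.decompress(self.blocks[d])
                        for d in self.files[name])

store = DedupStore()
payload = b"A" * 8192                  # two identical 4 KiB blocks
store.write("vm1.img", payload)
store.write("backup.img", payload)     # the "backup" adds zero new blocks
assert store.read("backup.img") == payload
print(len(store.blocks))               # unique blocks stored -> 1
```

Because a “backup” here is just another list of digests pointing at already-stored blocks, full copies become nearly free – which is the architectural reason dedup-at-inception also changes the economics of data protection, not just primary storage.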
SimpliVity customer The Vollrath Company is just one example of the many customers that have seen firsthand the differences among Convergence 1.0, 2.0, and 3.0 solutions. Vollrath, a large stainless steel manufacturing company, was facing a wide range of challenges, including ROBO issues, backups that took hours, limited retention, and difficulty meeting RTOs. At the time, Vollrath had two Vblocks and purpose-built backup solutions. The company evaluated Convergence 1.0 and 2.0 offerings; however, the cost of implementing either at its seven remote sites was extremely high. On top of that, those offerings lacked the incremental building blocks and the global unified management the company required. What Vollrath needed was a hyperconverged solution – SimpliVity. Check out the full case study here.
Watch my colleague, Dan Pearl, discuss the evolution of hyperconvergence in this video.