At the VMware Partner Exchange (PEX) conference this week, there was a flurry of vendors using the term “hyperconverged,” including some taking baby steps forward regarding the storage layer. However, these offerings are still only “Convergence 2.0” solutions.
In contrast, SimpliVity simplifies IT by hyperconverging everything “below the hypervisor.” While the distinction might seem subtle, SimpliVity’s approach delivers unprecedented efficiency. And efficiency matters. Let me explain.
When developing a hyperconverged infrastructure solution, both the scope of the functionality and the overall efficiency need to be considered. At SimpliVity, when we talk about hyperconverged infrastructure solutions, we often show an image of the traditional data center stack and compare it to the simplification offered by our technology.
This captures our ability to replace the functionality of eight to 12 discrete products in the data center. There are multiple ways to deliver this functionality. One approach would be to take existing products, run them in one or many VMs, and run them within a single shared resource pool.
There are many challenges with this approach, but I want to focus on efficiency. If all of the business applications and infrastructure applications run within the same shared x86 resource pool, then the efficiency of those infrastructure applications matters a great deal. If each infrastructure application processes the data separately, the same data gets processed again and again. In the example below, the same piece of data is processed nine different times!
There is nothing efficient about processing the same data nine different times. It is a huge waste of CPU resources within the infrastructure, and it requires more CPU and memory, and likely more hosts in the environment. In the end, this approach does not deliver the expected cost savings. SimpliVity takes a different approach to delivering infrastructure functionality: SimpliVity solves the data problem.
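The difference can be sketched in a few lines of Python. This is a hypothetical illustration, not SimpliVity's actual implementation: the service names and the use of a SHA-256 fingerprint are assumptions made for the example. The point is only that several infrastructure services each need to examine the same data block, and the work can either be repeated per service or done once and shared.

```python
import hashlib

# Hypothetical infrastructure services that each need a fingerprint
# of the same data block (names are illustrative, not SimpliVity's).
SERVICES = ["dedupe", "backup", "replication", "wan_optimization"]

def naive_pipeline(block: bytes) -> int:
    """Each service fingerprints the block independently."""
    passes = 0
    for _ in SERVICES:
        hashlib.sha256(block).hexdigest()  # same computation, repeated
        passes += 1
    return passes  # number of times the block was processed

def process_once_pipeline(block: bytes, cache: dict) -> int:
    """Fingerprint the block once; every service reuses the result."""
    digest = hashlib.sha256(block).hexdigest()  # single pass over the data
    for service in SERVICES:
        cache[(service, digest)] = True  # reuse the result, no reprocessing
    return 1

block = b"the same piece of data"
print(naive_pipeline(block))             # 4 passes over the block
print(process_once_pipeline(block, {}))  # 1 pass over the block
```

With four services the naive pipeline touches the block four times; with nine bolt-on products, as in the example above, it would be nine. The process-once pipeline touches it exactly once regardless of how many services consume the result.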
Let’s illustrate that with an analogy. Imagine having the flu for a moment. One approach to feeling better is to take different medications for each of your symptoms: body aches, cough, congestion, sore throat, headache, and so on. The end result is that you haven’t actually addressed the problem. You’ve taken nine different medications, but you still have the flu.
Instead of treating the “symptoms” within the data center with many different bolt-on technologies, SimpliVity goes after the underlying issue; we treat the flu—or in this case, the data problem. SimpliVity’s hyperconverged infrastructure eliminates the need to purchase and operate up to a dozen different products in the data center to deliver data efficiency. Instead of the same data element being processed separately by each device or infrastructure application in the data center, SimpliVity processes it once.
You might not actually be using up to a dozen different products, but I would be willing to bet that your data gets processed multiple times over its lifecycle. Even if your savings are 4x instead of 9x, that is still a remarkable efficiency gain for the infrastructure.
So, if you’re exploring hyperconverged infrastructure and redefining simplicity is alluring, think bigger picture. Think about redefining simplicity AND efficiency … because efficiency matters.