It is time we admit the truth. Technology “add-ons” sound great, but in practice, they frequently fall short. Think about the last time you purchased equipment for your IT infrastructure. You probably started the process with a list of features and functionalities, then picked your solution once you found a vendor that could check off every box on your list.
However, some IT infrastructure providers are simply “adding on” certain capabilities in order to check those boxes. Rather than taking the time to truly “build in” the technology before going to market, they identified a gap and bolted on capabilities through partners or patches. This “add-on” approach may check the box, but it can force difficult choices. You may think you have all the necessary features, but chances are, features like deduplication cannot be turned on without significant trade-offs in performance.
A good way to see the difference between the “built-in” and “add-on” approaches is to think about how you shop for a car. A car buyer may have a list of requirements – fuel efficiency, speed, cargo space, and all-wheel drive. These capabilities have to be engineered into the vehicle from the beginning to satisfy the customer. If they aren’t, you may need to switch off features like traction control or all-wheel drive to achieve the promised fuel efficiency – not exactly what you had in mind.
Now let’s talk IT. The IT infrastructure industry is evolving rapidly. Buyers are looking for simple, affordable, fast, and reliable solutions, and they’re seeing how hyperconverged infrastructure can meet their needs. Data efficiency capabilities, such as deduplication and compression, improve performance and reduce capacity requirements; however, when they are not baked into the hyperconverged solution from inception, the product underperforms because the infrastructure resources are spread too thin. Capabilities delivered as “add-ons” may satisfy the checkbox, but they consume the very resources your applications need. In other words, “add-ons” come with a trade-off.
In my experience, an IT administrator will not use data efficiency or data protection features that come with a trade-off in application performance. The admin doesn’t want to be pulled out of bed at 2:00 a.m. because of application issues. If turning on data efficiency or data protection features could cause an application performance problem, the administrator will leave those key features off or use them only selectively.
Now let’s look at SimpliVity. We spent 43 months in development to ensure our platform is the most complete data center solution. SimpliVity is the only hyperconverged infrastructure vendor to combine all the functionality of an entire data center stack below the hypervisor—including compute, storage, network switching, replication, backup and more—onto industry-standard x86 hardware.
The company’s data protection and efficiency features are designed to optimize capacity and bandwidth while freeing up system resources for end-user applications. SimpliVity deduplicates, compresses, and optimizes data inline, the first time it is written to disk, eliminating redundant writes. The result is I/O savings and better performance across the board. There is no trade-off between efficiency, capacity savings, and performance. You can have it all.
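To make the inline deduplication idea concrete, here is a minimal Python sketch of a content-addressed block store. It illustrates the general technique only: data is split into fixed-size blocks, each block is fingerprinted before it is written, and a block that has been seen before is never stored (or compressed) again. The class name, block size, and structure are illustrative assumptions, not SimpliVity’s actual implementation.

```python
import hashlib
import zlib

class InlineDedupStore:
    """Toy block store illustrating inline deduplication and compression:
    redundant blocks are detected by fingerprint *before* they are written,
    so they never reach the backing store."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}         # fingerprint -> compressed unique block
        self.writes_avoided = 0  # redundant writes eliminated inline

    def write(self, data: bytes) -> list:
        """Split data into fixed-size blocks and store each unique block once.
        Returns a list of fingerprints that references the original data."""
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            if fp in self.blocks:
                self.writes_avoided += 1  # duplicate: skip the write entirely
            else:
                self.blocks[fp] = zlib.compress(block)
            refs.append(fp)
        return refs

    def read(self, refs: list) -> bytes:
        """Reassemble the original data from its block fingerprints."""
        return b"".join(zlib.decompress(self.blocks[fp]) for fp in refs)
```

In this sketch, writing the same block a second time costs only a hash lookup; no disk I/O, no compression work. That is the core of the “no trade-off” argument: when deduplication happens inline, it removes work from the write path instead of adding a cleanup pass later.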
See a demo of SimpliVity’s OmniStack 3.0 or look through our customer case studies, where we have enabled our customers to move from just keeping up with IT troubleshooting tickets to leading strategic IT projects. SimpliVity has delivered an industry breakthrough to these customers and is helping transform IT.