Last week, I had the pleasure of presenting at TechTarget’s Boston Roadshow on Virtual Desktop Infrastructure (VDI), headlined by Gabe Knuth – VDI expert and IT industry guru – who delivered the keynote.
As principal engineer and solution owner for SimpliVity’s VDI offering, I was thrilled to share SimpliVity’s VDI story with the attendees, and to talk about how SimpliVity and hyperconvergence are changing the game for VDI. Of course, I didn’t expect such a big shout-out from Gabe, who touted hyperconvergence as an important evolution in VDI infrastructure.
Gabe talked about hyperconvergence in the context of some common challenges that have historically plagued VDI projects, starting with how difficult it is to size VDI workloads upfront. “Where servers are predictable, desktops are random,” as Gabe put it. And the only way to know for sure whether you have enough capacity to support your users is to throw something against the wall and see if it sticks.
But that can be painful with traditional IT. Siloed data centers, composed of independent compute, storage, network, and data protection platforms, with distinct administrative interfaces, are inherently inefficient, cumbersome, and costly. Rolling out a new application like VDI becomes a manually intensive, time-consuming proposition, and expanding system capacity can take days or even weeks, and require complex provisioning and administration.
But, there is hope. As Gabe pointed out, hyperconvergence offers many distinct benefits for VDI. For example, here’s how SimpliVity hyperconvergence addresses the sizing issue.
First, SimpliVity eliminates complexity by delivering all IT functions and services in a single, scalable 2U building block that runs on commodity x86 hardware. This modular, building-block approach lets you start small based on your immediate needs, then quickly scale out to match demand, with stable, predictable performance as you grow. And because SimpliVity hyperconverged infrastructure is so fast and simple to deploy, VDI admins no longer need to predict the future by forecasting demand upfront.
Another pain point that Gabe addressed is performance. Boot storms and login storms can overwhelm compute and storage IO capacity during peak periods, creating significant performance degradation if not handled properly. Gabe cited in-line deduplication (“Not your father’s dedupe” – his words) as a way to address the I/O gap that results from storage controllers getting overwhelmed with read/write requests from thousands of desktops at once.
SimpliVity addresses VDI performance by deduplicating, compressing, and optimizing all data inline, at inception, across all phases of the data lifecycle, at a fine granularity of just 4KB-8KB. On average, SimpliVity customers achieve 40:1 data efficiency while simultaneously increasing application performance, thanks to our OmniStack Accelerator Card, which offloads the processing of these operations. We also use predictive cache algorithms to anticipate reads and stage data in high-performance SSD cache before it is requested. When you put those two together, inline deduplication and predictive caching, you get a high-performing VDI infrastructure that delivers an excellent user experience without breaking the bank.
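To make the deduplication idea concrete, here’s a toy Python sketch of inline, content-addressed dedupe at a fixed 4KB block size. The `DedupStore` class, SHA-256 hashing, and the 100-desktop scenario are illustrative assumptions for this sketch, not SimpliVity’s OmniStack implementation; the point is simply that when many desktops share the same golden image, duplicate blocks only ever need to be stored once.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative 4 KB granularity (an assumption for this sketch)

class DedupStore:
    """Toy content-addressed store: each unique block is kept exactly once."""

    def __init__(self):
        self.blocks = {}        # block hash -> block bytes (stored once)
        self.logical_bytes = 0  # total bytes written by clients

    def write(self, data: bytes):
        """Split data into fixed-size blocks; store only blocks not seen before."""
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:   # new content: store it
                self.blocks[digest] = block
            refs.append(digest)             # a duplicate only adds a reference
            self.logical_bytes += len(block)
        return refs

    def physical_bytes(self):
        return sum(len(b) for b in self.blocks.values())

store = DedupStore()
# 100 "desktops" each write the same 1 MB golden image: classic VDI duplication.
# The image is built from 256 distinct 4 KB blocks so dedupe happens across
# desktops, not within one image.
image = b"".join(i.to_bytes(4, "big") * (BLOCK_SIZE // 4) for i in range(256))
for _ in range(100):
    store.write(image)

ratio = store.logical_bytes / store.physical_bytes()
print(f"{ratio:.0f}:1")  # prints 100:1 -- duplicate blocks are stored only once
```

Real systems add reference counting, compression, and hardware offload on top of this, but the core mechanism is the same: hash each incoming block at write time and skip the store entirely when the content already exists.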
It’s great to see industry leaders like Gabe Knuth touting the benefits of hyperconvergence when it comes to VDI. We’re excited about the unique benefits that our solution delivers for VDI, and its ability to overcome common VDI challenges, and we love that more and more organizations are taking note of these developments.
If you’d like to learn more about SimpliVity for VDI, check out our recently published third-party validated performance testing with Login VSI, and watch Brian Madden complete the daunting task of shopping for his VDI project in the comedic video below.