The introduction of hyperconvergence causes a major shift in the way IT operates. Reduced technology silos, improved data protection, lower capital expenditures, improved operational efficiency, and less time spent on the IT equipment refresh cycle are all common outcomes seen by SimpliVity customers. Beyond the obvious advantages of a simplified infrastructure, bringing the data closer to the virtual machines (VMs) can also create a performance advantage. How data locality, or the proximity of the VM storage to the VM compute, is managed can make a big difference to the performance of business-critical applications.
Data locality is an architectural challenge unique to hyperconvergence, and one that current hypervisor resource-balancing features are not fully aware of. Locating the data and compute resources of a VM on the same host is a distinct advantage: it minimizes latency and thus delivers peak performance from the storage layer. Hyperconvergence vendors are addressing this challenge in several ways.
Some choose to avoid the problem altogether and instead stripe all data across all nodes. This has the advantage of distributing the processing of data across all nodes when serving data for every VM. It also provides consistent performance no matter where the compute resources for the VM exist, because some percentage of the data — depending on the number of nodes in the cluster and the number of blocks being accessed — will always come across the network. The downside is that this performance is always bound by the performance of the network, and every VM is susceptible to cluster-wide performance or availability impacts caused by issues on a subset of the cluster nodes.
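The network dependence of a fully striped design can be illustrated with a simple model. Assuming data is striped evenly across all nodes and each block is equally likely to live on any node (a simplification, not any vendor's actual layout), roughly (N−1)/N of a VM's reads must traverse the network regardless of where the VM runs:

```python
def remote_read_fraction(num_nodes: int) -> float:
    """Expected fraction of reads served over the network when data is
    striped evenly across all nodes and the VM runs on one of them.
    Simplified model: each block is equally likely to live on any node."""
    if num_nodes < 1:
        raise ValueError("cluster must have at least one node")
    return (num_nodes - 1) / num_nodes

# Locality shrinks as the cluster grows: a 4-node cluster serves ~75%
# of reads remotely; a 16-node cluster serves ~94% remotely.
for n in (2, 4, 8, 16):
    print(n, round(remote_read_fraction(n), 2))
```

The point of the model is that adding nodes makes the network a larger, not smaller, part of every VM's storage path.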
Others have chosen to let VMware Distributed Resource Scheduler (DRS) do its thing and fix the data locality problem under the covers. While this will certainly optimize the CPU and memory resources of the applications in the VMs, it does so without any awareness of where the data lives. This leaves the vendor with one of two choices, both of which cause unpredictable performance impacts.
- Leave the data and compute where they are and simply access the data across the network. Storage performance suffers because every read and write must traverse the network.
- Migrate the data to “follow the VM.” This is no small task, since all of the data must be read from the original node, copied across the network, and written to the new node. It is a resource-intensive operation that consumes CPU, network I/O, and storage I/O on both nodes; these are limited resources that aren’t available to the business applications while the migration runs, and the process can cause the storage subsystem of the hyperconverged node to become a noisy neighbor.
SimpliVity’s Intelligent Workload Optimizer, on the other hand, lets SimpliVity customers use the power of DRS to maximize CPU and memory resources while also giving it full awareness of the location of the data within the cluster. Within a SimpliVity hyperconverged infrastructure, the data for every VM is stored as a complete unit on two separate nodes. To maintain data locality, DRS groups and rules are created that capture the relationship between the VM and the placement of its data across the hosts. These are defined as suggested affinity rules, so DRS keeps VMs on those hosts unless their CPU and memory demands cannot otherwise be satisfied. The two nodes that will house the data are chosen intelligently at the moment the VM is created, based on current capacity and a history of I/O performance on each node. Once this placement is determined, the proper DRS groups are updated, and a regularly scheduled task ensures that the rules and group memberships are maintained in the face of user-initiated changes.
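The placement-and-reconciliation flow described above can be sketched in a few lines. Everything here is a hypothetical illustration: the class names, the scoring (prefer free capacity, then low historical I/O latency), and the drift check are assumptions for clarity, not SimpliVity's actual logic or any VMware API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    free_capacity_gib: float
    avg_io_latency_ms: float  # rolling history of observed I/O latency

@dataclass
class Cluster:
    nodes: list
    affinity: dict = field(default_factory=dict)  # intended hosts per VM

    def place_vm(self, vm_name: str) -> list:
        """Pick two nodes for the VM's data, favoring free capacity and
        low historical I/O latency, then record the suggested-affinity
        hosts so DRS prefers them for the VM's compute (illustrative
        scoring, not SimpliVity's actual algorithm)."""
        ranked = sorted(
            self.nodes,
            key=lambda n: (-n.free_capacity_gib, n.avg_io_latency_ms),
        )
        chosen = [n.name for n in ranked[:2]]
        self.affinity[vm_name] = chosen
        return chosen

    def reconcile(self, actual_groups: dict) -> dict:
        """Scheduled task: report VMs whose DRS group membership has
        drifted from the intended placement (e.g. user-initiated edits)."""
        return {vm: hosts for vm, hosts in self.affinity.items()
                if actual_groups.get(vm) != hosts}
```

Because the rules are suggestions rather than hard constraints, DRS remains free to move a VM off its data nodes under CPU or memory pressure; the periodic reconciliation then restores the intended groups.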
This approach allows DRS to work as it was intended while minimizing the separation of VM compute and storage resources. By giving VMware’s existing tools this level of awareness, we not only integrate with the vSphere environment with no additional complexity, but also allow DRS to load balance VMs for storage performance alongside CPU and memory performance. By choosing a lower-impact approach to data locality, SimpliVity maintains higher availability of resources for the business applications in our customers’ data centers. See this demo video to get a full appreciation of what it can do.