Fifteen years ago, a standard hard drive held about 36GB and delivered roughly 150 IOPS. Today, hard drives hold over 6TB of capacity… and they still deliver roughly 150 IOPS.
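To see just how lopsided that trade has become, here's a quick back-of-the-envelope calculation using the round numbers above. The exact figures vary from drive to drive, so treat this purely as an illustration:

```python
# Rough IOPS-per-gigabyte comparison: a circa-2000 drive vs. a modern one.
# The capacities and IOPS figures are the round numbers cited above,
# not the specs of any particular drive model.

old_capacity_gb = 36        # ~36 GB drive, fifteen years ago
new_capacity_gb = 6_000     # ~6 TB drive today
iops = 150                  # spinning-disk IOPS have barely moved

old_density = iops / old_capacity_gb   # ~4.2 IOPS per GB
new_density = iops / new_capacity_gb   # ~0.025 IOPS per GB

print(f"IOPS per GB, then: {old_density:.2f}")
print(f"IOPS per GB, now:  {new_density:.3f}")
print(f"Performance per unit of capacity fell roughly {old_density / new_density:.0f}x")
```

In other words, performance per gigabyte has fallen by two orders of magnitude even as the drives themselves have grown.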
You see, the data problem isn't just one of capacity anymore. Data growth is certainly an issue, with IDC predicting that the amount of data in the world will grow from 4.4 zettabytes in 2013 to 44 zettabytes by 2020, but it isn't the problem that should be at the forefront of IT professionals' minds. The most troubling issue for those in IT is that growing data volumes and larger hard drives don't translate into better performance. Read and write speeds have not increased at anywhere near the rate that disk capacity has, and that gap creates a performance bottleneck.
Think of it like drinking your favorite iced coffee through a straw. The straw works great when you're sipping from a standard cup. But what happens on that Monday morning when a single cup of iced coffee just isn't enough? What if you need a whole barrel of caffeine, and you need it ASAP? You pour the coffee into a barrel, so there's far more liquid in a far bigger container, but you're still drinking through the same straw. The result? You're not drinking the coffee any faster, you're not getting caffeinated any quicker, and you're probably pretty frustrated, tired, and inefficient.
The same is true in the data center. Just as the straw couldn't keep up with the barrel of iced coffee needed to make a Monday productive, storage performance hasn't kept up with the capacity now available. There is a cure, however: the key to solving the data problem is making data truly efficient.
Data efficiency technologies were originally designed to tackle the problem of managing rapidly growing amounts of data. Now that the primary concern for IT is no longer capacity limitations but performance ones, data efficiency technologies like deduplication, compression, and optimization need to be adapted to make sense in this new environment.
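To make the idea concrete, here is a minimal toy sketch of what block-level deduplication and compression do: identical blocks are stored only once, and each stored block is compressed. The 4KB block size, SHA-256 fingerprinting, and zlib compression are assumptions for the example, not a description of any vendor's implementation:

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # assumed block size for this example

class DedupStore:
    """Toy inline dedup + compression: each unique block is stored once, compressed."""

    def __init__(self):
        self.blocks = {}   # fingerprint -> compressed block
        self.refs = []     # ordered fingerprints making up the logical "volume"

    def write(self, data: bytes) -> None:
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.blocks:              # only never-seen data costs capacity
                self.blocks[fp] = zlib.compress(block)
            self.refs.append(fp)                   # duplicates cost only a reference

    def read(self) -> bytes:
        return b"".join(zlib.decompress(self.blocks[fp]) for fp in self.refs)

    def stored_bytes(self) -> int:
        return sum(len(b) for b in self.blocks.values())

# Example: a highly redundant 4 MB payload written three times consumes space only once.
store = DedupStore()
payload = bytes(range(256)) * (4 * 1024 * 1024 // 256)
for _ in range(3):
    store.write(payload)
print(f"logical bytes written:  {3 * len(payload)}")
print(f"physical bytes stored:  {store.stored_bytes()}")
assert store.read() == payload * 3
```

The point of doing this work inline, before data ever lands on disk, is that every downstream operation (storing, moving, replicating, protecting) handles far fewer bytes.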
Herein lies the most prominent data center conundrum: how do you ensure peak application performance and predictability, cost-effectively, in the post-virtualization world, when IOPS requirements have grown dramatically while hard drive IOPS have grown only incrementally?
To combat stagnant performance rates, many companies are looking to flash storage as a solution. Flash storage is useful for removing the performance bottleneck, but it’s expensive and it’s not suitable for all portions of the data lifecycle.
The use of flash storage is part of the solution to the data problem, but it isn't a cure for the entire lifecycle of data. That's why SimpliVity uses flash/SSDs where it makes sense. SimpliVity's technology delivers deduplication, compression, and optimization for all data globally, across all tiers of the data lifecycle, and it does so inline. This makes data much more efficient to store, move, track, and protect. By making data efficient from the very outset and across its entire lifecycle, the data problem is solved.