In racing, the success of a Formula 1 team ultimately centres on the car: painstakingly designed, tested and assembled so that every element works as smoothly and quickly as possible in conjunction with the others. But in the wrong hands, even the most powerful and aerodynamic car won’t take the right lines through the corners or accelerate quickly enough. The driver is another critical success factor: he understands the car and how it works, and knows the circuit he’ll be driving.

It’s the same in your business: much of its success today depends on its IT data centre infrastructure and on understanding how that infrastructure will perform. But what drives the data centre, dictating how fast it can process and recall data, is the storage infrastructure. And because storage is so vital to the smooth running of the entire data centre, it’s crucial to get it right. We’ve all heard horror stories of (and perhaps experienced ourselves) bottlenecks leading to days of delay when backing up data or running even the most basic applications. And that’s often because the storage isn’t the right fit for the application workloads it’s expected to handle.

Flash versus cloud

When buying storage, customers are told that some solutions are better suited to certain environments. For example, because it is highly flexible and scalable, cloud-based storage works well for companies that want to provision additional capacity quickly and cheaply as their data volumes grow, whereas all-flash has been the default choice for organisations that need real-time access to their data. Until recently, though, that’s where the advice stopped. There was no definitive way to tell whether the storage you’d selected could handle your application workloads, so there was a reliance on guesswork, followed by crossed fingers during the implementation phase. Of course, it was during implementation that many of the glitches were discovered. But by then it was too late – money had changed hands and there was no going back.

And we know that any storage problem has a knock-on effect on the data centre, whether it leaves the storage manager unable to analyse data in real time or unable to access that data at all. It’s like putting a rally car on an F1 circuit – even if that car is state-of-the-art, it lacks the features required to win.

The industry is realising that supplying storage systems that may not be capable of handling customers’ workloads is not the best way to work, and has started to develop new tools to help storage managers find the right solution. These tools characterise workload profiles not just in terms of KPIs such as latency, throughput and IOPS over time, but also key I/O metrics such as read/write ratios, data/metadata command mixes and random/sequential ratios.
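To make that concrete, here is a minimal sketch – purely illustrative, not any specific vendor’s tool – of how such a profile might be derived from a raw I/O trace. The IORecord fields and the rule used to detect sequential I/O are assumptions made for the example.

from dataclasses import dataclass
from statistics import quantiles

@dataclass
class IORecord:
    op: str            # "read" or "write"
    offset: int        # starting byte offset on the device
    size: int          # bytes transferred
    latency_us: float  # completion latency in microseconds
    timestamp: float   # submission time in seconds

def profile(trace: list[IORecord]) -> dict:
    # Wall-clock span of the trace (guard against a zero-length window).
    duration = max(r.timestamp for r in trace) - min(r.timestamp for r in trace) or 1.0
    reads = sum(1 for r in trace if r.op == "read")
    writes = len(trace) - reads
    # Treat an I/O as sequential if it starts exactly where the previous one ended.
    ordered = sorted(trace, key=lambda r: r.timestamp)
    sequential = sum(1 for prev, cur in zip(ordered, ordered[1:])
                     if cur.offset == prev.offset + prev.size)
    random_ios = len(trace) - 1 - sequential
    latencies = sorted(r.latency_us for r in trace)
    return {
        "iops": len(trace) / duration,
        "throughput_mb_s": sum(r.size for r in trace) / duration / 1e6,
        "read_write_ratio": reads / max(writes, 1),
        "random_sequential_ratio": random_ios / max(sequential, 1),
        "p95_latency_us": quantiles(latencies, n=20)[18],  # 95th-percentile latency
    }

Commercial profiling tools gather these figures continuously from the host or the fabric rather than from a static trace, but the metrics themselves are the ones listed above.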

These tools give storage managers a level of control they’ve never had before by offering real insight into the storage capacity and capabilities they actually need. It’s now clearer which hardware and software will be best suited to their applications, and which will lead to disappointing results. And although vendors have always purported to recommend the most effective technology for each customer, the onus is now on them to do more. They can no longer rely on ‘in the lab’ results to do the selling for them. Sure, an array might be able to deliver millions of IOPS in a perfect environment, but how does it fare when faced with an unusual workload, one that leans heavily on compression and deduplication? It’s this question that has encouraged vendors to use workload profiling tools at the proof-of-concept (PoC) stage to make sure their kit is what’s really required.
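As a rough illustration of what ‘leaning on compression and deduplication’ can mean in practice, the sketch below estimates how far a sample data set would reduce under fixed-block deduplication and block-level compression. The 4KB block size, SHA-256 hashing and zlib are assumptions chosen for the example, not how any particular array implements data reduction.

import hashlib
import zlib

def data_reduction_estimate(path: str, block_size: int = 4096) -> dict:
    # Crude stand-in for the data-reduction analysis a profiling tool might run at PoC stage.
    seen_hashes = set()
    raw_bytes = 0
    unique_bytes = 0
    compressed_bytes = 0
    with open(path, "rb") as f:
        while block := f.read(block_size):
            raw_bytes += len(block)
            digest = hashlib.sha256(block).digest()
            if digest not in seen_hashes:        # fixed-block dedupe estimate
                seen_hashes.add(digest)
                unique_bytes += len(block)
                compressed_bytes += len(zlib.compress(block))
    return {
        "dedupe_ratio": raw_bytes / max(unique_bytes, 1),
        "compression_ratio": unique_bytes / max(compressed_bytes, 1),
        "overall_reduction": raw_bytes / max(compressed_bytes, 1),
    }

# Example (hypothetical path): print(data_reduction_estimate("/data/poc_sample.bin"))

Two data sets with identical IOPS and block sizes can reduce very differently, which is exactly why ‘in the lab’ numbers alone are a poor predictor of real-world performance.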

But that change might not be happening quickly enough. Resellers are already relying on workload analysis tools to compare systems from different vendors on behalf of their customers. And the tools themselves are evolving, with a growing number of free products that customers can use to see how their existing storage is performing.

Guesswork? How droll…

This opens up a whole new level of competition within the industry. And I can see that, not too far into the future, the idea of selecting storage based on guesswork and some overprovisioning will be laughable. Vendors and the channel will become more accountable: they will have to show that they have chosen the correct storage system for their customers’ workloads. And customers will have much more control over their purchasing decisions.

The effect of I/O profiling on the data centre will be wide-reaching. Eliminating overprovisioning, underprovisioning and the ongoing issues they cause will free up budget and staff time for the wider data centre. And by reducing those pain points, the entire data centre should run more smoothly. It’s like finding the perfect driver for your F1 car, knowing the layout of the track and understanding the weather conditions. A winning combination.

Chris James is EMEA marketing director at Virtual Instruments