Today, the total capacity of all the hard drives in the world comes to a little under a zettabyte, but according to Cisco, we’ll need to store approximately 3.5 Zbytes of data by 2019. Even if these predictions don’t come to pass, it is obvious we’ll need a lot more capacity in order to keep up with the social changes caused by technological progress.
We’ll also need faster types of storage for specific workloads – the kind that leave flash in the dust. We’ll need to decide which of the emerging standards will unite the industry, and which standards need to be killed off. We’ll have to manage our data better and deploy systems that can track every single file that constitutes a zettabyte. To find out how the storage landscape will change, we’ve decided to ask three leading experts.
Ron Bianchini, co-founder and CEO of hybrid storage startup Avere Systems, says that most organizations continue deploying NAS boxes as individual islands of capacity. Instead, we’ll need nearly unlimited storage pools located in the cloud and automated through software.
“Rather than having these islands of capacity locked behind the protocol engine, like NFS or CIFS, the idea would be to have a large pool of capacity, a large object store, and then put protocol engines in front of it for the different protocols you need, so that you have a big scalable storage pool, but then you have the protocols providing the functionality. For data reliability, for disaster recovery, we want geo-dispersed storage infrastructure,” Bianchini added.
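The architecture Bianchini describes can be sketched in a few lines: one shared pool of objects, with separate protocol engines layered in front of it. This is a minimal illustration only; all class and method names here are invented, not any vendor's actual API.

```python
# One shared object store, multiple protocol "engines" in front of it.
# All names are hypothetical, for illustration only.

class ObjectStore:
    """A single flat pool of capacity, keyed by object ID."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]


class FileProtocolEngine:
    """Presents the pool through a file-style (NFS/CIFS-like) interface,
    mapping paths onto object keys."""
    def __init__(self, store):
        self._store = store

    def write_file(self, path, data):
        self._store.put("file:" + path, data)

    def read_file(self, path):
        return self._store.get("file:" + path)


class S3ProtocolEngine:
    """Presents the same pool through a bucket/key-style interface."""
    def __init__(self, store):
        self._store = store

    def put_object(self, bucket, key, data):
        self._store.put(f"s3:{bucket}/{key}", data)

    def get_object(self, bucket, key):
        return self._store.get(f"s3:{bucket}/{key}")


# Both engines share one scalable pool; the protocols only provide
# the access semantics.
pool = ObjectStore()
nfs = FileProtocolEngine(pool)
s3 = S3ProtocolEngine(pool)
nfs.write_file("/exports/report.csv", b"id,value")
s3.put_object("backups", "report.csv", b"id,value")
```

The point of the pattern is that capacity scales in one place (the object store), while each protocol engine stays a thin, replaceable front end.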
One issue complicating the adoption of such an approach is privacy regulation – quite a few governments around the world have realized that data about their citizens can be dangerous in the wrong hands and are now demanding that such data stays within the country’s borders. That’s great news for data center builders, but it is a headache for corporations that want to build storage pools that stretch across borders.
In order to solve this issue, we’ll need to invest in copy data management – something that has been around forever but is especially relevant now, thanks to legal tools such as the EU’s ‘Right to be Forgotten,’ which will require digital service providers to delete customer data from their systems on request.
The Right to be Forgotten will take effect across the EU in May 2018 as part of the General Data Protection Regulation, yet according to a recent Compuware-sponsored survey of CIOs across Europe and the US, 30 percent of businesses are not confident they could track down and erase all of the data that relates to an individual customer. That’s where copy data management software comes in – it tracks and minimizes the number of copies to simplify regulatory compliance, while also ensuring there are enough copies in enough locations for disaster recovery.
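The core idea behind copy data management is simple to state: know where every copy of a customer's data lives, so an erasure request can be honored in full and disaster-recovery copy counts can still be verified. The sketch below is illustrative only; `CopyRegistry` and its methods are hypothetical, not a real product's API.

```python
# Hypothetical copy-tracking registry: maps each customer to the set of
# locations holding a copy of their data.

from collections import defaultdict

class CopyRegistry:
    def __init__(self, min_copies=2):
        self.min_copies = min_copies          # DR policy: copies required
        self._copies = defaultdict(set)       # customer_id -> {locations}

    def record_copy(self, customer_id, location):
        self._copies[customer_id].add(location)

    def locations_to_erase(self, customer_id):
        """Every location holding this customer's data; all of them must
        be purged to satisfy a right-to-be-forgotten request."""
        return sorted(self._copies.get(customer_id, set()))

    def erase(self, customer_id):
        """Drop the customer from the registry, returning the locations
        that now need physical deletion."""
        return self._copies.pop(customer_id, set())

    def under_replicated(self):
        """Customers with fewer copies than the DR policy requires."""
        return [c for c, locs in self._copies.items()
                if len(locs) < self.min_copies]


reg = CopyRegistry(min_copies=2)
reg.record_copy("cust-42", "eu-west/array1")
reg.record_copy("cust-42", "eu-west/backup")
reg.record_copy("cust-7", "us-east/array2")
```

Real products layer deduplication, snapshots and audit trails on top, but the compliance question always reduces to this mapping: can you enumerate, and then delete, every copy?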
Form follows function
Dave Wright, general manager of SolidFire – an all-flash storage company that was acquired by NetApp for $870m in cash, with the deal closing earlier this year – offers his own insight into the future of storage.
Wright told DatacenterDynamics he believes in a future where the flash industry will standardize around a 2.5in SSD form-factor that supports NVM Express (NVMe) – a relatively new storage interface that will likely replace the SATA we know and love. Wright also believes there’s a bright enterprise future ahead for the M.2 standard for flash storage.
Today, this tiny SSD form-factor is mostly seen as part of high-end ultrabooks – devices so thin they can’t possibly fit a 2.5in drive – but it could be a boon to server designers since it also supports NVMe.
“The other big trend we are seeing is the move to storage systems being primarily software-based, with industry-standard hardware,” Wright said. “And there are still people who are focused on proprietary storage appliances but that [trend] is really going to drive the need to adopt standardized form-factors in the marketplace because that’s what customers are moving towards.”
That could spell bad news for EMC’s DSSD D5 and SanDisk’s InfiniFlash – two of the recently launched arrays that have ditched highly popular legacy drives in favor of proprietary flash formats. Sure, these look cool and perform like champions, but they also prevent customers from finding a better deal on their flash. Considering the rate at which the price per GB has been falling, this might prove to be their undoing.
As the storage market realigns itself around software, today’s dominant players will have to adapt or risk losing their dominance. “NetApp was a very appliance-centric company, and even though there was a lot of software IP, it was all delivered in proprietary hardware,” Wright said. “They have really shifted as a company over time towards delivering more of their technology as standard software. At SolidFire, it has always been the case, and we have both appliance and software-only versions of our product.”
NetApp is not the only traditional storage vendor that knows where the wind is blowing: last year, EMC donated source code for the ViPR storage controller to the open-source community, the company’s first ever open-source contribution of a commercial product. So what does it do? ViPR, and its open-source twin CoprHD, abstract the capacity of disparate arrays into a single storage pool and support a wide range of third-party hardware.
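The controller pattern that ViPR and CoprHD embody can be caricatured in a few lines: heterogeneous arrays registered behind one pool, with provisioning routed to whichever backend has free capacity. This is a toy sketch under invented names, not CoprHD's actual API or placement logic.

```python
# Toy storage-abstraction controller: pool capacity from dissimilar
# arrays and place new volumes with a first-fit policy.

class Array:
    """One backend array from any vendor, tracked only by free space."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb

    def provision(self, size_gb):
        self.free_gb -= size_gb
        return f"{self.name}:vol-{size_gb}gb"


class StoragePool:
    """Abstracts registered arrays into a single pool of capacity."""
    def __init__(self):
        self._arrays = []

    def register(self, array):
        self._arrays.append(array)

    def total_free_gb(self):
        return sum(a.free_gb for a in self._arrays)

    def create_volume(self, size_gb):
        # First-fit placement across whatever hardware is registered.
        for a in self._arrays:
            if a.free_gb >= size_gb:
                return a.provision(size_gb)
        raise RuntimeError("pool exhausted")


pool = StoragePool()
pool.register(Array("array-a", 100))
pool.register(Array("array-b", 500))
```

The consumer of the pool never sees which vendor's box a volume landed on; that indirection is what lets the software layer, rather than the appliance, define the product.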
“Most of the market is still buying integrated appliance solutions, and I think vendors will continue to sell a high volume of those solutions,” Wright said. “But the appliances, much like the SolidFire model, will be built around off-the-shelf hardware, so you can use the same software base whether a customer wants to buy an appliance or whether they want to provide their own hardware.”
And then there’s the question of the all-flash data center – an approach to infrastructure that seemed to be the Holy Grail of storage just a few years ago. “Anything where you actually care about any level of performance at all is going to flash very rapidly,” Wright said. “And, really, it’s driven by the falling cost of flash, the increasing capacity of flash and, quite frankly, the dramatic benefits of all-flash solutions over hybrid solutions when it comes to performance, consistency, reliability, space, power, cooling, and on and on down the line. I have many customers who say, ‘At least when it comes to primary storage, I’m only buying flash, I’m not buying any more disk.’”
Gary Lyng, senior director of marketing for data center business at Western Digital – a company that was threatened by the flash revolution until it bought SanDisk, the world’s third-largest manufacturer of flash memory – offers the final piece of pragmatic thinking. According to Lyng, innovation in storage remains limited by the speed of manufacturing and the length of the hardware refresh cycles.
“People always look towards the evolution of one sort of media, saying ‘X is going to replace Y,’ but the reality is, there is investment, there’s also the rate of fabrication facilities, and although flash is very powerful in terms of additional performance, lower power and lower cooling, there are a lot of new technologies coming out down the line. You’ve probably seen some of the recent SanDisk announcements around ReRAM and the different approaches there.”
Resistive RAM (ReRAM) represents a new class of storage class memory (SCM) devices that promise to be thousands of times faster than flash, while offering higher capacities and much lower prices than DRAM. Other types of SCM currently in development include Ferroelectric RAM (FeRAM), Magnetic RAM (MRAM) and Phase Change Memory (PCM).
Obviously, the industry has no need for several incompatible memory types to do the same job, and the number of contenders goes to show just how hotly contested this field is. Expect it to grab the headlines as the next big format war: think VHS versus Betamax, or Blu-Ray versus HD DVD. But this one will not be fought in living rooms and video stores; instead, it will be fought in the data center.
“One of the things you’ll see more of – especially from Western Digital since we own the fabrication facilities and are the world’s largest [storage] provider – is not only innovating within the media but also up the stack,” Lyng said. “So we’ve got vertical integration from the individual firmware on drives, whether SSD or HDD, then integrating it with scale-out object-based storage and unified file and object storage, and adding open APIs and tying them into that stack.”