When storage area networks (SANs) first hit the IT market, the possibilities seemed boundless. Instead of procuring and managing islands of storage, IT teams could treat storage as a shared pool and use those resources far more efficiently.

Of course, those SANs are now aging and becoming problematic in modern data centers. Traditional shared external storage just wasn't designed for today's virtualized infrastructure. For one, SAN management requires IT teams to provision individual LUNs to map storage volumes to virtual machines. Those configurations have to be updated constantly to ensure availability and performance. There are also important hardware considerations to manage when using a SAN with virtualized servers, and plenty of gotchas to avoid, like storage space getting trapped in a VM when it could be used elsewhere.
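
To make that overhead concrete, here is a minimal Python sketch of the bookkeeping a SAN imposes: every VM needs its own LUN carved out and mapped, and capacity committed to one VM's LUN can't be reclaimed for another. All class and method names here are purely illustrative, not any vendor's actual API:

```python
# Hypothetical model of per-VM LUN provisioning on a traditional SAN.
# Names are illustrative only; no real array API is implied.

class SanArray:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.luns = {}  # vm_name -> provisioned LUN size in GB

    def provision_lun(self, vm_name: str, size_gb: int) -> None:
        """Carve out a dedicated LUN and map it to one VM."""
        free = self.capacity_gb - sum(self.luns.values())
        if size_gb > free:
            raise RuntimeError(f"Only {free} GB free; cannot provision {size_gb} GB")
        self.luns[vm_name] = size_gb

    def free_capacity_gb(self) -> int:
        # Unused space inside an over-provisioned LUN is invisible here:
        # once committed to a VM, it is "trapped" until the LUN is
        # manually resized or the data is migrated.
        return self.capacity_gb - sum(self.luns.values())

san = SanArray(capacity_gb=1000)
san.provision_lun("db-vm", 600)   # sized for headroom it may never use
san.provision_lun("web-vm", 300)
print(san.free_capacity_gb())     # 100 GB left, even if db-vm uses only 200
```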


Lots of companies are still running these aging SANs as part of legacy infrastructures that were never designed to support virtualization, flash storage, or cloud computing, and they aren't adapting to modern data center needs. SAN-based storage systems worked well in their day, but they don't anymore. And companies that have built new IT infrastructures more recently have skipped the SAN altogether in favor of cloud computing.

What’s next for data center storage?

Today, storage is solving fundamentally similar problems to those of 15 or so years ago. Data growth has exploded since 2000, though, and users need performance on many more devices and for many more applications. Storage has to modernize to meet those needs.

That's happening in a few different ways. First, with the increasing adoption of hyperconvergence, storage is returning to the server. Now, though, it's managed as a shared resource pool across multiple hyperconverged infrastructure nodes within a single data center, as well as across data centers. That makes the infrastructure much more flexible and helps save on cost. One of the SAN's downsides is that it sucks up both CAPEX and OPEX budgets, first in the huge upfront purchase and then continually in maintenance. Storage in a hyperconverged infrastructure, by contrast, can be purchased and scaled out incrementally, one atomic x86 building block at a time, rather than as a monolithic storage infrastructure built on a proprietary hardware platform. That saved budget can be put to good use on the many interesting, innovative, and business-focused data center technologies emerging in the market.
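
As a rough illustration of that scale-out model (a sketch under assumed semantics, not any product's actual behavior), capacity can be modeled as a pool that grows one node at a time:

```python
# Hypothetical sketch of incremental, node-at-a-time scale-out.
# Contrast with a SAN's large upfront purchase: here, capacity and
# compute grow together by adding identical x86 building blocks.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    storage_tb: float
    cpu_cores: int

@dataclass
class HciCluster:
    nodes: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        """Scaling out is one operation: join a node, and its local
        storage is absorbed into the shared pool."""
        self.nodes.append(node)

    @property
    def pooled_storage_tb(self) -> float:
        return sum(n.storage_tb for n in self.nodes)

cluster = HciCluster()
cluster.add_node(Node("node-1", storage_tb=10, cpu_cores=32))
# Six months later, demand grows: buy one more block, not a new array.
cluster.add_node(Node("node-2", storage_tb=10, cpu_cores=32))
print(cluster.pooled_storage_tb)  # 20.0
```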

Second, modern data center storage is getting smarter. Hyperconverged systems offer a more intelligent, VM-centric storage architecture, a sharp contrast to the older storage arrays conceived before the virtualization era. That intelligence extends to capabilities that would normally require add-on tools, such as data efficiency and data protection, built directly into the platform. This eliminates the need for point hardware appliances and software, such as backup deduplication appliances and VM-centric backup tools.
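
To see why built-in data efficiency can displace a separate deduplication appliance, here is a minimal sketch of inline, content-addressed deduplication. Fixed 4 KB blocks and SHA-256 fingerprints are assumptions for illustration; real systems vary:

```python
# Minimal sketch of inline block-level deduplication: identical blocks
# are stored once and referenced by their content hash.

import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size

class DedupStore:
    def __init__(self):
        self.blocks = {}  # sha256 digest -> block bytes (stored once)
        self.files = {}   # file name -> ordered list of digests

    def write(self, name: str, data: bytes) -> None:
        digests = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            # Inline dedup: only never-before-seen content consumes space.
            self.blocks.setdefault(digest, block)
            digests.append(digest)
        self.files[name] = digests

    def read(self, name: str) -> bytes:
        return b"".join(self.blocks[d] for d in self.files[name])

store = DedupStore()
payload = b"x" * BLOCK_SIZE * 100          # 100 identical blocks
store.write("vm-backup-monday", payload)
store.write("vm-backup-tuesday", payload)  # second copy costs ~nothing
print(len(store.blocks))                   # 1 unique block stored
assert store.read("vm-backup-monday") == payload
```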

To move toward this better data center vision, it makes sense to phase out SANs gradually in favor of a modern architecture based on hyperconvergence. IT managers can take a VM-centric approach to infrastructure, and really to the entire data center, and look for efficiency features in their infrastructure technology tools. It's also important to understand the value of x86 servers and their role in scale-out data center infrastructure approaches. Old SANs served their purpose, but new methods based on hyperconverged infrastructure have arrived to relieve them.

Jesse St. Laurent is vice president of product strategy at SimpliVity.

Learn more about hyperconvergence in our Open Research section or register now for our free webinar.