Flash storage is growing in popularity as large enterprises deploy more of it in critical applications and core data center workloads. This should come as a surprise to no one: increased performance, reduced operational complexity, and greater reliability are benefits every company wants to realize. Combine those advantages with flash’s steadily falling price point, and it’s no wonder many enterprises are ready to embrace the technology fully.
The performance and reliability advantages of flash are well documented, but one thing many people overlook is the impact of flash on the network. As organizations move from HDDs to SSDs, many network administrators view an all-flash future with dread.
Latency comes to the fore
With hard disk drives, network latency is practically irrelevant: an HDD read or write takes ten times longer or more than a trip across the network. With flash media, a read or write takes roughly as long as a full network hop, so the network suddenly shows up in system responsiveness as added latency.
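To make that arithmetic concrete, here is a minimal sketch using illustrative, order-of-magnitude figures; the specific latencies are assumptions for illustration, not measurements from any particular hardware or network.

```python
# Illustrative, order-of-magnitude latencies (assumptions, not measurements).
HDD_READ_US = 5000     # ~5 ms for a random HDD read (seek + rotation + transfer)
FLASH_READ_US = 100    # ~100 us for a flash (SSD) read
NETWORK_HOP_US = 100   # ~100 us for one data center network traversal

def remote_read_latency(media_us: float, hop_us: float = NETWORK_HOP_US) -> float:
    """Total latency for a read that must cross the network to reach its media."""
    return media_us + hop_us

# With HDDs, the network hop is noise: about 2% of the total.
hdd_total = remote_read_latency(HDD_READ_US)
print(f"HDD remote read:   {hdd_total:>6.0f} us "
      f"(network share: {NETWORK_HOP_US / hdd_total:.0%})")

# With flash, the same hop is half the total: latency roughly doubles.
ssd_total = remote_read_latency(FLASH_READ_US)
print(f"Flash remote read: {ssd_total:>6.0f} us "
      f"(network share: {NETWORK_HOP_US / ssd_total:.0%})")
```

With these assumed numbers, the hop adds about 2% to an HDD read but doubles a flash read, which is why the network only becomes visible once the media gets fast.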
New technologies are only compounding the problem of network strain in many cases. One example, hyperconverged infrastructure, integrates the key components of a traditional IT stack into a single solution, creating a simpler, more efficient IT environment. Enterprises around the world are increasingly turning to hyperconverged infrastructure, and vendors everywhere are starting to offer all-flash as an option for their solutions. But customers may not realize how hyperconverged architectures can magnify the strain on the network.
In most hyperconverged environments, VM data is written across a cluster to ensure resiliency in case of a hardware failure. Each vendor approaches this process differently. Some distribute small portions of the data widely throughout the hyperconverged systems; think of this as the peanut butter approach, because small chunks of data are spread thinly across the environment. Others keep complete copies of the data together but store them on multiple systems, an approach known as full data localization.
Spread or localized?
While both approaches increase data resiliency, they have drastically different implications for the network. The peanut butter approach places tremendous pressure on the network by increasing the amount of traffic moving between systems within the data center, referred to as “east-west” traffic. The problem is that most networks aren’t designed to handle this much east-west traffic; traditional data centers are optimized for data traveling from edge to core, referred to as “north-south” traffic. Depending on how a hyperconverged solution distributes data, it may place an enormous burden on the network. On top of that, the peanut butter approach can counteract some of the speed and performance benefits of all-flash. A network hop is roughly ten times faster than an HDD access, so it used to be negligible; but it takes about as long as a flash access, so in a distributed SSD setup every remote read crosses the network and latency roughly doubles.
Full data localization copies a VM’s entire dataset to selected systems, as opposed to the distributed approach of taking chunks of the VM and spreading them across the entire hyperconverged configuration. Because the data a VM needs is always local, reads require no network travel, which reduces east-west traffic, eliminates the network bottleneck, and puts far less strain on the network (and on network admins).
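To illustrate the difference between the two approaches, here is a toy model; it is not any vendor’s actual placement algorithm, and the node count and latencies are assumptions for illustration. It estimates how often a read issued on a VM’s host must cross the network under each strategy, and what that does to average read latency.

```python
# A toy model of the two placement strategies (assumptions, not any vendor's
# actual algorithm). A VM's data is split into chunks; we estimate how often
# a read issued by the VM's host must cross the network.

FLASH_READ_US = 100    # assumed flash read latency
NETWORK_HOP_US = 100   # assumed single-hop network latency

def expected_read_latency_us(remote_fraction: float) -> float:
    """Average read latency given the fraction of reads served remotely."""
    local = (1 - remote_fraction) * FLASH_READ_US
    remote = remote_fraction * (FLASH_READ_US + NETWORK_HOP_US)
    return local + remote

nodes = 8

# "Peanut butter": chunks spread evenly across all nodes, so only 1/nodes of
# the data sits on the VM's own host and most reads go east-west.
distributed_remote = (nodes - 1) / nodes

# Full data localization: a complete copy lives on the VM's host, so reads
# stay local; the network is used mainly for replication writes.
localized_remote = 0.0

for name, frac in [("distributed", distributed_remote),
                   ("localized", localized_remote)]:
    print(f"{name:>11}: {frac:.0%} remote reads, "
          f"avg read latency ~{expected_read_latency_us(frac):.0f} us")
```

Under these assumptions, an eight-node distributed cluster serves 88% of reads remotely and nearly doubles average read latency, while localized placement keeps reads at native flash speed; the real-world numbers will differ, but the direction of the effect is the point.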
Performance is always limited by the weakest link. For years, hard drives were that weakest link, but now the network is quickly becoming the bottleneck inhibiting speed and performance. IT leaders, and particularly network administrators, will face the challenge of choosing a flash architecture that limits network strain.
Jesse St Laurent is the vice president of product strategy at SimpliVity.