A while ago we talked about a number of areas where data center storage technology is changing.
Changing consumption habits and cloud applications are driving demand for data centers, and reshaping data center configurations as well. On top of that, storage hardware is evolving, with new technology such as U.2, NVMe and PCIe bringing lower power consumption, a smaller physical rack footprint and, of course, much improved performance.
SSDs are mainstream
The majority of new consumer computers are now sold with SSD technology, so it’s fair to say that SSDs are well on their way to replacing hard disks as the world’s dominant data storage medium. This is going to continue to have a big impact on data centers everywhere.
We currently see data centers switching to NVMe-based SSDs especially where five- or six-nines SLAs are required for mission-critical applications. Redundancy is key, and NVMe drives featuring large DRAM caches are able to deliver consistent QoS (long-term performance stability).
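For context, "five nines" and "six nines" translate into very little permitted downtime per year. A quick back-of-the-envelope calculation (the availability percentages are the standard definitions, not figures from a specific vendor):

```python
# Allowed annual downtime implied by "five-nines" and "six-nines" SLAs.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def allowed_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year permitted at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, availability in [("five nines", 0.99999), ("six nines", 0.999999)]:
    print(f"{label}: {allowed_downtime_minutes(availability):.2f} min/year")
```

Five nines allows only around five minutes of downtime a year, and six nines roughly half a minute, which is why redundancy and hot-swap serviceability matter so much at this tier.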
Most systems that implement SATA or SAS SSDs connect them through hardware-based RAID controllers. NVMe, by contrast, connects over PCI Express lanes, which inherently provide faster transfer speeds, and typically relies on software-defined RAID profiles instead.
Cheap consumer SSDs don't meet data center demands
As of 2020, the largest barrier to switching to NVMe is the upfront cost. While the price of the flash itself is reaching parity between SATA and NVMe, the required hardware changes add to the upfront cost. This is set to change slowly over time.
But consumer NVMe SSDs may not be suitable for use in the data center. Today’s M.2 NVMe drives aren’t typically hot-pluggable, nor do they have an attachment for a carrier case or bracket to facilitate easy removal. SATA has remained dominant partly because failed drives can be swapped out without powering down an entire server.
This is where the U.2 form factor comes in. U.2 provides the performance of NVMe in the data center but, unlike consumer M.2 drives, is hot-pluggable in front-loading server bays, provided there is both host and OS support.
As both SATA and NVMe have a place in the modern data center, customers need enterprise-grade SSDs based on traditional SATA for the majority of data center installations, as well as a suite of enterprise NVMe products.
Subtle SSD differences
Under the hood, there are subtle differences between various data center SSDs, depending on whether they are intended for read, write or mixed workloads. This affects the physical configuration, such as the number of NAND chips, and the size of the DRAM cache being used, as well as the overprovisioning area. Additional NAND chips allow us to offer higher endurance without sacrificing user capacity, and by using more DRAM cache, the SSD can offer consistent read and write performance with low latency while performing any background operations.
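Endurance and overprovisioning are usually quantified with a couple of simple formulas: drive writes per day (DWPD) over the warranty period gives total terabytes written (TBW), and the overprovisioned share is the raw NAND capacity held back from the user. A rough sketch (the 1.92TB capacity, 3 DWPD rating and five-year warranty below are hypothetical figures, not a specific product):

```python
def tbw(capacity_tb: float, dwpd: float, warranty_years: float = 5.0) -> float:
    """Total terabytes written over the warranty, from a DWPD rating."""
    return capacity_tb * dwpd * 365 * warranty_years

def overprovisioning_pct(raw_gb: float, user_gb: float) -> float:
    """Spare NAND reserved beyond user capacity, as a percentage."""
    return 100 * (raw_gb - user_gb) / user_gb

# Hypothetical 1.92TB mixed-workload drive rated for 3 DWPD over 5 years:
print(f"{tbw(1.92, 3):.0f} TBW")
# 1024GB of raw NAND exposing 960GB to the user:
print(f"{overprovisioning_pct(1024, 960):.1f}% overprovisioning")
```

Adding NAND raises the raw-vs-user ratio, which is one way a vendor can trade spare area for higher endurance without shrinking the advertised capacity.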
What type of SSD you choose depends on your application. Read-centric workloads, typically a web server, streaming, CDN, cloud or any other application where data is written once and read multiple times, will be reading data from an SSD more than writing to it.
Mixed-workload SSDs are ideal for virtualization, databases, OLTP, cloud, a caching tier or any application that has as many writes as reads, or even more. Customers can work out which kind they need either by looking at the typical workload of their application use case or by relying on S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) data.
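As a sketch of the S.M.A.R.T.-based approach, the read/write mix can be estimated from a drive's host read and write counters. Attribute names and units vary by vendor and interface, so the counter names, values and 512-byte sector size below are illustrative assumptions, not real telemetry:

```python
# Estimate a workload's read/write mix from S.M.A.R.T.-style counters.
# Counter names and values here are made up for illustration;
# real drives report vendor-specific attributes and units.
SECTOR_BYTES = 512

smart = {
    "Total_LBAs_Read": 8_000_000_000,
    "Total_LBAs_Written": 2_000_000_000,
}

read_tb = smart["Total_LBAs_Read"] * SECTOR_BYTES / 1e12
written_tb = smart["Total_LBAs_Written"] * SECTOR_BYTES / 1e12
read_share = read_tb / (read_tb + written_tb)

print(f"read {read_tb:.2f} TB, written {written_tb:.2f} TB "
      f"({read_share:.0%} reads)")
```

A heavily read-skewed split like this would point towards a read-centric drive, while something closer to 50/50 (or write-heavy) suggests a mixed-workload model.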
The last difference is in the firmware. Its algorithms are tailored to handle the data flow from host to NAND efficiently, taking the read/write workload into account and making use of the SSD controller and the large DRAM cache, while simultaneously performing background operations such as garbage collection and wear levelling without sacrificing QoS or latency.
SSDs tuned to the demands of a data center should have consistent performance, verified by thorough testing and benchmarking. The ideal performance graph would show a straight line at 100 percent consistency; no drive meets that exactly, but performance shouldn't resemble a fluctuating "sawtooth", or what’s sometimes referred to as a “Christmas tree” pattern, when IO delivery is graphed.
A drive that is not properly tuned may experience big swings in performance. At one point, the drive may deliver 50,000 IOPS, then drop to 20,000 IOPS before bouncing back up to 60,000 IOPS. While the high numbers look excellent on a specification sheet and in sales literature, performance spikes don’t tell the whole performance story.
It's important to have consistency even if it means sacrificing some peak performance. This consistency gives customers a predictable baseline for managing their storage clusters, allowing them to build applications and meet service-level agreements.
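One simple way to quantify the difference between a steady drive and the sawtooth behaviour described above is to compare the worst per-second sample against the mean. The IOPS samples below are illustrative, loosely based on the figures in the text, and this metric is just one possible sketch (vendors typically use percentile-based consistency measurements):

```python
import statistics

def consistency_pct(iops_samples: list[int]) -> float:
    """Worst sample as a share of the mean; 100% is a perfectly flat line."""
    return 100 * min(iops_samples) / statistics.fmean(iops_samples)

steady = [48_000, 50_000, 49_500, 50_500, 49_000]
sawtooth = [50_000, 20_000, 60_000, 25_000, 55_000]

print(f"steady drive:   {consistency_pct(steady):.1f}%")
print(f"sawtooth drive: {consistency_pct(sawtooth):.1f}%")
```

Note that the sawtooth drive scores far worse despite posting the single highest peak, which is exactly why peak IOPS alone can mislead.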
5G means more data
The roll-out of 5G is certain to lead to a rapid expansion in the data being generated, and naturally to increased demand for data centers to process it. This demand might not be easily met by a one-size-fits-all approach; instead, organizations may look towards adapting infrastructure and business models to support the new data demands of customers.
Optimized drives can address this broader customer spectrum. And to meet this challenge data centers will increasingly adopt NVMe solutions, be it AIC, M.2 or U.2, with further new technology and applications, such as PCIe 4.0, doubling per-lane PCIe bandwidth from roughly 1GB/s to 2GB/s.
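Those per-lane figures follow directly from the link's raw transfer rate and its 128b/130b encoding: PCIe 4.0 doubles the rate from 8GT/s to 16GT/s, which works out to roughly 1GB/s and 2GB/s of usable bandwidth per lane. A quick sketch of the arithmetic:

```python
def lane_gbps(gt_per_s: float) -> float:
    """Usable GB/s per PCIe lane: transfer rate (GT/s) scaled by the
    128b/130b encoding efficiency, divided by 8 bits per byte."""
    return gt_per_s * 128 / 130 / 8

for gen, rate in [("PCIe 3.0", 8.0), ("PCIe 4.0", 16.0)]:
    print(f"{gen}: {lane_gbps(rate):.2f} GB/s per lane")
```

A x4 NVMe drive on PCIe 4.0 therefore has close to 8GB/s of link bandwidth available, compared with SATA's 6Gbps ceiling.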