In consumer desktop and laptop computers, the migration to NVMe storage is part of an ongoing revolution in storage performance. NVMe frees flash storage from the constraints of the SATA and SAS interfaces, which were designed in an era when it was assumed data would live on hard disks with spinning magnetic platters, with correspondingly long delays between issuing a read command and that data becoming available to the CPU.
NVMe also replaces the inefficient AHCI protocol, which was designed when a single serial command queue was enough for any system, again due to the physical limitations of older storage media. NVMe instead supports thousands of parallel command queues, a far better match for flash memory.
Using up to four PCI Express lanes, NVMe storage devices offer maximum read and write performance many times faster than SATA, depending on which generation of PCIe technology a device uses. PCIe 3.0 NVMe storage tops out at roughly 1GB/s per lane, for a theoretical limit of 4GB/s, while PCIe 4.0 doubles that to 2GB/s per lane, or 8GB/s.
Real world differences
Real-world factors, including firmware, controller design and budget considerations, mean most measured performance comes in slightly under these theoretical limits, but NVMe still offers considerably more headroom than the 600MB/s limit of SATA storage, let alone hard disks.
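The bandwidth comparison above can be sketched as a quick back-of-envelope calculation. The per-lane figures are the approximate theoretical ceilings mentioned in the text; real drives deliver somewhat less.

```python
# Rough comparison of theoretical interface bandwidth.
# Per-lane figures are approximate ceilings; real drives fall short.
PCIE_LANE_GBPS = {"PCIe 3.0": 1.0, "PCIe 4.0": 2.0}  # GB/s per lane
NVME_LANES = 4          # a typical NVMe SSD uses four PCIe lanes
SATA_GBPS = 0.6         # SATA III tops out at about 600 MB/s

for gen, per_lane in PCIE_LANE_GBPS.items():
    total = per_lane * NVME_LANES
    print(f"{gen} x4 NVMe: ~{total:.0f} GB/s "
          f"(~{total / SATA_GBPS:.1f}x the SATA III ceiling)")
```

Even allowing for real-world overheads, a four-lane PCIe 4.0 drive has over an order of magnitude more interface headroom than SATA.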
The result is faster application performance, quicker boot times, and easier manipulation of giant files and datasets.
Alongside the new interface and updated command queues, form factors such as M.2 shrink SSDs physically and reduce power consumption, a natural fit for consumer devices and a key enabler of thinner laptop designs that rely on battery power.
Of course, the data center world has very different requirements to client computing and moves at a completely different pace. But with such a leap ahead in storage performance, new form factors and reduced power consumption, the data center world is bound to see as much change from the switch to NVMe as client computing has.
The advantages for the data center are different, though. The biggest problem currently facing the data center world is demand. The whole world is moving to digital technology, bringing an exponential rise in the number of devices in use and in online activity, especially in the cloud. That increase drives greater demand for digital services, and with it the need for processing power to handle the rise in data use.
But despite considerable efforts to make data centers as green and efficient as possible, such as switching to renewable energy, it would be hard to suggest data centers are good for the environment. They're filled with row upon row of noisy servers that gobble energy and pump out heat, which in turn demands energy-hungry air conditioning to keep temperatures under control. They take up large amounts of space. And they're expensive to construct and maintain.
Building even more data centers is one way we could meet the world’s growing data demand. Increasing efficiency and throughput in our existing data centers is another, and no other technology that’s available today offers as much of an improvement as NVMe storage.
In a single data center, you could (in theory) replace ten storage racks filled with SATA SSDs with just one that’s based on PCIe 4.0 NVMe storage and still get better throughput, or even replace a far greater number of racks if they’re still using hard disks.
That’s an incredible difference, and it suggests we will soon have our cake and eat it too: a lot more data processing capacity alongside improved efficiency.
When it comes to the storage devices themselves, NVMe SSDs already offer the crucial technology that server flash storage depends on. The U.2 form factor provides the hot-swap capability that’s essential in a working data center. Power loss protection, hardware encryption, predictable low latency and high I/O consistency are standard in high-quality NVMe drives specifically optimised for data center workloads.
Meanwhile, with SSD capacities also ever increasing, we think the benefits of NVMe are so obvious that it’s a matter of when, not if, all data centers switch to NVMe storage. But we see this change taking some time, potentially up to five years before the majority of the world’s data centers are using NVMe.
Switching is more complex than swapping out your SATA drives and plugging in new storage. NVMe depends on support at the chipset and processor level. That means you might need an entire server upgrade, with more planning, more complex data migration and more upfront capital expenditure.
Faced by such a decision, admins may choose to look at current and projected data consumption and schedule their switch to NVMe accordingly. There may not be an immediate need for massively increased data throughput, but such capacity might be needed in three years.
There may be compatibility issues too. Chipsets may require certain types of DRAM or certain configurations for the best performance, and when planning a wider data center upgrade it may help to talk to your storage provider about potential options for the best solution to meet your needs.
Many reputable vendors offer this advice without any obligation to purchase from them, and taking it is certainly recommended when planning a big upgrade.