In a previous blog post, I wrote about how Ethernet Storage Fabric (ESF) is the logical choice as the next-generation storage network because of its superior performance, intelligence, and efficiency.

This, of course, raises the question: what about Fibre Channel? Traditionally, high-performance networked storage meant block storage, and that meant building a Fibre Channel SAN. Indeed, when storage and data center architects first developed distributed storage and evaluated the available networking technologies to untether hard disk drives, Fibre Channel was the wise choice: it offered the best combination of performance, storage-aware intelligence, efficiency, and reliability. Fast forward two decades, however, and evaluating networks on the same metrics yields a very different answer. Measured on performance, flexibility, reliability, scalability, and security, Ethernet surpasses Fibre Channel on every count today.


The Ethernet performance advantage

Performance is a key reason that Ethernet is supplanting Fibre Channel today. But storage networking is not just about bandwidth and latency; it’s also about storage protocols. In its day, Fibre Channel was an innovative technology that took the parallel SCSI bus and serialized it.

Serializing the bus also meant it could be switched, and thus networked and scaled. But there was essentially no innovation in the storage protocol itself: the basic SCSI protocol remained unchanged and was simply serialized, so the traditional command, ready, data, response sequence was just replayed in serial form.
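The cost of that straight serialization can be sketched with a toy model. This is an illustration, not a protocol implementation; the phase names mirror the command/ready/data/response exchange described above (in Fibre Channel's FCP mapping these appear as FCP_CMND, FCP_XFER_RDY, FCP_DATA, and FCP_RSP), and the latency figure is a made-up placeholder.

```python
# Toy model of the serialized SCSI write sequence: each phase is a
# distinct message on the wire, so a single write burns two full
# network round trips before status comes back.

SCSI_WRITE_PHASES = ["COMMAND", "XFER_RDY", "DATA", "RESPONSE"]

def write_latency_us(one_way_us: float) -> float:
    """Wire latency of one SCSI write: every phase crosses the network once."""
    return len(SCSI_WRITE_PHASES) * one_way_us

# With a hypothetical 25 us one-way fabric hop, four serialized phases
# already cost 100 us before any media access happens.
print(write_latency_us(25.0))  # → 100.0
```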

This really didn’t matter when all storage was hard disk drive (HDD) media with access times on the order of 10 milliseconds. With the long latency of spinning rust platters and mechanical read/write heads seeking into place, who cared about 100 microseconds of network latency, limited parallelism, software locks, and CPU interrupts galore? All of this changed with the advent of flash memory. Today, the latency of flash solid state drive (SSD) storage is measured not in milliseconds but in tens of microseconds.
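The arithmetic behind that shift is worth making explicit. Using the round numbers from the paragraph above (10 ms HDD access, 100 µs of network latency) plus an assumed 20 µs flash access time, the network goes from rounding error to the dominant cost:

```python
# Back-of-the-envelope latency shares (illustrative numbers, not benchmarks):
# HDD access ~10 ms, flash SSD access ~20 us (assumed), network hop ~100 us.

def network_share(media_latency_us: float, network_latency_us: float) -> float:
    """Fraction of total remote access time spent in the network."""
    return network_latency_us / (media_latency_us + network_latency_us)

hdd = network_share(10_000, 100)  # ~1%  — network is noise next to the disk
ssd = network_share(20, 100)      # ~83% — network dominates flash access

print(f"HDD: network is {hdd:.0%} of access time")
print(f"SSD: network is {ssd:.0%} of access time")
```

The exact SSD figure is an assumption, but the conclusion is robust: once media latency drops into the tens of microseconds, the network and protocol stack become the bottleneck.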

New low-latency storage media have exposed SCSI itself as a bottleneck. Because of limitations inherent to the SCSI protocol, Fibre Channel-attached media cannot take full advantage of flash SSDs. Realizing this, the industry developed the NVMe protocol, which bypasses the SCSI protocol entirely.

NVMe streamlines flash memory access directly over the PCI Express interface to achieve optimal performance, latency, and parallelism while minimizing interrupts and software locks. Today, NVMe SSDs deliver the highest performance, and as the technology has matured and come down in price, NVMe has become the de facto interface even for volume flash SSDs. But this flash performance has been confined to a single server, limiting scalability, reducing efficiency, and causing poor utilization.
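The parallelism claim comes from NVMe's queueing model: the specification allows up to 64K submission/completion queue pairs, each up to 64K entries deep, so each CPU core can own its own queue and submit without taking a shared lock. The sketch below is a toy model of that idea, not driver code:

```python
from collections import deque
from dataclasses import dataclass, field

# Toy model of NVMe's per-core queueing (limits per the NVMe spec:
# up to 64K queue pairs, each up to 64K commands deep — contrast a
# legacy single SCSI/AHCI queue of 32 outstanding commands).

@dataclass
class QueuePair:
    depth: int
    submission: deque = field(default_factory=deque)

    def submit(self, cmd: str) -> bool:
        """Enqueue a command; returns False when the queue is full."""
        if len(self.submission) >= self.depth:
            return False  # caller must back off — queue full
        self.submission.append(cmd)
        return True

# One queue pair per core: no shared queue, hence no software lock
# contention between cores submitting I/O concurrently.
NVME_MAX_QUEUES, NVME_MAX_DEPTH = 65_535, 65_536
per_core_queues = [QueuePair(depth=NVME_MAX_DEPTH) for _ in range(8)]
```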


Enter NVMe-over-Fabrics (NVMe-oF), which gets that bandwidth out of the box. NVMe-oF defines a protocol that provides remote, networked access to NVMe flash storage as efficiently as possible. To do this, NVMe-oF leverages Remote Direct Memory Access (RDMA) technology to move data between applications without involving the host CPU. RDMA over Converged Ethernet, or RoCE, is the key enabling technology that makes this possible over ordinary Ethernet networks.
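What "without involving the host CPU" means can be shown conceptually. The classes below are hypothetical stand-ins, not a real RDMA API such as libibverbs: the target registers a memory region up front, and thereafter the initiator's NIC places data directly into it, so no target-side software runs per transfer.

```python
# Conceptual sketch of RDMA one-sided write semantics (hypothetical
# classes for illustration only — real RDMA uses verbs/queue pairs).

class MemoryRegion:
    """Target-side buffer registered with the NIC ahead of time."""
    def __init__(self, size: int):
        self.buf = bytearray(size)
        self.rkey = id(self)  # stand-in for the RDMA remote key

class RnicModel:
    """Models the NIC performing data placement on the target's behalf."""
    def rdma_write(self, region: MemoryRegion, rkey: int,
                   offset: int, data: bytes) -> None:
        # Access check done in NIC hardware, not target software:
        assert rkey == region.rkey
        region.buf[offset:offset + len(data)] = data
        # Note: no target-side application code ran for this transfer —
        # that is the "CPU-free" property NVMe-oF relies on.

region = MemoryRegion(4096)
RnicModel().rdma_write(region, region.rkey, 0, b"block-0 payload")
```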

The standardized versions of NVMe-oF use RDMA (over either InfiniBand or RoCE) as an integral means of achieving low latency, transport offload, user-space data transfers, and CPU-free data movement. Backers of other transports, including Fibre Channel, having realized the threat that NVMe-oF poses, have sought to develop Rube Goldberg-like contraptions that look like NVMe-oF but are, in fact, inelegant attempts to retrofit a new technology behind an old one. In theory, sometime in the future, you might be able to run NVMe-over-Fabrics over Fibre Channel … but why would you?

It is clear that Fibre Channel is in decline, and the advent of NVMe-over-Fabrics will accelerate that decline. Forward-thinking enterprise storage architects have realized this and are embracing the change brought by the disruption of the cloud. These innovators are abandoning Fibre Channel and building the next generation of enterprise data centers on ESF, achieving the scalability and efficiency of the cloud with enterprise levels of security.

The hyperscalers and public cloud providers were the first to realize this and have adopted a converged Ethernet Storage Fabric in their mega data centers. This helps explain the continued decline in Fibre Channel ports, at negative 6 percent CAGR, against Ethernet’s rapid growth at 18 percent CAGR (Crehan Research, Long Term Forecast Report, July 2017).

At the end of the day, the market is the ultimate arbiter of technology battles. When I look at the market right now, I see a ton of OEMs, flash vendors, and startups investing in NVMe-oF solutions, and a robust ecosystem of Ethernet switch and RoCE NIC vendors investing in technology and competing for these sockets. By contrast, given the consolidation in the Fibre Channel space, there aren’t any hungry, pure-play companies driving investment to move that technology forward.

And then, of course, there is the cloud. Did I mention there is no Fibre Channel in the cloud?

Kevin Deierling is VP of Marketing at Mellanox