Not Just for NVMe: Emerging IP Storage Protocols Redefining Ethernet's Role in the Data Center

Ethernet has quietly become the backbone of modern enterprise infrastructure, and now it’s taking over one more domain that used to be the stronghold of specialized networks: storage.

While NVMe over Fabrics (NVMe-oF) often grabs the spotlight, it’s just one part of a much broader shift. Across industries, organizations are leveraging Ethernet to transport not just block storage, but also high-performance file services, legacy protocols, and even emerging object workloads, all on a single converged fabric.

Understanding the diversity of IP-based storage protocols is critical to designing infrastructure that’s ready for what’s next. Let’s take a closer look at the key players, where they thrive, and how Arista’s fabric architecture supports them at scale.

The Expanding IP Storage Protocol Landscape

NVMe/TCP

NVMe/TCP brings the high-speed performance of NVMe to standard TCP/IP networks. Unlike RDMA-based transports, NVMe/TCP doesn’t require specialized rNICs or lossless-fabric configuration such as Data Center Bridging (DCB). That simplicity makes it ideal for fast deployment across existing Ethernet infrastructure.

It’s already gaining traction in use cases like flash-optimized databases, AI/ML pipelines, and cloud-native platforms like Kubernetes. Arista’s high-bandwidth, low-latency switches provide the foundation to ensure NVMe/TCP runs predictably and scales with demand.
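
For illustration, here is a minimal sketch of how a Linux host might attach to an NVMe/TCP target using the standard nvme-cli utility, invoked from Python. The address, port, and subsystem NQN are placeholders for your environment, not values from this article.

```python
import subprocess

# Placeholders -- substitute your target's address, port, and subsystem NQN.
TARGET_IP = "192.0.2.10"       # example/documentation address
TARGET_PORT = "4420"           # conventional NVMe/TCP port
SUBSYS_NQN = "nqn.2014-08.org.example:subsystem1"  # hypothetical NQN

def run(cmd: list[str]) -> str:
    """Run a command (as root) and return its stdout, raising on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Discover the subsystems the target exposes over plain TCP.
print(run(["nvme", "discover", "-t", "tcp", "-a", TARGET_IP, "-s", TARGET_PORT]))

# Connect to one subsystem; its namespaces then appear as /dev/nvmeXnY.
run(["nvme", "connect", "-t", "tcp", "-a", TARGET_IP, "-s", TARGET_PORT, "-n", SUBSYS_NQN])
```

Note that no switch-side lossless configuration is needed for this to work, which is exactly the operational simplicity the protocol trades on.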

NFS over RDMA

For organizations relying on file-based storage, NFS over RDMA is a compelling option. It reduces CPU overhead and significantly improves performance by bypassing much of the traditional networking stack. This is especially valuable in technical computing, media production, and AI training pipelines where file access needs to be fast and efficient.

Arista’s lossless Ethernet capabilities, including deep buffers and full support for RDMA via RoCEv2, enable NFS over RDMA to perform reliably even in large-scale, dynamic environments.
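
As a rough sketch, mounting an export over RDMA on a Linux client is a single mount with the rdma option. The server address, export path, and NFS version below are assumptions for illustration.

```python
import subprocess

# Hypothetical server, export, and mount point -- adjust for your environment.
SERVER = "198.51.100.20"
EXPORT = "/export/data"
MOUNTPOINT = "/mnt/data"

# Mount the export over RDMA. Port 20049 is the conventional NFS/RDMA port;
# the client needs an RDMA-capable NIC (e.g. RoCEv2) and the xprtrdma module.
subprocess.run(
    ["mount", "-t", "nfs", "-o", "rdma,port=20049,vers=4.1",
     f"{SERVER}:{EXPORT}", MOUNTPOINT],
    check=True,
)
```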

SMB Direct

Windows-heavy environments often rely on SMB for file sharing, and SMB Direct supercharges that protocol with RDMA support. This is particularly beneficial for workloads like Microsoft Hyper-V and SQL Server, where storage I/O is tightly linked to overall performance.

As with NFS over RDMA, the success of RDMA-based protocols like SMB Direct depends on a robust, lossless transport layer, something Arista’s switching architecture is purpose-built to deliver.
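
On the host side, SMB Direct engages automatically on SMB 3.x when both client and server have RDMA-capable NICs. As a quick sanity check, the sketch below shells out from Python to the standard Get-SmbClientNetworkInterface cmdlet on a Windows host to confirm the client's interfaces report RDMA capability.

```python
import subprocess

# SMB Direct engages automatically on SMB 3.x when both client and server
# NICs are RDMA-capable; this just verifies what the client reports.
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-SmbClientNetworkInterface | Select-Object FriendlyName, RdmaCapable"],
    check=True, capture_output=True, text=True,
)
print(result.stdout)
```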

iSCSI

While often viewed as legacy, iSCSI still plays an important role in many environments. It provides a reliable, well-supported path for block storage over IP networks. With the performance improvements offered by modern Ethernet (10GbE and beyond), iSCSI remains viable for general-purpose workloads and is often used as a baseline when evaluating newer options like NVMe/TCP.
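
For reference, a typical open-iscsi session setup on Linux looks like the following; the portal address and target IQN are placeholders, not values from this article.

```python
import subprocess

# Placeholders -- substitute your portal address and target IQN.
PORTAL = "192.0.2.30:3260"                       # default iSCSI port is 3260
TARGET_IQN = "iqn.2001-04.com.example:storage1"  # hypothetical IQN

# Discover targets behind the portal, then log in to establish the session;
# the LUNs then appear as ordinary SCSI block devices (e.g. /dev/sdX).
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL], check=True)
subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", PORTAL, "--login"], check=True)
```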

Use Case-Driven Protocol Selection

Different workloads place different demands on the storage fabric, and each of these protocols brings something unique to the table; a short selection sketch follows the list below.

  • AI/ML pipelines benefit from high-throughput, low-latency storage access, making NVMe/TCP and NFS over RDMA strong options.
  • Media and entertainment workflows demand fast file access and consistent performance, which can be delivered via NFS or SMB, especially when RDMA is used.
  • Hyperconverged infrastructure (HCI) platforms commonly rely on Ethernet-based storage, using protocols like iSCSI or NVMe/TCP.
  • High-performance databases need low-latency block access, typically served by NVMe/TCP or fast iSCSI implementations.
  • Virtualized environments depend on protocol compatibility: SMB Direct for Hyper-V and a mix of NFS, iSCSI, or NVMe/TCP for VMware and KVM deployments.
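
The mapping below is only an illustrative sketch of those pairings; the names and orderings are assumptions, and real selections hinge on vendor support, latency targets, and whether the fabric is configured for lossless (RDMA) transport.

```python
# Illustrative only: a rough mapping from workload category to candidate
# protocols, in rough order of fit.
PROTOCOLS_BY_WORKLOAD = {
    "ai_ml_pipeline":      ["NVMe/TCP", "NFS over RDMA"],
    "media_entertainment": ["NFS over RDMA", "SMB Direct"],
    "hyperconverged":      ["iSCSI", "NVMe/TCP"],
    "high_perf_database":  ["NVMe/TCP", "iSCSI"],
    "hyper_v":             ["SMB Direct"],
    "vmware_kvm":          ["NFS", "iSCSI", "NVMe/TCP"],
}

def candidates(workload: str) -> list[str]:
    """Return candidate protocols for a workload, falling back to iSCSI."""
    return PROTOCOLS_BY_WORKLOAD.get(workload, ["iSCSI"])

print(candidates("ai_ml_pipeline"))  # ['NVMe/TCP', 'NFS over RDMA']
```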

The Case for Converging Storage on Ethernet

Running multiple storage protocols over a unified Ethernet fabric brings real advantages. It simplifies operations, reduces the number of specialized platforms that need to be supported, and allows organizations to leverage their existing networking expertise.

However, the success of a converged fabric depends on the network’s ability to prioritize traffic and maintain visibility. Storage traffic often has strict latency and throughput requirements, especially when running alongside other data center applications.

Arista addresses this with robust Quality of Service (QoS) controls, including class-based queuing, traffic shaping, and support for DSCP and CoS tagging. This ensures that latency-sensitive storage traffic is handled appropriately, even when sharing bandwidth with other workloads. In addition, tools like CloudVision provide real-time telemetry, allowing teams to monitor storage traffic, detect congestion, and proactively manage performance across the network.
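
To make the marking concrete: an application or storage initiator can set a DSCP value on its sockets so that switch QoS policies can classify the traffic. In this sketch the DSCP value (26, AF31) and the target address are assumptions; whatever value is used must match the classification policy configured on the fabric.

```python
import socket

# Hypothetical marking: DSCP 26 (AF31) as a storage class. The value must
# match whatever classification policy is configured on the switches.
DSCP_STORAGE = 26

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# DSCP occupies the upper six bits of the IP TOS byte, hence the shift by 2.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_STORAGE << 2)
sock.connect(("192.0.2.10", 4420))  # e.g. the NVMe/TCP target used earlier
sock.close()
```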

Supporting Protocol Diversity at Scale

Arista’s portfolio is built for modern, multiprotocol storage environments. With deep buffers, high throughput, and full support for lossless Ethernet transport, Arista switches allow organizations to run block and file protocols—legacy and next-gen—on the same fabric without compromise.

Whether your environment requires NVMe/TCP, NFS over RDMA, SMB Direct, iSCSI, or some combination, Arista provides the foundation to do it at scale. This flexibility enables IT teams to design around use case needs—not protocol limitations—and future-proof their infrastructure as storage strategies evolve.

Final Thoughts

As storage continues to evolve, the ability to support multiple protocols on a unified Ethernet fabric will become a key differentiator. Ethernet’s versatility, combined with the power of Arista’s architecture, offers a path forward that’s simpler, more cost-effective, and ready for the growing performance demands of enterprise IT.

The storage fabric of the future won’t be a patchwork of specialized networks; it will be Ethernet, running everything from NVMe to SMB, all with confidence.

If you’re exploring ways to modernize your storage network, let’s talk. We can help you design a fabric that’s built to support what’s next.

Table 1: IP Storage Protocol Comparison

| Protocol | Transport | Primary Use Case | Key Benefit | Performance Profile (Latency, Throughput) | Arista Fabric Enabler |
|---|---|---|---|---|---|
| NVMe/TCP | TCP | High-performance block storage (flash, SSDs) | High performance on standard Ethernet; no specialized rNICs needed | Low latency, high throughput/IOPS | High-bandwidth, low-latency Ethernet; deep buffers |
| NFS over RDMA | RDMA (RoCE) | High-performance file sharing (HPC, AI, media) | Reduced CPU overhead, very low latency, high throughput for file access | Very low latency, high throughput | Lossless Ethernet (DCB/PFC/ECN), RoCE support, deep buffers |
| SMB Direct | RDMA (RoCE) | Windows file sharing (Hyper-V, SQL Server) | Reduced CPU overhead, very low latency, high throughput for Windows workloads | Very low latency, high throughput | Lossless Ethernet (DCB/PFC/ECN), RoCE support, deep buffers |
| iSCSI | TCP | General-purpose block storage, legacy integration | Broad compatibility; runs on standard Ethernet | Moderate latency, good throughput on fast Ethernet | High-bandwidth Ethernet, QoS |