
A Decision Framework for iSCSI, NVMe/TCP, and NVMe/RoCE
Choosing the right storage protocol is the most critical architectural decision you'll make when migrating from Fibre Channel. This guide provides a clear, workload-driven framework to help you select the right transport for your performance, operational, and financial goals.
Beyond Speeds and Feeds: A New Framework for Storage
The conversation around modern storage networking has moved beyond a simple "Fibre Channel vs. Ethernet" debate. Today’s architects know that a converged Ethernet fabric is the future, offering dramatic TCO reduction, seamless scalability, and an open, future-proof platform.1
The real question is no longer if you should move to Ethernet, but how. With powerful protocols like iSCSI, NVMe/TCP, and NVMe/RoCE available, there is no single "best" option. The optimal choice depends entirely on a trade-off between three critical pillars: raw performance, operational complexity, and total cost of ownership.
This guide is designed to walk you through that decision-making process, empowering you to architect a storage network that is perfectly aligned with your specific business and application needs.
(For a foundational understanding of the technologies discussed, we recommend reviewing our primers: the Fibre Channel vs. Ethernet Guide (https://intelligentvisibility.com/fibre-channel-vs-ethernet-storage), the Storage Migration Guide, the NVMe-oF Guide, the RoCE vs. iWARP Guide, and the iSCSI Guide.)

The Three Pillars of Protocol Selection
Pillar 1: The Performance Profile
While all modern protocols on high-speed Ethernet are fast, they are not the same. The key difference is latency.
NVMe/RoCE (RDMA over Converged Ethernet) is the undisputed performance gold standard, offering the lowest possible latency by bypassing the host CPU's networking stack for direct memory access. In benchmark tests, RoCE can deliver up to 66% lower write latency compared to NVMe/TCP, making it essential for applications where microseconds matter.
NVMe/TCP offers a significant leap in performance over iSCSI, reducing latency by up to 34%.3 It delivers many of the benefits of the NVMe command set over standard, ubiquitous TCP/IP networks.
iSCSI on modern 10/25/100GbE networks is a proven workhorse. While its latency is higher than NVMe-based protocols due to TCP/IP overhead, it provides more than enough throughput for a vast range of enterprise workloads.
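When you evaluate these latency differences for your own environment, a synthetic benchmark run against the same device over each transport is a quick way to quantify the gap. The sketch below uses fio for a 4K random-read latency test; the device path, queue depth, and runtime are illustrative assumptions you would tune to match your workload.

    # Measure 4K random-read latency on a block device presented over
    # iSCSI, NVMe/TCP, or NVMe/RoCE (replace /dev/nvme1n1 with your device)
    fio --name=lat-test --filename=/dev/nvme1n1 --direct=1 \
        --ioengine=libaio --rw=randread --bs=4k --iodepth=16 \
        --numjobs=4 --runtime=60 --time_based --group_reporting

Running an identical job against each transport on the same hardware gives a like-for-like view of median and tail latency for your specific I/O profile.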
Pillar 2: Operational Complexity & Required Skillset
A faster protocol is only valuable if your team can deploy and manage it effectively.
iSCSI & NVMe/TCP are operationally simple. They run over standard Ethernet and the well-understood TCP/IP stack that your network and systems teams have managed for decades. This simplicity accelerates deployment and reduces the risk of misconfiguration.
NVMe/RoCE demands a meticulously configured "lossless" network. Achieving this requires deep expertise in Data Center Bridging (DCB) technologies like Priority-based Flow Control (PFC) and Explicit Congestion Notification (ECN).2 This complexity is often a significant barrier to adoption for teams without specialized skills.
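To make that complexity concrete, the sketch below shows the general shape of a lossless-class switch configuration: PFC enabled with a dedicated no-drop priority (commonly priority 3 for RoCE) and ECN marking on the matching queue. The commands follow Arista EOS conventions, but the exact syntax, queue mapping, and thresholds are assumptions that vary by platform and software version, so treat this as illustrative rather than a deployable configuration.

    interface Ethernet1/1
       ! Enable PFC and make the RoCE priority (3) a no-drop class
       priority-flow-control on
       priority-flow-control priority 3 no-drop
       ! Mark packets with ECN before the RoCE queue overflows
       tx-queue 3
          random-detect ecn minimum-threshold 160 kbytes maximum-threshold 1600 kbytes

This policy must be applied consistently on every switch port and host NIC in the RoCE path, which is precisely the operational burden described above.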
Pillar 3: Total Cost of Ownership (TCO)
The financial impact extends beyond the initial purchase price.
iSCSI & NVMe/TCP are the most cost-effective options, as they run on standard, commodity Ethernet NICs that are already in your servers.
NVMe/RoCE carries a higher cost. It requires more expensive RDMA-capable NICs (RNICs) and, to guarantee lossless performance, often relies on higher-end, deep-buffered switches. While this investment typically still comes in around 30% below legacy Fibre Channel, it is more significant than iSCSI or NVMe/TCP and must be justified by a clear business requirement for extremely low latency.
Workload to Protocol Mapping
Based on the three pillars, here are our clear, use-case-driven recommendations.
Workload Profile | Our Recommendation | Justification
General Compute & HCI (VMware, Databases, Nutanix) | NVMe/TCP | The pragmatic successor to iSCSI. It offers significant performance gains with the same operational ease and cost-effectiveness, running over a well-architected TCP network (10/25GbE host connectivity, 100/400GbE interconnects, deep buffers, EVPN/VXLAN).
Extreme Low-Latency Applications (AI/ML Training, HFT, Real-Time Analytics) | NVMe/RoCE | The ultimate in performance. For these workloads, microsecond-level latency is a direct business requirement that justifies the higher cost and operational complexity of a lossless fabric. Still significantly less expensive and less complex than Fibre Channel, but the underlying Ethernet fabric needs closer architectural attention (RoCEv2/RDMA-capable NICs, PFC/ECN, deep buffers).
Cost-Effective SAN Consolidation (Tier 2/3 Applications, Backups, File Services) | Modern iSCSI | A mature, reliable, and well-understood protocol that provides excellent value and robust performance on modern 25/100GbE networks, making it ideal for consolidating diverse workloads.
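To show how these recommendations translate to the host side, the commands below sketch how a Linux initiator would attach to storage over each transport using the standard open-iscsi and nvme-cli tools. The portal addresses, IQN, and NQN are placeholder assumptions; substitute the values published by your array.

    # Modern iSCSI: discover targets and log in (placeholder portal and IQN)
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
    iscsiadm -m node -T iqn.2001-05.com.example:tgt1 -p 192.0.2.10:3260 --login

    # NVMe/TCP: discover and connect to a subsystem (placeholder address and NQN)
    nvme discover -t tcp -a 192.0.2.20 -s 8009
    nvme connect -t tcp -n nqn.2014-08.org.example:subsys1 -a 192.0.2.20 -s 4420

    # NVMe/RoCE: the same nvme-cli workflow over the rdma transport on a lossless fabric
    nvme connect -t rdma -n nqn.2014-08.org.example:subsys1 -a 192.0.2.30 -s 4420

The near-identical host workflow for NVMe/TCP and NVMe/RoCE reinforces the point of this framework: what changes between the options is not the application-facing experience but the performance, complexity, and cost of the network underneath.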
A Single Fabric for Every Workload
Making the right protocol choice shouldn't force you into architectural compromises. An Arista-powered Ethernet fabric is uniquely engineered to run all of these workloads on a single, converged, and highly observable network without trade-offs.
Deep Buffers Absorb Bursts: Arista's R-series switches are built with ultra-deep buffers that absorb the microbursts and incast traffic common in storage environments, preventing packet loss for both TCP and RoCE traffic.
Lossless-Ready with DCB: Full, native support for Data Center Bridging (DCB) means the fabric is ready to deliver the guaranteed lossless transport required for your most demanding NVMe/RoCE workloads.
Intelligent Observability: With Arista CloudVision, you gain end-to-end visibility into your storage flows. This allows you to monitor performance, proactively identify congestion, and ensure QoS for your mission-critical applications, de-risking the move to a converged fabric.
With Intelligent Visibility and Arista, you can design your network around your applications, not the limitations of a protocol.
(Read more about network equipment selection for storage networking in our guide Switch Selection for Ethernet Storage)
Resources

Guide: NVMe-oF
Maximize your storage performance with NVMe-oF on Ethernet. This page dissects NVMe/TCP (easy deployment, standard networks) versus NVMe/RoCE (ultra-low latency RDMA, needs lossless DCB), helping you choose the optimal transport by detailing trade-offs, hardware, and network requirements.
Learn More
Guide: Switch Selection for Storage Networking
When storage traffic meets standard Ethernet, performance suffers. Learn what makes a switch storage-ready — from deep buffers to DCB — and how to build a high-performance, lossless Ethernet SAN for protocols like NVMe-oF and iSCSI.
Learn More
Guide: RoCE vs. iWARP
Curious whether RoCE or iWARP is right for your storage fabric? This guide demystifies RDMA technology and explores how each protocol impacts latency, deployment complexity, and NVMe-oF readiness—so you can choose with confidence.
Learn More
Guide: iSCSI
A practical guide to iSCSI in modern storage networks. Learn how iSCSI works, where it fits in Fibre Channel migrations, and when to choose it over NVMe/TCP.
Learn More
Architecture: NVMe/TCP
Explore our detailed Reference Architecture for NVMe/TCP in VMware Environments. Proven deployment guidance to ensure performance, visibility, and scale.
Download
Architecture: NVMe-oF/RoCEv2
Explore our detailed Reference Architecture for NVMe-oF/RDMA with RoCEv2 for ultra-high-performance environments. Proven deployment guidance to ensure performance, visibility, and scale.
Download
Architecture: Nutanix HCI + Arista
A scalable, low-latency reference architecture using Nutanix + Arista EVPN/VXLAN with support for metro and multi-site replication.
Download