iSCSI for Modern Storage: Protocol, Architecture & Migration
A practical guide to deploying iSCSI as part of your move from Fibre Channel to Ethernet.

iSCSI for Modern Storage: Table of Contents
Introduction: What is iSCSI and Why Does It Still Matter in 2025?
How iSCSI Works: The Underlying Mechanism
Key Components of an iSCSI Architecture
Advantages of iSCSI: Why It Remains a Popular Choice
iSCSI Disadvantages and Performance Considerations: A Balanced View
Critical Network Design Best Practices for Optimal iSCSI Performance
iSCSI's Role in Fibre Channel to Ethernet Migration Strategies
The Evolving Role of iSCSI in the Era of NVMe/TCP
Making the Right Choice: iSCSI or NVMe/TCP for Your Needs
Conclusion: iSCSI's Enduring Place in the Storage Networking Landscape
Frequently Asked Questions
Introduction: What is iSCSI and Why Does It Still Matter in 2025?
Internet Small Computer Systems Interface (iSCSI) is an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. It enables block-level access to storage devices by carrying SCSI commands over TCP/IP networks. Essentially, iSCSI allows servers (initiators) to send SCSI commands to storage devices (targets) located on a remote system, making them appear as locally attached disks.
Developed in the late 1990s and standardized by the IETF in 2004 (RFC 3720, since consolidated into RFC 7143), iSCSI revolutionized storage area networks (SANs) by offering a more affordable and accessible alternative to traditional Fibre Channel (FC) SANs. It achieved this by leveraging the ubiquity and familiarity of standard Ethernet and IP networking infrastructure.
Even in 2025, with the rise of newer technologies like NVMe over Fabrics (NVMe-oF), iSCSI remains highly relevant. It continues to be a workhorse for many organizations, offering a reliable, cost-effective, and well-understood solution for various storage needs, particularly for small to medium-sized businesses, virtualized environments, and specific application tiers. It also plays a crucial role in data migration strategies as enterprises modernize their storage infrastructure.
How iSCSI Works: The Underlying Mechanism
Understanding iSCSI begins with grasping its core function: extending the reach of the SCSI protocol beyond the confines of a local server bus.
The Core Concept: SCSI Commands Over TCP/IP
At its heart, iSCSI encapsulates SCSI commands, data, and status messages within TCP/IP packets. This allows these SCSI interactions, which are fundamental for block-level storage operations (like reading and writing data blocks to a disk), to traverse standard IP networks. TCP (Transmission Control Protocol) provides the reliable, connection-oriented transport, ensuring that data is delivered in order and retransmitted if lost, while IP (Internet Protocol) handles the addressing and routing of these packets across the network.
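The encapsulation idea can be sketched in a few lines of Python. This packs a simplified 48-byte iSCSI Basic Header Segment (BHS), the fixed header that begins every iSCSI PDU; the field offsets follow the standard, but real PDUs carry many more opcode-specific fields (flags, CmdSN, ExpStatSN, the SCSI CDB itself), so treat this as an illustration of the framing, not a working initiator.

```python
import struct

# Opcode for a SCSI Command request PDU (initiator -> target).
OP_SCSI_COMMAND = 0x01

def build_bhs(opcode: int, data_segment_length: int, lun: int, task_tag: int) -> bytes:
    """Pack a simplified 48-byte iSCSI Basic Header Segment (BHS).

    Shows the core framing idea: SCSI traffic travels as fixed-size
    binary headers (plus data segments) inside a TCP byte stream.
    """
    header = bytearray(48)
    header[0] = opcode & 0x3F
    # Byte 4 is TotalAHSLength (left 0 here); DataSegmentLength is 3 bytes.
    header[5:8] = data_segment_length.to_bytes(3, "big")
    header[8:16] = struct.pack(">Q", lun)        # LUN / opcode-specific field
    header[16:20] = struct.pack(">I", task_tag)  # Initiator Task Tag
    return bytes(header)

# A 4 KiB write to LUN 0, tagged so the response can be matched to it.
bhs = build_bhs(OP_SCSI_COMMAND, data_segment_length=4096, lun=0, task_tag=1)
```

The task tag is what lets the initiator have many commands in flight on one TCP connection and still pair each response with its request.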
The Client-Server Model: Initiators and Targets
iSCSI operates on a client-server model:
iSCSI Initiator: This is the client component, typically residing on a server that needs to access storage. The initiator is responsible for originating SCSI commands and sending them to an iSCSI target.
iSCSI Target: This is the server component, usually a storage array or a server configured to provide storage. The target receives SCSI commands from initiators, processes them (e.g., reads data from or writes data to its disks), and sends back data and status responses.
iSCSI Sessions, Connections, and Discovery (including iSNS)
Communication between an initiator and a target occurs within an iSCSI session. A session is established through a login process where the initiator discovers and connects to the target. This discovery can be manual (static configuration of target IP addresses and iSCSI Qualified Names - IQNs) or automated using protocols like the Internet Storage Name Service (iSNS, RFC 4171), which acts as a centralized directory service for iSCSI devices on the network, simplifying management and discovery in larger environments. Within a single iSCSI session, one or more TCP connections can be established to facilitate parallel processing and potentially improve performance or provide redundancy (though multipathing is the more common approach for redundancy).
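The IQN format mentioned above (`iqn.<yyyy-mm>.<reversed-domain>[:<unique-name>]`) is easy to sanity-check programmatically. The sketch below is a rough structural validator, not the full set of naming rules from the standard; the example name mimics the shape open-iscsi generates, and its value is illustrative only.

```python
import re

# iqn.<yyyy-mm>.<reversed-domain>[:<unique-name>]
IQN_RE = re.compile(
    r"^iqn\.\d{4}-(0[1-9]|1[0-2])\.[a-z0-9]([a-z0-9.-]*[a-z0-9])?(:.+)?$"
)

def is_valid_iqn(name: str) -> bool:
    """Rough structural check of an iSCSI Qualified Name; real initiators
    and targets apply further character and length restrictions."""
    return IQN_RE.match(name) is not None

# Shape of a typical software-initiator name (example value, not real):
example = "iqn.1993-08.org.debian:01:8badf00d"
```

Note that iSCSI also permits `eui.`-prefixed names (IEEE EUI-64 format), which this IQN-only check deliberately rejects.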
Key Components of an iSCSI Architecture
A functioning iSCSI SAN involves several key components:
iSCSI Initiators: The Clients
The initiator is the software or hardware component on the host server that initiates iSCSI communication.
Software Initiators: These are the most common type. Most modern operating systems (Windows, Linux, VMware ESXi, macOS) include built-in iSCSI software initiators. They use the host server's standard Ethernet Network Interface Cards (NICs) and CPU to perform iSCSI and TCP/IP processing.
Hardware iSCSI Host Bus Adapters (HBAs) vs. NICs with TCP Offload Engines (TOEs):
Hardware iSCSI Host Bus Adapters (HBAs): These are specialized adapter cards specifically designed for iSCSI. They offload both the iSCSI protocol processing and the entire TCP/IP stack processing from the host CPU. This significantly reduces CPU utilization on the server, which can be beneficial for performance-critical applications. They present storage to the OS similarly to how a Fibre Channel HBA would.
NICs with TCP Offload Engines (TOEs): Some advanced standard Ethernet NICs include TOE functionality. These NICs offload only the TCP/IP processing from the server's CPU, which can still provide a noticeable performance benefit and CPU reduction compared to no offload. However, the iSCSI protocol processing itself (SCSI command encapsulation/decapsulation) remains handled by the host CPU's software initiator.
iSCSI Targets: The Storage Servers
The iSCSI target is the storage system that provides block-level storage resources to initiators.
Storage Arrays: Most modern enterprise and SMB storage arrays offer iSCSI target functionality, presenting storage capacity over their Ethernet ports. These arrays manage the physical disks (HDDs or SSDs) and RAID configurations.
Logical Unit Numbers (LUNs): A target presents storage to initiators in the form of LUNs. A LUN is a uniquely numbered logical disk that the initiator's operating system can discover, format, and use as if it were a local hard drive. iSCSI targets control which initiators are allowed to access specific LUNs (a process called LUN masking or LUN mapping) for security and data segregation.
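Conceptually, LUN masking is just an access-control table on the target keyed by initiator name. The toy sketch below (all names and LUN numbers hypothetical) shows the idea; real arrays persist these ACLs and enforce them during discovery and login.

```python
# Hypothetical target-side LUN-masking table: each permitted initiator IQN
# maps to the set of LUNs it is allowed to see.
lun_masks = {
    "iqn.2005-10.org.example:host-db01": {0, 1},
    "iqn.2005-10.org.example:host-web01": {2},
}

def visible_luns(initiator_iqn: str) -> set:
    """Return the LUNs a given initiator may discover.

    Unknown initiators get an empty set -- they see no storage at all,
    which is the safe default for masking.
    """
    return lun_masks.get(initiator_iqn, set())
```

The deny-by-default behavior is the important part: an initiator that is not explicitly listed should never be able to enumerate LUNs.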
The IP Network: The Transport Fabric
The IP network, comprising standard Ethernet switches, NICs, and cabling, forms the transport fabric for iSCSI traffic. Unlike Fibre Channel, which requires specialized switches and HBAs, iSCSI leverages existing, widely understood Ethernet technology. While iSCSI can technically run over routed WANs, it is predominantly used within Local Area Networks (LANs) to minimize latency.
iSCSI Security Mechanisms
iSCSI includes mechanisms for secure deployments. Challenge-Handshake Authentication Protocol (CHAP) is commonly used to authenticate initiators and targets, preventing unauthorized access by verifying credentials during session establishment. Furthermore, for environments requiring encryption of data-in-transit, iSCSI traffic can be secured using IPsec (Internet Protocol Security) to protect data traversing potentially untrusted network segments. Proper LUN masking on the target is also crucial for authorization.
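The CHAP exchange iSCSI uses comes from RFC 1994: the target sends a random challenge, and the initiator answers with an MD5 digest over a one-byte identifier, the shared secret, and that challenge, so the secret itself never crosses the wire. A minimal sketch of the response computation (secret and identifier values are made up):

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994, as used by iSCSI authentication:
    MD5 over the one-byte identifier, the shared secret, and the challenge."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The target issues a fresh random challenge per login attempt, which is
# what prevents straightforward replay of an old response.
challenge = os.urandom(16)
resp = chap_response(0x27, b"example-shared-secret", challenge)
```

Because CHAP only authenticates and MD5 is dated, pairing it with IPsec (for confidentiality) and strict LUN masking, as the section above notes, is the usual hardening advice.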
Advantages of iSCSI: Why It Remains a Popular Choice
iSCSI's enduring popularity stems from several key advantages:
Cost-Effectiveness: This is arguably iSCSI's most significant benefit. It utilizes standard, commodity Ethernet hardware (NICs, switches, cables) which is generally much less expensive than specialized Fibre Channel components. Organizations can leverage their existing IP network infrastructure and expertise, reducing both capital expenditure (CapEx) and operational expenditure (OpEx).
Ease of Setup and Management: IT administrators are generally very familiar with Ethernet and TCP/IP networking concepts. Setting up and managing an iSCSI SAN is often perceived as simpler and requiring less specialized training compared to the complexities of managing a Fibre Channel fabric (zoning, WWNs, etc.).
Scalability: iSCSI SANs can be easily scaled by adding more storage capacity to targets or by adding more targets and initiators, using standard Ethernet switches to expand the network. Network bandwidth can be increased by upgrading to faster Ethernet speeds (e.g., 1GbE to 10GbE, 25GbE, 100GbE, and beyond).
Flexibility and Distance: Because iSCSI runs over IP, it can theoretically span long distances over routed networks (WANs or MANs), enabling remote data replication and disaster recovery scenarios, although latency becomes a major consideration over longer distances.
Boot from SAN Capability: Many iSCSI initiators (both software and hardware HBAs) support booting servers directly from an iSCSI LUN. This allows for diskless server configurations, simplifying server provisioning, replacement, and improving disaster recovery capabilities by centralizing boot images.
iSCSI Disadvantages and Performance Considerations: A Balanced View
Despite its advantages, iSCSI is not without its limitations and performance considerations:
Dependency on Network Performance: iSCSI performance is directly tied to the underlying IP network's health and capacity. Network congestion, insufficient bandwidth, or high latency can significantly degrade iSCSI storage performance. Unlike Fibre Channel, which traditionally offers a more deterministic and dedicated fabric, shared Ethernet can introduce variability.
CPU Overhead from TCP/IP Processing: When using software initiators, the host server's CPU is responsible for both iSCSI protocol processing and the TCP/IP stack processing. This can consume significant CPU cycles, especially under heavy I/O loads, potentially impacting the performance of applications running on the server.
Mitigation: Hardware iSCSI HBAs or NICs with TOE capabilities can offload this processing, freeing up host CPU resources, but this adds to the cost, somewhat reducing the "low-cost" advantage.
Performance Compared to Traditional Fibre Channel:
Latency: In non-optimized or heavily congested Ethernet networks, iSCSI latency can be noticeably higher than that of a well-configured Fibre Channel SAN. FC's design inherently provides lower protocol overhead.
Throughput: With adequate network bandwidth (e.g., 10GbE or higher) and proper network design, iSCSI can achieve throughput comparable to or even exceeding older generations of Fibre Channel. However, FC has also continued to evolve its speed capabilities.
Predictability: Fibre Channel often offers more predictable performance due to its dedicated nature and lossless design, whereas iSCSI performance can be more variable if the underlying Ethernet network is shared and not optimized for storage.
Critical Network Design Best Practices for Optimal iSCSI Performance
To achieve reliable and high-performance iSCSI storage, careful network design is crucial. Simply running iSCSI over a general-purpose, unoptimized corporate LAN is often a recipe for poor performance.
Dedicated Networks or VLANs: Traffic Isolation:
Physical Separation: The ideal scenario is often a completely separate physical Ethernet network dedicated solely to iSCSI traffic. This eliminates contention with other network traffic (LAN, internet, management).
Logical Separation (VLANs): If a physically separate network isn't feasible, using Virtual LANs (VLANs) to isolate iSCSI traffic into its own broadcast domain is a minimum requirement. This helps segregate storage traffic and apply specific policies to it.
Quality of Service (QoS): Prioritizing Storage Traffic:
On converged networks where iSCSI shares links with other traffic, QoS mechanisms (like IEEE 802.1p priority tagging and Differentiated Services Code Point - DSCP) should be implemented on switches to prioritize iSCSI packets, ensuring they receive preferential treatment during periods of congestion.
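DSCP marking is mechanically simple: the 6-bit codepoint occupies the upper bits of the IP TOS/Traffic Class byte, with the two ECN bits below it. The helper below shows that mapping; choosing Expedited Forwarding (DSCP 46) for storage traffic is one common policy, but it only helps if the switches are configured to honor that codepoint.

```python
def dscp_to_tos(dscp: int) -> int:
    """Map a 6-bit DSCP codepoint to the IP TOS / Traffic Class byte value:
    DSCP sits in the upper six bits; the lower two bits carry ECN."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit value (0-63)")
    return dscp << 2

# Expedited Forwarding (DSCP 46) -> TOS byte 0xB8. On Linux this is the
# value an application would pass via
#   sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
EF_TOS = dscp_to_tos(46)
```

In practice the marking is usually applied by the initiator's network stack or by the switch at the access port rather than per-socket, but the bit layout is the same everywhere.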
Jumbo Frames:
Enabling jumbo frames (increasing the Maximum Transmission Unit - MTU from the standard 1500 bytes to typically 9000 bytes) allows more SCSI data to be carried in each Ethernet frame. This reduces the number of packets and packet headers that need to be processed, lowering CPU overhead and potentially increasing throughput. Jumbo frames must be configured consistently end-to-end (initiators, switches, targets).
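The per-packet saving is easy to quantify with rough arithmetic. The sketch below counts Ethernet frames needed to move a payload at a given MTU, subtracting only the 40 bytes of IPv4 and TCP headers and ignoring iSCSI PDU headers, TCP options, and segmentation details, so the numbers are approximate.

```python
import math

IP_TCP_HEADERS = 40  # 20-byte IPv4 header + 20-byte TCP header, no options

def frames_needed(payload_bytes: int, mtu: int) -> int:
    """Approximate Ethernet frames required to carry a payload at a given
    MTU -- a rough sizing sketch, not an exact on-the-wire count."""
    per_frame = mtu - IP_TCP_HEADERS
    return math.ceil(payload_bytes / per_frame)

one_mib = 1024 * 1024
std = frames_needed(one_mib, 1500)    # standard frames: 719
jumbo = frames_needed(one_mib, 9000)  # jumbo frames: 118
```

Roughly a sixfold drop in frame (and header-processing) count per megabyte is why jumbo frames measurably cut CPU overhead for software initiators, and why an MTU mismatch anywhere along the path causes fragmentation or silent blackholing instead of a speedup.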
Multipathing (MPIO): Enhancing Redundancy and Performance:
Using Multipath I/O (MPIO) software on the host allows multiple network paths to be established between an initiator and a target. This provides path redundancy (failover if one path goes down) and can improve performance through load balancing across the available paths.
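The two MPIO behaviors described above, rotation for load balancing and skipping dead paths for failover, can be sketched as a toy path selector. Real MPIO stacks (Linux dm-multipath, Windows MPIO, VMware NMP) also handle path probing, queue depths, and ALUA states; the path addresses here are made up.

```python
import itertools

class RoundRobinPaths:
    """Toy MPIO path selector: rotate I/O across paths, skip failed ones."""

    def __init__(self, paths):
        self.paths = list(paths)
        self.failed = set()          # paths currently marked down
        self._cycle = itertools.cycle(self.paths)

    def next_path(self):
        """Return the next healthy path, or raise if every path is down."""
        for _ in range(len(self.paths)):
            p = next(self._cycle)
            if p not in self.failed:
                return p
        raise RuntimeError("all paths down")

# Two portals on separate subnets (hypothetical addresses), matching the
# usual practice of putting each path on its own switch/VLAN.
sel = RoundRobinPaths(["10.0.10.1", "10.0.20.1"])
```

Putting each path on physically separate switches is what turns MPIO's failover from a software feature into genuine fault tolerance.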
Addressing Network Contention and Potential Upgrade Costs:
It's critical to ensure that the Ethernet switches used have sufficient backplane capacity (non-blocking architecture) and adequate per-port bandwidth. If existing network infrastructure is old or under-provisioned, significant upgrades might be necessary to support iSCSI effectively, potentially eroding some of the initial cost savings compared to FC. Consider switches with features like deep buffers to handle traffic bursts common in storage environments.
iSCSI's Role in Fibre Channel to Ethernet Migration Strategies
For organizations looking to move away from Fibre Channel, iSCSI often plays a significant role in the migration strategy.
A Mature and Proven Ethernet SAN Protocol
iSCSI is a well-established, mature, and widely supported protocol. Its stability, interoperability across various vendors, and the wealth of available knowledge and expertise make it a reliable choice for building Ethernet-based SANs.
iSCSI as a Common Migration Target or Stepping Stone
Direct Migration Target: Many organizations migrate workloads from older FC SANs directly to new iSCSI-based storage systems to leverage cost savings and simplify management with familiar Ethernet/IP tools.
Stepping Stone to Newer Protocols: iSCSI can also serve as an intermediate step. For example, an organization might first migrate from FC to a 10GbE or 25GbE iSCSI environment. Later, as application needs evolve or newer storage systems are adopted, they might then migrate specific high-performance workloads from iSCSI to even faster Ethernet-based protocols like NVMe/TCP, having already established the Ethernet SAN infrastructure.
The Evolving Role of iSCSI in the Era of NVMe/TCP
While iSCSI has been a dominant force in IP-based storage, the landscape is evolving with the advent of NVMe/TCP, which is designed to extend the high performance of the NVMe protocol directly over TCP/IP networks.
Where iSCSI Continues to Shine
Despite the rise of NVMe/TCP, iSCSI remains a cost-effective, robust, and viable solution for many use cases in 2025, particularly for:
Small to Medium-Sized Businesses (SMBs): Where budgets are tighter and the simplicity of leveraging existing Ethernet skills and infrastructure is paramount.
Less Performance-Critical Applications: Workloads such as backups, archives, development/test environments, and general file sharing often do not require the ultra-low latency of NVMe-oF.
Tier 2/3 Workloads: Applications that need reliable shared storage but are not at the absolute peak of performance demand.
Virtualization Infrastructure: While high-performance VMs might benefit from NVMe/TCP, many general-purpose virtualized workloads run perfectly well on well-configured iSCSI SANs, especially with 10GbE or faster networks.
Cost-Sensitive Deployments: Where the cost of NVMe-capable storage arrays and potentially newer NICs for optimal NVMe/TCP performance cannot yet be justified for all workloads.
iSCSI vs. NVMe/TCP: Key Differences and Performance Insights
Protocol Design: iSCSI encapsulates legacy SCSI commands. NVMe/TCP is built around the more efficient, parallel, and low-overhead NVMe command set designed for flash.
Performance: In like-for-like network conditions, especially with all-flash storage arrays, NVMe/TCP generally offers significantly lower latency and higher IOPS compared to iSCSI. The streamlined NVMe protocol results in less software overhead.
CPU Efficiency: NVMe/TCP is often more CPU-efficient than iSCSI (using software initiators) because the NVMe protocol itself is lighter and designed for parallelism, leading to better utilization of multi-core CPUs.
Ease of Deployment: Both leverage standard TCP/IP networks. As of 2025, NVMe/TCP is a production-grade protocol with widespread support in modern enterprise operating systems (e.g., Linux kernels 5.x and later, Windows Server 2022 and later, VMware vSphere 7.0 U2 and later) and across numerous storage arrays. This makes its deployment increasingly straightforward and comparable in complexity to iSCSI for supported environments.
Performance benchmarks consistently show NVMe/TCP outperforming iSCSI, particularly when accessing modern all-flash storage arrays where the storage media itself is not the bottleneck.
Making the Right Choice: iSCSI or NVMe/TCP for Your Needs
The decision between iSCSI and NVMe/TCP for new deployments or migrations depends on:
Specific Performance Demands: For applications requiring the lowest possible latency and highest IOPS (e.g., high-transaction databases, real-time analytics on flash), NVMe/TCP is increasingly the preferred choice.
Existing Infrastructure and Budget: If existing Ethernet infrastructure is robust (10GbE+) and servers/storage support it, NVMe/TCP can be a clear winner. For environments where cost is a primary driver and extreme performance isn't needed for all workloads, iSCSI remains attractive.
Application Requirements: Analyze whether applications can truly benefit from the microsecond-level latency improvements NVMe/TCP can offer over a well-tuned iSCSI setup.
Future Growth Strategies: If a future shift to all-flash and NVMe-centric architectures is planned, adopting NVMe/TCP sooner for new deployments might be more strategic.
For many, a hybrid approach might be suitable, using NVMe/TCP for performance-critical workloads and iSCSI for other general-purpose storage needs.
Conclusion: iSCSI's Enduring Place in the Storage Networking Landscape
iSCSI has fundamentally changed storage networking by democratizing SAN technology through the use of ubiquitous and cost-effective Ethernet and IP. Its ease of use, scalability, security features like CHAP, support for Boot from SAN, and lower cost compared to traditional Fibre Channel have cemented its place in data centers worldwide.
While newer protocols like NVMe/TCP are pushing the boundaries of performance for Ethernet-based storage, iSCSI (as of 2025) remains a highly relevant, mature, and dependable solution for a wide array of applications and organizational needs. Its role continues as a workhorse for many, a practical migration target from Fibre Channel, and a valuable component in a diversified storage strategy that might also include higher-performance options like NVMe-oF. Understanding its strengths, limitations, and optimal deployment practices allows organizations to make informed decisions and leverage iSCSI effectively in their evolving IT infrastructures.
Frequently Asked Questions
What is iSCSI?
iSCSI (Internet Small Computer System Interface) is a storage networking standard that allows SCSI commands (used for block-level storage access) to be sent over standard TCP/IP networks. Essentially, it lets servers access remote storage as if it were a locally attached disk.
Why is iSCSI still relevant in 2025 if there are newer technologies?
Even with newer technologies like NVMe/TCP, iSCSI remains highly relevant in 2025 due to its cost-effectiveness, reliability, widespread familiarity, and suitability for many SMBs, virtualized environments, and specific application tiers. It also plays a key role in storage migration strategies.
How do SCSI commands travel over a TCP/IP network with iSCSI?
iSCSI encapsulates SCSI commands, data, and status messages within TCP/IP packets. TCP ensures reliable, ordered delivery, while IP handles the addressing and routing across the network, effectively extending SCSI communication beyond a local server bus.
What are "iSCSI initiators" and "iSCSI targets"?
The initiator is the client component, typically software or a hardware HBA on a server, that originates SCSI commands and sends them to storage. The target is the server component, usually a storage array, that receives those commands, services them against its disks, and returns data and status responses.
What is a LUN in an iSCSI setup?
A LUN (Logical Unit Number) is a uniquely numbered logical disk that an iSCSI target presents to an initiator. The initiator's operating system can discover, format, and use this LUN as if it were a local hard drive.
What are the primary disadvantages or performance concerns with iSCSI?
iSCSI performance is highly dependent on the health and capacity of the underlying IP network; congestion or high latency can degrade it. Software iSCSI initiators can also lead to CPU overhead on the host server due to TCP/IP processing.
How can iSCSI performance be optimized?
Optimal performance relies on careful network design, such as using dedicated networks or VLANs for traffic isolation, implementing Quality of Service (QoS), enabling jumbo frames, and using Multipathing (MPIO) for redundancy and load balancing. Hardware iSCSI HBAs or NICs with TCP Offload Engines can also reduce host CPU load.
What's the key difference between iSCSI and NVMe/TCP?
The main difference lies in the underlying storage protocol they transport. iSCSI encapsulates traditional SCSI commands, designed for spinning disks. NVMe/TCP transports NVMe commands, which are specifically designed for modern, high-performance flash storage, resulting in lower latency and higher efficiency.
When should I choose iSCSI versus NVMe/TCP for my storage needs today?
Choose NVMe/TCP for performance-critical applications requiring the absolute lowest latency and highest IOPS, especially with all-flash storage. Opt for iSCSI for more cost-sensitive deployments, general-purpose applications, backups, or when leveraging existing robust Ethernet infrastructure where extreme low-latency isn't the primary driver for all workloads.