Marvell Technology Inc.


Still the One: Why Fibre Channel Will Remain the Gold Standard for Storage Connectivity

For the past two decades, Fibre Channel has been the gold-standard protocol for Storage Area Networking (SAN) and a mainstay in the data center for mission-critical workloads, providing high-availability connectivity between servers, storage arrays and backup devices. If you're new to this market, you may have wondered whether the technology's origin has some kind of British backstory. Actually, the spelling of "Fibre" simply reflects the fact that the protocol supports not only optical fiber but also copper cabling, though the latter only over much shorter distances.

During this same period, servers matured into multicore, high-performance machines with significant amounts of virtualization. Storage arrays have moved away from rotating disks to flash and NVMe storage devices that deliver higher performance at much lower latencies. New storage solutions based on hyperconverged infrastructure have come to market to allow applications to move out of the data center and closer to the edge of the network. Ethernet networks have gone from 10Mbps to 100Gbps and beyond. Given these changes, one would assume that Fibre Channel's best days are in the past.

The reality is that Fibre Channel technology remains the gold standard for server-to-storage connectivity because it has not stood still; it continues to evolve to meet the demands of today's most advanced compute and storage environments. There are several reasons Fibre Channel is still favored over protocols like Ethernet or InfiniBand for server-to-storage connectivity.

Reliability and Resiliency

Fibre Channel was purpose-built for connecting servers to storage devices, and Fibre Channel SANs provide a high-performance expressway for storage traffic. For example, Fibre Channel incorporates a buffer-credit mechanism that ensures reliable, in-order delivery of data; host and target I/O devices minimize CPU utilization through full protocol offload; and switches provide advanced analytics and built-in name services. Zoning is used to segregate and isolate target and initiator communications. In addition, new capabilities have been developed that allow switches and adapters to communicate with each other about the overall SAN environment. This helps identify and eliminate congestion in the fabric and provides self-healing capability within the SAN.
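To make the buffer-credit idea concrete, here is a minimal Python sketch of the concept (illustrative only; not vendor code, and not the actual FC-FS state machine): a port may transmit a frame only while it holds a credit, and the receiver returns a credit each time it frees a buffer, so frames queue at the sender rather than being dropped in the network.

# Illustrative model of Fibre Channel buffer-to-buffer credit flow control.
# A teaching sketch, not an implementation of the FC standards.

from collections import deque

class CreditedLink:
    def __init__(self, bb_credit: int):
        self.credits = bb_credit          # credits granted at login
        self.rx_buffers = deque()         # receiver's frame buffers

    def send_frame(self, frame) -> bool:
        if self.credits == 0:
            return False                  # no credit, no transmit: wait
        self.credits -= 1                 # consume one credit per frame
        self.rx_buffers.append(frame)     # frame is guaranteed a buffer
        return True

    def receiver_drain_one(self):
        if self.rx_buffers:
            self.rx_buffers.popleft()     # receiver frees a buffer...
            self.credits += 1             # ...and returns a credit (R_RDY)

link = CreditedLink(bb_credit=2)
assert link.send_frame("f1") and link.send_frame("f2")
assert not link.send_frame("f3")          # blocked, never dropped
link.receiver_drain_one()                 # credit returned
assert link.send_frame("f3")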

When trying to use a general-purpose protocol like Ethernet for storage connectivity, system administrators must go to great lengths to optimize the Ethernet environment for storage traffic. This includes mapping and masking each connection on servers and switches; deploying lossless Ethernet to ensure in-order delivery of storage data; and using link aggregation and VLANs to segregate traffic. These steps are not required with Fibre Channel, and they add significant complexity to the network design while limiting scalability to one or two hops. This is especially true when deploying RoCEv2 RDMA, which is required to reduce network latency. Fibre Channel has had direct memory access with full offloads from day one; twenty years later, the industry has come around to the view that Ethernet, too, should have direct access to system memory for I/O processing.
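As a rough, admittedly simplified illustration of that provisioning delta (the exact steps vary widely by vendor and environment; these checklists are assumptions made for the sake of the example, not an authoritative procedure):

# Simplified, illustrative provisioning checklists. Real steps vary by
# vendor and environment; the point is the delta, not the exact items.

fc_san_steps = [
    "cable HBA to fabric",
    "zone initiator with target (or let target-driven zoning do it)",
]

ethernet_block_storage_steps = [
    "cable NIC to switch",
    "configure VLANs to segregate storage traffic",
    "enable lossless Ethernet (PFC/ETS) end to end",
    "configure link aggregation for resiliency",
    "map and mask each initiator-target connection on servers and switches",
    "tune RoCEv2/ECN settings to keep latency in check",
]

extra = len(ethernet_block_storage_steps) - len(fc_san_steps)
print(f"Ethernet block storage adds ~{extra} configuration steps per fabric")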

Cost and Complexity

When it was first developed, Fibre Channel was viewed as costly and complex. With hexadecimal addresses called worldwide names, charting out zones in the early days was a tedious exercise. The fact that SANs were completely dedicated to storage traffic and required two distinct paths in the fabric for high availability made them appear expensive as well.

However, in 2011 the concept of target-driven zoning came into play, allowing Fibre Channel switches to automate zoning and greatly reducing the complexity of the Fibre Channel SAN. In addition, storage administrators have realized that even with Ethernet, the only way to deliver predictable storage performance is with a dedicated storage network. And today, the TCO of high-performance Ethernet NICs, switches, cabling and optics is comparable to that of Fibre Channel switches, adapters and optics.
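Here is a hypothetical sketch of what target-driven (peer) zoning automates. The Fabric class and its method names are illustrative inventions, not a real switch API; the point is that the target registers the initiators it should serve, and the zone falls out automatically instead of an administrator charting WWPNs by hand:

# Illustrative model of target-driven (peer) zoning. Classes and method
# names are hypothetical; real switches expose this through fabric
# services, not a Python API.

class Fabric:
    def __init__(self):
        self.zones = {}

    def register_peer_zone(self, target_wwpn, initiator_wwpns):
        # The target "drives" the zone: one principal member (itself)
        # plus the initiators allowed to reach it.
        self.zones[f"tdz_{target_wwpn}"] = {
            "principal": target_wwpn,
            "peers": set(initiator_wwpns),
        }

    def can_communicate(self, a, b) -> bool:
        for z in self.zones.values():
            members = {z["principal"]} | z["peers"]
            # Peer zoning: only target<->initiator pairs may talk;
            # initiators in the same zone cannot see each other.
            if {a, b} <= members and z["principal"] in (a, b):
                return True
        return False

fabric = Fabric()
fabric.register_peer_zone("50:01:43:80:aa:bb:cc:01",
                          ["10:00:00:90:fa:11:22:33",
                           "10:00:00:90:fa:44:55:66"])
assert fabric.can_communicate("10:00:00:90:fa:11:22:33",
                              "50:01:43:80:aa:bb:cc:01")
assert not fabric.can_communicate("10:00:00:90:fa:11:22:33",
                                  "10:00:00:90:fa:44:55:66")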

Bandwidth Debate

Ethernet proponents will tout the fact that 100Gbps and 200Gbps Ethernet is readily available, while Fibre Channel tops out at 64GFC today. While this is true, very few storage arrays offer connectivity above 25Gbps for Ethernet or 32GFC for Fibre Channel. The fact that a switch or NIC can run at high bandwidth is irrelevant if the storage array and the server's PCIe slot can't keep up. In addition, 128Gbps Fibre Channel is in full development across the industry.
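The PCIe side of that argument is easy to check with back-of-the-envelope numbers (nominal per-lane rates and encoding overhead only; real-world throughput is lower still):

# Back-of-the-envelope check: can the server's PCIe slot even feed a
# high-bandwidth port? Nominal numbers; protocol overhead is ignored.

def pcie_gbps(gtps_per_lane: float, lanes: int, encoding: float) -> float:
    """Usable PCIe bandwidth in Gbps for one direction."""
    return gtps_per_lane * lanes * encoding

gen3_x8 = pcie_gbps(8.0, 8, 128 / 130)    # ~63 Gbps
gen4_x8 = pcie_gbps(16.0, 8, 128 / 130)   # ~126 Gbps

print(f"PCIe Gen3 x8: {gen3_x8:.0f} Gbps -> cannot saturate a 100GbE port")
print(f"PCIe Gen4 x8: {gen4_x8:.0f} Gbps -> comfortably feeds 100GbE or 64GFC")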

The standards groups for both Ethernet and Fibre Channel work from the same starting points for bandwidth. The same technology used for 100Gbps and 200Gbps Ethernet is already available to Fibre Channel and is enabling its evolution to higher bandwidths. 2021 has seen the first data center deployments of 64GFC for host and switch connectivity, though 64GFC likely won't become mainstream until sometime in 2023. By the time data center SANs move to higher bandwidth, Fibre Channel will be there, with all of its unique benefits.

[Figure: Benefits of QLogic® Fibre Channel HBA Technology from Marvell]

Manageability and Troubleshooting

There's no question that IT departments have the expertise to manage Ethernet networks. Heck, that is what all internet and cloud traffic runs on, making Ethernet skills table stakes in today's enterprise. Likewise, there are plenty of software tools and utilities from a multitude of vendors to monitor and troubleshoot Ethernet issues as they arise. This can create a challenge of its own, though, in that management tools vary by vendor across operating systems, switches and network adapters.

Fibre Channel isn't that much different, however. As mentioned earlier, setup and zoning of the storage fabric are far simpler with Fibre Channel than with Ethernet, and because there are only two primary Fibre Channel switch vendors and two Host Bus Adapter (HBA) vendors, there are only a few management utilities the IT team needs to be trained on. Diagnostic tools and software for Fibre Channel SANs are very comprehensive, and new capabilities like Fabric Performance Impact Notifications (FPINs) enable the switch and HBA devices to interact with each other automatically to address issues like congestion and link integrity.
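For a feel of how that interaction works, here is a hypothetical sketch of an FPIN-style feedback loop. The event names and the throttling policy are simplified stand-ins of my own, not the actual FPIN wire format: the fabric notifies the HBA of a problem, and the driver reacts, for example by pacing I/O toward a congested destination.

# Illustrative FPIN-style feedback loop: the switch notifies end devices
# about fabric problems and the HBA driver reacts. Event names and the
# policy below are simplified stand-ins, not the standard's format.

from dataclasses import dataclass

@dataclass
class Fpin:
    kind: str          # e.g. "congestion" or "link_integrity"
    wwpn: str          # the port the notification concerns

class HbaDriver:
    def __init__(self):
        self.io_rate_limit = {}   # wwpn -> fraction of full I/O rate

    def handle_fpin(self, fpin: Fpin):
        if fpin.kind == "congestion":
            # Pace I/O toward the congested port instead of flooding it.
            cur = self.io_rate_limit.get(fpin.wwpn, 1.0)
            self.io_rate_limit[fpin.wwpn] = max(0.25, cur * 0.5)
        elif fpin.kind == "link_integrity":
            # Steer multipath traffic away from the marginal link.
            self.io_rate_limit[fpin.wwpn] = 0.0

driver = HbaDriver()
driver.handle_fpin(Fpin("congestion", "50:01:43:80:aa:bb:cc:01"))
print(driver.io_rate_limit)   # {'50:01:43:80:aa:bb:cc:01': 0.5}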

Innovation

Fibre Channel was born carrying SCSI traffic, and innovation has always been in its DNA. In addition to increasing bandwidth with each new generation of HBAs and switches, new capabilities continue to be added to the standards. Support for Hardware Root of Trust was added to prevent the insertion of malware into HBA firmware. A new standard for transmitting NVM Express™ over Fibre Channel (FC-NVMe) was introduced, and all enhanced 16GFC HBAs and switches support this new protocol. FC-NVMe v2 was recently released to improve error recovery and is now supported in select HBAs as well. The standards were further enhanced to support encryption in the Fibre Channel fabric, a capability now available in select HBA offerings. All of these capabilities further improve the performance and security of Fibre Channel technology.
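To make the Hardware Root of Trust idea concrete, here is a minimal stand-in in Python. Real HBAs verify a vendor signature against a key fused into silicon at manufacture; a bare SHA-256 digest comparison is used here only to show the control flow, and the firmware names are invented:

# Minimal stand-in for a Hardware Root of Trust check. Real devices
# verify a cryptographic signature with an immutable hardware key; a
# plain digest is used here only to make the boot decision concrete.

import hashlib
import hmac

# Digest anchored in (simulated) immutable hardware at manufacture time.
GOOD_FIRMWARE = b"example-hba-fw-v9.0.0"          # hypothetical image
FUSED_DIGEST = hashlib.sha256(GOOD_FIRMWARE).digest()

def boot_firmware(image: bytes) -> str:
    measured = hashlib.sha256(image).digest()
    # Constant-time comparison against the anchored digest.
    if not hmac.compare_digest(measured, FUSED_DIGEST):
        return "REJECTED: image does not match root-of-trust digest"
    return "BOOTED: firmware verified"

print(boot_firmware(GOOD_FIRMWARE))                        # BOOTED
print(boot_firmware(b"example-hba-fw-v9.0.0-tampered"))    # REJECTED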

Summary

There is no question that Ethernet has a place in data center storage. RDMA-enabled Ethernet networks are ideal for AI/ML solutions that require large data stores for a small number of servers, as we see today in the JBOF and EBOF solutions on the market. Ethernet is also a good fit for software-defined storage and hyperconverged infrastructure (HCI) solutions, where multiple servers share captive storage resources.

However, as new shared storage arrays are deployed, architects will find that Fibre Channel has all the features and capabilities needed to remain the gold standard for SAN connectivity to these high-performance arrays. With 32GFC readily available today and 64GFC on the horizon, there is plenty of bandwidth for the business-critical workloads of today and tomorrow. With tens of millions of ports in production, the proven reliability and scalability of Fibre Channel will continue to make it the best option for connecting servers and storage in the enterprise data center.

At Marvell, we provide industry-leading QLogic Fibre Channel HBA technology to our customers. Our port-isolated architecture ensures predictable performance across each adapter port, and our unified driver makes deploying next-generation NVMe storage arrays easy, eliminating the need to add extra drivers to each server as competitive offerings require. Lastly, with full integration for management and troubleshooting with both Brocade® and Cisco® Fibre Channel switches and directors, QLogic adapters drop easily into any SAN fabric. For more information on QLogic Fibre Channel adapter technology from Marvell, visit www.marvell.com/qlogic.
