Presentation
Benchmarking Ethernet Interconnect for HPC/AI workloads
Session: Communication, I/O, and Storage at Scale on Next-Generation Platforms – Scalable Infrastructures
Description: Interconnects have always played a cornerstone role in HPC. Since the inception of the Top500 ranking, interconnect statistics have been dominated by two competing technologies: InfiniBand and Ethernet. However, even though Ethernet has grown in popularity thanks to its versatility and cost-effectiveness, InfiniBand used to provide higher bandwidth and continues to offer lower latency. Industry is seeking a further evolution of the Ethernet standards to enable fast, low-latency interconnects for emerging AI workloads by offering competitive, open-standard solutions. This paper analyzes early results obtained from two systems built on an HPC Ethernet interconnect, one based on 100G and the other on 200G Ethernet. Preliminary findings indicate that the Ethernet-based networks exhibit competitive performance, closely aligning with InfiniBand, especially for large message exchanges.
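To make the "large message exchanges" claim concrete, the sketch below shows the kind of MPI ping-pong bandwidth microbenchmark (in the spirit of osu_bw) commonly used to compare interconnects across message sizes. It is a minimal illustration under assumed parameters (message sizes up to 4 MiB, 100 iterations, GB/s output); it is not the benchmark suite used in the paper.

/*
 * Illustrative two-rank ping-pong bandwidth sketch (not the paper's benchmark).
 * Rank 0 sends a message of a given size to rank 1, which echoes it back;
 * bandwidth is derived from the round-trip volume over the measured time.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 100;                              /* assumed repetition count */
    for (int size = 1; size <= (1 << 22); size *= 2) {  /* 1 B .. 4 MiB, assumed range */
        char *buf = malloc((size_t)size);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0) {
            /* each iteration moves the message twice (out and back) */
            double gbps = (2.0 * size * iters) / (t1 - t0) / 1e9;
            printf("%10d bytes : %8.3f GB/s\n", size, gbps);
        }
        free(buf);
    }
    MPI_Finalize();
    return 0;
}

When used for interconnect comparisons, the two ranks must be placed on different nodes (for example, with Open MPI: mpirun -np 2 --map-by node ./pingpong) so the measured path is the network fabric rather than shared memory; the large-message end of the size sweep is where bandwidth differences between 100G/200G Ethernet and InfiniBand become visible.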