Microway has delivered a full-scale 2nd Gen AMD EPYC processor-powered cluster to a major insurance company. Refreshing their existing cluster gave the company more than double their previous compute cores, far greater memory bandwidth, and PCIe® 4.0 capability to drive vastly improved HPC application performance.
Featuring dual/redundant head nodes to manage and operate it, the new cluster has a total of 2,816 processor cores, 11TB of memory, and a 2GB/s scratch storage device in each node. Also included is Mellanox 100Gb/s EDR InfiniBand connectivity between all nodes.
In addition to supplying the cluster, Microway architected a 64-port fabric that optimizes cost without affecting overall workload performance. The fabric minimizes switch count without sacrificing latency, was validated to meet the application's bandwidth needs, and supplies additional ports for cluster growth and for linking in parallel storage.
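For readers who want to sanity-check the figures above, here is a minimal back-of-the-envelope sketch in Python. The per-node configuration is our assumption, not a detail from the announcement: we assume dual-socket nodes with 64-core 2nd Gen AMD EPYC CPUs (128 cores per node) and one InfiniBand port per compute node and per head node.

    # Rough sizing sketch for the cluster described above.
    # Assumptions (not stated in the announcement): dual-socket nodes
    # with 64-core 2nd Gen AMD EPYC CPUs, and one EDR InfiniBand port
    # per compute node and per head node.

    TOTAL_CORES = 2816             # stated total processor cores
    CORES_PER_NODE = 2 * 64        # assumed dual 64-core EPYC sockets
    TOTAL_MEMORY_TB = 11           # stated total memory
    HEAD_NODES = 2                 # stated dual/redundant head nodes
    FABRIC_PORTS = 64              # stated 64-port fabric

    compute_nodes = TOTAL_CORES // CORES_PER_NODE               # -> 22
    memory_per_node_gb = TOTAL_MEMORY_TB * 1024 / compute_nodes # -> 512

    ports_used = compute_nodes + HEAD_NODES                     # -> 24
    ports_free = FABRIC_PORTS - ports_used                      # -> 40

    print(f"compute nodes:          {compute_nodes}")
    print(f"memory per node (GB):   {memory_per_node_gb:.0f}")
    print(f"fabric ports used:      {ports_used}")
    print(f"fabric ports remaining: {ports_free}")

Under these assumptions, roughly 40 of the 64 fabric ports remain available, consistent with the stated headroom for cluster growth and parallel storage.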
No-Cost Remote Cluster Benchmarking Drives Technology Selection
Working with Microway experts, the company used Microway's test drive cluster at no cost to evaluate the 2nd Gen AMD EPYC processors, running both the latest CFD application offerings and their in-house code. This offering is available for any Microway cluster opportunity in North America.
Based on a head-to-head comparison, the 2nd Gen AMD EPYC CPU-based cluster came out ahead of both existing x86 architecture-based cluster nodes and cloud offerings. The rigorous evaluation process provided a high level of confidence in the company's decision to deploy in-house resources to meet their exacting performance specifications.
“We are excited to announce this deployment of an extremely cost-effective and high-performance cluster powered by 2nd Gen AMD EPYC processors,” said Eliot Eshelman, VP of HPC Initiatives at Microway. “The 2nd Gen AMD EPYC processors deliver fantastic performance and hold many CPU benchmark world records. Moreover, future generations of AMD EPYC CPUs will be at the heart of the US Department of Energy’s Oak Ridge National Laboratory (ORNL) Frontier and Lawrence Livermore National Laboratory (LLNL) El Capitan exascale supercomputers.”