Monday, December 23, 2024

Supermicro’s Rack Scale Liquid-Cooled Solutions with the Industry’s Latest Accelerators Target AI and HPC Convergence 


“Supermicro continues to work with our AI and HPC customers to bring the latest technology, including total liquid cooling solutions, into their data centers,” said Charles Liang, president and CEO of Supermicro. “Our complete liquid cooling solutions can handle up to 100 kW per rack, which reduces the TCO in data centers and allows for denser AI and HPC computing. Our building block architecture allows us to bring the latest GPUs and accelerators to market, and with our trusted suppliers, we continue to bring new rack-scale solutions to the market that ship to customers with a reduced time to delivery.”

Supermicro application-optimized high-performance servers are designed to accommodate the highest-performing CPUs and GPUs for simulation, data analytics, and machine learning. The Supermicro 4U 8-GPU liquid-cooled server is in a class by itself, delivering petaflops of AI computing power in a dense form factor with NVIDIA HGX H100/H200 GPUs. Supermicro will soon ship the liquid-cooled Supermicro X14 SuperBlade in 8U and 6U configurations, the rackmount X14 Hyper, and the Supermicro X14 BigTwin. Several HPC-optimized server platforms will support the Intel Xeon 6900 with P-cores in a compact, multi-node form factor.

In addition, Supermicro continues its leadership in shipping the industry's broadest portfolio of liquid-cooled MGX products. Supermicro also confirms support for the latest accelerators from Intel and AMD: the new Intel Gaudi 3 accelerator and the AMD Instinct MI300X accelerator. With up to 120 nodes per rack using the Supermicro SuperBlade, large-scale HPC applications can be executed in just a few racks. Supermicro will display a wide range of servers at the International Supercomputing Conference, including Supermicro X14 systems incorporating Intel Xeon 6 processors.

Supermicro will also showcase and demonstrate a wide range of solutions designed specifically for HPC and AI environments at ISC 2024. The new 4U 8-GPU liquid-cooled servers with NVIDIA HGX H100 and H200 GPUs highlight the Supermicro lineup. These servers and others will support the NVIDIA HGX B200 GPUs when available. New systems with high-end GPUs accelerate AI training and HPC simulation by bringing more data closer to the GPU than previous generations through high-speed HBM3 memory. With the incredible density of the 4U liquid-cooled servers, a single rack delivers 8 servers x 8 GPUs x 1,979 TFLOPS FP16 (with sparsity) = 126+ petaflops of AI compute. The Supermicro SYS-421GE-TNHR2-LCC can use dual 4th or 5th Gen Intel Xeon processors, and the AS-4125GS-TNHR2-LCC is available with dual 4th Gen AMD EPYC CPUs.
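That per-rack figure can be reproduced with simple arithmetic. The sketch below uses only the numbers quoted in the sentence above (8 servers per rack, 8 GPUs per server, roughly 1,979 TFLOPS FP16 with sparsity per GPU) and is an illustration rather than a Supermicro specification.

```python
# Back-of-the-envelope check of the quoted per-rack FP16 throughput.
servers_per_rack = 8
gpus_per_server = 8
fp16_tflops_per_gpu = 1979  # FP16 with sparsity, as quoted for the HGX-class GPUs

rack_tflops = servers_per_rack * gpus_per_server * fp16_tflops_per_gpu
print(f"{rack_tflops:,} TFLOPS ~ {rack_tflops / 1000:.1f} petaflops per rack")
# 126,656 TFLOPS ~ 126.7 petaflops, matching the "126+ petaflops" figure.
```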

The new AS-8125GS-TNMR2 server gives users access to 8 AMD Instinct MI300X accelerators. This system also includes dual AMD EPYC™ 9004 Series processors with up to 128 cores/256 threads and up to 6TB of memory. Each AMD Instinct MI300X accelerator contains 192GB of HBM3 memory per GPU, all connected with an AMD Universal Base Board (UBB 2.0). Moreover, the new AS-2145GH-TNMR-LCC and AS-4145GH-TNMR APU servers are targeted at accelerating HPC workloads with the MI300A APU. Each APU combines a high-performance AMD CPU, GPU, and HBM3 memory; a single system totals 912 AMD CDNA 3 GPU compute units, 96 "Zen 4" cores, and 512GB of unified HBM3 memory.
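For context, those system-level totals are consistent with a four-APU configuration. The sketch below assumes the publicly documented per-APU MI300A figures (228 CDNA 3 compute units, 24 "Zen 4" cores, 128GB of HBM3), which are not stated in this article.

```python
# Illustrative aggregation of per-APU figures into the quoted system totals.
apus_per_system = 4          # assumed configuration
cus_per_apu = 228            # CDNA 3 compute units per MI300A (AMD published spec)
zen4_cores_per_apu = 24      # "Zen 4" cores per MI300A (AMD published spec)
hbm3_gb_per_apu = 128        # unified HBM3 per MI300A (AMD published spec)

print(apus_per_system * cus_per_apu)          # 912 compute units
print(apus_per_system * zen4_cores_per_apu)   # 96 cores
print(apus_per_system * hbm3_gb_per_apu)      # 512 GB unified HBM3
```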

At ISC 2024, a Supermicro 8U server with the Intel Gaudi 3 AI accelerator will be shown. This new system is designed for AI training and inference and can be networked directly with a traditional Ethernet fabric. Twenty-four 200-gigabit (Gb) Ethernet ports are integrated into every Intel Gaudi 3 accelerator, providing flexible, open-standard networking. In addition, 128GB of HBM2e high-speed memory is included. The Intel Gaudi 3 accelerator is designed to scale up and scale out efficiently from a single node to thousands of nodes to meet the expansive requirements of GenAI models. Supermicro's Petascale storage systems, which are critical for large-scale HPC and AI workloads, will also be displayed.
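The integrated networking figure translates into substantial aggregate bandwidth. The sketch below simply multiplies out the numbers quoted above; the eight-accelerator count per 8U system is an assumption used only for illustration.

```python
# Rough aggregate-bandwidth arithmetic for Gaudi 3's integrated Ethernet.
ports_per_accelerator = 24       # quoted above
gbps_per_port = 200              # quoted above
accelerators_per_system = 8      # assumed for an 8U system

per_accel_gbps = ports_per_accelerator * gbps_per_port
print(f"{per_accel_gbps} Gb/s (= {per_accel_gbps / 1000:.1f} Tb/s) per accelerator")
print(f"{accelerators_per_system * ports_per_accelerator} x 200Gb ports per system")
# 4800 Gb/s (4.8 Tb/s) per accelerator; 192 ports in the assumed 8-accelerator system.
```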

The Supermicro SuperCloud Composer data center management software will also be demonstrated, showing how an entire data center, including the status of all liquid-cooled servers, can be monitored and managed from a single console.

International Supercomputing Conference (ISC) — Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is addressing the most demanding requirements from customers who want to expand their AI and HPC capacities while reducing data center power requirements. Supermicro delivers complete liquid-cooling solutions, including cold plates, CDUs, CDMs, and entire cooling towers. A significant reduction in a data center's PUE is quickly realized with liquid-cooled servers and infrastructure, which can reduce overall power consumption in the data center by up to 40%.
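As a rough illustration of how a lower PUE translates into facility-level savings (all numbers below are assumptions, not Supermicro data): PUE is total facility power divided by IT power, so holding the IT load fixed and lowering PUE directly lowers the total draw; removing or slowing server fans lowers the IT load itself, which is how overall savings can approach the 40% figure.

```python
# Illustrative PUE arithmetic (assumed numbers, not Supermicro measurements).
# PUE = total facility power / IT equipment power.
it_load_kw = 1000      # assumed IT load
pue_air = 1.6          # assumed air-cooled facility
pue_liquid = 1.1       # assumed liquid-cooled facility

total_air_kw = it_load_kw * pue_air        # 1600 kW
total_liquid_kw = it_load_kw * pue_liquid  # 1100 kW
saving = 1 - total_liquid_kw / total_air_kw
print(f"{saving:.0%} lower total facility power in this example")  # ~31%
```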
