ASUSTeK Computer Inc.

03/19/2024 | Press release | Distributed by Public on 03/18/2024 20:59

ASUS Presents MGX-Powered Data-Center Solutions


Leading full stack of AI supercomputing solutions unveiled at GTC 2024

  • ASUS NVIDIA MGX-powered servers on show: Visit ASUS booth #730 to experience the apex of GPU server innovation to accelerate AI supercomputing
  • Advanced liquid cooling: ASUS is collaborating closely with industry-leading cooling solution providers to maximize power-usage effectiveness
  • Confident AI software solutions: The no-code ASUS AI platform plus an integrated software stack enables businesses to accelerate AI development

TAIPEI, Taiwan, March 19, 2024 - ASUS today announced its participation at the NVIDIA GTC global AI conference, where it will showcase its solutions at booth #730. On show will be the apex of ASUS GPU server innovation, the ESC NM1-E1 and ESC NM2-E1, powered by the NVIDIA MGX modular reference architecture, accelerating AI supercomputing to new heights.

To help meet the increasing demands for generative AI, ASUS uses the latest technologies from NVIDIA, including the B200 Tensor Core GPU, the GB200 Grace Blackwell Superchip, and H200 NVL, to help deliver optimized AI server solutions to boost AI adoption across a wide range of industries.

To better support enterprises in establishing their own generative AI environments, ASUS offers an extensive lineup of servers, from entry-level to high-end GPU server solutions, plus a comprehensive range of liquid-cooled rack solutions, to meet diverse workloads. Additionally, by leveraging its MLPerf expertise, the ASUS team is pursuing excellence by optimizing hardware and software for large-language-model (LLM) training and inferencing, and by seamlessly integrating total AI solutions to meet the demands of the AI supercomputing landscape.

Tailored AI solutions with the all-new ASUS NVIDIA MGX-powered server

The latest ASUS NVIDIA MGX-powered 2U servers, ESC NM1-E1 and ESC NM2-E1, showcase the NVIDIA GH200 Grace Hopper Superchip, which offers high performance and efficiency. The NVIDIA Grace CPU includes Arm® Neoverse V2 CPU cores with Scalable Vector Extensions (SVE2) and is connected by NVIDIA NVLink-C2C technology. Integrated with NVIDIA BlueField-3 DPUs and ConnectX-7 network adapters, ASUS MGX-powered servers deliver a blazing data throughput of 400Gb/s, ideal for enterprise AI development and deployment. Coupled with NVIDIA AI Enterprise, an end-to-end, cloud-native software platform for building and deploying enterprise-grade AI applications, the MGX-powered ESC NM1-E1 provides unparalleled flexibility and scalability for AI-driven data centers, HPC, data analytics and NVIDIA Omniverse applications.

Advanced liquid-cooling technology

The surge in AI applications has heightened the demand for advanced server-cooling technology. ASUS direct-to-chip (D2C) cooling offers a quick, simple option that distinguishes itself from the competition. D2C can be rapidly deployed, lowering data center power-usage effectiveness (PUE) ratios. The ASUS ESC N8-E11 and RS720QN-E11-RS24U servers support manifolds and cold plates, enabling diverse cooling solutions. Additionally, ASUS servers accommodate a rear-door heat exchanger compliant with standard rack-server designs, eliminating the need to replace all racks - only the rear door is required to enable liquid cooling in the rack. By closely collaborating with industry-leading cooling solution providers, ASUS provides comprehensive enterprise-grade cooling solutions and is committed to minimizing data center PUE, carbon emissions and energy consumption to assist in the design and construction of greener data centers.
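For context on the PUE metric mentioned above, a minimal sketch of how it is computed: PUE is defined as total facility power divided by IT equipment power, so reducing the cooling term moves PUE toward the ideal of 1.0. The rack figures below are hypothetical illustrations, not ASUS measurements.

```python
# Illustrative PUE calculation. PUE = total facility power / IT equipment
# power; a lower value means less overhead per watt of useful compute.

def pue(it_power_kw: float, cooling_power_kw: float, overhead_kw: float) -> float:
    """Power-usage effectiveness for a facility with the given power draws."""
    total_facility_kw = it_power_kw + cooling_power_kw + overhead_kw
    return total_facility_kw / it_power_kw

# Hypothetical 100 kW rack: air cooling vs direct-to-chip liquid cooling.
air_cooled = pue(100, 45, 15)     # (100 + 45 + 15) / 100 = 1.60
liquid_cooled = pue(100, 15, 15)  # (100 + 15 + 15) / 100 = 1.30
print(f"air-cooled PUE:    {air_cooled:.2f}")
print(f"liquid-cooled PUE: {liquid_cooled:.2f}")
```

In this hypothetical, cutting cooling power from 45 kW to 15 kW drops PUE from 1.60 to 1.30, which is the kind of improvement D2C cooling targets.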

Confident AI software solutions

With its world-leading expertise in AI supercomputing, ASUS provides optimized server design and rack integration for data-intensive workloads. At GTC, ASUS will showcase the ESC4000A-E12 to demonstrate a no-code AI platform with an integrated software stack, enabling businesses to accelerate AI development in LLM pre-training, fine-tuning and inference - reducing risks and time-to-market without starting from scratch. Additionally, ASUS provides a comprehensive solution that supports LLMs of different parameter counts - 7B, 33B and even over 180B - with customized software, facilitating seamless server data dispatching. By optimizing the allocation of GPU resources for fine-tuning, the software stack ensures that AI applications and workloads can run without wasting resources, which helps to maximize efficiency and return on investment (ROI). Furthermore, the software-hardware synergy delivered by ASUS provides businesses with the flexibility to choose the AI capabilities that best fit their needs, allowing them to push ROI still further.
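To make the resource-allocation idea concrete, here is a first-order sketch (an illustration, not ASUS's actual software) of why model parameter count drives GPU allocation: fp16/bf16 weights occupy 2 bytes per parameter, so the weights alone set a floor on how many GPUs a model needs. Real schedulers also account for activations, KV cache and optimizer state, so these are lower bounds.

```python
# Illustrative sketch: minimum number of GPUs whose combined memory can hold
# an LLM's weights, assuming fp16/bf16 storage at 2 bytes per parameter.
import math

def min_gpus_for_weights(params_billion: float,
                         gpu_mem_gb: float = 80.0,
                         bytes_per_param: int = 2) -> int:
    """Lower bound on GPU count needed to hold the model weights alone."""
    weights_gb = params_billion * bytes_per_param  # 1e9 params * 2 B = 2 GB per billion
    return math.ceil(weights_gb / gpu_mem_gb)

# The parameter counts mentioned in the release, on hypothetical 80 GB GPUs.
for size in (7, 33, 180):
    print(f"{size}B params -> at least {min_gpus_for_weights(size)} x 80 GB GPU(s)")
```

Under these assumptions, a 7B or 33B model fits on a single 80 GB GPU, while a 180B model needs at least five GPUs just for its weights, which is why allocation software must match model size to hardware to avoid stranding resources.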

This innovative software approach optimizes the allocation of dedicated GPU resources for AI training and inferencing, boosting system performance. The integrated software-hardware synergy caters to diverse AI training needs, empowering businesses of all sizes, including SMBs, to leverage advanced AI capabilities with ease and efficiency.

To address the evolving requirements of enterprise IoT applications, ASUS, renowned for its robust computing capabilities, is collaborating with industrial partners, software experts and domain-focused integrators. These collaborations aim to offer turnkey server support for complete solutions, including full installation and testing for modern data-center, AI and HPC applications.



Press Contacts

Global Media Contact: [email protected]
Investor Relationship Contact: [email protected]