Editor/Author: Mr. Wangberk Kinghenryblake Pacysunny
2 min read
20 May
MSI Introduces New AI Server Solutions Built on NVIDIA MGX and NVIDIA DGX Station Reference Architectures



MSI has introduced new AI server solutions using NVIDIA MGX and NVIDIA DGX Station reference architectures designed to support the expanding requirements of enterprise, HPC, and accelerated computing workloads.

The company's new server platforms feature modular and scalable building blocks aimed at addressing increasing AI demands in both enterprise and cloud data centre environments.

Danny Hsu, General Manager of Enterprise Platform Solutions at MSI, said, "AI adoption is transforming enterprise data centers as organizations move quickly to integrate advanced AI capabilities. With the explosive growth of generative AI and increasingly diverse workloads, traditional servers can no longer keep pace. MSI's AI solutions, built on the NVIDIA MGX and NVIDIA DGX Station reference architectures, deliver the scalability, flexibility, and performance enterprises need to future-proof their infrastructure and accelerate their AI innovation."


One of the main highlights is a rack solution based on the NVIDIA Enterprise Reference Architecture, comprising a four-node scalable unit constructed on the MSI AI server utilising NVIDIA MGX. Each server in this solution contains eight NVIDIA H200 NVL GPUs, further enhanced by the NVIDIA Spectrum-X networking platform to enable scalable AI workloads. This modular setup provides the capability to expand to a maximum of 32 server systems, meaning up to 256 NVIDIA H200 NVL GPUs can be supported within a single deployment.
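The scale-out figures above can be sanity-checked with a short sketch. The constants below are taken from the configuration as stated (four-node scalable units, eight NVIDIA H200 NVL GPUs per server, a maximum of 32 servers); this is an illustrative calculation, not an official sizing tool.

```python
# Scale-out arithmetic for the MGX-based rack solution described above.
# Figures are those stated in the article; names are illustrative.
GPUS_PER_SERVER = 8    # NVIDIA H200 NVL GPUs in each MGX server
SERVERS_PER_UNIT = 4   # servers in one scalable unit
MAX_SERVERS = 32       # stated maximum servers per deployment

max_units = MAX_SERVERS // SERVERS_PER_UNIT   # scalable units at full build-out
max_gpus = MAX_SERVERS * GPUS_PER_SERVER      # total GPUs at full build-out

print(f"{max_units} scalable units -> {MAX_SERVERS} servers, {max_gpus} GPUs")
# 8 scalable units -> 32 servers, 256 GPUs
```

This confirms the article's figure of up to 256 GPUs in a single deployment.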



MSI states that this architecture is optimised for multi-node AI and hybrid applications and is designed to support complex computational tasks expected in the latest data centre operations. It is built to accommodate a range of use cases, including those leveraging large language models and other demanding AI workloads.


The AI server platforms have been constructed using the NVIDIA MGX modular architecture, establishing a foundation for accelerated computing in AI, HPC, and NVIDIA Omniverse contexts. The MSI 4U AI server provides configuration options using either Intel or AMD CPUs, aimed at large-scale AI projects such as deep learning training and model fine-tuning.



The CG480-S5063 platform features dual Intel Xeon 6 processors and eight full-height, full-length dual-width GPU slots that support NVIDIA H200 NVL and NVIDIA RTX PRO 6000 Blackwell Server Edition, with power capacities up to 600W. It offers 32 DDR5 DIMM slots and twenty PCIe 5.0 E1.S NVMe bays for high memory bandwidth and rapid data access, with its modular design supporting both storage needs and scalability.



Another server, the CG290-S3063, is a 2U AI platform also constructed on NVIDIA MGX architecture. It includes a single-socket Intel Xeon 6 processor, 16 DDR5 DIMM slots, and four GPU slots with up to 600W capacity. The CG290-S3063 incorporates PCIe 5.0 expansion, four rear 2.5-inch NVMe bays, and two M.2 NVMe slots to provide support for various AI tasks, from smaller-scale inference to extensive AI training workloads.




MSI's server platforms have been designed for deployment within enterprise-grade AI environments, offering support for the NVIDIA Enterprise AI Factory validated design.

