Supermicro and Nvidia Amp Up AI Data Centers
[QUOTE="Nehanda, post: 29108, member: 2262"] Supermicro just rolled out fresh systems that run on the NVIDIA Blackwell Ultra platform. These include the NVIDIA HGX B300 NVL16 and NVIDIA GB300 NVL72 platforms. Supermicro partnered with NVIDIA to create AI solutions that work better for tough computing jobs. Their new stuff handles AI reasoning, agent AI, and video tasks faster than ever before. Charles Liang leads Supermicro as president and CEO. He talked about their long friendship with NVIDIA, bringing new AI tech to market. Their Data Center Building Block approach makes developing air and liquid-cooled systems easier. These systems match the heat needs of NVIDIA HGX B300 NVL16 and GB300 NVL72 perfectly. Their advanced cooling works with warm water - 40°C for 8-node racks or 35°C for 16-node setups. This smart design cuts power use by 40% and saves water, helping both the planet and your wallet. NVIDIA built the Blackwell Ultra platform to tackle the hardest AI jobs by fixing problems caused by limited memory and network speed. Each NVIDIA Blackwell Ultra GPU comes with 288 GB HBM3e memory, making AI training and running big models much faster. The networking setup with NVIDIA Quantum-X800 InfiniBand and Spectrum-X Ethernet doubles data speed up to 800 Gb/s. Supermicro offers two main solutions with NVIDIA Blackwell Ultra: systems based on NVIDIA HGX B300 NVL16 for any data center and the NVIDIA GB300 NVL72 with NVIDIA Grace Blackwell architecture. The HGX systems serve as standard building blocks for AI training groups. They feature eight GPUs linked together and a 1:1 GPU-to-NIC ratio for high speed. Supermicro created a new 8U platform that makes the most from NVIDIA HGX B300 NVL16 boards. All GPUs connect in a 16-GPU domain, moving data at 1.8 TB/s with 2.3 TB of HBM3e per system. Supermicro improved network performance by adding eight NVIDIA ConnectX-8 NICs right into the main board. These support 800 Gb/s speeds between nodes using NVIDIA Quantum-X800 InfiniBand or Spectrum-X Ethernet. The NVIDIA GB300 NVL72 packs 72 NVIDIA Blackwell Ultra GPUs with 36 NVIDIA Grace CPUs in one rack. This creates massive computing power with upgraded HBM3e memory—over 20 TB interconnected at 1.8 TB/s across all 72 GPUs. The NVIDIA ConnectX-8 SuperNIC gives 800 Gb/s speeds for both GPU-to-NIC and NIC-to-network talks, dramatically boosting how fast the entire AI system works together. Supermicro knows cooling, data center setup, and building block methods really well, which helps them deliver NVIDIA Blackwell Ultra systems faster than anyone else. They offer complete liquid cooling options, including new direct-to-chip cold plates, a 250kW in-rack CDU, and a cooling tower. Supermicro also helps companies build data centers from scratch, planning, designing, powering up, testing, and setting up everything exactly how they need it. Their 8U NVIDIA HGX B300 NVL16 system works for any data center with its streamlined heat-friendly case and 2.3 TB HBM3e memory per system. The NVIDIA GB300 NVL72 delivers supercomputer AI power in just one rack with twice the memory and network speed of earlier models. You can see Supermicro at GTC 2025 in San Jose from March 17-21. Visit booth #1115 to check out X14/H14 B200, B300, and GB300 systems, plus their rack-scaled liquid-cooled solutions. [/QUOTE]