Nvidia announced that Taiwan’s leading computer manufacturers will launch the first wave of systems powered by the Nvidia Grace CPU Superchip and Grace Hopper Superchip, covering a wide range of workloads including digital twins, artificial intelligence, high-performance computing, cloud graphics, and gaming.
Dozens of server models from ASUS, Foxconn Industrial Internet, GIGABYTE, QCT, Supermicro, and Wiwynn are expected in the first half of 2023. Grace-powered systems will join x86 and other Arm-based servers to offer customers a broad range of options for achieving high performance and efficiency in their data centers.
“Our partners’ new systems, powered by our Grace Superchips, will bring the power of accelerated computing to new markets and industries around the world,” says Ian Buck, vice president of Hyperscale and HPC at Nvidia.
The upcoming servers are based on four new system architectures featuring the Grace CPU Superchip and the Grace Hopper Superchip, which Nvidia announced at its last two GTC conferences. The 2U form-factor designs provide server blueprints and baseboards for OEMs and ODMs to quickly bring to market systems for the Nvidia CGX cloud gaming, Nvidia OVX digital twin, and Nvidia HGX AI and HPC platforms.
Running modern workloads
The two Nvidia Grace Superchip technologies enable a wide range of compute-intensive workloads across multiple system architectures:
The Grace CPU Superchip features two CPU chips, coherently connected over an NVLink-C2C interconnect, with up to 144 high-performance Armv9 cores with Scalable Vector Extensions and a 1 TB/s memory subsystem. The innovative design delivers the highest performance and twice the memory bandwidth and energy efficiency of today’s leading server processors, to handle the most demanding HPC, data analytics, digital twin, cloud gaming, and high-bandwidth computing applications;
The Grace Hopper Superchip combines an Nvidia Hopper GPU and an Nvidia Grace CPU over NVLink-C2C in an integrated module designed to handle large-scale HPC and AI applications. Over the NVLink-C2C interconnect, the Grace CPU transfers data to the Hopper GPU 15 times faster than conventional CPUs.
Grace wide server suite for AI, HPC, digital twins and cloud gaming
The Grace CPU Superchip and Grace Hopper Superchip server design portfolio includes single-baseboard systems in one-, two-, and four-way configurations, available in four workload-specific designs that server makers can customize to customer needs:
Nvidia HGX Grace Hopper systems for AI training, inference, and HPC, available with the Grace Hopper Superchip and the Nvidia BlueField-3 DPU;
Nvidia HGX Grace systems for HPC and supercomputing, featuring a CPU-only design with the Grace CPU Superchip and BlueField-3;
Nvidia OVX systems for digital twins and collaboration workloads, featuring the Grace CPU Superchip, BlueField-3, and Nvidia GPUs;
Nvidia CGX systems for cloud graphics and gaming, featuring the Grace CPU Superchip, BlueField-3, and Nvidia A16 GPUs.
Nvidia is expanding its Nvidia-Certified Systems program to include servers with the Nvidia Grace CPU Superchip and Grace Hopper Superchip, in addition to x86 CPUs. The first OEM server certifications are expected soon after partner systems begin shipping.
The Grace server portfolio is optimized for Nvidia’s full suite of accelerated computing software, including Nvidia HPC, Nvidia AI, Omniverse, and Nvidia RTX.