Nvidia sets out timeline for H100 GPUs – now for HGX, next year for DGX

GTC Nvidia’s long-awaited Hopper H100 accelerators will begin shipping next month in OEM-built HGX systems, the silicon giant said at its GPU Technology Conference (GTC) event today.

However, those waiting to get their hands on Nvidia’s DGX H100 systems will have to wait until sometime in Q1 next year. DGX is Nvidia’s own line of workstations and servers built around its GPUs and interconnects, while HGX systems are partner-built servers using Nvidia’s technology.

And while Nvidia is hyping its Hopper architecture in the datacenter, most of the enterprise kit announced this week won’t be getting the chip giant’s flagship architecture anytime soon.

At the edge, Nvidia seems content to eke out a full life from its Ampere architecture.

Today, Nvidia detailed the next-gen edge AI and robotics platform it calls IGX.

Nvidia’s IGX platform is a full-sized system board built around an Orin Industrial system on module

IGX is an “all-in-one computing platform to accelerate the deployment of real-time intelligent machines and medical devices,” Kimberly Powell, veep of healthcare at Nvidia, said. At its core, the system is essentially an expanded version of Nvidia’s Jetson AGX Orin module, announced this spring.

“IGX is a full system with the Nvidia Orin robotics processor, Ampere tensor-core GPU, the ConnectX streaming I/O processor, a functional safety island, and safety microcontroller unit because more and more robots and humans will be working in the same environment,” she added.

In terms of performance, there’s not much new here. We’re told the platform is based on an Orin industrial system on module with 64GB of memory that’s comparable in performance to the AGX Orin module launched earlier this year. That module featured 32GB of memory, an octa-core Arm Cortex-A78AE CPU, and an Ampere-based GPU.

The IGX does gain an integrated ConnectX-7 NIC for high-speed connectivity by way of two 200Gbps interfaces. The board also appears to feature a full complement of M.2 storage, PCIe slots, and at least one legacy PCI slot for expansion.

Nvidia is aiming the IGX platform at a variety of edge AI and robotics use cases in healthcare, manufacturing, and logistics, where confidentiality or latency requirements make more centralized systems impractical.

Like the AGX Orin, the system is complemented by Nvidia’s AI Enterprise software suite and Fleet Command platform for deployment and management.

One of the first applications of the IGX platform will be powering Nvidia’s medical imaging and robotics software.

“Nvidia Clara Holoscan is our application framework that sits on top of IGX for medical devices and imaging robotics pipelines,” Powell explained.

Three medical device vendors – Activ Surgical, Moon Surgical, and Proximie – plan to use IGX and Clara Holoscan to power their surgical robotics and telepresence platforms. IGX Orin developer kits are slated to ship early next year, with production systems available from ADLink, Advantech, Dedicated Computing, Kontron, MBX, and Onyx, to name a handful.

On the topic of Orin, Nvidia also unveiled its Jetson Orin Nano compute modules. Orin Nano is available in two configurations at launch: an 8GB version capable of 40 TOPS of AI inferencing, and a cut-down 4GB version capable of 20 TOPS.

Nvidia's Jetson Orin Nano modules

Nvidia’s new Jetson Orin Nano modules are pin-compatible with their predecessors

Like prior Jetson modules, the Orin Nano uses a pin-compatible edge connector reminiscent of that used for laptop SODIMM memory, and consumes between 5W and 15W depending on application and SKU. Nvidia’s Jetson Orin Nano modules will be available in January, starting at $199.
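As a rough back-of-the-envelope check, those figures bound the efficiency of the two SKUs. Note this sketch assumes the quoted 5W–15W envelope applies to both modules; Nvidia has not said how power maps to each SKU, so these are bounds, not measured numbers:

```python
# Efficiency bounds for the two Jetson Orin Nano SKUs, using only the
# TOPS and wattage figures quoted above. Per-SKU power draw within the
# 5-15 W range is an assumption, so these are bounds, not measurements.
SKUS = {
    "Orin Nano 8GB": 40,  # TOPS of AI inferencing
    "Orin Nano 4GB": 20,
}
POWER_MIN_W, POWER_MAX_W = 5, 15

for name, tops in SKUS.items():
    best = tops / POWER_MIN_W   # most favorable case (lowest power)
    worst = tops / POWER_MAX_W  # least favorable case (highest power)
    print(f"{name}: {worst:.1f}-{best:.1f} TOPS/W")
```

Even at the pessimistic end, the 8GB module works out to roughly 2.7 TOPS per watt, which is the kind of envelope that makes battery- and thermally-constrained edge deployments plausible.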

An OVX refresh

Nvidia’s OVX servers, which are designed to run its Omniverse platform, won’t be running on Hopper either.

The company’s second-gen visualization and digital-twinning systems instead come equipped with eight L40 GPUs. The cards are based on the company’s next-generation Ada Lovelace architecture and feature Nvidia’s third-gen ray tracing cores and fourth-gen Tensor Cores.

The GPUs are accompanied by a pair of Ice Lake Intel Xeon Platinum 8362 CPUs, for a total of 128 processor threads at up to 3.6GHz.
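That 128-thread figure follows directly from the CPU spec: the Xeon Platinum 8362 is a 32-core, 64-thread part (a detail from Intel's public spec sheet rather than Nvidia's announcement), so a dual-socket node works out as:

```python
# Thread-count arithmetic for the dual-Xeon OVX compute tray.
# Core and thread counts per 8362 come from Intel's public specs,
# not from Nvidia's announcement.
SOCKETS = 2
CORES_PER_SOCKET = 32   # Xeon Platinum 8362 (Ice Lake)
THREADS_PER_CORE = 2    # Hyper-Threading enabled

total_threads = SOCKETS * CORES_PER_SOCKET * THREADS_PER_CORE
print(f"Total threads: {total_threads}")  # 128, matching the quoted figure
```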

Nvidia's OVX Server for Omniverse

Nvidia’s revised OVX system packs eight Ada Lovelace GPUs into its golden chassis

The compute system is accompanied by three ConnectX-7 NICs, each capable of 400Gbps of throughput, and 16TB of NVMe storage. While the system is available as individual nodes, Nvidia envisions the system being deployed as part of what it calls an OVX SuperPod, which incorporates 32 systems connected using the company’s 51.2Tbps Spectrum-3 switches.
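As a quick sanity check on those fabric numbers, using only the figures quoted above, the aggregate NIC bandwidth of a full 32-node SuperPod fits comfortably within a single 51.2Tbps switch:

```python
# Sanity-check the OVX SuperPod fabric figures quoted above.
NODES = 32
NICS_PER_NODE = 3
NIC_GBPS = 400          # ConnectX-7, per NIC
SWITCH_TBPS = 51.2      # quoted switch capacity

aggregate_tbps = NODES * NICS_PER_NODE * NIC_GBPS / 1000  # Gbps -> Tbps
print(f"Aggregate node bandwidth: {aggregate_tbps} Tbps")
print(f"Fits in one switch: {aggregate_tbps <= SWITCH_TBPS}")
```

At 38.4Tbps of aggregate node bandwidth, a single switch has headroom even with every NIC saturated, which suggests the SuperPod fabric is not oversubscribed at the node-facing edge.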

The second-gen systems will be available from Lenovo, Supermicro, and Inspur from 2023. In the future, Nvidia plans to expand availability of the systems to additional partners.

Hopped up on Drive Thor

The only bit of kit announced at GTC this week that is getting Nvidia’s Hopper architecture is the Drive Thor autonomous vehicle computer system.

Drive Thor replaces Nvidia’s Atlan platform on its 2025 roadmap and promises to deliver 2,000 TOPS of inferencing performance when it launches.

Nvidia's autonomous vehicle computer

Nvidia’s Drive Thor autonomous vehicle computer promises 2,000 TOPS of performance when it launches in 2025

“Drive Thor comes packed with cutting-edge capabilities introduced in our Grace CPU, our Hopper GPU, and our next-generation GPU architecture,” Danny Shapiro, VP of Automotive at Nvidia, told a press briefing. He said Drive Thor is designed to unify the litany of computer systems that power modern automobiles into a single centralized platform.

“Look at today’s advanced driver-assistance systems – parking, driver monitoring, camera mirrors, digital instrument cluster, infotainment systems – they’re all on different computers distributed throughout the vehicle,” he said. “In 2025, however, these functions will no longer be separate computers. Rather, Drive Thor will enable manufacturers to efficiently consolidate these functions into a single system.”

To cope with all the information streaming from automobile sensors, the chip features multi-compute-domain isolation, which Nvidia says allows the chip to run concurrent critical processes without interruption.

The tech also allows the chip to run multiple operating systems simultaneously to suit the various vehicle applications. For example, the car’s core operating system might run on Linux, while the infotainment system might run QNX or Android.

However, it is unknown when we might get to see the tech in action. As it stands, all three of Nvidia’s launch partners – Zeekr, Xpeng, and QCraft – are based in China. ®
