Nvidia Corp. today introduced the HGX H200 computing platform, a powerful new system featuring the upcoming H200 Tensor Core graphics processing unit based on its Hopper architecture, with advanced memory built to handle the massive amounts of data required by artificial intelligence and supercomputing workloads.
The company announced the new platform (pictured) during today’s Supercomputing 2023 conference in Denver, Colorado. It revealed that the H200 will be the first GPU built with HBM3e memory, a high-speed memory designed to accelerate large language models and high-performance computing workloads for scientific and industrial endeavors.
The H200 is the successor to the H100, Nvidia’s first GPU built on the Hopper architecture, which introduced a feature called the Transformer Engine designed to speed up natural language processing models. With the addition of the new HBM3e memory, the H200 offers 141 gigabytes of memory at 4.8 terabytes per second, nearly double the capacity and 2.4 times the bandwidth of the Nvidia A100 GPU.