Product Description
NVIDIA H200 Tensor Core GPU - 141 GB of HBM3e GPU memory - 4.8 TB/s of memory bandwidth
The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities. As the first GPU with HBM3e, the H200's larger and faster memory fuels the acceleration of generative AI and large language models (LLMs) while advancing scientific computing for HPC workloads.
Highlights
Experience Next-Level Performance
Llama2 70B Inference: 1.9X Faster
GPT-3 175B Inference: 1.6X Faster
High-Performance Computing: 110X Faster
Benefits
Higher Performance And Larger, Faster Memory
Based on the NVIDIA Hopper architecture, the NVIDIA H200 is the first GPU to offer 141 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s). That's nearly double the capacity of the NVIDIA H100 Tensor Core GPU, with 1.4X more memory bandwidth. The H200's larger and faster memory accelerates generative AI and LLMs while advancing scientific computing for HPC workloads, with better energy efficiency and lower total cost of ownership.
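As a back-of-the-envelope check of these ratios (assuming the commonly published H100 SXM figures of 80 GB of HBM3 at 3.35 TB/s, which are not stated on this page):

```python
# Back-of-the-envelope check of the capacity and bandwidth ratios above.
# H100 figures are assumed from commonly published H100 SXM specs
# (80 GB HBM3, 3.35 TB/s); they are not stated on this page.
h100_capacity_gb, h100_bandwidth_tbs = 80, 3.35
h200_capacity_gb, h200_bandwidth_tbs = 141, 4.8  # figures from this page

print(f"Capacity ratio:  {h200_capacity_gb / h100_capacity_gb:.2f}x")      # ~1.76x, "nearly double"
print(f"Bandwidth ratio: {h200_bandwidth_tbs / h100_bandwidth_tbs:.2f}x")  # ~1.43x, the quoted 1.4X
```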
Preliminary measured performance, subject to change.
Llama2 13B: ISL 128, OSL 2K | Throughput | H100 1x GPU BS 64 | H200 1x GPU BS 128
GPT-3 175B: ISL 80, OSL 200 | x8 H100 GPUs BS 64 | x8 H200 GPUs BS 128
Llama2 70B: ISL 2K, OSL 128 | Throughput | H100 1x GPU BS 8 | H200 1x GPU BS 32.
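To make the benchmark notation above concrete: ISL is input sequence length, OSL is output sequence length, and BS is batch size. A minimal sketch, with a made-up batch latency, of how generation throughput follows from those parameters (not NVIDIA's benchmark harness):

```python
# Hypothetical sketch of the benchmark notation above (not NVIDIA's harness):
# ISL = input sequence length, OSL = output sequence length, BS = batch size.
def generation_throughput(batch_size: int, osl: int, batch_latency_s: float) -> float:
    """Tokens generated per second: BS requests each emit OSL tokens per batch."""
    return batch_size * osl / batch_latency_s

# Llama2 70B config from the list above (ISL 2K, OSL 128, BS 32 on H200),
# with a made-up batch latency purely for illustration.
print(f"{generation_throughput(batch_size=32, osl=128, batch_latency_s=10.0):.1f} tokens/s")  # 409.6
```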
Unlock Insights With High-Performance LLM Inference
In the ever-evolving landscape of AI, businesses rely on LLMs to address a diverse range of inference needs. An AI inference accelerator must deliver the highest throughput at the lowest TCO when deployed at scale for a massive user base.
The H200 boosts inference speed by up to 2X compared to H100 GPUs when handling LLMs like Llama2.
Supercharge High-Performance Computing
Memory bandwidth is crucial for HPC applications, as it enables faster data transfer and reduces complex processing bottlenecks. For memory-intensive HPC applications like simulations, scientific research, and artificial intelligence, the H200's higher memory bandwidth ensures that data can be accessed and manipulated efficiently, leading to up to 110X faster time to results compared to CPUs.
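As a rough sketch of why bandwidth dominates for memory-bound work, a simplified model treats time to results as bytes moved divided by bandwidth; the CPU figure and working-set size below are illustrative assumptions, not measured values:

```python
# Simplified memory-bound model: time to stream a working set is
# bytes moved / bandwidth (ignores compute, caching, and overlap).
# The CPU bandwidth and working-set size below are illustrative assumptions.
def transfer_time_s(bytes_moved: float, bandwidth_bytes_per_s: float) -> float:
    return bytes_moved / bandwidth_bytes_per_s

working_set_bytes = 500e9  # hypothetical 500 GB streamed per solver step
for name, bandwidth in [("CPU socket (~0.3 TB/s, assumed)", 0.3e12),
                        ("H200 (4.8 TB/s)", 4.8e12)]:
    print(f"{name}: {transfer_time_s(working_set_bytes, bandwidth):.2f} s per step")
```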
Projected performance, subject to change.
HPC MILC: dataset NERSC Apex Medium | HGX H200 4-GPU | dual Sapphire Rapids 8480
HPC Apps: CP2K: dataset H2O-32-RI-dRPA-96points | GROMACS: dataset STMV | ICON: dataset r2b5 | MILC: dataset NERSC Apex Medium | Chroma: dataset HMC Medium | Quantum Espresso: dataset AUSURF112 | 1x H100 | 1x H200.
Single-node HGX measured performance | A100 April 2021 | H100 TensorRT-LLM Oct 2023 | H200 TensorRT-LLM Oct 2023
The NVIDIA Hopper architecture delivers an unprecedented performance leap over its predecessor and continues to raise the bar through ongoing software enhancements with the H100, including the recent release of powerful open-source libraries.
The introduction of the H200 continues that momentum with more performance. Investing in the H200 ensures performance leadership today and, with continued improvements to supported software, into the future.