Quick tech specs
- Unprecedented Performance, Scalability and Security for Every Data Centre
- Faster HBM3 Memory and NVLink Connectivity
- RoHS Compliant
- Support for Large Language Models in Mainstream PCIe-based Server Systems
- Memory Capacity: 94GB
Know your gear
NVIDIA® H100 NVL supercharges large language model inference in mainstream PCIe-based server systems. With increased raw performance, larger and faster HBM3 memory, and NVIDIA NVLink™ connectivity via bridges, mainstream systems with H100 NVL outperform NVIDIA A100 Tensor Core systems by up to 5X on Llama 2 70B. H100 NVL also includes NVIDIA AI Enterprise software.
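For checking the installed card against the specs above, the sketch below (an illustrative example, not part of the product documentation) uses the standard CUDA runtime API to list each GPU's name and on-board memory, which should report approximately the 94GB capacity quoted for the H100 NVL.

```cpp
// Minimal sketch: enumerate CUDA devices and print name and memory capacity.
// Assumes the CUDA toolkit is installed; compile with: nvcc query_gpu.cu -o query_gpu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // totalGlobalMem is reported in bytes; convert to GiB for readability.
        printf("GPU %d: %s, %.1f GiB memory\n",
               i, prop.name, prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```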