Quick tech specs
- AI Inference and Mainstream Compute
- Supports PCI Express Gen 4
- Error Correction
- Quality of Service Across Diverse Workloads
- High-Speed HBM2 Memory
- Compute Preemption
Know your gear
Bring accelerated performance to every enterprise workload with the PNY NVIDIA A30 Tensor Core GPU. Built on the NVIDIA Ampere architecture with Tensor Cores and Multi-Instance GPU (MIG), it delivers secure speedups across diverse workloads, including AI inference at scale and high-performance computing (HPC) applications. By combining fast memory bandwidth and low power consumption in a PCIe form factor optimized for mainstream servers, the A30 enables an elastic data center and delivers maximum value for enterprises.
The NVIDIA A30 Tensor Core GPU provides a versatile platform for mainstream enterprise workloads such as AI inference, training, and HPC. With TF32 and FP64 Tensor Core support, backed by an end-to-end hardware and software stack, the A30 lets mainstream AI training and HPC applications get up and running quickly. MIG ensures quality of service (QoS) through secure, hardware-partitioned, right-sized GPU instances for diverse users, making optimal use of the GPU's compute resources.
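As an illustration of the MIG partitioning described above, an administrator splits the card into isolated GPU instances with `nvidia-smi`. This is a sketch, not a setup guide: the specific profile names assume the A30's 24 GB, four-slice MIG layout, and all commands require driver support and root privileges.

```shell
# Enable MIG mode on GPU 0 (may require a GPU reset before it takes effect)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this card supports
sudo nvidia-smi mig -lgip

# Create two isolated GPU instances (here an assumed 2g.12gb and 1g.6gb
# profile) and their default compute instances in one step (-C)
sudo nvidia-smi mig -cgi 2g.12gb,1g.6gb -C

# Verify the resulting GPU instances
sudo nvidia-smi mig -lgi
```

Each instance then appears to CUDA applications as its own GPU with dedicated memory and compute slices, which is how MIG enforces QoS across tenants sharing one physical A30.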