
NVIDIA A2 Tensor Core GPU

Entry-level GPU that brings NVIDIA AI to any server.

Versatile Entry-Level Inference

The NVIDIA A2 Tensor Core GPU provides entry-level inference with low power, a small footprint, and high performance for NVIDIA AI at the edge. Featuring a low-profile PCIe Gen4 card and a low 40-60W configurable thermal design power (TDP) capability, the A2 brings versatile inference acceleration to any server for deployment at scale.

Up to 20X More Inference Performance

AI inference is deployed to enhance consumer lives with smart, real-time experiences and to gain insights from trillions of end-point sensors and cameras. Compared to CPU-only servers, edge and entry-level servers with NVIDIA A2 Tensor Core GPUs offer up to 20X more inference performance, instantly upgrading any server to handle modern AI.

Computer Vision
(EfficientDet-D0)


Natural Language Processing
(BERT-Large)


Text-to-Speech
(Tacotron2 + Waveglow)


Inference Speedup

Comparisons of one NVIDIA A2 Tensor Core GPU versus a dual-socket Xeon Gold 6330N CPU

System Configuration: [CPU: HPE DL380 Gen10 Plus, 2S Xeon Gold 6330N @ 2.2GHz, 512GB DDR4]
NLP: BERT-Large (sequence length: 384, SQuAD v1.1) | TensorRT 8.2, Precision: INT8, BS:1 (GPU) | OpenVINO 2021.4, Precision: INT8, BS:1 (CPU)
Text-to-Speech: Tacotron2 + Waveglow end-to-end pipeline (input length: 128) | PyTorch 1.9, Precision: FP16, BS:1 (GPU) | PyTorch 1.9, Precision: FP32, BS:1 (CPU)
Computer Vision: EfficientDet-D0 (COCO, 512x512) | TensorRT 8.2, Precision: INT8, BS:8 (GPU) | OpenVINO 2021.4, Precision: INT8, BS:8 (CPU)
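As a rough illustration only, an INT8 benchmark like the computer vision configuration above can be sketched with TensorRT's bundled trtexec utility. The ONNX file name below is a placeholder, not part of NVIDIA's published setup, and a real INT8 run also needs a calibration cache or a quantization-aware model:

```shell
# Hypothetical sketch: time an EfficientDet-D0 ONNX export with TensorRT's
# trtexec tool in INT8 precision at batch size 8 (512x512 input, as above).
# The model path is a placeholder.
trtexec --onnx=efficientdet_d0.onnx \
        --int8 \
        --shapes=input:8x3x512x512 \
        --avgRuns=100
```

trtexec reports end-to-end latency and throughput, which is how per-batch GPU inference numbers like these are typically gathered.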

Higher IVA Performance for the Intelligent Edge

Servers equipped with NVIDIA A2 GPUs offer up to 1.3X more performance in intelligent edge use cases, including smart cities, manufacturing, and retail. NVIDIA A2 GPUs running intelligent video analytics (IVA) workloads deliver more efficient deployments, with up to 1.6X better price-performance and 10 percent better energy efficiency than previous-generation GPUs.


System Configuration: [Supermicro SYS-1029GQ-TRT, 2S Xeon Gold 6240 @ 2.6GHz, 512GB DDR4, 1x NVIDIA A2 or 1x NVIDIA T4] | Measured performance with DeepStream 5.1. Networks: ShuffleNet-v2 (224x224), MobileNet-v2 (224x224). | Pipeline represents end-to-end performance with video capture and decode, pre-processing, batching, inference, and post-processing.

Optimized for Any Server

NVIDIA A2 is optimized for inference workloads and deployments in entry-level servers constrained by space and thermal requirements, such as 5G edge and industrial environments. A2 delivers a low-profile form factor operating in a low-power envelope, from a TDP of 60W down to 40W, making it ideal for any server.
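On systems whose board and driver permit it, the configurable TDP can be adjusted with the standard nvidia-smi tool. A minimal sketch, assuming a single-GPU server where the A2 is GPU index 0:

```shell
# Hypothetical sketch: cap the A2 (GPU index 0) at its 40W lower TDP bound.
# Requires root privileges and a board/driver that allows power-limit changes.
sudo nvidia-smi -i 0 --power-limit=40

# Query the current, default, and min/max enforceable power limits.
nvidia-smi -i 0 -q -d POWER
```

Lowering the power limit trades peak performance for a smaller thermal envelope, which is the trade-off the 40–60W configurable range is designed around.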

Lower Power and Configurable TDP


Leading AI Inference Performance Across Cloud, Data Center, and Edge

AI inference continues to drive breakthrough innovation across industries, including consumer internet, healthcare and life sciences, financial services, retail, manufacturing, and supercomputing. Combining the A2's small form factor and low power with the NVIDIA A100 and A30 Tensor Core GPUs delivers a complete AI inference portfolio across cloud, data center, and edge. A2 and the NVIDIA AI inference portfolio let AI applications deploy with fewer servers and less power, delivering faster insights at substantially lower cost.


Ready for Enterprise Utilization

NVIDIA AI Enterprise

NVIDIA AI Enterprise, an end-to-end cloud-native suite of AI and data analytics software, is certified to run on A2 in hypervisor-based virtual infrastructure with VMware vSphere. This enables management and scaling of AI and inference workloads in a hybrid cloud environment.


Mainstream NVIDIA-Certified Systems

NVIDIA-Certified Systems™ with NVIDIA A2 bring together compute acceleration and high-speed, secure NVIDIA networking in enterprise data center servers, built and sold by NVIDIA’s OEM partners. This program lets customers identify, acquire, and deploy systems for traditional and diverse modern AI applications from the NVIDIA NGC™ catalog on a single high-performance, cost-effective, and scalable infrastructure.

Powered by the NVIDIA Ampere Architecture

The NVIDIA Ampere architecture is designed for the age of elastic computing, delivering the performance and acceleration needed to power modern enterprise applications. Explore the heart of the world’s highest-performing, elastic data centers.


Specifications

NVIDIA A2 Tensor Core GPU

Form Factor: 1-slot, low-profile PCIe
Peak FP32: 4.5 TF
TF32 Tensor Core: 9 TF | 18 TF¹
BFLOAT16 Tensor Core: 18 TF | 36 TF¹
Peak FP16 Tensor Core: 18 TF | 36 TF¹
Peak INT8 Tensor Core: 36 TOPS | 72 TOPS¹
Peak INT4 Tensor Core: 72 TOPS | 144 TOPS¹
RT Cores: 10
Media engines: 1 video encoder, 2 video decoders (includes AV1 decode)
GPU memory: 16GB GDDR6
GPU memory bandwidth: 200GB/s
Interconnect: PCIe Gen4 x8
Max thermal design power (TDP): 40–60W (configurable)
Virtual GPU (vGPU) software support²: NVIDIA Virtual PC (vPC), NVIDIA Virtual Applications (vApps), NVIDIA RTX Virtual Workstation (vWS), NVIDIA AI Enterprise, NVIDIA Virtual Compute Server (vCS)

1 With sparsity
2 Supported in a future vGPU release
