With DGX Station A100, organizations can provide multiple users with a centralized AI resource for all workloads—training, inference, data analytics—that delivers an immediate on-ramp to NVIDIA DGX™-based infrastructure and works alongside other NVIDIA-Certified Systems. And with Multi-Instance GPU (MIG), it’s possible to allocate up to 28 separate GPU instances—seven on each of the four A100 GPUs—to individual users and jobs.
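As a rough sketch of how those 28 MIG instances could be carved out with the standard `nvidia-smi` tooling (assuming four 80 GB A100 GPUs, a recent NVIDIA driver, and root access; details vary by driver version):

```shell
# Enable MIG mode on each of the four A100 GPUs (takes effect after a GPU reset).
for i in 0 1 2 3; do
  sudo nvidia-smi -i "$i" -mig 1
done

# List the MIG GPU-instance profiles the driver supports on this GPU.
nvidia-smi mig -i 0 -lgip

# Create seven 1g.10gb GPU instances per GPU, each with a compute instance (-C):
# 7 instances x 4 GPUs = 28 separately schedulable GPU devices.
for i in 0 1 2 3; do
  sudo nvidia-smi mig -i "$i" \
    -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C
done

# Verify the resulting GPU instances.
nvidia-smi mig -lgi
```

Each resulting instance has its own memory, cache, and compute slices, so one user's job cannot starve another's.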
AI Data Center in a Box
Data science teams are at the leading edge of innovation, but they’re often left searching for available AI compute cycles to complete projects. They need a dedicated resource that can plug in anywhere and provide maximum performance for multiple simultaneous users, wherever they are in the world. NVIDIA DGX Station™ A100 brings AI supercomputing to data science teams, offering data center technology without a data center or additional IT infrastructure. Powerful performance, a fully optimized software stack, and direct access to NVIDIA DGXperts ensure faster time to insights.
Data Center Performance without the Data Center
DGX Station A100 is a server-grade AI system that doesn’t require data center power and cooling. It includes four NVIDIA A100 Tensor Core GPUs, a top-of-the-line, server-grade CPU, super-fast NVMe storage, and leading-edge PCIe Gen4 buses, along with remote management so you can manage it like a server.
An AI Appliance You Can Place Anywhere
Designed for today’s agile data science teams working in corporate offices, labs, research facilities, or even from home, DGX Station A100 requires no complicated installation or significant IT infrastructure. Simply plug it into any standard wall outlet to get up and running in minutes and work from anywhere.
Bigger Models, Faster Answers
NVIDIA DGX Station A100 is the world’s only office-friendly system with four fully interconnected and MIG-capable NVIDIA A100 GPUs, leveraging NVIDIA® NVLink® for running parallel jobs and multiple users without impacting system performance. Train large models using a fully GPU-optimized software stack and up to 320 gigabytes (GB) of GPU memory.
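For teams sizing jobs against the four interconnected GPUs, one quick way to confirm the NVLink topology and total GPU memory is with the standard `nvidia-smi` query tools (a sketch; output format depends on driver version):

```shell
# Show the GPU-to-GPU interconnect matrix; NV# entries indicate NVLink paths
# between GPU pairs, as opposed to slower PCIe-only routes.
nvidia-smi topo -m

# Report per-GPU memory (four 80 GB A100s = 320 GB total on DGX Station A100).
nvidia-smi --query-gpu=index,name,memory.total --format=csv
```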