NVIDIA DGX POD
Enterprise performance
Performance without compromise

AI-optimized data centre
Enterprises and research institutions are using artificial intelligence to solve an ever-increasing number of problems. For very complex challenges, a single AI system may not have enough compute power to deliver results in an acceptable time. NVIDIA DGX PODs are industry-proven solutions that combine all the critical components for rapid deployment of state-of-the-art AI in the data centre.
What is a DGX POD solution?
Several DGX A100 AI supercomputers
From 4 to 40+ DGX A100 systems – depending on the total compute requirement. Typically, up to four or five DGX A100 systems share a rack, depending on the cooling capacity of the data room.
Fast shared storage
All compute systems need to access the same data – and write results back – so that analysts can use the information immediately. NVIDIA has partnered with and certified solutions from storage leaders such as DDN, NetApp and Pure Storage to provide shared storage at near-local-access speeds.
High-speed networking
Large-scale AI tasks involve huge amounts of data, and transfers between servers and/or storage can quickly become a bottleneck. DGX PODs therefore use bonded 200 Gbit/s InfiniBand links, enabling workloads to be truly shared across multiple servers – a pattern illustrated in the sketch below.
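
As an illustration of what a shared workload over multiple servers can look like in practice, here is a minimal sketch of multi-node data-parallel training with PyTorch's DistributedDataParallel. It assumes the script is started on every DGX A100 system with a launcher such as torchrun (which sets RANK, WORLD_SIZE and LOCAL_RANK); the NCCL backend then uses the POD's InfiniBand fabric for gradient exchange when it is available. The model, batch size and step count are placeholders, not part of any official DGX POD workload.

# Minimal sketch (illustration only) of multi-node data-parallel training.
# Assumes the script is launched on every DGX system with torchrun, e.g.
#   torchrun --nnodes=4 --nproc_per_node=8 --rdzv_endpoint=<head-node>:29500 train.py
# so RANK, WORLD_SIZE and LOCAL_RANK are provided by the launcher.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # NCCL picks up the InfiniBand fabric automatically when it is present.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; a real workload would load its own network and data.
    model = DDP(torch.nn.Linear(1024, 1024).cuda(local_rank), device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(10):
        x = torch.randn(64, 1024, device=local_rank)
        loss = model(x).square().mean()
        loss.backward()          # gradients are all-reduced across all GPUs in the POD
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()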
Full software stack
Powerful tools for management, software deployment and scheduling make setting up and running an AI-optimized data centre easier than ever before. And should you require assistance, a DGX POD solution comes with full support from NVIDIA's team of AI experts.
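
For a concrete, if simplified, picture of the management side: the DGX software stack builds on NVIDIA's monitoring interfaces (NVML/DCGM), and the short Python sketch below – an illustration only, not part of the official tooling – uses the pynvml bindings to poll per-GPU utilization and memory on a single node.

# Illustration only: polling per-GPU load on one node through NVML, the
# library underneath NVIDIA's management tools (needs the nvidia-ml-py package).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {util.gpu}% busy, "
              f"{mem.used / 2**30:.1f} / {mem.total / 2**30:.1f} GiB memory in use")
finally:
    pynvml.nvmlShutdown()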
You can view more details and specifications in the NVIDIA DGX A100 datasheet here.
Are you planning a new data centre? Read more about how to design best-practice AI setups.
Contact us, and we will be happy to help you move forward.
