Commonly Asked Questions and Answers
Q: What is NVIDIA announcing?
A: NVIDIA is announcing the availability of NVIDIA GPU Cloud (NGC), which provides a container registry of fully integrated and optimized deep learning framework containers, available to users at no cost. These GPU-accelerated containers are tested, tuned, and certified by NVIDIA to take full advantage of NVIDIA GPUs on popular cloud infrastructure such as AWS, or on-premises systems like NVIDIA DGX-1 and NVIDIA DGX Station.

Q: What do I get by signing up for NGC?
A: You get access to a comprehensive catalog of fully integrated and optimized deep learning framework containers, at no cost.

Q: What is in the containers?
A: Each container has a pre-integrated stack of software optimized for deep learning on NVIDIA GPUs, including a Linux OS, CUDA runtime, required libraries, and the chosen framework (NVCaffe, TensorFlow, etc.).
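For a quick look at what a container ships with, you can run commands inside it. A minimal sketch, assuming Docker and nvidia-docker are set up as described later in this FAQ; the TensorFlow tag is illustrative, and nvcc is only present when the image includes the full CUDA toolkit:

    # Check the Linux OS release inside a framework container
    nvidia-docker run --rm nvcr.io/nvidia/tensorflow:17.10 cat /etc/os-release
    # Check the CUDA toolkit version (if nvcc is included in the image)
    nvidia-docker run --rm nvcr.io/nvidia/tensorflow:17.10 nvcc --version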

Q: Which frameworks are available on NGC?
A: The NGC container registry has NVIDIA GPU-accelerated releases of the most popular frameworks: NVCaffe, Caffe2, Microsoft Cognitive Toolkit (CNTK), DIGITS, MXNet, PyTorch, TensorFlow, Theano, Torch, and CUDA (a base-level container for developers).

Q: Where can I run the containers?
A: The GPU-accelerated containers are tuned, tested, and certified by NVIDIA to run on Amazon EC2 P3 instances with NVIDIA Volta GPUs and on NVIDIA DGX systems. Support for additional cloud service providers such as Microsoft Azure is coming soon. The containers can also be run on Pascal- and Volta-based workstations and servers running Linux; however, support for these systems is best effort only.

Q: How do I run these containers on Amazon EC2?
A: NVIDIA created an Amazon Machine Image (AMI) called the NVIDIA Volta Deep Learning AMI, available from the AWS Marketplace at no charge. This AMI is an optimized environment for running the deep learning framework containers available from the NGC container registry. You simply launch an instance of the AMI, pull the desired framework container from NGC into the instance, and start running deep learning jobs. Note that there are other deep learning AMIs on the AWS Marketplace, but they are not tested or optimized by NVIDIA.
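As a minimal sketch of that flow, once you are logged in to the running AMI instance (the framework tag below is illustrative; check the registry for current releases):

    # Authenticate to the NGC registry: the username is the literal string
    # $oauthtoken (quoted so the shell does not expand it) and the password
    # is the API key from your NGC account
    docker login -u '$oauthtoken' nvcr.io
    # Pull the desired framework container from the NGC registry
    docker pull nvcr.io/nvidia/tensorflow:17.10
    # Launch it interactively with GPU access
    nvidia-docker run -it --rm nvcr.io/nvidia/tensorflow:17.10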

Q: How often are the containers and frameworks updated?
A: NVIDIA updates each deep learning framework container monthly with the latest optimizations across drivers, libraries, and the frameworks themselves.

Q: What kind of support does NVIDIA offer for these containers?
A: NGC users get access to the NVIDIA DevTalk Developer Forum (https://devtalk.nvidia.com/), which is supported by a large community of deep learning and GPU experts from NVIDIA's customer, partner, and employee ecosystem.

Q: Do I pay for GPU compute time if the containers are provided at no charge?
A: Containers on the NGC container registry are provided at no charge, subject to the terms of use (TOU). However, each cloud provider sets its own pricing for GPU compute time.

Q: How do I run the containers on a local server or workstation with Pascal or Volta GPUs?
A: The containers require Docker and the nvidia-docker utility. A description of nvidia-docker, along with instructions for installing and using it, can be found at https://github.com/NVIDIA/nvidia-docker.
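As a sketch, once Docker and nvidia-docker are installed and you have logged in to nvcr.io as shown above (the container tags and the data path are illustrative):

    # Verify that the container runtime can see the GPUs
    nvidia-docker run --rm nvcr.io/nvidia/cuda:9.0-devel nvidia-smi
    # Run a framework container with a host data directory mounted inside it
    nvidia-docker run -it --rm -v /path/to/data:/data nvcr.io/nvidia/pytorch:17.10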

Q: Do I need to get an NGC account if I already have a DGX account?
A: Yes. Because the license terms and features differ between NGC and DGX customers, you should create a new NGC account if you want to use NGC-provided containers. That account and its associated API key must be used for all container access on non-DGX systems.

Q: Can I build a container based off of ones in NGC and push it to another public container repository?
A: Because the containers pulled from NGC include licensed software, they cannot be redistributed. The good news is that setting up a private container registry isn't hard; https://docs.docker.com/registry/deploying/ documents one way to do just that. Derived containers can also be archived with the docker save and docker load commands when a private registry isn't available.
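For example, a derived container can be archived to a tar file and restored later (image names and tags below are illustrative):

    # Build a derived image from an NGC base, in a directory containing a
    # Dockerfile whose first line is, e.g., FROM nvcr.io/nvidia/tensorflow:17.10
    docker build -t my-derived-image:1.0 .
    # Archive the image to a tarball for storage or transfer
    docker save -o my-derived-image.tar my-derived-image:1.0
    # Restore it on another machine (which must also be licensed to use the NGC content)
    docker load -i my-derived-image.tar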

Q: How do I determine NGC container registry operational status?
A: Users can visit the NGC status page to check operational status: https://ngc.statuspage.io/#
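For scripted checks, Statuspage-hosted pages generally expose a JSON summary endpoint; assuming the standard Statuspage API is enabled for this page, something like the following should work:

    # Query the overall NGC registry status as JSON (standard Statuspage API path)
    curl -s https://ngc.statuspage.io/api/v2/status.json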

#1
Posted 10/25/2017 06:38 PM   
If you have any feedback or suggestions for the FAQ, please post in our "Feature Request" section. Thanks.

#2
Posted 10/31/2017 11:20 PM   