Does NGC and NVidia Docker mean the end of NVidia machine-learning repos?

Several people use the CUDA and Machine Learning repositories for building their deep learning machines:
http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/
http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64

These work well, and although there have been updates to many components such as cuDNN, NCCL, and the drivers, other parts, for example Digits, have not been updated in a long time. Does the success of NGC mean that containers are the preferred, and possibly the only, way to stay current with NVidia-supported builds going forward?

I am trying to decide whether I should just wipe the machine I built from those repositories and rebuild it around NGC containers.
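For anyone weighing the same switch, the container-based workflow is roughly the following. This is a sketch that assumes you already have an NGC account, an API key, and the nvidia-docker runtime installed; the image tag shown is an example, so check the NGC catalog for current versions:

```shell
# Authenticate against the NGC registry (username is the literal
# string "$oauthtoken"; the password is your NGC API key).
docker login nvcr.io

# Pull a framework image from NGC (example tag, verify in the catalog).
docker pull nvcr.io/nvidia/tensorflow:18.01-py2

# Run it with GPU access via the nvidia-docker runtime.
nvidia-docker run -it --rm nvcr.io/nvidia/tensorflow:18.01-py2
```

The host then only needs the NVidia driver and Docker; CUDA, cuDNN, and the framework all live inside the container, which is what makes the apt repos less central in this model.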

Have the NVidia DevBox repos moved to a container format? Traditionally they were standard packages, as above.

I want to have current packages such as TensorFlow and Digits 6 without building them manually. I would have assumed NVidia would build packages for these and publish them in the repos above, but that did not happen, so I am just trying to understand the strategy going forward. Thank you.