TensorFlow 17.10 image doesn't work out of the box

pisymbol@kitt:~$ docker images nvcr.io/nvidia/tensorflow
REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
nvcr.io/nvidia/tensorflow   17.10               9b6a599f403c        3 weeks ago         3.89GB
pisymbol@kitt:~$ docker run -it 9b6a599f403c /bin/bash

================
== TensorFlow ==
================

NVIDIA Release 17.10 (build 192916)

Container image Copyright (c) 2017, NVIDIA CORPORATION.  All rights reserved.
Copyright 2017 The TensorFlow Authors.  All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION.  All rights reserved.
NVIDIA modifications are covered by the license terms that apply to the underlying project or file.

WARNING: The NVIDIA Driver was not detected.  GPU functionality will not be available.
   Use 'nvidia-docker run' to start this container; see
   https://github.com/NVIDIA/nvidia-docker/wiki/nvidia-docker .

NOTE: The SHMEM allocation limit is set to the default of 64MB.  This may be
   insufficient for TensorFlow.  NVIDIA recommends the use of the following flags:
   nvidia-docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 ...

root@613cfc7665a3:/workspace# python
Python 2.7.12 (default, Nov 19 2016, 06:48:10)
[GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import *
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 52, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory


Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/install_sources#common_installation_problems

for some common reasons and solutions.  Include the entire stack trace
above this error message when asking for help.
>>>

I know I didn’t run this with nvidia-docker, but still?

I can of course fix this, but it seems very wrong to me. (I actually build my own Docker images, but I was curious to compare notes.)

The NGC GPU-enabled containers require nvidia-docker to run and do not run in CPU-only mode.

The error you are seeing is expected when launching this container with docker run ... instead.
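
For reference, a minimal sketch of a GPU launch, using the flags from the container's startup banner above (adjust the image tag and flags to your setup):

nvidia-docker run -it --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
    nvcr.io/nvidia/tensorflow:17.10 /bin/bash

nvidia-docker mounts the host's driver libraries (including libcuda.so.1, the file the import above fails to find) into the container, which plain docker run does not do.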

This is a big bummer.

If you want to run in CPU-only mode, just pull the official TensorFlow image from Docker Hub:

https://hub.docker.com/r/tensorflow/tensorflow/
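
For example, something along these lines should give a working CPU-only session (the tag is just an illustration; Docker Hub also publishes -py3 and -gpu variants):

docker pull tensorflow/tensorflow
docker run -it tensorflow/tensorflow /bin/bash
# inside the container:
python -c "import tensorflow as tf; print(tf.__version__)"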

This NGC TensorFlow image is specific to Volta GPUs (in particular, AWS P3 instances).

Hi,

I notice that nvcr.io/nvidia/tensorflow:17.10 comes with legacy Python 2.7 by default. However, most of my deep learning scripts are written in Python 3.6, and most importantly I use asyncio (available only in Python 3.4 and later) to compute MCTS. Will NVIDIA publish an official container that uses a recent Python distribution? How can I build my own Docker image based on nvcr.io/nvidia/tensorflow:17.10 with TensorFlow configured for Python 3.6?
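
For concreteness, this is roughly the kind of image I have in mind (a rough, untested sketch; the apt/pip package names are my assumptions, and the PyPI tensorflow-gpu wheel would be the stock upstream build rather than the NGC-optimized TensorFlow in this container):

FROM nvcr.io/nvidia/tensorflow:17.10

# Add a Python 3 interpreter and pip alongside the existing Python 2.7 stack
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Stock upstream TensorFlow wheel for Python 3 (not the NVIDIA-optimized build)
RUN pip3 install --no-cache-dir tensorflow-gpu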

@hangyu5: Please see the "Python3 not configured out of the box" thread in the Frameworks section of the NVIDIA Developer Forums.

Thanks,
Cliff