After spending some time on this, the best I can surmise is that privileged mode is sufficient. I was originally trying this without success, but perhaps I had something else wrong (I suspect one of the GL shared objects was pointing to the mesa driver rather than the tegra-egl one).
This is what I was previously trying:
docker run -ti \
    -v /dev:/dev:rw \
    --device=/dev/video0 --device=/dev/i2c-0 --device=/dev/nvhost-vic \
    -e DISPLAY=:0 \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v /opt/nvidia/nvcam:/opt/nvidia/nvcam \
    -v /home/nvidia/:/home/nvidia \
    --privileged=true --net=host \
    --env="DISPLAY" \
    --volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
    jetson:test
However, when I boiled it down, this was actually sufficient:
docker run -ti --privileged=true jetson:test
Either way, for anyone else looking, this seems to work fine for running the argus_oneshot example. I flashed the Jetson TX2 with all of the default options.
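I mentioned above that my earlier failures may have come from a GL shared object resolving to the mesa driver instead of tegra-egl. Here is a quick way to check which driver a library actually resolves to, just a sketch; check_gl_lib is a throwaway helper of mine, not part of L4T or Docker:

```shell
# Follow the symlink chain of a GL library and report which driver it hits.
# check_gl_lib is a hypothetical helper, not part of any tool.
check_gl_lib() {
    target=$(readlink -f "$1" 2>/dev/null) || { echo "$1: not found"; return 1; }
    case "$target" in
        */tegra-egl/*|*/tegra/*) echo "$1 -> $target (tegra driver)" ;;
        *mesa*)                  echo "$1 -> $target (mesa driver)" ;;
        *)                       echo "$1 -> $target (unknown driver)" ;;
    esac
}

# On the Jetson, inside or outside the container:
#   check_gl_lib /usr/lib/aarch64-linux-gnu/libEGL.so
```

If this reports the mesa driver inside your container, the relinking steps in the Dockerfile below are what fixed it for me.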
In the spirit of sharing with the community, here is the complete Dockerfile that builds and runs the argus_oneshot example. It is rather verbose, but contains most of what you should need (I didn’t include the CUDA samples, but CUDA seems to work as well).
# Start from the latest Xenial (Ubuntu 16.04) release available at the moment
FROM arm64v8/ubuntu:xenial-20180808
# The JETPACK_URL paths used here come from the jetson_downloads/repository.json file created when you run the regular JetPack installer on an x86_64 Linux host
# Use Jetpack 3.3
ARG JETPACK_URL=https://developer.download.nvidia.com/devzone/devcenter/mobile/jetpack_l4t/3.3/lw.xd42/JetPackL4T_33_b39
RUN apt update
RUN apt upgrade -y
RUN apt install -y apt-utils
RUN apt install -y git cmake sudo vim curl libexpat1-dev build-essential libgtk-3-dev libjpeg-dev libv4l-dev libgstreamer1.0-dev
RUN apt autoremove -y
# Get the CUDA 9.0 package public key. It ships with the CUDA Debian install file, but
# it doesn't seem to get installed in the right order, so we grab the same key manually
# from the x86_64 repo, since there isn't one specific to the ARM processor
RUN mkdir /var/cuda-repo-9-0-local/
WORKDIR /var/cuda-repo-9-0-local/
RUN curl https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub -so 7fa2af80.pub
RUN apt-key add /var/cuda-repo-9-0-local/7fa2af80.pub
# Download the L4T package
WORKDIR /tmp
RUN curl $JETPACK_URL/cuda-repo-l4t-9-0-local_9.0.252-1_arm64.deb -o cuda-repo-l4t-9-0-local_9.0.252-1_arm64.deb
RUN dpkg -i cuda-repo-l4t-9-0-local_9.0.252-1_arm64.deb
RUN rm cuda-repo-l4t-9-0-local_9.0.252-1_arm64.deb
# Not all of these are required, but the x86_64 JetPack installer installs them, so we will too
RUN apt update && apt install -y cuda-toolkit-9.0 libgomp1 libfreeimage-dev libopenmpi-dev openmpi-bin
# Install the cuDNN and TensorRT packages
RUN curl $JETPACK_URL/libcudnn7_7.1.5.14-1+cuda9.0_arm64.deb -so libcudnn7_7.1.5.14-1+cuda9.0_arm64.deb
RUN dpkg -i libcudnn7_7.1.5.14-1+cuda9.0_arm64.deb
RUN rm libcudnn7_7.1.5.14-1+cuda9.0_arm64.deb
RUN curl $JETPACK_URL/libcudnn7-dev_7.1.5.14-1+cuda9.0_arm64.deb -so libcudnn7-dev_7.1.5.14-1+cuda9.0_arm64.deb
RUN dpkg -i libcudnn7-dev_7.1.5.14-1+cuda9.0_arm64.deb
RUN rm libcudnn7-dev_7.1.5.14-1+cuda9.0_arm64.deb
RUN curl $JETPACK_URL/libnvinfer4_4.1.3-1+cuda9.0_arm64.deb -so libnvinfer4_4.1.3-1+cuda9.0_arm64.deb
RUN dpkg -i libnvinfer4_4.1.3-1+cuda9.0_arm64.deb
RUN rm libnvinfer4_4.1.3-1+cuda9.0_arm64.deb
RUN curl $JETPACK_URL/libnvinfer-dev_4.1.3-1+cuda9.0_arm64.deb -so libnvinfer-dev_4.1.3-1+cuda9.0_arm64.deb
RUN dpkg -i libnvinfer-dev_4.1.3-1+cuda9.0_arm64.deb
RUN rm libnvinfer-dev_4.1.3-1+cuda9.0_arm64.deb
RUN curl $JETPACK_URL/libnvinfer-samples_4.1.3-1+cuda9.0_arm64.deb -so libnvinfer-samples_4.1.3-1+cuda9.0_arm64.deb
RUN dpkg -i libnvinfer-samples_4.1.3-1+cuda9.0_arm64.deb
RUN rm libnvinfer-samples_4.1.3-1+cuda9.0_arm64.deb
RUN curl $JETPACK_URL/tensorrt_4.0.2.0-1+cuda9.0_arm64.deb -so tensorrt_4.0.2.0-1+cuda9.0_arm64.deb
RUN dpkg -i tensorrt_4.0.2.0-1+cuda9.0_arm64.deb
RUN rm tensorrt_4.0.2.0-1+cuda9.0_arm64.deb
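As an aside, the repeated curl/dpkg/rm blocks above could be collapsed into a single RUN if you prefer fewer image layers (the rm then happens in the same layer as the download, keeping the image smaller). This is just a sketch, not part of the working Dockerfile; the package file names are copied verbatim from the individual steps, and the install order is preserved:

```dockerfile
# Hypothetical consolidation of the six download/install/remove blocks above
RUN set -e; \
    for pkg in \
        libcudnn7_7.1.5.14-1+cuda9.0_arm64.deb \
        libcudnn7-dev_7.1.5.14-1+cuda9.0_arm64.deb \
        libnvinfer4_4.1.3-1+cuda9.0_arm64.deb \
        libnvinfer-dev_4.1.3-1+cuda9.0_arm64.deb \
        libnvinfer-samples_4.1.3-1+cuda9.0_arm64.deb \
        tensorrt_4.0.2.0-1+cuda9.0_arm64.deb; \
    do \
        curl -s $JETPACK_URL/$pkg -o $pkg && dpkg -i $pkg && rm $pkg; \
    done
```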
# Install OpenCV dependencies
RUN apt install -y ffmpeg libgtk2.0-0 libjasper1 libtbb2 libtbb-dev
RUN curl $JETPACK_URL/libopencv_3.3.1_t186_arm64.deb -so libopencv_3.3.1_t186_arm64.deb
RUN dpkg -i libopencv_3.3.1_t186_arm64.deb
RUN rm libopencv_3.3.1_t186_arm64.deb
RUN curl $JETPACK_URL/libopencv-dev_3.3.1_t186_arm64.deb -so libopencv-dev_3.3.1_t186_arm64.deb
RUN dpkg -i libopencv-dev_3.3.1_t186_arm64.deb
RUN rm libopencv-dev_3.3.1_t186_arm64.deb
WORKDIR /tmp
RUN curl -sL $JETPACK_URL/Tegra186_Linux_R28.2.1_aarch64.tbz2 | tar xvfj -
RUN /tmp/Linux_for_Tegra/apply_binaries.sh -r / && rm -fr /tmp/*
# Use the tegra-egl driver instead of the mesa driver; relink the libs we need
RUN rm -f /usr/lib/aarch64-linux-gnu/libEGL.so
RUN rm -f /usr/lib/aarch64-linux-gnu/libGLESv2.so
RUN rm -f /usr/lib/aarch64-linux-gnu/libGL.so
RUN ln -s /usr/lib/aarch64-linux-gnu/tegra-egl/libEGL.so /usr/lib/aarch64-linux-gnu/libEGL.so
RUN ln -s /usr/lib/aarch64-linux-gnu/tegra-egl/libGLESv2.so /usr/lib/aarch64-linux-gnu/libGLESv2.so
RUN ln -s /usr/lib/aarch64-linux-gnu/tegra-egl/libGL.so /usr/lib/aarch64-linux-gnu/libGL.so
RUN ln -s /usr/lib/aarch64-linux-gnu/tegra/libnvidia-ptxjitcompiler.so.28.2.0 /usr/lib/aarch64-linux-gnu/tegra/libnvidia-ptxjitcompiler.so
RUN ln -s /usr/lib/aarch64-linux-gnu/tegra/libnvidia-ptxjitcompiler.so.28.2.0 /usr/lib/aarch64-linux-gnu/tegra/libnvidia-ptxjitcompiler.so.1
RUN ln -s /usr/lib/aarch64-linux-gnu/libcuda.so /usr/lib/aarch64-linux-gnu/libcuda.so.1
# Rebuild ldconfig cache, remove any mesa garbage
RUN rm /etc/ld.so.conf.d/aarch64-linux-gnu_GL.conf
RUN rm /etc/ld.so.conf.d/aarch64-linux-gnu_EGL.conf
RUN echo /usr/lib/aarch64-linux-gnu/tegra > /etc/ld.so.conf.d/aarch64-linux-tegra.conf
RUN echo /usr/lib/aarch64-linux-gnu/tegra-egl > /etc/ld.so.conf.d/aarch64-linux-tegra-egl.conf
RUN rm /etc/ld.so.cache
RUN ldconfig
# Clean up (don't remove cuda libs... used by child containers)
RUN apt-get -y autoremove && apt-get -y autoclean
RUN rm -rf /var/cache/apt
# Add in the multimedia API
RUN apt install -y python-imaging
WORKDIR /home/nvidia/
RUN curl $JETPACK_URL/Tegra_Multimedia_API_R28.2.1_aarch64.tbz2 -sL | tar xvfj -
WORKDIR /home/nvidia/tegra_multimedia_api/argus
RUN cmake .
RUN make
RUN make install
# ENV persists into the running container (a RUN export would only last for that single build step)
ENV LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64
Now run docker:
docker run -ti --privileged=true jetson:test
After the container launches:
argus_daemon &
/home/nvidia/tegra_multimedia_api/argus/samples/oneShot/argus_oneshot
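One race to be aware of with the two commands above: argus_daemon needs a moment to create its socket before the sample can connect. A small sketch that polls for it; wait_for_socket is a throwaway helper of mine, and the /tmp/argus_socket path is an assumption for this L4T release, so verify it on your setup:

```shell
# wait_for_socket: poll until a unix socket exists, up to N seconds (default 10).
# Throwaway helper; the argus socket path below is an assumption, not verified.
wait_for_socket() {
    sock=$1
    tries=${2:-10}
    while [ "$tries" -gt 0 ]; do
        [ -S "$sock" ] && return 0
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}

# Inside the container:
#   argus_daemon &
#   wait_for_socket /tmp/argus_socket \
#       && /home/nvidia/tegra_multimedia_api/argus/samples/oneShot/argus_oneshot
```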
A few odd-looking warnings appear in here, but we do see a sample image ‘oneShot.jpg’ as a result:
root@985e0ab42c07:/home/nvidia# /home/nvidia/tegra_multimedia_api/argus/samples/oneShot/argus_oneshot
Executing Argus Sample: argus_oneshot
=== Connection 7F988741E0 established ===
OFParserGetVirtualDevice: virtual device driver node not found in proc device-tree
OFParserGetVirtualDevice: virtual device driver node not found in proc device-tree
LoadOverridesFile: looking for override file [/Calib/camera_override.isp] 1/16
LoadOverridesFile: looking for override file [/data/nvcam/settings/camera_overrides.isp] 2/16
LoadOverridesFile: looking for override file [/opt/nvidia/nvcam/settings/camera_overrides.isp] 3/16
LoadOverridesFile: looking for override file [/var/nvidia/nvcam/settings/camera_overrides.isp] 4/16
LoadOverridesFile: looking for override file [/data/nvcam/camera_overrides.isp] 5/16
LoadOverridesFile: looking for override file [/data/nvcam/settings/e3326_front_P5V27C.isp] 6/16
LoadOverridesFile: looking for override file [/opt/nvidia/nvcam/settings/e3326_front_P5V27C.isp] 7/16
LoadOverridesFile: looking for override file [/var/nvidia/nvcam/settings/e3326_front_P5V27C.isp] 8/16
---- imager: No override file found. ----
CameraProvider result: provider=0x7f94b62670, shim=0x7f94b63b20, status=0, rpc status=1, size=9
Argus Version: 0.96.2 (multi-process)
Growing thread pool to 1 threads
SCF: Error InvalidState: NonFatal ISO BW requested not set. Requested = 2147483647 Set = 4687500 (in src/services/power/PowerServiceCore.cpp, function setCameraBw(), line 653)
/== CLEANUP 0x7f94b63b20 ==\
Destroying real provider 0x7f94b62670
\== CLEANUP DONE ==/
(Argus) Error EndOfFile: (propagating from libs/rpc_socket_server/ServerSocketManager.cpp, function recvThreadCore(), line 138)
(Argus) Error EndOfFile: (propagating from libs/rpc_socket_server/ServerSocketManager.cpp, function run(), line 56)
=== Connection 7F988741E0 closed ===
Cleaning up 0 requests...
Cleaning up 0 queues...
Cleaning up 0 streams...
Cleaning up 0 stream settings...
ServerWorkerThreadPool: final size = 1
=== Connection 7F988741E0 cleaned up ===
root@985e0ab42c07:/home/nvidia# ls
CMakeCache.txt CMakeFiles CMakeLists.txt Makefile bin cmake_install.cmake oneShot.jpg src