Cross Compile (toolchain) VibrantePDK 4.1.8 - Update Toolchains please

Hello,

We have been successfully using the cross-compile toolchains provided by NVIDIA for VibrantePDK <= 4.0, i.e. for Ubuntu 15.10.

For the newest versions of Vibrante, however, this is no longer the case. The toolchains do not seem to be fully updated to match the new OS environment (Ubuntu 16.04). For example, Ubuntu 16.04 uses GCC 5.3, while the current toolchain still ships GCC 4.8/4.9 (the toolchains appear unchanged). As a result, cross-compilation no longer works (the library versions differ for dynamic linking).

(Of course we don’t want to bundle the .so files we use with every package we build AND rely on the -rpath option; that is too messy. We also want to avoid -static linking.)
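For concreteness, the pattern we want to avoid looks roughly like this (the library name is just an example):

# bundle the .so files next to the binary and bake a runtime search path into it:
g++ main.o -L./bundled-libs -lfoo -Wl,-rpath,'$ORIGIN/bundled-libs' -o app
# ...and then ship bundled-libs/ inside every package - exactly the mess we mean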

Can you please provide an updated toolchain that matches the current OS setup/environment?
It would also be great if you could provide the Boost libraries for cross-compilation. (If not, worst case we can extract them from the target.)
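(By "extract them from the target" I mean something like the following; the IP and paths are placeholders:)

# pull the prebuilt arm64 Boost libraries off a running target so the
# cross-linker can be pointed at them:
rsync -av 'nvidia@<px2-ip>:/usr/lib/aarch64-linux-gnu/libboost*' ./target-libs/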

P.S.: I am referring to the toolchains found in:

VibrantePDK/toolchains

Thank you!
Athanasios

Dear Athanasios,

Unfortunately, this issue will likely be addressed by the next branch release. Thanks.

Hi,

I am also unable to cross-compile with Ubuntu 16.04:

pedro@hal9000:~/NVIDIA_DrivePX2/demo_danisi/build$ cmake -DCMAKE_BUILD_TYPE=Release \
>       -DCMAKE_TOOLCHAIN_FILE=/usr/local/driveworks/samples/cmake/Toolchain-V4L.cmake \
>       -DVIBRANTE_PDK:STRING=/home/pedro/DrivePX2/VibranteSDK/vibrante-t186ref-linux \
>        ..
-- VIBRANTE_PDK = /home/pedro/DrivePX2/VibranteSDK/vibrante-t186ref-linux
-- VIBRANTE_PDK_DEVICE = t186ref
-- VIBRANTE_PDK_BRANCH = 4.1.8.0
-- Vibrante version 4.1.8.0
-- VIBRANTE_PDK = /home/pedro/DrivePX2/VibranteSDK/vibrante-t186ref-linux
-- Vibrante version 4.1.8.0
-- The C compiler identification is unknown
-- The CXX compiler identification is unknown
CMake Error at CMakeLists.txt:17 (project):
  The CMAKE_C_COMPILER:

    /home/pedro/DrivePX2/VibranteSDK/vibrante-t186ref-linux/../toolchains/tegra-4.9-nv/usr/bin/aarch64-gnu-linux/aarch64-gnu-linux-gcc

  is not a full path to an existing compiler tool.

  Tell CMake where to find the compiler by setting either the environment
  variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
  the compiler, or to the compiler name if it is in the PATH.


CMake Error at CMakeLists.txt:17 (project):
  The CMAKE_CXX_COMPILER:

    /home/pedro/DrivePX2/VibranteSDK/vibrante-t186ref-linux/../toolchains/tegra-4.9-nv/usr/bin/aarch64-gnu-linux/aarch64-gnu-linux-g++

  is not a full path to an existing compiler tool.

  Tell CMake where to find the compiler by setting either the environment
  variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
  to the compiler, or to the compiler name if it is in the PATH.


-- Configuring incomplete, errors occurred!
See also "/home/pedro/NVIDIA_DrivePX2/demo_danisi/build/CMakeFiles/CMakeOutput.log".
See also "/home/pedro/NVIDIA_DrivePX2/demo_danisi/build/CMakeFiles/CMakeError.log".

Is there a way to solve this problem?
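A first sanity check (the path below is the normalized version of the one in the CMake error above) would be whether the compiler binary actually exists where the toolchain file expects it:

# if this lists nothing, the toolchains/ directory was not installed next to
# vibrante-t186ref-linux, i.e. the PDK install is incomplete or was moved:
ls /home/pedro/DrivePX2/VibranteSDK/toolchains/tegra-4.9-nv/usr/bin/aarch64-gnu-linux/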

Dear SteveNV, any update on when the toolchain is expected to be updated? According to the release notes, NVIDIA DRIVE Linux 5.0.5.0 is still at GCC 4.9. Although cross-compiling works against some official 16.04 arm64 libraries, it won’t work against many of them due to ABI changes in how strings are handled (related to _GLIBCXX_USE_CXX11_ABI). This means we have to recompile many libraries, including Boost and much of ROS, if we want to use them for cross-compiling.
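To make the mismatch concrete: code built with GCC 5.x defaults to the new libstdc++ ABI, so linking it against GCC 4.9-built libraries fails with unresolved symbols mentioning std::__cxx11. A partial workaround (it has to be applied consistently across a build, and doesn’t help with every library) is to force the old ABI:

# force the pre-GCC-5 std::string/std::list ABI when compiling with a newer GCC
# (the source file name is illustrative):
aarch64-linux-gnu-g++ -D_GLIBCXX_USE_CXX11_ABI=0 -c my_node.cpp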

Alternatively, it would be acceptable if we could fully switch to native compiling on the PX2, but from the PX1 days I recall that some things were delivered in a way that forced you to cross-compile (in other words: you couldn’t achieve as much with native compiling as you could with cross-compiling).

Dear c.j.lekkerkerker,

We can’t support changing the host toolchain until next October. Thanks.

I’m also interested in having an alternative to native compilation, although from reading various threads it seems like no single approach (cross-compilation with the NVIDIA-provided toolchains, cross-compilation with other toolchains, or native compilation) is ‘best’ or handles all use cases.

As an alternative to native compilation, I’ve been experimenting with various methods. One method in particular seems to work in initial testing: I can compile a binary with dependencies on 50+ Ubuntu 16.04 arm64 packages (including some ROS packages) on my host, copy it to the PX2 where I install the deps with apt, and run the binary without error (albeit with only minimal testing so far).

Similar to the ‘hybrid’ method described in the PX2 docs, I created an arm64 chroot on the host (using qemu-debootstrap via ‘mk-sbuild --arch=arm64 xenial’), and then installed the PX2-specific debs into that chroot. See Arm64Qemu - Debian Wiki for some background.
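A minimal sketch of that setup (package names are from Ubuntu 16.04; the chroot path is mk-sbuild’s default and may differ on your machine):

# install the tooling and create an arm64 Xenial chroot:
sudo apt-get install ubuntu-dev-tools sbuild qemu-user-static
mk-sbuild --arch=arm64 xenial
# enter the resulting chroot to install the PX2-specific .debs:
sudo chroot /var/lib/schroot/chroots/xenial-arm64 /bin/bash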

Then you have a couple of options for compilation:

  • You can use the Ubuntu/Debian-provided cross-compiler that targets arm64 and point its sysroot at the chroot (at least after making all the symlinks in the chroot relative by running, inside the chroot, ‘symlinks -cro /lib /usr/lib /etc | tee /root/symlinks-cro-lib-usr-lib-etc.out’ or similar). This is how I build my custom binary; a minimal invocation is sketched just after this list.
  • You can compile inside the chroot (under qemu emulation), which is pretty slow but uses host RAM/CPU, doesn’t need the actual PX2 hardware, and should allow ‘out of the box’ building of most packages (maybe? this is how I build libprotobuf3, since I had trouble building it with the cross-compiler+sysroot method). Aside from having the NVIDIA .debs installed in the chroot, this is a pretty ‘vanilla’ method for Ubuntu/Debian cross-compilation.
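The cross-compiler+sysroot invocation from the first bullet looks roughly like this (the chroot path and source file are assumptions; adjust to your layout):

# stock Ubuntu cross-compiler, pointed at the arm64 chroot as its sysroot:
sudo apt-get install gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
aarch64-linux-gnu-g++ --sysroot=/var/lib/schroot/chroots/xenial-arm64 \
    -o my_binary my_binary.cpp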

Of course, this means you’d be mixing the maybe-compiled-with-older-GCC NVIDIA libs with the ‘stock’ Ubuntu ones, but that seems to be how things are on the PX2 itself?

Note 1: another method I tried was to use multiarch support to install arm64 versions of development packages alongside the host cross-compiler in a host-native Ubuntu chroot (or without a chroot at all, for the brave). This worked in some cases, but multiarch support for development still seems pretty immature, and I ran into various walls I was unable to work around, in particular related to the lack of Python multiarch support. But it did seem promising!
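For reference, the multiarch attempt looked roughly like this (on Ubuntu the arm64 packages come from ports.ubuntu.com, so sources.list also needs matching arm64 entries, which is part of what made it fiddly):

# enable the arm64 architecture, then install -dev packages for both arches:
sudo dpkg --add-architecture arm64
sudo apt-get update
sudo apt-get install gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
sudo apt-get install libboost-dev:arm64   # an example arm64 -dev package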

Note 2: to speed up compilation inside the chroot, I’ve read that it should be possible to compile static native (amd64) GCC binaries that target arm64 (but named ‘gcc’/‘g++’ as normal, rather than prefixed like ‘normal’ cross-compilers) and drop them into the chroot, similar to the static qemu binary itself. If this were done, the main disadvantage of compiling inside the chroot would be removed.

I’m just learning/experimenting with all this myself, so I welcome any comments/advice!

Hi, just a question:
How do I get the Vibrante PDK 4 SDK (Vibrante Linux SDK/PDK 4.0 for Tegra)? From where can I download it?
I need to create my software binaries for this system. Any help and the procedure for acquiring the SDKs would be appreciated.

@moskewczv5asj, thanks for your extensive answer. Our current cross-compile approach is to chroot into the “targetfs” (as created by DriveInstall) using qemu. From that chroot we can install stuff like ROS (via apt-get) as if we were actually on the DrivePX itself (that requires making some bind mounts to the host to expose the network, etc.). After adding the desired libraries to that targetfs (via apt within the chroot session), we can use that targetfs dir from the host (no longer in the chroot) as our main anchor point (CMAKE_SYSROOT) that provides all the cross-compile libraries/headers on the host system. We made a script to simplify the chroot/qemu procedure; see below.
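On the host side, the CMake invocation then looks roughly like this (a minimal sketch, assuming the stock Ubuntu cross-compiler; the variable name matches the script below):

# point CMake at the populated targetfs as sysroot:
TARGETFS="$NVIDIA_PDKROOT_DOCKERHOST_DPX2/vibrante-t186ref-linux/targetfs"
cmake -DCMAKE_SYSTEM_NAME=Linux \
      -DCMAKE_SYSTEM_PROCESSOR=aarch64 \
      -DCMAKE_C_COMPILER=aarch64-linux-gnu-gcc \
      -DCMAKE_CXX_COMPILER=aarch64-linux-gnu-g++ \
      -DCMAKE_SYSROOT="$TARGETFS" ..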

What I am curious about is this: what are the advantages of using the NVIDIA cross-compiler rather than the official Ubuntu cross-compiler? The NVIDIA one ties us to GCC 4.9, which mismatches with many of the official Ubuntu Xenial libraries. If we did native DPX compiling, we would use Ubuntu’s GCC 5.x (I don’t think NVIDIA provides a custom/optimized C/C++ compiler for the DPX itself?). So if I’m not mistaken, native compiling on the DPX with Ubuntu’s GCC should be equivalent to cross-compiling with our targetfs chroot trick using Ubuntu’s GCC arm64 cross-compiler. Maybe I’m wrong (I’m just a mechanical engineer ;-) ), but when I realized this it made me less reluctant to try Ubuntu’s arm64 cross-compiler rather than the NVIDIA toolchain. I’m curious if anyone can comment on that.
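One way to check what a given target library was actually built with is to read its .comment section, which usually records the compiler version (the library path is a placeholder):

# print the compiler notes embedded in a library from the target/targetfs:
readelf -p .comment /path/to/targetfs/usr/lib/libsomething.so
# and the default native compiler on the PX2 itself:
gcc --version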

@SteveNV: could you maybe comment on what is the advantage of using nvidia’s official gcc 4.9 toolchain, compared to using Ubuntu’s version?

The chroot script
I actually made a nice script for chrooting into the targetfs with qemu. It assumes you first set (export) the bash variable NVIDIA_PDKROOT_DOCKERHOST_DPX2 to point at the top directory of the PDK (or SDK). The script was built for Vibrante 4.8.x. The current version is Drive 5.0.5.0, which includes a (partial) rename from Vibrante → Drive; this might break the script, but fixing it should be straightforward. Writing the script below was one of my first encounters with chroot, so I’m not sure whether I overlooked other best practices or tools that would make it easier, but it seems to work fine for us so far.

Note 1: forget about the Docker refs in the script; it will work fine without Docker. It just happens that we use it from Docker.
Note 2: you can also ignore the DPX1 part; we actually still use this for our old DPX1s too ;)
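Typical usage of the script (the export path is of course machine-specific):

# point the script at the PDK/SDK top directory; sudo -E keeps the exported
# variable visible to the root environment the script runs in:
export NVIDIA_PDKROOT_DOCKERHOST_DPX2=/path/to/your/VibranteSDK
sudo -E ./start_targetfs_session.sh -p DPX2
# after an unclean exit, clean up leftover bind mounts with:
sudo -E ./start_targetfs_session.sh -p DPX2 -c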

#!/bin/bash

###############
## Functions ##
###############

print_help() {
  echo "Usage (on Development PC):"
  echo "  sudo -E ./start_targetfs_session.sh -p DPX1"
  echo ""
  echo "Arguments:"
  echo " -p,--platform       Either DPX1 or DPX2"
  echo " -c,--cleanup-only   Only make sure we leave our targetfs clean (unmounted binds, etc.)"
  echo " -h,--help           Print this help..."
}

check_bash_dir_variables() {
  TMP_VAR="$(eval echo \$$1)"
  if [ "$TMP_VAR" == "" ]; then
    echo "Please make sure you set the bash variable: $1 (e.g. via .bashrc)"
    exit 1
  fi
  if [ ! -d "$TMP_VAR" ]; then
    echo "The directory of var $1 (which resolves to $TMP_VAR) does not exist. Maybe you forgot to mount an external drive?"
    exit 1
  fi
}

targetfs_prep_chroot(){
  if [ $PLATFORM == "DPX1" ]; then
    : # do nothing
  elif [ $PLATFORM == "DPX2" ]; then
    mv $TARGETFS_DIR/etc/resolv.conf{,.bak} || true # backup this weird broken symlink
  fi
  touch $TARGETFS_DIR/etc/resolv.conf
  mkdir -p $TARGETFS_DIR/dev/pts # -p so set -e doesn't abort if it already exists
  touch $TARGETFS_DIR/dev/urandom

  cp /usr/bin/qemu-aarch64-static            $TARGETFS_DIR/usr/bin
  #mount -o bind /run                         $TARGETFS_DIR/run # seems not needed currently
  mount -t proc proc                         $TARGETFS_DIR/proc
  mount -t sysfs sys                         $TARGETFS_DIR/sys
  mount -o bind /dev/pts                     $TARGETFS_DIR/dev/pts
  mount -o bind /dev/urandom                 $TARGETFS_DIR/dev/urandom
  mount -o bind /etc/hosts                   $TARGETFS_DIR/etc/hosts
  mount -o bind /etc/hostname                $TARGETFS_DIR/etc/hostname
  mount -o bind /etc/resolv.conf             $TARGETFS_DIR/etc/resolv.conf
}

targetfs_cleanup_chroot(){
  echo -ne "cleanup qemu... ";                  rm     $TARGETFS_DIR/usr/bin/qemu-aarch64-static &> /dev/null || echo -n "not needed"
  #echo -ne "\nunmount  /run... ";               umount $TARGETFS_DIR/run &> /dev/null || echo -n "not needed"
  echo -ne "\nunmount  /proc... ";              umount $TARGETFS_DIR/proc &> /dev/null || echo -n "not needed"
  echo -ne "\nunmount  /dev/pts... ";           umount $TARGETFS_DIR/dev/pts &> /dev/null || echo -n "not needed"
  echo -ne "\nunmount  /dev/urandom... ";       umount $TARGETFS_DIR/dev/urandom &> /dev/null || echo -n "not needed"
  echo -ne "\nunmount  /sys... ";               umount $TARGETFS_DIR/sys &> /dev/null || echo -n "not needed"
  echo -ne "\nunmount  /etc/hosts... ";         umount $TARGETFS_DIR/etc/hosts &> /dev/null || echo -n "not needed"
  echo -ne "\nunmount  /etc/hostname... ";      umount $TARGETFS_DIR/etc/hostname &> /dev/null || echo -n "not needed"
  echo -ne "\nunmount  /etc/resolv.conf... ";   umount $TARGETFS_DIR/etc/resolv.conf &> /dev/null || echo -n "not needed"
  echo ""

  echo "cleanup tmp files"
  rmdir $TARGETFS_DIR/dev/pts &> /dev/null || true
  rm    $TARGETFS_DIR/dev/urandom &> /dev/null || true
  if [ $PLATFORM == "DPX1" ]; then
    rm $TARGETFS_DIR/etc/resolv.conf &> /dev/null || true
  elif [ $PLATFORM == "DPX2" ]; then
    mv -f $TARGETFS_DIR/etc/resolv.conf{.bak,} &> /dev/null || true # restore the backed-up symlink
  fi
}

parse_args() {
  while [[ $# -ge 1 ]]; do
    key="$1"
    case $key in
        -h|--help)
        print_help
        exit 0
        ;;
        -p|--platform)
        PLATFORM="$2"
        shift
        ;;
        -c|--cleanup-only)
        CLEANUP_ONLY="true"
        ;;
        *)
        echo "unknown option '$key' supplied"        # unknown option
        exit 1
        ;;
    esac
    shift # past argument or value
  done

  if [ "$PLATFORM" != "DPX1" ] && [ "$PLATFORM" != "DPX2" ]; then # quoted so an empty value doesn't break the test
    echo "Invalid platform supplied..."
    print_help
    exit 1
  fi

  if [ $PLATFORM == "DPX1" ]; then
    check_bash_dir_variables NVIDIA_PDKROOT_DOCKERHOST_DPX1
    TARGETFS_DIR="$NVIDIA_PDKROOT_DOCKERHOST_DPX1/vibrante-t210ref-linux/targetfs"
  elif [ $PLATFORM == "DPX2" ]; then
    check_bash_dir_variables NVIDIA_PDKROOT_DOCKERHOST_DPX2
    TARGETFS_DIR="$NVIDIA_PDKROOT_DOCKERHOST_DPX2/vibrante-t186ref-linux/targetfs"
  fi

  if [ "$CLEANUP_ONLY" == "true" ]; then
    echo "Will perform cleanup now"
    targetfs_cleanup_chroot
    echo "Finished cleanly!"
    exit 0
  fi
}

#################
## Main script ##
#################

set -e #Exit as soon as any line in the bash script fails
#set -x #Prints each command executed (prefix with ++)

# Check if root
if [[ $EUID -ne 0 ]]; then
   echo "This script must be run as root" 1>&2
   print_help
   exit 1
fi

# Check if qemu static is installed
if ! which qemu-aarch64-static > /dev/null; then
   echo -e "qemu-aarch64-static command not found! Install? (y/N) \c"
   read REPLY
   if [ "$REPLY" == "y" ]; then
     apt-get install qemu-user-static
   else
     exit 1
   fi
fi

# Parse args
parse_args "$@" # "$@" preserves quoted arguments, unlike $*

# Prepare
targetfs_prep_chroot

# Run chroot session
echo "Will now start session, just exit the session to cleanly close down"
sudo chroot $TARGETFS_DIR qemu-aarch64-static /bin/bash || true # by using sudo we clear the user environment first!

# Finalize
targetfs_cleanup_chroot
echo "Finished cleanly!"
exit 0