How to upgrade my TX1 machine to R28.1

I am new to using the TX1. Can someone tell me the detailed steps to upgrade my TX1 to R28.1? It currently has R24.2.1.

In short: first save your data, code, and config to external storage, then use JetPack 3.1 (you can download it from NVIDIA: [url]https://developer.nvidia.com/embedded/jetpack[/url]) to reflash a fresh R28.1 image onto your TX1.
You may have to reinstall some manually installed packages afterward to set up an equivalent environment for your apps.
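A minimal backup sketch before reflashing; the user name and the mount point are assumptions, so adjust them to your setup:

```shell
# Back up data, code, and config, plus a record of installed packages, before flashing.
SRC=${SRC:-/home/ubuntu}          # default L4T user's home (assumption)
DEST=${DEST:-/mnt/backup/tx1}     # external drive mount point (assumption)
if [ -d "$SRC" ]; then
    mkdir -p "$DEST"
    cp -a "$SRC/." "$DEST/home-backup/"            # data, code, dotfiles
    dpkg --get-selections > "$DEST/packages.txt"   # installed-package list for later reinstalls
fi
```

The package list makes it easier to re-create an equivalent environment after the flash.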

I downloaded the file JetPack-L4T-3.1-linux-x64.run onto my Windows machine. What are the steps for flashing it onto the TX1?

JetPack is a front end for flashing and for installing additional packages on both the host and the Jetson, and it runs only on an Ubuntu Linux host PC (Windows won’t work). Officially only Ubuntu 14.04 host PCs are supported, but an Ubuntu 16.04 host mostly works (there may be a sample or two that won’t work on Ubuntu 16.04).

I ran JetPack 3.1 on my Ubuntu 14.04 x86 host PC. It encountered a problem installing cuda-toolkit-8-0, so the installation stopped at the step “Post Installation Jetson TX1”.

I tried to download CUDA 8.0 manually from https://developer.nvidia.com/cuda-downloads?target_os=Linux, but could not continue: the site does not offer an ARM64 architecture file.

At this point, the JetPack 3.1 installation is stuck at the step “Post Installation Jetson TX1”.

I tried the installation twice, but it hung at the same place both times. What should I do?

It’s useful to know that the JetPack flash and the extra package installs are independent of each other: the flash runs over the micro-B USB cable, while the package install runs only over wired ethernet. You can always reboot the Jetson and then re-run just the software install portion of JetPack.

If the package install mechanism refuses to run, some packages may already be installed. If there is a message in the logs about a network failure, that is a different problem; you may need to look at the JetPack logs to see what they say. (If you are using WiFi, you don’t need to check the logs: JetPack is not set up to work over WiFi, so you’d need to switch to wired ethernet.)

I have wired internet connected.

I terminated the installation process because the TX1 seems to boot with the updated kernel. The new kernel is:
Linux tegra-ubuntu 4.4.38-tegra #1 SMP PREEMPT Thu Jul 20 00:41:06 PDT 2017 aarch64 aarch64 aarch64 GNU/Linux

I tried to compile our driver, library, and samples. The library and samples compile OK, but compiling the driver failed. The error is:
/bin/bash: scripts/basic/fixdep: cannot execute binary file: Exec format error

What could be the problem?

I re-flashed my TX1 using the flash.sh utility from the R28.1 L4T release package and the sample filesystem package. It has the same driver compiling problem, failing with the same message:
/bin/bash: scripts/basic/fixdep: cannot execute binary file: Exec format error

My driver, however, compiles without problem on the Ubuntu 16.04 x86 host machine.

It looks like the TX1 R28.1 release package has a problem with driver compiling.

How can I resolve the problem?

The exec format error sounds like a program built for the wrong architecture, e.g., a binary built for a PC trying to run on a Jetson, or vice versa.
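A quick way to confirm this kind of mismatch is the “file” command, which reports what architecture a binary was built for:

```shell
# "Exec format error" almost always means a binary built for one architecture
# was run on another. "file" shows what a binary targets:
file -L /bin/sh    # on a Jetson this reports "ELF 64-bit ... ARM aarch64"
# The same check on the failing helper tells you whether it is an x86 leftover:
if [ -e scripts/basic/fixdep ]; then file scripts/basic/fixdep; fi
```

An “x86-64” result for scripts/basic/fixdep on the Jetson would confirm the mismatch.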

Note that if you compiled a driver on the PC without using cross compile tools, you produced code for the PC, not the Jetson. Did you cross compile?
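For comparison, producing Jetson (aarch64) code on an x86 PC requires a cross toolchain; a sketch, where the toolchain prefix, kernel source path, and module directory are all assumptions:

```shell
# Cross-compile environment for building aarch64 kernel modules on an x86 host.
export ARCH=arm64
export CROSS_COMPILE=aarch64-linux-gnu-   # e.g. from the gcc-aarch64-linux-gnu package
KSRC="$HOME/kernel/kernel-4.4"            # hypothetical kernel source location
# Hypothetical out-of-tree module build against that source:
# make -C "$KSRC" M="$HOME/mydriver" modules
echo "building for $ARCH with ${CROSS_COMPILE}gcc"
```

Without ARCH and CROSS_COMPILE set, the same make invocation emits x86 code.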

No, I compiled my driver for x86 on my Ubuntu 16.04 x86 host machine, just to see whether the driver compiles correctly under Ubuntu 16.04.

On the Jetson TX1, the same driver failed to compile right after the machine was re-flashed to R28.1. I have another Jetson TX1 with R24.2.1, and the same driver compiles there without problem.

Please help me understand what the problem is with driver compiling on R28.1, right after re-flashing to R28.1. I need to fix it to continue my work.

Are you building with the full kernel source code on the Jetson? What is your starting config? It should preferably come from “/proc/config.gz”, followed by setting CONFIG_LOCALVERSION, and only then enabling the new module. So it depends on your starting config. Can you describe the steps you are taking?

I started building the driver right after flash.sh finished flashing the R28.1 image. I do not know whether I need to do anything after the reboot.

I just looked into the file /proc/config.gz; it has CONFIG_LOCALVERSION=“”.

Since I am new to using the TX1, please give me detailed instructions for configuring the system.

For kernel build the “Documentation” download for R28.1 has a kernel customization section. See:
https://developer.nvidia.com/embedded/linux-tegra

The basic concept is that a kernel has a configuration, and to build a modification or module you should start with the existing configuration. Consider for example what would happen if you want to build support for an ethernet device, but Internet Protocols were not enabled and the ethernet driver tried to call one of the protocol functions…it would end badly. Decompressing “/proc/config.gz” in some alternate location will give you a nearly exact match to the running kernel.

Every Linux system reports a version in response to the command “uname -r”. The prefix of that version is the kernel source version; the suffix comes from the setting of CONFIG_LOCALVERSION. So if “uname -r” says “4.4.38-tegra”, you know the source code was version 4.4.38 and that CONFIG_LOCALVERSION was “-tegra”. The reason config.gz is only “nearly exact” is that it leaves out CONFIG_LOCALVERSION. Using config.gz combined with setting CONFIG_LOCALVERSION gives an exact match.
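The decomposition above can be sketched in shell (the version string is hard-coded here for illustration; on the Jetson you would use $(uname -r)):

```shell
# How "uname -r" decomposes into source version plus CONFIG_LOCALVERSION.
KREL="4.4.38-tegra"          # example value of $(uname -r)
SRC_VER="${KREL%%-*}"        # kernel source version: 4.4.38
LOCAL_VER="-${KREL#*-}"      # CONFIG_LOCALVERSION:   -tegra
echo "modules expected under /lib/modules/${SRC_VER}${LOCAL_VER}/"
```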

You need an exact match because the kernel looks for modules at the following path (there may also be other checks on module versions):

/lib/modules/$(uname -r)/

Follow the instructions from the Documentation, but instead of configuring with one of the “_defconfig” targets, copy the decompressed “config.gz” in as “.config”. Then edit CONFIG_LOCALVERSION to match your system, and finally make the config change you were originally after.

You can typically build modules directly on the Jetson. The procedures in the Documentation are designed for cross compiling, but other than using a cross compiler, a native build is not much different from a cross compile. See also:
https://developer.ridgerun.com/wiki/index.php?title=Compiling_Tegra_X1_source_code

Because we want to build our drivers directly on the target machine, we would like to configure the TX1 so it can compile drivers natively.

From the information you provided in #13, we interpreted it as the following 3 steps:

  1. cp config.gz .config
  2. vi .config, change CONFIG_LOCALVERSION=“” to CONFIG_LOCALVERSION=“-tegra”
  3. reboot TX1

Please confirm if this is correct.

Don’t forget to “gunzip” the config:

cp /proc/config.gz .
gunzip config.gz
# Note that I use "cp" so the original is still available if you "make mrproper":
cp config .config
# Edit CONFIG_LOCALVERSION as you mentioned...
# Example for other config:
make nconfig

FYI, some of the text based configuration tools for a kernel require adding package “libncurses5-dev” (“sudo apt-get install libncurses5-dev”).

Some of the cross compile instructions use “O=/some/location” in commands. This is still advisable for a native build so you can build cleanly outside of the original source tree. What you’d skip is setting any kind of CROSS_COMPILE environment variable.
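A native-build sketch of that idea: keep the build output outside the source tree with O=, and simply don’t set CROSS_COMPILE (the source and output paths are assumptions):

```shell
# Out-of-tree build directory for a native kernel build on the Jetson.
KSRC=${KSRC:-/usr/src/kernel/kernel-4.4}   # hypothetical kernel source path
KOUT=${KOUT:-$HOME/kbuild}                 # build products land here, not in $KSRC
mkdir -p "$KOUT"
# zcat /proc/config.gz > "$KOUT/.config"   # start from the running kernel's config
# (cd "$KSRC" && make O="$KOUT" nconfig)   # set CONFIG_LOCALVERSION, then your change
```

“make mrproper” in the source tree then never touches your configured build output.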

I figured out the driver compiling problem. An extra step has to be done in /usr/src/linux-headers-4.4.38-tegra after re-flashing the TX1 with R28.1: run “make modules_prepare” there.

No change in .config is needed.
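For anyone hitting the same error, the fix looks like this as commands (the header path follows “uname -r” on R28.1; the driver directory is hypothetical):

```shell
# Rebuild the header tree's build helpers (scripts/basic/fixdep etc.) natively,
# then build the out-of-tree driver against those headers.
HDRS=${HDRS:-/usr/src/linux-headers-4.4.38-tegra}
if [ -d "$HDRS" ]; then
    sudo make -C "$HDRS" modules_prepare          # regenerates fixdep for aarch64
    make -C "$HDRS" M="$HOME/mydriver" modules    # hypothetical driver source dir
fi
```

This explains the original symptom: the shipped header tree contained helpers built for the wrong architecture, and modules_prepare rebuilds them on the Jetson itself.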