Jetson TX1 24.1 release: need help with compiler directions, cannot compile

I just downloaded the new 24.1 kernel to try it out, since I am having issues trying to do what I need in the earlier version. I was able to figure out how to build the 23.2 kernel using the docs at Compiling Tegra X1/X2 source code - RidgeRun Developer Connection,
since I could not understand the supplied directions from NVIDIA.

I am again at the very frustrating point of trying to figure out how to build the 24.1 kernel source.

I am trying to go over the support document for 24.1
file:///home/jay/jetson/kernel/doc_24_1/nvl4t_docs/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide/l4t_overview.html

Under Building the NVIDIA Kernel it says to use the Jetson TX1 toolchains, and under Building the Toolchain it says to use jetson-tx1-toolchain-build.tbz2, but it does not tell you where or how to find it.
I finally did find it, but the path is not clearly given in the doc.

Next it has you untar jetson-tx1-toolchain-build.tbz2 and says to follow the supplied README in each of the created directories.

At the bottom of the README files it says:

Update tar:
You need a version of tar that supports .xz format, so 1.22 or later.
Version 1.28, the latest as of this writing, works just fine.
$ cd $TOP/build

The only issue is that the README never says what $TOP should be set to, and I do not see any directory called build…
so I cannot complete the directions in the README, since I have no idea what to export $TOP as.

So why can't the good folks at NVIDIA supply a good set of directions, rather than refer me to this README in the toolchain build that seems to make no sense at all as to what to do?
Stuck here again I guess.

Can anyone explain what the README directions mean in terms of $TOP/build for the toolchain-build-aarch64 and toolchain-build-armhf directories that are created?

Lastly, in the NVIDIA directions for 24.1, under setting up the toolchain, it says:

Host System Requirements
System requirements for the Ubuntu host systems includes:
•Ubuntu 10.04 32-bit distribution (64-bit distribution is not supported for building the toolchain)

Does this mean that Ubuntu 10.04 64-bit, which is what I think most people would be using, does not work at all for building the kernel with this toolchain? Can anyone at NVIDIA explain why they cannot find a method of compiling on Ubuntu 10.04 64-bit, if this is true?
(Why tell me at the bottom of the directions, after setup, rather than right at the top of the document, that only Ubuntu 10.04 32-bit is supported, if that is true?)

Is there a method to build the new 24.1 kernel on an Ubuntu 10.04 64-bit Intel system at all?

Has someone built the new 24.1 kernel and written an understandable set of end-user directions, other than what NVIDIA has?

I am quite willing to go over each step with someone at NVIDIA (me being a very dumb end user who cannot seem to read directions for some reason), if NVIDIA would take the feedback and write some directions that someone like me could follow.

Thanks
Jayd

I have yet to go over any of the new stuff, but $TOP almost always refers to the top-level directory of the source tree, the one closest to "/". "Top level" is a good name (a.k.a. "top") because a "cd .." from there puts the user in a directory no longer related to the software.
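As a purely hypothetical illustration (I have not read that particular README, so the exact directory is a guess), the intent is usually something like:

export TOP=$HOME/jetson-tx1-toolchain-build/toolchain-build-aarch64
cd $TOP/build    # "build" may only exist after the README's earlier steps have created it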

I also have not gone over toolchains for R24.1, but these are almost always good to get from Linaro…there may be older chains listed as well. At least through R23.2 I've been using Linaro 5.2. See the pre-built tools at the link below (5.3 releases are probably good too):
[url]https://releases.linaro.org/components/toolchain/binaries/[/url]
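If it helps, unpacking the 5.2 binaries is just a tar step; I put mine under /usr/local (the file names below are a guess based on the directory names I use later in this thread, so check what you actually downloaded):

sudo tar -C /usr/local -xf gcc-linaro-5.2-2015.11-x86_64_aarch64-linux-gnu.tar.xz
sudo tar -C /usr/local -xf gcc-linaro-5.2-2015.11-x86_64_arm-linux-gnueabihf.tar.xz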

No doubt more information will be available in the next day or so.

So far as Ubuntu toolchains go, I believe the cross-compile tools of Ubuntu 10.04 predate the aarch64 architecture entirely. aarch64 is of course 64-bit, so a 32-bit environment could have some difficulties. The Linaro 4.x series was basically the thing to use during ARMv7 32-bit days, but now ARMv8-a 64-bit and ARMv8 32-bit are used…the older tools (relative to ARMv8+) were either completely missing needed features or very primitive in their support.

Hi

So I had the same problem: previously my build environment worked for L4T r23.1 and r23.2 with a prebuilt arm64 Linaro toolchain and the default gcc ARM cross-compiler in Ubuntu 14.04. But with r24.1 my self-compiled kernel (tag tegra-l4t-r24.1 from nv-tegra.nvidia Code Review - linux-3.10.git/summary) did not want to boot:

...
[    0.000000] Call trace:
[    0.000000] Code: bad PC value

With the 64-bit userspace it seems to be necessary to use the toolchains as described by NVIDIA in the documentation http://developer.nvidia.com/embedded/dlc/l4t-documentation-24-1 (-> Building the NVIDIA Kernel).

Now, unfortunately, the documentation is a bit confusing.
First of all, it says the files for building the toolchain can be downloaded from jetson-tx1-toolchain-build.tbz2, but this link is (currently?) not present on the L4T website. Instead, the tar file can be found in the documentation you already have: (Tegra_Linux_Driver_Package_Documents_R24.1/nvl4t_docs/Tegra Linux Driver Package Development Guide/baggage/jetson-tx1-toolchain-build.tbz2).
Secondly, it should be noted that the documentation describes two processes: "Building the Toolchain", which is easily done with a shell script, and "Building the Toolchain Suite", which gives the same end result but is more complicated (and apparently requires a 32-bit host).

So take the easy route and build the toolchains with the shell scripts (make-*-toolchain.sh); there is no need to set up an Ubuntu 10.04 32-bit host. The shell scripts have to be executed for both the AArch64 toolchain and the armhf toolchain. Just follow the README files in the toolchain-build-aarch64 and toolchain-build-armhf directories. I built the toolchains on my Ubuntu 14.04 64-bit host. (The only problem was that I had to empty the LD library path:)

export LD_LIBRARY_PATH=

After both shell scripts finish with Success!, you will find the finished toolchains in toolchain-build-armhf/install/bin and toolchain-build-aarch64/install/bin.
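For reference, the whole toolchain step on my host boiled down to roughly the following (the untar location is arbitrary, and the script names follow the make-*-toolchain.sh pattern, so check each directory's README for the exact names and locations):

tar xjf jetson-tx1-toolchain-build.tbz2
cd jetson-tx1-toolchain-build
export LD_LIBRARY_PATH=
(cd toolchain-build-armhf && ./make-armhf-toolchain.sh)
(cd toolchain-build-aarch64 && ./make-aarch64-toolchain.sh)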
Finally set the environment variables for building L4T:

export ARCH=arm64
export CROSS32CC=/YOURPATH/jetson-tx1-toolchain-build/toolchain-build-armhf/install/bin/arm-unknown-linux-gnueabi-gcc
export CROSS_COMPILE=/YOURPATH/jetson-tx1-toolchain-build/toolchain-build-aarch64/install/bin/aarch64-unknown-linux-gnu-

Build the Image, tegra210-jetson-tx1-p2597-2180-a01-devkit.dtb, and the modules, and place them all on your TX1.
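For completeness, with the variables above exported, the kernel build commands themselves look something like this from the kernel source directory (adjust the -j value for your machine):

make tegra21_defconfig
make -j4 zImage
make -j4 dtbs
make -j4 modules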

Edit: It was also necessary for me to apply these two patches to be able to build the kernel:

https://devtalk.nvidia.com/default/topic/894945/jetson-tx1/?offset=152#4744794

I built the toolchain but got the following errors when building the kernel:

/home/ubuntu/L4T/l4t24.1/toolchain/toolchain-build-aarch64/install/bin/aarch64-unknown-linux-gnu-ld: cannot find libgcc.a: No such file or directory
/home/ubuntu/L4T/l4t24.1/toolchain/toolchain-build-aarch64/install/bin/aarch64-unknown-linux-gnu-ld: cannot find libgcc.a: No such file or directory

Update:
I found the problem was a missing gawk for the toolchain build. I was able to use a laptop with Ubuntu 14.04 to build the 64-bit compiler and libs but failed to build the 32-bit libs.
It appears L4T 24.1 doesn't need the 32-bit libs; I was able to build the r24.1 kernel using the 64-bit and 32-bit compilers and the 64-bit libs.
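For anyone else hitting this, two quick checks that would have saved me time (the find path is relative to wherever you unpacked the toolchain build):

command -v gawk || sudo apt-get install gawk
find toolchain-build-aarch64/install -name libgcc.a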

I can confirm there is still a need for the 32-bit compiler here, likely because "arch/arm64/kernel/vdso32/" still names the 32-bit tool for its build (note that this is invoked from the 64-bit kernel build, yet the tool name says 32-bit C):

VDSO32C arch/arm64/kernel/vdso32/vgettimeofday.o

Using cross-compilation with Linaro 5.2-2015.11, the following should get the job done:
File "drivers/base/Kconfig" should be edited: line 234 has a quoted string whose trailing quote is missing. Add the quote back in and the config stage should work. There are other dependency issues, but they do not seem to be fatal.

The old bug at "drivers/platform/tegra/tegra21_clocks.c:1065" still shows up on newer compilers; edit the line to instead read:

c->state = ((!is_lp_cluster()) == (c->u.cpu.mode == MODE_G)) ? ON : OFF;

The old issue with frame pointers also shows up if not optimizing (and the default is not to). In the top-level Makefile, edit the empty KBUILD_CFLAGS_KERNEL to be:

KBUILD_CFLAGS_KERNEL := -fomit-frame-pointer

My recipe for cross-compiling on Fedora 23, using Linaro 5.2-2015.11 installed at "/usr/local", is as follows. Decide where your "${L4T_OUT}" will be, make sure it exists, and export L4T_OUT (there is a small setup example after the recipe); also set CONFIG_LOCALVERSION. This computer has 4 cores so -j4; adjust for your system:

export CROSS_COMPILE=/usr/local/gcc-linaro-5.2-2015.11-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-
export CROSS32CC=/usr/local/gcc-linaro-5.2-2015.11-x86_64_arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc
export ARCH=arm64
export TEGRA_KERNEL_OUT=${L4T_OUT}/stage
export TEGRA_MODULES_OUT=${L4T_OUT}/modules
export TEGRA_FIRMWARE_OUT=${L4T_OUT}/firmware
make mrproper
make O=$TEGRA_KERNEL_OUT mrproper
make O=$TEGRA_KERNEL_OUT tegra21_defconfig
make O=$TEGRA_KERNEL_OUT menuconfig
make -j4 O=$TEGRA_KERNEL_OUT zImage
make -j4 O=$TEGRA_KERNEL_OUT dtbs
make -j4 O=$TEGRA_KERNEL_OUT modules
make O=$TEGRA_KERNEL_OUT modules_install INSTALL_MOD_PATH=$TEGRA_MODULES_OUT
make O=$TEGRA_KERNEL_OUT firmware_install INSTALL_FW_PATH=$TEGRA_FIRMWARE_OUT
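To be explicit about the L4T_OUT setup mentioned before the recipe, something like this is done first (the path is only an example, and CONFIG_LOCALVERSION can be set from menuconfig):

export L4T_OUT=$HOME/l4t_r24.1_out
mkdir -p $L4T_OUT/stage $L4T_OUT/modules $L4T_OUT/firmware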

Linaro precompiled binaries for those who do not have Ubuntu packages:
https://releases.linaro.org/components/toolchain/binaries/

I have not gotten to the point of testing 64-bit handling of vgettimeofday (the VDSO32C 32-bit tool being used under arch/arm64), but if this is fixed there is a chance of building without any 32-bit tools…time will tell.

Thanks for the feedback; sorry to hear others are having difficulties also.

My original comment was in regard to the 24.1 documentation presented in file:///home/jay/jetson/kernel/doc_24_1/nvl4t_docs/index.html, which has errors and lacks clarity regarding building the kernel and getting the compiler tools. Simple things such as the directory tree being collapsed do not help in finding the various sections.

In the getting started section, under Building the NVIDIA Kernel, Step 1 lacks clarity by intermixing examples of two compilers in the 'where' area.
It would be simpler to show each compiler case separately, with a comment about adding the gcc suffix to the 32-bit one and not the 64-bit one.

That is, give the example for Linaro GCC 4.8 and above:

export CROSS_COMPILE=<linaro_install>/aarch64-unknown-linux-gnu/bin/aarch64-unknown-linux-gnu-
export CROSS32CC=<linaro_install>/arm-unknown-linux-gnu/bin/arm-unknown-linux-gnueabi-gcc
export ARCH=arm64
export TEGRA_KERNEL_OUT=${L4T_OUT}/stage
and give one just like this for the CodeSourcery toolchain.

Next in Step 2
It says cd /<kernel_source>, and then later says "Where <kernel_source> directory contains the kernel sources." Then, a few lines below that, you see "Where is the parent of the Git root", which sits in the make O=$TEGRA_KERNEL_OUT tegra21_defconfig section and has no relationship to it. It is simply in the wrong order, but what does that tell the reader? No one proofread this.

Then in Step 7
It refers to Step 4 in relation to kernel modules, when Step 4 is actually the dtb build and Step 5 is the module build.

Going on, it is nice that this kernel build section gives you a link to jump to the Jetson TX1 toolchains subsection of the document. But the toolchains section does not have a nice link to take you to the Downloads section, which one may not know contains the download button for jetson-tx1-toolchain-build.tbz2. Why is there not a link just like the one that took me here to get me to the Downloads section? It took me a while to see that there was a section called Downloads that did have a link to it.

Next, in loading the given toolchains, I found the provided README files in the downloads lacked some clarity about what they were directing one to do. In this case the NVIDIA document should really take the time to clarify some of the steps in the toolchain README files; if a third-party tool document is referred to, it should be reviewed to make sure almost anyone can understand it as well.

Anyway, errors and a lack of clear documentation will certainly keep things out of any government programs.

In compiling using aarch64-unknown-linux-gnu and arm-unknown-linux-gnueabi I got a gcc.a 'something not found' error at the end, where it starts to build the Image. I think I am missing something in the setup.

I went back and used aarch64-linux-gnu- and arm-linux-gnueabihf-gcc, which, after fixing the vdso32/Makefile and drivers/platform/tegra/tegra21_clocks.c, did build the Image.

I have not tried to compile with aarch64-unknown-linux-gnu again yet.

Anyway, in the 24.1 build with aarch64-linux-gnu- and arm-linux-gnueabihf-gcc, I found that in Ubuntu
I could not set myself up as a user; the display comes up trashed.

Also, in my application I have a custom PCIe DMA driver, which in 23.2 I got working for write/read file DMA operations, but it did not work for IO control commands. In 24.1 the IO control commands work, but it appears not to allocate the DMA memory correctly for write/read, hence it no longer does PCIe DMA ops.
Since the Jetson TX1 does have a PCIe port, I would expect someone to test it on each rev to make sure it still works, which is perhaps too much to ask. There also appears to be no white paper or app note for PCIe, which would help clarify how the NVIDIA kernel and hardware allocate memory for PCIe-based DMA.

All in all, there are many ARM chip manufacturers that do a very good job of supporting and documenting their development boards. Others still need a lot of work to be brought up to par with end-user expectations.

Jayd

We were not able to get the driver for an FPGA card working under R23.1; I thought it was due to the 32-bit user space. I'm sorry to hear there are issues with PCIe DMA under R24.1.

It'd be a showstopper for our projects if we cannot interface the FPGA to the TX1.

Yahoo2016,

The solution in 23.2 for PCIe DMA was to add vmalloc=256M cma=128M coherent-pool=96M to the boot parameters in the /boot/extlinux/extlinux.conf APPEND section, to get dma_alloc_coherent to work for PCIe DMA through the file system IO write/read calls.
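For reference, I simply appended the three parameters to the end of the existing APPEND line in extlinux.conf; the leading arguments are whatever your stock entry already has, so do not remove them:

APPEND <existing boot arguments> vmalloc=256M cma=128M coherent-pool=96M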

In 24.1 this no longer appears to work, and neither does the ability to set up a new user without the display going bad.

Vendors of development boards need to test firmware/software revs against the hardware to make sure everything still works; if something has changed and requires a workaround, then it needs to be properly documented by the development board vendor for the end user.
I will need to go back and use my Xilinx Zynq based PCIe bus master solution, in which everything works well, and drop the Jetson TX1 as a workable design for doing any custom PCIe interface.

I am still hoping a person from NVIDIA might respond to this, with regard to caring about their products and their end users. Do they??

Jayd

Jayd,
Thanks for sharing your workaround for the DMA issue. I was able to install the FPGA driver but not to load it under R23.1. I'll try building the driver natively under 24.1; if that doesn't work, I'll try a cross build. The 32-bit user space is hard to work with.
We are actually trying to interface a Zynq to the TX1 to use the GPUs. Our project needs the GPUs. I'll post progress after installing R24.1.

After installing L4T 24.1 and running "sudo make modules_prepare" in "/usr/src/linux-headers-3.10.96-tegra", I was able to natively build and install the driver for our FPGA card and run "modprobe fpga_driver".
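Roughly what I did, in case it helps someone else; the driver source directory and module name are specific to our FPGA card, so treat them as placeholders:

cd /usr/src/linux-headers-3.10.96-tegra
sudo make modules_prepare
cd ~/fpga_driver_src        # placeholder for wherever the out-of-tree driver source lives
make -C /usr/src/linux-headers-3.10.96-tegra M=$PWD modules
sudo make -C /usr/src/linux-headers-3.10.96-tegra M=$PWD modules_install
sudo depmod -a
sudo modprobe fpga_driver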

I do not have the FPGA card today; I will try register and memory access tomorrow.

I am still experimenting with the 24.1 build to see if I can get it to run right, and am certainly going back to linuxdev's comment:
File "drivers/base/Kconfig" should be edited: line 234 has a quoted string whose trailing quote is missing. Add the quote back in and the config stage should work. There are other dependency issues, but they do not seem to be fatal.

This is the Kconfig for building the DMA coherent allocation code, which, if not correct, looks like the reason I could not get PCIe DMA to work under 24.1 on the TX1.
Make sure to fix this. I have not yet pushed an Image built with this fix onto the Jetson TX1 to try. (Someone added a new set of CONFIG_… lines to the given Kconfig around line 234 between the 23.2 and 24.1 versions.)
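A quick way to double-check that the CMA-related options actually land in the generated kernel config after fixing the Kconfig, using the $TEGRA_KERNEL_OUT directory from the recipe earlier in the thread:

grep 'CONFIG_CMA' $TEGRA_KERNEL_OUT/.config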

We are after the same idea, in which I would potentially use the TX1 GPU cores with, say, MATLAB tools in conjunction with an FPGA over PCIe, with DMA and control.
Note, I am more of an FPGA guy with some good C/Linux hacker skills, somewhat retired after too many years in R&D.

What I have been using is an Avnet Mini-ITX board with a Zynq 045 (see Zedboard.org and the PCIe master PDF docs for it), set up as a root complex running under the Analog Devices Linux build. I have another Zynq ZC706 board set up as a PCIe endpoint using Xilinx CDMA, with a Linux driver I managed to hack together from various sources. Note that for my build under the ADI Linux from their GitHub site, on the Zynq root complex side I had to set CONFIG_CMA_ALIGNMENT=10 in .config for my 4M DMA buffers to be aligned correctly.

Let me see in the next day or so if I can figure out the various Jetson TX1 issues with 24.1.

Jayd

@jayds,
Hi, when you say that dma_alloc_coherent doesn't work in the 24.1 release, do you mean that the API fails to allocate memory itself, or that it allocates memory but your PCIe endpoint device is not able to read/write it?

I am not quite sure what 24.1 is doing. From the driver probe print output I have, it says it is allocating the requested virtual/physical DRAM areas, and it looks just like 23.2. It also makes the DMA write/read calls, but if I look at the data, it appears to be sending zeros over the PCIe bus. Either the bus translation interconnect between DRAM and PCIe is not getting set up, or it did not really do the DMA coherent allocation request right; pick one.
I did a diff between 24.1 and 23.2 and can see in arch/arm64/mm and fs that several areas of code dealing with memory allocation have changed between revs. It is all a little too complex for me to figure out which changes may have broken the memory allocation or the connection between the PCIe bus and the DRAM bus.

In the char device handling in 23.2, I found that setting .unlocked_ioctl in file_operations to my ioctl handler did not work, but it did in 24.1. Going back to 23.2, I was able to use .compat_ioctl in file_operations for my ioctl handler and it worked.
The main thing to add in 23.2 for DMA memory allocation is bootargs of the form vmalloc=256M cma=128M coherent-pool=96M to get the allocation request to work.
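If it helps anyone reproducing this, the quickest check I know of that the bootargs and the CMA reservation actually took effect after a reboot is:

cat /proc/cmdline
dmesg | grep -i cma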

Jayd

Hi,

I just compiled 24.1 with cross-compilers under Ubuntu 16.04 and I've been seeing a lot of pointer mismatches that look like they were missed in the 32- to 64-bit kernel conversion. Is anybody running into that as well?

bill

I know nothing of the specific issue, but any notes on finding a specific pointer mismatch would be quite useful for debugging purposes.

There are a bunch of assignment/comparison warnings which are what concern me, but many of them are of the printk variety, which are largely harmless. Is there a git repo from which the most recent sources can be cloned and to which I can submit patches?

bill

There is this:
[url]http://nv-tegra.nvidia.com/gitweb/?p=linux-3.10.git[/url]

How current is that source tree? The last commit was tagged about 4 weeks ago.

bill

I do not know how often that repo is updated. Any patch would need to be against the R24.1 kernel anyway.

Out of curiosity… I have a patch file I apply to the kernel source that makes it compile and adds support for the DUO MLX. Anyway, I skipped the section that added -fomit-frame-pointer, and the kernel build obviously errored out. I went to apply it manually and noticed that they used -march=armv7-a. Aren't we meant to use -march=armv8-a? Again, I am a total noob in this department. It seems good so far; I have not yet deployed, still building.

The hardware is capable of both 32-bit and 64-bit, the 32-bit being ARMv8 and the 64-bit ARMv8-a. Although I'm not sure how much compatibility there is between 32-bit ARMv8 and 32-bit ARMv7, I suspect there are significant amounts of code which are directly compatible (perhaps with a recompile, perhaps even binary compatible). I have not tested this, but the 32-bit code might work if you change from -march=armv7 (was this really armv7-a, or just armv7?) to whatever the ARMv8 architecture name is under your compiler (I'm not somewhere I can check this right now). R24.1 is new enough that I'm not absolutely certain how ARMv7, ARMv8, and ARMv8-a might mix and remain compatible. The "-fomit-frame-pointer" should apply to all of those architectures.
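If you want to see exactly which -march values the 32-bit vdso build picks up (the path is the one mentioned earlier in this thread), a quick grep over the kernel source shows it:

grep -rn 'march=' arch/arm64/kernel/vdso32/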