Xillybus Driver: Not built by default on TX2 and Xavier when it's built by default on x86. Why?

Hello,

I notice that the Xillybus kernel driver modules xillybus_core.ko and xillybus_pcie.ko, which are part of the default x86 Ubuntu kernel, are not built as part of the NVIDIA ARM Ubuntu kernel.

Is there a specific issue with this driver that makes it unbuildable or unusable on NVIDIA ARM platforms?

Please let me know when you get a moment.

Tnx!

johnu

You’ll find just about every module exists and is directly available for desktop distributions. Even if you have an NVIDIA graphics card, you’ll also get AMD and Intel drivers. You’ll get a lot of hard disk controller software for controllers you don’t have. Embedded systems simply cannot do that…an average PC user might have a 1TB drive, but even though a TX2’s 32GB drive is enormous compared to other embedded systems, it is still small. It isn’t NVIDIA platforms which differ, it’s embedded platforms in general (Ubuntu for embedded does not start with the same packages as Ubuntu for a desktop PC).

There is something more subtle to consider. I’ll start with 32-bit armhf systems (the Jetson TK1 is such a system). When the kernel is loaded into memory it has a base address. The combination of kernel modules and initial ramdisk has to be in physical memory as well…this isn’t managed by a memory controller. That space is just below the load address of the base of the kernel (the kernel might load somewhere like 0x8000000 as an example). The entire space of initial ramdisk and kernel modules, just below the base load address, must be reachable by a direct branch instruction. Exceeding that reach means undefined behavior and (if you are lucky) a message about unreachable code. On 32-bit armhf the limit is 32MB combined for the initrd and all kernel modules…by default it is split as 8MB of driver modules and 24MB of initial ramdisk. That’s tiny.
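As a side note on where that 32MB figure comes from: the 32-bit ARM B/BL instructions encode a signed 24-bit word offset, which works out to ±32MiB of reach. A quick sketch of the arithmetic in plain C (just illustrating the encoding, not kernel code):

```c
#include <stdio.h>

int main(void)
{
    /* A32 B/BL: a signed 24-bit immediate, shifted left 2 bits
     * (instructions are word aligned), so reach is +/- 2^23 words. */
    long long reach_bytes = (1LL << 23) * 4;  /* 33554432 bytes */
    printf("A32 direct branch reach: +/- %lld MiB\n", reach_bytes >> 20);
    return 0;
}
```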

On arm64 this increases to something like 128MB (I haven’t looked in a long time, but I think this is correct). This is the maximum total content of the initrd and modules combined. I don’t know what the x86_64 limit is, but arm64 is tiny compared to what a desktop PC architecture can do (it is the architecture itself which defines this). All of those extra/optional drivers you have on a PC are in the form of a module. If you were to add all of those modules, load them below the base address of the kernel, and especially if you also use an initrd, then some of that code is guaranteed to end up out of direct branch range. The consequences might be strange behavior, or it might bring the whole system down upon access.
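That 128MB figure also lines up with the instruction encoding: the AArch64 B/BL instructions carry a signed 26-bit word offset, i.e. ±128MiB. Same sort of sketch:

```c
#include <stdio.h>

int main(void)
{
    /* A64 B/BL: a signed 26-bit immediate, shifted left 2 bits,
     * so reach is +/- 2^25 words = +/- 128 MiB. */
    long long reach_bytes = (1LL << 25) * 4;  /* 134217728 bytes */
    printf("A64 direct branch reach: +/- %lld MiB\n", reach_bytes >> 20);
    return 0;
}
```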

Not including it by default is not the same as saying it is unbuildable or unusable. Just because it isn’t installed, don’t assume it isn’t supported. This is Linux. Developers must know how to add a driver…adding one as a module is a simple thing to do (provided you haven’t exceeded the reach of a direct branch instruction). Granted, in Windows there is some sort of automatic database where, if hardware is detected, it’ll try to download the driver…but even in Windows you don’t want to load drivers for, say, 10Gbit Ethernet cards you don’t have, video cards you don’t have, and so on. Systems with limited resources simply can’t add everything, and despite not being automatic, it isn’t too difficult to add what you need.

The documentation with the L4T release provides some kernel building information, but you’re always free to ask more about this if you are unsure of something. Building and installing a module is usually much easier than installing an entire kernel, but there are a few details to know ahead of time if you want to simplify your life.
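As a rough illustration of how small the “add a module” step is, here is a minimal out-of-tree module skeleton (the file name hello_mod.c and the messages are made up for the example; any driver module follows the same init/exit pattern, just with real driver logic):

```c
/* hello_mod.c -- minimal out-of-tree kernel module sketch.
 * Hypothetical example for illustration, not part of Xillybus.
 * Build against the running kernel's headers with something like:
 *   make -C /lib/modules/$(uname -r)/build M=$PWD modules
 * then load with insmod/modprobe and watch dmesg.
 */
#include <linux/init.h>
#include <linux/module.h>

static int __init hello_init(void)
{
    pr_info("hello_mod: loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello_mod: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example module");
```

The matching kbuild Makefile is a single line, obj-m += hello_mod.o, and the build command in the comment compiles it against the headers for the running kernel.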

Hello and thank you for the thoughtful response!

I posted the question preemptively so that if the build died I would hopefully have a FAQ-style answer/solution awaiting me on the forum. :-)

Suffice it to say I have an ARM xillybus_core.ko and xillybus_pcie.ko built.

What’s unexpected and insightful about your post is the context you provided about how little boot image space is available on ARM/AArch64.

Years of ever-increasing boot real estate on x86 have taken boot space off the list of things to worry about. :-)

But migrating existing x86 paradigms to ARM keeps surprising me, usually not in a good way.

I greatly appreciate your info on this and will keep it in mind as we spec out our OS image.