Cross compile CUDA engines

Is it possible to cross compile CUDA engines between platforms? I have a Titan RTX two feet away and a cloud of GPUs I can use on demand. It really slows down the iterative process waiting 30 minutes to an hour every time I want to test something on the Nano.

Hi,

Is it pure CUDA kernel code or a TensorRT program?

Cross compiling pure CUDA kernel code is supported.
You can do this with an ARM cross compiler and pass the Nano's compute capability 5.3 (sm_53) to nvcc.
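As a minimal sketch (the file name and the aarch64-linux-gnu-g++ toolchain are placeholders, and this assumes the CUDA cross-compilation packages for aarch64 are installed on the x86 host):

// add_one.cu - trivial kernel, used only to demonstrate the cross-build flags
#include <cstdio>

__global__ void add_one(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;   // increment each element
}

int main() {
    std::printf("binary built for sm_53\n");   // host-side launch omitted for brevity
    return 0;
}

Then cross compile on the host with the aarch64 toolchain, targeting the Nano's compute capability, and copy the binary over:

nvcc -ccbin aarch64-linux-gnu-g++ -gencode arch=compute_53,code=sm_53 -o add_one_aarch64 add_one.cu

Exact include and library paths depend on how the cross toolkit is installed on your machine.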

But please note that a TensorRT engine cannot be used across platforms.
The TensorRT PLAN needs to be created directly on the target system.
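For reference, here is a rough sketch of generating the plan on the Nano itself, assuming a TensorRT 8.x release from a recent JetPack and an ONNX model exported from your training machine (the file names are placeholders; error checking and cleanup are omitted):

// build_plan.cu - compile on the Nano, e.g. nvcc build_plan.cu -o build_plan -lnvinfer -lnvonnxparser
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cstdio>
#include <fstream>

// TensorRT requires a logger implementation.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
    }
} gLogger;

int main() {
    auto builder = nvinfer1::createInferBuilder(gLogger);
    auto network = builder->createNetworkV2(
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));
    auto parser  = nvonnxparser::createParser(*network, gLogger);

    // Parse the ONNX model copied over from the training machine.
    parser->parseFromFile("model.onnx",
        static_cast<int>(nvinfer1::ILogger::Severity::kWARNING));

    auto config = builder->createBuilderConfig();

    // Build and serialize the engine on this device; the resulting plan is
    // only valid for this GPU / TensorRT version combination.
    auto plan = builder->buildSerializedNetwork(*network, *config);

    std::ofstream out("model.plan", std::ios::binary);
    out.write(static_cast<const char*>(plan->data()), plan->size());
    return 0;
}

You could also just copy the ONNX model to the Nano and let trtexec (shipped with TensorRT) do the same job with --onnx=model.onnx --saveEngine=model.plan. Either way, the serialization step has to run on the device whose GPU and TensorRT version will execute the engine.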

Thanks.