Cross-compiling a program using OpenCV with JetPack 2.3.1

Hello,
I have a problem with cross compilation on Linux with JetPack 2.3.1. The problem appears when linking against the OpenCV libraries, even for simple functions like putText.
Generally, for building with the cross compiler, I copy all required shared objects (libopencv_*.so*) from the board to the Linux host and give the linker the path to the directory containing those files.
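For reference, the copy step looks roughly like this (the board address and the source path are just examples; adjust them to wherever opencv4tegra installs its libraries on the board):

# copy the OpenCV shared objects from the Jetson board into a dedicated directory on the host
scp ubuntu@<board-ip>:/usr/lib/libopencv_*.so* /usr/aarch64-linux-gnu/lib/Opencv8/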
I prepared two directories on the host:

  1.   Opencv8, with libopencv_*.so* copied from the board with the latest version of JetPack (2.3.1)
    
  2.   Opencv7, with libopencv_*.so* copied from the board with the previous version of JetPack (L4T 24.1, JetPack 2.2.1)
    

When I set the linker's library path to /usr/aarch64-linux-gnu/lib/Opencv8, I receive the linker error "undefined reference to cv::putText".
When I set the linker's library path to /usr/aarch64-linux-gnu/lib/Opencv7, linking works fine.

The linker command is:

[b]Building target: Test
Invoking: NVCC Linker
/usr/local/cuda-8.0/bin/nvcc --cudart shared -L/usr/aarch64-linux-gnu/lib/Opencv8 -Xlinker --unresolved-symbols=ignore-in-object-files -Xlinker --unresolved-symbols=ignore-in-shared-libs --relocatable-device-code=false -gencode arch=compute_52,code=compute_52 -gencode arch=compute_52,code=sm_52 -m64 -ccbin aarch64-linux-gnu-g++ -link -o "Test" ./src/Test.o -lopencv_core

./src/Test.o: In function `main':
/media/sf_GPUShared/Test/Debug/../src/Test.cpp:24: undefined reference to `cv::putText(cv::Mat&, std::string const&, cv::Point_<int>, int, double, cv::Scalar_<double>, int, int, bool)'

collect2: error: ld returned 1 exit status
make: *** [Test] Error 1
[/b]

This is the code:

#include "stdio.h"
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>

using namespace std;
using namespace cv;


int main(void)
{
    puts("Start");
    cv::Size size = cv::Size(1000, 800);
    cv::Mat img = cv::Mat::zeros(size, CV_8UC3);
    // Add text
    putText(img, "HELLO", Point(10, 80), CV_FONT_HERSHEY_COMPLEX_SMALL, 4, Scalar(80, 255, 80), 8, 4);
    puts("Stop");
    return 1;
}

It looks like your build command is missing libopencv_highgui. Try adding

-lopencv_highgui

and check that libopencv_highgui.so is available.
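For example, based on the link command from your log, with only the extra library appended:

/usr/local/cuda-8.0/bin/nvcc --cudart shared -L/usr/aarch64-linux-gnu/lib/Opencv8 -Xlinker --unresolved-symbols=ignore-in-object-files -Xlinker --unresolved-symbols=ignore-in-shared-libs --relocatable-device-code=false -gencode arch=compute_52,code=compute_52 -gencode arch=compute_52,code=sm_52 -m64 -ccbin aarch64-linux-gnu-g++ -link -o "Test" ./src/Test.o -lopencv_core -lopencv_highgui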

cv::putText is declared in core.hpp, and the same code linked fine with the other version of OpenCV. Maybe NVIDIA moved the function from one library to another?

Sorry for the confusion, that was because of the highgui.hpp include, which is probably unnecessary in your sample code.
I think cv::putText has moved into imgproc in OpenCV 3. Are you using opencv4tegra (2.4) or OpenCV 3?

Could you also check for differences between your library folders, especially the permissions, which may have been modified depending on how they were copied to your host?
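For example, something like:

# compare file names, sizes and permissions of the two copies
ls -l /usr/aarch64-linux-gnu/lib/Opencv7 /usr/aarch64-linux-gnu/lib/Opencv8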

You are right, highgui.hpp is unused. I use opencv4tegra. I compared both folders of OpenCV libraries.
They contain the same files, with different sizes and the same permissions. I copied them with scp and ran chmod on them. Tomorrow I will try adding the imgproc library to the linker.

I added the imgproc and highgui libraries to the linker, but this doesn't help.

How did you compile Test.o? Is it the same file that you are trying to link against both versions of the JetPack libs?
Have you tried compiling it against the opencv4tegra includes of the latest JetPack (and CUDA 8's includes instead of CUDA 7's from JetPack 2.2.1)?

Furthermore, this may be unrelated, but you are compiling for CUDA arch 5.2, while for TX1 it should be 5.3.
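For example, your link command with the TX1 architecture would be (only the gencode values changed):

/usr/local/cuda-8.0/bin/nvcc --cudart shared -L/usr/aarch64-linux-gnu/lib/Opencv8 -Xlinker --unresolved-symbols=ignore-in-object-files -Xlinker --unresolved-symbols=ignore-in-shared-libs --relocatable-device-code=false -gencode arch=compute_53,code=compute_53 -gencode arch=compute_53,code=sm_53 -m64 -ccbin aarch64-linux-gnu-g++ -link -o "Test" ./src/Test.o -lopencv_core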

It also seems that, for linking, it looks for a prototype with one extra parameter of type bool that is not provided in your code. You could check in both versions of the OpenCV includes whether it used to be overloaded with a default value and no longer is, or try passing the bool parameter explicitly in your call, something like that…

Test.o is the same file; I only change the library path in the Nsight Eclipse environment. The includes are from the latest JetPack.
About CUDA arch 5.2: I read on some forum that this could be a solution and tried it. I also tried 5.3.
In both versions of OpenCV the declaration of the function is identical and has a default parameter.

The error message from the linker shows that the default parameter has been found, so my suggestion was not helpful.
I cannot help much more, but you can look at what is exported by both libopencv_core.so files with

nm -D libopencv_core.so | grep putText
or
objdump -T libopencv_core.so | grep putText

Looking at mine, it looks like it was compiled with -std=c++11. Not sure this is better advice than the previous suggestions, but you may try recompiling with the -std=c++11 flag if it is not already used.
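For example (file names here are just for illustration), the flag could be added to the nvcc compile step, which forwards it to the host compiler:

/usr/local/cuda-8.0/bin/nvcc -std=c++11 --compile -m64 -ccbin aarch64-linux-gnu-g++ -x c++ -o Test.o Test.cpp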

I inspected the two versions of the shared object. This is the result:

[b]ubuntu@L4TX1U14045:/usr/aarch64-linux-gnu/lib/opencv8$ nm -D libopencv_core.so |grep putText

0000000000087d60 T _ZN2cv7putTextERNS_3MatERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEENS_6Point_IiEEidNS_7Scalar_IdEEiib

ubuntu@L4TX1U14045:/usr/aarch64-linux-gnu/lib/opencv7$ nm -D libopencv_core.so |grep putText

00000000000d0e40 T _ZN2cv7putTextERNS_3MatERKSsNS_6Point_IiEEidNS_7Scalar_IdEEiib
[/b]
There is a difference between the versions that I don't know how to explain.
I also tried to compile with the C++11 option. This is the output with the error message:
[b]
make all
Building file: ../src/Test.cpp
Invoking: NVCC Compiler
/usr/local/cuda-8.0/bin/nvcc -G -g -O0 -ccbin aarch64-linux-gnu-g++ -gencode arch=compute_52,code=sm_52 -m64 -odir "src" -M -o "src/Test.d" "../src/Test.cpp"

/usr/local/cuda-8.0/bin/nvcc -Wno-deprecated-gpu-targets -G -g -O0 --compile -m64 -ccbin aarch64-linux-gnu-g++ -x c++ -o "src/Test.o" "../src/Test.cpp"

Finished building: ../src/Test.cpp

Building target: Test
Invoking: NVCC Linker
/usr/local/cuda-8.0/bin/nvcc --cudart shared -L/usr/aarch64-linux-gnu/lib -Xlinker --unresolved-symbols=ignore-in-object-files -Xlinker --unresolved-symbols=ignore-in-shared-libs --relocatable-device-code=false -gencode arch=compute_52,code=compute_52 -gencode arch=compute_52,code=sm_52 -m64 -ccbin aarch64-linux-gnu-g++ -link -o "Test" ./src/Test.o -lopencv_core

./src/Test.o: In function `main':
/media/sf_GPUShared/Test/Debug/../src/Test.cpp:24: undefined reference to `cv::putText(cv::Mat&, std::string const&, cv::Point_<int>, int, double, cv::Scalar_<double>, int, int, bool)'

collect2: error: ld returned 1 exit status
make: *** [Test] Error 1

08:31:54 Build Finished (took 2s.726ms)
[/b]

I did some more experiments.
I compiled my code on the host with the cross compiler, then copied the object file to the board and tried to link it on the board.
The error received was the same as on the host:
root@tegra-ubuntu:/home/ubuntu/test# /usr/local/cuda-8.0/bin/nvcc -o test8.out -G -g -O0 -Wno-deprecated-gpu-targets -std=c++11 -lopencv_core test8.o

test8.o: In function `main':
/media/sf_GPUShared/Test/Debug/../src/Test.cpp:24: undefined reference to `cv::putText(cv::Mat&, std::string const&, cv::Point_<int>, int, double, cv::Scalar_<double>, int, int, bool)'

collect2: error: ld returned 1 exit status

So the problem is not in the cross linker; the problem is in the cross compilation of the OpenCV function calls.

I’m looking for any ideas???

Hi Alexs66,

The question is where libopencv_core.so is found when nvcc tries to link your app. Unless otherwise specified, it might find the libs in standard paths such as /usr/lib. On your host I suppose you have three versions, the host one and the two from the JetPacks, so be sure to tell your compiler where the ones you want to link against are.

One way to give it other paths is through the environment variable LD_LIBRARY_PATH in your shell. This may also be required for execution.
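For example, using the path from your first post:

export LD_LIBRARY_PATH=/usr/aarch64-linux-gnu/lib/Opencv8:$LD_LIBRARY_PATH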

But the best way for linking is to explicitly specify the path where your libs are with -L. This path will be searched first.

It seems the two Jetson versions have been compiled with different toolchains (maybe gcc4 vs gcc5) or with different options.
Can you build natively on the TX1 with each JetPack version? If yes, what are the gcc and nvcc versions of each?
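One way to check this is to demangle the two symbols you posted with c++filt and compare them with the undefined symbol your object file actually references; a sketch, run on the host (binutils assumed to be installed):

# symbol exported by the Opencv8 (JetPack 2.3.1) libopencv_core.so
c++filt _ZN2cv7putTextERNS_3MatERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEENS_6Point_IiEEidNS_7Scalar_IdEEiib
# -> the string parameter demangles to std::__cxx11::basic_string<...>, i.e. the new GCC 5 libstdc++ ABI

# symbol exported by the Opencv7 (JetPack 2.2.1) libopencv_core.so
c++filt _ZN2cv7putTextERNS_3MatERKSsNS_6Point_IiEEidNS_7Scalar_IdEEiib
# -> the string parameter demangles to plain std::string, i.e. the old pre-GCC-5 ABI

# undefined putText symbol that your cross-compiled object file asks for
nm -u ./src/Test.o | grep putText

If your Test.o references the old std::string form while the JetPack 2.3.1 library only exports the __cxx11 form, that would explain the undefined reference and would point to the host cross compiler using a different libstdc++ ABI than the one the newer library was built with.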

The same question applies to the include files. There may not be much difference in the API, but some implementation details may be involved and differ, I don't know. I suggest you give explicit paths to each version's directory for the includes (with -I) and for the libs (with -L).

I have to say that I have very little experience with cross-compiling for Tegra (I usually build natively), and I currently have no TX1 nor JetPack host, so I cannot say much more, but maybe this helps…

Hi,

Thanks for your question.

We are looking into this case, and will let you know when we can offer further advice.
Thanks.

Hello Honey_Patouceul,
As you can see in my first post, I give the exact path to the library on the host:
/usr/local/cuda-8.0/bin/nvcc --cudart shared -L/usr/aarch64-linux-gnu/lib/Opencv8

The CUDA Toolkit uses the nvcc compiler and not the gcc compiler. Are there any dependencies between them? Which compiler versions are required for JetPack 2.3.1?

I checked the versions of the gcc compilers (gcc -v):
On the Tegra TX1: 5.4.0
On the host: 4.8.4
Is that OK?

Hi,

Sorry for keeping you waiting.
I tried your code with Nsight and was able to build and execute without error.

Sharing my steps:

  1. New CUDA C/C++ Project → 5.3 PTX/GPU code → Select remote connection → [device IP] → CPU Architecture = AArch64

  2.   Please add the following config:

>> NVCC Linker -> Libraries -> Libraries(-l):
opencv_core

Could you follow our procedures and try it again?
Please also let us know the results.

Hello,
Thank you for the response. As you can see in my first post, "-lopencv_core" is already in the linker options and the architecture is defined as aarch64.
Could you send me the command lines of your compilation and linking, and also check the compiler version on your host?
I will also try to create a new project again according to your guidelines.

Hi,

Thanks for your feedback. I can reproduce this error now!

It looks like the error occurs when Target System is set to 'Local', but it works for a remote connection.
We are going to check this issue and will update once we can provide further suggestions.

For now, could you create a project with Target Systems set to remote as a temporary solution?
Thanks and please also let us know the results.

Hello,
I created a new project with Target Systems set as you describe. I can build the project now, but compilation happens on the board (as a synchronized project) and not on the host with the cross compiler.