Building nvyolo plugin

So I’m trying to build the nvyolo plugin as instructed here:
https://github.com/vat-nvidia/deepstream-plugins

But the build cannot seem to find gstnvivameta_api.h.
This is the full error when I run make:

g++ -c -o gstyoloplugin.o -I "/usr/local/cuda-9.0/include" -I "~/nvgstiva-app_sources/nvgstiva-app//nvgstiva-app/includes" -I "/usr/include/aarch64-linux-gnu" -I "../../lib/" -O2 -fPIC -std=c++11 -lstdc++fs -Wall -Wunused-function -Wunused-variable -Wfatal-errors -pthread -I/usr/include/gstreamer-1.0 -I/usr/lib/aarch64-linux-gnu/gstreamer-1.0/include -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include -I/usr/include/opencv gstyoloplugin.cpp
In file included from gstyoloplugin.cpp:26:0:
gstyoloplugin.h:34:30: fatal error: gstnvivameta_api.h: No such file or directory
compilation terminated.
Makefile:77: recipe for target 'gstyoloplugin.o' failed
make: *** [gstyoloplugin.o] Error 1

I can’t seem to find a solution for this. Any help/advice would be appreciated.

Hi,

Please remember to install the dependencies and DeepStream SDK first.

Dependencies:

sudo apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev

DeepStream SDK:
https://developer.nvidia.com/deepstream-jetson

Thanks.

Hi AastaLLL,

I am using a Jetson Xavier with TensorRT 5.0.3. I cloned the repo and completed all the required steps before running make; my configuration looks like this:

Makefile.config

CXX=g++
CUDA_VER:=10

#Set to TEGRA for jetson or TESLA for dGPU's
PLATFORM:=TEGRA

#For Tesla Plugins
OPENCV_INSTALL_DIR:= /path/to/opencv-3.4.x
TENSORRT_INSTALL_DIR:= /path/to/TensorRT-5.x
DEEPSTREAM_INSTALL_DIR:= /path/to/DeepStream_Release_3.0

#For Tegra Plugins
#NVGSTIVA_APP_INSTALL_DIR:= /path/to/nvgstiva-app_sources
NVGSTIVA_APP_INSTALL_DIR:= /home/nvidia/Projects/deepstream-plugins-master/config/

After downloading the YOLOv3 cfg and weights files, I got the following error when running make:

nvidia$deepstream-plugins-master/sources/plugins/gst-yoloplugin-tegra$ make && sudo make install
g++ -c -o gstyoloplugin.o -I "/usr/local/cuda-10/include" -I "/home/nvidia/Projects/deepstream-plugins-master/config//nvgstiva-app/includes" -I "/usr/include/aarch64-linux-gnu" -I "../../lib/" -O2 -fPIC -std=c++11 -lstdc++fs -Wall -Wunused-function -Wunused-variable -Wfatal-errors -pthread -I/usr/include/gstreamer-1.0 -I/usr/include/orc-0.4 -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include -I/usr/include/opencv gstyoloplugin.cpp
In file included from gstyoloplugin.cpp:26:0:
gstyoloplugin.h:34:10: fatal error: gstnvivameta_api.h: No such file or directory
#include "gstnvivameta_api.h"
^~~~~~~~~~~~~~~~~~~~
compilation terminated.
Makefile:77: recipe for target 'gstyoloplugin.o' failed
make: *** [gstyoloplugin.o] Error 1

All the dependencies listed in the repo are installed. Could you please advise?

Hi,

DeepStream doesn’t support Xavier yet.
Currently, please use the trt-yolo-app, which doesn't depend on the DeepStream SDK.

You can find this information in the installation list:
https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps (Samples for TensorRT/DeepStream for Tesla & Jetson)
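
Roughly, the build-and-run flow for trt-yolo-app looks like the sketch below; the sources/apps/trt-yolo directory and the flag file path are assumptions here, so please check the repo README for the exact locations.

# Build the standalone trt-yolo-app (no DeepStream SDK dependency)
# NOTE: the sources/apps/trt-yolo path is an assumption; see the README
cd /path/to/deepstream-plugins/sources/apps/trt-yolo
make && sudo make install

# Run it with your YOLO flag file (placeholder path)
trt-yolo-app --flagfile=/path/to/yolo_config.txt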

Thanks.

Hi AastaLLL,

I tried to deploy trt-yolo-app on the Jetson Xavier, following the instructions in the original GitHub repository. I was able to build the app, but when I ran it I got the following error:

nvidia@jetson:~deepstream-plugins$ trt-yolo-app --flagfile=/home/nvidia/Projects/deepstream-plugins/config/deepstream-app_yolo_config.txt
Using previously generated plan file located at sources/lib/models/yolov3-kFLOAT-batch1.engine
Loading TRT Engine...
ERROR: The engine plan file is incompatible with this version of TensorRT, expecting 5.0.3.2got 5.0.2.6, please rebuild.
Loading Complete!
trt-yolo-app: yolo.cpp:100: Yolo::Yolo(uint): Assertion `m_Engine != nullptr' failed.
Aborted (core dumped)

I know that my Jetson Xavier has TensorRT 5.0.3 installed, so where is the 5.0.2.6 build coming from? Is it the original engine provided by the repo, and how can I change it?

Thanks in advance

Hi,

The error indicates that the PLAN file was generated with a different TensorRT package.
Could you delete the PLAN file and try it again?

sources/lib/models/yolov3-kFLOAT-batch1.engine
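
For example, something like the following (run from the repo root, reusing the flag file from your log); the app should then rebuild the engine with the TensorRT version installed on the Xavier:

# Remove the engine built with the older TensorRT package
rm sources/lib/models/yolov3-kFLOAT-batch1.engine

# Run again; a new engine should be generated from the cfg/weights
trt-yolo-app --flagfile=/home/nvidia/Projects/deepstream-plugins/config/deepstream-app_yolo_config.txt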

Thanks.

Thank you AastaLLL for your swift reply; it is running smoothly now.

Another question regarding DeepStream SDK 3.0: is there an image resize applied to my input? I am testing the model with 1024 x 636 and 1352 x 900 resolution images, and I want to know whether any transformation happens before the first CNN, and where in the code I can find this. I know that YOLO resizes in steps of 32; I am wondering if DeepStream does the same.

Thanks in advance for your elaboration!!

Regarding the DeepStream SDK on Jetson question, please file a topic on the DeepStream SDK on Jetson board: https://devtalk.nvidia.com/default/board/291/deepstream-sdk-on-jetson/

Thanks

Thank you.