Yolo for deepstream-app

Hi

Which deepstream-app source files need to be modified, and how, so that deepstream-app can read the nvyolo plugin's properties and work correctly?

According to [url]https://github.com/vat-nvidia/deepstream-plugins#note[/url],
does only deepstream_config_file_parser.c need to be modified?
However, there is no deepstream_app_config_parser.c that I can reference.

Both deepstream-app -c source4_720p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt and
deepstream-yolo-app Tegra sample_720p.h264 config/yolov3.txt run well.
My environment is JetPack 4.1.1, DeepStream 3.0, and deepstream_reference_apps on Xavier.

Hi,
The two apps run different pipelines. In deepstream-app, we have a primary detector and secondary classifiers. Do you want to replace the existing models, or insert a new one into the pipeline? Please check the application architecture in the documentation and give more information about your desired pipeline.

Hi DaneLL,
I understand that the two apps run different pipelines; that is why I want to replace the existing models with a YOLO model and the related pipeline.

After the modification, deepstream-app should be able to display multiple streams at a high frame rate (over 30 fps)
based on the YOLO model (perhaps by reducing the number of label classes).

Hi,

YOLO contains some layers that TensorRT does not support, so please use deepstream-yolo-app to get the custom layer implementation.

To enable it with a v4l2 source, please update the gst component here:
[url]https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/master/sources/apps/deepstream-yolo/deepstream-yolo-app.cpp#L166[/url]

And update the pipeline if needed:
[url]https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/master/sources/apps/deepstream-yolo/deepstream-yolo-app.cpp#L234[/url]

Thanks.

Hi DaneLL,

If I prefer deepstream-app, can the original model be re-trained? That is, I want to increase the number of label classes. How can I do that?

Hi,

They are different questions.

  1. To use deepstream-app, please compile the YOLO sample into a library and link it as a DeepStream plugin.
    Here is the tutorial: [url]https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/sources/samples/objectDetector_YoloV3[/url]

  2. Re-training is possible. Which model do you want to use?
    For YOLO, you can find the re-training steps with darknet on the author's webpage:
    [url]https://pjreddie.com/darknet/yolo/[/url]
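
For step 1, the link between the compiled YOLO library and deepstream-app is made in the nvinfer config file. A sketch of the relevant keys is shown below; the key names follow later objectDetector_YoloV3 samples and the library path is a placeholder, so take the exact names and paths from the tutorial's own config files:

```ini
# Fragment of a config_infer_primary_YoloV3.txt-style file (illustrative only).
[property]
# Custom bounding-box parser function exported by the compiled YOLO library:
parse-bbox-func-name=NvDsInferParseCustomYoloV3
# Path to the shared library built from the YOLO sample (placeholder):
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```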

Thanks.

@AastaLLL,

If I have a pre-trained model (ResNet), how can I convert it into an input (XXX.engine) for deepstream-app?

Or, what system can re-train the ResNet model and convert the trained model into an input for TensorRT,
so that the result can be turned into an XXX.engine?

@AastaLLL,
I am sorry; your reply arrived at the same time I posted the new questions.
We hope for a pre-trained or re-trained model that can be used directly by deepstream-app.

Hi,

ResNet can be used by deepstream-app directly.
Please start from the .prototxt and .caffemodel files located at ‘/home/nvidia/Model/ResNet_18’.
And use our training app DIGITS: https://github.com/NVIDIA/DIGITS.

When the new model is available, updating the model information in the config should be enough.
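
For illustration, the model information lives in the [primary-gie] group of the deepstream-app config and in the nvinfer config it points to. The fragment below is a sketch: key names follow the DeepStream 3.0 sample configs, and the file names under the ResNet_18 path mentioned above are placeholders, so check both against your installed samples:

```ini
# deepstream-app config (illustrative fragment)
[primary-gie]
enable=1
# Serialized engine; generated on first run if it does not exist yet:
model-engine-file=/home/nvidia/Model/ResNet_18/resnet18.caffemodel_b4_int8.engine
labelfile-path=/home/nvidia/Model/ResNet_18/labels.txt
config-file=config_infer_primary.txt

# config_infer_primary.txt (illustrative fragment)
[property]
model-file=/home/nvidia/Model/ResNet_18/ResNet_18.caffemodel
proto-file=/home/nvidia/Model/ResNet_18/ResNet_18.prototxt
```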

Thanks.

@AastaLLL,

Where is “/home/nvidia/Model/ResNet_18”? Which SDK/repo do I need to install to get this path?

DIGITS (GitHub - NVIDIA/DIGITS: Deep Learning GPU Training System) seems to support Ubuntu 14.04 and 16.04, but our Xavier runs Ubuntu 18.04.

If I have a pre-trained model, is there an app or system that can convert it into an input (XXX.engine) for deepstream-app?

@AastaLLL

I followed the tutorial: https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/sources/samples/objectDetector_YoloV3

I ran deepstream-app -c deepstream_app_config_yoloV3.txt and got the following results:
** WARN: <parse_tiled_display:1018>: Unknown key ‘gpu-id’ for group [tiled-display]
** WARN: <parse_source:359>: Unknown key ‘gpu-id’ for group [source0]
** WARN: <parse_streammux:418>: Unknown key ‘gpu-id’ for group [streammux]
** WARN: <parse_streammux:418>: Unknown key ‘cuda-memory-type’ for group [streammux]
** WARN: <parse_sink:962>: Unknown key ‘gpu-id’ for group [sink0]
** WARN: <parse_osd:599>: Unknown key ‘gpu-id’ for group [osd]
** WARN: <parse_gie:783>: Unknown key ‘gpu-id’ for group [primary-gie]

Using winsys: x11

Using TRT model serialized engine /home/nvidia/deepstream-plugins/sources/apps/trt-yolo/trt-yolo-app crypto flags(0)
deepstream-app: engine.cpp:868: bool nvinfer1::rt::Engine::deserialize(const void*, std::size_t, nvinfer1::IGpuAllocator&, nvinfer1::IPluginFactory*): Assertion `size >= bsize && “Mismatch between allocated memory size and expected size of serialized engine.”’ failed.

There is no “engine.cpp” file; how can I fix this?

My deepstream_app_config_yoloV3.txt: please see attachment 1.

My config_infer_primary_YoloV3.txt: please see attachment 2.

deepstream_app_config_yoloV3.txt (2.15 KB)
config_infer_primary_YoloV3.txt (3 KB)

Hi,

1) You can find the model on Xavier after installing the DeepStream SDK.

2) Training must be done in a desktop environment.
Xavier is designed for inference; it is not recommended to use it for training.

3) If you already have the model, updating the model information in the deepstream-app config should be enough.

4) Please note that the sample you shared in #12 is for YOLO, not for ResNet.
Which model do you want to use?

Thanks.

@AastaLLL

  1. Now I have a pre-trained YOLO model. According to #6 1), I can compile the YOLO sample into a library and link it as a DeepStream plugin.

But it produces the error messages described in #12. Can you help me solve this problem?

Hi,

deepstream-app -c deepstream_app_config_yoloV3.txt works well now,
after I changed model-engine-file=/home/nvidia/deepstream-plugins/data/yolo/yolov3-kINT8-batch1.engine in deepstream_app_config_yoloV3.txt.

But the displayed video has noise (attachment #3). How can I fix it?
deepstream_app_config_yoloV3.txt (2.27 KB)
config_infer_primary_YoloV3.txt (3 KB)

Hi,

I get exactly the same result. All the labels are displayed at once.

I would be curious to know why.

Thanks.

@thhsiao

If I understand you correctly, you want to use deepstream-app along with the nvyolo plugin, correct?

If yes,

  1. Clone the github repo from here - GitHub - NVIDIA-AI-IOT/deepstream_reference_apps: Samples for TensorRT/Deepstream for Tesla & Jetson
  2. As mentioned in the README - https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps#note
    Have a look at the deepstream_config_file_parser.c (line 98), deepstream_app_config_parser.c, and deepstream_app.c source files in the SDK. The code has been annotated with comments on how to add the ds-example plugin to the pipeline in deepstream-app.
  3. You will need to implement the corresponding config file parser code for the NvYolo plugin. The corresponding ds-example references are the deepstream_dsexample.c and deepstream_dsexample.h files in the SDK.
  4. Trace the execution flow of how the ds-example plugin is added to the pipeline and do the same for the NvYolo plugin.
  5. Make corresponding changes in the deepstream-app config file similar to what you do to add ds-example plugin to the pipeline.
  6. This procedure holds for any custom GStreamer plugin you have implemented and want to integrate with deepstream-app.

@NvCJR

Yes,

  1. I have cloned the GitHub repo as you said.

  2. There are only deepstream_config_file_parser.c and deepstream_app.c in the SDK, but no deepstream_app_config_parser.c. Is that OK? (Both deepstream-app and deepstream-yolo-app work well.)
    I have looked at both files; they parse the config file and add the related properties to the pipeline. Am I right?

  3. Can I modify the corresponding config file and reuse the original parser code for the NvYolo plugin?
    deepstream_dsexample.c in the SDK applies dsexample_bin; how do I relate it to NvYolo so that deepstream_dsexample.c can serve as an NvYolo plugin?

  4. Is the “ds-example plugin” deepstream_dsexample.c or deepstream_app.c?
    If it is deepstream_app.c, it adds many config-file properties to the pipeline (about 100 pipeline operations). Which of them need to be modified for the NvYolo plugin?

Hi,

You can follow the dsexample code to add the required parser or update deepstream-app.
For example, you can check the attachments on how to enable nvyolo in deepstream-app in full-frame mode.

Thanks.
yolo-for-deepstream_reference_apps.patch.zip (705 Bytes)
yolo-for-deepstream_sdk_on_jetson.patch.zip (3.18 KB)

@AastaLLL

I followed your attachments to modify the related files and executed make in
/home/nvidia/deepstream_sdk_on_jetson/sources/apps/sample_apps/deepstream-app/

It prints the following error messages:

cc -I../../apps-common/includes -I../../../includes `pkg-config --cflags gstreamer-1.0 gstreamer-video-1.0 x11`   -c -o ../../apps-common/src/deepstream_sink_bin.o ../../apps-common/src/deepstream_sink_bin.c
../../apps-common/src/deepstream_sink_bin.c:21:10: fatal error: gst/rtsp-server/rtsp-server.h: No such file or directory
 #include "gst/rtsp-server/rtsp-server.h"
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
<builtin>: recipe for target '../../apps-common/src/deepstream_sink_bin.o' failed
make: *** [../../apps-common/src/deepstream_sink_bin.o] Error 1

Could you tell me what I should do?

There is no rtsp-server.h in /usr/include/gstreamer-1.0/gst/.