Jetson Nano Camera with remote Desktop on pipeline IP camera RTSP

Hello,

When I execute it on an HDMI display the live view is OK, but when I run “./detectnet-camera multiped” from a remote connection there is no live view. Why? Can you suggest a solution?

My remote connection: VNC on port 5901.

My camera is an IP camera streaming over RTSP.

My pipeline (in gstCamera.cpp) is: rtspsrc location=rtsp://xxxxxxxxxxx:xxxxxxxxxxx@192.XXX.XXX.XXX:554//h264Preview_01_main ! rtph264depay ! h264parse ! omxh264dec ! appsink name=mysink

Can you help me?

Thanks

Marco

PS: I applied the trick for RTSP from http://devtalk.nvidia.com/default/topic/1051241/jetson-nano/jetson-nano-camera-with-remote-desktop/post/5363111/ but it does not work.

Hi,
Please try the software decoder avdec_h264. You may also ask on the GStreamer mailing list: http://gstreamer-devel.966125.n4.nabble.com/

Once you get a suggestion and work out a pipeline, it should work fine to replace avdec_h264 with omxh264dec.
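Concretely, the swap is a one-token change in the launch string. A minimal sketch of such a substitution (the helper name and the pipeline fragment below are illustrative, not part of jetson-inference):

```cpp
#include <string>

// Replace one decoder element name with another in a GStreamer launch
// string, e.g. swap "avdec_h264" back to "omxh264dec" once the
// software-decoder pipeline is confirmed working.
std::string swap_decoder(std::string pipeline,
                         const std::string& from,
                         const std::string& to)
{
    const std::size_t pos = pipeline.find(from);
    if (pos != std::string::npos)
        pipeline.replace(pos, from.length(), to);
    return pipeline;
}
```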

Hi DaneLL,

Sorry, but the problem is the same: execution is OK, but no window with the camera picture is displayed.

Can you help me?

Thanks

My run (from VNC) is OK:

[TRT] ----------------------------------------------
[TRT] Timing Report /home/jm/jetson-inference/build/aarch64/bin/networks/ped-100/snapshot_iter_70800.caffemodel
[TRT] ----------------------------------------------
[TRT] Pre-Process CPU 0.06359ms CUDA 1.87786ms
[TRT] Network CPU 123.24638ms CUDA 115.67442ms
[TRT] Post-Process CPU 0.33521ms CUDA 0.33588ms
[TRT] Total CPU 123.64520ms CUDA 117.88817ms
[TRT] ----------------------------------------------

But no camera window or picture is displayed.

Can you help me?

Hi,
The default code is for an Argus source such as the ov5693 camera board. You have to modify the pipeline in gstCamera.cpp to an RTSP source.
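For illustration, the RTSP launch string used in this thread can be assembled like this. This is only a sketch: the URL is a placeholder, and the real change lives in the pipeline-building code of gstCamera.cpp, whose exact structure may differ:

```cpp
#include <sstream>
#include <string>

// Build the RTSP launch string from this thread. The appsink name
// "mysink" matches what gstCamera looks up in the pipeline.
std::string build_rtsp_launch(const std::string& url)
{
    std::ostringstream ss;
    ss << "rtspsrc location=" << url
       << " ! rtph264depay ! h264parse ! avdec_h264"
       << " ! appsink name=mysink";
    return ss.str();
}
```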

Hi, DaneLLL

Yes, I modified the pipeline in “gstCamera.cpp”. My log:

####################################################################################

./build/aarch64/bin/detectnet-camera multiped
[gstreamer] initialized gstreamer, version 1.14.4.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera 0
[gstreamer] gstCamera pipeline string:
rtspsrc location=rtsp://xxxxx:xxxxxxxxxxxxx@xxx.xxx.xxx.xxx:554//h264Preview_01_main ! rtph264depay ! h264parse ! avdec_h264 ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_V4L2, camera 0

detectnet-camera: successfully initialized camera device
width: 1280
height: 720
depth: 12 (bpp)

detectNet – loading detection network model from:
– prototxt networks/ped-100/deploy.prototxt
– model networks/ped-100/snapshot_iter_70800.caffemodel
– input_blob ‘data’
– output_cvg ‘coverage’
– output_bbox ‘bboxes’
– mean_pixel 0.000000
– mean_binary NULL
– class_labels networks/ped-100/class_labels.txt
– threshold 0.500000
– batch_size 1

[TRT] TensorRT version 5.0.6
[TRT] loading NVIDIA plugins…
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - caffe (extension ‘.caffemodel’)
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /home/jm/jetson-inference/build/aarch64/bin/networks/ped-100/snapshot_iter_70800.caffemodel.1.1.GPU.FP16.engine
[TRT] loading network profile from engine cache… /home/jm/jetson-inference/build/aarch64/bin/networks/ped-100/snapshot_iter_70800.caffemodel.1.1.GPU.FP16.engine
[TRT] device GPU, /home/jm/jetson-inference/build/aarch64/bin/networks/ped-100/snapshot_iter_70800.caffemodel loaded
[TRT] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[TRT] device GPU, CUDA engine context initialized with 3 bindings
[TRT] binding – index 0
– name ‘data’
– type FP32
– in/out INPUT
– # dims 3
– dim #0 3 (CHANNEL)
– dim #1 512 (SPATIAL)
– dim #2 1024 (SPATIAL)
[TRT] binding – index 1
– name ‘coverage’
– type FP32
– in/out OUTPUT
– # dims 3
– dim #0 1 (CHANNEL)
– dim #1 32 (SPATIAL)
– dim #2 64 (SPATIAL)
[TRT] binding – index 2
– name ‘bboxes’
– type FP32
– in/out OUTPUT
– # dims 3
– dim #0 4 (CHANNEL)
– dim #1 32 (SPATIAL)
– dim #2 64 (SPATIAL)
[TRT] binding to input 0 data binding index: 0
[TRT] binding to input 0 data dims (b=1 c=3 h=512 w=1024) size=6291456
[TRT] binding to output 0 coverage binding index: 1
[TRT] binding to output 0 coverage dims (b=1 c=1 h=32 w=64) size=8192
[TRT] binding to output 1 bboxes binding index: 2
[TRT] binding to output 1 bboxes dims (b=1 c=4 h=32 w=64) size=32768
device GPU, /home/jm/jetson-inference/build/aarch64/bin/networks/ped-100/snapshot_iter_70800.caffemodel initialized.
detectNet – number object classes: 1
detectNet – maximum bounding boxes: 2048
detectNet – loaded 1 class info entries
detectNet – number of object classes: 1
No protocol specified
[OpenGL] failed to open X11 server connection.
[OpenGL] failed to create X11 Window.
detectnet-camera: failed to create openGL display
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> avdec_h264-0
[gstreamer] gstreamer changed state from NULL to READY ==> h264parse0
[gstreamer] gstreamer changed state from NULL to READY ==> rtph264depay0
[gstreamer] gstreamer changed state from NULL to READY ==> rtspsrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> avdec_h264-0
[gstreamer] gstreamer changed state from READY to PAUSED ==> h264parse0
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtph264depay0
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtspsrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> avdec_h264-0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> h264parse0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtph264depay0
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtspsrc0
[gstreamer] gstreamer msg progress ==> rtspsrc0
detectnet-camera: camera open for streaming
[gstreamer] gstCamera onPreroll
[gstreamer] gstCamera – allocated 16 ringbuffers, 1382400 bytes each
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer changed state from NULL to READY ==> manager
[gstreamer] gstreamer changed state from READY to PAUSED ==> manager
[gstreamer] gstreamer changed state from NULL to READY ==> rtpssrcdemux0
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpssrcdemux0
[gstreamer] gstreamer changed state from NULL to READY ==> rtpsession0
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpsession0
[gstreamer] gstreamer changed state from NULL to READY ==> funnel0
[gstreamer] gstreamer changed state from READY to PAUSED ==> funnel0
[gstreamer] gstreamer changed state from NULL to READY ==> funnel1
[gstreamer] gstreamer changed state from READY to PAUSED ==> funnel1
[gstreamer] gstreamer changed state from NULL to READY ==> rtpstorage0
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpstorage0
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer changed state from NULL to READY ==> rtpssrcdemux1
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpssrcdemux1
[gstreamer] gstreamer changed state from NULL to READY ==> rtpsession1
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpsession1
[gstreamer] gstreamer changed state from NULL to READY ==> funnel2
[gstreamer] gstreamer changed state from READY to PAUSED ==> funnel2
[gstreamer] gstreamer changed state from NULL to READY ==> funnel3
[gstreamer] gstreamer changed state from READY to PAUSED ==> funnel3
[gstreamer] gstreamer changed state from NULL to READY ==> rtpstorage1
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpstorage1
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer changed state from NULL to READY ==> udpsink0
[gstreamer] gstreamer changed state from READY to PAUSED ==> udpsink0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> udpsink0
[gstreamer] gstreamer changed state from NULL to READY ==> fakesrc0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> fakesrc0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> fakesrc0
[gstreamer] gstreamer changed state from NULL to READY ==> udpsink2
[gstreamer] gstreamer changed state from READY to PAUSED ==> udpsink2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> udpsink2
[gstreamer] gstreamer changed state from NULL to READY ==> fakesrc1
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> fakesrc1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> fakesrc1
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpssrcdemux1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpstorage1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpsession1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> funnel2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> funnel3
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpssrcdemux0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpstorage0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpsession0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> funnel0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> funnel1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> manager
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> udpsrc1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> udpsrc1
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> udpsrc2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> udpsrc2
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> udpsrc3
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> udpsrc3
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> udpsrc4
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> udpsrc4
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer msg element ==> rtpsession1
[gstreamer] gstreamer msg element ==> rtpsession0
[gstreamer] gstreamer changed state from NULL to READY ==> rtpptdemux0
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpptdemux0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpptdemux0
[gstreamer] gstreamer changed state from NULL to READY ==> rtpjitterbuffer0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpjitterbuffer0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpjitterbuffer0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from NULL to READY ==> rtpptdemux1
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpptdemux1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpptdemux1
[gstreamer] gstreamer changed state from NULL to READY ==> rtpjitterbuffer1
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtpjitterbuffer1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtpjitterbuffer1
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer msg stream-start ==> pipeline0
[gstreamer] gstreamer mysink missing gst_tag_list_to_string()
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer msg async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
[gstreamer] gstCamera – allocated 16 RGBA ringbuffers

[TRT] ----------------------------------------------
[TRT] Timing Report /home/jm/jetson-inference/build/aarch64/bin/networks/ped-100/snapshot_iter_70800.caffemodel
[TRT] ----------------------------------------------
[TRT] Pre-Process CPU 0.07683ms CUDA 12.37469ms
[TRT] Network CPU 206.49744ms CUDA 171.91089ms
[TRT] Post-Process CPU 0.39563ms CUDA 0.38870ms
[TRT] Total CPU 206.96989ms CUDA 184.67429ms
[TRT] ----------------------------------------------

[TRT] note – when processing a single image, run ‘sudo jetson_clocks’ before
to disable DVFS for more accurate profiling/timing measurements

[TRT] ----------------------------------------------
[TRT] Timing Report /home/jm/jetson-inference/build/aarch64/bin/networks/ped-100/snapshot_iter_70800.caffemodel
[TRT] ----------------------------------------------
[TRT] Pre-Process CPU 0.07604ms CUDA 1.95714ms
[TRT] Network CPU 124.18271ms CUDA 116.09666ms
[TRT] Post-Process CPU 0.35688ms CUDA 0.35568ms
[TRT] Total CPU 124.61563ms CUDA 118.40948ms
[TRT] ----------------------------------------------


##################################################################################################

I do not see the camera window over my VNC connection. When I use the local HDMI connection, the camera window is displayed correctly.
Can you help me with remote viewing? Thanks

Is there a plugin to install, or something else?

Hi,
You may need to modify the display code accordingly:
https://github.com/dusty-nv/jetson-utils/tree/master/display

Hi DaneLLL,

Can you explain the update needed to make it work (example code) in the files?

I only modified “glDisplay.cpp”:

#########################################

// Constructor
glDisplay::glDisplay()
{
    mWindowX    = 1;
    mScreenX    = 0;
    mVisualX    = 0;
    mContextGL  = NULL;
    mDisplayX   = 0;
    mInteropTex = NULL;

    mWidth      = 0;
    mHeight     = 0;
    mAvgTime    = 1.0f;

##########################################

But it is the same problem as in #6. My log:

##########################################

[TRT] binding to output 1 bboxes dims (b=1 c=4 h=32 w=64) size=32768
device GPU, /home/jm/jetson-inference/build/aarch64/bin/networks/ped-100/snapshot_iter_70800.caffemodel initialized.
detectNet – number object classes: 1
detectNet – maximum bounding boxes: 2048
detectNet – loaded 1 class info entries
detectNet – number of object classes: 1
No protocol specified
[OpenGL] failed to open X11 server connection.
[OpenGL] failed to create X11 Window.
detectnet-camera: failed to create openGL display
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> avdec_h264-0
[gstreamer] gstreamer changed state from NULL to READY ==> h264parse0
[gstreamer] gstreamer changed state from NULL to READY ==> rtph264depay0

##########################################

Can you help me?

Thanks

Hi,
The display code is based on X11 APIs. It works with default HDMI output. We are not familiar with VNC. Some other users may share their experiences.

I just tried to get some information and found the link below. It might help your use case.
https://unix.stackexchange.com/questions/45916/what-is-the-relation-between-display-1-0-and-port-5901
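For reference, the convention that link describes is simply that a VNC server listening on TCP port 5900+N serves X display :N, so VNC on port 5901 corresponds to DISPLAY=:1. A tiny sketch of the mapping:

```cpp
// VNC convention: the server on TCP port 5900 + N serves X display :N,
// so a client connecting to port 5901 should use DISPLAY=:1.
int vnc_display_number(int port)
{
    return port - 5900;
}
```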

Hi, DaneLL,

Is there really no solution for running “detectnet-camera multiped” with a remote display over VNC or otherwise? I am disappointed!

Thanks for your reply

Hi,
The sample runs in a default environment and we encourage users to do integration/customization for their use cases. Other users may share their experiences.

Another alternative is to try DeepStream SDK 4.0. It is based on GStreamer and might be easier to understand and customize.

Hi DaneLLL,

I tested DeepStream SDK 4.0, but when I execute:

./deepstream-test3-app rtsp://user:pwd@1XX.XXX.XXX:554//h264Preview_01_main

I get this message:

One element could not be created. Exiting.

Any idea? Thanks

Hi,
Please check if the quick fix helps your case:
https://devtalk.nvidia.com/default/topic/1058086/deepstream-sdk/how-to-run-rtp-camera-in-deepstream-on-nano/post/5369676/#5369676

Hi DaneLLL,

Sorry, I tried, but I got the same error message: One element could not be created. Exiting.

A- For my first install, I did:
############################################################################################
To install additional packages
• Enter the following command to install the prerequisite packages for installing the DeepStream SDK:
$ sudo apt install \
    libssl1.0.0 \
    libgstreamer1.0-0 \
    gstreamer1.0-tools \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly \
    gstreamer1.0-libav \
    libgstrtspserver-1.0-0 \
    libjansson4=2.11-1
To install librdkafka
Install librdkafka by running apt-get on the Jetson device:
$ apt-get install librdkafka1=0.11.3-1build1
To install the DeepStream SDK

• Method 2: Using the DeepStream tar package

  1. Download the DeepStream 4.0 Jetson tar package, deepstream_sdk_v4.0_jetson.tbz2, to the Jetson device.
  2. Enter this command to extract the package contents:
    $ tar -xpvf deepstream_sdk_v4.0_jetson.tbz2
  3. Enter these commands to extract and install DeepStream SDK:
    $ cd deepstream_sdk_v4.0_jetson
    $ sudo tar -xvpf binaries.tbz2 -C /
    $ sudo ./install.sh
##############################################################################################

B- Afterwards, I copied the 2 files (https://devtalk.nvidia.com/default/topic/1058086/deepstream-sdk/how-to-run-rtp-camera-in-deepstream-on-nano/post/5369676/#5369676):
cp libgstnvegltransform.so /usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvegltransform.so
cp libnvbufsurftransform.so /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so.1.0.0

C- But when I execute :
./deepstream-test3-app rtsp://user:pwd@1XX.XXX.XXX:554//h264Preview_01_main
I had the error message : One element could not be created. Exiting.

Do you have a solution ? Thanks

Hi,
Please clean the cache and try.
https://docs.nvidia.com/metropolis/deepstream/4.0/dev-guide/index.html

If the application encounters errors and cannot create Gst elements, remove the GStreamer cache, then try again. To remove the GStreamer cache, enter this command:
$ rm ${HOME}/.cache/gstreamer-1.0/registry.aarch64.bin

Hi DaneLLL,

A-
I just tested:
rm ${HOME}/.cache/gstreamer-1.0/registry.aarch64.bin

I get this message when I execute “./deepstream-test3-app rtsp://user:pwd@1XX.XXX.XXX:554//h264Preview_01_main”:

##########################################################

nvbufsurftransform: Could not get EGL display connection
No protocol specified
nvbuf_utils: Could not get EGL display connection
One element could not be created. Exiting.
$
###########################################################

B-
B- A question: when I installed DeepStream, I ran the extraction steps (tar -xpvf deepstream_sdk_v4.0_jetson.tbz2, etc.) in a personal directory in my home. Do I have to extract the files into a specific system directory?

Thanks for your reply

Can you help me? Thanks

Hi,
For the EGL display error, please do ‘export DISPLAY=:1’ (or :0).

We usually create the user as ‘nvidia’ and extract the package in the home directory.
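The same effect can be had from code before any window is created. A minimal sketch assuming a POSIX system (which display number is correct depends on the VNC setup, as discussed above):

```cpp
#include <cstdlib>  // std::getenv; setenv/unsetenv are POSIX
#include <cstring>

// Point X11/EGL clients at the given display when DISPLAY is unset,
// e.g. ensure_display(":1") for a VNC server on port 5901.
// (Purely illustrative; the samples normally rely on the shell's
// exported DISPLAY instead.)
void ensure_display(const char* display)
{
    if (std::getenv("DISPLAY") == nullptr)
        setenv("DISPLAY", display, /*overwrite=*/0);
}
```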

Hi, DaneLLL,

I tried, but :

jm@xxx:~/Téléchargements/deepstream_sdk_v4.0_jetson/sources/apps/sample_apps/deepstream-test3$ export DISPLAY=:1
jm@xxx:~/Téléchargements/deepstream_sdk_v4.0_jetson/sources/apps/sample_apps/deepstream-test3$ ./deepstream-test3-app rtsp://user:pwd@192.XXX.XXX.XXX:554//h264Preview_01_main
One element could not be created. Exiting.

jm@xxx:~/Téléchargements/deepstream_sdk_v4.0_jetson/sources/apps/sample_apps/deepstream-test3$ export DISPLAY=:0
jm@xxx:~/Téléchargements/deepstream_sdk_v4.0_jetson/sources/apps/sample_apps/deepstream-test3$ ./deepstream-test3-app rtsp://user:pwd@192.XXX.XXX.XXX:554//h264Preview_01_main
One element could not be created. Exiting.
jm@xxx:~/Téléchargements/deepstream_sdk_v4.0_jetson/sources/apps/sample_apps/deepstream-test3$

And I found this in the file “deepstream_test3_app.c”:

###########################################

/* Create nvstreammux instance to form batches from one or more sources. */
streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");

if (!pipeline || !streammux) {
  g_printerr ("One element could not be created. Exiting.\n");
  return -1;
}
gst_bin_add (GST_BIN (pipeline), streammux);


########################################
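The check above only reports that *some* element failed to be created. A hedged sketch of a more talkative check (pure bookkeeping with no GStreamer dependency; in the real app each pointer would be the result of gst_element_factory_make):

```cpp
#include <string>
#include <utility>
#include <vector>

// Given (element-name, pointer) pairs, return the names whose factory
// returned null, so the error message can say exactly which element
// could not be created.
std::vector<std::string> missing_elements(
    const std::vector<std::pair<std::string, const void*>>& elems)
{
    std::vector<std::string> missing;
    for (const auto& e : elems)
        if (e.second == nullptr)
            missing.push_back(e.first);
    return missing;
}
```

Given that the `gst-inspect-1.0 -b` output below lists libnvdsgst_multistream.so as blacklisted, nvstreammux itself is a plausible candidate for the element that fails here.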

A problem with the pipeline or streammux?

What can I do to make it work?

C- And when I checked with “gst-inspect-1.0 -b”:
Blacklisted files:
libgstnvvideoconvert.so
libnvdsgst_multistreamtiler.so
libnvdsgst_multistream.so
libnvdsgst_infer.so
libnvdsgst_dsexample.so
libnvdsgst_tracker.so

Total count: 6 blacklisted files

A solution, please?

Thanks

Hi,
It seems to be some mismatch in your system. Maybe the system version does not match the DS version, or some steps were not run as super user. A similar topic:
https://devtalk.nvidia.com/default/topic/1060944/deepstream-sdk/-jetson-xavier-using-cv-videowriter-with-nvv4l2h264enc-dps4-0-and-memory-leak/post/5374232/#5374232

Please try to re-flash the system and DS4.0 via sdkmanager.

Hi, DaneLLL,

I can't find the ARM package to install SDK Manager on the Jetson Nano.

I checked the post: [Jetson Xavier] using cv::VideoWriter with nvv4l2h264enc (DPS4.0) and memory leak - DeepStream SDK - NVIDIA Developer Forums

How do I install SDK Manager for the Jetson Nano? What is the command to flash?

Thanks for your reply