Jetson Nano on a Drone / Multicopter

I’m going to install the Nano and three cameras on a multicopter: a vis-camera (the Sony RX0), a thermal camera, and the R-Pi V2 camera for near infrared (NIR) and UV. Streaming via two HDMI-USB-3 adapters works fine and is very fast. The thermal image is presented as picture-in-picture (PiP) within the visible image. I’m new to GStreamer, so the programming was a bit difficult, but with help from the people in this forum, PiP including transparency now works; a sketch of the pipeline follows below.
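For anyone trying the same thing, this is roughly what the compositing looks like in Python. The device paths, resolutions and pad settings are assumptions for illustration, not necessarily my exact values:

```python
# Minimal sketch of a transparent PiP pipeline with nvcompositor.
# Device paths, caps and positions are assumptions; adapt to your setup.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    # Main stream: Sony RX0 via HDMI-USB-3 adapter (device path assumed)
    "v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080 ! "
    "nvvidconv ! video/x-raw(memory:NVMM),format=RGBA ! comp.sink_0 "
    # PiP stream: thermal camera via the second adapter (device path assumed)
    "v4l2src device=/dev/video1 ! video/x-raw,width=640,height=480 ! "
    "nvvidconv ! video/x-raw(memory:NVMM),format=RGBA ! comp.sink_1 "
    # sink_1 is scaled, positioned and made semi-transparent (alpha < 1)
    "nvcompositor name=comp "
    "sink_0::xpos=0 sink_0::ypos=0 sink_0::width=1920 sink_0::height=1080 "
    "sink_1::xpos=1280 sink_1::ypos=660 sink_1::width=640 sink_1::height=420 "
    "sink_1::alpha=0.6 "
    "! nvvidconv ! nvoverlaysink sync=false"
)
pipeline.set_state(Gst.State.PLAYING)
pipeline.get_bus().timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```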
At the moment the latencies are:
Sony RX0 > Atomos Ninja display: ~100 ms,
Sony RX0 > USB-3 adapter > Jetson Nano > Atomos Ninja display: ~200 ms, see picture.
(A second stream with a thermal camera runs in parallel to test PiP.)
So the added latency of the USB-3 > Nano path is about 100 ms.
Later, for the downlink I will use an Amimon Connex transmitter and receiver, see image.

I have attached three images. They are:

nvcompositor_transparent.jpg
The full screen image is from the Sony RX0; the thermal camera is a FLIR Vue. The alignment is not perfect at the moment. The picture in the bottom left corner is from the R-Pi V2 camera, started as an extra task. This camera will be replaced by an IR-open camera.

PIP_Nano_Lag_9_10_2019.jpg
Latency test: Sony RX0 > USB-3 adapter > Jetson Nano > Atomos Ninja display, ~200 ms.

Groundstation_2_10_2019.jpg
Monitor and receiver.

To be continued.
Best regards,
Wilhelm




Fawn search and rescue by multicopter/drone is usually done with thermal cameras.
The fawns hide in the fresh, high grass and are not visible to a camera for visible light; in a thermal image, however, a fawn appears as a bright spot. The problem is that other objects heated by the sun, such as molehills or patches of open ground, also create bright spots, which leads to misinterpretation. Therefore, some pilots carry a vis-camera on board too.
Now my idea is to use the Jetson Nano in real time to replace only the grass of the vis video feed with the thermal feed. Then only the relevant grass appears as a thermal image, and it should be easier to identify fawns versus other objects. The advantage over two separate video feeds is that the pilot only has to observe one display.
I made the attached test pictures with a video taken last summer and GStreamer on a Windows 7 PC; a rough sketch of the test pipeline follows below. The thermal image is simulated by the snow pattern from GStreamer. The Nano currently has a bug, so chroma keying doesn’t work there; I have already informed the moderators.
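The desktop test can be reproduced with something like the following Python/GStreamer sketch; the file name and keying tolerances are assumptions:

```python
# Rough sketch of the desktop chroma-key test: the green grass in a
# recorded vis video is keyed out and GStreamer's "snow" test pattern
# stands in for the thermal feed. File name and tolerances are assumptions.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    # Simulated thermal feed: snow test pattern as the background layer
    "videotestsrc pattern=snow ! video/x-raw,width=1280,height=720 ! comp.sink_0 "
    # Recorded vis video; the alpha element makes green pixels transparent
    "filesrc location=flight.mp4 ! decodebin ! videoconvert ! videoscale ! "
    "video/x-raw,width=1280,height=720 ! alpha method=green ! comp.sink_1 "
    # Overlay: wherever grass was keyed out, the snow pattern shows through
    "compositor name=comp ! videoconvert ! autovideosink"
)
pipeline.set_state(Gst.State.PLAYING)
pipeline.get_bus().timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```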
Best regards,
Wilhelm



You can use an HDMI-to-CSI module such as the Auvidea B101/B102 board; the latency is below 100 ms.

Tuyaliang, thank you for the information.

I have solved the chroma keying using Python and OpenCV; a sketch of the keying step follows below. The first image shows a picture-in-picture presentation of a thermal image, which is only visible on the grass. On the green cardboard, a heat imprint of the palm of my hand is visible for test purposes.
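The keying step itself is only a few lines. A minimal sketch of the idea, where the HSV bounds for “grass green” are assumptions and need tuning per camera and lighting:

```python
# Sketch of the OpenCV keying step. vis_frame and thermal_frame are
# assumed to be aligned BGR frames of equal shape; the HSV bounds for
# "grass green" are assumptions and need tuning per camera and lighting.
import cv2
import numpy as np

def key_thermal_onto_grass(vis_frame, thermal_frame,
                           lower=(35, 40, 40), upper=(85, 255, 255)):
    """Show the thermal image only where the vis image is green (grass)."""
    hsv = cv2.cvtColor(vis_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    # Remove speckles so isolated green pixels do not flicker
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Copy thermal pixels into the masked (grass) region of the vis frame
    out = vis_frame.copy()
    out[mask > 0] = thermal_frame[mask > 0]
    return out
```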
The second application is an onboard NDVI calculation; a sketch of the computation is below. This is my first NDVI test image, taken from the living room looking outside. The full screen image was created by the following calculation: NDVI = (NIR - Vis) / (NIR + Vis). The brightness should correlate with the health of the vegetation. The algorithm could also be tested for fawn rescue, then of course with a NIR or vis camera and a thermal imaging camera. The overlap is not perfect because the cameras were just lying on the table.
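In code the calculation is straightforward; a sketch, assuming nir and vis are aligned single-channel frames from the two cameras:

```python
# NDVI = (NIR - Vis) / (NIR + Vis), mapped to an 8-bit image.
# nir and vis are assumed to be aligned single-channel (grayscale) frames.
import numpy as np

def ndvi_image(nir, vis):
    nir = nir.astype(np.float32)
    vis = vis.astype(np.float32)
    # Avoid division by zero where both channels are dark
    ndvi = (nir - vis) / np.maximum(nir + vis, 1e-6)
    # Map [-1, 1] to [0, 255]; brighter should mean healthier vegetation
    return ((ndvi + 1.0) * 127.5).astype(np.uint8)
```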
The Nano produces about 5 frames per second, so I will switch to the Xavier NX when it becomes available.
Best regards,
Wilhelm


Here are now two chroma keying pictures taken from the air. The thermal image is displayed only on the green vegetation (grass). The Nano runs reliably and does not produce noticeable disturbances. I will do further tests when it is less windy.
Best regards,
Wilhelm

