How to run Semantic Segmentation samples on the Nano

I am able to run the ImageNet and object detection demos with a USB camera without any issues, but when I run segnet-camera I get an error that the Caffe parser is not found.
It looks like I need a snapshot (NET=20170421-122956-f7c0_epoch_5.0). Is there a way for me to get a trained one, or is there another model in the samples that I can use as-is?

Hi,

The segmentation networks of jetson-inference can be found here:
https://github.com/dusty-nv/jetson-inference/blob/master/CMakePreBuild.sh

You can select one to download and update the corresponding $NET path.
Thanks.

Thanks, that helped.
Is there a way to invoke different models by passing different arguments, similar to this one (jetson-inference/detectnet-camera.md at master · dusty-nv/jetson-inference · GitHub)?

FCN-Alexnet-Cityscapes is very slow; I am only getting about 1.5 FPS.

Yes, you can see the name strings of the built-in models that you can pass as arguments (like with imagenet-console and detectnet-console) here:

https://github.com/dusty-nv/jetson-inference/blob/bda3d60a6967d5c081162a940f0bbca399081a55/segNet.cpp#L73

Now, since the segmentation models are large, not all of them are pre-downloaded, to save disk space. You can find the commented-out URLs of the models that aren’t downloaded by default here:

https://github.com/dusty-nv/jetson-inference/blob/bda3d60a6967d5c081162a940f0bbca399081a55/CMakePreBuild.sh#L87

You can either download these manually, or uncomment them, delete your build directory, and re-run CMake; it will then download them when you re-build.
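
If it helps, here is a minimal sketch of the manual route in Python. The URL is a placeholder (copy the real one out of CMakePreBuild.sh), and the data/networks destination is just my assumption of where the build keeps the models, so adjust both to your setup:

# sketch: fetch one of the commented-out model archives by hand
import os
import urllib.request

url = "https://example.com/segmentation-model.tar.gz"               # placeholder URL from CMakePreBuild.sh
dest_dir = os.path.expanduser("~/jetson-inference/data/networks")   # assumed model directory
os.makedirs(dest_dir, exist_ok=True)

archive = os.path.join(dest_dir, os.path.basename(url))
urllib.request.urlretrieve(url, archive)
print("saved", archive)  # extract the archive here so segnet-camera can find it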

BTW, if you try the “aerial-fpv” model, it should run faster. Semantic segmentation models are very resource intensive.

Thanks, I was able to download all the models manually and save them in their respective directories.

But it looks like I am doing something silly:

./segnet-camera aerial-fpv

./segnet-camera "aerial-fpv"

./segnet-camera -modelName "aerial-fpv"

None of these work; it keeps loading Alexnet-Cityscapes.

Have you tried ./segnet-camera --model=aerial-fpv ?

Hi all, just wanted to let you know I have been working on some new semantic segmentation models: 21-class FCN-ResNet18 networks trained with PyTorch and exported to ONNX, which get 30 FPS on the Nano. I hope to add them to the tutorial soon, so stay tuned.

dusty_nv,
can you please provide an update concerning the semantic segmentation code / model?
(the “coming soon” part of the Hello AI World README at https://github.com/dusty-nv/jetson-inference)

many thanks!

Hi alex73, you can now find the Python bindings for segNet and new FCN-ResNet18 models in the pytorch development branch of jetson-inference:

https://github.com/dusty-nv/jetson-inference/tree/pytorch

Check out the segnet-console.py example (I still need to add segnet-camera.py).
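
For reference, the overall flow of segnet-console.py is roughly the sketch below. Treat it as an approximation: the network name string and the exact loadImageRGBA/Process/Overlay argument lists are assumptions from memory and may differ between versions, so check the actual script in the repo for the current API.

# approximate flow of segnet-console.py -- signatures may differ by version
import jetson.inference
import jetson.utils

# load one of the built-in networks by its name string (name assumed here)
net = jetson.inference.segNet("fcn-resnet18-cityscapes")

# load the input image into shared CPU/GPU memory
img, width, height = jetson.utils.loadImageRGBA("input.jpg")

# run inference, then overlay the class colors
# (the real script may render the overlay into a separate output buffer)
net.Process(img, width, height)
net.Overlay(img, width, height)

jetson.utils.saveImageRGBA("output.jpg", img, width, height)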

Thank you for the interesting questions and answers.

I have a similar question regarding a SegNet- or U-Net-style model. If I implement a model from scratch in TensorFlow or PyTorch (with a structure similar to SegNet or U-Net, for image regression), is there anything I have to pay attention to so that converting the model to TensorRT works? My input is not a regular image; it may have more than 9 channels.
I am not sure what the requirements are for a model to be convertible to TensorRT.

The plan is to do the training on a GPU in a PC and load the trained model onto the Jetson Nano. Do you know of any guide that works for getting started?

I am at the stage of gathering all the information about running an inference model on the Jetson Nano for image regression, using a model customized from SegNet or U-Net.

Any hint is appreciated!

Hi 32nguyen, please refer to the TensorRT list of supported layers in addition to the supported ops for TensorFlow and ONNX formats. PyTorch models would be exported via ONNX.
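
To make the PyTorch route concrete, here is a rough export sketch. MySegModel is only a hypothetical stand-in for your own SegNet/U-Net-style network (note the 9-channel input from your use case); the key point is that only ops supported by TensorRT’s ONNX parser should end up in the exported graph.

# sketch: export a hypothetical multi-channel encoder-decoder to ONNX
import torch
import torch.nn as nn

class MySegModel(nn.Module):                      # stand-in for your own network
    def __init__(self, in_channels=9, out_channels=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, out_channels, 3, padding=1))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = MySegModel().eval()
dummy = torch.randn(1, 9, 256, 256)               # batch, channels, height, width

torch.onnx.export(model, dummy, "seg_model.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=11)

On the Nano, running trtexec --onnx=seg_model.onnx is a quick way to confirm that every layer in the exported graph is supported by TensorRT before writing any deployment code.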

Hi dusty_nv,

Thanks for the super quick answer. That will absolutely help.

OK folks, the pytorch dev branch with the new segmentation models and Python bindings for segNet has been merged into master.

The docs have been updated, see here:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md
https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-camera-2.md

Let me know if you encounter any issues with using the updated master branch, thanks.

Hi dusty_nv, thank you very much.

Hi Dustin,
yesterday I got the Nano and flashed a 64 GB SD card with jetson-nano-sd-r32.2.1.zip. Today I installed the updated inference project with all the networks, following the tutorial. Everything, including segnet-camera, works fine and is stable.

For drone/vehicle users: the current draw at the power jack while running segnet-camera with FCN-ResNet18-Pascal-VOC-320x320 is 1.4 A at 5 V, and about 2 A with FCN-ResNet18-Cityscapes-2048x1024.
The image was taken from a video in our street. It was almost twilight. The red area is a person half hidden by a car.
Best regards,
Wilhelm

Hi WiSi-Testpilot, glad you got it up & running - those results look pretty good! Thanks for sharing and keep us updated with your project!

Hi Dustin,
I have problems with USB 3 cameras, see the link.

Could you please try segnet-camera with a USB 3 camera?
Thank you in advance,
Best regards,
Wilhelm

Hi Wilhelm, to use a USB camera, launch the program with the --camera=/dev/video0 argument (or substitute your desired V4L2 camera device for /dev/video0). See my reply to your other post here:

https://devtalk.nvidia.com/default/topic/1064077/jetson-nano/jetson-inference-examples-do-not-run-with-usb-cameras/post/5388264/#5388264