How to train a custom object detector and deploy it onto JTX2 with TF-TRT (TensorRT optimized)

I wrote a series of blog posts which form a complete tutorial on how to train an object detector with custom data, how to optimize the model with TF-TRT (TensorRT), and how to deploy it onto Jetson TX2. More specifically, I shared the complete code for training a hand detector and for deploying the trained TensorFlow model on JTX2. In the end, I demonstrated ~24 fps real-time hand detection on a live camera feed.

I really tried my best to keep the code simple and make sure it works well. If you find any issues or have any suggestions, feel free to leave a comment on my blog posts. And if you find the posts helpful, I’d also appreciate a comment letting me know.

Nice! Thanks for sharing, and looking forward to seeing more updates from you!

Looks like exactly what I need. Thanks!

Thank you for doing this :) Will this work on the Jetson Nano?

Yes, I’ve also tested my trained SSD egohands models on Jetson Nano and shared the results in this blog post:

“Testing TF-TRT Object Detectors on Jetson Nano”: [url]https://jkjung-avt.github.io/tf-trt-on-nano/[/url]

Wonderful, thank you :). Also, I trained my own model from scratch. Is there a way to convert that to TensorRT, or must I start with one of the pre-trained networks and build on top of it?

If you’re using the TensorFlow Object Detection API, then my modified “tf_trt_models” code would likely just work. Just add pointers to your config file and trained model directory (see the sketch after the links below). Reference:

[url]https://github.com/jkjung-avt/tf_trt_models/blob/master/utils/od_utils.py#L51[/url]
[url]https://github.com/jkjung-avt/tf_trt_models/blob/master/utils/egohands_models.py[/url]
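A minimal sketch of the idea, assuming the build_detection_graph() helper from the tf_trt_models repository; the paths below are placeholders for your own config file and checkpoint, not the exact entry format used in egohands_models.py:

```python
from tf_trt_models.detection import build_detection_graph

# Point the conversion at your own pipeline config and trained checkpoint.
# Both paths are placeholders; replace them with your model directory.
frozen_graph, input_names, output_names = build_detection_graph(
    config='data/my_model/pipeline.config',       # placeholder path
    checkpoint='data/my_model/model.ckpt-20000',  # placeholder path
)

# The resulting frozen_graph and output_names can then be passed to
# trt.create_inference_graph() as usual (see the next post for a sketch).
```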

Thank you, but my model has no config file. What should I do?

If you have a frozen graph saved as a .pb file, you could just load the .pb file as a GraphDef and call trt.create_inference_graph() to get the TensorRT-optimized graph (see the sketch after the link below).

[url]https://github.com/jkjung-avt/tf_trt_models/blob/master/utils/od_utils.py#L57[/url]
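A minimal sketch, assuming TensorFlow 1.x with contrib TensorRT support; the .pb filename and output tensor names below are illustrative (they are the usual TF Object Detection API outputs) and depend on your model:

```python
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Load the frozen graph (.pb) into a GraphDef.
frozen_graph = tf.GraphDef()
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:  # your .pb file
    frozen_graph.ParseFromString(f.read())

# Output tensor names depend on your model; these are only illustrative.
output_names = ['num_detections', 'detection_boxes',
                'detection_scores', 'detection_classes']

# Build the TensorRT-optimized graph (FP16 works well on JTX2/Nano).
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16',
    minimum_segment_size=50,
)
```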

Hi jkjung13!!

Is the TensorFlow Object Detection API used in your link https://jkjung-avt.github.io/hand-detection-tutorial/ the newest version? I’m not able to download it from that link; could you make it downloadable? Thanks!
Additionally, the protobuf version in https://jkjung-avt.github.io/hand-detection-tutorial/ is 3.0.0. Could the protobuf version be the cause of the errors I described in https://devtalk.nvidia.com/default/topic/1049802/jetson-nano/object-detection-with-mobilenet-ssd-slower-than-mentioned-speed/5?
I’d like to use your TensorFlow Object Detection API snapshot and protobuf 3.0.0 to train my own ssd_mobilenet_v2 model and test whether it runs correctly on the Jetson Nano.

My hand detection tutorial (https://github.com/jkjung-avt/hand-detection-tutorial) uses an older snapshot (commit hash 6518c1c) of the TensorFlow Object Detection API. That version matches the one used in NVIDIA’s ‘tf_trt_models’ repository (https://github.com/NVIDIA-AI-IOT/tf_trt_models/tree/master/third_party).

As for the TensorFlow version, I’ve mostly been using 1.12.2 (or 1.12.0). I recommend using protobuf 3.6.1 along with tensorflow-1.12.x. For details, please refer to my blog posts:

“Building TensorFlow 1.12.2 on Jetson Nano”: https://jkjung-avt.github.io/build-tensorflow-1.12.2/
“Testing TF-TRT Object Detectors on Jetson Nano”: https://jkjung-avt.github.io/tf-trt-on-nano/
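As a quick sanity check on the Nano (a trivial sketch, not taken from the blog posts), you can print the installed versions to confirm they match the recommendation:

```python
import tensorflow as tf
from google.protobuf import __version__ as protobuf_version

print('tensorflow:', tf.__version__)    # expect 1.12.x
print('protobuf  :', protobuf_version)  # expect 3.6.1
```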

Hi jkjung13!
Thanks for your explanation. I have solved my problem: the issue was with the “numClasses=21,” and “inputOrder=[0, 2, 1],” settings. First, inputOrder should be [0, 1, 2] to match the NMS part of the .pbtxt file. Second, numClasses should be the number of detected classes plus 1; for example, if your model detects 20 classes, you should add 1 for the background, giving numClasses=21. (A sketch of the plugin settings follows.)
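For reference, a rough sketch of the relevant part of the UFF conversion config, assuming the graphsurgeon-based config.py used for the TensorRT SSD samples; only numClasses and inputOrder relate to the fix above, and the other values are illustrative and may differ in your config:

```python
import graphsurgeon as gs

# NMS_TRT plugin node used when converting an SSD model to UFF.
# Threshold/topK values below are illustrative defaults, not from this thread.
NMS = gs.create_plugin_node(
    name='NMS',
    op='NMS_TRT',
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=21,          # detected classes (20) + 1 for background
    inputOrder=[0, 1, 2],   # must match the NMS inputs in the .pbtxt
    confSigmoid=1,
    isNormalized=1,
)
```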

Hope this is helpful for anyone who encounters the same problem!