How to run a custom model on Jetson Nano

I retrained a model (SSD_Mobilenet_V2_coco) with a custom data set on Google Colab with TensorFlow version 1.15. After training, I uploaded the frozen_inference_graph.pb file to the Jetson Nano and converted it to .uff, but there are warnings (Feature extractor FusedBatchNormV3 is unsupported).
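For context, the usual .pb to .uff conversion for SSD models goes through the uff package that ships with TensorRT on JetPack, with the sampleUffSSD config.py as a preprocessor. A minimal sketch, assuming "NMS" as the output node and that config.py sits next to the frozen graph (both are assumptions that must match your setup):

```python
# Minimal sketch of the .pb -> .uff conversion, using the uff package that
# ships with TensorRT on JetPack. "NMS" as the output node and config.py (the
# sampleUffSSD preprocessor) are assumptions and must match your setup.
import uff

uff.from_tensorflow_frozen_model(
    "frozen_inference_graph.pb",
    output_nodes=["NMS"],
    preprocessor="config.py",
    output_filename="frozen_inference_graph.uff",
)
```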

Kindly guide me on how I can run my custom model on the Jetson Nano.

Hi,

FusedBatchNormV3 is a new operation and is not supported by the current TensorRT release.
You can find our support matrix here:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-support-matrix/index.html#supported-ops

A known workaround is to use an older TensorFlow version, e.g. v1.14.0, which won’t create this operation.
Alternatively, if you can share the model with us, we can check if this can be fixed by updating the config.py file.
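For reference, one workaround that has worked for some users is to remap the op inside the preprocess() function of config.py before running the UFF converter. The fragment below is only a sketch, not a verified fix: the attribute handling is an assumption, and the result should be checked against your graph.

```python
# Fragment of a sampleUffSSD-style config.py: downgrade FusedBatchNormV3 nodes
# so the UFF converter can handle them. This is a sketch, not a verified fix.
def preprocess(dynamic_graph):
    # dynamic_graph is the graphsurgeon DynamicGraph passed in by the converter.
    for node in dynamic_graph.find_nodes_by_op("FusedBatchNormV3"):
        node.op = "FusedBatchNorm"
        # FusedBatchNormV3 carries an extra dtype attribute ("U") that the older
        # op does not define; dropping it is an assumption, check your model.
        if "U" in node.attr:
            del node.attr["U"]
    # ... the usual SSD namespace-plugin mapping (Input, NMS, concat plugins)
    # from the sampleUffSSD config.py continues here ...
```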

Thanks.

Here is my model (.pb file):
https://drive.google.com/open?id=1cfiPEYZChezHy-Tv1ESsrHaFlUA7Ntts

I will be very thankful if you can convert the .pb file attached in #3 to .uff for me. I am running out of time and have been working on this for more than 2 months. I just want to build a TensorRT engine using this UFF on my Jetson to see the result so I can work further on my project…
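(For reference, once the .uff exists, the engine build on the Nano is usually a short Python script using the UFF parser. The sketch below assumes the binding names and input shape from the standard SSD config.py ("Input", "NMS", 3x300x300) and the TensorRT 5/6-era Python API; the SSD plugins, e.g. FlattenConcat, may also need to be loaded before parsing.)

```python
# Sketch: build and serialize a TensorRT engine from the converted UFF file.
# Binding names, input shape, and workspace size are assumptions from the
# standard sampleUffSSD setup and must match what config.py produced.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(uff_path="frozen_inference_graph.uff"):
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.UffParser() as parser:
        builder.max_workspace_size = 1 << 28   # ~256 MB; keep modest on the Nano
        builder.max_batch_size = 1
        builder.fp16_mode = True               # FP16 helps throughput on the Nano

        parser.register_input("Input", (3, 300, 300))  # CHW, as in config.py
        parser.register_output("NMS")
        parser.parse(uff_path, network)

        return builder.build_cuda_engine(network)

if __name__ == "__main__":
    engine = build_engine()
    with open("ssd_mobilenet_v2.engine", "wb") as f:
        f.write(engine.serialize())  # reload later with trt.Runtime
```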

Hi,

I don’t have permission to access the model shared in comment #3.
Would you mind checking it?

Thanks.

Thanks, AastaLLL, for your help. I successfully trained the model on TF versions 1.12.0 and 1.13.1 and ran inference on the Jetson Nano.
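(For anyone hitting the same FusedBatchNormV3 issue: the fix above amounts to pinning an older TensorFlow before training. A minimal Colab cell, assuming the GPU build and version 1.13.1 as an example, could look like this.)

```python
# Run in a Colab cell before installing the object detection API / training.
# The version string (1.13.1) is an example; 1.12.0 also worked above.
# You may need to restart the runtime before the new version is picked up.
!pip install tensorflow-gpu==1.13.1
```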