I am relatively new to TensorRT.
I have been trying to run my TensorFlow MobileNet V2 trained model through TensorRT, using the 'sampleUffSSD' example that ships with TensorRT.
Setup versions:
tensorflow-gpu 1.12.0
Cuda compilation tools, release 10.0, V10.0.326
TensorRT 5.1.2.2
Results:

- Testing with Inception worked fine, as described in the NVIDIA guide.
- Testing with MobileNet V2 without training outputs these errors:
sampleUffSSD$ ../../bin/sample_uff_ssd
&&&& RUNNING TensorRT.sample_uff_ssd # ../../bin/sample_uff_ssd
[I] ../../data/ssd/sample_ssd_relu6.uff
[I] Begin parsing model…
[E] [TRT] UffParser: Unsupported number of graph 0
[E] Failure while parsing UFF file
sample_uff_ssd: sampleUffSSD.cpp:542: int main(int, char**): Assertion `tmpEngine != nullptr' failed.
Aborted (core dumped)

- Testing with my custom-trained MobileNet V2 outputs these errors:
tensorrt/samples/sampleUffSSD$ ../../bin/sample_uff_ssd
&&&& RUNNING TensorRT.sample_uff_ssd # ../../bin/sample_uff_ssd
[I] ../../data/ssd/sample_ssd_relu6.uff
[I] Begin parsing model…
[E] [TRT] UffParser: Validator error:
FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_5_3x3_s2_128_depthwise/BatchNorm/FusedBatchNormV3:
Unsupported operation _FusedBatchNormV3
[E] Failure while parsing UFF file
sample_uff_ssd: sampleUffSSD.cpp:542: int main(int, char**): Assertion `tmpEngine != nullptr' failed.
Aborted (core dumped)
I noticed that FusedBatchNormV3 was introduced into my custom-trained (transfer learning) MobileNet V2 frozen model; it was not present in the original MobileNet V2 frozen model that I downloaded. FusedBatchNormV3 seems to be at least part of the reason for the failure.
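One workaround I have seen suggested is to downgrade the FusedBatchNormV3 nodes to FusedBatchNorm in the frozen GraphDef before UFF conversion, since the UFF parser only knows the V1 op. A minimal sketch, assuming a tf.GraphDef-style object; the V3-only "U" attribute is my assumption based on the TensorFlow op definition:

```python
def downgrade_fused_batch_norm(graph_def):
    """Rewrite FusedBatchNormV3 nodes to FusedBatchNorm in place.

    `graph_def` is expected to behave like a tf.GraphDef: it has a
    `node` list, and each node has `op` (str) and `attr` (dict-like).
    Returns the number of nodes patched.
    """
    patched = 0
    for node in graph_def.node:
        if node.op == "FusedBatchNormV3":
            node.op = "FusedBatchNorm"
            # FusedBatchNormV3 carries an extra "U" attribute (the dtype of
            # mean/variance) that the V1 op lacks; drop it so the UFF parser
            # does not trip over an unknown attribute.
            if "U" in node.attr:
                del node.attr["U"]
            patched += 1
    return patched
```

With TensorFlow installed, the frozen graph can be read into a `tf.GraphDef` via `ParseFromString`, patched with this function, and written back out with `SerializeToString` before running the UFF conversion. This only sidesteps the parser error; for inference the two op versions should be numerically equivalent.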
I also think the config.py file settings may help, but I do not know what to put in it.
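For reference, the config.py that sampleUffSSD passes to convert-to-uff uses graphsurgeon to replace the unsupported TensorFlow SSD subgraphs with TensorRT plugin nodes. Below is a sketch along those lines for an SSD MobileNet V2 graph; the node names, feature map shapes, scales, and class count are assumptions for illustration and must match your own frozen graph and training pipeline config:

```python
import graphsurgeon as gs
import tensorflow as tf

# Input placeholder replacing the Preprocessor subgraph (NCHW, 300x300).
Input = gs.create_plugin_node(
    name="Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 300, 300])

# Anchor generation plugin; shapes/scales below are illustrative.
PriorBox = gs.create_plugin_node(
    name="GridAnchor",
    op="GridAnchor_TRT",
    minSize=0.2, maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1],
    numLayers=6)

# Non-maximum suppression plugin replacing the Postprocessor subgraph.
NMS = gs.create_plugin_node(
    name="NMS",
    op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=91,  # 90 COCO classes + background; change for your dataset
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1)

concat_box_loc = gs.create_plugin_node(
    "concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1)
concat_box_conf = gs.create_plugin_node(
    "concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1)

# Map TF namespaces/nodes onto the plugin nodes defined above.
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "image_tensor": Input,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

def preprocess(dynamic_graph):
    # Collapse the named namespaces into single plugin nodes.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Drop the old graph outputs left over from the Postprocessor.
    dynamic_graph.remove(
        dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)
```

It would then be invoked along the lines of `convert-to-uff frozen_inference_graph.pb -O NMS -p config.py`, producing a .uff file whose output node is the NMS plugin.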
Please help with this; it has blocked our progress for over a week now.
I would greatly appreciate a straightforward way of converting a TensorFlow MobileNet V2 model to a working TensorRT engine.