import uff
import tensorflow as tf
import tensorrt as trt
from tensorrt.parsers import uffparser
G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)
print(tf.__version__)
I get the following error.
Traceback (most recent call last):
File "/home/px2host/Serin/PycharmProjects/tensorflow/tensorrt/tensorrt.py", line 3, in <module>
import tensorrt as trt
File "/home/px2host/Serin/PycharmProjects/tensorflow/tensorrt/tensorrt.py", line 4, in <module>
from tensorrt.parsers import uffparser
ImportError: No module named parsers
px2host@px2host1:~$ dpkg -l | grep TensorRT
ii libnvinfer-dev 4.0.4-1+cuda9.0 amd64 TensorRT development libraries and headers
ii libnvinfer-samples 4.0.4-1+cuda9.0 amd64 TensorRT samples and documentation
ii libnvinfer4 4.0.4-1+cuda9.0 amd64 TensorRT runtime libraries
ii python-libnvinfer 4.0.4-1+cuda9.0 amd64 Python bindings for TensorRT
ii python-libnvinfer-dev 4.0.4-1+cuda9.0 amd64 Python development package for TensorRT
ii python-libnvinfer-doc 4.0.4-1+cuda9.0 amd64 Documention and samples of python bindings for TensorRT
ii tensorrt 3.0.4-1+cuda9.0 amd64 Meta package of TensorRT
ii uff-converter-tf 4.0.4-1+cuda9.0 amd64 UFF converter for TensorRT package
So for deployment on a DPX2, does native (C++) TensorRT 3 have to be used?
Yes, this is encountered on an x86 Linux machine.
px2host@px2host1:~$ uname -a
Linux px2host1 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
I am using TensorFlow for modelling and training a network, and a DPX2 for deployment. Is there an available workflow for this?
Regarding the earlier error: it was caused by multiple TensorRT installations in different locations. I uninstalled all versions of TensorRT, reinstalled the latest one, and changed LD_LIBRARY_PATH in ~/.bashrc to point to the new installation path. I no longer face the errors.
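As a quick sanity check (a minimal sketch; nothing below is specific to any particular setup), printing where the module is loaded from confirms which installation Python actually picks up:

import tensorrt as trt

# Show where the tensorrt package was imported from, to make sure Python is
# not picking up a stale copy from another installation location.
print(trt.__file__)

# Not every release exposes __version__, so guard the lookup.
print(getattr(trt, "__version__", "version attribute not available"))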
I want to convert my TensorFlow Faster R-CNN model to TensorRT, but the graph of the TensorFlow model contains some ops (Switch, Assert, …) that cannot be converted by the function convert_tf2numpy_const_node(cls, tf_node) because of an invalid dtype or a non-data value.
So, what should I do for this task?
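If the Switch/Assert nodes come only from training-time checks or control dependencies and are not on the actual inference path, one option is to prune them from the frozen GraphDef before running the UFF converter. A minimal sketch, assuming TF 1.x graph_util and hypothetical checkpoint paths and node names:

import tensorflow as tf
import uff

output_nodes = ["detection_output"]  # hypothetical output node name

with tf.Session() as sess:
    # Hypothetical checkpoint paths; restore the trained model first.
    saver = tf.train.import_meta_graph("model.ckpt.meta")
    saver.restore(sess, "model.ckpt")
    # Bake variables into constants so the GraphDef is self-contained.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_nodes)

# Drop training-only nodes and keep only the subgraph needed for the outputs;
# Assert/Switch nodes that are not reachable from the outputs get removed.
cleaned = tf.graph_util.remove_training_nodes(frozen)
cleaned = tf.graph_util.extract_sub_graph(cleaned, output_nodes)

uff_model = uff.from_tensorflow(cleaned, output_nodes)

If the Switch ops are genuinely part of the inference path (as with some Faster R-CNN conditionals), pruning will not help; those layers would have to be replaced or implemented as custom plugin layers.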
Shouldn’t step 2 be done on the DPX2 using the C++ API, since the TensorRT optimizations are specific to the device that runs them?
I read in some Jetson forums that the .uff file has to be copied to the device. I am assuming that it is similar for the DPX2 as well. If not, is there an example like sampleMNIST.cpp for importing the .engine file?
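My understanding (treat the helper names as assumptions; this is from the TensorRT 3 Python docs from memory) is that the .uff file is what gets copied to the target, and the engine is then built and serialized on the device that will run it, since the optimizations are specific to that GPU. In Python the flow looks roughly like the sketch below; on the PX2 the equivalent is done in C++, where the serialized engine is reloaded with IRuntime::deserializeCudaEngine as in the samples.

import tensorrt as trt
from tensorrt.parsers import uffparser

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)

# Parse the .uff file that was copied over from the host.
uff_model = open("model.uff", "rb").read()
parser = uffparser.create_uff_parser()
parser.register_input("input", (1, 28, 28), 0)  # hypothetical name/shape, CHW order
parser.register_output("output")                # hypothetical output name

# Build the engine on the device that will run inference.
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)

# Serialize the optimized engine so it can be reloaded without rebuilding.
trt.utils.write_engine_to_file("model.engine", engine.serialize())
reloaded = trt.utils.load_engine(G_LOGGER, "model.engine")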
Hi,
does the TensorRT engine creation for .uff models (TensorFlow) also support INT8 calibration? I have a TF model with custom layers and want to deploy it to the PX2 using INT8 precision. If it is possible, what would the workflow look like?
Thanks
Thanks for your reply. I understand the workflow and what needs to be done, but I am missing the details.
Are plugin layers supported by the .uff model parser?
Is INT8 calibration supported for the engine of a .uff model on PX2?
Which steps can be done in Python, which have to be implemented in C++?
@sivaramakrishan Step 1 says to convert the .pb model to a .uff model, but the documentation you mentioned doesn’t explain how to serialize this model to put it onto the PX2. Can I just pickle/unpickle the file between host and device?
@sivaramakrishna
Yes, but from what the documentation shows, it generates the uff_model within the Python environment; I want to export the .uff file itself. I found a solution here,
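For reference, the converter returns the serialized UFF buffer, so it can simply be written out as a binary file and copied to the device; no pickling is needed. A minimal sketch, assuming a frozen graph frozen_model.pb and a hypothetical output node name:

import uff

# Returns the serialized UFF protobuf for the frozen graph.
uff_model = uff.from_tensorflow_frozen_model("frozen_model.pb", ["output"])

# Write the buffer to disk; this .uff file is what gets copied to the PX2.
with open("model.uff", "wb") as f:
    f.write(uff_model)

Depending on the converter version, an output_filename argument may also be accepted to write the file directly.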
@sivaRamaKrishna Following your steps from #4, I created a .uff file for a TensorFlow model (not LeNet) trained on MNIST. When I use sampleUffMNIST with the input and output names (lines 266, 267) replaced by those from my network and the newly generated .uff file, it throws the following errors.
ERROR: BiasAdd: kernel weights has count 800 but 22400 was expected
ERROR: Add: kernel weights has count 3211264 but 458752 was expected
ERROR: sample_uff_mnist: Unable to create engine
ERROR: sample_uff_mnist: Model load failed
The following is the .pbtxt file generated by the UFF conversion: