TensorRT 3.0 RC now available with support for TensorFlow

A release candidate package for TensorRT 3.0 and cuDNN 7.0 is now available for Jetson TX1/TX2. Intended to be installed on top of an existing JetPack 3.1 installation, the TensorRT 3.0 RC provides the latest performance improvements and features:

  • TensorFlow 1.3 UFF (Universal Framework Format) importer
  • Upgrade from cuDNN 6 to cuDNN 7
  • New layers and parameter types

See the full Release Notes here and download the RC from developer.nvidia.com/tensorrt
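For reference, the end-to-end TensorFlow import path looks roughly like this in Python. This is a minimal sketch based on the workflow in the RC's User Guide; the frozen_graph.pb filename, the Placeholder input name, and the fc2/Relu output node are placeholders for your own model:

    import tensorrt as trt
    import uff
    from tensorrt.parsers import uffparser

    # Convert a frozen TensorFlow 1.3 graph to UFF
    uff_model = uff.from_tensorflow_frozen_model("frozen_graph.pb", ["fc2/Relu"])

    # Register the graph inputs/outputs and build a TensorRT engine
    G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)
    parser = uffparser.create_uff_parser()
    parser.register_input("Placeholder", (1, 28, 28), 0)  # name, CHW dims, input order
    parser.register_output("fc2/Relu")
    engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)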

Hi, glad to hear about the release of TensorRT 3, but how do I use its Python API? I have tried the installation methods given here: [url]https://developer.nvidia.com/nvidia-tensorrt3rc-download[/url], and afterwards I still can’t “import tensorrt” from a Python environment. Can I install the TensorRT 3 Python bindings with “sudo apt-get install python-tensorrt”?

Hi,

What new layers and parameter types does TensorRT 3 support?

And what version of Caffe does it support?

By the way, can we install version 2 and version 3 simultaneously on our TX2?

Please consult the Release Notes for the new layers and parameter types:

  • The TensorRT deconvolution layer previously did not support non-zero padding, or stride values that were distinct from kernel size. These restrictions have now been lifted.
  • The TensorRT deconvolution layer now supports groups.
  • Non-determinism in the deconvolution layer implementation has been eliminated.
  • The TensorRT convolution layer API now supports dilated convolutions.
  • The TensorRT API now supports these new layers (but they are not supported via
    the NvCaffeParser):
      • unary
      • shuffle
      • padding
  • The Elementwise (eltwise) layer now supports broadcasting of input dimensions.
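For instance, a dilated convolution can now be configured directly on the convolution layer. A rough sketch against the RC's Python network-definition API; create_infer_builder/add_convolution follow the pattern in the shipped Python samples, while set_dilation is an assumption here, mirroring the C++ IConvolutionLayer::setDilation added in this release:

    import numpy as np
    import tensorrt as trt

    G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)
    builder = trt.infer.create_infer_builder(G_LOGGER)
    network = builder.create_network()

    # Dummy input and random weights, for illustration only
    data = network.add_input("data", trt.infer.DataType.FLOAT, (1, 28, 28))
    w = np.random.randn(20 * 1 * 5 * 5).astype(np.float32)
    b = np.zeros(20, dtype=np.float32)
    conv = network.add_convolution(data, 20, (5, 5), w, b)
    conv.set_dilation((2, 2))  # assumed binding for the new dilated convolution support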

I’m confirming internally which version of Caffe should be used for training (NVcaffe-0.15 or 0.16).

If you extract the tarball provided on the TensorRT downloads page, you should be able to install both versions side by side; probably not with the Debian package method, however.

Hi, check the python directory inside the tarball provided on that same download page; it contains the Python files, docs, and samples. I will check where these get installed when using the Debian package installation method.

Using the Debian package, I found the examples folder here: /usr/local/lib/python2.7/dist-packages/tensorrt
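A quick way to confirm which copy of the module Python is actually picking up (nothing TensorRT-specific; any importable module exposes __file__):

    import tensorrt
    # Should print a path under /usr/local/lib/python2.7/dist-packages/tensorrt
    print(tensorrt.__file__)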

Well, it seems that TensorRT 3 for Tesla GPUs does have a Python API: the tarball provided for Tesla GPUs has a python directory containing actual Python files. The tarball provided for Jetson platforms has a python directory too, but it only contains doc and data files. Does that mean Jetson platforms, e.g. TX1/TX2, cannot use the TensorRT 3 Python API?

Thanks for your explanation.

Is there any plan for the TensorRT caffeParser to directly import layer.cpp or layer.cu in the future?

Or, for custom layers, do we still have to add a parser rule via IPlugin?

Regarding your questions, let me check with the TensorRT team.

I was wondering if anyone could recommend a TensorFlow model to test the complete process with?

That is, where I could use DIGITS with TensorFlow 1.3 to train the model, and then use the TensorRT 3.0 RC on the Jetson for inference.

See section 2.3.2.1.1. (Training a Model in TensorFlow) from the TensorRT 3 User Guide (included in the RC download) for example TensorFlow code of training an example network model compatible with TensorRT.
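The gist of that section is to train in TensorFlow and then freeze the graph into a single self-contained GraphDef, which the UFF converter consumes. A minimal sketch with a toy network standing in for a real trained model (the layer and node names here are placeholders, not the guide's exact example):

    import tensorflow as tf

    # Toy network standing in for a real trained model
    x = tf.placeholder(tf.float32, [None, 784], name="Placeholder")
    w = tf.Variable(tf.truncated_normal([784, 10], stddev=0.1))
    b = tf.Variable(tf.zeros([10]))
    out = tf.nn.relu(tf.matmul(x, w) + b, name="fc2/Relu")

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # ... training would happen here ...
        # Fold variables into constants so the graph is self-contained
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, ["fc2/Relu"])
        # This .pb file is what the UFF converter takes as input
        with tf.gfile.GFile("frozen_graph.pb", "wb") as f:
            f.write(frozen.SerializeToString())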

The DIGITS examples also include some TensorFlow models, see here: https://github.com/NVIDIA/DIGITS/tree/master/examples

The Release Notes were amended to clarify the Python API is currently x86-only in the RC:

TensorFlow UFF models can still be imported on ARM platforms from C++ using the NvUffParser.h API.

Hi, I tried the examples provided in TensorRT-3-User-Guide.pdf. In part 2.3.2.1.3, the code crashes and I get the following error. Can you give me some tips about what’s going wrong? And by the way, is there an API reference for the TensorRT 3 Python/C++ modules? I don’t even know which variables and functions each Python module provides, and it’s a total mess trying to use TensorRT 3.
Traceback (most recent call last):
  File "/home/pc-201/Desktop/a.py", line 181, in <module>
    parser.register_input("Placeholder", (1,28,28))
NotImplementedError: Wrong number or type of arguments for overloaded function 'UffParser_register_input'.
Possible C/C++ prototypes are:
  nvuffparser::IUffParser::registerInput(char const *,nvinfer1::DimsCHW,nvuffparser::UffInputOrder)
  nvuffparser::IUffParser::registerInput(char const *,nvinfer1::DimsCHW)
  nvuffparser::IUffParser::registerInput(PyObject *,PyObject *,nvuffparser::UffInputOrder)
  nvuffparser::IUffParser::registerInput(PyObject *,nvinfer1::DimsCHW,nvuffparser::UffInputOrder)
  nvuffparser::IUffParser::registerInput(char const *,PyObject *,nvuffparser::UffInputOrder)

Hi, I have searched for the TensorRT-3-User-Guide but cannot find it. Could you post its location here?

Does TensorRT 3 support custom layers defined in a TensorFlow graph?

I also hit the same problem as #13. I followed part 2.3.2.1.3 of TensorRT-3-User-Guide.pdf, and parser.register_input("Placeholder", (28,28,1)) gives the following error:

NotImplementedError                       Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 parser.register_input("Placeholder", (28,28,1))

NotImplementedError: Wrong number or type of arguments for overloaded function 'UffParser_register_input'.
Possible C/C++ prototypes are:
  nvuffparser::IUffParser::registerInput(char const *,nvinfer1::DimsCHW,nvuffparser::UffInputOrder)
  nvuffparser::IUffParser::registerInput(char const *,nvinfer1::DimsCHW)
  nvuffparser::IUffParser::registerInput(char const *,PyObject *,nvuffparser::UffInputOrder)
Does anyone know how to solve this problem?

parser.register_input("Placeholder", (1,28,28), 0)
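For context: the third argument selects the registerInput(char const *, PyObject *, nvuffparser::UffInputOrder) overload listed in the error message above; passing 0 picks the first UffInputOrder enum value (assumed here to be the channel-first NCHW layout that the (1,28,28) dims imply):

    from tensorrt.parsers import uffparser

    parser = uffparser.create_uff_parser()
    # name, CHW dims, input order (0 = first UffInputOrder value, assumed NCHW)
    parser.register_input("Placeholder", (1, 28, 28), 0)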

Same problem here.

parser.register_input("Placeholder", (1, 28, 28))

NotImplementedError: Wrong number or type of arguments for overloaded function 'UffParser_register_input'.
Possible C/C++ prototypes are:
  nvuffparser::IUffParser::registerInput(char const *,nvinfer1::DimsCHW,nvuffparser::UffInputOrder)
  nvuffparser::IUffParser::registerInput(char const *,nvinfer1::DimsCHW)
  nvuffparser::IUffParser::registerInput(PyObject *,PyObject *,nvuffparser::UffInputOrder)
  nvuffparser::IUffParser::registerInput(PyObject *,nvinfer1::DimsCHW,nvuffparser::UffInputOrder)
  nvuffparser::IUffParser::registerInput(char const *,PyObject *,nvuffparser::UffInputOrder)

Download the tarball from the TensorRT 3.0 RC download page; the PDF is located inside the doc directory of the extracted archive. If you are using the Debian package, the documentation should be under /usr/share/doc/tensorrt. There is also Python documentation in the python/doc folder of the tarball.

In the tarball for x86_64, the Python HTML documentation is located in the python/doc/python2.7 and python/doc/python3.5 subdirectories.

Does ngsong’s suggestion work for you? parser.register_input("Placeholder", (1,28,28), 0)