TensorRT documentation

Hello everyone,

I’ve been using Theano extensively, and I now want to use my trained Theano models with TensorRT to improve inference performance. As this article suggests: “GIE supports networks trained using popular neural network frameworks including Caffe, Theano, Torch and Tensorflow.”

The problem is that I have no clue how to do this, and I cannot find any documentation for it, or anything at all, actually. TensorRT is a big black box for me at this point.

Where can I get the interface documentation, samples, and other useful help?

Thanks

Join the TensorRT early access program:

https://developer.nvidia.com/tensorrt

TensorRT is under active development; I’m not sure Theano is supported yet.

Don’t worry about Theano; I might get into that in the future. For now, executing and understanding the samples would be enough. I can’t find any HTML files, READMEs, or anything else on how to use TensorRT 1.0.

Isn’t there a fast way of testing TensorRT on a Jetson TX1 with a full JetPack 2.3 installation?

I have just requested access to the TensorRT 2.0 early access program and am now waiting for it.

@mescarra I am struggling to find detailed documentation as well :(. I’m using https://github.com/dusty-nv/jetson-inference (the “Hello AI World” guide to deploying deep-learning inference networks and deep vision primitives with TensorRT on NVIDIA Jetson) to get started, but I haven’t yet found out how to, for example, process multiple images at once, change the batch size, or profile execution. Any hints would be appreciated.
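
In case it helps anyone else starting from that repo, the basic classification demo ran like this for me (paths from the repo’s build instructions; double-check the binary name against the README, as it may have changed):

cd jetson-inference/build/aarch64/bin
./imagenet-console orange_0.jpg output_0.jpg

It classifies a single image per invocation, which is exactly why I’m now after the batch-size controls.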

I have found another example on the Tegra: /usr/src/gie_samples/samples/sampleGoogleNet. It demonstrates how to vary the batch size for benchmarking, although it uses zero data rather than real images.
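
For anyone else looking for the batch-size and profiling hooks: from reading NvInfer.h, the pattern in that sample boils down to roughly the following. Treat this as a sketch rather than working code; gLogger, the file paths, the “prob” output blob, and the pre-allocated device buffers are placeholders, and error handling is omitted:

#include <cstdio>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;

// Minimal logger required by the TensorRT builder/runtime.
class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::printf("%s\n", msg);
    }
} gLogger;

// Per-layer timings are delivered through the IProfiler callback.
struct LayerProfiler : public IProfiler
{
    void reportLayerTime(const char* layerName, float ms) override
    {
        std::printf("%-40s %.3f ms\n", layerName, ms);
    }
};

ICudaEngine* buildEngine(const char* deploy, const char* model, int maxBatch)
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();

    // Parse a Caffe deploy/model pair, as the early samples do.
    auto parser = nvcaffeparser1::createCaffeParser();
    auto blobs = parser->parse(deploy, model, *network, DataType::kFLOAT);
    network->markOutput(*blobs->find("prob"));   // "prob" is a placeholder name

    builder->setMaxBatchSize(maxBatch);          // upper bound baked into the engine
    builder->setMaxWorkspaceSize(16 << 20);

    ICudaEngine* engine = builder->buildCudaEngine(*network);
    network->destroy();
    parser->destroy();
    builder->destroy();
    return engine;
}

void timedRun(ICudaEngine& engine, void** deviceBuffers, int batchSize)
{
    IExecutionContext* context = engine.createExecutionContext();
    LayerProfiler profiler;
    context->setProfiler(&profiler);             // enables per-layer reporting

    // Any batchSize up to maxBatch works; buffers are ordered by binding index.
    context->execute(batchSize, deviceBuffers);

    context->destroy();
}

So the batch size is fixed at an upper bound when the engine is built, and then chosen per call at execution time.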

I’m currently in the process of unifying classification and benchmarking based on these examples using the Collective Knowledge framework: https://github.com/dividiti/ck-tensorrt (a Collective Knowledge repository for NVIDIA’s TensorRT).

Going to revive this thread. The docs I’ve found after the deb install are in

/usr/share/doc/tensorrt

Note, however, that I’m using TensorRT 2.1.2; if GIE is what you have installed, there should likewise be docs for it in /usr/share/doc.

@RoundShape, are you using TensorRT 2.1.2 on a Jetson TX1 or a Jetson TX2? The only version I can get for the Jetson TX2 is TensorRT 1.

@maoxiuping I was considering using TensorRT on AWS with K80(s), but my models are trained in Keras. Dropping them into TensorFlow and then using this library seems like far too manual a process, especially with dozens of constantly evolving layers and some custom work from our researchers mixed in. For my purposes, I think TensorFlow Serving is the tool I’m going to try, for its versioning flexibility and better inference efficiency. :/

Hi RoundShape,

Do you know where TensorRT 2.1.2 is installed on the TX2? I want to find the path.

Thanks.

Sorry, I have no idea. I don’t even have a TX1 or TX2; I was just interested in TensorRT.

I found a TensorRT User Guide (Last updated July 24, 2017) here:

http://docs.nvidia.com/deeplearning/sdk/tensorrt-user-guide/index.html

And on the Jetson TX2, after installing JetPack 3.1, I could find the TensorRT sample source code at this location:

ls -l /usr/src/tensorrt/
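
If the layout matches the older gie_samples, the samples should build in place with make, with the binaries landing one level up (this is from memory, so your paths may differ):

cd /usr/src/tensorrt/samples
sudo make
ls ../bin

After that, sample_googlenet (or giexec, if I recall the TensorRT 2.x tool name correctly) can be run directly from /usr/src/tensorrt/bin.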