I’ve been using Theano extensively, and I now want to use my trained Theano models with TensorRT to improve inference performance. As this article suggests: “GIE supports networks trained using popular neural network frameworks including Caffe, Theano, Torch and Tensorflow.”
The problem is that I have no clue how to do this, and I cannot find any documentation for it, or much of anything else. TensorRT is a big black box for me at this point.
Where can I get the interface documentation, samples, and other useful help?
Don’t worry about Theano; I might get into that in the future. For now, executing and understanding the samples would be enough. I can’t find any HTML or README files, or anything else explaining how to use TensorRT-1.0.
Isn’t there a fast way of testing TensorRT on a Jetson TX1 with a full JetPack-2.3 installation?
I have just requested access to the TensorRT-2.0 early program and now I’m waiting for it.
I have found another example on Tegra: /usr/src/gie_samples/samples/sampleGoogleNet. It demonstrates how to vary the batch size for benchmarking, although it uses zero-filled data rather than real images.
@maoxiuping I was considering using TensorRT on AWS with K80(s), but my models are trained in Keras. Dropping them into TensorFlow and then using this library seems like far too manual a process, especially with dozens of constantly evolving layers and some custom work from our researchers mixed in. For my purposes, I think TensorFlow Serving is the tool I’ll try, for its versioning flexibility and better inference efficiency. :/