TensorRT + Python on TX2?

According to this post: https://devtalk.nvidia.com/default/topic/1029109/jetson-tx2/wheel-file-for-tensorrt-installation-on-nvidia-jetson-tx2/post/5236544/#5236544

it seems that the Python API is only available on the x86 Linux platform as of February 2018.

So is there any plan to enable it on the TX2 platform?

If the TX2 platform isn’t supported, what procedure should we follow to develop TensorRT applications in Python on the TX2?

Thanks.

Hi,

The Python API is not available for the Jetson platform, and we cannot share our future plans here.

Here is a sample workflow that may give you some ideas for your requirement (a minimal sketch follows the list):

  • Python for pre-processing
  • C++ for TensorRT inference
  • SWIG as the interface between Python and C++
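A minimal sketch of the Python side of that workflow is below. The module name trt_infer, the TrtInfer class, the engine file name, and the pre-processing constants are all hypothetical placeholders; the C++ inference class and the SWIG interface file that generate the wrapper are assumed to already exist and to be built against the TensorRT libraries.

  # Hypothetical Python side: pre-processing in Python, inference in C++
  # through a SWIG-generated module. All names below are assumptions.
  import cv2
  import numpy as np
  import trt_infer  # SWIG-generated wrapper around a C++ TensorRT class (hypothetical)

  # Pre-process in Python: load, resize to the network input size, normalize
  img = cv2.imread("dog.jpg")
  img = cv2.resize(img, (224, 224)).astype(np.float32)
  img = (img - 127.5) / 127.5
  pixels = img.ravel().tolist()  # SWIG typemaps convert the list to std::vector<float>

  # Inference in C++ through the SWIG wrapper
  net = trt_infer.TrtInfer("googlenet.engine")  # loads a serialized TensorRT engine
  scores = net.infer(pixels)                    # returns per-class scores
  print("top-1 class:", int(np.argmax(scores)))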

Thanks.

I’ve created a TensorRT GoogLeNet example in which I used Cython to wrap the C++ code, so that I could run TensorRT inference directly from Python. Hope it helps.

The code was tested on Jetson Nano, but it should work on Jetson TX2 too.
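For readers curious what such a wrapper looks like, here is a rough Cython sketch of the general technique of wrapping a C++ TensorRT inference class; the header name, class name, and method signatures below are hypothetical stand-ins rather than the actual code of the example above.

  # trt_googlenet.pyx -- Cython wrapper sketch (all names are hypothetical)
  # distutils: language = c++
  from libcpp.string cimport string
  from libcpp.vector cimport vector

  # Declare the C++ inference class assumed to live in trt_googlenet.h / .cpp
  cdef extern from "trt_googlenet.h":
      cdef cppclass TrtGoogLeNet:
          TrtGoogLeNet(string engine_path) except +
          vector[float] infer(vector[float] pixels) except +

  cdef class PyTrtGoogLeNet:
      """Thin Python-facing class that owns the C++ object."""
      cdef TrtGoogLeNet* _net

      def __cinit__(self, str engine_path):
          self._net = new TrtGoogLeNet(engine_path.encode("utf-8"))

      def __dealloc__(self):
          del self._net

      def infer(self, pixels):
          # Cython converts a Python list of floats to std::vector<float> and back
          return self._net.infer(pixels)

The .pyx file would then be compiled with cythonize in a setup.py that also builds the C++ implementation and links against the TensorRT libraries, giving an extension module that Python scripts can import directly.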

Thanks for always sharing awesome articles with the community! Good job!