I have a Caffe model (deploy.prototxt & snapshot.caffemodel files). I am able to run them on my Jetson TX2 using the nvcaffe / pycaffe interface (e.g. calling net.forward() in Python). My understanding is that TensorRT can significantly speed up network inference. However, I have not been able to find a clear guide online on how to:
(1) convert my caffe network to tensorRT (.tensorcache) file
(2) perform inference with the tensorRT network.
Would really appreciate some guidance on this or at least a link to a useful guide.
Also, I am a Python coder and my C++ knowledge is minimal. Any method that minimizes C++ usage would be optimal.
My application is super-resolution, which takes an input image and outputs a larger image. There are no bounding boxes or image labels involved. Thus, my understanding is that the precompiled imagenet-console (your example above), detectnet-console, and segnet-console would not apply, and I would have to make these files myself. Is this correct?
I looked at the TensorRT developer guide (the link provided above), and I still could not understand how to actually convert a Caffe model to TensorRT. The one example, the MNIST example in section 3.2, only had bits and pieces of code. Is there a git repository where this example is shown in full, along with the associated workflow?
Hi rsandler00, yes, you are correct: you would want to modify one of the above to expect your respective input and output blobs. imageNet, detectNet, and segNet all use the generic tensorNet class underneath, which handles the caffemodel loading and accepts the input and output blob names. It is then up to the child classes to interpret the output blobs (in your case, the super-resolution image).
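For reference, the flow that tensorNet implements under the hood can be sketched roughly as follows. This is a pseudocode sketch based on the TensorRT C++ Caffe parser API of that generation (nvinfer1 / nvcaffeparser1), not a complete program — the file names, blob name, and sizes are placeholders you would replace with your own, and all error handling and the logger class are omitted:

```cpp
// Sketch: import a Caffe model and serialize a TensorRT engine (.tensorcache).
IBuilder* builder = createInferBuilder(gLogger);
INetworkDefinition* network = builder->createNetwork();
ICaffeParser* parser = createCaffeParser();

// Parse deploy.prototxt + snapshot.caffemodel into the TensorRT network.
const IBlobNameToTensor* blobs =
    parser->parse("deploy.prototxt", "snapshot.caffemodel",
                  *network, DataType::kFLOAT);

// Mark your model's output blob (for super-resolution, the output image blob).
network->markOutput(*blobs->find("output_blob_name"));

builder->setMaxBatchSize(1);
builder->setMaxWorkspaceSize(16 << 20);

// Build the optimized engine and serialize it to disk.
ICudaEngine* engine = builder->buildCudaEngine(*network);
IHostMemory* serialized = engine->serialize();
// ...write serialized->data() / serialized->size() to the .tensorcache file...
```

At inference time you deserialize that file with a runtime, create an execution context, and call execute() with device pointers bound to the input and output blobs — which is exactly the part tensorNet wraps for the child classes.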
I’m working with rsandler00 and we’re trying to build our own custom child class for super-resolution. I’ve based the child class (named superResolution) on the existing child classes such as detectNet, and created 3 files: superResolution.cpp, superResolution.h, and superResolution-console.cpp. Now I’m just wondering how to correctly build these files with tensorNet so we can run our own code. I copied the code to the jetson-inference build folder, tried modifying the existing CMakeLists.txt file, and removed the other child classes we don’t need for our use case, but I couldn’t build the project. Forgive me, as I’m new to using cmake. Is there a proper way of building this? Any help will be appreciated!
We managed to build our own custom class by adding the 3 files mentioned in my previous post to their respective CMakeLists.txt files in the jetson-inference repository. Thank you for your help.
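In case it helps others, the change amounted to a CMakeLists.txt fragment along these lines. The exact contents depend on your checkout of jetson-inference (in our version the top-level CMakeLists.txt gathered the library sources with a glob), so treat the names and paths below as illustrative rather than exact:

```cmake
# The library target picks up sources via a glob, so superResolution.cpp /
# superResolution.h compile into libjetson-inference once they sit in the
# source directory, e.g.:
#   file(GLOB inferenceSources *.cpp *.cu)
#   add_library(jetson-inference SHARED ${inferenceSources})

# The console utility is added as its own executable target, mirroring
# how detectnet-console is declared:
add_executable(superResolution-console superResolution-console.cpp)
target_link_libraries(superResolution-console jetson-inference)
```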
Now we are having issues getting our custom super-resolution code to work correctly. I’m new to Caffe models and TensorRT, and I have no clue how the input Caffe model ties into the sample code such as detectNet. I wrote the 3 custom files by stripping down the sample code available in the jetson-inference repository. After successfully building the code, I thought we were golden and ready to generate super-resolution images. I was able to load and save the images, but no image processing was done in between. Being a newbie to these topics, I thought the image processing (in our case, the super-resolution) was done by the input Caffe model, and all I needed to do was load the image, model, prototxt, and other input parameters, then let the Caffe model do its magic. Of course, I was wrong.
I’ve now spent a lot of time trying to figure out how exactly the sample code makes use of the input image and how the Caffe model ties into the whole thing. Could you shed some light or point me in the right direction on how to proceed with the super-resolution image, please? We’ll greatly appreciate any help.
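To make the question concrete: my current guess from reading the samples is that, before the engine runs, the loaded image has to be converted from the interleaved HWC byte layout into the planar CHW float layout a Caffe-trained network expects, with a per-channel mean subtracted. Here is a minimal CPU sketch of that idea (jetson-inference appears to do this step in CUDA; the mean values and channel order below are made-up placeholders, since both are model-specific):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Convert an interleaved HWC uint8 image (as typically loaded from disk)
// into planar CHW float, subtracting a per-channel mean. Whether channels
// are RGB or BGR, and which mean to use, depends on how the model was trained.
std::vector<float> hwcToPlanarChw(const std::vector<uint8_t>& hwc,
                                  int width, int height,
                                  const float mean[3])
{
    std::vector<float> chw(3 * width * height);
    for (int c = 0; c < 3; ++c)
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                chw[c * width * height + y * width + x] =
                    static_cast<float>(hwc[(y * width + x) * 3 + c]) - mean[c];
    return chw;
}
```

If that is right, then our stripped-down console code was loading and saving the image but never filling the network's input blob with this planar data (or reading the output blob back), which would explain why nothing happened in between.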
Thank you for the clarifications provided. I understand that AlexNet and GoogLeNet are compatible by default, but other networks need to be checked.
However, another question emerged: how do I run jetson-inference on multiple cameras simultaneously? Is there a direct way of doing that, or do I just run two instances of imagenet (for example) in the terminal? I will test the latter by experiment.
I’m not sure if this is the correct place to post this, but it looks to me like I have a similar problem. I’m using a Jetson TX2, Ubuntu 16.04, JetPack 3.3. The model was made on the Jetson TX2 using DIGITS.