tlt-converter - UFF parser Error

I used TLT to train resnet18_detectnetv2 and exported the corresponding .etlt files, but I have a problem converting them to the engine file needed for Jetson devices. The Readme.md in the converter package for Jetson devices seems to be corrupted, and I am not able to run the converter. I went through Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation but am still getting an error while converting:
[ERROR] UffParser: Could not open /tmp/fileaWVGGL
[ERROR] Failed to parse uff model
[ERROR] Network must have at least one output
[ERROR] Unable to create engine
Segmentation fault
How should I go about pointing the config files in DeepStream to the exported model? Do I need .etlt, .engine, or .trt?

Hi sharanssundar,
You can deploy either the etlt model or a TRT engine in DS. See the link below.
https://devtalk.nvidia.com/default/topic/1065558/transfer-learning-toolkit/trt-engine-deployment/
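For the etlt route, nvinfer needs the encoded model plus the key; for the engine route, it just needs the engine file. A minimal config sketch (property names as in the DeepStream TLT sample configs; paths are placeholders):

# Option A: deploy the .etlt directly, DS builds the engine on first run
tlt-encoded-model=/path/to/resnet18_detector.etlt
tlt-model-key=<your TLT key>
# Option B: deploy a TRT engine pre-built with tlt-converter
model-engine-file=/path/to/resnet18_detector.engine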

If you choose to generate a TRT engine, please use the tlt-converter tool.
For Jetson devices, make sure the tool is downloaded from https://developer.nvidia.com/tlt-converter.

Could you paste the full log, along with the command, when you run tlt-converter?

Also, please check that you provided the correct key via the -k argument to tlt-converter. The key should be exactly the same as the one used in the TLT training phase. An incorrect key will result in an invalid model and hence engine generation failure.

Thank you Morgan for the doc on deploying models in DS. I'll look into it.
As far as tlt-conversion is concerned, I have verified that the key provided was the same one used for training. The full log is as follows:

./tlt-converter -k YmI2a3U0cGk5aGdzMXNkdm5taWY3Yzd1OGg6MDM2NjMyYzktYWI4OS00OTQ1LwE4NmYtM2Y5YTA5ZTQ4NDVi -o output_cov/Sigmoid,output_bbox/BiasAdd -d 3,400,600 -e /home/sharan/resnet18_detector.engine /home/sharan/resnet18_detector.etlt
[ERROR] UffParser: Could not open /tmp/fileYklUR3
[ERROR] Failed to parse uff model
[ERROR] Network must have at least one output
[ERROR] Unable to create engine
Segmentation fault (core dumped)

I am trying to run this on Jetson Nano.

Hi sharanssundar,
Please double-check the items below if you are running on the Nano.

  1. The tlt-converter is downloaded from https://developer.nvidia.com/tlt-converter.
  2. The key is correct. The key should be exactly the same as the one used in the TLT training phase.
  3. The etlt model is available at the path /home/sharan/.
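For example, a quick sanity check on the Nano (paths taken from your command; assuming -h prints usage, as most builds do):

$ chmod +x ./tlt-converter
$ ./tlt-converter -h                          # should print usage, not "command not found"
$ ls -l /home/sharan/resnet18_detector.etlt   # the model file must exist and be readable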

I still have the TLT session up and running, so I am sure it's the same key (I even trained for 5 epochs just now to test the conversion). I downloaded the converter from the link you gave, and my .etlt model is at /home/sharan/. I still get the same error. What else might be the reason?

Hi,
See the TLT doc, section 2, for DetectNet_v2:

Input size: C * W * H (where C = 1 or 3, W >= 480, H >= 272, and W, H are multiples of 16)
Image format: JPG, JPEG, PNG
Label format: KITTI detection

According to your command, I suspect your etlt model is 600x400, right?
600 is not a multiple of 16, so it does not meet the requirement.
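You can verify the constraint quickly in a shell; the remainder must be 0 for both dimensions:

$ echo $((600 % 16)) $((400 % 16))
8 0

So the height of 400 is fine, but the width of 600 is not; any width >= 480 that is a multiple of 16 (e.g. 480, 592, 608) would work.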

Yes, the model had an input of 600x400. Thanks to you I realised that and changed it to 480x400. But I am still getting the same error. Let me explain what I did in detail.

  1. Download the converter from https://developer.nvidia.com/tlt-converter.
  2. Open the Readme and export TRT_LIB_PATH and TRT_INC_PATH accordingly.
  3. Right-click tlt-converter and make it executable (allow executing file as program). Otherwise it throws the error ./tlt-converter: command not found.
  4. Export keys and parameters following Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation.
  5. Run the converter as per the commands in the above doc, in FP32 mode only, not INT8 (the exact commands are sketched after the log below).
    And I run into:
[ERROR] UffParser: Could not open /tmp/fileYklUR3
[ERROR] Failed to parse uff model
[ERROR] Network must have at least one output
[ERROR] Unable to create engine
Segmentation fault (core dumped)
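For reference, the commands behind steps 2-5 were roughly the following (the TRT paths are my guesses for the Nano, yours may differ per the Readme; the key is the same one used in training):

$ export TRT_LIB_PATH=/usr/lib/aarch64-linux-gnu
$ export TRT_INC_PATH=/usr/include/aarch64-linux-gnu
$ chmod +x ./tlt-converter
$ ./tlt-converter -k $KEY \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -d 3,400,480 \
    -e /home/sharan/resnet18_detector.engine \
    /home/sharan/resnet18_detector.etlt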

I am not sure the problem is in the export of the model, because the converter for x86 systems inside the nvidia-docker works fine and I am able to generate a .trt model. Only the converter for Jetson devices throws this error.

Hi sharanssundar
I do not understand why you did step 2.
Open Chrome on your laptop, type https://developer.nvidia.com/tlt-converter, and press Enter; TLT_Converter.zip will start downloading. It is a zip file.
Then unzip it and copy it to the Nano device. Please check that you see the md5sum below.
$ chmod +x tlt-converter
$ md5sum tlt-converter
46de17fd216e0364a1fccead6d68707b tlt-converter

Hi Morgan,
Step 2 was following the steps in the Readme file provided in the https://developer.nvidia.com/tlt-converter zip file. But now I followed your steps and it works. I also got help from

as I realised that I had also done the same with my model. Thanks for your time and patience.


Hi sharanssundar
Glad to know you solved the problem! Thanks very much for using TLT!

Hello!
I’m having the exact same error message, but I downloaded a pretrained model from NGC since I don’t see the benefit of training my own on the exact same dataset. How do I get the exact key it was trained with? I can’t find it on the model’s page.

https://ngc.nvidia.com/catalog/models/nvidia:tlt_facenet

@dangraf
If you were using facenet as a pretrained model in your training, please use the key mentioned in https://ngc.nvidia.com/catalog/models/nvidia:tlt_facenet:

The trainable and deployable models are encrypted and will only operate with the following key:

  • Model load key: nvidia_tlt
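With that key, the converter call for this model would look something like this (the output nodes are the standard DetectNet_v2 ones; the input dimensions 3,416,736 are an assumption, check the model card):

$ ./tlt-converter -k nvidia_tlt \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -d 3,416,736 \
    -e facenet.engine \
    model.etlt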

Ahh, sorry for my stupidity; I thought that “nvidia_tlt” was a command generating the key string in the same format as the one I generated for myself.

Thanks for writing this; I guess you are not alone in that. I also assumed the same thing, and for the past 6 hours I have been surfing from one thread to another.
So if someone new is looking at this thread: the training key defaults to tlt_encode. It is not a command; you are supposed to put your own key there, but if you forgot and left the default, then use tlt_encode as the key.
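In other words, the key is just a string passed with -k, and the same string must be used at every stage (a sketch; exact flags vary by TLT version):

$ KEY=tlt_encode                       # or the key you set yourself during training
$ tlt-train detectnet_v2 ... -k $KEY
$ tlt-export ... -k $KEY
$ ./tlt-converter -k $KEY ...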
