Incorrect results during inference using TensorRT 3.0 C++ UFF parser
Hello there,

I am currently having problems getting proper results using my mobilenet.uff file. I edited the sampleUffMNIST code to perform inference with my own .uff file. The engine seems to load properly, but the output I get is wrong. My output is below:
[code]
5 eltCount
--- OUTPUT ---
[c] f = 1.000000
[c] f = 1.000000
[c] f = 1.000000
[c] f = 1.000000
[c] f = 1.000000
0 => 1 : ***
1 => 1 :
2 => 1 :
3 => 1 :
4 => 1 :

Average over 1 runs is 467.3683472 ms.
[/code]


As you can see, the network is predicting "1" for all classes. I understand this could be due to the input. I wrote the pixel values for my preprocessed image into a text file (I know this isn't the best option, but I wanted to get the code working end to end before adding preprocessing code in the C++ file, as I am not fluent in C++). This is my code block for writing into the txt file:
[code]
with open("/raid/nri/Classification_task/Data_files_txt/weeds/Mobilenet_v1_1.0/Preprocessed_array3.txt", "w") as outfile:
    counters = 0  # unused
    for k in range(3):            # channel
        for i in range(224):      # row
            for j in range(224):  # column
                outfile.write("{:10.4f}".format(final_image[i][j][k]) + " ")
[/code]

And I am reading it into the input buffer like this:
[code]
std::ifstream file(filename);
while (file >> number) {
    inputs[i] = number;
    i++;
}
[/code]
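
As a sanity check (a minimal sketch, assuming numpy is available), the file can be read back in Python to confirm the value count and ordering:
[code]
import numpy as np

# Read the flat values back and confirm count and ordering.
vals = np.loadtxt("/raid/nri/Classification_task/Data_files_txt/weeds/Mobilenet_v1_1.0/Preprocessed_array3.txt")
assert vals.size == 3 * 224 * 224
chw = vals.reshape(3, 224, 224)  # matches the k/i/j write order above
[/code]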


I defined the input to my network like so:
[code]
parser->registerInput("input", DimsCHW(3, 224, 224));
[/code]


The network was originally trained in NHWC format. Does this have any impact? Any help or advice is appreciated, and if more information is needed, please let me know. Thanks!
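
For reference, the write loop above is intended to emit the HWC array in CHW order, which in numpy terms is equivalent to this (a sketch, with final_image as the preprocessed 224x224x3 array):
[code]
import numpy as np

# Channel-outermost looping over an HWC array is the same as this transpose:
chw = np.transpose(final_image, (2, 0, 1)).astype(np.float32)  # HWC -> CHW
flat = chw.ravel()  # C-order flatten matches the k/i/j loop above
[/code]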

#1
Posted 12/11/2017 08:35 PM   
I had a similar issue. I couldn't reproduce the result I got from the Python API.

#2
Posted 12/12/2017 12:58 AM   
Hi,

Thanks for your feedback.

In summary:
The result from the Python API is correct, but the result from the C++ API is wrong.
Is this correct?

If yes, this is a significant issue. Could you attach or share the model with us for further investigation?

Thanks.

#3
Posted 12/12/2017 04:45 AM   
Hello,

I will attach my .uff file here. In my case, I did not use the Python API; I had only used the .pb file, which gave good results, before converting it and trying the C++ API. Do you suggest I get it working with the Python API first? Thanks a lot for the help!

#4
Posted 12/12/2017 04:12 PM   
Hi,

Thanks for your feedback.

Could you check whether the result of the Python-based API is correct?
This check will help figure out whether the error comes from TensorRT or from preprocessing.

Thanks.

#5
Posted 12/13/2017 06:09 AM   
Hi,

Sorry it has taken me a while to get back to you. I have tried out the Python API. The results I get from it are far different from what I get with my .pb file. They are not all 1's, as in the case of my C++ code, but the results are still not good. Honestly, I am not really sure what is causing these problems, but I'm fairly certain it is not preprocessing.

I will attach the .pb file I converted to .uff format, and I will also attach my .uff file. This one is a bit different from the one I shared earlier because this time, when using convert-to-uff, I used the -I parameter. I also used my new .uff file with my C++ code, but with no luck; it still only prints 1's. Thanks a lot for any help, and please let me know if you require more information.

#6
Posted 12/20/2017 09:22 PM   
Hi,

Could you share the log from converting the TensorFlow model into UFF format?
Please note that the UFF parser may automatically skip a non-supported layer without raising an assertion.

Thanks.

#7
Posted 12/21/2017 09:37 AM   
Hi,

This is the command I run:
[code]
convert-to-uff tensorflow -o /raid/nri/Classification_task/Exported_uff_files/log_mobilenet.uff --input-file /raid/nri/Classification_task/Optimized_graphs/Mobilenet_no_squeeze/optimized_mobilenet.pb -O MobilenetV1/Predictions/Reshape_1 -I input,input,float32,3,224,224
[/code]

And this is the log I get:
[code]
Loading /raid/nri/Classification_task/Optimized_graphs/Mobilenet_no_squeeze/optimized_mobilenet.pb
Using output node MobilenetV1/Predictions/Reshape_1
Using input node input
Converting to UFF graph
No. nodes: 633
UFF Output written to /raid/nri/Classification_task/Exported_uff_files/log_mobilenet.uff
[/code]
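
For what it's worth, the input and output node names can also be double-checked directly against the frozen graph (a quick sketch):
[code]
import tensorflow as tf

# Print the ops for the declared input/output nodes to verify the names.
graph_def = tf.GraphDef()
with open("optimized_mobilenet.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    if node.name in ("input", "MobilenetV1/Predictions/Reshape_1"):
        print(node.name, node.op)
[/code]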

#8
Posted 12/21/2017 04:05 PM   
Hi,

Thanks for your feedback.
We are checking this issue and will update you later.

Thanks.

#9
Posted 12/22/2017 06:59 AM   
Hi,

Thanks a lot!

#10
Posted 12/22/2017 07:14 PM   
Hi,

Thanks for sharing your model and results.
Since the model has a lot of operations, could you help cut it down and narrow down which layers yield the difference?
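
For example, the frozen graph can be truncated at an intermediate node with TensorFlow's graph utilities (a rough sketch; the node name here is only a placeholder):
[code]
import tensorflow as tf
from tensorflow.python.framework import graph_util

# Keep only the subgraph up to one intermediate node, then re-save it.
graph_def = tf.GraphDef()
with open("optimized_mobilenet.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

sub_graph = graph_util.extract_sub_graph(
    graph_def, ["MobilenetV1/Conv2d_0/Relu6"])  # placeholder node name
with open("truncated_mobilenet.pb", "wb") as f:
    f.write(sub_graph.SerializeToString())
[/code]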

Thanks.

#11
Posted 12/25/2017 08:43 AM   
Hi,

Could you please elaborate? I'm not really sure how to do that. A simple example would help, if possible.

Thanks!

#12
Posted 12/28/2017 05:10 AM   
Hi,

First, you can get the TensorRT layer names by generating the .pbtxt file:
[code]
uff.from_tensorflow(graphdef=frozen_graph,
                    output_filename=UFF_OUTPUT_FILENAME,
                    output_nodes=OUTPUT_NAMES,
                    [b]text=True[/b])
[/code]


Then, compare the output layer by layer:
[code]
...
parser.register_output("layer1")
...
context.enqueue(1, bindings, stream.handle, None)
cuda.memcpy_dtoh_async(h_layer1, d_layer1, stream)
# Compare result between TensorRT and TensorFlow here
...
[/code]
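
Putting it together, the TensorFlow side of the comparison could look roughly like this (a sketch; "layer1", the file name, and input_nhwc are placeholders, and note that TensorRT buffers are CHW while TensorFlow tensors are NHWC):
[code]
import numpy as np
import tensorflow as tf

# TensorFlow reference output for the same input and node.
graph_def = tf.GraphDef()
with open("frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name="")

with tf.Session() as sess:
    tf_out = sess.run("layer1:0", feed_dict={"input:0": input_nhwc})

# h_layer1 is the host buffer copied back from TensorRT above (CHW layout).
n, h, w, c = tf_out.shape
trt_out = h_layer1.reshape(c, h, w).transpose(1, 2, 0)  # CHW -> HWC
print("max abs diff:", np.abs(tf_out[0] - trt_out).max())
[/code]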

Thanks

#13
Posted 12/29/2017 06:38 AM   
Hello,

Thanks for the response; I am currently trying this process out. If it isn't too much trouble, could you provide or point to an example of inference using UFF models and the C++ API on 3-channel images? This would help me a lot in fixing my C++ code, because I wrote it using the MNIST UFF sample as a reference, which only has 1 channel. I believe I changed the code accordingly to fit 3-channel images, but I could be wrong.

Thanks

#14
Posted 01/02/2018 04:46 PM   
Hello,

Following your suggestion, I am trying to get the output of each layer to compare, but I am getting the following error when parsing my UFF model stream:

[code]
Using output node MobilenetV1/Predictions/Reshape_1
Converting to UFF graph
No. nodes: 633
UFF Output written to /raid/nri/Classification_task/TensorRt_text_files/mobilenet_uff
UFF Text Output written to /raid/nri/Classification_task/TensorRt_text_files/mobilenet_uff.pbtxt
File "/usr/local/lib/python3.4/dist-packages/tensorrt/utils/_utils.py", line 186, in uff_to_trt_engine
assert(parser_result)
[TensorRT] ERROR: Failed to parse UFF model stream
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/tensorrt/utils/_utils.py", line 186, in uff_to_trt_engine
    assert(parser_result)
AssertionError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/dami/TensorRt_test/CompareLayers.py", line 41, in <module>
    engine = trt.utils.uff_to_trt_engine(G_LOGGER,uff_model,parser,1,1 << 20)
  File "/usr/local/lib/python3.4/dist-packages/tensorrt/utils/_utils.py", line 194, in uff_to_trt_engine
    raise AssertionError('UFF parsing failed on line {} in statement {}'.format(line, text))
AssertionError: UFF parsing failed on line 186 in statement assert(parser_result)
[/code]


I am using the TensorRT 3.0 user guide to write this code. This is the line that gives the error:

[code]
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)
[/code]


This is how I am defining the parser:
[code]
parser = uffparser.create_uff_parser()
parser.register_input("input", (3, 224, 224), 0)
parser.register_output("MobilenetV1/Predictions/Reshape_1")
[/code]


I also tried using the provided lenet5_mnist_frozen.pb, but it gives the same error. If you are wondering how I managed to get the Python API to work earlier without this approach, it was because I used trt.lite.engine. Thanks for any help provided!
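
For reference, this is roughly the flow I am following (condensed; the logger setup and the conversion call are taken from the user guide, and uff_model is the serialized buffer returned by the converter rather than a path to the .uff file):
[code]
import tensorrt as trt
import uff
from tensorrt.parsers import uffparser

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)

# uff_model is the serialized UFF buffer, not a file path.
uff_model = uff.from_tensorflow_frozen_model(
    "optimized_mobilenet.pb", ["MobilenetV1/Predictions/Reshape_1"])

parser = uffparser.create_uff_parser()
parser.register_input("input", (3, 224, 224), 0)
parser.register_output("MobilenetV1/Predictions/Reshape_1")

engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)
[/code]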

#15
Posted 01/02/2018 10:02 PM   