How to import a UFF model from a UFF file
I don't know how to import a UFF model from a UFF file. I cannot find this information in the user guide.

#1
Posted 10/20/2017 09:06 AM   
Answer Accepted by Original Poster
Hi,

Please check this sample:
/usr/local/lib/python2.7/dist-packages/tensorrt/examples/uff_mnist.py

engine = trt.utils.uff_file_to_trt_engine(G_LOGGER,
                                          MODEL,
                                          parser,
                                          MAX_BATCHSIZE,
                                          MAX_WORKSPACE,
                                          trt.infer.DataType.FLOAT)
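For context, `uff_file_to_trt_engine` expects a UFF parser that already has its input and output nodes registered. Below is a minimal sketch based on the legacy TensorRT 3 Python API; the model path, tensor names, and shape are placeholders for illustration (substitute the values from your own exported network), and the batch/workspace sizes are assumptions:

```python
# Minimal sketch of building an engine from a UFF file (legacy TensorRT 3 API).
# NOTE: the model path, input/output tensor names, and shape below are
# placeholders; substitute the values for your own network.
MODEL = "lenet5.uff"        # hypothetical path to the exported UFF file
MAX_BATCHSIZE = 1           # assumed maximum batch size
MAX_WORKSPACE = 1 << 20     # assumed 1 MiB builder workspace

def build_engine(uff_path=MODEL):
    # Imports are kept inside the function so the constants above remain
    # usable on machines without TensorRT installed.
    import tensorrt as trt
    from tensorrt.parsers import uffparser

    logger = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)
    parser = uffparser.create_uff_parser()
    parser.register_input("Placeholder", (1, 28, 28), 0)  # name/shape are assumptions
    parser.register_output("fc2/Relu")                    # name is an assumption

    # Parses the UFF file, builds, and returns a TensorRT engine.
    return trt.utils.uff_file_to_trt_engine(
        logger, uff_path, parser, MAX_BATCHSIZE, MAX_WORKSPACE,
        trt.infer.DataType.FLOAT)
```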

#2
Posted 10/23/2017 02:15 AM   
I am trying to run the uff_mnist.py example that is provided in the tensorrt package. I am running this on an x86 machine, as the Python APIs are not available on the Jetson yet.

But I am getting the following error. Any idea what is wrong? I have not made any changes to the code.

$ python /usr/local/lib/python2.7/dist-packages/tensorrt/examples/uff_mnist.py /usr/src/tensorrt/data
[TensorRT] INFO: UFFParser: parsing Const_0
[TensorRT] INFO: UFFParser: parsing Const_1
[TensorRT] INFO: UFFParser: parsing Const_2
[TensorRT] INFO: UFFParser: parsing Const_3
[TensorRT] INFO: UFFParser: parsing Const_4
[TensorRT] INFO: UFFParser: parsing Const_5
[TensorRT] INFO: UFFParser: parsing Const_6
[TensorRT] INFO: UFFParser: parsing Const_7
[TensorRT] INFO: UFFParser: parsing Input_0
[TensorRT] INFO: UFFParser: parsing Conv_0
[TensorRT] INFO: UFFParser: parsing Binary_0
[TensorRT] INFO: UFFParser: parsing Activation_0
[TensorRT] INFO: UFFParser: parsing Pool_0
[TensorRT] INFO: UFFParser: parsing Conv_1
[TensorRT] INFO: UFFParser: parsing Binary_1
[TensorRT] INFO: UFFParser: parsing Activation_1
[TensorRT] INFO: UFFParser: parsing Pool_1
[TensorRT] INFO: UFFParser: parsing Const_8
[TensorRT] INFO: UFFParser: parsing Reshape_0
[TensorRT] INFO: UFFParser: parsing FullyConnected_0
[TensorRT] INFO: UFFParser: parsing Binary_2
[TensorRT] INFO: UFFParser: parsing Activation_2
[TensorRT] INFO: UFFParser: parsing FullyConnected_1
[TensorRT] INFO: UFFParser: parsing Binary_3
[TensorRT] INFO: UFFParser: parsing MarkOutput_0
[TensorRT] INFO: Original: 10 layers
[TensorRT] INFO: After dead-layer removal: 10 layers
[TensorRT] INFO: After scale fusion: 10 layers
[TensorRT] INFO: Fusing Binary_0 with activation Activation_0
[TensorRT] INFO: Fusing Binary_1 with activation Activation_1
[TensorRT] INFO: Fusing Binary_2 with activation Activation_2
[TensorRT] INFO: After conv-act fusion: 7 layers
[TensorRT] INFO: After tensor merging: 7 layers
[TensorRT] INFO: After concat removal: 7 layers
[TensorRT] ERROR: cudnnEngine.cpp (55) - Cuda Error in initializeCommonContext: 4
[TensorRT] ERROR: Failed to create engine
File "/usr/local/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 74, in uff_file_to_trt_engine
assert(engine)
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tensorrt/examples/uff_mnist.py", line 126, in <module>
main()
File "/usr/local/lib/python2.7/dist-packages/tensorrt/examples/uff_mnist.py", line 108, in main
trt.infer.DataType.FLOAT)
File "/usr/local/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 82, in uff_file_to_trt_engine
raise AssertionError('UFF parsing failed on line {} in statement {}'.format(line, text))
AssertionError: UFF parsing failed on line 74 in statement assert(engine)
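For what it's worth, the `4` in `Cuda Error in initializeCommonContext: 4` is a CUDA runtime error code. Under the CUDA 8 numbering (later CUDA releases renumbered this enum), the low values map roughly as below; treat this as a best-effort reading of `driver_types.h`, not an authoritative table:

```python
# Best-effort mapping of the low CUDA 8 runtime error codes to their
# cudaError enum names. (The enum was renumbered in later CUDA releases,
# so this table applies to CUDA 8-era toolkits only.)
CUDA8_ERRORS = {
    0: "cudaSuccess",
    1: "cudaErrorMissingConfiguration",
    2: "cudaErrorMemoryAllocation",
    3: "cudaErrorInitializationError",
    4: "cudaErrorLaunchFailure",
}

def describe(code):
    """Return the enum name for a CUDA 8 runtime error code, if known."""
    return CUDA8_ERRORS.get(code, "unknown ({})".format(code))

print(describe(4))  # the code reported by initializeCommonContext above
```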

#3
Posted 12/07/2017 04:19 AM   
Hi,

Do you have a GPU on the x86 machine?
Please also make sure the required driver and CUDA toolkit are properly installed.

You can verify this by running a CUDA sample, for example:
/usr/local/cuda-8.0/bin/cuda-install-samples-8.0.sh .
cd NVIDIA_CUDA-8.0_Samples/1_Utilities/deviceQuery
make
./deviceQuery

Thanks.

#4
Posted 12/07/2017 06:36 AM   
Yes, there is a GPU. Below is the output of the command you suggested.

$ ./deviceQuery
./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1080"
CUDA Driver Version / Runtime Version 8.0 / 8.0
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 8113 MBytes (8506769408 bytes)
(20) Multiprocessors, (128) CUDA Cores/MP: 2560 CUDA Cores
GPU Max Clock rate: 1848 MHz (1.85 GHz)
Memory Clock rate: 5005 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1080
Result = PASS

#5
Posted 12/07/2017 06:42 AM   
Hi,

Sorry for the inconvenience.

Could you check whether it can be reproduced with the TensorRT 3 GA package?

https://developer.nvidia.com/nvidia-tensorrt-download


Thanks

#6
Posted 12/07/2017 07:19 AM   
Same result, unfortunately.

$ python uff_mnist.py TensorRT-3.0.1/python/data
[TensorRT] INFO: UFFParser: parsing Const_0
[TensorRT] INFO: UFFParser: parsing Const_1
[TensorRT] INFO: UFFParser: parsing Const_2
[TensorRT] INFO: UFFParser: parsing Const_3
[TensorRT] INFO: UFFParser: parsing Const_4
[TensorRT] INFO: UFFParser: parsing Const_5
[TensorRT] INFO: UFFParser: parsing Const_6
[TensorRT] INFO: UFFParser: parsing Const_7
[TensorRT] INFO: UFFParser: parsing Input_0
[TensorRT] INFO: UFFParser: parsing Conv_0
[TensorRT] INFO: UFFParser: parsing Binary_0
[TensorRT] INFO: UFFParser: parsing Activation_0
[TensorRT] INFO: UFFParser: parsing Pool_0
[TensorRT] INFO: UFFParser: parsing Conv_1
[TensorRT] INFO: UFFParser: parsing Binary_1
[TensorRT] INFO: UFFParser: parsing Activation_1
[TensorRT] INFO: UFFParser: parsing Pool_1
[TensorRT] INFO: UFFParser: parsing Const_8
[TensorRT] INFO: UFFParser: parsing Reshape_0
[TensorRT] INFO: UFFParser: parsing FullyConnected_0
[TensorRT] INFO: UFFParser: parsing Binary_2
[TensorRT] INFO: UFFParser: parsing Activation_2
[TensorRT] INFO: UFFParser: parsing FullyConnected_1
[TensorRT] INFO: UFFParser: parsing Binary_3
[TensorRT] INFO: UFFParser: parsing MarkOutput_0
[TensorRT] INFO: Original: 10 layers
[TensorRT] INFO: After dead-layer removal: 10 layers
[TensorRT] INFO: After scale fusion: 10 layers
[TensorRT] INFO: Fusing Binary_0 with activation Activation_0
[TensorRT] INFO: Fusing Binary_1 with activation Activation_1
[TensorRT] INFO: Fusing Binary_2 with activation Activation_2
[TensorRT] INFO: After conv-act fusion: 7 layers
[TensorRT] INFO: After tensor merging: 7 layers
[TensorRT] INFO: After concat removal: 7 layers
[TensorRT] ERROR: cudnnEngine.cpp (56) - Cuda Error in initializeCommonContext: 4
[TensorRT] ERROR: cudnnEngine.cpp (56) - Cuda Error in initializeCommonContext: 4
[TensorRT] ERROR: Failed to create engine
File "/usr/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 133, in uff_file_to_trt_engine
assert(engine)
Traceback (most recent call last):
File "uff_mnist.py", line 175, in <module>
main()
File "uff_mnist.py", line 157, in main
trt.infer.DataType.FLOAT)
File "/usr/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 141, in uff_file_to_trt_engine
raise AssertionError('UFF parsing failed on line {} in statement {}'.format(line, text))
AssertionError: UFF parsing failed on line 133 in statement assert(engine)

#7
Posted 12/07/2017 10:40 AM   
Hi,

Could you also share your driver and cuDNN versions?
Thanks

#8
Posted 12/08/2017 06:42 AM   
I have pasted the driver details above. Here is the CUDA toolkit version (note that `nvcc --version` reports the CUDA compiler, not cuDNN):

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61
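Since `nvcc --version` only reports the CUDA toolkit, the cuDNN version has to be read from the `CUDNN_MAJOR`/`CUDNN_MINOR`/`CUDNN_PATCHLEVEL` macros in `cudnn.h` (commonly under `/usr/include` or `/usr/local/cuda/include`, though the path varies by install). A small sketch that extracts them, shown here against an inline sample header rather than a real file:

```python
import re

def cudnn_version(header_text):
    """Extract the cuDNN version string from the contents of cudnn.h."""
    parts = {}
    for name in ("CUDNN_MAJOR", "CUDNN_MINOR", "CUDNN_PATCHLEVEL"):
        m = re.search(r"#define\s+%s\s+(\d+)" % name, header_text)
        parts[name] = m.group(1) if m else "?"
    return "{CUDNN_MAJOR}.{CUDNN_MINOR}.{CUDNN_PATCHLEVEL}".format(**parts)

# Inline sample standing in for /usr/include/cudnn.h (the version numbers
# here are only an example):
sample = """
#define CUDNN_MAJOR      6
#define CUDNN_MINOR      0
#define CUDNN_PATCHLEVEL 21
"""
print(cudnn_version(sample))  # → 6.0.21
```

On a real machine you would pass `open("/usr/include/cudnn.h").read()` (or wherever your install placed the header) instead of the sample string, and report the driver version from `nvidia-smi`.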

#9
Posted 12/08/2017 09:18 AM   