Deploying a custom object detection model trained with TensorFlow 1.14

Hi everyone,

I trained a custom object detection model (ssd_mobilenet_v2) with TensorFlow 1.14. I want to deploy it on the Nano, which has the TensorFlow 1.13 build for Nano installed. The graph is not compatible and prediction fails, so I installed TensorFlow 1.14 on the Nano, but now I am getting the following error:
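
For context, this is roughly how I load the frozen graph and run a prediction on the Nano; treat it as a sketch, since the path is a placeholder and the tensor names are the standard Object Detection API export names:

import numpy as np
import tensorflow as tf

# Placeholder path; adjust for your own export.
PATH_TO_FROZEN_GRAPH = 'frozen_inference_graph.pb'

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # dummy input frame
    # Standard Object Detection API output tensor names.
    boxes, scores, classes = sess.run(
        ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0'],
        feed_dict={'image_tensor:0': image})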

2019-07-29 09:49:47.926014: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-07-29 09:49:47.926093: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-07-29 09:49:47.926205: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-07-29 09:49:47.927832: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-07-29 09:49:47.927966: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0 
2019-07-29 09:49:47.927992: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N 
2019-07-29 09:49:47.928260: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-07-29 09:49:47.928519: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:973] ARM64 does not support NUMA - returning NUMA node zero
2019-07-29 09:49:47.928660: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1761 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)


2019-07-29 09:59:14.916523: F tensorflow/stream_executor/cuda/cuda_driver.cc:175] Check failed: err == cudaSuccess || err == cudaErrorInvalidValue Unexpected CUDA error: unknown error

Now I am thinking of converting the graph with TensorRT, but that is only compatible with TensorFlow 1.12.
Any ideas on how to solve this?
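
For reference, the TF-TRT conversion I had in mind looks roughly like this; it's a sketch assuming the standard Object Detection API output node names, not something I have working yet:

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Load the frozen graph exported by the Object Detection API.
frozen_graph = tf.GraphDef()
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    frozen_graph.ParseFromString(f.read())

# Replace TensorRT-compatible subgraphs with TRT engines.
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=['detection_boxes', 'detection_scores',
             'detection_classes', 'num_detections'],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16')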

Hi,

There are dependencies between TensorFlow and CUDA.
You need to use the same CUDA driver/toolkit version that the package was built with.
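
As a quick sanity check, you can confirm from Python which wheel is installed, whether it was built with CUDA, and whether the GPU can actually be initialized:

import tensorflow as tf

print(tf.__version__)                # installed wheel version
print(tf.test.is_built_with_cuda())  # True if the wheel was built against CUDA
print(tf.test.is_gpu_available())    # True only if the GPU initializes successfully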

Our latest release for JetPack 4.2 is v1.13.1. Version 1.14 will be supported in our next release.

For now, you can build it on Jetson Nano with the instructions here:
https://devtalk.nvidia.com/default/topic/1055131/jetson-agx-xavier/building-tensorflow-1-13-on-jetson-xavier/

Thanks.

Thanks for the support! Is there a date for the next JetPack release?
The link you sent shows how to build TensorFlow 1.13.1 on the Jetson Xavier; is there any feedback on doing this with TensorFlow 1.14 on the Nano?

Thanks,

Hi,

The steps should be similar, since the Nano and Xavier use the same image.
The only difference is the GPU architecture. Please remember to add the Nano's GPU compute capability when building the library:

Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 3.5,7.0]: 5.3,7.2
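
If you want to double-check the compute capability directly on the board rather than from the webpage, something like this should print it as part of the GPU device description:

from tensorflow.python.client import device_lib

# The GPU entry's description ends with e.g. "compute capability: 5.3".
for d in device_lib.list_local_devices():
    print(d.name, d.physical_device_desc)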

Thanks.

OK, thanks for the support!