Parsing GridAnchor[Op: _GridAnchor_TRT]. ... /protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_):

WARNING: To create TensorRT plugin nodes, please use the create_plugin_node function instead.
UFF Version 0.6.5
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: 3
      }
      dim {
        size: 640
      }
      dim {
        size: 640
      }
    }
  }
}
]

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
WARNING:tensorflow:From /home/york/anaconda3/envs/tensorRT6.0/lib/python3.7/site-packages/uff/converters/tensorflow/converter.py:179: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

Warning: No conversion function registered for layer: GridAnchor_TRT yet.
Converting GridAnchor as custom op: GridAnchor_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
DEBUG [/home/york/anaconda3/envs/tensorRT6.0/lib/python3.7/site-packages/uff/converters/tensorflow/converter.py:96] Marking ['NMS'] as outputs
No. nodes: 515
UFF Output written to /media/york/F/GitHub/tensorflow/train_model/ssd_mobilenet_v2_coco_focal_loss_trafficlight/export_train_bdd100k_baidu_truck_zl004_class4_wh640640_maxscale4_trainval299287_expansion_layer3_cosine_reducebox_v2_step320000-733/frozen_inference_graph.uff
UFF Text Output written to /media/york/F/GitHub/tensorflow/train_model/ssd_mobilenet_v2_coco_focal_loss_trafficlight/export_train_bdd100k_baidu_truck_zl004_class4_wh640640_maxscale4_trainval299287_expansion_layer3_cosine_reducebox_v2_step320000-733/frozen_inference_graph.pbtxt
[TensorRT] VERBOSE: Plugin Creator registration succeeded - GridAnchor_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - NMS_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - Reorg_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - Region_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - Clip_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - LReLU_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - PriorBox_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - Normalize_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - RPROI_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - BatchedNMS_TRT
[TensorRT] VERBOSE: Plugin Creator registration succeeded - FlattenConcat_TRT
TensorRT inference engine settings:

  • Inference precision - DataType.FLOAT
  • Max batch size - 1

[TensorRT] VERBOSE: UFFParser: BoxPredictor_2/Reshape/shape/3 →
[TensorRT] VERBOSE: UFFParser: Applying order forwarding to: BoxPredictor_2/Reshape/shape/3
[TensorRT] VERBOSE: UFFParser: Parsing BoxPredictor_2/Reshape/shape[Op: Stack]. Inputs: BoxPredictor_2/strided_slice, BoxPredictor_2/Reshape/shape/1, BoxPredictor_2/Reshape/shape/2, BoxPredictor_2/Reshape/shape/3
[TensorRT] VERBOSE: UFFParser: Applying order forwarding to: BoxPredictor_2/Reshape/shape
[TensorRT] VERBOSE: UFFParser: Parsing BoxPredictor_2/Reshape[Op: Reshape]. Inputs: BoxPredictor_2/BoxEncodingPredictor/BiasAdd, BoxPredictor_2/Reshape/shape
[TensorRT] VERBOSE: UFFParser: BoxPredictor_2/Reshape → [2400,1,4]
[TensorRT] VERBOSE: UFFParser: Applying order forwarding to: BoxPredictor_2/Reshape
[TensorRT] VERBOSE: UFFParser: Parsing concat_box_conf[Op: FlattenConcat_TRT]. Inputs: BoxPredictor_0/Reshape, BoxPredictor_1/Reshape, BoxPredictor_2/Reshape
[TensorRT] VERBOSE: UFFParser: Parsing Squeeze[Op: Squeeze]. Inputs: concat_box_conf
[TensorRT] VERBOSE: UFFParser: Squeeze → [124800,1,1]
[TensorRT] VERBOSE: UFFParser: Applying order forwarding to: Squeeze
[TensorRT] VERBOSE: UFFParser: Parsing GridAnchor[Op: GridAnchor_TRT].
[libprotobuf FATAL /externals/protobuf/x86_64/10.0/include/google/protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_):
Traceback (most recent call last):
  File "/media/york/F/GitHub/tensorflow/models/research/auto_driving/uff_ssd-TensorRT-6.0.1.5/detect_objects_trafficlight_trt6.0.py", line 266, in <module>
    main()
  File "/media/york/F/GitHub/tensorflow/models/research/auto_driving/uff_ssd-TensorRT-6.0.1.5/detect_objects_trafficlight_trt6.0.py", line 240, in main
    batch_size=args.max_batch_size)
  File "/media/york/F/GitHub/tensorflow/models/research/auto_driving/uff_ssd-TensorRT-6.0.1.5/utils/inference.py", line 116, in __init__
    batch_size=batch_size)
  File "/media/york/F/GitHub/tensorflow/models/research/auto_driving/uff_ssd-TensorRT-6.0.1.5/utils/engine.py", line 125, in build_engine
    parser.parse(uff_model_path, network)
RuntimeError: CHECK failed: (index) < (current_size_):

Process finished with exit code 1

My conf.py is like this:

Input = gs.create_plugin_node(name="Input",
                              op="Placeholder",
                              dtype=tf.float32,
                              shape=[1, channels, height, width])
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
                                 minSize=0.05,
                                 maxSize=0.2,
                                 aspectRatios=[1.0, 3.0, 0.33, 2.0, 0.5],
                                 variance=[0.1, 0.1, 0.2, 0.2],
                                 # featureMapShapes=[19, 10, 5, 3, 2, 1],
                                 featureMapShapes=[80, 40, 20],
                                 numLayers=3)
NMS = gs.create_plugin_node(
    name="NMS",
    op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=5,
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1
)
concat_priorbox = gs.create_node(
    "concat_priorbox",
    op="ConcatV2",
    dtype=tf.float32,
    axis=2
)
concat_box_loc = gs.create_plugin_node(
    "concat_box_loc",
    op="FlattenConcat_TRT",
    dtype=tf.float32,
    axis=1,
    ignoreBatch=0
)
concat_box_conf = gs.create_plugin_node(
    "concat_box_conf",
    op="FlattenConcat_TRT",
    dtype=tf.float32,
    axis=1,
    ignoreBatch=0
)
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf
}
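
For context, a minimal sketch of the conversion step that consumes a plugin map like the one above (the paths are placeholders; the actual uff_ssd sample wraps this logic in its utils/model.py):

import graphsurgeon as gs
import uff

# Load the frozen TensorFlow graph and fold its namespaces into the
# TensorRT plugin nodes defined in conf.py.
dynamic_graph = gs.DynamicGraph("frozen_inference_graph.pb")
dynamic_graph.collapse_namespaces(namespace_plugin_map)

# Serialize to UFF, marking NMS as the output node.
uff.from_tensorflow(dynamic_graph.as_graph_def(),
                    output_nodes=["NMS"],
                    output_filename="frozen_inference_graph.uff",
                    text=True)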

What's the problem? I'm converting a TensorFlow detection model to TensorRT. Can anyone help? My objects are traffic lights, with 4 classes.

Hi,

I think this GitHub issue might have some useful info / suggestions for you to try: https://github.com/NVIDIA/TensorRT/issues/26

Thanks!

I think I have solved my problem.

The method is as follows:

1. Modify the function "create_ssd_anchors()" in the file "multiple_grid_anchor_generator.py" of the TensorFlow object detection API (latest):

  if base_anchor_size is None:
    base_anchor_size = [1.0, 1.0]
  base_anchor_size = tf.constant(base_anchor_size, dtype=tf.float32)  ### added by yourself

2. A trick: when transforming the TensorFlow model to a TensorRT model, using trt.Logger.VERBOSE rather than trt.Logger.WARNING may provide more information, like this:

  # TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
  TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

The root cause: Google's object detection API has been updated, but the input parameters of NVIDIA TensorRT's GridAnchor_TRT plugin no longer match the exported graph.
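
To see the mismatch for yourself, a small inspection sketch (assuming the namespace_plugin_map from the conf.py above; the graph path is a placeholder):

import graphsurgeon as gs

graph = gs.DynamicGraph("frozen_inference_graph.pb")
graph.collapse_namespaces(namespace_plugin_map)
for node in graph.find_nodes_by_op("GridAnchor_TRT"):
    # With newer OD API exports the collapsed node can end up with no inputs,
    # which is what trips the repeated_field.h CHECK in the UFF parser.
    print(node.name, "inputs:", list(node.input))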

Using the old OD API is a temporary workaround, not recommended.

NVIDIA's sample (uff_ssd) is old, and rather ugly.

AastaNV's work is a good reference:

https://github.com/AastaNV/TRT_object_detection/blob/master/config/model_ssd_mobilenet_v2_coco_2018_03_29.py

@yorkleesiat thank you. I was struggling a bit with this.

Can confirm your solution works with:
Python 3.6 inside a Conda env
TensorRT 7.0
TensorFlow 1.14.0
CUDA 10.0
Latest TensorFlow Object Detection API
Model: ssd_mobilenet_v2_coco

@yorkleesiat thank you.
But it doesn't work for me…
Train and export environment:
win10 64bit
Python 3.6 inside a Conda virtual env
TensorFlow 1.14.0
CUDA 10.0
latest object detection API
ssd_mobilenet_v1_coco
trained with tensorflow/models/research/object_detection/legacy/train.py

Jetson Nano:
JetPack 4.3
TensorRT 6.0
UFF 0.6.5
TensorFlow 1.14.0

The official ssd_mobilenet_v1_coco model converts and works fine on my Nano, but my custom model can't be converted…

Any help will be appreciated.

All: follow my suggestion:

1. Modify the function "create_ssd_anchors()" in the file "multiple_grid_anchor_generator.py" of the TensorFlow object detection API (latest):

  if base_anchor_size is None:
    base_anchor_size = [1.0, 1.0]
  base_anchor_size = tf.constant(base_anchor_size, dtype=tf.float32)  ### added by yourself

2. A trick: when transforming the TensorFlow model to a TensorRT model, using trt.Logger.VERBOSE rather than trt.Logger.WARNING may provide more information, like this:

  # TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
  TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

I’m trying to train a custom ssd_mobilenet_v2 network. I’m using Google Colab to do the training, and I can see the model works fine in my notebook: https://github.com/brianegge/tensorrt_demos/blob/master/garbage_bin.ipynb

The challenge, as many have noted, is getting a custom model to run on the Jetson Nano. My first issue was FusedBatchNormV3, which I worked around by using an older TensorFlow on Colab. I tried going back to 1.12.0 or 1.13.1, but had trouble getting them set up:

# [TensorRT] ERROR: UffParser: Validator error: FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/BatchNorm/FusedBatchNormV3: Unsupported operation _FusedBatchNormV3
!pip install tensorflow==1.14.0 tensorflow-gpu==1.14.0
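
(Another workaround floating around for the FusedBatchNormV3 validator error is to rewrite the op in the frozen graph before UFF conversion; a sketch, untested here, with a placeholder path:)

import graphsurgeon as gs

graph = gs.DynamicGraph("frozen_inference_graph.pb")
# Downgrade the op name so the UFF validator sees the supported variant,
# then convert graph.as_graph_def() with uff.from_tensorflow as usual.
for node in graph.find_nodes_by_op("FusedBatchNormV3"):
    node.op = "FusedBatchNorm"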

Now I’m stuck trying to resolve

RuntimeError: CHECK failed: (index) < (current_size_):

My Jetson is as follows:

$ jetson_release
 - NVIDIA Jetson NANO/TX1
   * Jetpack 4.3 [L4T 32.3.1]
   * CUDA GPU architecture 5.3
   * NV Power Mode: MAXN - Type: 0
 - Libraries:
   * CUDA 10.0.326
   * cuDNN 7.6.3.28-1+cuda10.0
   * TensorRT 6.0.1.10-1+cuda10.0
   * Visionworks 1.6.0.500n
   * OpenCV 4.1.1 compiled CUDA: NO
 - Jetson Performance: inactive

I patched the TensorFlow models repo on Colab, which didn't make any impact:

diff --git a/research/object_detection/anchor_generators/multiple_grid_anchor_generator.py b/research/object_detection/anchor_generators/multiple_grid_anchor_generator.py
index 86007c99..aa14cea1 100644
--- a/research/object_detection/anchor_generators/multiple_grid_anchor_generator.py
+++ b/research/object_detection/anchor_generators/multiple_grid_anchor_generator.py
@@ -313,6 +313,7 @@ def create_ssd_anchors(num_layers=6,
   """
   if base_anchor_size is None:
     base_anchor_size = [1.0, 1.0]
+  base_anchor_size = tf.constant(base_anchor_size, dtype=tf.float32)
   box_specs_list = []
   if scales is None or not scales:
     scales = [min_scale + (max_scale - min_scale) * i / (num_layers - 1)
--
2.17.1

I tried checking out the models repo at

!git checkout ae0a9409212d0072938fa60c9f85740bb89ced7e

as others have suggested, but that release only contains the ssd_mobilenet_v1 network.

I made the TRT_LOGGER verbose; it warns me that my Jetson should be using TensorFlow 1.14.0:

NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
...
[TensorRT] VERBOSE: UFFParser: Squeeze -> [7668,1,1]
[TensorRT] VERBOSE: UFFParser: Applying order forwarding to: Squeeze
[TensorRT] VERBOSE: UFFParser: Parsing MultipleGridAnchorGenerator[Op: _GridAnchor_TRT].
[libprotobuf FATAL /externals/protobuf/aarch64/10.0/include/google/protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_):

I have not set up TensorBoard, nor do I know enough to compare the stock mobilenet model with my custom one.

The NVIDIA page says TensorRT 7.0.0 is available, but it doesn't seem like they've compiled it for arm64 yet.

@brianegge from what I remember, that error comes from wrong config.py settings or model config settings.

Try modifying num_classes inside the model .config ( ssd_mobilenet_v2_coco.config ).
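
As a quick sanity check (a sketch; the numbers are just the values from the conf.py earlier in this thread):

# numClasses in the NMS_TRT plugin counts the background class, so it must be
# num_classes from the training config plus one.
num_classes_training = 4   # num_classes in the model .config
num_classes_nms = 5        # numClasses passed to gs.create_plugin_node for NMS
assert num_classes_nms == num_classes_training + 1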

Also I remember having problems with https://github.com/tensorflow/models/tree/master/research/object_detection/anchor_generators and replaced the files inside with ones from older commits, like https://github.com/tensorflow/models/tree/v1.12.0/research/object_detection/anchor_generators.

[libprotobuf FATAL /externals/protobuf/x86_64/10.0/include/google/protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_):
terminate called after throwing an instance of 'google_private::protobuf::FatalException'
what(): CHECK failed: (index) < (current_size_):

Hello, I met the same error as you. I converted the official ssd_mobilenet_v1_coco model and it works well, but when I convert my custom trained ssd_mobilenet_v1 model, it reports the error above. numClasses has been revised to custom classes + 1.
Have you solved this error?

Just a heads up: the tabs don't show in the post, so the "### added by yourself" part is not inside the if block. Hope this helps future readers.

What is the value of base_anchor_size in ssd_mobilenet_v1?

Guys, I have used this version of the TensorFlow object detection API, and the problems have been solved (GitHub - tensorflow/models at 6518c1c7711ef1fdbe925b3c5c71e62910374e3e).

PS: I was inspired by jkjung's excellent work in https://github.com/jkjung-avt/hand-detection-tutorial and Training a Hand Detector with TensorFlow Object Detection API.

Use this in order to make sure the conversion will work:

pip install -U --pre tensorflow-gpu=="1.14.0"
git clone https://github.com/tensorflow/models --single-branch
cd models
git reset 59f7e80ac8ad54913663a4b63ddf5a3db3689648 -- research/object_detection/anchor_generators/
git checkout -- research/object_detection/anchor_generators/

In order to create the UFF you need to use the same version of TensorFlow (1.14.0) to generate the frozen graph.
After you create the UFF, build the TensorRT engine on the Jetson, because it doesn't work properly if it's built on a PC.
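
For reference, a minimal engine-build sketch along these lines (TensorRT 6/7 Python API; the input/output names follow the conf.py earlier in the thread, and the paths are placeholders):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")  # registers GridAnchor_TRT, NMS_TRT, ...

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 28  # 256 MB is a guess; size it to your board
    parser.register_input("Input", (3, 640, 640))
    parser.register_output("NMS")
    parser.parse("frozen_inference_graph.uff", network)
    engine = builder.build_cuda_engine(network)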

Make sure you have the correct number of classes in every file and config you use, and that the channel order is correct inside the UFF conversion script.

You can also use TensorFlow 1.14.0 to freeze models built with TensorFlow 2.0+ (I didn't test this with SSD Mobilenet, only with a simpler model) and then use the UFF converter.

I'm glad the above works, but wouldn't it be better to have a Jetson fork of the models, or to get the patches applied to the repo?

@brianegge what change solved your issue? I’ve tried modifying the line in anchor_generator and I’m still stuck with the same error.

I switched to ssd_mobilenet_v1_coco, applied the patch, and then it worked. Maybe it was switching to v1 which fixed it, or maybe redoing everything from the beginning fixed the issue. The notebook in which it's working is here: garbage_bin/garbage_bin.ipynb at master · brianegge/garbage_bin · GitHub, which has a step to apply the patch to the models after checkout.

Thanks for the quick reply. I’m trying to use the inceptionv2 architecture, as accuracy is more important than speed, but I might have to give up on that and try mobilenet.

@yorkleesiat it works. Thanks!

I managed to make things work by changing the config.py from this site, https://github.com/AastaNV/TRT_object_detection/blob/master/config/model_ssd_mobilenet_v2_coco_2018_03_29.py, adding the following at the end of the file:

# Create a constant tensor and set it as the input for GridAnchor_TRT.
data = np.array([1, 1], dtype=np.float32)
anchor_input = gs.create_node("AnchorInput", "Const", value=data)
graph.append(anchor_input)
graph.find_nodes_by_op("GridAnchor_TRT")[0].input.insert(0, "AnchorInput")

return graph
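
(My guess at why this works: the UFF parser reads the GridAnchor node's first input, and with newer OD API exports the collapsed node has no inputs left, so indexing element 0 of an empty repeated field is exactly the CHECK failed: (index) < (current_size_) seen above. Treat that as inference from the symptoms, not an official explanation.)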

The following tutorial helped a lot:

https://www.minds.ai/post/deploying-ssd-mobilenet-v2-on-the-nvidia-jetson-and-nano-platforms

Also, don't forget to change /usr/lib/python3.6/dist-packages/graphsurgeon/node_manipulation.py:

def create_node(name, op=None, trt_plugin=False, **kwargs):
    if not trt_plugin:
        print("WARNING: To create TensorRT plugin nodes, please use the create_plugin_node function instead.")
    node = tf.NodeDef()
    node.name = name
    node.op = op if op else name
    node.attr["dtype"].type = 1
    for key, val in kwargs.items():
        if key == "dtype":
            node.attr["dtype"].type = val.as_datatype_enum
    return update_node(node, name, op, trt_plugin, **kwargs)


This works with ssd mobilenet v2 models.
Is there any workaround for the ssd_inception_v2_coco_2018_01_28 model?

The TensorRT engine works perfectly with ssd_inception_v2_coco_2017_11_17 but not with the other model.

What is the problem?

I am using TensorFlow 1.13.0 on my Jetson Nano.