[TensorRT] ERROR: Could not register plugin creator: FlattenConcat_TRT in namespace:

Hello, I am seeing the error below when running the python uff_ssd sample

[TensorRT] ERROR: Could not register plugin creator: FlattenConcat_TRT in namespace

The sample creates a plugin called FlattenConcat_TRT. Is the error due to the fact that the example registers the plugin with the same name as an existing plugin?

I have compiled from source using the 6.0 branch in the NVIDIA/TensorRT github repo, and am using the 6.0.1.5 tarfile.

Note: I don’t get the error when using the TensorRT 5.1.3.2 package

GPU type - Tesla V100
Nvidia driver version - 418.87.0
CUDA version - 10.1
CUDNN version - 7.6.3
Python version - python 3.6
Tensorflow version - 1.15
TensorRT version - 6.0.1.5
OS version - RHEL 7.6
platform - ppc64le

Thanks

Hi rjknight123,

This is a known issue and will be fixed in a future version. If you need a workaround for now just to see the sample complete, this should work:

  1. Clean up old artifacts if any:
rm -r tensorrt/samples/python/uff_ssd/workspace/
  2. Remove this block of code from detect_objects.py:
...
    try:
        ctypes.CDLL(PATHS.get_flatten_concat_plugin_path())
    except:
        print(
            "Error: {}\n{}\n{}".format(
                "Could not find {}".format(PATHS.get_flatten_concat_plugin_path()),
                "Make sure you have compiled FlattenConcat custom plugin layer",
                "For more details, check README.md"
            )
        )
        sys.exit(1)

(You can also remove all other lines mentioning flatten_concat in detect_objects.py, but that is not necessary just to run the sample.)

  3. Add the axis/ignoreBatch params to the flatten_concat plugin nodes in utils/model.py like below:
...
    concat_box_loc = gs.create_plugin_node(
        "concat_box_loc",
        op="FlattenConcat_TRT",
        dtype=tf.float32,
        axis=1,
        ignoreBatch=0
    )
    concat_box_conf = gs.create_plugin_node(
        "concat_box_conf",
        op="FlattenConcat_TRT",
        dtype=tf.float32,
        axis=1,
        ignoreBatch=0
    )
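In case it helps to see what those extra kwargs accomplish, here is a rough pure-Python sketch (not the real graphsurgeon API; make_plugin_node is a hypothetical stand-in for gs.create_plugin_node) of the idea: each extra keyword becomes a node attribute, and the UFF converter serializes those attributes so the FlattenConcat_TRT creator can read axis and ignoreBatch at engine-build time.

```python
# Rough sketch only: gs.create_plugin_node keeps the extra keyword
# arguments as node attributes; the UFF converter serializes them so
# the plugin creator can pick them up as plugin fields at build time.
def make_plugin_node(name, op, **attrs):
    """Hypothetical stand-in for gs.create_plugin_node."""
    return {"name": name, "op": op, "attrs": attrs}

concat_box_loc = make_plugin_node(
    "concat_box_loc",
    op="FlattenConcat_TRT",
    axis=1,
    ignoreBatch=0,
)

# The FlattenConcat_TRT creator expects both fields to be present:
assert concat_box_loc["attrs"]["axis"] == 1
assert concat_box_loc["attrs"]["ignoreBatch"] == 0
```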

Sincerely,
NVIDIA Enterprise Support

Thanks for the quick reply.

I tried your suggestions and I no longer see the error. However, I’m not certain my compiled plugin is being run by the sample code: I added some print statements to it to monitor progress, but nothing is printed when the sample runs.

If I instead run the sample with no changes other than adding debug statements to the plugin code, it seems to run, and I see both my debug statements and the error message. So the plugin code appears to be doing the right thing, and perhaps the error message is spurious in this case(?)

I started using the following sample code to help debug the error message:

import ctypes
import tensorrt as trt

# initialize
TRT_LOGGER = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(TRT_LOGGER, '')
runtime = trt.Runtime(TRT_LOGGER)

PLUGIN_CREATORS = trt.get_plugin_registry().plugin_creator_list

for plugin_creator in PLUGIN_CREATORS:
    print(plugin_creator.name)

Running that code, I can see there is already a plugin named FlattenConcat_TRT:

(test-me) [builder@64eafe53afa3 uff_ssd]$ python test-plugin.py 
RnRes2Br1Br2c_TRT
RnRes2Br2bBr2c_TRT
SingleStepLSTMPlugin
FancyActivation
ResizeNearest
Split
InstanceNormalization
GridAnchor_TRT
NMS_TRT
Reorg_TRT
Region_TRT
Clip_TRT
LReLU_TRT
PriorBox_TRT
Normalize_TRT
RPROI_TRT
BatchedNMS_TRT
FlattenConcat_TRT

If I add the following to the test script just before creating the TRT_LOGGER:

ctypes.CDLL("build/libflattenconcat.so")

I see the error, and I also see the print statements from my compiled plugin:

(test-me) [builder@64eafe53afa3 uff_ssd]$ python test-plugin.py 
enter FlattenConcatPluginCreator  --> My debug printf
enter getPluginNamespace          --> My debug printf
[TensorRT] ERROR: Could not register plugin creator:  FlattenConcat_TRT in namespace:

I decided to try renaming the plugin and that seemed to work.

In plugin/FlattenConcat.cpp, change the name of the plugin to _FlattenConcat_TRT:

namespace
{
const char* FLATTENCONCAT_PLUGIN_VERSION{"1"};
const char* FLATTENCONCAT_PLUGIN_NAME{"FlattenConcat_TRT"};  --> change to _FlattenConcat_TRT
}

In utils/model.py, also change the plugin op to _FlattenConcat_TRT and add the axis/ignoreBatch params:

    concat_box_loc = gs.create_plugin_node(
        "concat_box_loc",
        op="_FlattenConcat_TRT",
        dtype=tf.float32,
        axis=1,
        ignoreBatch=0
    )
    concat_box_conf = gs.create_plugin_node(
        "concat_box_conf",
        op="_FlattenConcat_TRT",
        dtype=tf.float32,
        axis=1,
        ignoreBatch=0
    )

I no longer see the error and can see the debug statements I added to the plugin code.
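That behavior would be consistent with the registry refusing a second creator whose name (and version/namespace) matches one that is already registered: loading the .so triggers the registration attempt, but the built-in FlattenConcat_TRT is already there. A quick way to reason about it is to diff the creator names before and after the ctypes.CDLL call. The helper below is a hypothetical sketch of that comparison; in a live session the name lists would come from [c.name for c in trt.get_plugin_registry().plugin_creator_list].

```python
def find_collisions(registered_names, new_names):
    """Names a custom library would try to register that already exist
    in the plugin registry (same namespace assumed)."""
    existing = set(registered_names)
    return sorted(n for n in new_names if n in existing)

# A few creator names printed by test-plugin.py before loading the .so
builtin = ["GridAnchor_TRT", "NMS_TRT", "PriorBox_TRT", "FlattenConcat_TRT"]

# build/libflattenconcat.so registers a single creator
print(find_collisions(builtin, ["FlattenConcat_TRT"]))    # ['FlattenConcat_TRT']

# After renaming the plugin there is nothing to collide with
print(find_collisions(builtin, ["_FlattenConcat_TRT"]))   # []
```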

I’m now wondering: is renaming the plugin a valid solution, or should I be able to override the existing FlattenConcat_TRT plugin and not see the ERROR message?

Thank you.