Locations and classes are correct, but confidences are wrong with an SSD net using TensorRT 3.0

I created an SSD net using TensorRT with plugin layers such as Permute, Flatten, Reshape, and so on. When I run the net, it produces the following result:

DetectionOutput:
0.000000 8.000000 0.208351 169.028671 26.303576 347.014343 356.516602 
0.000000 18.000000 0.040366 93.082977 35.931602 414.930786 343.525085 
0.000000 12.000000 0.014555 172.522110 31.663971 352.858398 360.000000 
0.000000 12.000000 0.011393 173.332123 286.391479 241.614105 350.504333 
0.000000 -1.000000 0.000000 0.000000 0.000000 0.000000 0.000000

I ran the same SSD net with the same prototxt, model file, and image using Caffe, and it produced this result:

/home/nvidia/workspace/caffe_ssd/caffe/examples/images/cat.jpg 8 0.999429 169 26 347 356
/home/nvidia/workspace/caffe_ssd/caffe/examples/images/cat.jpg 17 0.0102249 296 303 330 345

We can see that the location for class 8 is the same while the confidence is different. I'm not sure what causes the difference.
I also have some questions. I create the Normalize layer as:

else if (!strcmp(layerName, "conv4_3_norm"))
{
    scales.type = nvinfer1::DataType::kFLOAT; // kHALF;
    static float values[512];                 // static, in case the plugin keeps the pointer
    for (int i = 0; i < 512; i++)
        values[i] = 20;
    scales.values = values;                   // was "values_tmp", which doesn't exist here
    scales.count = 512;
    bool acrossSpatial = false;
    bool channelShared = false;
    float eps = 0.0000000001f;

    assert(mPluginNormalize == nullptr);
//  assert(nbWeights == 0 && weights == nullptr);
    mPluginNormalize = std::unique_ptr<INvPlugin, decltype(nvPluginDeleter)>(
        createSSDNormalizePlugin((const Weights *)&scales, acrossSpatial, channelShared, eps),
        nvPluginDeleter);
    return mPluginNormalize.get();
}

If I change scales.values to 13 or 1, the result is a bit different, but not very different. I think scales.values should probably be read from the model file, and a value such as 20 is just the initialization value. Will this difference have a great impact?
Could someone give me some suggestions? Thank you in advance!

Hi,

Do you use a Jetson platform?

For debugging plugin implementation, it’s recommended to compare the results layer by layer to narrow down the error.

createSSDNormalizePlugin is not in our TensorRT documentation and is also not in our support scope.
It may still be under development, and we don't have information we can share with you.

Thanks and sorry for the inconvenience.

Hi, AastaLLL, thanks for your reply. I still have a question. The plugin layer createSSDNormalizePlugin is declared in NvInferPlugin.h as follows:

    /*
     * \brief The Normalize plugin layer normalizes the input to have L2 norm of 1 with a learnable scale.
     * \param scales scale weights that are applied to the output tensor
     * \param acrossSpatial whether to compute the norm over adjacent channels (acrossSpatial is true) or nearby spatial locations (within a channel, in which case acrossSpatial is false)
     * \param channelShared whether the scale weight(s) is shared across channels
     * \param eps epsilon for not dividing by zero
     */
    INvPlugin * createSSDNormalizePlugin(const Weights *scales, bool acrossSpatial, bool channelShared, float eps);
    INvPlugin * createSSDNormalizePlugin(const void * data, size_t length);

Does it support the SSD layer whose type is “Normalize”?

Hi, AastaLLL, I compared the results of each layer. I found that the input data of the mbox_conf_reshape layer is the same between my TensorRT net and the caffe-ssd net, while the input data of the mbox_conf_flatten layer is different between the two nets. The layer connections are as follows:

… ——> mbox_conf ——> mbox_conf_reshape ——> mbox_conf_softmax ——> mbox_conf_flatten ——> …

I suspect there may be some difference in the softmax layer between the two nets. I can't get the input data of the softmax layer, because the softmax layer is not my plugin layer and it is packaged up. So I need some advice.

Hi,

createSSDNormalizePlugin is not in our official documentation and is also not in our support scope.
We are sorry that there is no extra information we can share with you.

Thanks and sorry for the inconvenience.

Hi, AastaLLL, thanks for your reply. I still have a question about the softmax layer. I compared the results of each layer. I found that the input data of the mbox_conf_reshape layer is the same between my TensorRT net and the caffe-ssd net, while the input data of the mbox_conf_flatten layer is different between the two nets. The layer connections are as follows:

… ——> mbox_conf ——> mbox_conf_reshape ——> mbox_conf_softmax ——> mbox_conf_flatten ——> …

I suspect there may be some difference in the softmax layer between the two nets. I can't get the input data of the softmax layer, because the softmax layer is not my plugin layer and it is packaged up. So I need some advice.

Hi,

Could you share more information about the softmax layer you mentioned here?
Is it the general softmax layer or a particular layer in SSD-plugins?

Thanks.

https://github.com/saikumarGadde/tensorrt-ssd-easy

I referenced it, and the TensorRT SSD runs successfully.

Hi,

Thanks for your information.
We will share it with the user.