TensorRT 1.0 fails on SqueezeNet

SqueezeNet is a family of models that achieve AlexNet-level accuracy on the ImageNet validation set with 50 times fewer parameters: https://github.com/DeepScale/SqueezeNet.
Since many people are wondering how to improve network designs, it is really interesting to benchmark AlexNet against SqueezeNet.

Unfortunately, TensorRT 1.0 fails to parse the SqueezeNet 1.0 and SqueezeNet 1.1 models with exactly the same message:

Parameter check failed in addPooling, condition: windowSize.h > 0 && windowSize.w > 0 && windowSize.h*windowSize.w < MAX_KERNEL_DIMS_PRODUCT
error parsing layer type Pooling index 64

The same models work fine through Caffe. Here are the links to the deploy.prototxt files:

https://github.com/DeepScale/SqueezeNet/blob/master/SqueezeNet_v1.0/deploy.prototxt
https://raw.githubusercontent.com/DeepScale/SqueezeNet/9d981310f66e5285083123cba364b3efa4a6ff55/SqueezeNet_v1.1/deploy.prototxt

[s]Assuming the error message is accurate, these models exceed the maximum pooling window size supported by TensorRT.

This size restriction may be “soft” (a somewhat arbitrary, relatively easy-to-change limit that keeps resource use in check) or “hard” (a fundamental limit of the underlying algorithm, perhaps due to numeric representation), but either way it would probably be advisable to file an enhancement request with NVIDIA to increase the maximum supported window size.[/s]

The TensorRT Caffe parser doesn’t support global pooling, so it just takes the H and W window parameters from the network definition, and those default to 0.
The API check is complaining that there isn’t a valid pooling window defined.
If you replace the global pooling with an explicitly defined window, TensorRT should work.
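
For concreteness, here is a minimal sketch of that edit on the final pooling layer (pool10) of the SqueezeNet deploy.prototxt. The layer and blob names are taken from the published file; the window values below are placeholders, since the correct ones are worked out later in this thread:

layer {
  name: "pool10"
  type: "Pooling"
  bottom: "conv10"
  top: "pool10"
  pooling_param {
    pool: AVE
    # global_pooling: true  <- removed: the parser ignores it, leaving a 0x0 window
    kernel_size: 13  # placeholder: must equal the H/W of the conv10 output map
    stride: 1        # placeholder: see the correct values later in this thread
  }
}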

Thanks for the correction, mfatica. I didn’t consider that it might be the first part of the error check that triggers here.

mfatica, thanks for your prompt answer! I’ve found a similar issue someone hit when converting these models to TensorFlow ([url]https://github.com/ethereon/caffe-tensorflow/issues/53[/url]). They suggest using { kernel_size: 13, stride: 1 }, and these settings make the TensorRT parser happy.

Unfortunately, the predictions are then way off (i.e. 1000 mispredictions on 1000 images). Do you by any chance know what the right parameters should be?

By the way, I’m also tracking this issue at the SqueezeNet ([url]https://github.com/DeepScale/SqueezeNet/issues/30[/url]) and at the CK-TensorRT ([url]https://github.com/dividiti/ck-tensorrt/issues/3[/url]) GitHub issue trackers.

Many thanks,
Anton.

With many thanks to Maxim Milakov from NVIDIA, the right parameters are { kernel_size: 15, stride: 15 } for SqueezeNet 1.0 and { kernel_size: 14, stride: 14 } for SqueezeNet 1.1.
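
In other words, the window must cover conv10’s entire output map (15x15 for SqueezeNet 1.0, 14x14 for SqueezeNet 1.1), so the pooling reduces each channel to a single 1x1 value, which is exactly what the original global average pooling computed. The resulting pooling_param blocks are simply:

# SqueezeNet 1.0: conv10 outputs a 15x15 map
pooling_param {
  pool: AVE
  kernel_size: 15
  stride: 15
}

# SqueezeNet 1.1: conv10 outputs a 14x14 map
pooling_param {
  pool: AVE
  kernel_size: 14
  stride: 14
}

This presumably also explains why { kernel_size: 13, stride: 1 } parsed but mispredicted: a 13x13 window sliding with stride 1 over the larger map produces a 3x3 (v1.0) or 2x2 (v1.1) output instead of a single averaged value per class, so the downstream prob layer reads the wrong numbers.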

The updated deploy.prototxt files can be found in the two new CK-TensorRT packages: