How to implement a batch normalization layer with the TensorRT Scale layer?

The TensorRT 2.1 User Guide says that Batch Normalization can be implemented using the TensorRT Scale layer, but I can't find a sample showing how. How can I implement a batch normalization layer with the Scale layer?
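For background on why the Scale layer suffices: at inference time batch norm is y = gamma * (x - mean) / sqrt(var + eps) + beta, which is just a per-channel affine transform. So the four BN parameters fold into one scale and one shift per channel, exactly what a TensorRT Scale layer in per-channel mode applies. A plain-Python sketch of the fold (illustrative values, not TensorRT API calls):

```python
import math

def bn_to_scale(gamma, beta, mean, var, eps=1e-5):
    """Fold inference-time batch-norm parameters into per-channel
    scale/shift values for a Scale layer (per-channel mode)."""
    scale = [g / math.sqrt(v + eps) for g, v in zip(gamma, var)]
    shift = [b - m * s for b, m, s in zip(beta, mean, scale)]
    return scale, shift

# Hypothetical 2-channel example: check the fold against the BN formula.
gamma, beta = [1.0, 0.5], [0.1, -0.2]
mean, var = [0.3, 1.2], [0.9, 0.4]
scale, shift = bn_to_scale(gamma, beta, mean, var)

x = 2.0  # one activation in channel 0
bn_out = gamma[0] * (x - mean[0]) / math.sqrt(var[0] + 1e-5) + beta[0]
folded_out = scale[0] * x + shift[0]
assert abs(bn_out - folded_out) < 1e-9
```

The `scale` and `shift` vectors computed this way are what you would load as the Scale layer's scale and shift weights.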

Hi,

TensorRT can parse the ‘BatchNorm’ type directly.
No extra implementation is needed.

Thanks

Hello,

Thanks for your reply, but when I use giexec to parse the model, it says: Error parsing text-format ditcaffe.NetParameter: 47:12: Message type "ditcaffe.LayerParameter" has no field named "bn_param".
And the Batch Normalization layer in the prototxt is written like this:
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "conv1_1_bn"
  type: "BN"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 1
    decay_mult: 0
  }
  bn_param {
    bn_mode: INFERENCE
    scale_filler {
      type: "constant"
      value: 1
    }
    shift_filler {
      type: "constant"
      value: 0
    }
  }
}

Hi,

Please change bn_param to batch_norm_param and give it a try.
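If the model can be expressed with the stock Caffe "BatchNorm" layer (an assumption; the "BN" type above looks like a custom layer from a Caffe fork), the prototxt would look roughly like this. Note that in BVLC Caffe, BatchNorm only normalizes; the learned gamma/beta come from a following Scale layer with bias_term: true.

```
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "conv1_1_bn"
  type: "BatchNorm"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  bottom: "conv1_1"
  top: "conv1_1"
  name: "conv1_1_scale"
  type: "Scale"
  scale_param {
    bias_term: true
  }
}
```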
Thanks.

Hi,

Does the 'use_global_stats' param in batch_norm_param work?
It seems that TensorRT ignores that param and sets it to true by default.

I'm using TensorRT 3.
Thanks

Hi,

This flag is normally used for training only.
Since TensorRT is designed for run-time inference, ‘use_global_stats’ is not supported.

Thanks.

Hi,

Thanks for your reply.
I'm using this flag at inference time because I want the layer to act like Instance Normalization.
Do you know how I can implement an InstanceNorm layer with the current TensorRT support?

Thanks

Hi,

You can implement a custom layer with the plugin API if you are using the Caffe framework.

Here are some samples for your reference:
Native samples:
/usr/src/tensorrt/samples/samplePlugin/samplePlugin.cpp
/usr/src/tensorrt/samples/sampleCharRNN/sampleCharRNN.cpp
/usr/src/tensorrt/samples/sampleFasterRCNN/sampleFasterRCNN.cpp

Jetson sample:

Thanks.
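For context, the computation such an InstanceNorm plugin has to perform is batch norm applied per sample and per channel, with the mean and variance taken over the spatial dimensions only. A plain-Python sketch of the math (a real plugin would implement this in CUDA inside the plugin's enqueue method):

```python
import math

def instance_norm(x, eps=1e-5):
    """x: nested list with shape [N][C][H][W].
    Normalize each (sample, channel) plane with its own mean/variance."""
    out = []
    for sample in x:
        norm_sample = []
        for plane in sample:  # one [H][W] spatial plane
            vals = [v for row in plane for v in row]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            inv = 1.0 / math.sqrt(var + eps)
            norm_sample.append([[(v - mean) * inv for v in row] for row in plane])
        out.append(norm_sample)
    return out
```

Unlike batch norm at inference, the statistics here depend on the input itself, which is why it cannot be folded into a fixed Scale layer.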

Hi everybody,

I am using TensorFlow (v1.4) and I want to create a Batch Normalization layer. How can I do that with this Scale layer (based on the TRT documentation)? I couldn’t find any example. I’m using TensorRT v3.0.

Thanks!

Hi,

For inference, with a given mean and variance, you can use this function:

tf.nn.batch_normalization

It should be convertible to TensorRT with our UFF parser.

Here is some relevant sample for your reference:
https://github.com/NVIDIA-Jetson/tf_to_trt_image_classification

Thanks.

Hi AastaLLL,

You suggested using

tf.nn.batch_normalization

but how do you get the graph to calculate the mean and variance? I tried

x_mean, x_var = tf.nn.moments(batch_x, axes=[1], keep_dims=True)
x_norm = tf.nn.batch_normalization(batch_x, x_mean, x_var, None, None, 0.001)

and the UFF parser says:

Converting to UFF graph
Warning: keepdims is ignored by the UFF Parser and defaults to True
Warning: No conversion function registered for layer: SquaredDifference yet.
Converting as custom op SquaredDifference preprocess/moments/SquaredDifference
name: "preprocess/moments/SquaredDifference"
op: "SquaredDifference"
input: "input/IteratorGetNext"
input: "preprocess/moments/StopGradient"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Warning: No conversion function registered for layer: StopGradient yet.
Converting as custom op StopGradient preprocess/moments/StopGradient
name: "preprocess/moments/StopGradient"
op: "StopGradient"
input: "preprocess/moments/mean"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

Warning: keepdims is ignored by the UFF Parser and defaults to True
Warning: No conversion function registered for layer: Neg yet.
Converting as custom op Neg preprocess/batchnorm/Neg
name: "preprocess/batchnorm/Neg"
op: "Neg"
input: "preprocess/moments/mean"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}

and the PLAN maker says:

Step 1/5: Creating Builder...
-> Builder Successfully Created.
Step 2/5: Creating Network...
-> Network Successfully Created.
Step 3/5: Parsing UFF Model...
ERROR: UFFParser: Validator error: preprocess/moments/StopGradient: Unsupported operation _StopGradient
ERROR: Failed to Parse UFF Model!

I can calculate the mean and variance manually, but I still get the UFF parser error for the batchnorm/Neg operation, which is built into tf.nn.batch_normalization as shown here:

https://imgur.com/l3xj2Xu

Hi,

In general, the mean and variance are predefined at training time and held fixed for inference.

Thanks.
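Since the statistics are fixed at inference, one way around the unsupported moments/Neg/StopGradient ops is to fold everything into two per-channel constants offline, so the exported graph contains only an elementwise multiply and add (ops the UFF parser handles, as far as I can tell). A sketch of the offline fold in plain Python (hypothetical values; in the TF 1.x graph this becomes `y = x * tf.constant(scale) + tf.constant(shift)`):

```python
import math

def fold_bn_constants(gamma, beta, mean, var, eps=1e-3):
    """Precompute the two per-channel constants so the exported graph
    needs only a Mul and an Add (no moments / Neg / StopGradient ops)."""
    scale = [g / math.sqrt(v + eps) for g, v in zip(gamma, var)]
    shift = [b - m * s for b, m, s in zip(beta, mean, scale)]
    return scale, shift

# Offline: stats gathered at training time (hypothetical values).
scale, shift = fold_bn_constants(gamma=[1.0], beta=[0.0],
                                 mean=[0.5], var=[2.0])

# What the graph computes at inference:
x = 1.5
y = x * scale[0] + shift[0]
ref = (x - 0.5) / math.sqrt(2.0 + 1e-3)  # the batch-norm definition
assert abs(y - ref) < 1e-9
```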