Does the TensorRT Python API support UFF model INT8 calibration?

I only found an example of INT8 calibration for a Caffe model, and I am not sure whether the same approach also works for a UFF model.


import tensorrt as trt
import common  # GiB() helper from the TensorRT Python samples

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

class ModelData(object):
    DEPLOY_PATH = "deploy.prototxt"
    MODEL_PATH = "mnist_lenet.caffemodel"
    OUTPUT_NAME = "prob"
    # The original model is a float32 one.
    DTYPE = trt.float32

# This function builds an engine from a Caffe model.
def build_int8_engine(deploy_file, model_file, calib):
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.CaffeParser() as parser:
        # We set the builder batch size to be the same as the calibrator's, as we use the same batches
        # during inference. Note that this is not required in general, and inference batch size is
        # independent of calibration batch size.
        builder.max_batch_size = calib.get_batch_size()
        builder.max_workspace_size = common.GiB(1)
        builder.int8_mode = True
        builder.int8_calibrator = calib
        # Parse the Caffe model, mark its output, and build the engine; calibration runs during the build.
        model_tensors = parser.parse(deploy=deploy_file, model=model_file, network=network, dtype=ModelData.DTYPE)
        network.mark_output(model_tensors.find(ModelData.OUTPUT_NAME))
        return builder.build_cuda_engine(network)
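
For context, the calib object passed to build_int8_engine is an INT8 entropy calibrator implemented in Python, roughly along these lines (a sketch with my own names, not the sample's exact code):

import os
import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit
import tensorrt as trt

class EntropyCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, data, cache_file, batch_size=32):
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.cache_file = cache_file
        self.data = data                      # calibration inputs, float32, shape (N, C, H, W)
        self.batch_size = batch_size
        self.current_index = 0
        # Device buffer large enough for one calibration batch.
        self.device_input = cuda.mem_alloc(self.data[0].nbytes * self.batch_size)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        if self.current_index + self.batch_size > self.data.shape[0]:
            return None  # no more data: calibration stops here
        batch = np.ascontiguousarray(self.data[self.current_index:self.current_index + self.batch_size])
        cuda.memcpy_htod(self.device_input, batch)
        self.current_index += self.batch_size
        return [int(self.device_input)]

    def read_calibration_cache(self):
        # Reuse an existing cache so later builds can skip calibration.
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)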

Hi,

Yes, see sample_uff_ssd in the Sample Support Guide :: NVIDIA Deep Learning TensorRT Documentation.

That sample includes INT8 instructions as well.
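
The builder flags shown in your snippet carry over unchanged when the network comes from the UFF parser; only the parsing step differs. A minimal sketch of that flow (the input/output names and shape below are placeholders for whatever your frozen graph uses, and calib is the same kind of calibrator object):

import tensorrt as trt
import common  # GiB() helper from the TensorRT Python samples

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_int8_engine_uff(uff_file, calib):
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
        builder.max_batch_size = calib.get_batch_size()
        builder.max_workspace_size = common.GiB(1)
        builder.int8_mode = True
        builder.int8_calibrator = calib
        # Placeholder names/shape: use the input and output node names from your own UFF graph.
        parser.register_input("Input", (3, 300, 300))
        parser.register_output("MarkOutput_0")
        parser.parse(uff_file, network)
        # INT8 calibration runs as part of the engine build.
        return builder.build_cuda_engine(network)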

Thanks,
NVIDIA Enterprise Support