TensorRT 1.0 RC on Titan X Maxwell

  1. Is it possible to make use of TensorRT 1.0 RC on Titan X Maxwell (in particular: non-Jetson, not via JetPack)?
  2. I have successfully installed libgie-dev and libgie1 via the QuickStart instructions, but can’t find any documentation on usage beyond the few snippets on https://devblogs.nvidia.com/parallelforall/production-deep-learning-nvidia-gpu-inference-engine/. Where can I find the documentation, and are there any examples beyond the snippets? (I am aware of the dusty-nv/jetson-inference repo on GitHub, however I am not able to build it on non-Jetson; maybe it’s not meant to be.)

Many thanks!

This is the GIE guide copied from my Jetson TX1; I hope it helps.

GIE User Guide

The NVIDIA GPU Inference Engine (GIE) is a C++ library that facilitates high performance inference on NVIDIA GPUs. It takes a network definition and optimizes it by merging tensors and layers, transforming weights, choosing efficient intermediate data formats, and selecting from a large kernel catalog based on layer parameters and measured performance.

This release of GIE is built with gcc 4.8.

GIE has the following layer types:

Convolution, with or without bias. Currently only 2D convolutions (i.e. 4D input and output tensors) are supported. Note: The operation this layer performs is actually a correlation, which is a consideration if you are formatting weights to import via GIE’s API rather than the caffe parser library.
Activation: ReLU, tanh and sigmoid.
Pooling: max and average.
Scale: per-tensor, per channel or per-weight affine transformation and exponentiation by constant values. Batch Normalization can be implemented using the Scale layer.
ElementWise: sum, product or max of two tensors.
LRN: cross-channel only.
Fully-connected, with or without bias.
SoftMax: cross-channel only.
Deconvolution, with or without bias.
While GIE is independent of any framework, the package includes a parser for caffe models named NvCaffeParser, which provides a simple mechanism for importing network definitions. NvCaffeParser uses the above layers to implement caffe’s Convolution, ReLU, Sigmoid, TanH, Pooling, Power, BatchNorm, Eltwise, LRN, InnerProduct, SoftMax, Scale, and Deconvolution layers. Caffe features not supported by GIE include:

Deconvolution groups
PReLU
Scale, other than per-channel scaling
EltWise with more than two inputs

Note: GIE’s caffe parser does not support legacy formats in caffe prototxt - in particular, layer types are expected to be expressed in the prototxt as strings delimited by double quotes.
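
For illustration, the parser accepts the new-style prototxt, which quotes the layer type, but not the legacy format, which used a bare enum token:

# new-style prototxt (accepted)
layer {
  name: "conv1"
  type: "Convolution"
}

# legacy prototxt (not supported)
layers {
  name: "conv1"
  type: CONVOLUTION
}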

Getting Started

There are two phases to using GIE:

In the build phase, the toolkit takes a network definition, performs optimizations, and generates the inference engine.
In the execution phase, the engine runs inference tasks using input and output buffers on the GPU.
The build phase can take considerable time, especially when running on embedded platforms. So a typical application will build an engine once, and then serialize it for later use.

The build phase performs the following optimizations on the layer graph:

elimination of layers whose outputs are not used
fusion of convolution, bias and ReLU operations
aggregation of operations with sufficiently similar parameters and the same source tensor (for example, the 1x1 convolutions in GoogleNet v5’s inception module)
elision of concatenation layers by directing layer outputs to the correct eventual destination.
In addition it runs layers on dummy data to select the fastest from its kernel catalog, and performs weight preformatting and memory optimization where appropriate.

The Network Definition

A network definition consists of a sequence of layers, and a set of tensors.

Each layer computes a set of output tensors from a set of input tensors. Layers have parameters, e.g. convolution size and stride, and convolution filter weights.

A tensor is either an input to the network, or an output of a layer. Tensors have a datatype specifying their precision (16- and 32-bit floats are currently supported) and three dimensions (channels, width, height). The dimensions of an input tensor are defined by the application, and for output tensors they are inferred by the builder.

Each layer and tensor has a name, which can be useful when profiling or reading GIE’s build log (see Logging below).

When using the caffe parser, tensor and layer names are taken from the caffe prototxt.

SampleMNIST

The MNIST sample demonstrates typical build and execution phases using a caffe model trained on the MNIST dataset using the NVIDIA DIGITS tutorial.

Logging

GIE requires a logging interface to be implemented, through which it reports errors, warnings, and informational messages.

class Logger : public ILogger           
{
    void log(Severity severity, const char* msg) override
    {
        // suppress info-level messages
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;
Here we suppress informational messages, and report only warnings and errors.

The Build Phase - caffeToGIEModel

First we create the GIE builder. The application must implement a logging interface, through which GIE will provide information about optimization stages during the build phase, and also warnings and error information.

IBuilder* builder = createInferBuilder(gLogger);
Then we create the network definition structure, which we populate from a caffe model using the caffe parser library:

INetworkDefinition* network = builder->createNetwork();
CaffeParser* parser = createCaffeParser();
const IBlobNameToTensor* blobNameToTensor =
    parser->parse(locateFile(deployFile).c_str(),
                  locateFile(modelFile).c_str(),
                  *network,
                  DataType::kFLOAT);
In this sample, we instruct the parser to generate a network whose weights are 32-bit floats. As well as populating the network definition, the parser returns a dictionary that maps from caffe blob names to GIE tensors.

Note: A GIE network definition has no notion of in-place operation - e.g. the input and output tensors of a ReLU are different. When a caffe network uses an in-place operation, the GIE tensor returned in the dictionary corresponds to the last write to that blob. For example, if a convolution creates a blob and is followed by an in-place ReLU, that blob’s name will map to the GIE tensor which is the output of the ReLU.

Since the caffe model does not tell us which tensors are the outputs of the network, we need to specify these explicitly after parsing:

for (auto& s : outputs)
    network->markOutput(*blobNameToTensor->find(s.c_str()));
There is no restriction on the number of output tensors, but marking a tensor as an output may prohibit some optimizations on that tensor.

Note: at this point we have parsed the caffe model to obtain the network definition, and are ready to create the engine. However, we cannot yet release the parser object because the network definition holds weights by reference into the caffe model, not by value. It is only during the build process that the weights are read from the caffemodel.

We next build the engine from the network definition:

builder->setMaxBatchSize(maxBatchSize);
builder->setMaxWorkspaceSize(1 << 20);
ICudaEngine* engine = builder->buildCudaEngine(*network);
maxBatchSize is the size for which the engine will be tuned. At execution time, smaller batches may be used, but not larger. Note that execution of smaller batch sizes may be slower than with a GIE engine optimized for that size.
maxWorkspaceSize is the maximum amount of scratch space which the engine may use at runtime.
We could use the engine directly, but here we serialize it to a stream, which is the typical usage mode for GIE:

engine->serialize(gieModelStream);
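For example, assuming gieModelStream is a std::stringstream (the stream type is not shown in this guide), the serialized engine can be written to disk with standard iostreams:

// a sketch: persist the serialized engine for later runs
// (requires <fstream> and <sstream>; the filename is hypothetical)
std::stringstream gieModelStream;
engine->serialize(gieModelStream);

std::ofstream cache("gie.engine", std::ios::binary);
cache << gieModelStream.rdbuf();
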
Deserializing the engine

To deserialize the engine, we create a GIE runtime object:

IRuntime* runtime = createInferRuntime(gLogger);
ICudaEngine* engine = runtime->deserializeCudaEngine(gieModelStream);
We also need to create an execution context. One engine can support multiple contexts, allowing inference to be performed on multiple batches simultaneously while sharing the same weights.

IExecutionContext *context = engine->createExecutionContext();
Note: Serialized engines are not portable across platforms or GIE versions.
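
If the engine was cached to a file as sketched earlier, read it back into the stream before deserializing:

// a sketch: read the cached engine back into a stream
// (hypothetical filename from the earlier sketch)
std::ifstream cache("gie.engine", std::ios::binary);
std::stringstream gieModelStream;
gieModelStream << cache.rdbuf();
// then proceed as above: createInferRuntime and deserializeCudaEngine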

The Execution Phase - doInference()

The input to the engine is an array of pointers to input and output buffers on the GPU (Note: All GIE inputs and outputs are in contiguous NCHW format.) The engine can be queried for the buffer indices, using the tensor names assigned when the network was created.

int inputIndex = engine->getBindingIndex(INPUT_BLOB_NAME), 
    outputIndex = engine->getBindingIndex(OUTPUT_BLOB_NAME);
In a typical production case, GIE will execute asynchronously. The enqueue() method adds kernels to a CUDA stream specified by the application, which may then wait on that stream for completion. The fourth parameter to enqueue() is an optional CUDA event, which will be signaled when the input buffers are no longer in use and can be refilled.

In this sample we simply copy the input buffer to the GPU, run inference, then copy the result back and wait on the stream:

cudaMemcpyAsync(<...>, cudaMemcpyHostToDevice, stream);
context.enqueue(batchSize, buffers, stream, nullptr);
cudaMemcpyAsync(<...>, cudaMemcpyDeviceToHost, stream);
cudaStreamSynchronize(stream);
Note: The batch size must be at most the value specified when the engine was created.
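
Filled out, the elided calls above might look like the following sketch; hostInput, hostOutput and the byte counts are hypothetical, and buffers[] holds device pointers obtained earlier with cudaMalloc:

// a sketch of the full sequence; names and sizes are hypothetical
cudaStream_t stream;
cudaStreamCreate(&stream);

cudaMemcpyAsync(buffers[inputIndex], hostInput, inputBytes,
                cudaMemcpyHostToDevice, stream);
context.enqueue(batchSize, buffers, stream, nullptr);
cudaMemcpyAsync(hostOutput, buffers[outputIndex], outputBytes,
                cudaMemcpyDeviceToHost, stream);
cudaStreamSynchronize(stream);

cudaStreamDestroy(stream);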

SampleGoogleNet

SampleGoogleNet illustrates layer-based profiling and GIE’s half2 mode, which is the fastest mode for batch sizes greater than one on platforms that natively support 16-bit inference.

Profiling

To profile a network, implement the IProfiler interface and add the profiler to the execution context:

context.profiler = &gProfiler;
Profiling is not currently supported for asynchronous execution, and so we must use GIE’s synchronous execute() method:

for (int i = 0; i < TIMING_ITERATIONS; i++)
    context.execute(batchSize, buffers);
After execution has completed, the profiler callback is called once for every layer. The sample accumulates layer times over invocations, and averages the time for each layer at the end.

Observe that the layer names are modified by GIE’s layer-combining operations.
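
A minimal profiler might look like the following sketch; it assumes the reportLayerTime(const char*, float) callback implemented by the shipped samples, and the accumulation and printing scheme is illustrative:

// a sketch: sum per-layer times over all timing iterations
// (requires <map>, <string> and <iostream>)
struct Profiler : public IProfiler
{
    std::map<std::string, float> totalMs;

    void reportLayerTime(const char* layerName, float ms) override
    {
        totalMs[layerName] += ms;   // invoked once per layer per execution
    }

    void print()
    {
        for (auto& p : totalMs)
            std::cout << p.first << ": "
                      << p.second / TIMING_ITERATIONS << " ms (avg)\n";
    }
} gProfiler;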

half2 mode

GIE can use 16-bit instead of 32-bit arithmetic and tensors, but this alone may not deliver significant performance benefits. Half2 is an execution mode where internal tensors interleave 16 bits from adjacent pairs of images, and is the fastest mode of operation for batch sizes greater than one.

To use half2 mode, two additional steps are required:

1) create an input network with 16-bit weights, by supplying the DataType::kHALF parameter to the parser:

const IBlobNameToTensor *blobNameToTensor = 
  parser->parse(locateFile(deployFile).c_str(),
                locateFile(modelFile).c_str(),
                *network,
                DataType::kHALF);
2) set ‘half2’ mode in the builder when building the engine:

builder->setHalf2Mode(true);
Using GIE with multiple GPUs

Each ICudaEngine object is bound to a specific GPU when it is instantiated, either by the builder or on deserialization. To select the GPU, use cudaSetDevice() before calling the builder or deserializing the engine. Each IExecutionContext is bound to the same GPU as the engine from which it was created. When calling execute() or enqueue(), ensure that the thread is associated with the correct device by calling cudaSetDevice() if necessary.
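
For example (a sketch; the device index is arbitrary):

cudaSetDevice(1);   // arbitrary choice: bind subsequent GIE objects to GPU 1
IRuntime* runtime = createInferRuntime(gLogger);
ICudaEngine* engine = runtime->deserializeCudaEngine(gieModelStream);
// contexts inherit the engine's device binding
IExecutionContext* context = engine->createExecutionContext();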

Data Formats

GIE network inputs and outputs are 32-bit tensors in contiguous NCHW format.

For weights:

Convolution weights are in contiguous KCRS format, where K indexes over output channels, C over input channels, and R and S over the height and width of the convolution, respectively.
Fully Connected weights are in contiguous row-major layout
Deconvolution weights are in contiguous CKRS format (where C, K, R and S are as above).
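
Concretely, for a convolution with K output channels, C input channels and an R x S kernel, the weight for indices (k, c, r, s) sits at the following offset in the flat array (a worked example of the layout above; convWeights is a hypothetical pointer):

// contiguous KCRS: k varies slowest, s fastest
int offset = ((k * C + c) * R + r) * S + s;
float w = convWeights[offset];
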
FAQ

Q: How can I use my own layer types in conjunction with GIE? 
A: This release of GIE doesn’t support custom layers. To use your own layer in the middle of a network, create two GIE pipelines, one to run before your layer and one to run afterwards.

IExecutionContext *contextA = engineA->createExecutionContext();
IExecutionContext *contextB = engineB->createExecutionContext();

<...>

contextA->enqueue(batchSize, buffersA, stream, nullptr);
myLayer(outputFromA, inputToB, stream);
contextB->enqueue(batchSize, buffersB, stream, nullptr);
The first GIE pipeline will read the input and write to outputFromA, and the second will read from inputToB and generate the final output.

Q: How do I create an engine optimized for several possible batch sizes? 
A: While GIE allows an engine optimized for a given batch size to run at any smaller size, the performance for those smaller sizes may not be as well-optimized. To optimize for multiple different batch sizes, run the builder and serialize an engine for each batch size. A future release of GIE will be able to optimize a single engine for multiple batch sizes, thereby allowing for sharing of weights where layers at different batch sizes use the same weight formats.
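
With the current release that means one build per batch size, roughly as in this sketch (the batch sizes are arbitrary, and each engine is serialized to its own stream):

// a sketch: build and serialize one engine per batch size of interest
int batchSizes[] = {1, 4, 16};   // arbitrary sizes
std::stringstream engineStreams[3];

for (int i = 0; i < 3; i++)
{
    builder->setMaxBatchSize(batchSizes[i]);
    ICudaEngine* engine = builder->buildCudaEngine(*network);
    engine->serialize(engineStreams[i]);   // one serialized engine per size
    engine->destroy();
}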

TensorRT user guide can be found at: /usr/share/doc/gie/doc/User Guide.html
TensorRT API manual can be found at: /usr/share/doc/gie/doc/API/index.html
Sample code can be found at: /usr/src/gie_samples/samples

Hi revilokeb:
Did you find the documentation? I have the same problem.

If you install the -dev package, you should find the documentation in

/usr/share/doc/gie/ (in 1.0)

/usr/share/doc/tensorrt/ (in 2.0 EA and later)