Out of memory when using GIE (from JetPack 2.3) on Jetson TX1

I am testing the newly released JetPack 2.3 on my Jetson TX1, but I have run into some trouble.

I tried to use GIE to run InceptionBN on ImageNet. The batch_size is 1.

If I run only one batch, or several hundred (roughly 200~500) batches, everything is fine. However, if I try to run a whole epoch, the process runs out of memory at some point.

During execution, I can see that the memory usage keeps increasing.

My code structure is:

for (int i = 0; i < num_batch; i++) {
    //  copy input to buffer
    //  ...
      
    //  this is evil!
    ctx->execute(batch_size, buffers);
    
    //  copy output from buffer
    //  ...
}
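
For reference, this is roughly what the loop body does with the copies spelled out. It is only a sketch: the binding names ("data", "prob"), sizes, and host pointers below are placeholders, not my actual code.

//  assumes engine is the deserialized nvinfer1::ICudaEngine and
//  buffers[] holds device pointers allocated for batch_size = 1
int inputIndex  = engine->getBindingIndex("data");
int outputIndex = engine->getBindingIndex("prob");

for (int i = 0; i < num_batch; i++) {
    //  host -> device copy of the preprocessed input
    cudaMemcpy(buffers[inputIndex], hostInput, inputSize, cudaMemcpyHostToDevice);

    //  synchronous inference on this batch
    ctx->execute(batch_size, buffers);

    //  device -> host copy of the class probabilities
    cudaMemcpy(hostOutput, buffers[outputIndex], outputSize, cudaMemcpyDeviceToHost);
}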

If I comment out the single line that executes the model:

// ctx->execute(batch_size, buffers)

everything is fine and the memory usage stays at a constant level.

So I suspect it is this API:

virtual bool nvinfer1::IExecutionContext::execute(
      int batchSize, 
      void **bindings
    )

that keeps consuming memory.

I have tried two ways to execute:

  1. Create a single global nvinfer1::IExecutionContext and reuse it for all batches.
  2. Create one nvinfer1::IExecutionContext for EACH batch, and destroy it immediately after each execution.

Both fail. (The second approach looked roughly like the sketch below.)
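
A sketch only; apart from the execute call, the names here are placeholders rather than my actual code:

//  per-batch context: create, run, destroy
//  assumes engine is the deserialized nvinfer1::ICudaEngine
for (int i = 0; i < num_batch; i++) {
    nvinfer1::IExecutionContext* ctx = engine->createExecutionContext();

    //  copy input to buffer
    //  ...

    ctx->execute(batch_size, buffers);

    //  copy output from buffer
    //  ...

    ctx->destroy();   //  destroyed immediately, but memory still grows
}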

Has anyone run into the same situation, or does anyone know how to fix it?

I know nothing of the particular code, but in similar situations I’d test the difference between this original line:

// Original.
    //  this is evil!
    ctx->execute(batch_size, buffers);

…and this line:

// Slight change to trigger any destructor for each loop.
    {
        //  this is evil!
        ctx->execute(batch_size, buffers);
    }

Does this change how fast memory runs out?
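
To see how fast it runs out, you could also log the free memory once per loop iteration. A minimal sketch using cudaMemGetInfo (the helper name is made up):

#include <cuda_runtime.h>
#include <cstdio>

//  call once per iteration: a steady, linear drop in free memory
//  points at a per-execute leak rather than a one-off allocation
void logFreeMemory(int iteration)
{
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);
    printf("iter %d: free %zu MiB / total %zu MiB\n",
           iteration, freeBytes >> 20, totalBytes >> 20);
}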

I have figured out where the problem is. It was due to another part of the code. But anyway, thanks a lot!