I am testing the newly released JetPack 2.3 on my Jetson TX1, but have run into some trouble.
I tried to use GIE to run InceptionBN on ImageNet. The batch_size
is 1.
If I run only one batch, or a few hundred (roughly 200~500) batches, everything is fine. However, if I try to run the whole epoch, at some point the process runs out of memory.
During execution, I can see that the memory usage keeps increasing.
My code structure is:
for (int i = 0; i < num_batch; i++) {
    // copy input to buffer
    // ...
    // this is evil!
    ctx->execute(batch_size, buffers);
    // copy output from buffer
    // ...
}
If I comment out the single line that executes the model:
// ctx->execute(batch_size, buffers)
everything is fine and the memory usage stays at a constant level.
So I suspect it is this API:
virtual bool nvinfer1::IExecutionContext::execute(
int batchSize,
void **bindings
)
that keeps consuming memory.
I have tried two ways of executing:
- Create a single global nvinfer1::IExecutionContext and reuse it for all batches.
- Create one nvinfer1::IExecutionContext for EACH batch, and destroy it immediately after execution.
Both fail.
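To be concrete, the second approach looked roughly like this (a simplified sketch, not my exact code: `engine` stands for the deserialized nvinfer1::ICudaEngine, and buffer setup and error checking are omitted):

```cpp
for (int i = 0; i < num_batch; i++) {
    // create a fresh context for this batch only
    nvinfer1::IExecutionContext* ctx = engine->createExecutionContext();
    // copy input to buffer
    // ...
    ctx->execute(batch_size, buffers);
    // copy output from buffer
    // ...
    // destroy it immediately, yet memory usage still grows over the epoch
    ctx->destroy();
}
```

The first approach is the same loop with a single context created once before the loop and destroyed once after it; the growth in memory usage is identical either way.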
Has anyone run into the same situation, or does anyone know how to fix it?