Is allowGPUFallback applied per layer or per model?

Hi,

(Jetson AGX Xavier)

I’m following the guide and tried the ResNet50 model, and got:
[W] [TRT] Default DLA is enabled but layer prob is not running on DLA, falling back to GPU.
[W] [TRT] Warning: no implementation of prob obeys the requested constraints, using a higher precision type

Q: Does the fallback apply only to the last layer (prob), or to all the layers? That is, do all the other layers run on the DLA, except the last one?

Thanks for the help!

Hi,

The fallback applies to all the layers: any layer that is not supported on the DLA will fall back to the GPU, but only when the --allowGPUFallback flag is set. For example:
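A minimal sketch of such a run (the prototxt path and output name here are placeholders based on the usual ResNet50 sample, and the exact flag set can vary between trtexec versions):

# Run ResNet50 on DLA core 0 in fp16 (DLA needs fp16 or int8);
# layers with no DLA implementation, such as prob, fall back to the GPU.
trtexec --deploy=ResNet50_N2.prototxt --output=prob --fp16 \
        --useDLACore=0 --allowGPUFallback

Without --allowGPUFallback, the engine build fails at the first unsupported layer instead of just printing the warnings you saw.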

Thanks

Hi,

  1. Is the DLA (on Xavier) parallel to the GPU? I ran trtexec on the GPU and added a second run on the DLA (similar to https://developer.nvidia.com/embedded/jetson-agx-xavier-dl-inference-benchmarks), and the GPU’s performance seems unchanged when the DLA run is added. Is that expected? This happened with ResNet50, where the last prob layer falls back to the GPU, so now I’m a bit confused… (the commands I ran are sketched after this list)

  2. Are you aware of any classification and/or detection models that run solely on the DLA and are easy to train (say, in DIGITS or TLT)? I tried ResNet18 from TLT and ResNet50 from the model zoo (DIGITS), and both throw warnings about falling back to the GPU…
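For reference, this is roughly what I ran (paths and batch size are placeholders), as two processes started side by side:

# GPU run:
trtexec --deploy=ResNet50_N2.prototxt --output=prob --fp16 --batch=8 &
# DLA run launched at the same time (prob falls back to the GPU):
trtexec --deploy=ResNet50_N2.prototxt --output=prob --fp16 \
        --useDLACore=0 --allowGPUFallback &
wait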

Thanks again for the help!