nvinfer serialization path?

Is there a way to get nvinfer to spit out a serialized model somewhere specific, or will it always use the same path as the model? I mentioned it in another thread, but it would be nice if a user cache could be used by default (~/.deepstream/cache or something). My deepstream app installs models to /usr/local/share/… and I don’t want to make that writable.

Hi,
There are two properties in nvinfer plugin:

> config-file-path    : Path to the configuration file for this instance of nvinfer
                        flags: readable, writable, changeable only in NULL or READY state
                        String. Default: ""

> model-engine-file   : Absolute path to the pre-generated serialized engine file for the model
                        flags: readable, writable, changeable only in NULL or READY state
                        String. Default: ""

It should work if these are configured properly. Please share more information about what you would like to do and why the current implementation cannot achieve it. Thanks.
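For example, a minimal Python sketch of configuring both properties could look like this (the element name and paths below are placeholders, not taken from this thread):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Create the inference element, point it at a config file and a pre-generated
# engine. If the engine file cannot be read, nvinfer falls back to building an
# engine from the model files listed in the config.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "/path/to/pgie_config.txt")
pgie.set_property("model-engine-file", "/path/to/model_b1_fp32.engine")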

DaneLLL,

Thanks for the response, but I tried that and nvinfer doesn’t seem to care what I set model-engine-file to at runtime. Here is where I set the properties on the nvinfer element, followed by the behavior I am seeing:

# inference bin constructor (this is Genie, not Python, but you get the idea):
	...
	self.pie.set_property("config-file-path", pie_config != null ? pie_config : DEFAULT_PIE_CONFIG)
	self.pie.set_property("model-engine-file", self.model_engine_file)

I’ve triple-checked that the properties are set on the nvinfer element (in my case to “/home/username/.app/redaction.engine”), and this path exists and is writable, but when the pipeline starts, nvinfer still wants to write to /usr/local/share where the model is loaded from in the config file.

0:00:00.818962868 20892   0x5588934200 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<pie> NvDsInferContext[UID 1]:useEngineFile(): Failed to read from model engine file
0:00:00.819060760 20892   0x5588934200 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<pie> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:15.592730144 20892   0x5588934200 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<pie> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /usr/local/share/app/models/fd_lpd.caffemodel_b1_fp32.engine

Am I doing something wrong?

Hi,
Please match the file name to the batch-size and network-mode, and give it a try.
For example,

model-engine-file=model_b1_fp32.engine

means batch-size=1 and network-mode=0 (FP32).
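A quick sketch of that naming convention in Python; only network-mode 0 (fp32) is stated above, so the suffixes for modes 1 and 2 are assumptions based on the engine names seen in this thread’s logs:

# Expected engine-file name: model file name + batch size + precision suffix.
PRECISION_SUFFIX = {0: "fp32", 1: "int8", 2: "fp16"}  # 1/2 assumed

def expected_engine_name(model_file: str, batch_size: int, network_mode: int) -> str:
    return f"{model_file}_b{batch_size}_{PRECISION_SUFFIX[network_mode]}.engine"

# expected_engine_name("model", 1, 0)               -> "model_b1_fp32.engine"
# expected_engine_name("resnet10.caffemodel", 2, 1) -> "resnet10.caffemodel_b2_int8.engine"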

That didn’t fix it. nvinfer still tries to store the engine under the original filename at the original path.

[username@hostname] -- [~/.nvalhalla] 
 $ nvalhalla --uri $(youtube-dl -f best -g https://www.youtub3.com/watch?v=awdX61DPWf4)
Creating LL OSD context new
0:00:00.801387122  5992   0x558cf24070 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<pie> NvDsInferContext[UID 1]:useEngineFile(): Failed to read from model engine file
0:00:00.801466806  5992   0x558cf24070 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<pie> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:12.902342913  5992   0x558cf24070 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<pie> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /usr/local/share/nvalhalla/models/fd_lpd.caffemodel_b1_fp32.engine
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Creating LL OSD context new
^CProcess 5992 has received SIGINT, ending...
[username@hostname] -- [~/.nvalhalla] 
 $ nvalhalla --uri $(youtube-dl -f best -g https://www.youtub3.com/watch?v=awdX61DPWf4) --uri $(youtube-dl -f best -g https://www.youtub3.com/watch?v=FPs_lU01KoI)
Creating LL OSD context new
0:00:00.793193468  6034   0x55ba351630 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<pie> NvDsInferContext[UID 1]:useEngineFile(): Failed to read from model engine file
0:00:00.793282656  6034   0x55ba351630 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<pie> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:13.532042957  6034   0x55ba351630 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<pie> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /usr/local/share/nvalhalla/models/fd_lpd.caffemodel_b1_fp32.engine
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
Creating LL OSD context new
Opening in BLOCKING MODE 
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261

The nvinfer “model-engine-file” property is set to “/home/username/.nvalhalla/redaction_b#_fp32.engine”, but it doesn’t want to generate it there. I will attach a PDF of the pipeline (taken just after the pipeline is built, before the sources are connected from the uridecodebin).
engine-filename-problem.pdf (33.5 KB)

Hi,
deepstream-test3 is based on uridecodebin. Could you please share a patch against it so that we can reproduce the issue?

The example is not deepstream-test3. The code is here, in the dev branch:

It dumps .dot files to ~/.nvalhalla/ and --gst-debug-level works.

… but if you want, I can probably patch deepstream-test3 so it breaks similarly. Do you prefer Python or C, or is the Genie above OK?

Same issue with deepstream_test_3.py:

WARNING: Overriding infer-config batch-size 1  with number of sources  2  

setting model-engine-file property on pgie to /home/username/.pyds/resnet10_b2_int8.engine
Adding elements to Pipeline 

Linking elements in the Pipeline 

Now playing...
1 :  https://r2---sn-nx5s7n7d.googlevideo.com/videoplayback?expire=1583300484&ei=JOteXpbvF9iMkwaJraP4Dw&ip=24.143.115.166&id=o-AJm41vM3QxQA_RlS1LHtHFe8zQFJb7YjclaLxQfFVbTb&itag=22&source=youtube&requiressl=yes&mm=31%2C29&mn=sn-nx5s7n7d%2Csn-nx5e6nez&ms=au%2Crdu&mv=m&mvi=1&pl=18&initcwndbps=2107500&vprv=1&mime=video%2Fmp4&ratebypass=yes&dur=6028.527&lmt=1582233218947338&mt=1583278772&fvip=2&fexp=23842630&c=WEB&txp=7316222&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cvprv%2Cmime%2Cratebypass%2Cdur%2Clmt&sig=ADKhkGMwRQIhANmxsc2t0MHPwKffU6S2MB-AwD-naFUieVfq9eLb0q4gAiBMcvgZzT3mSma4pf32s_t_3qIe02_vr2osVM2OzP5OuQ%3D%3D&lsparams=mm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=ABSNjpQwRAIgHbPEuh5OCByxSRDTz3NcK_2tHQXni1nFhoIuWu-cqVQCIHsk-EoOs_TG9ud-SL6fa4oeVT71g9R6egGgS2PVdPI4
2 :  https://r6---sn-nx5e6nez.googlevideo.com/videoplayback?expire=1583300487&ei=J-teXqWmHIHLkgbyqr2IBg&ip=24.143.115.166&id=o-AAnABEhbC4xu1eHygPQu4Zjh4a9t0VPXZBF8UHd0-KRT&itag=22&source=youtube&requiressl=yes&mm=31%2C29&mn=sn-nx5e6nez%2Csn-nx5s7n7s&ms=au%2Crdu&mv=m&mvi=5&pl=18&initcwndbps=2107500&vprv=1&mime=video%2Fmp4&ratebypass=yes&dur=11984.584&lmt=1579852275658714&mt=1583278772&fvip=2&fexp=23842630&c=WEB&txp=5531432&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cvprv%2Cmime%2Cratebypass%2Cdur%2Clmt&sig=ADKhkGMwRgIhAPXNQI_VbLKVQboFxPOhj4bLWZfBTTvLtGZ092HLSW5vAiEAst55NMUwre59w-zg3z6-5fBDKVy3yC8mFARFE_SQ0CQ%3D&lsparams=mm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=ABSNjpQwRQIhAJm3u05Rqvt7YK9h1igQm3tXxhdK5X6h-3m9TABMMU-5AiB2hj7RBVnj4pfJc51RL5AVgqjkI6N9VpOkVS3RR4QpMw%3D%3D
Starting pipeline 


Using winsys: x11 
Creating LL OSD context new
0:00:00.790471152  9634     0x2d1dbc40 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]:useEngineFile(): Failed to read from model engine file
0:00:00.790540980  9634     0x2d1dbc40 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:33.699119988  9634     0x2d1dbc40 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b2_int8.engine

Here is the patch:

--- a/apps/deepstream-test3/deepstream_test_3.py
+++ b/apps/deepstream-test3/deepstream_test_3.py
@@ -20,6 +20,7 @@
 # DEALINGS IN THE SOFTWARE.
 ################################################################################
 
+import os
 import sys
 sys.path.append('../')
 import gi
@@ -53,6 +54,16 @@ TILED_OUTPUT_HEIGHT=1080
 GST_CAPS_FEATURES_NVMM="memory:NVMM"
 pgie_classes_str= ["Vehicle", "TwoWheeler", "Person","RoadSign"]
 
+
+def engine_filename(engine_basename:str, num_sources:int) -> str:
+    homedir = os.path.expanduser('~')
+    appdir = os.path.join(homedir, '.pyds')
+    os.makedirs(appdir, mode=0o755, exist_ok=True)
+    # the last bit will have to change for Nano
+    engine_filename = f"{engine_basename}_b{num_sources}_int8.engine"
+    return os.path.join(appdir, engine_filename)
+
+
 # tiler_sink_pad_buffer_probe  will extract metadata received on OSD sink pad
 # and update params for drawing rectangle, object information etc.
 def tiler_src_pad_buffer_probe(pad,info,u_data):
@@ -287,6 +298,9 @@ def main(args):
     if(pgie_batch_size != number_sources):
         print("WARNING: Overriding infer-config batch-size",pgie_batch_size," with number of sources ", number_sources," \n")
         pgie.set_property("batch-size",number_sources)
+    model_engine_file = engine_filename("resnet10", number_sources)
+    pgie.set_property('model-engine-file', model_engine_file)
+    print(f"setting model-engine-file property on pgie to {model_engine_file}")
     tiler_rows=int(math.sqrt(number_sources))
     tiler_columns=int(math.ceil((1.0*number_sources)/tiler_rows))
     tiler.set_property("rows",tiler_rows)

Hi,
We put the engine file at

$ ls -all /home/nvidia/resnet10.caffemodel_b1_int8.engine
-rw-rw-r-- 1 nvidia nvidia 4380695  三   4 17:08 /home/nvidia/resnet10.caffemodel_b1_int8.engine

and set

g_object_set (G_OBJECT (pgie), "model-engine-file",
      "/home/nvidia/resnet10.caffemodel_b1_int8.engine", NULL);

It does not re-generate the engine file. We have not tried the Python code, but ideally it should work the same way.

That part isn’t the problem. If the .engine is already generated, I assume it will load when the property is set to that path at runtime.

Unfortunately, if you apply that patch you will see that the model-engine-file property is not respected when the engine file needs to be generated: nvinfer always writes it to the model’s location.

This makes my app design more difficult, since the model install path is not writable and I don’t intend to change that. The best workaround I can see right now is to copy the model into my ~/.app/models folder at startup, calculate the mangled .engine filename from the model’s absolute path, and try to use that.

Ideally, model-engine-file would just be a basename and nvinfer could figure out which .engine it needs to generate or load. Alternatively, a model-engine-path property could dump/load the usual generated name, model_b{size}_{precision}.engine. Basically, a cache path is what I am hoping for.
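For the workaround, something along these lines is what I have in mind (just a sketch; the ~/.app cache directory and helper name are only illustrative):

import os

# Build the mangled engine name nvinfer generates (model file name +
# _b{batch}_{precision}.engine, per the logs above) and place it under a
# writable per-user cache directory instead of the model install path.
def cached_engine_path(model_path: str, batch_size: int, precision: str = "fp32") -> str:
    cache_dir = os.path.join(os.path.expanduser("~"), ".app", "models")
    os.makedirs(cache_dir, mode=0o755, exist_ok=True)
    engine_name = f"{os.path.basename(model_path)}_b{batch_size}_{precision}.engine"
    return os.path.join(cache_dir, engine_name)

# cached_engine_path("/usr/local/share/app/models/fd_lpd.caffemodel", 1)
# -> ~/.app/models/fd_lpd.caffemodel_b1_fp32.engine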

Hi,
Please check useEngineFile() in

deepstream_sdk_v4.0.2_jetson\sources\libs\nvdsinfer\nvdsinfer_context_impl.cpp

The default code is

ifstream gieModelFile(initParams.modelEngineFilePath);
if (!gieModelFile.good())
{
    printWarning("Failed to read from model engine file");
    return NVDSINFER_CONFIG_FAILED;
}

You may take a look and customize it to fit your usecase.

Thanks. That’s a big help. Is there a git repository I can clone? I’d like to push changes upstream if possible so people who download my app don’t need a custom version of nvinfer.

Hi,

No. It is installed through SDKManager, and currently there is no git repository. Since the code is open source, it should be fine to push it to your own GitHub.

I will do that and submit a patch via email/PM if I make any changes. For now, however, I think I am just going to have my app ln/copy its model into ~/.app/models/ on app start if it doesn’t already exist, then set model-engine-file at runtime. Thanks again for your advice and help. I understand a lot more now about how these properties are used.
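For reference, the startup step I have in mind looks roughly like this (a sketch; the paths are placeholders, and the copy fallback is just in case symlinking isn’t possible):

import os
import shutil

# Stage the installed, read-only model under a writable per-user directory if
# it is not already there: prefer a symlink ("ln"), fall back to a copy.
def stage_model(installed_model: str, app_model_dir: str) -> str:
    os.makedirs(app_model_dir, mode=0o755, exist_ok=True)
    staged = os.path.join(app_model_dir, os.path.basename(installed_model))
    if not os.path.exists(staged):
        try:
            os.symlink(installed_model, staged)
        except OSError:
            shutil.copy2(installed_model, staged)
    return staged

staged_model = stage_model("/usr/local/share/app/models/fd_lpd.caffemodel",
                           os.path.expanduser("~/.app/models"))
# Matches the mangled name from the logs; set this on nvinfer at runtime.
engine_file = f"{staged_model}_b1_fp32.engine"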