Pretrained Models for detectnet - Vehicles

Hello,

I’m currently using detectnet-console and I’m wondering whether any other pretrained models are available. Specifically, I’m looking for a vehicle model: cars, ideally, though cars and trucks would be fine.

If I absolutely need to, I can look at training my own, but I don’t want to jump into that aspect just yet.

Hi,

There is a relevant pre-trained model in the deepstream package.
It’s a ResNet-18 network for detecting three classes of objects: cars, people, and two-wheelers.

The model can be downloaded here:

Thanks.

AastaLLL,

Thank you so much for the help. I downloaded the SDK and I’m looking through the package. Are you referring to resnet18.caffemodel? Should I be able to just run detectnet-console with it like this:

./detectnet-console dog_1.jpg output_1.jpg resnet18.caffemodel, or are there any further steps I need to take?

There’s also a DIGITS tutorial linked from jetson-inference; it produces a vehicle DetectNet model trained on the KITTI dataset:
[url]https://github.com/NVIDIA/DIGITS/tree/master/examples/object-detection[/url]

Hello, I’m having an issue with the resnet18 model recommended.

I moved the resnet18 folder into the networks folder located at jetson-inference/data/networks, then went to the build folder and re-compiled with cmake .. and make.

I followed the instructions in jetson-inference ([url]https://github.com/dusty-nv/jetson-inference[/url]) for how to use a new model.

I went into the resnet18 folder and verified that the cluster layer was not present in deploy.prototxt.

Next I went to the bin folder, jetson-inference/build/aarch64/bin, where I set the NET variable and ran detectnet-console with the new arguments:

$ NET=networks/resnet18

$ ./detectnet-console 'input file' 'output file' \
    --prototxt=$NET/deploy.prototxt \
    --model=$NET/resnet18.caffemodel \
    --input_blob=data \
    --output_cvg=coverage \
    --output_bbox=bboxes

However, when I do this it continues to default to the ped-100 model. Am I missing something? Any suggestions?

Hi,

Could you run the make command again?

There are some required copy and create-link procedures in the CMakeLists:
[url]https://github.com/dusty-nv/jetson-inference/blob/master/CMakeLists.txt[/url]

Thanks.

@mascenzi80, is this your result?

nvidia@tegra-ubuntu:~/Desktop/jetson-inference/aarch64/bin$ ./detectnet-console drone_0436.png test.png \ --prototxt=$NET/deploy.prototxt \ --model=$NET/resnet18.caffemodel \ --input_blob=data \ --output_cvg=coverage \
> --output_bbox=bboxes
detectnet-console
  args (8):  0 [./detectnet-console]  1 [drone_0436.png]  2 [test.png]  3 [ --prototxt=networks/resnet18/deploy.prototxt]  4 [ --model=networks/resnet18/resnet18.caffemodel]  5 [ --input_blob=data]  6 [ --output_cvg=coverage]  7 [--output_bbox=bboxes]  

detectNet -- loading detection network model from:
          -- prototxt    networks/ped-100/deploy.prototxt
          -- model       networks/ped-100/snapshot_iter_70800.caffemodel
          -- input_blob  'data'
          -- output_cvg  'coverage'
          -- output_bbox 'bboxes'
          -- mean_pixel  0.000000
          -- threshold   0.500000
          -- batch_size  2

[GIE]  TensorRT version 2.1, build 2102
[GIE]  attempting to open cache file networks/ped-100/snapshot_iter_70800.caffemodel.2.tensorcache
[GIE]  cache file not found, profiling network model
[GIE]  platform has FP16 support.
[GIE]  loading networks/ped-100/deploy.prototxt networks/ped-100/snapshot_iter_70800.caffemodel
could not open file networks/ped-100/snapshot_iter_70800.caffemodel

Modify detectNet.cpp :)

detectNet* detectNet::Create( NetworkType networkType, float threshold, uint32_t maxBatchSize )
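
The idea is to hard-code your model into that overload, the same way the built-in networks are. A rough sketch of the kind of change (the RESNET18 enum value and the exact Create() argument list below are assumptions based on older jetson-inference sources, so check your own detectNet.h):

// In detectNet::Create( NetworkType networkType, ... ), map a hypothetical
// RESNET18 enum value (added to detectNet.h) to the DeepStream model files:
if( networkType == RESNET18 )
    return Create( "networks/resnet18/deploy.prototxt",
                   "networks/resnet18/resnet18.caffemodel",
                   NULL,                        // the DeepStream package ships no mean.binaryproto
                   threshold,
                   DETECTNET_DEFAULT_INPUT,     // "data"
                   DETECTNET_DEFAULT_COVERAGE,  // default coverage blob name
                   DETECTNET_DEFAULT_BBOX,      // default bbox blob name
                   maxBatchSize );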

AastaLLL,

So what I ended up doing was deleting the build directory and starting fresh. I moved the resnet18 directory into the data/networks directory, returned to a fresh build directory, and ran cmake .. again. After several minutes of cmake running, I proceeded to make again.

After that I once again went to bin and tested it out. I got the same results.

The results that RaviKiranK posted are exactly what I got with the same command. The link you provided looked just like code, not steps for me to follow. Is that the case?

Hey, before I forget: thank you for your help.

RaviKiranK,

What you posted is exactly the result that I’m seeing. Any insight?

Michael

Hi,

We just checked detectnet-console with a custom model; it runs correctly in our environment.
Could you set NET to an absolute path and give it a try?

Here are our steps for your reference:
1. Put the custom model in HOME (/home/nvidia/DetectNet)

2. Run (no re-make needed):

$ NET=/home/nvidia/DetectNet
$ ./detectnet-console dog_0.jpg output_0.jpg \
  --prototxt=$NET/detectnet_fddb.prototxt \
  --model=$NET/detectnet_fddb.caffemodel \
  --input_blob=data \ 
  --output_cvg=coverage \
  --output_bbox=bboxes

Thanks.

AastaLLL,

Happy New Year!

So I figured out what I was doing wrong. It’s now working properly, mostly. I’m getting some errors and failed attempts.

[GIE]    TensorRT version 2.1, build 2102
[GIE]    attempting to open cache file /home/nvidia/jetson-inference/build/aarch64/bin/networks/resnet18/resnet18.caffemodel.2.tensorcache
[GIE]    cache file not found, profiling network model
[GIE]    platform has FP16 support.
[GIE]    loading /home/nvidia/jetson-inference/build/aarch64/bin/networks/resnet18/deploy.prototxt /home/nvidia/jetson-inference/build/aarch64/bin/networks/resnet18/resnet18.caffemodel
[GIE]    failed to retrieve tensor for output 'coverage'
[GIE]    failed to retrieve tensor for output 'bboxes'
Segmentation fault (core dumped)

When I downloaded the DeepStream SDK, the resnet18 folder only had four files in it:

*CalibrationTable
*deploy.prototxt
*labels.txt
*resnet18.caffemodel

Should there be more files present?

If I run any of the other provided models, such as Coco-Airplane or Coco-Bottle, everything works just fine.

Hi,

Could you try your model with the default TensorRT executable first?

cp -r /usr/src/tensorrt/ .
cd tensorrt/samples/
make
cd ../bin/
./giexec --deploy=/path/to/prototxt --output=/name/of/output

Thanks.

Just a clarification:
What should the execution command look like?

./giexec --deploy=/home/nvidia/jetson-inference/build/aarch64/bin/networks/resnet18 --output=?

What should the output value look like? I tried just a name, a name within quotes, etc.

After following your steps exactly, what should I be seeing?

Hi,

output is the output layer name of your model.

For example, your use case should be:

./giexec --deploy=/home/nvidia/jetson-inference/build/aarch64/bin/networks/resnet18/deploy.prototxt --output=coverage --output=bboxes

This check will narrow down whether the issue is in TensorRT or jetson-inference.
Thanks.

AastaLLL,

Excellent thank you!

Still no success. Here are my input and output.

nvidia@tegra-ubuntu:~/tensorrt/bin$ ./giexec --deploy=/home/nvidia/jetson-inference/build/aarch64/bin/networks/resnet18 --output=coverage --output=bboxes
deploy: /home/nvidia/jetson-inference/build/aarch64/bin/networks/resnet18
output: coverage
output: bboxes
could not find output blob coverage
Engine could not be created
Engine could not be created
nvidia@tegra-ubuntu:~/tensorrt/bin$

The output blob layer name in your deploy.prototxt should be “coverage”; if you don’t have it, you should add it or change the name.

Hi,

For ResNet-18, the output layers should be Layer11_cov and Layer11_bbox.

./giexec --deploy=/home/nvidia/jetson-inference/build/aarch64/bin/networks/resnet18/deploy.prototxt --output=Layer11_cov --output=Layer11_bbox

Back to your problem: DeepStream and jetson-inference are two different frameworks for the object detection use case.
The network and the network output handling may have some differences.

1. Please remember to correct the output blob names (see the sketch after this list):
https://github.com/dusty-nv/jetson-inference/blob/master/detectNet.h#L40

2. Please compare the implementation of bounding box drawing and apply the modification if needed.
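
For example, the detectNet.h change would look roughly like this (a sketch only; please verify the exact macro names at the link above):

// detectNet.h -- default blob names used when none are passed on the command line.
// Re-pointing the coverage/bbox defaults at the ResNet-18 layer names is an
// assumption based on the layer names given above; adjust to match your prototxt.
#define DETECTNET_DEFAULT_INPUT     "data"
#define DETECTNET_DEFAULT_COVERAGE  "Layer11_cov"    // was "coverage"
#define DETECTNET_DEFAULT_BBOX      "Layer11_bbox"   // was "bboxes"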

Thanks.

AastaLLL,

Yeah, so I’m having no luck in this direction. I appreciate all the help you have given.

As we speak, I’m running DIGITS to build a model on AWS. I’m hoping this works out better.

Again thank you so much for the help.

This video might be helpful for you

RaviKiranK,

Thank you for the link. I’ve been watching a number of YouTube videos on it. The one question I have regarding DIGITS is how to adjust the batch size, learning rate, etc. based on the number of GPUs I’m using.

For instance, I’m following the instructions here: [url]https://github.com/NVIDIA/DIGITS/tree/master/examples/object-detection[/url]

But I think it’s only running a single GPU for the testing. I ended up going with the p2.16xlarge instance, which has the following:

16 NVIDIA K80 GPUs
64 vCPUs
732 GB RAM
192 GB GPU memory

as I wanted to get this trained as fast as possible. But what settings would really take advantage of that hardware?

I know this is off-topic, so I’m going to post the question as a new post.