Live face recognition

Hi all,

Using FaceNet, I trained on 20k images and created .ckpt (graph) files. Their total size is around 1 GB.
A camera will be connected to the TK1 device, which then needs to perform live face authentication.
The Jetson TK1 has only about 2 GB of RAM, which also has to hold the Ubuntu OS.

The following are the questions from the above:

Q1) Please suggest the steps required to import the binary files to the TK1 device.

Q2) Since the binary file is huge, how can I perform face recognition within 2 seconds?
Please suggest the relevant performance measures to be taken.

Q3) Can I increase the RAM to 4 GB?

Q4) Is battery backup of up to 12 hours possible?

Thanks
vijay

Q1: Are you asking how to transfer files to the Jetson, or are you asking about some other procedure such as file type conversion?

Q2: I don’t know, though I will suggest that having the binary already read into memory is required if you want to avoid file read time. Loading a file or network source quickly is just one part of that puzzle; perhaps what you want is not possible, as there isn’t enough detail here for anyone to answer. In some cases you can write code to skip transferring the file to the Jetson as a file, and instead have your program read it directly from the remote system over the network as it arrives (the same data, but used directly without saving it as a file first; you would adapt the file read to become a network read).
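
As a rough illustration (a minimal sketch only, assuming a TensorFlow 1.x checkpoint like the one described; the file path and tensor names are placeholders, not your actual graph), the idea is to pay the ~1 GB load cost once at startup and keep the session in memory, so each camera frame only pays for the forward pass:

import numpy as np
import tensorflow as tf

# Load the checkpoint ONCE at startup -- this is the slow, large read.
sess = tf.Session()
saver = tf.train.import_meta_graph('model.ckpt.meta')   # placeholder path
saver.restore(sess, 'model.ckpt')

images_in = sess.graph.get_tensor_by_name('input:0')        # assumed input tensor name
embeddings = sess.graph.get_tensor_by_name('embeddings:0')  # assumed output tensor name

def recognize(face_crop):
    # Per-frame work: only the forward pass, no file I/O.
    batch = np.expand_dims(face_crop, 0).astype(np.float32)
    return sess.run(embeddings, feed_dict={images_in: batch})

Each incoming frame then calls recognize() against the already-loaded graph instead of touching the 1 GB files again.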

Q3: It isn’t possible to add more RAM on the default carrier board, and I’m not even sure it is possible on a custom carrier board. About all you can do is add swap so that tasks other than the one you care about don’t compete as much for RAM.

Q4: It shouldn’t be too difficult to get 12 hours of battery operation; it just depends on how much weight you can tolerate. A TK1 uses far less power than your average laptop, but finding out how much battery you actually need might require some experimentation.

Hi,

Q1):
Currently, if you want to run a TensorFlow model on Jetson, you need to install TensorFlow directly.
A tutorial is here: CUDA Musing: Building TensorFlow for Jetson TK1

Our next release will support TensorFlow models via TensorRT.
But please note that TensorRT is 64-bit only, so it is not available on the TK1.
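
If you stay with plain TensorFlow on the TK1 for now, one preparatory step (a sketch only; the checkpoint path and output node name are placeholders for your own graph) is to freeze the .ckpt into a single constant GraphDef. That bundles the graph and weights into one file, which is simpler to deploy and load on the device, and it is also the form that later conversion tools generally expect:

import tensorflow as tf
from tensorflow.python.framework import graph_util

ckpt_path = 'model.ckpt'       # placeholder: your checkpoint prefix
output_node = 'embeddings'     # placeholder: your graph's output node name

with tf.Session() as sess:
    saver = tf.train.import_meta_graph(ckpt_path + '.meta')
    saver.restore(sess, ckpt_path)

    # Fold the trained variables into constants so graph + weights live in one file.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, [output_node])

with tf.gfile.GFile('frozen_model.pb', 'wb') as f:
    f.write(frozen.SerializeToString())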

Q2):
I will recommend getting a TX2 device first.
We have a face recognition sample with TensorRT on the TX2; performance is 15 FPS.
We use detectNet (trained on FDDB) for detection and GoogleNet (trained on VGG_Face) for recognition.
If you need to use the TK1, the recommended framework is Caffe, since TensorFlow needs too many resources to achieve good performance.
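
As a rough sketch of that two-stage detect-then-recognize flow with Caffe's Python interface (the prototxt/caffemodel paths, output blob names, and box format below are placeholders and assumptions, not the actual sample's files):

import cv2
import numpy as np
import caffe

caffe.set_mode_gpu()

# Placeholders: substitute your own detection and recognition models.
detector = caffe.Net('detect_deploy.prototxt', 'detect.caffemodel', caffe.TEST)
recognizer = caffe.Net('face_deploy.prototxt', 'face.caffemodel', caffe.TEST)

def to_blob(img, net):
    # Resize to the net's input size and reorder HWC (BGR) -> NCHW float32.
    _, _, h, w = net.blobs['data'].data.shape
    resized = cv2.resize(img, (w, h)).astype(np.float32)
    return resized.transpose(2, 0, 1)[np.newaxis, :]

def recognize_faces(frame):
    # Stage 1: detect face boxes; stage 2: classify each detected crop.
    detector.blobs['data'].data[...] = to_blob(frame, detector)
    boxes = detector.forward()['bboxes']            # assumed output blob and [x1,y1,x2,y2] format

    identities = []
    for x1, y1, x2, y2 in boxes.reshape(-1, 4).astype(int):
        recognizer.blobs['data'].data[...] = to_blob(frame[y1:y2, x1:x2], recognizer)
        scores = recognizer.forward()['prob']       # assumed softmax output blob
        identities.append(int(np.argmax(scores)))
    return identities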

Q3):
Please use swap, but note that swap has poorer performance.

Q4):
Please check the reply by linuxdev. : )

Thanks.

Hi,

I have tried the above sample and it all works fine! (slightly off topic as I am using a TX2)

Your link on GitHub jumped to the wrong location.

How did you train the recognition model? I used DIGITS to create a huge training set (from VGG_Face) and tested it using DIGITS, where it works as expected. I then deployed it and ran the code against the above sample, but it failed in tensorNet.cpp at:

for (auto& s : outputs) network->markOutput(*blobNameToTensor->find(s.c_str()));

The blob name was bboxes_fd. I presume this fails for all of the blobs?
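
In case it helps anyone hitting the same markOutput failure, a quick way to list which blob names a deploy prototxt actually exposes (a small sketch using Caffe's Python interface; the file paths are just placeholders) is:

import caffe

# Placeholders: point these at the deploy prototxt / caffemodel exported from DIGITS.
net = caffe.Net('deploy.prototxt', 'snapshot.caffemodel', caffe.TEST)

print('all blobs:', list(net.blobs.keys()))   # every blob defined in the prototxt
print('outputs  :', net.outputs)              # blobs Caffe treats as network outputs

# blobNameToTensor->find() in tensorNet.cpp can only resolve names that exist in
# the parsed prototxt, so 'bboxes_fd' has to be defined there for markOutput to succeed.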

So that's why I ask: how did you train it, so that I can get my set working with the sample code?

Thanks

Hi, Polyhedrus

We received your question in another topic as well.
Let us follow up on this issue in topic 1035526:
https://devtalk.nvidia.com/default/topic/1035526

Thanks

Hello!
Where can I find more explanation about face recognition on the Jetson TX2?
You said that face recognition on the TX2 uses detectNet for face detection and GoogleNet for recognition.

  1. I want to know more about it. How can I apply it to my own dataset?
  2. Also, I trained my DNN using VGG_Face with the Caffe framework and got 95% accuracy, but how can I run inference on the Jetson with the model I got?