I am trying to run a Python script on a Jetson Nano board that performs face detection and embedding calculation using the dlib library. When I print the 128-dimensional embeddings, the values sometimes come out as very large numbers and sometimes as NaN. The same script prints correct values on all my other devices, such as a TX2 or an i386/amd64 Linux machine.
I have tried installing dlib both ways, with `pip install dlib` and by building from source, and I get the same result in both cases.
Could anyone please suggest how this problem can be resolved?
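For reference, this is a minimal sketch of the kind of sanity check I use to spot the bad values. The function name and the magnitude threshold are my own choices, not anything from dlib; dlib face embeddings are normally small floats, so NaNs or huge magnitudes indicate the broken computation:

```python
import math

def embedding_is_valid(embedding, max_abs=1.0):
    """Return True if all 128 values are finite and in a plausible range.

    dlib face embeddings are typically small floats (roughly -1..1),
    so a NaN or a huge magnitude indicates the broken computation.
    The max_abs threshold is an assumption for illustration.
    """
    return (
        len(embedding) == 128
        and all(math.isfinite(v) and abs(v) <= max_abs for v in embedding)
    )

# A healthy embedding passes, a corrupted one fails:
good = [-0.08488056] * 128
bad = [float("nan")] + [1e30] * 127
print(embedding_is_valid(good))  # True
print(embedding_is_valid(bad))   # False
```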
I have the same issue: my code runs on the TX1, TX2, and Xavier without problems, but it produces the same error on the Nano.
I tried both dlib 19.16 and 19.17.
The `face_recognition.batch_face_locations` function outputs correct face locations using the CNN model, but the issue is with `face_recognition.face_encodings`: its output is very large numbers or NaNs.
We originally thought this might be caused by a different OpenCV or dlib version across platforms, but that does not seem to be the case. For the use case where the error occurs, we suspect it may be related to an out-of-memory (OOM) condition.
Could you try running your application with cuda-memcheck to get more information?
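For example, something like the following (the script name is a placeholder for your own script; cuda-memcheck ships with the CUDA toolkit included in JetPack):

```shell
# Hypothetical invocation: replace face_script.py with your own script.
# cuda-memcheck reports out-of-bounds and misaligned CUDA memory accesses.
cuda-memcheck python3 face_script.py

# Optionally write the report to a file so it can be attached to the thread:
cuda-memcheck --log-file memcheck.log python3 face_script.py
```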
It appears that the patch you provided works as a temporary solution.
Again using the sample code from earlier in the thread, my test results are below.
Expected: -0.08488056
Received: -0.08488055
There is a slight change in accuracy, but that is probably from using a different model in the dlib library?
I will still be waiting for the official patch when it comes out, but I can confirm that this works as an immediate solution.
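For what it's worth, the difference above is well within float32 rounding, so I would not call it an accuracy regression. A quick sketch of how I compared the values (the tolerance choice is mine, not from dlib):

```python
import math

expected = -0.08488056
received = -0.08488055

# dlib embeddings are computed in float32, so a difference of ~1e-8 at this
# magnitude is one or two ulps of rounding, not a real accuracy problem.
print(math.isclose(expected, received, abs_tol=1e-6))  # True
print(abs(expected - received))  # on the order of 1e-8
```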
For anyone else who needs to use the workaround above: if you previously installed dlib from pip, make sure you remove it via pip before/after running the setup.py in the instructions above. If you do not, you will still get NaN and accuracy errors despite manually compiling and installing dlib.
I can also confirm that pulling the current version of dlib (19.17 at the moment) via git and applying this patch works.