DIGITS or something else

Hi,

I have been trying for quite some time now to use DIGITS with our Jetson Nano to get DetectNet models running for object detection. I have run into a lot of errors even while following the jetson-inference tutorial to a T. Are there any other programs that work better? We are using the current version of JetPack, and we won’t have WiFi, so we need to be able to run the tracking program locally. Any help would be appreciated!

Thanks,
Brayden Coyer

DIGITS isn’t meant to run on the Nano, which may be the cause of some of the errors you are running into. It’s meant to be run on a Linux desktop/server with NVIDIA GPUs (and usually nvidia-docker).

DIGITS is used to train something that you can run on the Nano. The Nano itself is not really suitable for most kinds of training.

I am not sure what kind of “tracking” you are interested in doing on the Nano, but DeepStream may be suitable for you, and it includes various samples and sample models.

You can install DeepStream with a “sudo apt install deepstream-4.0” (on the Nano) if the latest JetPack is installed.

You may find the samples, etc., afterwards in:
/opt/nvidia/deepstream/deepstream-4.0

We want to be able to track an object with a camera while the camera is moving and know exactly where in the image it is. Also, do you know if there is any documentation on how to collect your own datasets? We only want to train it on our own pictures of balls. Thanks!!

mdegans covered it well; DIGITS isn’t officially supported on ARM. As presented in the jetson-inference tutorial, DIGITS is intended to run on an x86_64 PC or server running Ubuntu.

What I will add is that I’ve been transitioning the Hello AI World portion of the jetson-inference tutorial to do training (transfer learning) onboard Jetson using PyTorch. You can find that step for image classification here: https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-transfer-learning.md
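At its core, that tutorial is transfer learning with torchvision: start from an ImageNet-pretrained network and re-train the final layer on your own classes. A rough sketch of the idea (the tutorial’s own training script adds argument parsing, validation, checkpointing, etc.; the dataset path, batch size, and epoch count below are just placeholders):

# Minimal transfer-learning sketch with PyTorch/torchvision (illustrative only)
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical dataset laid out as dataset/train/<class_name>/*.jpg
train_data = datasets.ImageFolder(
    "dataset/train",
    transform=transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ]))
loader = torch.utils.data.DataLoader(train_data, batch_size=8, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the classifier layer
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

for epoch in range(5):                      # a handful of epochs, for illustration
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")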

Hi dusty-NV, we already collected our own datasets and got imagenet working to see our ball, but we wanted to use detectnet so we could get the position of the ball on the screen. If there is an easier way to do this, please let me know :)

If your ball is always visually distinctive enough from your background (a specific hue, for example), you might be able to use a simple blob detection/tracking library to do what you want (think PS Eye). It all depends on your use case.
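For example, a very rough sketch of that approach with plain OpenCV in Python (no deep learning; the camera index and HSV thresholds are placeholders you would tune for your ball and lighting):

# Simple color-blob tracking sketch with OpenCV (no deep learning)
import cv2
import numpy as np

cap = cv2.VideoCapture(0)               # or a GStreamer pipeline string for the Pi camera
lower = np.array([20, 100, 100])        # rough lower bound for "yellow" in HSV (placeholder)
upper = np.array([35, 255, 255])        # rough upper bound (placeholder)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)            # keep only "yellow" pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # OpenCV 4.x returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(largest)
        if radius > 5:                               # ignore tiny noise blobs
            cv2.circle(frame, (int(x), int(y)), int(radius), (0, 255, 0), 2)
            print(f"ball at ({x:.0f}, {y:.0f}), radius {radius:.0f}")
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()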

The SSD-Mobilenet and SSD-Inception models are currently trained with TensorFlow (which is why they do not appear yet in the training portion of the Hello AI World tutorial, which uses PyTorch). So you could re-train these models on your dataset with the TensorFlow Object Detection API (ideally running on a PC/server/cloud instance, or it might run on the Nano - albeit slowly and with extra swap space). Here are some resources about that:

Alternatively, you could install/run DIGITS on a PC and use it to train DetectNet, however the SSD-based models above have improved performance.

Yes, our ball is very distinctive! It’s bright yellow on a grey floor and the only yellow object. Do you have any resources to help? We are using a Pi camera on the Jetson.

Using CUDA?

There is a suggestion to use “cv::cuda::HoughCirclesDetector” on this thread:

You will have to build and install OpenCV with CUDA support first for that specifically. You can search the forum for threads providing scripts that will do that for you.
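Roughly, the usage pattern looks like this (shown with the CPU cv2.HoughCircles in Python for brevity; the cv::cuda detector follows the same idea once OpenCV is built with the CUDA modules, and all the parameters below are placeholders to tune):

# Circle-detection sketch with OpenCV's Hough transform (CPU version shown)
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")                      # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                       # reduce noise before Hough

circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT,
    dp=1.2, minDist=50,                              # tune for your image size
    param1=100, param2=30,                           # Canny / accumulator thresholds
    minRadius=10, maxRadius=200)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(frame, (x, y), r, (0, 255, 0), 2)
        print(f"circle at ({x}, {y}), radius {r}")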

Dusty_nv may have some better suggestions for such a simple use case. If you google CUDA blob detection you are likely to find many other code samples and suggestions. There are many approaches to this problem.

Hi Dusty,

First of all: I very much appreciate the work and time you are putting into the community; over the last months I’ve learned a lot reading your and other people’s directions in relation to the Jetson Nano.

Second, unfortunately I have not yet succeeded in training my own object detector using the Nano and TensorFlow. Besides the above links, I’ve followed many more tutorials and (re-)flashed my SD card (64 GB, to avoid memory issues) over and over again to make sure that I began from scratch. I’ve got the latest SD card image for the Nano, but I keep stumbling when I want to start the training: the TensorFlow model_train.py fails with the message “import tensorflow.contrib.slim as slim ModuleNotFoundError: No module named ‘tensorflow.contrib’”. When I search the net for this message, it appears that the contrib module was removed in TensorFlow 2; unfortunately, I did not have success with the 1.xx TensorFlow versions either.
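For reference, the failure boils down to this import (tf.contrib, including slim, was removed in TensorFlow 2.0, which is why the TF1-era training script trips over it):

# What the training script effectively attempts on my setup
import tensorflow as tf
print(tf.__version__)                      # prints a 2.x version on my install

import tensorflow.contrib.slim as slim     # ModuleNotFoundError: No module named 'tensorflow.contrib'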

Maybe I am suffering from “tunnel vision” by now, but I have not been able to find one clear step-by-step tutorial to get training going with my custom images. To summarise: I’ve got my image dataset (dice, showing either the 1 or the 6), I’ve labelled the images (using LabelImage), I’ve created the TFRecord files for training and testing, I’ve installed TensorFlow, protobuf, cocoapi, etc. successfully, and I have sufficient swap memory available… and then the above error shows.

Can you, or anybody else reading this, please point me to an A-to-Z tutorial that leads me through creating a trained object detector for my custom images?

Many many thanks in advance!

Best regards,
Richard

Hi, I really want to train SSD-Mobilenet on the Nano, because:

  1. The dusty_nv tutorial is easy to follow. :>
  2. I only have a Windows PC, and I did try Linux in a virtual machine, but there were errors and library dependency issues using DIGITS.
  3. I have my model trained in Azure, and it has several output formats (ONNX, Docker, TensorFlow). How can I import my model into the Nano?
  4. If I train in, for example, TensorFlow on Windows, I can copy the model to the Nano without any modification, right? I couldn’t find an official TensorFlow example of training SSD-Mobilenet V2 (Tutorial | TensorFlow Core); is there any link I can follow?

Thanks