I have played a bit with the Python examples (like detectnet-console.py) and I think they are working fine.
I have a follow-up question: I would like to use an image that is already in memory instead of going via the file system. The image arrives over MQTT (sent as bytes) and is decoded with OpenCV:
import cv2
import numpy as np

# convert the bytes of image data to a uint8 array
# (np.frombuffer replaces the deprecated np.fromstring)
nparr = np.frombuffer(msg.payload, np.uint8)
# decode the JPEG payload into a BGR image, then convert to RGB
image = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
But instead of using it directly, I currently have to save it to disk, load it again, and then run detection:
cv2.imwrite(cnbr+'.jpg', image)
# load an image (into shared CPU/GPU memory)
img, width, height = jetson.utils.loadImageRGBA(cnbr+'.jpg')
detections = net.Detect(img, width, height, "box,labels,conf")
This works, but it seems unnecessary since the image is already in memory to begin with. How can I pass it to the detector directly? I tried cudaFromNumpy(…) but could not make it work; perhaps my numpy ndarray had the wrong format.
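For reference, this is roughly what I would expect to work, based on converting the array to RGBA float32 so it matches what loadImageRGBA() produces (an untested sketch on my side; net and msg are the same objects as above):

```python
import cv2
import numpy as np
import jetson.utils

# decode the MQTT payload as before
nparr = np.frombuffer(msg.payload, np.uint8)
image = cv2.imdecode(nparr, cv2.IMREAD_COLOR)  # BGR, uint8

# convert to RGBA float32, the layout the legacy RGBA-float API expects
rgba = cv2.cvtColor(image, cv2.COLOR_BGR2RGBA).astype(np.float32)

# copy the numpy array into shared CPU/GPU memory
cuda_img = jetson.utils.cudaFromNumpy(rgba)

height, width = rgba.shape[:2]
detections = net.Detect(cuda_img, width, height, "box,labels,conf")
```

If this is the right idea, the disk round-trip (imwrite/loadImageRGBA) could be dropped entirely.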
Finally, I would like to produce a result image in the same format as the original, but with the detections drawn on it. Currently I am using
jetson.utils.saveImageRGBA("myoutput.jpg", img, width, height)
but that writes to a file, and I would prefer to keep the result in memory.
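Ideally I would do something like the following instead: copy the overlaid image back to a numpy array with cudaToNumpy() and re-encode it with OpenCV, so the JPEG bytes can be published straight over MQTT. Again an untested sketch, and I am assuming the legacy 4-channel float format here:

```python
import cv2
import numpy as np
import jetson.utils

# wait for the GPU to finish drawing the overlay before reading back
jetson.utils.cudaDeviceSynchronize()

# copy the RGBA float32 image back to host memory as a numpy array
array = jetson.utils.cudaToNumpy(img, width, height, 4)

# convert to uint8 BGR for OpenCV
bgr = cv2.cvtColor(array.astype(np.uint8), cv2.COLOR_RGBA2BGR)

# encode to JPEG bytes in memory instead of writing a file
ok, buf = cv2.imencode('.jpg', bgr)
jpg_bytes = buf.tobytes()  # ready to publish over MQTT
```

That would make the whole pipeline (MQTT in, detection, MQTT out) run without touching the file system.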