Hi,
Has anyone been able to use VisionWorks with TensorFlow? I am trying to use VisionWorks for the image processing and then TensorFlow for object detection. However, TensorFlow doesn't seem to allow other programs to share the GPU. I was wondering if there is any way in VisionWorks to stop the CUDA context? Or, if you have run both together, how did you do it?
Here is the error TensorFlow reports: current context was not created by the StreamExecutor cuda_driver API: 0x4aa180; a CUDA runtime call was likely performed without using a StreamExecutor context
Thank you for your assistance!
Edit: It seems the issue might be solvable with the approach in Context in use? · Issue #526 · tensorflow/tensorflow · GitHub, but the NVIDIA TX1 doesn't have the nvidia-smi command.
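For anyone hitting the same problem: a common way to let another CUDA client share the GPU with TensorFlow (1.x) is to stop TensorFlow from grabbing all device memory at startup. This is only a sketch based on the standard session-config options; the 0.5 fraction is an arbitrary example value, not something from the issue above.

```python
# Sketch: configure a TF 1.x session so it does not claim the whole GPU,
# leaving memory for another CUDA context (e.g. VisionWorks).
import tensorflow as tf

config = tf.ConfigProto()
# Allocate GPU memory on demand instead of all at once.
config.gpu_options.allow_growth = True
# Optionally cap TensorFlow's share of device memory (0.5 is an example).
config.gpu_options.per_process_gpu_memory_fraction = 0.5

with tf.Session(config=config) as sess:
    # Run the object-detection graph here.
    pass
```

Whether this also resolves the StreamExecutor context error on the TX1 I can't confirm; it at least addresses the memory-sharing side.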