Following up on our previous webinar, Embedded Deep Learning, tune in to the latest talk in the series, Breaking New Frontiers in Robotics and Edge Computing with AI. This NVIDIA webinar will cover the latest tools and techniques for deploying advanced AI at the edge, including Jetson TX2 and TensorRT. Get up to speed on recent developments in robotics and deep learning.
By participating you’ll learn:
How to build high-performance, energy-efficient embedded systems
Workflows for training AI in the cloud and deploying at the edge
The latest upcoming JetPack release and its performance improvements
Real-time deep learning primitives for autonomous navigation
What technique do you use for creating synthetic training data for the simulation? My use case is to train the vision system in the simulator on the parts of a product to be assembled.
I was thinking about using VisualSFM to create a 3D object from photographs and then cleaning it up in Meshmixer. Any comments or ideas?
I’m not with NVIDIA, but I’ve done some mechanical simulation before.
If you have the CAD models for the objects in question, then using those would be much better than trying to scan them.
You may even have more luck re-modeling the objects than trying to create accurate replicas with SfM. If you have no drafting expertise, there are also more accurate, mechanically oriented 3D scanners out there that you might want to use.
Then you need a good geometric simulation library to actually detect possible penetrations when performing the motions. You could test with game engines, which are reasonably affordable (or free!) and fast, although you’d preferably want to use a library with continuous (swept) motion support, which is not always a priority in those libraries. (Bullet has it.)
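To make the continuous (swept) motion idea concrete, here is a minimal pure-Python sketch of a swept-sphere collision test, the simplest form of continuous collision detection. It is not Bullet code; it just illustrates why a swept test catches contacts that a step-by-step overlap check can tunnel past. All names and shapes here are illustrative assumptions, not part of any particular library.

```python
import math

def swept_sphere_hit(p0, p1, r_moving, center, r_static):
    """Continuous (swept) collision test: a sphere of radius r_moving
    moves linearly from p0 to p1. Does it touch a static sphere at
    `center` with radius r_static? Returns the earliest contact time
    t in [0, 1], or None if the whole motion is collision-free."""
    d = [b - a for a, b in zip(p0, p1)]       # motion vector
    f = [a - c for a, c in zip(p0, center)]   # start offset from the static sphere
    r = r_moving + r_static                   # combined radius
    # Solve |f + t*d|^2 = r^2 for t (a quadratic in t).
    a = sum(x * x for x in d)
    b = 2 * sum(x * y for x, y in zip(f, d))
    c = sum(x * x for x in f) - r * r
    if a == 0:                                # no motion: plain overlap test
        return 0.0 if c <= 0 else None
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                           # paths never come within r
    t = (-b - math.sqrt(disc)) / (2 * a)      # earliest root
    if 0 <= t <= 1:
        return t
    return 0.0 if c <= 0 else None            # already overlapping at start
```

For example, a sphere of radius 0.5 moving from (0,0,0) to (4,0,0) first touches a static sphere of radius 0.5 at (2,0,0) at t = 0.25, even if a discrete check at t = 0 and t = 1 would report no overlap at either endpoint. Libraries like Bullet do the analogous sweep for convex meshes rather than spheres.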
The drawback is that you’d have to write significant chunks of code to integrate those libraries with your model formats and simulation data. If you already have a framework or tool that works well enough, it’s probably not worth it.