NVIDIA Webinar — Breaking New Frontiers in Robotics and Edge Computing with AI


Breaking New Frontiers in Robotics and Edge Computing With AI
Develop Autonomous Robots and Other Intelligent Machines with NVIDIA Jetson TX2

Thursday, July 20, 2017
1:00pm – 2:00pm EST

Following up on our previous webinar, Embedded Deep Learning, tune in to the latest talk in the series, Breaking New Frontiers in Robotics and Edge Computing with AI. This NVIDIA webinar will cover the latest tools and techniques for deploying advanced AI at the edge, including Jetson TX2 and TensorRT. Get up to speed on recent developments in robotics and deep learning.

By participating you’ll learn:

  • How to build high-performance, energy-efficient embedded systems
  • Workflows for training AI in the cloud and deploying at the edge
  • The upcoming JetPack release and its performance improvements
  • Real-time deep learning primitives for autonomous navigation
  • NVIDIA’s latest Isaac Initiative for robotics

Slides (pdf) — available here.
On Demand — http://info.nvidia.com/breaking-new-frontiers-in-robotics-and-edge-computing-with-ai-jetson-tx2-webinar_RegPage.html

https://www.youtube.com/watch?v=QisCRGmidJ4

Hello Jetson Team

For creating synthetic training data for the simulation, what technique do you use? I have a use case where the parts of a product to be assembled need to be trained in the simulator for the vision system.
I was thinking about using VisualSFM to create 3D objects from photographs and then cleaning them up in Meshmixer. Any comments or ideas?

I’m not NVIDIA, but I’ve done some mechanical simulation before.
If you have the CAD models for the objects in question, then using those would be much better than trying to scan them.
You may even have more luck re-modeling the objects than trying to create accurate replicas with SfM. If you have zero drafting expertise, there are also more accurate, mechanically oriented 3D scanners out there that you might want to use.
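
Whichever route you take (CAD export, SfM, or a scanner), it can help to sanity-check the resulting mesh programmatically before it goes anywhere near a simulator. Here is a minimal, illustrative sketch using the trimesh Python library; the file names are placeholders, not anything from this thread:

    import trimesh

    # Hypothetical file name for one assembly part.
    mesh = trimesh.load("part_a.stl")

    # Merge duplicate vertices and drop degenerate faces left over
    # from scanning or CAD export.
    mesh.process()

    print("watertight:", mesh.is_watertight)  # holes break volume and collision queries
    print("triangles:", len(mesh.faces))

    # Try to close small holes before handing the mesh to a simulator.
    if not mesh.is_watertight:
        mesh.fill_holes()

    mesh.export("part_a_clean.obj")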

Then you need a good geometric simulation library to actually detect possible penetrations when performing the motions. You could test with game engines, which are reasonably affordable (or free!) and fast, although you'd ideally want a library with continuous (swept) collision detection, which is not always a priority in those engines. (Bullet has it.)
The drawback is that you'd have to write significant chunks of code to integrate those libraries with your model formats and simulation data. If you already have a framework or tool that works well enough, it's probably not worth it.
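
As a rough illustration of the kind of penetration check described above, here is a minimal sketch using pybullet (the Python bindings for Bullet). It only does a discrete check at a single pose, not the swept/continuous test you'd want for full motions, and the mesh file names are hypothetical:

    import pybullet as p

    # Headless physics server; no GUI needed for batch checks.
    p.connect(p.DIRECT)

    # Load the two part meshes as collision shapes. Note that pybullet
    # treats GEOM_MESH shapes as convex hulls, so concave parts may need
    # convex decomposition first.
    col_a = p.createCollisionShape(p.GEOM_MESH, fileName="part_a.obj")
    col_b = p.createCollisionShape(p.GEOM_MESH, fileName="part_b.obj")

    body_a = p.createMultiBody(baseMass=0, baseCollisionShapeIndex=col_a,
                               basePosition=[0, 0, 0])
    body_b = p.createMultiBody(baseMass=0, baseCollisionShapeIndex=col_b,
                               basePosition=[0, 0, 0.05])

    # Query closest points at the current pose; a negative contact
    # distance means the parts interpenetrate in this configuration.
    for pt in p.getClosestPoints(bodyA=body_a, bodyB=body_b, distance=0.0):
        print("contact distance:", pt[8])

    p.disconnect()

For an actual assembly sequence you'd step the poses along the insertion path and repeat the query, or rely on the engine's continuous collision support.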

If I’m not mistaken, a new release, JetPack 3.1, was announced; it includes TensorRT 2 and was said to be published within a week or so.

Correct, the pending release of JetPack 3.1 is slated for later today. EDIT: It’s live!

The webinar and slides have also been posted for viewing On Demand.