NVIDIA Webinar — TensorFlow to TensorRT on Jetson

AI at the Edge: TensorFlow to TensorRT on Jetson

Interested in running TensorFlow networks optimally on Jetson TX1/TX2? Deep neural networks developed with TensorFlow can be deployed on NVIDIA Jetson and accelerated up to 5x with TensorRT. This webinar covers the conversion of pretrained TensorFlow image classification models to TensorRT for deployment on the Jetson TX2 platform.

By attending this webinar, you’ll learn:

  1. How TensorFlow performance compares with TensorRT using popular models like Inception and MobileNet
  2. System setup for running TensorFlow and TensorRT on the Jetson
  3. How to inspect a TensorFlow graph for TensorRT compatibility
  4. Workflows for converting TensorFlow image classification models to TensorRT (see the sketch after this list)
  5. How to execute TensorRT image classification models on the Jetson TX2
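
For a rough idea of how items 3-5 fit together, here is a minimal sketch of the UFF-based flow in Python. It assumes a TensorRT release that ships the Python bindings and UFF parser; the frozen-graph path, the input/output tensor names, and the 299x299 input resolution are placeholders for an inception_v3-style model.

  import tensorflow as tf  # TensorFlow 1.x API
  import tensorrt as trt
  import uff

  # Load the frozen TensorFlow graph (path is a placeholder).
  graph_def = tf.GraphDef()
  with tf.gfile.GFile("frozen_inception_v3.pb", "rb") as f:
      graph_def.ParseFromString(f.read())

  # Item 3: list the op types in the graph to spot layers the UFF
  # parser may not support (e.g. control-flow or custom ops).
  print(sorted({node.op for node in graph_def.node}))

  # Item 4: convert the graph to UFF (output node name is a placeholder).
  uff_model = uff.from_tensorflow(graph_def, ["InceptionV3/Predictions/Reshape_1"])

  # Item 5: build and serialize a TensorRT engine for the TX2.
  TRT_LOGGER = trt.Logger(trt.Logger.INFO)
  with trt.Builder(TRT_LOGGER) as builder, \
          builder.create_network() as network, \
          trt.UffParser() as parser:
      parser.register_input("input", (3, 299, 299))  # CHW input shape
      parser.register_output("InceptionV3/Predictions/Reshape_1")
      parser.parse_buffer(uff_model, network)
      builder.max_workspace_size = 1 << 28
      builder.fp16_mode = True  # Jetson TX2 supports fast FP16
      engine = builder.build_cuda_engine(network)
      with open("inception_v3.engine", "wb") as f:
          f.write(engine.serialize())

The serialized engine can then be deserialized by the TensorRT runtime on the TX2 for inference.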

Date & Time — Thursday, March 8, 2018, 11:00am - 12:00pm Pacific
Registration — http://info.nvidia.com/tensorflow-for-jetson-mar-2018-reg-page.html

Don’t forget to watch our previous webinars in this series via the On Demand recordings below:

Breaking New Frontiers in Robotics and Edge Computing With AI
Develop Autonomous Robots and Other Intelligent Machines with NVIDIA Jetson TX2

Thursday, July 20, 2017
1:00pm – 2:00pm EST

Following up on our previous webinar, Embedded Deep Learning, tune in to the latest talk in the series, Breaking New Frontiers in Robotics and Edge Computing with AI. This NVIDIA webinar will cover the latest tools and techniques to deploy advanced AI at the edge, including Jetson TX2 and TensorRT. Get up to speed on recent developments in robotics and deep learning.

By participating you’ll learn:

  • How to build high-performance, energy-efficient embedded systems
  • Workflows for training AI in the cloud and deploying at the edge
  • The upcoming JetPack release and its performance improvements
  • Real-time deep learning primitives for autonomous navigation
  • NVIDIA’s latest Isaac Initiative for robotics

Slides (pdf) — available here.
On Demand — http://info.nvidia.com/breaking-new-frontiers-in-robotics-and-edge-computing-with-ai-jetson-tx2-webinar_RegPage.html

https://www.youtube.com/watch?v=QisCRGmidJ4

Build Advanced Multi-Camera Products with Jetson

Register today for a free webinar on quickly developing new applications for robotics, industrial automation, and smart cities on NVIDIA Jetson, the fastest computing platform for AI at the edge.

Learn how our camera partners provide product development support as well as image tuning services for advanced camera solutions.

By attending this webinar, you’ll learn:

  1. Who NVIDIA’s preferred camera solution partners are and how they can help you build complex imaging products
  2. How e-con Systems synchronized six cameras on a single Jetson
  3. Which off-the-shelf camera modules, built by our camera partners for Jetson, are available for your next project

Join us after the presentation for live Q&A.
On Demand — http://info.nvidia.com/in-focus-build-advanced-multi-camera-products-with-nvidia-jetson-RegPage.html

Build Your Next Deep Learning Application for NVIDIA Jetson in MATLAB

Register today for our next Jetson Webinar featuring Bill Chou, product manager for MATLAB Coder and GPU Coder at MathWorks.

Join this free webinar to learn how you can use MATLAB to automatically generate highly optimized, portable CUDA code that, when cross-compiled and deployed to Jetson, increases deep learning inference performance.

By attending this webinar, you’ll learn how to:

  1. Access and manage large image sets
  2. Visualize networks and gain insight into the training process
  3. Import reference networks such as AlexNet and GoogLeNet
  4. Automatically generate portable and optimized CUDA code from the MATLAB algorithm

Join us after the presentation for live Q&A.
On Demand — http://info.nvidia.com/build-your-next-nvidia-jetson-deep-learning-application-in-matlab-reg-page.html

Develop and Deploy Deep Learning Services at the Edge with IBM

Join us for a free webinar with Chris Dye from IBM and Amit Goel from NVIDIA to learn how IBM’s new Edge Solution and Watson IoT platform enable you to securely and autonomously deploy deep learning services on an NVIDIA Jetson TX2.

Learn how IBM’s Edge Solution leverages JetPack 3.2’s Docker support and enables developers to easily build, test, and deploy complex cognitive services with GPU access for vision and audio inference, analytics, and other deep learning services.

By attending this webinar, you’ll learn how to:

  • Deploy custom Jetson TX2 deep learning services and multi-service patterns over IBM’s Edge Solution.
  • Demo AI and deep learning inference on a Jetson TX2, leveraging TensorFlow, OpenCV, Keras, TensorRT, and Watson-Intu.
  • Manage and collect insights from multiple edge nodes using IBM’s Watson IoT platform and display aggregate statistics.

Join us after the presentation for a live Q&A session.

Date & Time — Wednesday, January 24, 2018, 11:00am - 12:00pm Pacific
On Demand — http://info.nvidia.com/develop-and-deploy-deep-learning-services-at-the-edge-with-ibm-reg-page.html

Any updates on using the Plugin API for unsupported layers with TensorFlow models parsed via the UFF parser?

Hi,

A reply might not be appropriate, but I do not see how to start a new topic.
I am not even sure this is the right thread for this comment.

I attended the webinar and downloaded the TensorRT package. It works on my TX2. I wanted to compare the TensorRT classification scores with those of the TensorFlow model.

I am working with the inception_v3 model. The results of classifying the image data/images/lifeboat.jpg

  • with TensorFlow are:
    0 9.46784 626 lifeboat
    1 5.38329 815 speedboat
    2 5.22412 555 fireboat
    3 5.22268 511 container ship, containership, container vessel
    4 3.76027 409 amphibian, amphibious vehicle

  • with TensorRT are:
    0 10.371737 626 lifeboat
    1 3.901006 415 backpack, back pack, knapsack, packsack, rucksack, haversack
    2 2.526643 815 speedboat
    3 2.167920 736 poncho
    4 2.089388 601 hook, claw

This seems like more than just a rounding error.
If the TensorFlow results were not as good as the TensorRT results, I would be looking at the scripts I used to run TensorFlow, but since they appear better I felt compelled to ask.

I reformatted the outputs from classify_image.cu and included the scores to enable easy comparison with the Python script used to run TensorFlow.
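
In case it helps reproduce the numbers, here is a sketch of how the same image can be run through a frozen TensorFlow inception_v3 graph to get a top-5 listing in the same format (the frozen-graph path, tensor names, and Inception-style preprocessing are assumptions):

  import numpy as np
  import tensorflow as tf  # TensorFlow 1.x API
  from PIL import Image

  # Load the frozen graph (path is a placeholder).
  graph_def = tf.GraphDef()
  with tf.gfile.GFile("frozen_inception_v3.pb", "rb") as f:
      graph_def.ParseFromString(f.read())
  with tf.Graph().as_default() as graph:
      tf.import_graph_def(graph_def, name="")

  # Inception-style preprocessing: resize to 299x299, scale to [-1, 1].
  img = Image.open("data/images/lifeboat.jpg").resize((299, 299))
  x = np.asarray(img, dtype=np.float32)[np.newaxis, ...] / 127.5 - 1.0

  with tf.Session(graph=graph) as sess:
      logits = sess.run("InceptionV3/Logits/SpatialSqueeze:0",
                        feed_dict={"input:0": x})

  # Print rank, score, and class index of the top-5 classes.
  for rank, cls in enumerate(np.argsort(logits[0])[::-1][:5]):
      print(rank, logits[0][cls], cls)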

Any thoughts? Anyone?

Hi Wahaj, TensorFlow 1.7 includes integrated TensorRT support, which supports custom layers since it is still a native TensorFlow environment.
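
For reference, the TF-TRT integration in TensorFlow 1.7 is invoked roughly as in the sketch below; the frozen-graph path and output node name are placeholders.

  import tensorflow as tf  # TensorFlow 1.7 with contrib.tensorrt
  import tensorflow.contrib.tensorrt as tftrt

  # Load a frozen TensorFlow graph (path is a placeholder).
  graph_def = tf.GraphDef()
  with tf.gfile.GFile("frozen_inception_v3.pb", "rb") as f:
      graph_def.ParseFromString(f.read())

  # Replace TensorRT-compatible subgraphs with TRT engine ops;
  # unsupported or custom layers keep running as native TensorFlow ops.
  trt_graph = tftrt.create_inference_graph(
      input_graph_def=graph_def,
      outputs=["InceptionV3/Predictions/Reshape_1"],
      max_batch_size=1,
      max_workspace_size_bytes=1 << 28,
      precision_mode="FP16",
      minimum_segment_size=3)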

Hi dbusby, you probably want to create a separate topic for this issue.
To do this, go to the main Jetson TX2 board, and click the ‘Create Topic’ button near the top of the page.

That’s a really appreciable improvement to TensorRT.

thanks @dusty_nv

Is the recording available? Thanks