Module Power of TX2 vs TX2i

Hello,

I have a couple of questions about the TX2i

  1. The datasheet indicates that Max-P goes from 15W on the TX2 to 20W on the TX2i. So is this saying that for the same amount of FLOPS/performance, power goes up by 5W on the TX2i? Why is that, if it’s the same process node?

  2. When the module power is stated as 15W for the TX2 and 20W for the TX2i, is this at room temperature or at hot temperature (80/85°C)?

  3. For the module power Max-P, does this assume close to 100% CPU usage and 100% GPU usage?

Hi johnq, the difference between the two is most pronounced at the extremes of the temperature range, where the TX2i goes up to 85°C. The TX2i also has some different components for industrial environments, such as the RAM and power circuitry. And yes, Max-P performance implies a loaded system.

Hi Dusty,

I find it interesting that you indicated that 15W for the TX2 is at 80°C with full load. I ran a test with all six CPU cores 100% loaded and the GPU 80% loaded, and the Jetson reported 15W at a 40°C TTP (Thermal Transfer Plate) interface and 19W at an 80°C TTP. So the number in the datasheet appears to be for room temperature, not hot temperature. Please let me know.

Thanks.

There is typically some normal deviation from those figures depending on the particular application, benchmark, power measurement method, etc. It’s best to profile it for the conditions of your specific environment the way you are doing, while also consulting the Jetson TX2 Thermal Design Guide in the process.

Is there a power modeling application from NVIDIA to gauge power consumption under specific loads? Does it have to be tested in the lab? If so, what tests are you running to load all the CPUs/GPUs?

Thanks

The module power consumption is measurable in real time via the INA sensors onboard the module.
You can read the INAs via commands found in the Jetson documentation.

The tegrastats console tool can also read out the usage information.
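If it helps, here is a minimal Python sketch of polling those sensors through sysfs. The ina3221x device path and node names below are assumptions based on one L4T release and can differ on yours, so verify them against the documentation:

```python
# Minimal sketch: read the module's INA3221 power monitors via sysfs.
# NOTE: the sysfs path below is an assumption -- it varies between L4T
# releases and carrier boards, so confirm the nodes on your own system.
import glob

for dev in glob.glob("/sys/bus/i2c/drivers/ina3221x/*/iio:device*"):
    # Each INA3221 exposes up to three rails; rail_name_N holds the rail's
    # label and in_powerN_input its measured power in milliwatts.
    for ch in range(3):
        try:
            with open(f"{dev}/rail_name_{ch}") as f:
                name = f.read().strip()
            with open(f"{dev}/in_power{ch}_input") as f:
                power_mw = int(f.read().strip())
            print(f"{name}: {power_mw} mW")
        except OSError:
            continue  # channel not populated on this device
```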

Thanks dusty_nv! This looks like what I need. Any comment on my last question about loading the resources while taking power measurements?

There are a variety of tests you could run; for example, the N-body sample that comes with CUDA is intensive. You could run TensorRT inferencing via trtexec with a large number of runs and in FP16 mode. For the CPU there are a variety of programs you could use, e.g. compression or encryption; I know that for benchmarking we use SPECint.
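To give a concrete (if simplistic) example of the CPU side, here is a small Python load generator that spins a busy loop on every core; the GPU would still need to be loaded separately, e.g. with the N-body sample or trtexec mentioned above:

```python
# Minimal sketch: saturate all CPU cores with busy-loop workers so power
# can be sampled under full CPU load. Run your GPU workload (N-body,
# trtexec, etc.) in parallel from another shell.
import multiprocessing as mp
import time

def burn(seconds):
    """Spin in a tight floating-point loop for roughly `seconds` seconds."""
    end = time.time() + seconds
    x = 1.0
    while time.time() < end:
        x = x * 1.0000001 + 1e-9  # arbitrary arithmetic to keep the core busy

if __name__ == "__main__":
    duration = 300  # seconds; size this to cover your measurement window
    workers = [mp.Process(target=burn, args=(duration,))
               for _ in range(mp.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```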

Regardless, I would monitor the core utilizations via tegrastats, and you can run various programs until your system is sufficiently loaded.
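For logging, a sketch of streaming tegrastats from Python while the loads run; it assumes the tegrastats binary is on your PATH (on older L4T releases it lives in the home directory) and emits one sample per second by default:

```python
# Minimal sketch: stream tegrastats and print each sample so CPU/GPU
# utilization (and the power readouts it includes) can be logged while
# the load programs run. May require sudo depending on the release.
import subprocess

proc = subprocess.Popen(["tegrastats"], stdout=subprocess.PIPE, text=True)
try:
    for line in proc.stdout:
        print(line.rstrip())  # or parse fields such as the CPU [...] and GR3D entries
except KeyboardInterrupt:
    proc.terminate()
```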