nvpmodel and jetson_clocks

Hi everyone,

I have some questions related to the power modes and the ./jetson_clocks script.

I have tried MobileNet from TensorFlow under the different nvpmodel modes and noticed that nvpmodel -m 3 performed better than nvpmodel -m 0 (which is meant to provide the best performance). Looking at resource usage, I saw that mode -m 3 used 95% of the RAM while -m 0 used only 50%. What influence does the power mode have on RAM usage?

Another question: if -m 0 is meant to provide the maximum clock frequencies for the CPUs and GPU, why do we get improved performance by running the ./jetson_clocks script?

I thank you for the insights,

Hi,

Thanks for your feedback.
You can find the corresponding setting of nvpmodel here:

In summary:
nvpmodel sets the available max/min frequencies to the mode's preferred values, while jetson_clocks pins the frequency at the maximum.
For example:
nvpmodel -m 0

  • Raise max_freq to 1300 (hw max), set min_freq to 114 (hw min)
  • curr_freq will be between 114 and 1300

./jetson_clocks.sh

  • Fix the frequency at 1300 by overwriting min_freq = max_freq = 1300
  • curr_freq will always equal 1300
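This min/max behavior can be checked directly through the standard Linux cpufreq sysfs interface. A minimal sketch (the cpu0 path is the usual location, but the entries may not exist on every kernel, so the script guards each read):

```shell
#!/bin/sh
# Print the cpufreq policy for CPU0 (values are in kHz).
# nvpmodel adjusts scaling_min_freq/scaling_max_freq; jetson_clocks
# sets them equal, so scaling_cur_freq stays pinned at the maximum.
CPU=/sys/devices/system/cpu/cpu0/cpufreq
for f in scaling_min_freq scaling_max_freq scaling_cur_freq; do
  if [ -r "$CPU/$f" ]; then
    echo "$f: $(cat "$CPU/$f")"
  else
    echo "$f: not available on this system"
  fi
done
```

After running jetson_clocks you should see scaling_min_freq and scaling_max_freq report the same value.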

We did not observe MAX-P ARM mode (-m 3) giving better performance in our TensorFlow use cases.
We will check this internally and post an update here if we find anything.

Thanks.

Hi AastaLLL,

Thank you very much for the clarifications. Could you tell me where I can find information about the frequency ranges for the five operational modes implemented by the nvpmodel tool? I have benchmarked all of them and would like to understand my findings in more depth.

I thank you very much.

You may find these settings in /etc/nvpmodel.conf.
Once you have selected one mode, you can get details of clocks with

sudo jetson_clocks.sh --show
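To extract the per-mode frequency caps programmatically, the conf file can be parsed with awk. A minimal sketch working on a hypothetical excerpt in the nvpmodel.conf style (the mode names and kHz values here are illustrative, not the real TX2 table; point the same awk at /etc/nvpmodel.conf on the board):

```shell
#!/bin/sh
# Write a tiny sample in the nvpmodel.conf style, then list each
# power mode's CPU frequency cap. The values below are made up
# purely for the demo.
cat > /tmp/nvpmodel_sample.conf <<'EOF'
< POWER_MODEL ID=0 NAME=MAXN >
CPU_A57 MAX_FREQ 2035200
< POWER_MODEL ID=3 NAME=MAXP_CORE_ARM >
CPU_A57 MAX_FREQ 2035200
EOF
awk '/POWER_MODEL/ { mode = $0 }
     /MAX_FREQ/    { print mode " -> " $NF " kHz" }' /tmp/nvpmodel_sample.conf
```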

Hi,

You can find the detailed configuration of nvpmodel in our documentation:
https://developer.nvidia.com/embedded/dlc/l4t-documentation-28-2
>> Clock Frequency and Power Management
   >> Power Management for TX2 Devices
      >> Max-Q and Max-P Power Efficiency

We have tested the inference time of TensorFlow/MobileNet and cannot reproduce this issue:

nvpmodel mode    Inference time
0                0.21 ms
1                0.30 ms
2                0.46 ms
3                0.37 ms
4                0.60 ms

One possible reason:
when you reset the configuration with nvpmodel, the GPU falls back to dynamic frequency scaling.
Please remember to re-run jetson_clocks.sh to lock the frequencies at maximum after each mode change.
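Put together, the workflow above amounts to the following commands, run on the Jetson itself (on L4T 28.x the jetson_clocks.sh script typically sits in the home directory):

```shell
sudo nvpmodel -m 0        # select the power mode (GPU reverts to dynamic scaling)
sudo ./jetson_clocks.sh   # re-lock CPU/GPU clocks at the mode's maximum
sudo nvpmodel -q          # optional: confirm which mode is active
```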

Thanks.