Deploying with L4T/flash.sh breaks on my carrier board

Hi, our carrier board fails part way through flashing. Our engineers have verified that the design is (or at least appears to be) functionally equivalent to the developer carrier board. The errors given by tegrarcm_v2 and tegradevflash_v2 are not as helpful as they could be, because I have no idea what actually caused the error (and I don’t have source code for them).

So, I am hoping for a little insight into what mechanism is actually failing at this point during the flash. Anything you can share will help me debug this problem in our hardware. If you can suggest a workaround, that would also be very welcome.

A successful flash looks like this at the point of interest:

[  19.6585 ] Added binary blob_tegra186-quill-p3489-1000-a00-00-ucm1_sigheader.dtb.encrypt of size 338704
[  19.6596 ] 
[  19.6597 ] Sending bootloader and pre-requisite binaries
[  19.6607 ] tegrarcm_v2 --download blob blob.bin
[  19.6615 ] Applet version 01.00.0000
[  19.6643 ] Sending blob
...
[  20.0771 ] 
[  20.0803 ] tegrarcm_v2 --boot recovery
[  20.0828 ] Applet version 01.00.0000
[  20.0871 ] 
[  21.0903 ] tegrarcm_v2 --isapplet
[  21.0925 ] USB communication failed.Check if device is in recovery
[  21.7936 ] 
[  21.7984 ] tegradevflash_v2 --iscpubl
[  21.7998 ] Bootloader version 01.00.0000
[  21.8047 ] Bootloader version 01.00.0000
[  21.8063 ] 
[  21.8064 ] Retrieving storage infomation
[  21.8073 ] tegrarcm_v2 --oem platformdetails storage storage_info.bin
[  21.8084 ] Applet is not running on device. Continue with Bootloader
[  21.8117 ]
[  21.8128 ] tegradevflash_v2 --oem platformdetails storage storage_info.bin
[  21.8139 ] Bootloader version 01.00.0000
[  21.8167 ] Saved platform info in storage_info.bin
[  21.8203 ] 
[  21.8203 ] Flashing the device
[  21.8215 ] tegraparser_v2 --storageinfo storage_info.bin --generategpt --pt flash.xml.bin
[  21.8226 ] 
[  21.8235 ] tegradevflash_v2 --pt flash.xml.bin --create
[  21.8242 ] Bootloader version 01.00.0000
[  21.8260 ] Erasing sdmmc_boot: 3 ......... [Done]
[  21.8617 ] Writing partition secondary_gpt with gpt_secondary_0_3.bin
[  21.8623 ] [................................................] 100%

[  21.9129 ] Erasing sdmmc_user: 3 ......... [Done]
[  25.0495 ] Writing partition master_boot_record with mbr_1_3.bin
[  25.0506 ] [................................................] 100%

I notice that the initial error happens on both systems, but the developer carrier seems to recover from (or ignore) the error and continue. Both say the applet is not running, but that does not seem to matter on the developer carrier.

Here is the log from the flash that fails.

[  24.1728 ] tegrahost_v2 --appendsigheader blob_tegra186-quill-p3489-1000-a00-00-ucm1.dtb zerosbk
[  24.1767 ] 
[  24.1806 ] tegrasign_v2 --key None --list blob_tegra186-quill-p3489-1000-a00-00-ucm1_sigheader.dtb_list.xml
[  24.1828 ] Assuming zero filled SBK key
[  24.2085 ] 
[  24.2132 ] tegrahost_v2 --updatesigheader blob_tegra186-quill-p3489-1000-a00-00-ucm1_sigheader.dtb.encrypt blob_tegra186-quill-p3489-1000-a00-00-ucm1_sigheader.dtb.hash zerosbk
[  24.2186 ] 
[  24.2223 ] tegrahost_v2 --chip 0x18 --generateblob blob.xml blob.bin
[  24.2248 ] number of images in blob are 9
[  24.2258 ] blobsize is 3701352
[  24.2262 ] Added binary blob_nvtboot_recovery_cpu_sigheader.bin.encrypt of size 203312
[  24.2310 ] Added binary blob_nvtboot_recovery_sigheader.bin.encrypt of size 89360
[  24.2324 ] Added binary blob_preboot_d15_prod_cr_sigheader.bin.encrypt of size 63104
[  24.2337 ] Added binary blob_mce_mts_d15_prod_cr_sigheader.bin.encrypt of size 2082144
[  24.2351 ] Added binary blob_bpmp_sigheader.bin.encrypt of size 533904
[  24.2362 ] Added binary blob_tegra186-a02-bpmp-storm-p3489-a00-00-ta795sa-ucm1_sigheader.dtb.encrypt of size 76080
[  24.2380 ] Added binary blob_tos-trusty_sigheader.img.encrypt of size 313152
[  24.2391 ] Added binary blob_eks_sigheader.img.encrypt of size 1440
[  24.2401 ] Added binary blob_tegra186-quill-p3489-1000-a00-00-ucm1_sigheader.dtb.encrypt of size 338704
[  24.2418 ] 
[  24.2420 ] Sending bootloader and pre-requisite binaries
[  24.2448 ] tegrarcm_v2 --download blob blob.bin
[  24.2471 ] Applet version 01.00.0000
[  24.2759 ] Sending blob
[  24.2769 ] [..............                                  ] 028%
[  24.2769 ] [............................                    ] 056%
[  24.2769 ] [..........................................      ] 084%
[  24.2769 ] [................................................] 100%
[  24.7092 ] 
[  24.7145 ] tegrarcm_v2 --boot recovery
[  24.7178 ] Applet version 01.00.0000
[  24.7370 ] 
[  25.7419 ] tegrarcm_v2 --isapplet
[  25.7445 ] USB communication failed.Check if device is in recovery
[ 111.0295 ] 
[ 111.0354 ] tegradevflash_v2 --iscpubl
[ 111.0386 ] Cannot Open USB
[ 196.0186 ] 
[ 197.0302 ] tegrarcm_v2 --isapplet
[ 197.0333 ] USB communication failed.Check if device is in recovery
[ 209.3401 ] 
[ 209.3492 ] tegradevflash_v2 --iscpubl
[ 209.3530 ] Cannot Open USB
[ 212.5000 ] 
[ 213.5057 ] tegrarcm_v2 --isapplet
[ 213.5083 ] USB communication failed.Check if device is in recovery
[ 216.6598 ] 
[ 216.6672 ] tegradevflash_v2 --iscpubl
[ 216.6712 ] Cannot Open USB
[ 219.8122 ]
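Comparing the two logs by eye is tedious; since every flash-log line carries a `[ seconds ]` timestamp, a short script can flag where the tools stall (the long retry gaps around the `Cannot Open USB` messages). This is a minimal sketch, not an NVIDIA tool — the line format and the 5-second threshold are assumptions based on the output above:

```python
import re

TS_LINE = re.compile(r"^\[\s*([0-9.]+)\s*\]\s*(.*)$")

def find_usb_gaps(log_lines, gap_threshold=5.0):
    """Return (gap_seconds, previous_message, next_message) tuples wherever
    the flash log stalls longer than gap_threshold seconds."""
    gaps = []
    prev_ts, prev_msg = None, None
    for line in log_lines:
        m = TS_LINE.match(line)
        if not m:
            continue
        ts, msg = float(m.group(1)), m.group(2)
        if prev_ts is not None and ts - prev_ts > gap_threshold:
            gaps.append((round(ts - prev_ts, 2), prev_msg, msg))
        prev_ts, prev_msg = ts, msg
    return gaps

# Sample lines taken from the failing log above:
log = """\
[  25.7419 ] tegrarcm_v2 --isapplet
[  25.7445 ] USB communication failed.Check if device is in recovery
[ 111.0295 ]
[ 111.0354 ] tegradevflash_v2 --iscpubl
[ 111.0386 ] Cannot Open USB
""".splitlines()

for gap, before, after in find_usb_gaps(log):
    print(f"{gap:7.2f}s stall after: {before!r}")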

hello crveit,

the flashing process needs to put the board into forced-recovery mode for BootRom communication.
could you please enter forced-recovery mode and check the connection;
for example, you should see an NVidia Corp. entry in the list.

$ lsusb
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 004: ID 046d:c30e Logitech, Inc. UltraX Keyboard (Y-BL49)
Bus 001 Device 046: ID 0403:6011 Future Technology Devices International, Ltd FT4232H Quad HS USB-UART/FIFO IC
Bus 001 Device 103: ID 0955:7c18 NVidia Corp.
Bus 001 Device 003: ID 0424:2412 Standard Microsystems Corp.
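This check can also be scripted. Here is a hedged sketch (vendor ID 0955 is NVIDIA's; the pattern is an assumption based on the listing above), guarded so it is harmless on hosts where lsusb is not installed:

```python
import re
import shutil
import subprocess

def recovery_devices(lsusb_output: str) -> list:
    """Return NVIDIA (vendor ID 0955) device IDs found in lsusb text output."""
    return re.findall(r"ID (0955:[0-9a-fA-F]{4})", lsusb_output)

# Guarded so this sketch degrades gracefully when lsusb is absent.
if shutil.which("lsusb"):
    out = subprocess.run(["lsusb"], capture_output=True, text=True).stdout
    print("NVIDIA recovery device(s):", recovery_devices(out) or "none found")
```

Running this in a loop while a flash is in progress can also reveal whether the device disappears from the bus at the failure point.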

lsusb does show the “NVidia Corp.” entry when I start the flash. As you can see at time=24.24 to 24.27, I am able to flash at least one binary named blob.bin. Unfortunately, after that I lose the connection. I think the state of the USB port reverts to host mode. It looks like it does the same thing on the developer carrier, but on the dev carrier it seems to be able to get back into forced-recovery. On our board that fails.
We have spent the last two weeks ensuring our circuit is the same as the dev carrier, but we still have the failure after a partial success. That’s why I’m asking for insight into what those tools are actually trying to do at the point they emit those messages. That might help me understand what actually failed.

Are you using a VM for the host PC? If so, then this will cause the issue. VMs do not correctly handle the multiple disconnects/reconnects of USB during flash (the parent OS must pass that device through not just once, but on every disconnect/reconnect).

I should have mentioned that I am flashing from a 64-bit Intel laptop running Ubuntu 18.04 on the bare machine (not in a VM). I’ve read many of the forum posts related to this topic and can correctly flash the development kit. But I am debugging our carrier design, and even though the functionality related to the OTG port appears to be the same, I fail at the point shown above. Because I do not have source for the executables mentioned above, I only have the error messages they give. It would be helpful to know what they were attempting to do when the errors occurred, and what the actual error codes were. That info is not available to me currently because it is in the tegrarcm_v2 and tegradevflash_v2 source code. It would be nice if those routines reported the error codes when they failed so we could look them up.
Of course any other useful insights about what is happening at that point in the flash process would be very welcome.

hello crveit,

could you please try the steps below to put your board into forced-recovery mode for another test.

  1. disconnect the power to fully shutdown the device.
  2. connect the power, and also hold the recovery button.
  3. press the power button to power on the device, hold the recovery button for a few seconds, then release it.
  4. execute the script to flash the target.
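Step 4 can be wrapped in a small pre-flight guard that waits until the recovery-mode device is actually visible before launching the flash script. This is a generic sketch, not part of L4T; the lsusb probe and flash.sh invocation in the comment are hypothetical usage:

```python
import time

def wait_for_recovery(probe, attempts=10, delay=1.0):
    """Poll probe() until the module shows up in recovery mode.

    `probe` is any zero-argument callable that returns True when the
    recovery-mode USB device is visible (e.g. an lsusb check).
    Returns True on success, False after `attempts` tries.
    """
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False

# Hypothetical usage before running the flash script:
# import subprocess
# if wait_for_recovery(lambda: "0955:" in subprocess.run(
#         ["lsusb"], capture_output=True, text=True).stdout):
#     subprocess.run(["sudo", "./flash.sh", "jetson-tx2", "mmcblk0p1"])
```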

I have tested again following your steps. I waited about 10 seconds, because I wait for the current draw to stabilize before releasing the recovery switch. The results are identical to the above, with the flash running several minutes and succeeding up to exactly the point shown above, where it fails.

Just to double check I also tried to flash without properly getting into recovery mode. If I do that the flash process fails less than 1 second into the process with “Error: probing the target board failed.” As you can see the log shown above has gone far past that point.

I have been using that process for several weeks while trying to debug but on our carrier design it fails at the point shown after a partial flash.

Please answer the questions I asked above so I can get insight as to why the module drops out of recovery mode after doing a partial flash. I really need to debug this board. We have a customer calling daily for solutions to this problem. I went through all the appropriate forum posts, trying everything I could, before even posting this question.

One other thought: if Nvidia engineers don’t have time to help solve this, can you provide source for tegrarcm_v2 and tegradevflash_v2 so we can properly examine what is happening at the point of failure? That would also allow us to print the actual error codes at those points. Without the code, part of the system is a black box, and we have no insight into what was happening when our design fails.

If you are not using the micro-B USB cable which comes with the Jetson (e.g., if you purchased a module without the micro-B cable, or if you use a charger cable), then you might consider a better quality cable. Most “charger” cables were not really designed for data quality and are made as cheaply as possible (they work for a bit of data, but are unreliable in longer/larger/faster transfers).

I am using the cable from my developer kit. It works fine with the kit. Good comment though; I’ve heard from others that some cables fail to work as well.

This is a shot in the dark, but perhaps it is the key. I have swapped back and forth between the TX2 8GB (default) SoM and the TX2 4GB SoM. At one point I provided the wrong <target_board> to the flash script and I got the same pattern of errors you are getting.

[   7.3156 ] tegrarcm_v2 --download blob blob.bin
[   7.3163 ] Applet version 01.00.0000
[   7.3352 ] Sending blob
[   7.3353 ] [................................................] 100%
[   7.9239 ] 
[   7.9249 ] tegrarcm_v2 --boot recovery
[   7.9257 ] Applet version 01.00.0000
[   7.9451 ] 
[   8.9486 ] tegrarcm_v2 --isapplet
[   8.9755 ] 
[   8.9766 ] tegradevflash_v2 --iscpubl
[   8.9777 ] CPU Bootloader is not running on device.
[   8.9954 ] 
[   9.9978 ] tegrarcm_v2 --isapplet
[  10.0238 ] 
[  10.0261 ] tegradevflash_v2 --iscpubl
[  10.0284 ] CPU Bootloader is not running on device.
...
[  19.4445 ] tegrarcm_v2 --isapplet
[  19.4454 ] USB communication failed.Check if device is in recovery
[  19.4463 ] 
[  19.4471 ] tegradevflash_v2 --iscpubl
[  19.4479 ] Cannot Open USB
[  19.4635 ] 
[  20.4659 ] tegrarcm_v2 --isapplet
[  20.4669 ] USB communication failed.Check if device is in recovery
[  20.4678 ] 
[  20.4686 ] tegradevflash_v2 --iscpubl
[  20.4692 ] Cannot Open USB

So making a hypothesis here:
Perhaps the recovery BootROM starts out in a state that has all of the pins configured in a way that allows your carrier board to run in recovery mode. As the flash script sends parts to the BootROM, they are executed and reconfigure the hardware along the way. One of those critical pieces is the pin configuration. Maybe the pin configuration (or other configuration) that the flash script sends is “bad” for your custom carrier board. Obviously, in my case you can’t send the TX2 8GB configuration to the TX2 4GB SoM and expect things to work properly. It works up to a point, and then the configuration takes effect and causes a hard fault in the hardware, or a reset of sorts, breaking the flashing process.

Based on this I would carefully examine the configuration changes you have made from the default carrier board to your custom carrier board. Maybe one of the configuration changes is improper for your new custom carrier board OR you have not properly made all the changes that are required to allow your custom carrier board to execute “fully” through the entire flashing process. If there are extraneous changes to the configuration files you may have made that aren’t required you could try eliminating them to see if you can find the critical one that is breaking things.
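A plain diff that skips comment lines can make such configuration deltas stand out. A minimal sketch using the standard library (the file names in the usage comment are placeholders, not specific BSP paths):

```python
import difflib

def diff_configs(stock_text: str, custom_text: str,
                 stock_name: str = "stock", custom_name: str = "custom") -> list:
    """Unified diff of two flash-config texts, ignoring comment-only lines."""
    def meaningful(text: str) -> list:
        return [line + "\n" for line in text.splitlines()
                if not line.lstrip().startswith("#")]
    return list(difflib.unified_diff(meaningful(stock_text),
                                     meaningful(custom_text),
                                     fromfile=stock_name, tofile=custom_name))

# Hypothetical usage against a stock vs. custom board config:
# stock = open("stock-board.conf.common").read()
# custom = open("my-board.conf.common").read()
# print("".join(diff_configs(stock, custom)))
```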

I hope that helps. :-)

Hi JD, good thought; I have wondered about something along that line as well. I will need to check the pinmux and device tree again to be sure. The only thing I can think of that partly argues against this is that the same modules can be flashed again on our carrier (to this same stopping point). So it appears the device tree allows flashing. However, it may be preventing the change back to whatever mode is needed at the failure point. I wish I knew a little more about what is happening at that point in time; it might help identify the flaw. Interestingly, even though the device tree (and pinmux cfg) are for our board, the original nvidia carrier does not stop flashing at the location in question.

Note our carrier is config5 where the developer carrier is config2. There are some changes in the device tree to some of the USBs. But both carriers are almost identical for the otg port hardware and there were no intentional changes to the device tree for the otg port.

Thanks for the suggestion. I will look through the device tree again.

I have tried a few things of interest:

  1. By trying to flash the original NV image (config2) on our board (a config5 board), I learned that the circuit can flash on our carrier. So we have a mistake in the config5 device tree.

THIS NEXT ONE MAY POINT TO A BUG IN NVIDIA 32.2.1. I would love to hear your view on this! It is not a fix for that bug, but I am still interested in hearing your thoughts.

  2. I changed the pinmux definition for USB0_EN_OC* on A17 (also known as USB_VBUS_EN0, GPIO3_PL.04, and wake61). I changed the pin direction from output to Open-Drain (left it Drive 0) and changed the wake pin from no to yes. The result in pinmux.cfg changed that pin from 0x021 to 0x061.
     This eliminated the first error message shown below. The second is still present. I get this same error in a normal working flash to the nvidia developer kit, but on the kit the second error does not occur.
tegrarcm_v2 --isapplet
USB communication failed.Check if device is in recovery

tegradevflash_v2 --iscpubl
Cannot Open USB

I did not change the definitions for this pin found in the soc device-tree code.

I will cover the other changes that got rid of the second error message in the next post. For the moment I am wondering whether I should leave the open-drain definition in the pinmux.cfg.

Given that this pin is connected to the overload error output of the USB VBUS chip external to the module, it actually makes no sense for it to be anything other than open-drain. (Even though it can be tristated to high-Z some of the time, you are still connecting two outputs.)

What do you think?
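For spotting register-level pinmux changes like the 0x021 → 0x061 one above, a tiny parser can report exactly which registers differ between two generated cfg files. This is a sketch that assumes lines of the form `pinmux.<address> = <value>;`; adjust the pattern to whatever your generated cfg actually contains:

```python
import re

# Assumed line format from the generated pinmux cfg, e.g.:
#   pinmux.0x0243d040 = 0x00000061;
CFG_LINE = re.compile(r"^\s*pinmux\.(0x[0-9a-fA-F]+)\s*=\s*(0x[0-9a-fA-F]+)\s*;")

def load_pinmux(cfg_text: str) -> dict:
    """Map register address -> value for every pinmux line in the cfg text."""
    return {m.group(1): int(m.group(2), 16)
            for m in map(CFG_LINE.match, cfg_text.splitlines()) if m}

def changed_registers(before: str, after: str) -> dict:
    """Registers whose value differs (or that exist in only one cfg)."""
    a, b = load_pinmux(before), load_pinmux(after)
    return {addr: (a.get(addr), b.get(addr))
            for addr in sorted(set(a) | set(b))
            if a.get(addr) != b.get(addr)}
```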

While I’ve been able to use a couple of changes to toggle back and forth between flashing successfully and not, I just discovered that if I do a full deploy with all our needed changes, it always fails. I’ll post whatever insight I get today, and maybe you can comment early next week.

The failure to flash happens after the first three partial flashes are done.
This indicates that the binary blobs being flashed were updated with something that causes the failure. Any insight as to what might have changed in those blobs that could cause the failure would be very helpful.

[  11.4976 ] Sending BCTs
[  11.4999 ] tegrarcm_v2 --download bct_bootrom br_bct_BR.bct --download bct_mb1 mb1_bct_MB1_sigheader.bct.encrypt
[  11.5022 ] Applet version 01.00.0000
[  11.5349 ] Sending bct_bootrom
[  11.5352 ] [................................................] 100%
[  11.5374 ] Sending bct_mb1
[  11.5379 ] [................................................] 100%
[  20.7685 ] 
[  20.7686 ] Generating blob
[  20.7713 ] tegrahost_v2 --chip 0x18 --align blob_nvtboot_recovery_cpu.bin
[  20.7743 ] 

...  a bunch of stuff gets created and added to the blob

[  20.9630 ] tegrahost_v2 --updatesigheader blob_tegra186-quill-p3489-1000-a00-00-ucm1_sigheader.dtb.encrypt blob_tegra186-quill-p3489-1000-a00-00-ucm1_sigheader.dtb.hash zerosbk
[  20.9644 ] 
[  20.9655 ] tegrahost_v2 --chip 0x18 --generateblob blob.xml blob.bin
[  20.9663 ] number of images in blob are 9
[  20.9667 ] blobsize is 3701352
[  20.9669 ] Added binary blob_nvtboot_recovery_cpu_sigheader.bin.encrypt of size 203312
[  20.9685 ] Added binary blob_nvtboot_recovery_sigheader.bin.encrypt of size 89360
[  20.9691 ] Added binary blob_preboot_d15_prod_cr_sigheader.bin.encrypt of size 63104
[  20.9697 ] Added binary blob_mce_mts_d15_prod_cr_sigheader.bin.encrypt of size 2082144
[  20.9703 ] Added binary blob_bpmp_sigheader.bin.encrypt of size 533904
[  20.9708 ] Added binary blob_tegra186-a02-bpmp-storm-p3489-a00-00-ta795sa-ucm1_sigheader.dtb.encrypt of size 76080
[  20.9717 ] Added binary blob_tos-trusty_sigheader.img.encrypt of size 313152
[  20.9723 ] Added binary blob_eks_sigheader.img.encrypt of size 1440
[  20.9726 ] Added binary blob_tegra186-quill-p3489-1000-a00-00-ucm1_sigheader.dtb.encrypt of size 338704
[  20.9726 ] 
[  20.9727 ] Sending bootloader and pre-requisite binaries
[  20.9736 ] tegrarcm_v2 --download blob blob.bin
[  20.9744 ] Applet version 01.00.0000
[  20.9913 ] Sending blob
[  20.9915 ] [................................................] 100%
[  21.4114 ] 
[  21.4144 ] tegrarcm_v2 --boot recovery
[  21.4171 ] Applet version 01.00.0000
[  21.4364 ] 
[  22.4391 ] tegrarcm_v2 --isapplet
[  22.4405 ] USB communication failed.Check if device is in recovery

...  The flash then fails at this point ...

If you have any comments today on the open-drain question, or any info about the contents of bct_bootrom, MB1, and the binary blob that could cause the failure to continue flashing, it would be really appreciated.

I am having the same (or very similar) problem with a new revision of a custom carrier board. We are using a TX2-4GB module. The previous 5 revisions of our custom carrier board worked flawlessly. This 6th revision will not flash, and fails at the same line as yours:

.......

[   6.7223 ] Added binary blob_tegra186-a02-bpmp-lightning-p3489-a00-00-te770m_sigheader.dtb.encrypt of size 315296
[   6.7232 ] Added binary blob_tos-trusty_sigheader.img.encrypt of size 313152
[   6.7238 ] Added binary blob_eks_sigheader.img.encrypt of size 1440
[   6.7240 ] Added binary blob_undertow-tx2-4gb_sigheader.dtb.encrypt of size 236272
[   6.7241 ]
[   6.7241 ] Sending bootloader and pre-requisite binaries
[   6.7252 ] tegrarcm_v2 --download blob blob.bin
[   6.7261 ] Applet version 01.00.0000
[   6.7412 ] Sending blob
[   6.7413 ] [................................................] 100%
[   7.2782 ]
[   7.2802 ] tegrarcm_v2 --boot recovery
[   7.2821 ] Applet version 01.00.0000
[   7.2981 ]
[   8.3018 ] tegrarcm_v2 --isapplet
[   8.3193 ]
[   8.3218 ] tegradevflash_v2 --iscpubl
[   8.3237 ] CPU Bootloader is not running on device.
[   8.3393 ]
[   9.3431 ] tegrarcm_v2 --isapplet
[   9.3452 ] USB communication failed.Check if device is in recovery
[ 135.7418 ]
[ 135.7443 ] tegradevflash_v2 --iscpubl
[ 135.7462 ] Cannot Open USB
[ 189.2698 ]

The same device tree can be flashed to the previous revision of our carrier board with no changes.

On the hardware, we moved some of the other USB lines around to support an M.2 slot in this latest revision, but nothing with USB0 (recovery) changed at all. We switched from using CONFIG #2 to CONFIG #3. I suppose it is possible that something is wrong with our device tree, but is this really used during the flash process? In the meantime, we have been using the Nvidia Devkit to flash our modules, and all of the USB functionality works well, so I think the device tree is configured properly.

No, what you changed in the kernel device tree would not affect the flash process.

To debug flash process, please share

  1. The flash log on host (you already shared)
  2. The UART log from the Tegra device. This would tell you why USB failed.

Hi Wayne,

Thanks for the quick reply. Here is the uart log:

[0036.072] E> Blob is not set
[0036.081] C> I2C command failed
[0036.084] C> block index = (4) and rail_id = (1)
[0036.088] C> Addr: Reg = [0xe8:0x07]: 336166925
[0036.093] C> I2C command failed
[0036.096] C> block index = (5) and rail_id = (1)
[0036.100] C> Addr: Reg = [0xe8:0x07]: 336166925
[0036.615] I> Welcome to MB2(TBoot-BPMP) Recovery(version: 01.00.160913-t186-M-00.00-mobile-bc98f182)
[0036.624] I> bit @ 0xd480000
[0036.627] I> Boot-device: eMMC
[0036.638] I> sdmmc DDR50 mode
[0036.643] I> sdmmc bdev is already initialized
[0036.648] I> pmic: reset reason (nverc)	: 0x80
[0036.654] I> Found 16 partitions in SDMMC_BOOT (instance 3)
[0036.661] I> Found 30 partitions in SDMMC_USER (instance 3)
[0036.670] I> Binary(16) of size 533504 is loaded @ 0xd7800000
[0036.678] I> Binary(17) of size 314896 is loaded @ 0xd79b31e0
[0036.799] I> Copy BTCM section
[0036.803] I> Binary(13) of size 202912 is loaded @ 0x96000000
[0036.810] I> Binary(20) of size 235888 is loaded @ 0x8520f400
[0036.817] I> Binary(14) of size 312752 is loaded @ 0x8530f600
[0036.825] I> TOS boot-params @ 0x85000000
[0036.829] I> TOS params prepared
[0036.832] I> Loading EKS ...
[0036.835] I> Binary(15) of size 1040 is loaded @ 0x8590f800
[0036.841] I> EKB detected (length: 0x400) @ 0x8590f800
[0036.846] I> Copied encrypted keys
[0036.849] I> boot profiler @ 0x175844000
[0036.853] I> boot profiler for TOS @ 0x175844000
[0036.858] I> Unhalting SCE
[0036.861] I> Primary Memory Start:80000000 Size:70000000
[0036.866] I> Extended Memory Start:f0110000 Size:856f0000
[0036.873] I> MB2(TBoot-BPMP) Recovery done

NOTICE:  BL31: v1.3(release):a28d87f09
NOTICE:  BL31: Built : 16:56:00, Jul 16 2019
ipc-unittest-main: 1519: Welcome to IPC unittest!!!
ipc-unittest-main: 1531: waiting forever
ipc-unittest-srv: 329: Init unittest services!!!
keystore-demo: 141: Hello world from keystore-demo app
keystore-demo: 207: main: EKB contents match expected value
exit called, thread 0xffffffffea87ad58, name trusty_app_2_7d18fc60-e9fc-11e8
platform_bootstrap_epilog: trusty bootstrap complete
[0037.179] I> Welcome to TBoot-CPU Recovery(version: 01.00.160913-t186-M-00.00-mobile-b6c9c72e)
[0037.188] I> gpio framework initialized
[0037.191] I> tegrabl_gpio_driver_register: register 'nvidia,tegra186-gpio' driver
[0037.199] I> tegrabl_gpio_driver_register: register 'nvidia,tegra186-gpio-aon' driver
[0037.206] I> tegrabl_tca9539_init: i2c bus: 0, slave addr: 0xee
[0037.213] E> I2C: slave not found in slaves.
[0037.217] E> I2C: Could not write 0 bytes to slave: 0x00ee with repeat start false.
[0037.225] E> I2C_DEV: Failed to send register address 0x00000004.
[0037.231] E> I2C_DEV: Could not write 1 registers of size 1 to slave 0xee at 0x00000004 via instance 0.
[0037.240] E> tca9539_device_init: failed to write polar reg
[0037.245] E> tegrabl_tca9539_init: failed to init device!
[0037.251] I> tegrabl_tca9539_init: i2c bus: 0, slave addr: 0xe8
[0037.257] E> I2C: slave not found in slaves.
[0037.261] E> I2C: Could not write 0 bytes to slave: 0x00e8 with repeat start false.
[0037.269] E> I2C_DEV: Failed to send register address 0x00000004.
[0037.274] E> I2C_DEV: Could not write 1 registers of size 1 to slave 0xe8 at 0x00000004 via instance 0.
[0037.284] E> tca9539_device_init: failed to write polar reg
[0037.289] E> tegrabl_tca9539_init: failed to init device!
[0037.294] I> CPU: ARM Cortex A57
[0037.297] I> CPU: MIDR: 0x411fd073, MPIDR: 0x80000100
[0037.302] I> L2 ECC enabled : yes
[0037.305] I> CPU-BL Params @ 0x175800000
[0037.309] I>  0) Base:0x00000000 Size:0x00000000
[0037.313] I>  1) Base:0x177f00000 Size:0x00100000
[0037.318] I>  2) Base:0x177e00000 Size:0x00100000
[0037.323] I>  3) Base:0x177d00000 Size:0x00100000
[0037.327] I>  4) Base:0x177c00000 Size:0x00100000
[0037.332] I>  5) Base:0x177b00000 Size:0x00100000
[0037.336] I>  6) Base:0x177800000 Size:0x00200000
[0037.341] I>  7) Base:0x177400000 Size:0x00400000
[0037.345] I>  8) Base:0x177a00000 Size:0x00100000
[0037.350] I>  9) Base:0x177300000 Size:0x00100000
[0037.354] I> 10) Base:0x176800000 Size:0x00800000
[0037.359] I> 11) Base:0x30000000 Size:0x00040000
[0037.363] I> 12) Base:0xf0000000 Size:0x00100000
[0037.368] I> 13) Base:0x30040000 Size:0x00001000
[0037.372] I> 14) Base:0x30048000 Size:0x00001000
[0037.376] I> 15) Base:0x30049000 Size:0x00001000
[0037.381] I> 16) Base:0x3004a000 Size:0x00001000
[0037.385] I> 17) Base:0x3004b000 Size:0x00001000
[0037.390] I> 18) Base:0x3004c000 Size:0x00001000
[0037.394] I> 19) Base:0x3004d000 Size:0x00001000
[0037.399] I> 20) Base:0x3004e000 Size:0x00001000
[0037.403] I> 21) Base:0x3004f000 Size:0x00001000
[0037.407] I> 22) Base:0x00000000 Size:0x00000000
[0037.412] I> 23) Base:0xf0100000 Size:0x00010000
[0037.416] I> 24) Base:0x00000000 Size:0x00000000
[0037.421] I> 25) Base:0x00000000 Size:0x00000000
[0037.425] I> 26) Base:0x00000000 Size:0x00000000
[0037.430] I> 27) Base:0x00000000 Size:0x00000000
[0037.434] I> 28) Base:0x84400000 Size:0x00400000
[0037.438] I> 29) Base:0x30000000 Size:0x00010000
[0037.443] I> 30) Base:0x178000000 Size:0x08000000
[0037.447] I> 31) Base:0x00000000 Size:0x00000000
[0037.452] I> 32) Base:0x176000000 Size:0x00600000
[0037.456] I> 33) Base:0x80000000 Size:0x70000000
[0037.461] I> 34) Base:0xf0110000 Size:0x856f0000
[0037.465] I> 35) Base:0x00000000 Size:0x00000000
[0037.470] I> 36) Base:0x00000000 Size:0x00000000
[0037.474] I> 37) Base:0x1772e0000 Size:0x00020000
[0037.479] I> 38) Base:0x84000000 Size:0x00400000
[0037.483] I> 39) Base:0x96000000 Size:0x02000000
[0037.488] I> 40) Base:0x85000000 Size:0x01200000
[0037.492] I> 41) Base:0x175800000 Size:0x00500000
[0037.496] I> 42) Base:0x00000000 Size:0x00000000
[0037.501] I> 43) Base:0x00000000 Size:0x00000000
[0037.505] I> Boot-device: eMMC
[0037.518] I> sdmmc DDR50 mode
[0037.523] I> sdmmc bdev is already initialized
[0037.529] I> Found 16 partitions in SDMMC_BOOT (instance 3)
[0037.535] I> Found 30 partitions in SDMMC_USER (instance 3)
[0037.540] I> bl dtb load address = @0x8520f400
[0037.544] I> Recovery boot_type: 0
[0037.550] I> fixed regulator driver initialized
[0037.584] I> register 'maxim' power off handle
[0037.589] I> virtual i2c enabled
[0037.592] I> registered 'maxim,max77620' pmic
[0037.597] I> tegrabl_gpio_driver_register: register 'max77620-gpio' driver
[0037.603] I> Entering 3p server
[0037.606] I>  SUPER SPEED
[0037.630]  > XUSBF: Failed to configure enumeration.
[0037.635] E> usbf_enum failed err = 0x2f2f0c02
[0037.639] E> NV3P: Failed to open transport endpoint. Interface: 0, instance: 0.
[0037.646] E> NV3P_SERVER: Failed to open transport endpoint.
[0037.652] C> RCM boot failed
[0037.654] E> Top caller module: NV3P, error module: NV3P, reason: 0x11, aux_info: 0x01
[0037.662] I> TBoot-CPU Recovery hang

It looks like this poster is having a similar issue: https://devtalk.nvidia.com/default/topic/1062905/usbf_enum-error-on-flashing-different-lane-config/

To clarify, I don’t think the problem is caused by a DTS change, since the stock DTS also does not work on the new board but works on the old board. I think the problem is caused by NOT changing the DTS to match my new carrier board. Is it possible the lane configuration in the DTS is causing this problem? Unfortunately, that post doesn’t have a clear answer.
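When a UART log runs to hundreds of lines like the one above, filtering it down to the error (E>) and critical (C>) lines makes the failure chain easy to see; here the chain ends at the `usbf_enum` / `RCM boot failed` errors. A minimal sketch, assuming the `[time] <level>>` line format shown above:

```python
def failure_chain(uart_log: str) -> list:
    """Keep only error (E>) and critical (C>) lines from a TX2 UART boot log."""
    return [line for line in uart_log.splitlines()
            if "] E> " in line or "] C> " in line]

# Sample lines taken from the UART log above:
sample = """\
[0037.606] I>  SUPER SPEED
[0037.635] E> usbf_enum failed err = 0x2f2f0c02
[0037.652] C> RCM boot failed
"""
for line in failure_chain(sample):
    print(line)
```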

Hi,

Thanks for the quick reply. I need to check this log against the bootloader source. The DTS should not affect flashing the board. I notice you mentioned several boards in a previous comment.

Does this issue happen with only a single module?

Thanks for your help.

I have tried 4 different TX2-4GB modules and each one has the same result. I have also tried 4 copies of this carrier board, and checked the soldering of each one.

The flash process immediately works on the previous revision OR on the Nvidia Devkit, just not on this new revision of our carrier board.

Hi Undertow10,

Do you mean this issue only happens on your new revision carrier board?
If that is the case, then I don’t think it is a software configuration issue.

Yes, that is correct. I would tend to agree about the hardware thing, but what could it be? It enumerates as the proper USB device when in recovery mode. The only difference with this board is that some other USB / PEX lanes moved around. Is it possible that the device tree does get loaded during the flash process and an improper setting is causing something to fail? Or a conflict with some hardware once a certain process runs during flash?