TX1/R23.1: New Flash Structure...How to Clone?
With the release of R23.1 for TX1, the "nvflash" app no longer exists, and appears to be just a blank placeholder. What would the clone tools be for cloning partition tables, root partition, and all partitions?

#1
Posted 11/24/2015 09:29 PM   
nvflash has been succeeded by tegraflash.py.
tegraflash.py includes the read and write commands for backing up and restoring partitions, respectively.
It looks like cloning the entire eMMC device, including all partitions, may require additional scripting. The Python source can easily be edited, however.
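
For example, a small wrapper could back up several partitions in one pass. This is only a sketch: it reuses the "read" invocation shown working later in this thread, and any partition name other than APP is a placeholder to be verified against your flash configuration.

#!/bin/bash
# Sketch: back up multiple TX1 partitions, one tegraflash.py run per partition.
# The invocation matches the working example later in this thread; partition
# names other than APP are placeholders -- verify them against your flash
# configuration. The board may need to be put back into recovery mode
# between runs.
PARTITIONS="APP"    # e.g. add others once verified on your setup
for p in $PARTITIONS; do
    sudo ./tegraflash.py --bl cboot.bin --applet nvtboot_recovery.bin \
         --chip 0x21 --cmd "read ${p} backup_${p}.img" || exit 1
done
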
#2
Posted 11/25/2015 12:29 AM   
[quote="dusty_nv"]nvflash has been succeeded by tegraflash.py tegraflash.py includes the read and write commands for backing-up and restoring partitions, respectively. It looks like cloning the entire eMMC device including all partitions, may require additional scripting. The python source can easily be edited however.[/quote] It looks like tegraflash.py options for reading may need some explanation in order to clone TX1 partitions. It's nice that tegraflash.py has an interactive shell, and commands such as "read" prompt for information. The problem is, I don't know the answer to some of those parameters. I'm guessing "APP" (which is correct for TK1) is the root partition in TX1 as well, but to read "APP" (or to find out if it doesn't exist), a "--bl" option is also required. In TK1 it was fastboot.bin, which I believe interacts with 3pserver...there is no fastboot.bin in the R23.1 installer, but there are others which may be a candidate, such as mvtboot.bin and nvtboot_recovery.bin. Would it be possible to get a list of which partitions "read" can be used with to clone, and what to use with options like --bl?
dusty_nv said: nvflash has been succeeded by tegraflash.py.
tegraflash.py includes the read and write commands for backing up and restoring partitions, respectively.
It looks like cloning the entire eMMC device, including all partitions, may require additional scripting. The Python source can easily be edited, however.


It looks like the tegraflash.py options for reading may need some explanation in order to clone TX1 partitions. It's nice that tegraflash.py has an interactive shell, and commands such as "read" prompt for information. The problem is, I don't know the answers to some of those parameters. I'm guessing "APP" (which is correct for TK1) is the root partition on TX1 as well, but to read "APP" (or to find out whether it exists at all), a "--bl" option is also required. On TK1 this was fastboot.bin, which I believe interacts with 3pserver... there is no fastboot.bin in the R23.1 installer, but there are other candidates, such as nvtboot.bin and nvtboot_recovery.bin. Would it be possible to get a list of which partitions "read" can be used with to clone, and what to use for options like --bl?

#3
Posted 11/29/2015 11:51 PM   
It turns out that the cboot bootloader was built with the cloning commands. Use the following to clone the APP partition, for example:

Backing up an image

sudo ./tegraflash.py --bl cboot.bin --applet nvtboot_recovery.bin --chip 0x21 --cmd "read APP my_backup_image_APP.img"

dusty@dusty-ubuntu-PC:~/workspace/JetPack-L4T-2.0/Linux_for_Tegra_tx1/bootloader$ sudo ./tegraflash.py --bl cboot.bin --applet nvtboot_recovery.bin --chip 0x21 --cmd "read APP my_backup_jetpack_231_APP.img"
[sudo] password for dusty:
Welcome to Tegra Flash
version 1.0.0
Type ? or help for help and q or quit to exit
Use ! to execute system commands

[ 0.0025 ] Generating RCM messages
[ 0.0047 ] tegrarcm --listrcm rcm_list.xml --chip 0x21 --download rcm nvtboot_recovery.bin 0 0
[ 0.0059 ] RCM 0 is saved as rcm_0.rcm
[ 0.0105 ] RCM 1 is saved as rcm_1.rcm
[ 0.0105 ] List of rcm files are saved in rcm_list.xml
[ 0.0105 ]
[ 0.0105 ] Signing RCM messages
[ 0.0149 ] tegrasign --key None --list rcm_list.xml --pubkeyhash pub_key.hash
[ 0.0164 ] Assuming zero filled SBK key
[ 0.0313 ]
[ 0.0313 ] Copying signature to RCM mesages
[ 0.0325 ] tegrarcm --chip 0x21 --updatesig rcm_list_signed.xml
[ 0.0339 ]
[ 0.0339 ] Boot Rom communication
[ 0.0348 ] tegrarcm --rcm rcm_list_signed.xml
[ 0.0357 ] BootRom is not running
[ 0.2092 ]
[ 0.2093 ] Retrieving storage infomation
[ 0.2104 ] tegrarcm --oem platformdetails storage storage_info.bin
[ 0.2113 ] Applet version 00.01.0000
[ 0.3594 ] Saved platform info in storage_info.bin
[ 0.3606 ]
[ 0.3606 ] Reading BCT from device for further operations
[ 0.3606 ] Sending bootloader and pre-requisite binaries
[ 0.3619 ] tegrarcm --download ebt cboot.bin 0 0
[ 0.3630 ] Applet version 00.01.0000
[ 0.5354 ] Sending ebt
[ 0.5381 ] [................................................] 100%
[ 0.8105 ]
[ 0.8111 ] tegrarcm --boot recovery
[ 0.8117 ] Applet version 00.01.0000
[ 0.9603 ]
[ 0.9603 ] Reading partition
[ 0.9621 ] tegradevflash --read APP /home/dusty/workspace/JetPack-L4T-2.0/Linux_for_Tegra_tx1/bootloader/my_backup_jetpack_231_APP.img
[ 0.9629 ] Cboot version 00.01.0000
[ 1.6797 ] Reading partition APP in file /home/dusty/workspace/JetPack-L4T-2.0/Linux_for_Tegra_tx1/bootloader/my_backup_jetpack_231_APP.img
[ 1.6807 ] [................................................] 100%
[ 3279.1986 ]


Restoring an image

sudo ./tegraflash.py --bl cboot.bin --applet nvtboot_recovery.bin --chip 0x21 --cmd "write APP my_backup_image_APP.img"

dusty@dusty-ubuntu-PC:~/workspace/JetPack-L4T-2.0/Linux_for_Tegra_tx1/bootloader$ sudo ./tegraflash.py --bl cboot.bin --applet nvtboot_recovery.bin --chip 0x21 --cmd "write APP my_backup_jetpack_231_APP.img"
[sudo] password for dusty:
Sorry, try again.
[sudo] password for dusty:
Welcome to Tegra Flash
version 1.0.0
Type ? or help for help and q or quit to exit
Use ! to execute system commands

[ 0.0027 ] Generating RCM messages
[ 0.0050 ] tegrarcm --listrcm rcm_list.xml --chip 0x21 --download rcm nvtboot_recovery.bin 0 0
[ 0.0061 ] RCM 0 is saved as rcm_0.rcm
[ 0.0081 ] RCM 1 is saved as rcm_1.rcm
[ 0.0089 ] List of rcm files are saved in rcm_list.xml
[ 0.0124 ]
[ 0.0125 ] Signing RCM messages
[ 0.0146 ] tegrasign --key None --list rcm_list.xml --pubkeyhash pub_key.hash
[ 0.0157 ] Assuming zero filled SBK key
[ 0.0242 ]
[ 0.0243 ] Copying signature to RCM mesages
[ 0.0255 ] tegrarcm --chip 0x21 --updatesig rcm_list_signed.xml
[ 0.0272 ]
[ 0.0272 ] Boot Rom communication
[ 0.0283 ] tegrarcm --rcm rcm_list_signed.xml
[ 0.0293 ] BR_CID: 0x32101001640ca588100000000aff8380
[ 0.1769 ] RCM version 0X210001
[ 0.2645 ] Boot Rom communication completed
[ 1.2711 ]
[ 1.2711 ] Sending bootloader and pre-requisite binaries
[ 1.2720 ] tegrarcm --download ebt cboot.bin 0 0
[ 1.2727 ] Applet version 00.01.0000
[ 1.4215 ] Sending ebt
[ 1.4241 ] [................................................] 100%
[ 1.7419 ]
[ 1.7426 ] tegrarcm --boot recovery
[ 1.7432 ] Applet version 00.01.0000
[ 1.8947 ]
[ 1.8948 ] Writing partition
[ 1.8966 ] tegradevflash --write APP /home/dusty/workspace/JetPack-L4T-2.0/Linux_for_Tegra_tx1/bootloader/my_backup_jetpack_231_APP.img
[ 1.8973 ] Cboot version 00.01.0000
[ 2.6234 ] Writing partition APP with /home/dusty/workspace/JetPack-L4T-2.0/Linux_for_Tegra_tx1/bootloader/my_backup_jetpack_231_APP.img
[ 2.6241 ] [................................................] 100%
[ 823.6449 ]
#4
Posted 01/18/2016 06:28 PM   
Is there any faster (and more efficient) way to back up and restore? This way I always have to move ~15 GB of data to back up and restore, even though my current TX1 configuration uses only a fraction of that...

#5
Posted 04/07/2016 05:43 PM   
My suggestion is to clone once. After that, loopback mount the clone on the host and use standard rsync commands (or your favorite Linux backup software) to do an incremental update of the loopback-mounted system. If it's critical, I'd recommend saving one clone without modification and applying updates to a copy of the clone.
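
For example (a sketch only: the mount point, image name, and Jetson address are placeholders, and the exclude list is not exhaustive):

# Mount the clone and pull an incremental update from the running Jetson.
sudo mkdir -p /mnt/clone
sudo mount -o loop my_backup_jetpack_231_APP.img /mnt/clone
# Skip pseudo-filesystems; note that files the remote user cannot read will
# be missed, so a complete copy needs root-readable access on the Jetson side.
sudo rsync -a --delete --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp ubuntu@<jetson-ip>:/ /mnt/clone/
sudo umount /mnt/clone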

#6
Posted 04/07/2016 06:47 PM   
Thank you

#7
Posted 04/07/2016 07:48 PM   
The advice above was very helpful. I've been using the commands to download and upload images amongst my TX1 boards. I've also been able to share my images with a colleague and he's able to upload to his TX1 as well.

The problem we're seeing, though, is that if he creates an image file, I can't upload it to my board. The upload fails with the message "00000004: Filesize is bigger than partition size". We've checked the partition sizes on our TX1 boards using df and they seem identical.

We've also used md5sum on the image to confirm there's been no corruption during transmission from his machine to mine.
We've also checked the upload and download commands and they are nearly identical. The only small difference we've noticed is that on my Ubuntu machine, the cboot.bin file is in my ../JetPack/TX1/Linux_for_Tegra_tx1/bootloader directory while on his machine it's in a subdirectory called t210ref. Running md5sum on the cboot.bin files shows they're identical between our machines. We haven't noticed any other differences although I suspect perhaps our JetPack versions are different or something like that.

Any ideas?

#8
Posted 06/13/2016 06:40 AM   
There are many partitions in a Jetson; the root file system is just the obvious one. When flashing, I choose the maximum size for the rootfs with the option "-S 14580MiB". If you flashed another Jetson with a different size, then the partitions making up the less visible part of the install would be unlikely to remain constant. Make sure the image itself, as a file, is the exact same size... the size seen from "df" on a running Jetson does not describe other details of the underlying file system, nor its starting offset within the eMMC as a whole.
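
To compare sizes exactly, something like this works (file and device names are examples):

# Exact byte size of the clone file on the host:
stat -c %s my_backup_jetpack_231_APP.img
# Raw size of the APP partition on a running Jetson ("df" reports filesystem
# usage, not this):
sudo blockdev --getsize64 /dev/mmcblk0p1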

Note that the "bootloader/cboot.bin" is a NULL-padded copy of the reference boot loader...other than some trailing NULL bytes, they are the same. Normally I would not expect a boot loader image without proper NULL-byte padding would get through, but if it did then the difference in partition sizes might be accounted for by this...offsets of starting point of other partitions following this may shift.

I can't say for sure, but if you are using a Jetson whose boot image has the trailing NULL bytes to create a rootfs image, then the Jetson that fails to flash might accept that partition if it were first flashed using the same parameters as the one the image was created from. The goal is to get the boot image and all partition sizes ahead of the rootfs to match, and then try to apply the rootfs.

#9
Posted 06/13/2016 07:04 AM   
@linuxdev,
OK, thanks for the advice. I've tried to use "-S 14580MiB", but it doesn't seem to be an option for tegraflash.py. So I tried inserting it like this:

sudo ./tegraflash.py --bl cboot.bin --applet nvtboot_recovery.bin --chip 0x21 -S 14580MiB --cmd "write APP my_image.img"

...and like this:
sudo ./tegraflash.py --bl cboot.bin --applet nvtboot_recovery.bin --chip 0x21 --cmd "write -S 14580MiB APP my_image.img"

...but neither worked. Sorry for being helpless and possibly pedantic, but could you provide the exact command?

#10
Posted 06/13/2016 08:13 AM   
[quote=""]We've also checked the upload and download commands and they are nearly identical. The only small difference we've noticed is that on my Ubuntu machine, the cboot.bin file is in my ../JetPack/TX1/Linux_for_Tegra_tx1/bootloader directory while on his machine it's in a subdirectory called t210ref. Running md5sum on the cboot.bin files shows they're identical between our machines. We haven't noticed any other differences although I suspect perhaps our JetPack versions are different or something like that[/quote]It sounds like you two are using different versions of Jetpack or L4T. It is recommended to use the same L4T to clone and restore the image. Try downloading the same version as your colleague. You can use Jetpack or technically L4T directly for this: (see archive: [url]https://developer.nvidia.com/embedded/linux-tegra-archive[/url]) If your issue persists, before restoring the cloned image try freshly flashing your Jetson with the same stock Jetpack/L4T as your colleague. This will get the partitions all set up the same.
said: We've also checked the upload and download commands and they are nearly identical. The only small difference we've noticed is that on my Ubuntu machine, the cboot.bin file is in my ../JetPack/TX1/Linux_for_Tegra_tx1/bootloader directory while on his machine it's in a subdirectory called t210ref. Running md5sum on the cboot.bin files shows they're identical between our machines. We haven't noticed any other differences, although I suspect perhaps our JetPack versions are different or something like that.
It sounds like you two are using different versions of JetPack or L4T. It is recommended to use the same L4T version to clone and restore an image, so try downloading the same version as your colleague. You can use JetPack, or technically L4T directly, for this (see the archive: https://developer.nvidia.com/embedded/linux-tegra-archive).

If your issue persists, before restoring the cloned image, try freshly flashing your Jetson with the same stock JetPack/L4T as your colleague. This will get the partitions all set up the same.
#11
Posted 06/13/2016 01:11 PM   
In addition to making sure you use the same L4T release version, I would also suggest using the "flash.sh" front end; do not use the tegraflash.py script directly for the step where you are trying to give your two Jetsons the same compatible environment. Here are some typical examples:
sudo flash.sh -S 14580MiB jetson-tx1 mmcblk0p1
# sudo flash.sh -R -S 14580MiB jetson-tx1 mmcblk0p1
sudo flash.sh -r -S 14580MiB jetson-tx1 mmcblk0p1


Note that the last case re-uses "bootloader/system.img", and if a clone file is in place there, that installs the clone. I also doubt the "re-use" option needs the "-S 14580MiB", but I'm a bit lazy and didn't want to flash a Jetson to find out... I know that using "-S 14580MiB" does work. Using the front end should do the right thing with respect to any boot loader reference getting padded as needed.
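
So restoring a clone through the front end could look like this (a sketch; copying the clone over bootloader/system.img is one way to get it "in place"):

# Put the clone where the re-use option will pick it up, then flash with -r:
cp my_backup_jetpack_231_APP.img bootloader/system.img
sudo flash.sh -r -S 14580MiB jetson-tx1 mmcblk0p1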

#12
Posted 06/13/2016 07:30 PM   
Thanks very much for the responses above. I've found that if I simply truncate the image by a few KB, it uploads successfully. I attempted to follow the instructions linked below and chop about 4 GB from the image, but that didn't work; chopping just a few KB off the end with the "truncate --size=15032385536 myImage.img" command did.
http://softwarebakery.com/shrinking-images-on-linux
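
For reference, that target size works out to a whole number of GiB, which presumably matches the size the partition was created with:

# 15032385536 bytes = 14 * 1024^3, i.e. exactly 14 GiB; the image must not
# exceed the partition it is written into.
truncate --size=$((14 * 1024 * 1024 * 1024)) myImage.img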

I'm sure this is quite a questionable way to resolve the problem, but it seems to have worked, at least in my specific case.

Thanks again.

#13
Posted 06/14/2016 11:13 AM   
@linuxdev

Like Randy, I'm trying to clone an image to a TX1: specifically, one of his images from arducopter.org, to prove the TX1 as a companion computer to a Pixhawk on the Auvidea J100 carrier board. Randy appears to be using the J120 board at present.

When I run:

sudo flash.sh -R jetson-tx1 mmcblk0p1

(the lack of -S appeared to make little difference, as you suspected) with the required image in place as bootloader/system.img, as suggested, the command reports a missing rootfs. After providing a rootfs (from ../rootfs):

sudo flash.sh -R rootfs jetson-tx1 mmcblk0p1

the command rebuilds system.img instead of using the clone file.
(I also tried using tegraflash.py, which appeared to hang at the image write stage.)

Any help greatly appreciated.

MS

#14
Posted 07/01/2016 02:17 PM   
How far into the upload did it freeze?

You can try again and see if it gets farther. Remove any USB hubs. Try a shorter USB cord between the PC and the TX1.

If you're flashing from within a VM, try a live CD or a native install of Ubuntu 14.04, or try with the module on the devkit.
#15
Posted 07/01/2016 03:19 PM   