HDMI display failed on Jetson TX2

Hi everyone,
Today I connected the Jetson TX2's HDMI output to a 4K monitor, but the board cannot finish booting. If I unplug the HDMI cable, wait for the TX2 to finish booting, and then plug the cable back into the monitor, it works normally.
How can I fix this problem?
The attached file is my boot log.
I tried the method in the link below, which modifies the kernel source code, but had no luck:

https://devtalk.nvidia.com/default/topic/1003956/jetson-tx2/tx2-not-booting-up-with-hdmi-connected/2

Thanks in advance

HDMI failed.docx (45 KB)

I don’t see anything like a kernel OOPS in the log…it seems more or less normal. This looks like a serial console log, so can you post the output of this:

sudo -s
cat `find /sys -name edid`
exit

There is a known 4K bug, but first you’d need to know if auto configuration is possible (which the EDID will tell us…the following steps are relevant only if an EDID is present). If the EDID is there, then add this to your “/etc/X11/xorg.conf” Section “Device”:

Option    "ModeDebug"

Once ModeDebug is added, reboot and post the content of “/var/log/Xorg.0.log”.
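For reference, the edited Section “Device” might look something like this (the identifier and driver lines are from a typical Jetson default xorg.conf — keep whatever your file already has and just add the ModeDebug line):

```
Section "Device"
    Identifier  "Tegra0"
    Driver      "nvidia"
    # Log detailed mode validation results to /var/log/Xorg.0.log
    Option      "ModeDebug"
EndSection
```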

In addition to the info linuxdev requested, is there any log after “[3.250006] tegradc 15210000.nvdisplay: hdmi: pclk:594000K, set prod-setting:prod_c_600M”?

Dear WayneWWW,
After that, the Jetson TX2 takes some time and then it reboots.
Thanks

Hi Quang_OpenStack,

Please share the EDID.

Are you using devkit or custom board?

Hi,
The following is the EDID information:

00 ff ff ff ff ff ff 00 1e 6d 01 00 01 01 01 01
 01 1b 01 03 80 a0 5a 78 0a ee 91 a3 54 4c 99 26
 0f 50 54 a1 08 00 31 40 45 40 61 40 71 40 81 80
 01 01 01 01 01 01 08 e8 00 30 f2 70 5a 80 b0 58
 8a 00 40 84 63 00 00 1e 02 3a 80 18 71 38 2d 40
 58 2c 45 00 40 84 63 00 00 1e 00 00 00 fd 00 3a
 79 1e 88 3c 00 0a 20 20 20 20 20 20 00 00 00 fc
 00 4c 47 20 54 56 0a 20 20 20 20 20 20 20 01 63
 02 03 57 f1 56 5d 10 1f 04 13 05 14 03 02 12 20
 21 22 15 01 5e 5f 62 63 64 3f 40 2c 09 57 07 15
 07 50 55 07 01 3d 06 c0 6e 03 0c 00 30 00 b8 3c
 20 00 80 01 02 03 04 e2 00 cf e3 05 c0 00 e3 06
 0d 01 e5 0e 60 61 65 66 ee 01 46 d0 00 26 0a 09
 00 ad 52 40 ad 23 0d 66 21 50 b0 51 00 1b 30 40
 70 36 00 40 84 63 00 00 1e 00 00 00 00 00 00 00
 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 8b

Thanks
P.S. My 4K monitor is from LG.

Looks like www.edidreader.com is down, so I can’t validate which modes are there or not, but this does verify that the data exists (and it would be rare for corrupt EDID data to still pass its checksum). It would be nice though to see which modes are supported and compare against what the driver reports. Even so, someone can use this data for debugging.
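In case edidreader.com stays down, the checksum can also be verified locally: each 128-byte EDID block (base block and each extension) must sum to 0 mod 256. A small sketch, reading the raw EDID from the sysfs file found earlier (the function name is just an example):

```shell
# check_edid_blocks: read raw EDID bytes on stdin and report whether each
# 128-byte block sums to 0 modulo 256 (the EDID checksum rule)
check_edid_blocks() {
  od -An -tu1 -v \
    | tr -s ' ' '\n' | grep -v '^$' \
    | awk '{ s += $1; n++
             if (n % 128 == 0) {
               printf "block %d: %s\n", n / 128, (s % 256 == 0 ? "OK" : "BAD")
               s = 0
             } }'
}

# e.g.: check_edid_blocks < "$(find /sys -name edid | head -n 1)"
```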

The next step would still be to give the Xorg.0.log after adding ‘Option “ModeDebug”’ as listed above…this should show the driver itself stepping through the modes of the EDID and commenting on what it thinks of each mode.

Hi,
The attached file is Xorg.0.log after I added ‘Option “ModeDebug”’ to the “/etc/X11/xorg.conf” Section “Device”.
Note that this file was captured after rebooting the TX2 and plugging in the HDMI cable once booting had finished.
Thanks
Xorg.0.log (162 KB)

www.edidreader.com is back up and verifies the checksum is valid.

I see from the log 640x480 is the virtual screen size. Looks like during the log there was no USB keyboard (I’ll guess you used serial console or network to get the log). I see the following modes the driver accepts as valid (these are modes the EDID told the driver about):

"4096x2160"
"4096x2160_30"
"4096x2160_30_0"
"4096x2160_25"
"4096x2160_24"
"4096x2160_24_0"
"4096x2160_24_1"
"4096x2160_24_2"
"3840x2160"
"3840x2160_60"
"3840x2160_30"
"3840x2160_30_0"
"3840x2160_30_1"
"3840x2160_30_2"
"3840x2160_25"
"3840x2160_25_0"
"3840x2160_24"
"3840x2160_24_0"
"3840x2160_24_1"
"3840x2160_24_2"
"1920x1080"
"1920x1080_120"
"1920x1080_100"
"1920x1080_60"
"1920x1080_60_0"
"1920x1080_60_1"
"1920x1080_50"
"1920x1080_30"
"1920x1080_30_0"
"1920x1080_25"
"1920x1080_24"
"1920x1080_24_0"
"1360x768"
"1360x768_60"
"1280x1024"
"1280x1024_60"
"1280x720"
"1280x720_60"
"1280x720_60_0"
"1280x720_50"
"1152x864"
"1152x864_60"
"1024x768"
"1024x768_60"
"800x600"
"800x600_60"
"720x576"
"720x576_50"
"720x480"
"720x480_60"
"720x400"
"720x400_70"
[b]"640x480"
"640x480_60"
"640x480_60_0"[/b]

Modes other than those were listed as invalid (interlaced modes are an example of modes rejected, plus modes which the clock is incapable of reaching would be invalid).

Notice that it chose a virtual buffer of 640x480 and that there is a valid 640x480@60Hz mode accepted. However, the final line of the log is this:

[   162.050] (II) NVIDIA(0): Setting mode "HDMI-0: nvidia-auto-select <b>@3840x2160</b> +0+0 {ViewPortIn=3840x2160, ViewPortOut=3840x2160+0+0}"

This mode is 3840x2160, which is supported. The part which seems off is that it chose a 640x480 virtual screen size even though the mode is 3840x2160.

I have a monitor running at 1680x1050 using the same hardware and L4T release, and the virtual buffer and monitor modes match with both being 1680x1050. I think the issue is that something is setting a screen buffer at a size which doesn’t match the chosen monitor mode, but I’m not sure if this is really the issue. Certainly if a monitor runs in a mode with a smaller size than the buffer you will see only a part of the desktop and other parts will be clipped (though the mouse would be able to navigate to those pieces by scrolling the entire window). I’ve not seen a reverse case when the monitor was at a very high resolution and the screen was only a tiny subset, so I have no idea how it should behave…perhaps there would be a tiny square of something displayed at either a corner or in the middle. Do you have any indication that some small chunk of the screen is actually active?

Also, has there been anything which might have been done at some earlier point in time which may have told the driver that the virtual screen size should be 640x480? Possibly if something else used this size, then the size would be reused and not properly discarded when the larger monitor was added.

I think it may be a known issue: the first mode from your EDID turns out not to be supported by the display controller.

To check whether it is a pure mode-selection issue, could you try turning on the fallback mode in edid.c?

Please modify

int tegra_edid_get_monspecs(struct tegra_edid *edid, struct fb_monspecs *specs)
{
	int i;
	int j;
	int ret;
	int extension_blocks;
	struct tegra_edid_pvt *new_data, *old_data;
	u8 checksum = 0;
	u8 *data;
	bool use_fallback = false; /* set this to true */

Hi,
@linuxdev, after the Jetson TX2 finishes booting and I then plug in the 4K monitor, the display works fine; I do not do anything else.
@WayneWWW, I will try your method and give feedback later.
Thanks.
P.S. When I test with a 4K monitor from Samsung, it works well without this bug.

Hi,

I had a quite similar issue.
I found out it was due to NetworkManager.
Try deactivating it:
sudo systemctl disable NetworkManager
Then configure systemd-networkd if you need it.
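If it helps, a minimal systemd-networkd setup might look like this (the file name, interface name, and DHCP choice are only examples; adjust to your network):

```
# /etc/systemd/network/20-wired.network
[Match]
Name=eth0

[Network]
DHCP=yes
```

Then enable and start the service with: sudo systemctl enable --now systemd-networkd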

bye

Hi WayneWWW,
If I turn on the fallback mode in edid.c as you suggested, the Jetson TX2 can boot successfully with the 4K LG monitor, even if I plug in the HDMI cable before the TX2 finishes booting.
I tried capturing 1920x1080 video from the camera, and it also works well:

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvoverlaysink -e

However, if I try to capture 4K video from the camera, it does not work:

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' ! nvtee ! nvoverlaysink -e

On the serial console, I got this message:

[  396.918158] tegradc 15210000.nvdisplay: Vertical downscale ratio greater than 2x

Could you please tell me how to solve this problem?
Thanks

Hi Quang_OpenStack,

Sorry, this is just a workaround for your issue. It appears that the monitor reports a detailed mode which our display driver does not support. It is also related to the driver's boot-up logic, which is why only hotplugging avoids the error. If you want to render 4K video, I would suggest using another monitor, or hotplugging the display after boot.

I have filed this issue internally and hope it could be resolved soon.

Dear WayneWWW,
I hope to see this problem solved soon.
Thanks

Hi Quang_OpenStack,

I'm not sure how much this error is related to our monitors' capabilities.
Is yours able to display 4K?

Mine certainly is not, but I can see this error as well with the standard onboard camera and my old monitor (using 1680x1050 59.96*+ 74.90).
Indeed:

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), format=I420, width=1280, height=720, framerate=30/1' ! nvoverlaysink overlay-w=640 overlay-h=480

works, although the frame aspect ratio conversion took the bottom-left corner as its reference.

However, using :

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), format=I420, width=2592, height=1944, framerate=30/1' ! nvoverlaysink overlay-w=640 overlay-h=480

or

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), format=I420, width=2592, height=1458, framerate=30/1' ! nvoverlaysink overlay-w=640 overlay-h=480

leads to the same nvdisplay errors in dmesg.
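Those failures are consistent with the wording of the error: with a 640x480 overlay, the vertical downscale ratio (source height divided by overlay height) exceeds 2x for the two failing source heights but not for 720p. A quick sketch of the arithmetic, assuming that is indeed the check tegradc applies:

```shell
# vertical downscale ratio = source height / overlay height (480 here);
# the tegradc message suggests ratios above 2x are rejected on TX2
for h in 720 1458 1944; do
  awk -v h="$h" 'BEGIN {
    r = h / 480
    printf "%4d -> 480: %.2fx %s\n", h, r, (r > 2 ? "(> 2x, rejected)" : "(ok)")
  }'
done
```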

Using nvvidconv in the pipeline seems to be a good workaround:

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), format=I420, width=2592, height=1944, framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420, width=640, height=480' ! nvoverlaysink overlay-w=640 overlay-h=480

I'm not sure why you’re using nvtee, but you may give it a try with a lower resolution as the output of nvvidconv, matching the overlay-w and overlay-h parameters.

Dear Honey_Patouceul,
I only have the problem when I plug in the HDMI cable before booting finishes. Otherwise, the Jetson TX2 works well at 4K resolution.
Did you test other 4K monitors?

Hi Honey_Patouceul,

I'm not sure if I understand your problem correctly. Is this your problem: “when using a larger source resolution, nvoverlaysink with a 640x480 overlay crashes”?

Hi Wayne,

I have no hard problem, I’m mainly trying to understand how nvoverlaysink behaves, as it is not so obvious to me and documentation is not helping a lot. I’m also sharing my observations here: https://devtalk.nvidia.com/default/topic/1026818/nvoverlaysink-parameters-/.

During my experiments, I’ve faced the nvdisplay error:

tegradc 15210000.nvdisplay: Vertical downscale ratio greater than 2x

and searching the forum for this led me to this thread… So I was sharing my observations and a potential workaround with nvvidconv.
Can you reproduce this, or is it just a side effect of my monitor? I haven’t yet understood why it gives this error for two resolutions, even with the 4:3 frame aspect ratio expected by the overlay-w and overlay-h parameters, while a 16:9 frame aspect ratio works. The error message seems to indicate that a vertical downscale greater than 2x is the problem, but this works on the TX1:

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), format=I420, width=2592, height=1944, framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420, width=2592, height=1944' ! nvoverlaysink overlay-w=640 overlay-h=480

Furthermore, I see a weird display with:

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), format=I420, width=2592, height=1944, framerate=30/1' ! nvoverlaysink

while other camera modes are OK. On my TX2 it gives a red screen; I can only see a few lines at the top from the camera, but painted red. On my TX1 it is better, but the bottom half of the image flickers a lot (note that a different monitor is connected to each Jetson: the TX2 monitor uses 1680x1050 and the TX1 monitor uses 1080p).
I can see this with the R28.1 standard device tree, and there is no difference with the patched device tree flashed to partition 15 (although the latter does allow using overlay 2).

Hi Honey_Patouceul,

We had a similar bug with a red screen on TX2 recently. I’ll dig into it with the internal team. Sorry for the inconvenience.