Reference to Xorg.conf and Resolution Settings

This is being posted on the TX2 board, but should apply equally for everything from the TK1 through Xavier. I am hoping to clarify some video display topics which are a recurring theme on the forums. This is in part a question for NVIDIA, and in part a restatement of what is already known for other users on the topic (in other words, this is purposely longer than it needs to be for simply asking a question). This particular post is about “nvidia-auto-select”, modelines, and how to pick modes via “/etc/X11/xorg.conf”. There seems to be a bug forcing nvidia-auto-select to ignore otherwise valid EDID mode selections. An EDID mode may be listed simultaneously in the same log as both rejected and valid.

The equivalent documentation on X11 general driver setup (and specific to “/etc/X11/xorg.conf”) for desktop PCs is a good starting document. Even so, that documentation is not actually valid for Jetsons and integrated GPUs, which are directly attached to the memory controller. An example desktop PC document can be found here:
https://download.nvidia.com/XFree86/Linux-x86_64/396.54/README/

It would be good to document the equivalent of the above document’s “Appendix B. X Config Options” for Jetsons and integrated GPUs (“iGPU”). At a minimum, that single appendix on xorg.conf could be considered required reading for anyone needing fine configuration details, yet no such document exists for embedded GPUs. Currently it is difficult to know how to make even simple configuration changes, e.g., selecting a different video mode to default to. Much of the existing desktop PC document could simply be ignored once it is stated that only EDID modes will work and that no interlaced modes will work.

Modelines

Historically, video drivers have been given a “modeline” (or many modelines) to describe a mode (or list of modes) which a monitor can use. Before the DDC wire was added to connectors newer than the 15-pin D-sub VGA connector, a manually entered modeline was the only way to tell a video driver what kind of setup would work. The extra DDC wire on HDMI, DisplayPort, and most DVI (which uses the i2c protocol and provides EDID data) is what made monitor configuration automatic. EDID contains all of the information needed to automatically create a modeline instead of requiring manual entry of the information (e.g., a monitor’s driver disk has modelines, and this is what the ancient monitor driver floppy installs…manually editing xorg.conf with vi and some knowledge does the same thing). In the past, before DDC/EDID, one could even install a database of known monitor timings and the end user could pick from the list. EDID ended that, but even though modelines no longer require manual intervention, modelines are still relevant.

I am limiting this question to one of “how to select video modes or modelines” in xorg.conf when the driver agrees the EDID mode is valid. No interlaced mode will be considered since the driver rejects all interlaced modes. Non-EDID modes are not considered since only modes found from an EDID query are considered valid. Manual selection of a valid EDID mode other than what “nvidia-auto-select” would pick is the main goal of this question.

To start with, if a monitor is able to communicate its EDID, then it is available as hex data via this:

sudo -s
cat `find /sys -name 'edid'`
exit

…if this data cannot be found, then it means your monitor (or cables or connector adapters) cannot work with EDID. The iGPU normally requires EDID. When you find this data you should save a copy somewhere for reference.
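As a convenience, one way of keeping that copy is shown below (a sketch only; the file name and location are just an example):

sudo -s
# Save a timestamped copy of the EDID hex dump for later reference.
cat `find /sys -name 'edid'` | tee /root/edid-$(date +%Y%m%d).hex
exit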

If you copy the EDID hex data and paste it into http://www.edidreader.com, then you can see everything the monitor has told your video card about itself. At a minimum you should check whether http://www.edidreader.com says the checksum is valid. There is a predefined list of available modes for a given GPU/driver, and after reading the monitor’s EDID the driver must decide which of the modes claimed by the monitor are within that predefined list. Something which would be of extreme value as new documentation is a list of the predefined modes accepted by each driver release on the various Jetson platforms. Without such a list it is possible to query an existing monitor as to what is valid, but it isn’t possible to guess from specifications whether or not a future monitor purchase will work. A method to tell the driver to dump information on all known modes, regardless of the current monitor’s EDID, would also suffice (the driver would then be “self documenting”).
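If you prefer to verify the base block checksum locally rather than via the web page, here is a minimal sketch (it assumes the “/sys” “edid” file is the plain hex text shown by the command above, and uses gawk since that is already used later in this post):

sudo -s
# The 128 bytes of the base EDID block must sum to 0 modulo 256.
cat `find /sys -name 'edid'` | gawk '
    { for (i = 1; i <= NF; i++) if (++n <= 128) sum += strtonum("0x" $i) }
    END { if (sum % 256 == 0) print "base block checksum ok"; else print "base block checksum BAD" }'
exit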

To specify video modes on any Jetson through “/etc/X11/xorg.conf” (a persistent, server-wide configuration), the theory is that we need to add a modeline which is an exact match to a mode which is reported by EDID (if the driver says the mode is acceptable, then we should be able to select that mode via a modeline). To do this we start by asking the NVIDIA driver to tell us what modes it sees, and then to build our modeline based on this. Adding ‘Option “ModeDebug”’ to the Device section of xorg.conf accomplishes the task of asking the driver to tell us about what it sees relative to the attached monitor’s EDID. An example xorg.conf edit, with “ModeDebug” added to what is already there by default:

Section "Device"
    Identifier  "Tegra0"
    Driver      "nvidia"
    Option      "AllowEmptyInitialConfiguration" "true"
    Option      "ModeDebug"
EndSection

The X11 log is the file “/var/log/Xorg.0.log”, where the “0” comes from the environment variable “DISPLAY”. The default is normally “export DISPLAY=:0” for the first local display, but there are exceptions, e.g., Xavier seems to use “:1” (in that case “:0” is a different buffer being accessed by something other than the current desktop…perhaps CUDA or a virtual desktop). One can have as many displays as desired so long as each has a unique context via “DISPLAY”; not all "DISPLAY"s actually go to a monitor, but all of them do go to a GPU or framebuffer.
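If you are unsure which display number (and therefore which log) your particular release uses, a quick check is something like the following (output will differ by release and by display manager):

# Each running X server owns a socket here; "X0" means DISPLAY=":0", "X1" means ":1", and so on.
ls /tmp/.X11-unix/
# The server's command line shows which display (and sometimes which log file) it was started with.
ps aux | grep -i '[X]org'
# From inside a graphical session this prints the display that session is using.
echo $DISPLAY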

With option “ModeDebug” in the “Device” section (the “Device” is a GPU driver coupled to a GPU), the driver will explicitly log all information about all modes the HDMI cable’s EDID query results in. With no monitor (or no DDC wire) there will be no EDID information; each time a monitor is connected and detected there should be EDID data logged for that monitor, and the Xorg.0.log should reflect this. HDMI is hot-plug, and upon a hot-plug connect event EDID is processed. Upon a hot-plug disconnect the previous EDID mode (which might have been a default mode if no monitor was ever connected) will most likely be preserved. A VGA monitor without EDID could in theory use the mode which was previously present due to preservation of a prior mode.
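If you want to watch a hot-plug event being processed live, something like this (started before plugging the cable in, and adjusted for your log number) will show new entries as they are appended; the exact strings of interest may differ by driver release:

# Follow the X log and print EDID/mode validation lines as a monitor is connected or removed.
tail -f /var/log/Xorg.0.log | grep -iE 'edid|Validating Mode'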

The reason having EDID data in Xorg.0.log is so important is that EDID data all by itself does not tell us what the driver thinks of the various modes. It is possible for a monitor to support modes beyond the range of the GPU, or vice versa. The “ModeDebug” log will not only tell you which modes were reported, it will also tell you what the driver thinks of those modes, along with the technical parameters needed to manually construct a modeline the way the driver itself would construct one. EDID data on its own, without knowing the driver, says nothing about what the driver will accept.

If you are working on your Jetson’s video configuration, add ‘Option “ModeDebug”’ now to the “/etc/X11/xorg.conf” file within the “Device” section (a.k.a. driver options for a specific GPU), reboot, and save a copy of the resulting log. Between this log and the “/sys” “edid” file hex data we now know just about everything about the monitor and its relationship to the NVIDIA driver.

To see a general table of modes your Jetson’s video will accept from the known EDID modes of your monitor, try this with your “ModeDebug” log (this is the “mode pool” summary…the final list of possibilities for a given monitor using this particular GPU and driver combination):

gawk '/Modes in ModePool for/,/End of ModePool for/' /var/log/Xorg.0.log

(see footnote [1])

Hint: The statement in the mode pool of “from: NVIDIA Predefined” is subtle, but extremely important. On a desktop PC other modes beyond predefined modes may be achievable, but with the current generation of Jetsons there are no other achievable modes with any monitor. These predefined modes are the logical intersection of what the monitor claims it can do and what the GPU/driver allows. In a multi-monitor setup there will be one mode pool for each monitor.

To see detailed descriptions of all modes reported in EDID, regardless of whether the mode is accepted or rejected, try this:

gawk '/Validating Mode/,/Mode \".+\" is (valid|invalid)/' /var/log/Xorg.0.log | less -p 'rejected|invalid|valid'

…notice that the end of the mode will say if the mode is valid or invalid.
(see footnote [2])

An example valid EDID mode from a real world monitor is:

[     9.242] (II) NVIDIA(GPU-0):   Validating Mode "1280x960_60":
[     9.242] (II) NVIDIA(GPU-0):     Mode Source: NVIDIA Predefined
[     9.242] (II) NVIDIA(GPU-0):     1280 x 960 @ 60 Hz
[     9.242] (II) NVIDIA(GPU-0):       Pixel Clock      : 108.00 MHz
[     9.242] (II) NVIDIA(GPU-0):       HRes, HSyncStart : 1280, 1376
[     9.242] (II) NVIDIA(GPU-0):       HSyncEnd, HTotal : 1488, 1800
[     9.242] (II) NVIDIA(GPU-0):       VRes, VSyncStart :  960,  962
[     9.242] (II) NVIDIA(GPU-0):       VSyncEnd, VTotal :  965, 1000
[     9.243] (II) NVIDIA(GPU-0):       H/V Polarity     : +/+
[     9.243] (II) NVIDIA(GPU-0):     Mode "1280x960_60" is valid.

If you look at what a modeline is, you’ll find this Wikipedia description of the components of a modeline useful. See:
https://en.wikipedia.org/wiki/XFree86_Modeline#Syntax

Here is an excerpt of the Wikipedia description:

...
 Modeline syntax: pclk hdisp hsyncstart hsyncend htotal vdisp vsyncstart vsyncend vtotal [flags]
 Flags (optional): +HSync, -HSync, +VSync, -VSync, Interlace, DoubleScan, CSync, +CSync, -CSync

 Modeline "1600x1200" 155   1600 1656 1776 2048   1200 1202 1205 1263
 #           (Label) (clk)     (x-resolution)        (y-resolution)
 #                     |
 #              (pixel clock in MHz)

In the example, other than a title for the mode, the modeline can be created by reading the verbose “ModeDebug” log information in the order it appears (“xorg.conf” token “Modeline” goes in ‘Section “Monitor”’). This is a modeline representing the real world example above from Xorg logs (‘Mode “1280x960_60” is valid’):

ModeLine "1280x960_60" 108.00 1280 1376 1488 1800 960 962 965 1000 -hsync +vsync

…this example modeline, when put in the “Monitor” section of xorg.conf, is exactly equivalent to the EDID mode, and so it should work there. Note that the “ModeDebug” log labels each modeline parameter, so you know which field each value matches.

When the NVIDIA video driver picks a mode via “nvidia-auto-select”, then this modeline is created in RAM and used for configuring the display. In order to use a mode via a modeline the modeline must exist and exactly match what EDID provides and what the driver accepts. Automatic determination of a modeline should be indistinguishable from a manually created modeline for any given mode taken from “ModeDebug”. Modelines not matching a mode in “ModeDebug” should be summarily rejected. It seems to be a bug that correctly matching modelines are rejected if they are not the mode “nvidia-auto-select” would pick.

Someone may wonder why there are more parameters than those which specifically set a mode, e.g., the timings related to sync start and end. Not all monitors start the displayable content at exactly the same time as the sync signal (especially analog monitors). There may be a need for translating/panning an image left/right or up/down, and there may be bounding box pixels in a frame which are not actually displayable (clipping). A mode is fairly distinct, but the timings used to adjust an individual monitor to clip and center correctly will differ among monitors. Two monitors using the same mode may differ in the details of their modelines.
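As a quick sanity check when reading or hand-building a modeline, note that the vertical refresh is just the pixel clock divided by the two totals. For the “1280x960_60” example above:

# refresh (Hz) = pixel clock / (HTotal * VTotal)
gawk 'BEGIN { printf "%.3f Hz\n", 108000000 / (1800 * 1000) }'
# prints: 60.000 Hz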

Problems…

Ok, so barring the syntax question of how to actually set up xorg.conf to pick an EDID mode other than what “nvidia-auto-select” picks, I’ll state ahead of time that it doesn’t seem possible to disable “nvidia-auto-select”. From what I can tell we need a way to disable “nvidia-auto-select” when a valid modeline is used to select which EDID mode a given monitor will use. It seems there is a bug (or some required alternate xorg.conf syntax) where valid EDID modes set via modeline are rejected and overridden.

A sample configuration using the previously listed monitor follows. Here are the steps to try to enable “1280x960_60” (the instructions should allow reproduction of the issue for any monitor providing EDID). The mode this particular example monitor normally boots to (when not being forced into another mode) is “1680x1050”. The EDID of this monitor:

# cat `find /sys -name edid`
 00 ff ff ff ff ff ff 00 5a 63 1e 59 01 01 01 01
 1c 11 01 03 80 2f 1e 78 2e d0 05 a3 55 49 9a 27
 13 50 54 bf ef 80 b3 00 a9 40 95 00 90 40 81 80
 81 40 71 4f 31 0a 21 39 90 30 62 1a 27 40 68 b0
 36 00 da 28 11 00 00 1c 00 00 00 ff 00 51 41 35
 30 37 32 38 35 32 39 30 34 0a 00 00 00 fd 00 32
 4b 1e 52 0f 00 0a 20 20 20 20 20 20 00 00 00 fc
 00 56 58 32 32 33 35 77 6d 0a 20 20 20 20 00 ea

Pasting this into http://www.edidreader.com shows all modes are primary modes and that this older monitor has no extension modes (the driver will refuse extensions…whether that means all extensions or just some of them I do not know). Under “standard display modes”, “1280x960_60” is not interlaced, so this mode should work. Note that only the horizontal resolution of “1280” is listed, so the aspect ratio must be used to find the vertical size: the vertical dimension for 1280 at 4:3 aspect is “1280 * (1/aspect) == 1280 * (3/4) == 960”. The mode pool verifies that 1280x960_60 is a predefined mode. The “ModeDebug” log shows the mode as “valid”.
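Just to make that aspect arithmetic concrete in shell form:

# vertical = horizontal * 3 / 4 for a 4:3 aspect ratio
echo $(( 1280 * 3 / 4 ))
# prints: 960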

However, there is a “catch” here…the mode is logged more than once, with contradictory results, when this xorg.conf ModeLine is used (as previously stated, this is an “NVIDIA Predefined” mode):

ModeLine "1280x960_60" 108.00 1280 1376 1488 1800 960 962 965 1000 -hsync +vsync

This is particularly important because the very same mode is both rejected and validated. Different parts of the driver disagree as to whether “1280x960_60” is valid. Here is an excerpt from a single log, taken prior to any GUI login (the login manager is present, but the window manager has not started), where the mode is shown as both valid and invalid:

[     9.177] (II) NVIDIA(GPU-0):   Validating Mode "1280x960_60":
[     9.177] (II) NVIDIA(GPU-0):     Mode Source: NVIDIA Predefined
[     9.177] (II) NVIDIA(GPU-0):     1280 x 960 @ 60 Hz
[     9.177] (II) NVIDIA(GPU-0):       Pixel Clock      : 108.00 MHz
[     9.177] (II) NVIDIA(GPU-0):       HRes, HSyncStart : 1280, 1376
[     9.178] (II) NVIDIA(GPU-0):       HSyncEnd, HTotal : 1488, 1800
[     9.178] (II) NVIDIA(GPU-0):       VRes, VSyncStart :  960,  962
[     9.178] (II) NVIDIA(GPU-0):       VSyncEnd, VTotal :  965, 1000
[     9.178] (II) NVIDIA(GPU-0):       H/V Polarity     : +/+
[     9.178] (II) NVIDIA(GPU-0):     Mode "1280x960_60" is valid.
...
[     9.178] (WW) NVIDIA(GPU-0):   Validating Mode "1280x960_60":
[     9.178] (WW) NVIDIA(GPU-0):     Mode Source: X Configuration file ModeLine
[     9.178] (WW) NVIDIA(GPU-0):     1280 x 960 @ 60 Hz
[     9.178] (WW) NVIDIA(GPU-0):       Pixel Clock      : 108.00 MHz
[     9.178] (WW) NVIDIA(GPU-0):       HRes, HSyncStart : 1280, 1376
[     9.178] (WW) NVIDIA(GPU-0):       HSyncEnd, HTotal : 1488, 1800
[     9.178] (WW) NVIDIA(GPU-0):       VRes, VSyncStart :  960,  962
[     9.178] (WW) NVIDIA(GPU-0):       VSyncEnd, VTotal :  965, 1000
[     9.178] (WW) NVIDIA(GPU-0):       H/V Polarity     : -/+
[     9.178] (WW) NVIDIA(GPU-0):     Mode is rejected: Only modes from the NVIDIA X driver's
[     9.178] (WW) NVIDIA(GPU-0):     predefined list and modes from the EDID are allowed
[     9.178] (WW) NVIDIA(GPU-0):     Mode "1280x960_60" is invalid.
...

I do not know why the mode is listed twice with disagreement between “valid” and “invalid”. Perhaps it is just because of the syntax used in xorg.conf (see footnote [3] for the xorg.conf used).

Remember that each time a monitor connect event is seen EDID will be processed. Starting a new instance of an X11 server could also be considered a connect event. Unfortunately I do not know why the mode was examined twice prior to any other connect event.

Once a GUI login has occurred, some information can be gathered via “xdpyinfo”. Apparently the 1280x960_60 mode is allowed as a virtual desktop, but the actual desktop is forced to the “nvidia-auto-select” size of “1680x1050”. I don’t really believe the virtual desktop is actually used because the desktop requires no panning to see the whole desktop, nor is there any clipping…the virtual and physical desktops both appear to be “1680x1050” regardless of what the driver thinks (see footnote [4] for an alias to quickly view this information). Example:

DISPLAY=:0 xdpyinfo | egrep "dimensions"
# dimensions:    1680x1050 pixels (445x278 millimeters)
egrep "Virtual screen size" /var/log/Xorg.0.log'
# [     9.198] (II) NVIDIA(0): Virtual screen size determined to be 1280 x 960

What is the proper method of manually configuring xorg.conf for a mode which is valid in EDID? I am thinking perhaps a MetaMode token is ignored, but I do not know of another way to mark a mode for use.

Footnotes:

[1][2][4]: For convenience you might want to add these bash functions to “~/.bash_aliases” (or “~/.bashrc”):

# Footnote [1]
# Displays the mode pool from the default Xorg.0.log log file. If an argument is
# named, then this instead gives the mode pool of the named log.
function pool () {
    local logname="/var/log/Xorg.0.log";
    if [[ $# -gt 0 ]]; then
        logname="$1";
    fi
    gawk '/Modes in ModePool for/,/End of ModePool for/' "${logname}";
}

# Footnote [2]
# Displays and highlights "ModeDebug" accept/reject comments from the default
# Xorg.0.log log file. If an argument is named, then this instead gives the
# "ModeDebug" accept/reject commands of the named log.
function modes () {
    local logname="/var/log/Xorg.0.log";
    if [[ $# -gt 0 ]]; then
        logname="$1";
    fi
    gawk '/Validating Mode/,/Mode \".+\" is (valid|invalid)/' "${logname}" | less -p 'rejected|invalid|valid';
}

# Footnote [4]
# Displays a logged in session's idea of virtual screen size and actual screen
# size.
alias dim='DISPLAY=:0 xdpyinfo | egrep "dimensions"; egrep "Virtual screen size" /var/log/Xorg.0.log'

[3]: The example’s xorg.conf:

Section "Module"
    Disable     "dri"
    SubSection  "extmod"
        Option  "omit xfree86-dga"
    EndSubSection
EndSection

Section "Device"
    Identifier  "Tegra0"
    Driver      "nvidia"
    Option      "AllowEmptyInitialConfiguration" "true"
    Option      "ModeDebug"
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    VendorName     "Viewsonic"
    ModelName      "VX2235wm"
    ModeLine        "1280x960_60" 108.00 1280 1376 1488 1800 960 962 965 1000 -hsync +vsync
    HorizSync       31.0 - 76.0
    VertRefresh     56.0 - 76.0
    Option         "DPMS"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Tegra0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "MetaModes" "1280x960_60 +0+0"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection

Our X driver rejects modes which are not in the EDID list. As a result, adding a modeline via command or xorg.conf will not work. I tried this a long time ago (maybe around rel-24.1).

I think the logic behind this is that the Tegra graphics and display drivers have some limitations on modes. Even if your hard-coded mode comes from the EDID, that does not guarantee the display driver supports it.
For example, if your TV supports a YUV mode and you pick that mode from your EDID and hack it into xorg.conf, Tegra still cannot support it.

The problem is that this mode, although not auto-selected, is an exact match for EDID. The driver claims it supports the mode, and then it claims the mode is not supported. The mode pool shows the mode (and it is from NVIDIA predefined). How does one select a mode which the driver says is valid and has in its mode pool? This and other cases I know of are all for HDMI monitors, not televisions.

Additional thought: Why would xrandr be allowed to make changes among modes which can’t be made through xorg.conf? Is xrandr the only way to make mode changes?
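For reference, the runtime equivalent of what I am trying to do in xorg.conf would be something along these lines with xrandr (the output name “HDMI-0” is only an example taken from later in this thread; check “xrandr --query” for yours):

# List the outputs and the modes the running X server exposes for each.
DISPLAY=:0 xrandr --query
# Ask for one of the listed modes at runtime (output and mode names must match the query output).
DISPLAY=:0 xrandr --output HDMI-0 --mode 1280x960 --rate 60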

Just found another option that can enable more logging in Xorg. Please give it a try.
Option "moreVerboseModeValidation" "true"

I will discuss this with internal team for more info.

Using ‘“moreVerboseModeValidation” “true”’ I see this:

[    10.853] (WW) NVIDIA(0): Option "moreVerboseModeValidation" is not used

No logging seems to occur from this…perhaps it doesn’t apply to iGPUs.

I am facing problems too since I use a cheap HDMI monitor, so I am also looking into this topic and will add my three cents.

Some unusual things about my case:

  • I’m using a TX2 as received flashed with R28.1.
  • I’m running a R28.2-DP2 (pre-release) on a SATA SSD:
head -n 1 /etc/nv_tegra_release 
# R28 (release), REVISION: 2.0, GCID: 10136452, BOARD: t186ref, EABI: aarch64, DATE: Fri Dec  1 14:20:33 UTC 2017

This is done through extlinux.conf, with the R28.2 Linux kernel image placed in the R28.1 eMMC /boot directory (a rough sketch of such an entry follows this list).

  • I’ve been ok with this config so far, it is up-to-date for apt, and for some reasons I’d like to continue with it.
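For completeness, the extlinux.conf entry mentioned above looks roughly like this (a sketch only; the paths and device names are examples based on the default R28.x file, where the stock entry points root at /dev/mmcblk0p1 and I point it at the SSD instead):

TIMEOUT 30
DEFAULT primary

MENU TITLE p2771-0000 eMMC boot options

LABEL primary
      MENU LABEL primary kernel
      # R28.2 kernel Image copied into the R28.1 eMMC /boot directory.
      LINUX /boot/Image
      # Root filesystem on the SATA SSD instead of eMMC.
      APPEND ${cbootargs} root=/dev/sda1 rw rootwait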

My monitor EDID:

sudo cat /sys/kernel/debug/tegradc.0/edid
 00 ff ff ff ff ff ff 00 4c 2d 7a 06 00 00 00 00
 32 13 01 03 80 10 09 78 0a ee 91 a3 54 4c 99 26
 0f 50 54 bd ee 00 01 01 01 01 01 01 01 01 01 01
 01 01 01 01 01 01 66 21 50 b0 51 00 1b 30 40 70
 36 00 a0 5a 00 00 00 1e 01 1d 00 72 51 d0 1e 20
 6e 28 55 00 a0 5a 00 00 00 1e 00 00 00 fd 00 18
 4b 1a 44 17 00 0a 20 20 20 20 20 20 00 00 00 fc
 00 53 41 4d 53 55 4e 47 0a 20 20 20 20 20 01 43
 02 03 23 f1 4b 84 13 05 14 03 12 10 1f 20 21 22
 23 09 07 07 83 01 00 00 e2 00 0f 67 03 0c 00 10
 00 b8 2d 01 1d 00 bc 52 d0 1e 20 b8 28 55 40 a0
 5a 00 00 00 1e 01 1d 80 18 71 1c 16 20 58 2c 25
 00 a0 5a 00 00 00 9e 01 1d 80 d0 72 1c 16 20 10
 2c 25 80 a0 5a 00 00 00 9e 8c 0a d0 8a 20 e0 2d
 10 10 3e 96 00 a0 5a 00 00 00 18 00 00 00 00 00
 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 dc

Using edidreader.com, I see the serial number is 0, so I am unsure whether it is an original Samsung or a counterfeit.

The Problem:
While it was working fine with a previous monitor or with an HDMI TV, with this monitor I am seeing some problems.
With R28.1, it runs fine at resolution 1360x768.
But with R28.2-DP2 (using the same default t186ref xorg.conf), here is what happens:
0 - The virtual console starts at 768p.
When Ubuntu starts it gets:
1 - 1080p with correct colors (a bit dark, though) for about 1 second.
2 - Then it switches to another 1080p with flashy (pink) colors where I can log in, but the leftmost, topmost, rightmost and bottommost margins are wrong and I can’t see the whole screen.
3 - When Ubuntu starts my session at 1080p I still get flashy colors, I can’t see the menu bar, and I can’t see the left half of the icons in the left bar.
3 bis - If I select 1360x768 in the Ubuntu display settings, then when Ubuntu starts my user session the 1360x768 mode works fine and the colors are OK. However, for some applications such as Firefox browsing the Jetson forum, I have to zoom out to 80 or 90% in order to see everything, so I suppose something is confused somewhere.

So it is not very harmful since I’m able to use it, but since R28.2 showed my monitor is able to do 1080p, I have tried to see whether I could adjust the margins with modelines. I have seen that the NVIDIA X server rejects any modeline from the user through xorg.conf, as well as the built-in X server modes. In order to save many useless lines in Xorg.0.log, I’ve tried using option ModeValidation set to NoXServerModes; it is accepted but the behavior seems unchanged…modes are probed but not validated.
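For anyone wanting to try the same experiment, the option goes into xorg.conf roughly like this (a sketch based on the desktop driver README’s “ModeValidation” tokens; whether the Jetson driver fully honors it is exactly what I am unsure about):

Section "Screen"
    Identifier     "Screen0"
    Device         "Tegra0"
    Monitor        "Monitor0"
    DefaultDepth    24
    # Skip the X server's built-in modes during validation; other tokens
    # such as "NoMaxPClkCheck" can be added as a comma-separated list.
    Option         "ModeValidation" "NoXServerModes"
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection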

The only way I’ve found to have some influence is using metamodes, but only when using a metamode that the NVIDIA X server has itself logged. Otherwise it is silently ignored and falls back to nvidia-auto-select.

If I specify metamode 1360x768, it seems to work, but as soon as the Ubuntu login screen appears there seems to be a kind of reset or switch, and it again sets the nvidia-auto-select 1080p flashy mode. Once I’m logged in, it sets the mode according to my Ubuntu settings.

So here are some questions:

  1. Is there a document that describes this X server/Ubuntu starting process? Or could you give some explanation?

  2. Is there a document that describes the available options in the X config?

  3. Is there a chance I can use 1080p with this monitor? Is there a way to adjust margins? Is there a way to adjust colors? I should mention that I see varying high DPIs (215, 216 or 302, 302), but disabling the DPI from EDID and falling back to 75 dpi didn’t change anything (the override I tried is sketched just after this list).

  4. Is a custom EDID file option supported?
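For question 3, the DPI override I mentioned was roughly this in the “Device” section (option names taken from the desktop driver README; as noted above, it did not change anything here):

Section "Device"
    Identifier  "Tegra0"
    Driver      "nvidia"
    Option      "AllowEmptyInitialConfiguration" "true"
    Option      "ModeDebug"
    # Ignore the physical size/DPI reported by EDID and force a fixed DPI instead.
    Option      "UseEdidDpi" "false"
    Option      "DPI" "75 x 75"
EndSection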

These logs are attached:
Xorg.R28.1.log is the log with R28.1 and the standard xorg.conf, with just Option ModeDebug added.
Xorg.R28.2.log is the log with R28.2 and the standard xorg.conf, with just Option ModeDebug added.
Xorg.R28.2.Custom.log is the log with R28.2 and the attached xorg.conf requesting the 1360x768 metamode.

Thanks for any additional info.
Xorg.R28.1.log (160 KB)
Xorg.R28.2.log (159 KB)
xorg.conf.custom.txt (1.14 KB)
Xorg.R28.2.Custom.log (158 KB)

Additional info: It seems the screen dimension is wrong (reported as 160mm x 90mm, but it is actually about double that), as seen from xrandr:

xrandr --props
Screen 0: minimum 8 x 8, current 1360 x 768, maximum 32767 x 32767
HDMI-0 connected primary 1360x768+0+0 (normal left inverted right x axis y axis) 160mm x 90mm
	TegraOverlayBlendmode: Opaque 
		supported: Opaque, SourceAlphaBlend, PremultSourceAlphaBlend
	TegraOverlayPriority: 255 
		range: (0, 255)
	EDID: 
		00ffffffffffff004c2d7a0600000000
		32130103801009780aee91a3544c9926
		0f5054bdee0001010101010101010101
		010101010101662150b051001b304070
		3600a05a0000001e011d007251d01e20
		6e285500a05a0000001e000000fd0018
		4b1a4417000a202020202020000000fc
		0053414d53554e470a20202020200143
		020323f14b841305140312101f202122
		2309070783010000e2000f67030c0010
		00b82d011d00bc52d01e20b8285540a0
		5a0000001e011d8018711c1620582c25
		00a05a0000009e011d80d0721c162010
		2c2580a05a0000009e8c0ad08a20e02d
		10103e9600a05a000000180000000000
		000000000000000000000000000000dc
	BorderDimensions: 4 
		supported: 4
	Border: 0 0 0 0 
		range: (0, 65535)
	SignalFormat: TMDS 
		supported: TMDS
	ConnectorType: HDMI 
   1920x1080     60.00 +  59.95    50.00    30.00    29.97    25.00    24.00    23.98  
   1360x768      60.02* 
   1280x720      60.00    59.94    50.00  
   1024x768      75.03    70.07    60.01  
   832x624       75.05  
   800x600       75.00    72.19    60.32  
   720x576       50.00  
   720x480       59.94  
   720x400       70.04  
   640x480       75.00    72.81    67.06    59.94

This might be related to the above-mentioned high DPIs.

Hello everyone,

We want to set a 720x576 resolution for HDMI out instead of querying it from the monitor. Can we do that by editing xorg.conf?

Do we have to load EDID data as well to set this resolution?

Thanks and regards,

I can’t give you an answer. What I can tell you is that if the mode is not available via EDID, then you can’t use that mode. If the mode is not within the predefined mode list, then the mode also cannot be used. If your monitor does not have EDID, then there may be ways of programmatically doing this through a modified kernel, but someone else will need to give those details. If you do have EDID, then hopefully this information will be clarified.

Sorry for the late reply. I was out of the office for weeks. I am still checking whether we can provide a list of the Xorg options on the forum.

As for a custom mode and a custom EDID, the answer is NO. It definitely does not work through the L4T xorg.conf.

To use a custom EDID, you need to add it in the device tree (dts) or the kernel driver.

I am very curious about the EDID and device tree relationship (I have never considered the device tree as a method of picking a mode). I personally consider “custom” to be a mode not available as a predefined mode; I consider picking a mode within the allowed and predefined modes to simply be a manual pick of a mode which is predefined. In other words, I have been considering modification of the mode pool as custom, and picking an entry within the mode pool as standard and non-custom.

Is there a method via device tree to influence what nvidia-auto-select will pick from a mode pool when there are a large number of possible predefined modes? I’m still struggling to pick modes which I know the driver allows.

I am guessing 720x576 is not in the standard list of modes and is considered “custom” because it would not normally be part of a mode pool.

I think the reason “CustomEDID” does not work is that it requires X to pass a user-defined EDID from userspace to the kernel, and our driver does not implement that.
This avoids some potential problems during display initialization. As you know, the boot-up procedure of tegradc is complicated, and every use case would need to be considered before adding this to the X driver.

That also explains why an EDID in the device tree works well: it does not need to copy the EDID from userspace. Tegradc can simply use it at display init.

Is there any possibility the “edid” file in “/sys” could be made writable? If this file were writable, perhaps we could create a custom EDID with just the mode we want. This would in no way cause the driver to allow modes not already in the mode pool, but we really need a way to configure the resolution when the resolution is valid.

In the case of device tree, what is the entry, and what goes there? Is it just the modeline in the same format as what xorg.conf would use?

For the device tree solution, it seems there is a script in R28.2, located in the kernel directory and named nv-enable-hard-coded-kernel-boot-display-mode.sh, with a workaround for an fbconsole pixel clock calculation issue. This is described in the R28.2 documentation under kernelCustomization/DisplayConfigurationAndBringup/Hard-codingKernelDisplayBootModeForHDMI.

Hi WayneWWW/Honey_Patouceul,

Can you please tell me how to set a 720x576 resolution for HDMI out? We don’t have HDMI out exposed on the TX2 interface board; we want to send the HDMI output to an HDMI receiver (ADV7611) at this resolution.

What exactly do I need to change in the device tree, and where?

Thanks and Regards,
Shivlal

I haven’t tried it yet, so be aware this is pure speculation, but from what I’ve seen you would just edit the script I mentioned in /kernel, adapt it to your settings, and run that script to patch the dtb (read the documentation for details). Then flash the patched device tree into the Jetson and try.

shivlal12345,

Is your receiver providing an EDID to Tegra? When you connect the cable, does Tegra produce any new kernel log output?

Greetings,

This is to flag up that I’m having similar problems. The NVIDIA driver rejects modes provided by the monitor’s EDID. What happens is that the requested pixel clock is above 200 MHz, so the driver thinks it is invalid. I’m guessing this results from the fact that we’re using a DisplayPort to DVI-D converter (old monitor). If I force the driver to ignore the pixel clock check (ModeValidation NoMaxPClkCheck), then NVIDIA X Server Settings will let me choose the correct resolution…but the monitor just flickers uselessly.

Specs:
Ubuntu 18.04
Nvidia 390.XX driver, supported by Canonical
Quadro P4000
Dual monitor setup with separate X screens
Monitor 2 is Samsung SyncMaster XL30 connected via DP to DVI-D converter

If anyone can suggest the correct solution (if any) then please let me know.

Cheers,
JS

UPDATE - I upgraded to the NVIDIA 418 driver using the package from https://launchpad.net/~graphics-drivers/+archive/ubuntu/ppa, but this did not resolve the issue. The Quadro P4000 is supposed to support 4 connected screens at 4096x2160 @ 120Hz each, so I don’t believe there is actually a pixel clock issue. We should be able to run a monitor at 2560x1600 @ 60Hz, but the driver rejects this.
Xorg.1.log (287 KB)
xorg.conf.zip (1.8 KB)

Is it possible to use a DVI monitor with an HDMI-DVI converter on a Jetson Nano and have the screen display automatically, without manually modifying the xorg.conf file? If so, what kind of DVI? My monitor’s DVI port has 28 pins.

Some DVI monitors have EDID. Those should work with an HDMI-DVI converter, but beware that not all DVI (the analog variety) provides EDID…those will fail. If the monitor and adapter support digital EDID you should be ok. DVI came out during the transition from pure analog to digital, so there was some backwards compatibility (DVI-D is pure digital, DVI-I is mixed, DVI-A is pure analog). There are some pictures of the DVI connector variations you might find useful here:
https://en.wikipedia.org/wiki/Digital_Visual_Interface

If you are testing, and if EDID is available, then you should find some data via:

sudo -s
cat `find /sys -name 'edid'`
exit