Introduction to Multiple Monitors in X

This article explains the basic concepts and terms related to monitors and graphics cards in the X11 world. You need to know these terms if you ever need help with a multi-monitor environment. For actual configuration tutorials, see the links on the RandR 1.2 page.

The X Protocol Screen

The most important concept in understanding how X deals with multiple monitors is the X protocol screen. This article refers to it as SCREEN. Traditionally in X11, a SCREEN had a one-to-one correspondence with a physical display device (a monitor). Dual- or multi-head graphics cards did not really exist, and things were simple. The X configuration file (xorg.conf) has one Section "Screen" for each SCREEN, either explicitly written in the file, or nowadays implicitly created by the X server.
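For illustration, a traditional setup with two single-head cards and two monitors could be configured with one Section "Screen" per SCREEN, along the lines of the following sketch (the identifiers and BusID values here are made up for this example and must match your actual hardware):

Section "Device"
    Identifier  "Card0"
    Driver      "nouveau"
    # Adjust to the actual PCI location of the first card
    BusID       "PCI:1:0:0"
EndSection

Section "Device"
    Identifier  "Card1"
    Driver      "nouveau"
    # Adjust to the actual PCI location of the second card
    BusID       "PCI:2:0:0"
EndSection

Section "Screen"
    Identifier  "Screen0"
    Device      "Card0"
EndSection

Section "Screen"
    Identifier  "Screen1"
    Device      "Card1"
EndSection

Section "ServerLayout"
    Identifier  "Layout0"
    Screen   0  "Screen0"
    Screen   1  "Screen1"  RightOf "Screen0"
EndSection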

SCREENs are independent. They can have not only different video modes, but also different color depths and resolutions (dots-per-inch, used in scaling e.g. TrueType fonts). Windows can never span more than one SCREEN, nor can they be moved from one SCREEN to another. The only way (that your author knows about) to switch focus from one SCREEN to another is with the mouse. Internally, each SCREEN is driven by a separate driver (DDX) instance, since SCREENs had a one-to-one correspondence with graphics cards, too.

When a graphical application starts, it connects to an X server. Usually the environment variable DISPLAY specifies which X server to connect to. The value of DISPLAY is of the form host:displaynumber.screen, and the default value set by practically all graphical environments is :0.0, which means the local computer, the X server instance 0, and SCREEN 0. The host part specifies the computer, as the X11 protocol can work over a network. The display number specifies the X server instance, as one computer may have several X servers running. The screen number specifies the SCREEN the application has access to. In other words, the SCREEN where the application windows will appear must be selected before the application starts (or connects to the X server). The SCREEN cannot be changed without opening a new connection to the X server, which in practice means you have to quit and restart the application.
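For example, on a two-SCREEN setup you could place one application on each SCREEN by setting DISPLAY before starting them (xterm is used here purely as an example client):

# Start an xterm on SCREEN 0 of the local X server instance 0
DISPLAY=:0.0 xterm &
# Start another xterm on SCREEN 1; it cannot be moved to SCREEN 0 afterwards
DISPLAY=:0.1 xterm &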

Multiple Graphics Cards, or the story of Xinerama

The above limitation with SCREENs and applications is very inconvenient when one has several graphics cards to run several monitors. The Xinerama desktop feature was developed to combine the graphics cards into a single SCREEN. This allows windows to span more than one monitor, and allows windows to be moved from monitor to monitor.

Along with the Xinerama feature came the Xinerama extension, which allows applications to query the physical monitor configuration. For instance, a Xinerama-aware window manager can maximize a window to fit one monitor instead of covering all monitors. For applications (such as window managers), the Xinerama library, libXinerama, provides access to the Xinerama extension.
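If you want to see what the Xinerama extension reports on your system, the xdpyinfo utility (assuming it is installed) can print the per-monitor geometry:

# Query the monitor layout as exposed by the Xinerama extension
xdpyinfo -ext XINERAMA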

The drawbacks of Xinerama are noticeable. When any accelerated 2D rendering operation (core, Render, or Xv) is performed, it must be executed on every card. Each card maintains its own copy of all the rendering state, which means that all pixmaps (images) must be copied to every card, too. Because the X server is single-threaded, rendering time is multiplied by the number of cards. The rendering speed in a Xinerama configuration is practically always worse than what the slowest card is able to achieve alone. Furthermore, Xinerama does not handle GLX, so 3D acceleration is disabled. One more annoyance is that the DPI (dots-per-inch) resolution is fixed to the same value across all monitors.

Dual-head Graphics Cards

Then came dual-head graphics cards with two physical monitor connections, and things got complicated. They did not fit the one SCREEN, one card, one monitor scheme, so drivers had to invent ways to circumvent the X server architecture limitations.

One solution for a driver (DDX) is to create one SCREEN per head, which is called Zaphod mode (after Zaphod Beeblebrox, from the Hitchhiker's Guide to the Galaxy). This has the drawbacks of multiple SCREENs, but you get the DPI right.

Another solution is to pretend that there is only one monitor and use just one SCREEN, which is what the Nvidia TwinView mode does. TwinView avoids the drawbacks of the Xinerama feature, but is configured in a completely non-standard way. Plus, it is proprietary.

The third, and the only proper, way to deal with it is the RandR extension, which is a relatively new invention. RandR exposes the dual-head card as a single SCREEN, yet provides a standard way of managing the multiple monitors. It avoids the Xinerama feature drawbacks, but uses the Xinerama extension to provide applications with information about the physical monitor layout. The RandR configuration can be controlled on the fly with the command xrandr, and it can be written to the X configuration file. The default configuration is cloned views, i.e. all heads show the same image.
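For example, a dual-head card showing the default cloned views can be switched to a side-by-side desktop on the fly (the output names below are examples; run xrandr without arguments first to see the actual names):

# List the outputs (heads), their connection status and modes
xrandr
# Place the second head to the right of the first instead of cloning it
xrandr --output DVI-I-1 --auto --output DVI-I-2 --auto --right-of DVI-I-1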

Multi-monitor desktop with Nouveau

If you have a single graphics card (GPU) with multiple heads, it should all just work for you with RandR 1.2 and offer full (whatever is implemented) graphics acceleration. If you really want multiple SCREENs on a dual-head card, there is the experimental configuration option ZaphodHeads.

If you have multiple graphics cards, the only way to combine them into a single SCREEN is the Xinerama feature, and all the drawbacks listed for it apply. Note that a card with multiple GPUs counts as multiple cards. The end result depends on which outputs are driven by which GPUs.

Multiple cards and dual-head cards in Xinerama

For each dual-head card, RandR joins the heads into a single SCREEN. With Xinerama, you could in theory join multiple cards (SCREENs) into a uniform desktop. The problem is that there are two levels of joins: RandR and Xinerama. Only the Xinerama information is delegated to the window manager via the Xinerama protocol, which means you can have windows popping up in the middle of two monitors, fullscreen windows spanning not one monitor, not all monitors, but just some of them, and other funny effects.

You may try to use ZaphodHeads to split a dual-head card into separate SCREENs and then join them and other cards with Xinerama, but there will probably be bugs. It may not work, or it may have annoying features.

It has been confirmed by a number of people that a multi-card setup (two cards driving three or four monitors) works fine with nouveau. The following is a simplified example that, with a few minor edits, should just work:

Section "ServerLayout"
    Identifier  "Layout0"
    Option      "Xinerama" "on"
    Option      "Clone"    "off"
    # You would need one screen for each monitor
    Screen   0  "Screen0"
    Screen   1  "Screen1"  RightOf "Screen0"
    Screen   2  "Screen2"  LeftOf  "Screen0"
    Screen   3  "Screen3"  LeftOf  "Screen2"
EndSection

Section "Device"
    Identifier  "Device0"
    Driver      "nouveau"
    # Actual PCI location of first card/gpu
    BusID       "PCI:9:0:0"
    # Actual connector - as reported by /sys/class/drm/card0-xx (except HDMI, which is HDMI-x instead of HDMI-A-x)
    Option      "ZaphodHeads" "DVI-I-1"
    # Screen number for that PCI device, i.e. 0, 1, etc.
    Screen      0
EndSection
Section "Device"
    Identifier  "Device1"
    Driver      "nouveau"
    # Actual PCI location of first card/gpu
    BusID       "PCI:9:0:0"
    # Actual connector - as reported by /sys/class/drm/card0-xx (except HDMI, which is HDMI-x instead of HDMI-A-x)
    Option      "ZaphodHeads" "DVI-I-2"
    # Screen number for that PCI device, i.e. 0, 1, etc.
    Screen      1
EndSection
Section "Device"
    Identifier  "Device2"
    Driver      "nouveau"
    # Actual PCI location of second card/gpu
    BusID       "PCI:8:0:0"
    # Actual connector - as reported by /sys/class/drm/card1-xx (except HDMI, which is HDMI-x instead of HDMI-A-x)
    Option      "ZaphodHeads" "DVI-I-3"
    # Screen number for that PCI device, i.e. 0, 1, etc.
    Screen      0
EndSection
Section "Device"
    Identifier  "Device3"
    Driver      "nouveau"
    # Actual PCI location of second card/gpu
    BusID       "PCI:8:0:0"
    # Actual connector - as reported by /sys/class/drm/card1-xx (except HDMI, which is HDMI-x instead of HDMI-A-x)
    Option      "ZaphodHeads" "DVI-I-4"
    # Screen number for that PCI device, i.e. 0, 1, etc.
    Screen      1
EndSection

Section "Screen"
    Identifier  "Screen0"
    Device      "Device0"
EndSection
Section "Screen"
    Identifier  "Screen1"
    Device      "Device1"
EndSection
Section "Screen"
    Identifier  "Screen2"
    Device      "Device2"
EndSection
Section "Screen"
    Identifier  "Screen3"
    Device      "Device3"
EndSection
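
To fill in the actual BusID and ZaphodHeads values for your machine, commands like the following are helpful (note that lspci prints the PCI location in hexadecimal, while the BusID in xorg.conf is written in decimal):

# Find the PCI locations of the graphics cards for the BusID lines
lspci | grep -i vga
# List the connector names the kernel exposes, for the ZaphodHeads lines
ls /sys/class/drm/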

The future

The Xinerama X server feature will be replaced by something that allows joining several cards into a uniform desktop, with acceleration. It will take time, and the work is mostly not Nouveau-specific.

Speculation: Wayland could solve this. One could write a Wayland Display Server that takes over all heads of all graphics cards in a computer. The server receives complete frame buffers from graphical clients, and can use each card individually to composite the part of the desktop showing on that card's heads. An implementation detail is how to get the frame buffers to all cards, which may or may not share memory for texturing. Wayland clients could still render their frame buffers fast with a GPU, and you might select on an app-by-app basis which GPU to use. Running an X server on top of such a Display Server should be trivial, resulting in a uniform accelerated X desktop over all graphics cards (a single SCREEN), something that is currently impossible. This might even trivially solve the Optimus problem (though without on-the-fly GPU switching), as the Display Server would handle the framebuffer copy from one GPU to another.