Most of the time, when you want to help or contribute to a "libre" project, you are lost. This page will try to help new developers get up to speed on nouveau development by going over some base concepts and giving some code and web pointers.

The Memory

The x86 architecture has 3 address spaces: physical memory, I/O ports, and the PCI configuration space.

PCI (Peripheral Component Interconnect)

You are going to plug your graphics card into a PCI slot on your motherboard. The PCI specifications are not freely available to the public :( . But in our case, we only want an overall understanding of the programming model. The programming entry point for this computer bus is the PCI configuration space. On the x86 architecture it is accessed through the 0xCF8 and 0xCFC I/O ports in the I/O port address space: to be a bit more specific, 0xCF8 is the address port and 0xCFC is the data port. Each PCI entity (the minimal addressable unit on the bus, like the byte in memory) has its own configuration space. You can have a look at some code in the Linux kernel source code to get a deeper understanding; check out the linuxkernel_source_/arch/i386/pci/direct.c source file. In the case of the XORG server, there is a PCI programming module which performs write accesses to /dev/mem (that's one of the reasons why the XORG server must run with root privileges). Have a look at the xorg-serversource_/hw/xfree86/os-support/bus/Pci.c source file and xorg-serversource_/hw/xfree86/doc/devel/RAC.notes. You can play with the PCI system using pciutils or directly through the Linux kernel sysfs (system file system) files in /sys/bus/pci/devices; both methods will interpret the content of the configuration space for you. A sketch of the raw port-based access follows the list below. An entity will have in its configuration space 3 types of resources:

  • I/O memory. This is a chunk of physical memory which is decoded by the entity. Check out the content of /proc/iomem.
  • I/O ports. Ports from the I/O port address space which are decoded by the entity. Check out the content of the /proc/ioports file.
  • IRQs (Interrupt ReQuests). Check out the content of the /proc/irq directory.
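
Below is a minimal sketch of the port-based access method, "configuration mechanism #1", the same one used by linuxkernel_source_/arch/i386/pci/direct.c; it reads the vendor and device ids of the device at bus 0, device 0, function 0. It must run as root on x86 Linux; prefer pciutils or sysfs for real work.

    /* Minimal sketch of PCI "configuration mechanism #1". */
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/io.h>

    #define PCI_ADDR_PORT 0xCF8   /* address port */
    #define PCI_DATA_PORT 0xCFC   /* data port */

    static uint32_t pci_conf1_read(unsigned bus, unsigned dev, unsigned fn, unsigned reg)
    {
        /* Bit 31 enables the cycle; bus/device/function/register select the entity. */
        outl(0x80000000u | (bus << 16) | (dev << 11) | (fn << 8) | (reg & 0xFC),
             PCI_ADDR_PORT);
        return inl(PCI_DATA_PORT);
    }

    int main(void)
    {
        if (iopl(3) < 0) { perror("iopl"); return 1; }
        /* Config-space register 0x00 holds the vendor and device ids. */
        uint32_t id = pci_conf1_read(0, 0, 0, 0x00);
        printf("00:00.0 vendor 0x%04x device 0x%04x\n",
               (unsigned)(id & 0xFFFF), (unsigned)(id >> 16));
        return 0;
    }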

The configuration of such resources is done using 2 systems:

  • PCI PNP (Plug-aNd-Play), which is obsolete.
  • ACPI (Advanced Configuration and Power Interface), which is the current way to do it; see below.

PCI programming from the XORG server is ugly: it is a task which is meant to be done by the OS (Operating System) kernel.

AGP (Accelerated Graphics Port) and PCI Express cards

A graphics card plugged into an AGP slot or a PCI Express slot will be seen by the system as a PCI device.

ACPI (Advanced Configuration and Power Interface)

Today, computers are ACPI capable. ACPI replaces PCI PNP for entity configuration and adds power management, multiprocessor support and other things to the lot. Unlike the PCI specification, the ACPI specification is freely available. ACPI is based on memory tables: at computer boot time, the OS must find the RSDP (Root System Description Pointer) in physical memory. On the x86 architecture, it is a matter of finding the "RSD PTR " string (note the trailing space) in specific physical memory regions. Check out the Intel Architecture - Personal Computer (IA-PC) section in the specification and the acpi_scan_rsdp function in linuxkernel_source_/arch/i386/kernel/acpi/boot.c.
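
As a hedged illustration (assuming a kernel that still allows reads of this range through /dev/mem), here is a user-space rendition of the scan performed by acpi_scan_rsdp:

    /* Scan the BIOS area 0xE0000-0xFFFFF for the RSDP signature. The spec
     * also allows the first KB of the EBDA; that region is skipped here. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/mem", O_RDONLY);
        if (fd < 0) { perror("open /dev/mem"); return 1; }
        /* The RSDP lives on a 16-byte boundary in this region (see the
         * IA-PC section of the ACPI specification). */
        size_t len = 0x20000;
        unsigned char *p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0xE0000);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        for (size_t off = 0; off + 8 <= len; off += 16)
            if (memcmp(p + off, "RSD PTR ", 8) == 0)
                printf("RSDP found at physical 0x%zx\n", 0xE0000 + off);
        munmap(p, len);
        close(fd);
        return 0;
    }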

XORG, DDX (Device Dependent X) and DIX (Device Independent X)

  • XORG is the lot, the entire project: the reference implementation of the client libs and servers of the X Window System. You need to understand the X Window core protocol. Keep in mind that XORG has more than one server, like the DMX (Distributed Multihead X) server, the kdrive server or the famous XGL server.
  • DIX is the part of XORG which deals with the clients, network transparency and software rendering. In the XORG source package layout, it's almost everything except the xorg-serversource_/hw directory. You will find the main function in the xorg-serversource_/dix/main.c source file.
  • DDX is the part of XORG dealing with the hardware (and with the OSes to a certain extent). There are several DDXes in the xorg-serversource_/hw directory of the XORG source package: XGL, kdrive, xfree86, etc. The one we are interested in is of course the xfree86 one, since nouveau is a set of hardware drivers for this DDX. Indeed, each DDX comes with its hardware drivers: have a look in xorg-serversource_/hw/kdrive, where you will find drivers for several cards for the kdrive X server. The assembly of the xfree86 DDX with the DIX part shapes the famous XORG server.

XORG server design

One of the first things the XORG server will do is load the nouveau video driver module. The xfree86 DDX implements a dynamic module loading system which is just a wrapper around the libc dynamic loader. The dynamic loader is based on symbols, namely function entry points or data pointers which are resolved (or not) at runtime.

To understand the document below, you must know what a server generation is. When you start the XORG server, the first server generation starts. When the last client goes away, a new server generation starts. You will see in the DIX and DDX code that there is a significant difference between the first server generation and subsequent ones, because some operations need not be performed more than once, for instance socket creation.

To understand the XORG server design, read very carefully the xfree86 DDX DESIGN document. The X Window System model revolves around the screen object, and that document states that in the xfree86 DDX one screen is managed by one video driver only. Moreover, don't forget that a screen has an xfree86 DDX structure (ScrnInfoRec, living for the entire server life) and a DIX structure (ScreenRec, living for only one server generation), both embedding some per-screen video driver private data. Of course, there is also in depth documentation.
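
As a minimal sketch, assuming the usual DIX global serverGeneration, the guard for such one-time work typically looks like this:

    /* Sketch of the first-generation guard seen throughout DIX and DDX code.
     * serverGeneration is a DIX global: 1 for the first generation, bumped
     * each time the last client goes away and the server resets. */
    extern unsigned long serverGeneration;  /* CARD32 in some server versions */

    static void MyScreenSetup(void)
    {
        if (serverGeneration == 1) {
            /* One-time work: for instance the DIX creates its listening
             * sockets only here; redoing it each generation would be useless. */
        }
        /* Per-generation work runs every time a generation starts. */
    }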

Video mode setting and the RandR (X Resize, Rotate and Reflect) 1.2 extension

One of the latest big improvements in XORG is the RandR 1.2 extension. It is a system that allows fine grained control of video mode setting. The RandR model deals with outputs and CRTCs (Cathode Ray Tube Controllers). CRTC is a legacy name from the days when all display devices were cathode ray tubes; today, a CRTC can drive flat displays as well. Typical nvidia chips embed 2 CRTCs, and the cards which host those chips typically have an S-Video (Separate Video) connector which outputs a PAL (Phase Alternating Line) or NTSC (National Television System(s) Committee) video signal, a VGA (Video Graphics Array) connector which outputs a VGA signal, and a DVI (Digital Visual Interface) connector which outputs a VGA signal and/or a single/dual link TMDS (Transition Minimized Differential Signaling) digital signal. On mobile editions of nvidia chips, you will have pins outputting LVDS (Low Voltage Differential Signaling) in order to drive an LDFP (Local Digital Flat Panel). There is a GIT RandR-1.2 branch for the nouveau video driver.
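
As a hedged sketch of that model on the driver side, assuming the xf86Crtc.h API from the server's modes code (hook tables left empty, connector names illustrative):

    /* One xf86Crtc per hardware CRTC, one xf86Output per physical connector. */
    #include "xf86Crtc.h"

    static const xf86CrtcFuncsRec crtc_funcs = {
        0 /* fill in .dpms, .mode_set, ... with the driver's CRTC hooks */
    };

    static const xf86OutputFuncsRec output_funcs = {
        0 /* fill in .detect, .get_modes, .mode_valid, ... per connector */
    };

    static void
    expose_randr12(ScrnInfoPtr scrn)
    {
        int i;

        /* Typical nvidia chips embed 2 CRTCs. */
        for (i = 0; i < 2; i++)
            xf86CrtcCreate(scrn, &crtc_funcs);

        /* One output per connector wired on this board. */
        xf86OutputCreate(scrn, &output_funcs, "VGA-0");
        xf86OutputCreate(scrn, &output_funcs, "DVI-0");
        xf86OutputCreate(scrn, &output_funcs, "TV-0");
    }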

The nouveau xfree86 DDX video driver module

If you read the xfree86 DDX DESIGN document properly, and in particular its last section, you will look for the nouveauModuleData structure in the xf86-video-nouveausource_/src/nv_driver.c source file from the nouveau xfree86 DDX video driver source package. This structure tells you there is only a setup function for this driver, nouveauSetup in xf86-video-nouveausource_/src/nv_driver.c, called right after the module is loaded into the XORG server address space. nouveauSetup will register the nouveau DriverRec structure, which points to the NVIdentify, NVProbe and NVAvailableOptions functions, still in xf86-video-nouveausource_/src/nv_driver.c. During the probe phase of the XORG server, done using the NVProbe function of the video driver module, the driver hooks in the xfree86 DDX ScrnInfoRec will be filled in. Here come the NVPreInit, NVScreenInit, NVSwitchMode, NVAdjustFrame, NVEnterVT, NVLeaveVT, NVFreeScreen and NVValidMode driver functions.
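
Condensed, that registration dance looks roughly like the following sketch, modeled on the pattern of xf86-video-nouveausource_/src/nv_driver.c (version record and hook bodies elided; only the shape is meant to be accurate):

    #include "xf86.h"
    #include "xf86Module.h"

    static void NVIdentify(int flags);
    static Bool NVProbe(DriverPtr drv, int flags);
    static const OptionInfoRec *NVAvailableOptions(int chipid, int busid);
    static XF86ModuleVersionInfo nouveauVersRec; /* name, vendor, ABI... elided */

    static DriverRec NV = {
        1,                  /* driverVersion */
        "nouveau",          /* driverName, matched against xorg.conf */
        NVIdentify,         /* prints the supported chipset families */
        NVProbe,            /* finds supported chips on the PCI bus */
        NVAvailableOptions, /* hands the OptionInfoRec table to the server */
        NULL,               /* module, filled in by the loader */
        0                   /* refCount */
    };

    /* Called once, right after the loader maps the module. */
    static pointer
    nouveauSetup(pointer module, pointer opts, int *errmaj, int *errmin)
    {
        static Bool setupDone = FALSE;

        if (!setupDone) {
            setupDone = TRUE;
            xf86AddDriver(&NV, module, 0);
            return (pointer) 1;     /* any non-NULL value means success */
        }
        if (errmaj)
            *errmaj = LDR_ONCEONLY;
        return NULL;
    }

    /* The loader looks this symbol up by name: <module>ModuleData. */
    _X_EXPORT XF86ModuleData nouveauModuleData =
        { &nouveauVersRec, nouveauSetup, NULL };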

  • NVIdentify, from the DriverRec structure named NV, prints information about the chipsets handled by the driver. In xf86-video-nouveausource_/src/nv_driver.c this function logs the chipset families supported, and in xf86-video-nvsource_/src/nv_driver.c this function logs all of the chipsets supported with their PCI ids (identifiers).
  • NVAvailableOptions, from the DriverRec structure named NV, provides the initialized OptionInfoRec structure which describes the various configuration options for this driver. Yes, those options which can be set in the device section of the famous xorg.conf file. In xf86-video-nouveausource_/src/nv_driver.c, those options are statically defined in the OptionInfoRec structure named NVOptions in xf86-video-nouveausource_/src/nv_const.h. NVAvailableOptions from xf86-video-nvsource_/src/nv_driver.c is the same, except that there is a special OptionInfoRec structure for the riva 128 chip in the riva driver files (for chipset archaeologists), and the common OptionInfoRec structure lies in xf86-video-nvsource_/src/nv_driver.c.
  • NVProbe looks for the supported chipsets on the PCI bus and matches them against the enabled device sections from the X server configuration. If the PROBE_DETECT flag is passed to this probe function, only detection is performed and no PCI resources are claimed by the driver. There is a special detection path for some bridged AGP/PCI Express chipsets: those AGP chipsets bridged to PCI Express have special PCI ids, and the real PCI id of the chipset, for programming purposes, is stored in the chipset PCI mmio (Memory Mapped Input Output). Indeed, this is an ugly hack.
  • NVPreInit is NVProbe on steroids. It will determine the amount of video ram, validate video modes, etc. One thing the driver will do is determine the chipset architecture from the ids by calling NVDetermineChipsetArch (inline for the NV driver). It checks for 32 bpp (bits per pixel) support and that the color depth is 8, 15, 16 or 24 bits. It shows that there are at most 2 CRTCs. The NV driver supports DualHead, which seems to be configured only through the xfree86 DDX vbe module and implies no hardware cursor and the following new event handling functions: NVSwitchModeVBE, NVEnterVTVBE and NVLeaveVTVBE. Locating the VRAM mapping base physical address means masking the PCI entity's second memory base address with the 0xFF800000 value; same thing for the MMIO registers base physical address (the first PCI entity memory base address) with the 0xFFFFC000 value, as sketched after this list.
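
A small sketch of that last masking step, using the standard PCI header offsets for the first two memory base address registers (the masks are the ones quoted above):

    #include <stdint.h>

    #define PCI_BAR0 0x10   /* first memory base address: MMIO registers */
    #define PCI_BAR1 0x14   /* second memory base address: VRAM aperture */

    /* Mask off the low bits of each base address register to recover the
     * physical bases the driver will map. */
    static uint32_t nv_mmio_base(uint32_t bar0) { return bar0 & 0xFFFFC000u; }
    static uint32_t nv_vram_base(uint32_t bar1) { return bar1 & 0xFF800000u; }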

EXA

EXA is not an acronym. It is an API for XORG acceleration. The xfree86 DDX is the only DDX to implement it. EXA acceleration is initialized (if enabled) each time a screen is initialized (for instance at the start of a new server generation), hence check out the NVScreenInit function in xf86-video-nouveausource_/src/nv_driver.c. If EXA is enabled in the configuration file, you will be routed to NVExaInit in the xf86-video-nouveausource_/src/nv_exa.c source file. In this function, all the EXA hooks are handed to the XORG EXA module and initialized with the exaDriverInit EXA module function, namely: NVDownloadFromScreen, NVUploadToScreen, NVExaPrepareCopy, NVExaCopy, NVExaDoneCopy, NVExaPrepareSolid, NVExaSolid, NVExaDoneSolid, NVCheckComposite, NVPrepareComposite, NVComposite and NVDoneComposite; all those functions are in xf86-video-nouveausource_/src/nv_exa.c. More on EXA on the freedesktop wiki and in xorg-serversource_/hw/xfree86/doc/devel/exa-driver.txt.
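
Condensed, the wiring NVExaInit performs looks roughly like this sketch; it uses the EXA module API (exaDriverAlloc, exaDriverInit and the ExaDriverRec hook members) and assumes the NV* driver functions from nv_exa.c are in scope:

    #include "exa.h"

    static Bool
    NVExaInitSketch(ScreenPtr pScreen)
    {
        ExaDriverPtr exa = exaDriverAlloc();
        if (!exa)
            return FALSE;

        exa->exa_major = EXA_VERSION_MAJOR;
        exa->exa_minor = EXA_VERSION_MINOR;

        /* Solid fills */
        exa->PrepareSolid = NVExaPrepareSolid;
        exa->Solid        = NVExaSolid;
        exa->DoneSolid    = NVExaDoneSolid;
        /* Blits */
        exa->PrepareCopy  = NVExaPrepareCopy;
        exa->Copy         = NVExaCopy;
        exa->DoneCopy     = NVExaDoneCopy;
        /* Render/composite acceleration */
        exa->CheckComposite   = NVCheckComposite;
        exa->PrepareComposite = NVPrepareComposite;
        exa->Composite        = NVComposite;
        exa->DoneComposite    = NVDoneComposite;
        /* Transfers between system and video memory */
        exa->DownloadFromScreen = NVDownloadFromScreen;
        exa->UploadToScreen     = NVUploadToScreen;

        /* Hand the hook table to the XORG EXA module. */
        return exaDriverInit(pScreen, exa);
    }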

DRI (Direct Rendering Infrastructure) and its kernel counterpart, the DRM (Direct Rendering Manager)

The DRI and the DRM are the plumbing for programming the graphics card hardware. They are mainly used by mesa, the "libre" OpenGL implementation, but since access to the hardware must be synchronized among all graphics card hardware clients, the xfree86 DDX video driver module has to cope with them, since it is itself such a client. Then, when EXA wants to perform accelerated operations, it has to dialog with the hardware through the DRI. There is a DRI xfree86 DDX module in xorg-serversource_/hw/xfree86/dri for xfree86 DDX code which wants hardware access the DRI way. This module and the related xfree86 DDX code use the DRM user level interface library, libdrm. In theory, future DRM evolution will allow us to get rid of the PCI programming code in the XORG server.
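
As a first taste of the DRM user level interface, here is a minimal stand-alone libdrm client that opens the nouveau device node and prints the kernel driver version (link with -ldrm; error handling kept minimal):

    #include <stdio.h>
    #include <xf86drm.h>

    int main(void)
    {
        int fd = drmOpen("nouveau", NULL);   /* finds /dev/dri/card* for us */
        if (fd < 0) {
            fprintf(stderr, "no nouveau DRM device\n");
            return 1;
        }
        drmVersionPtr v = drmGetVersion(fd);
        if (v) {
            printf("DRM driver %s %d.%d.%d\n",
                   v->name, v->version_major, v->version_minor,
                   v->version_patchlevel);
            drmFreeVersion(v);
        }
        drmClose(fd);
        return 0;
    }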