Andy Ritger began his talk at the 2016 X.Org Developers Conference (XDC) with a disclaimer of sorts: "I am very much not a color expert" and welcomed corrections from attendees. But his talk delved into chromaticity and other highly technical color characteristics in order to talk about the advent of high dynamic range (HDR) displays—and what is needed for Linux to support them.

Ritger works for NVIDIA, which has been getting requests for HDR display support from its customers. The company has implemented that support in its Windows and Android drivers, but has not yet done so for its discrete GPU Linux drivers. Ritger has been studying the subject to try to understand what is needed to support these displays for Linux; the talk was meant to largely be a report on what he has learned.

The trend these days is toward ultra-high definition (UHD) displays, he said. That means higher pixel resolution (4K and 8K) but also a wider color gamut to display a larger range of colors than today's displays. There are also efforts to expand the range of luminance values that can be displayed, which is what characterizes an HDR display.

The ITU-R BT.2020 [PDF] specification has recommendations for UHD parameters, including resolutions, refresh rates, chromaticity, formats, transfer functions, and so on. But when people refer to "BT.2020", they typically mean the color gamut that is specified. Few displays today even get close to the gamut described, he said.

Different color spaces represent different sets of colors; the set of colors a space can represent is its gamut. Color spaces are typically described in terms of the CIE XYZ color space, in particular its 2D chromaticity projection: a color space is specified by the x and y coordinates of its red, green, and blue primary colors, along with the coordinates of its "white point".
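Those four (x, y) pairs are enough to determine the conversion from the space's linear RGB values to CIE XYZ. Here is a sketch of the standard derivation (the function name is illustrative; the sRGB/Rec.709 primary and D65 white-point coordinates used below are the published values):

```python
# Sketch: build the linear-RGB -> XYZ matrix for a color space from its
# published chromaticity coordinates.  The function name is illustrative.
def rgb_to_xyz_matrix(rx, ry, gx, gy, bx, by, wx, wy):
    # XYZ coordinates of each primary, scaled so that Y = 1
    cols = [(x / y, 1.0, (1.0 - x - y) / y)
            for x, y in ((rx, ry), (gx, gy), (bx, by))]
    # XYZ of the white point, also normalized to Y = 1
    w = (wx / wy, 1.0, (1.0 - wx - wy) / wy)
    # m has the primaries as columns; solve m @ s = w for the per-primary
    # scale factors that make R = G = B = 1 map to the white point
    m = [[cols[j][i] for j in range(3)] for i in range(3)]
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    s = []
    for j in range(3):          # Cramer's rule, one column at a time
        mj = [row[:] for row in m]
        for i in range(3):
            mj[i][j] = w[i]
        s.append(det(mj) / d)
    # Scale each column; the rows of the result map linear RGB to X, Y, Z
    return [[m[i][j] * s[j] for j in range(3)] for i in range(3)]

# The Y (luminance) row for the sRGB/Rec.709 primaries and the D65 white
# point recovers the familiar 0.2126/0.7152/0.0722 weights.
M = rgb_to_xyz_matrix(0.640, 0.330, 0.300, 0.600, 0.150, 0.060,
                      0.3127, 0.3290)
```

Feeding in the BT.2020 primaries (0.708, 0.292), (0.170, 0.797), and (0.131, 0.046) with the same white point yields that space's matrix instead; the wider gamut falls out of nothing more than those coordinates.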

It is important to recognize the difference between linear and non-linear color spaces, Ritger said. Linear color spaces behave in an intuitive way, where doubling a value doubles the intensity, for example. Graphics operations should always be done in a linear color space.

But human perception is not linear. Humans are more sensitive to darks than lights, so given a set of discrete steps like 0-255, a linear color space "is not great". There is insufficient granularity in the darks and wasted precision in the lights, so it is generally recommended to store color information in a non-linear color space. The most common one used is sRGB, which is what most pre-HDR monitors expect for input.
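As an illustration of that redistribution of precision, here is a sketch of the sRGB transfer-function pair; the piecewise constants are the ones published in the sRGB specification:

```python
# Sketch of the sRGB transfer functions (constants from IEC 61966-2-1).
def srgb_encode(linear):
    """Linear light -> non-linear sRGB value (both in [0, 1])."""
    if linear <= 0.0031308:
        return 12.92 * linear          # linear segment near black
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded):
    """Non-linear sRGB value -> linear light."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4
```

Linear 0.5 encodes to roughly 0.735, so nearly three-quarters of an 8-bit range's code values end up describing the darker half of the linear range, which is where human vision wants the granularity.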

High dynamic range

Informally, HDR is about making the brights brighter and the darks darker so that details are more perceptible in the dark and bright regions. HDR increases the range and granularity of the luminance information to make the highlights brighter, but not to make the entire image brighter. Luminance is measured in candelas per square meter, which are also known as "nits". Pre-HDR displays have a maximum of around 100 nits, while first generation HDR displays max out at around 1,000 nits. The maximum value defined for HDR, though, is 10,000 nits.

Many 3D applications already do HDR rendering, Ritger said, using FP16 (half-float) buffers. Those buffers are tone mapped to a lower-precision, lower-luminance representation. But now that there are more capable displays, there is a need to give the applications the information they need to tone map for the HDR display. There is also a need to be able to pass the application's higher-precision data through to the display.

Ritger then outlined what the flow for 3D applications doing HDR rendering and display would look like. Applications would still render into FP16 buffers, but would use the scRGB color space, which makes it easier to composite HDR and standard dynamic range (SDR) content. The rendered image would then be tone mapped for the target monitor's capabilities, and the tone-mapped result handed to the driver or compositor along with some metadata for the monitor.

The driver or compositor would composite the tone-mapped image with any SDR content. The driver or GPU would then take the scRGB FP16 composited result and perform an inverse "electro-optical transfer function" (EOTF) to encode the FP16 data into the display signal. That would be sent to the monitor along with an "HDR InfoFrame" containing the metadata. The monitor would then apply the EOTF to decode the digital signal into HDR content.

The scRGB color space is also known as the "canonical compositing color space". It was introduced by Microsoft in the Vista time frame and has the same chromaticity coordinates as sRGB, but is linear. That makes it a good color space for compositing SDR and HDR content. For the HDR metadata, there are several relevant standards that specify the information needed by the GPU for rendering as well as the information needed by the monitor to know how to interpret the data it is receiving.

The EOTF defines how the display should convert the non-linear digital signal to linear light values. It is optimized for bandwidth, so it compresses the signal into as few bits as possible by sacrificing precision where it won't be missed. The de facto EOTF for SDR is sRGB and there are two common EOTFs for HDR. In order to create the digital signal for the monitor, the GPU needs to do an inverse EOTF (or OETF).
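The two EOTFs commonly used for HDR are PQ (SMPTE ST 2084) and HLG. As an illustration of the kind of curve involved, here is a rough sketch of the PQ pair, using the constants published in ST 2084 and normalizing luminance so that 1.0 represents 10,000 nits (a sketch, not driver code):

```python
# Constants from SMPTE ST 2084, the "PQ" curve
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32     # 18.8515625
C3 = 2392 / 4096 * 32     # 18.6875

def pq_inverse_eotf(y):
    """Linear luminance (1.0 = 10,000 nits) -> non-linear PQ signal."""
    p = y ** M1
    return ((C1 + C2 * p) / (1 + C3 * p)) ** M2

def pq_eotf(signal):
    """Non-linear PQ signal -> linear luminance (fraction of 10,000 nits)."""
    p = signal ** (1 / M2)
    return (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)
```

In the flow described above, the GPU (or a shader) would apply something like pq_inverse_eotf to produce the display signal, and the monitor would apply the forward curve to recover linear light.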

Missing pieces

There are still some missing pieces for Linux from that flow, however. Applications can already do the rendering, but they will need some API to get the HDR information from the display. It is available in the Extended Display Identification Data (EDID) that monitors provide, so maybe just parsing the information out of that would be sufficient. He was concerned, though, that drivers and possibly compositors might need to change some of the parameters.

In addition, the application needs a way to provide its HDR metadata to the monitor. That information might also need to be arbitrated, for example if two applications were rendering to windows using different HDR configurations. Currently, the NVIDIA Windows driver is full-screen only.

There is also the need for a way to display the FP16 buffers. NVIDIA hardware does not have support for doing the inverse EOTF, so a shader is being used for now. He is unsure about whether other graphics hardware has support for that. It would be nice if SDR content could be composited with HDR in a single desktop, he said; scRGB should help make that possible.

Wayland compositors will need to be FP16-aware so that they can accept FP16 buffers from clients. For X, there are a lot of unanswered questions, Ritger said. The primary producers of HDR content will be the 3D APIs (OpenGL, Vulkan) and the video APIs (VDPAU, VAAPI). He wondered if there was reason to allow X rendering into an FP16 buffer; it isn't strictly needed, but it might be easier to just allow it. Also, should the root window be allowed to be FP16?

He concluded by noting that the talk [YouTube] was intended to give folks some context; there are lots of design decisions that still need to be made. NVIDIA is definitely interested in participating in that process. He would like to see some straw-man proposals being made in the coming months. He noted that his final few slides [PDF] had links to specifications and web resources of interest.

[I would like to thank the X.Org Foundation for sponsoring my travel to Helsinki for XDC.]

Comments (7 posted)

In a world full of fancy development tools and sites, the kernel project's dependence on email and mailing lists can seem quaintly dated, if not positively prehistoric. But, as Greg Kroah-Hartman pointed out in a Kernel Recipes talk titled "Patches carved into stone tablets", there are some good reasons for the kernel community's choices. Rather than being a holdover from an older era, email remains the best way to manage a project as large as the kernel.

In short, Greg said, kernel developers still use email because it is faster than any of the alternatives. Over the course of the last year, the project accepted about eight changes per hour — every hour — from over 4,000 developers sponsored by over 400 companies. It must be doing something right. The list of maintainers who accepted at least one patch per day contains 75 entries; at the top of the list, Greg himself accepted 9,781 patches over the year. Given that he accepts maybe one third of the patches sent his way, it is clear that the patch posting rate is much higher than that.

Finding tools that can manage that sort of patch rate is hard. A poor craftsman famously complains about his tools, Greg said, but a good craftsman knows how to choose excellent tools.

So which tools are available for development work? Greg started by looking at GitHub, which, he said, has a number of advantages. It is "very very pretty" and is easy to use for small projects thanks to its simple interface. GitHub offers free hosting and unlimited bandwidth, and can (for a fee) be run on a company's own infrastructure. It makes life easy for the authors of drive-by patches; Greg uses it for the usbutils project and gets an occasional patch that way.

On the other hand, GitHub does not scale to larger projects. He pointed at the Kubernetes project, which has over 4,000 open issues and 511 open pull requests. The system, he said, does not work well for large numbers of reviewers. It has a reasonable mechanism for discussion threads attached to pull requests — GitHub has duplicated email for that feature, he said — but only the people who are actually assigned to a pull request can see that thread. GitHub also requires online access; a lot of kernel developers, for whatever reason, do not have good access to the net while they are working. GitHub is getting better in general, but projects like Kubernetes are realizing that they need to find something better suited to their scale; it would never work for the kernel.

Moving on to Gerrit, Greg started to list its good points, but stopped short, saying he didn't know any. Actually, there was one: project managers love it, since it gives them the feeling that they know what is going on within the project. He noted that Google, which promotes Gerrit for use with the Android project, does not use it for any of its internal projects. Even with Android, Gerrit is not really needed; Greg pointed out that, in the complicated flow chart showing how to get a patch into Android, Gerrit has a small and replaceable role.

Gerrit, he said, makes patch submission quite hard; Repo helps a bit in that regard, but not many projects use it. Gerrit can be scripted, but few people do that. An audience member jumped in to say that using Gerrit was like doing one's taxes every time one submitted a patch. The review interface makes it clear that the Gerrit developers do not actually spend time reviewing code; he pointed in particular at the need to separately click through to view every file that a patch touches. It is hard to do local testing of patches in Gerrit, and tracking a patch series is impossible. All discussion is done through a web interface. Nobody, Greg said, will do reviews in Gerrit unless it is part of their job.

What about plain-text email? Email has been around forever, and everybody has access to it in one form or another. There are plenty of free email providers and a vast number of clients. Email works well for non-native speakers, who can use automatic translation systems if need be. Email is also friendly from an accessibility standpoint; that has helped the kernel to gain a number of very good blind developers. Email is fast, it makes local testing easy, and remote testing is possible. Writing scripts to deal with emailed patches is easily done. And there is no need to learn a new interface to work with it.
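As a sketch of the kind of scripting he means — the paths, subject filter, and function names here are illustrative, not any particular maintainer's tooling — Python's standard mailbox module plus "git am" covers the basics:

```python
# Sketch: collect patch mails from an mbox and feed them to "git am".
# The "[PATCH" subject convention is the kernel's; everything else
# (paths, function names) is illustrative.
import mailbox
import subprocess

def patch_messages(mbox_path):
    """Return the messages in an mbox whose subjects look like patches."""
    return [msg for msg in mailbox.mbox(mbox_path)
            if "[PATCH" in msg.get("Subject", "")]

def apply_patches(mbox_path, repo_dir):
    """Apply each patch mail to the git tree in repo_dir."""
    for msg in patch_messages(mbox_path):
        subprocess.run(["git", "-C", repo_dir, "am"],
                       input=bytes(msg), check=True)
```

Because patches are just text in a standard mail format, the same few lines work whether the mail arrived via a list archive, an IMAP fetch, or a local spool.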

On the other hand, the quality of email clients is not uniformly good. Some systems, like Outlook, will reliably corrupt patches; as a result, companies doing kernel development tend to keep a Linux machine in a corner somewhere that they can use to send patches. Gmail is painful for sending patches, but it works very well as an IMAP server. Project managers, he noted, tend not to like email. He seemed to think of that as an advantage or, at worst, an irrelevance, since the kernel's workflow doesn't really have any project-manager involvement anyway.

Email integrates easily with other systems; it functions well with the kernel's 0-day build and boot testing system for example. It also is nicely supported by the patchwork system, which is used by a number of kernel subsystems to track the status of patches. Patchwork will watch a mailing list, collect the patches seen there, and track acks and such. It provides a nice status listing that project managers love.

In summary, Greg said, email matters because it is simple, supports the widest group, and is scalable. But the most important thing is that it grows the community. When new developers come in, the first thing they have to do is to learn how the project works. That includes reading the reviews that developers are doing; that is how one learns what developers care about and how to avoid mistakes. With the kernel, those reviews are right there on the mailing list for all to see; with a system like Gerrit, one has to work to seek them out.

As Rusty Russell once said, if you want to get smarter, the thing to do is to hang out with smart people. An email-based workflow lets developers hang out with a project's smart people, making them all smarter. Greg wants Linux to last a long time, so wants to see the kernel project use tools that help to bring in new developers. Email, for all its flaws, is still better than anything else in that regard.

[Your editor thanks Kernel Recipes for supporting his travel to this event.]

Comments (93 posted)