Who Cares About Levels?

Let’s say you have a full movie that you edited in Premiere. You watch it in a theatre or at a festival. The whole time you’re thinking, this looks really washed out. But you can’t put your finger on why.

Or maybe you’re kicking out your final from Resolve with an Avid codec like DNxHR. You upload it to YouTube and it just doesn’t look quite right. The blacks are really crushed and the whites are blown out.

What gives??

Both of the scenarios above are potential levels issues.

So what are levels?

Like most things in video that are difficult to understand, levels come from the ancient past of video creation. Analog video hardware like tape decks and monitors was set to record and display video levels. Film scanners and computer generated graphics, on the other hand, usually recorded or used full range data.

Nowadays, levels aren’t well understood. The demise of expensive video hardware and the turn towards software has made some of the technical concepts of video seem obsolete. But these concepts still apply to professional video work.

So why learn about levels at all?

Since software and digital files are so much more ubiquitous now, levels are an important concept to understand, especially as a professional. By understanding levels, you can set up a correct signal path in a color bay, hand off correctly rendered files in the right color space and encoding to another artist, or export your film or commercial correctly for a film screening or the internet.

What are levels?

Levels refer to the range of values contained within an image file. Every image and video file is encoded within a specific range of values. At their most basic, there are two main distinctions: full for computer displays and video for video monitors.

Full level files encode their image data within a full range container. For full range 8-bit files, this means values from 0 to 255, 0 being pure black and 255 being pure white. For 8-bit video range files, this means values from 16-235, 16 being pure black and 235 being pure white.
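The scaling between the two containers is simple arithmetic. Here’s a rough sketch in Python (the function names are mine; the constants come straight from the ranges above, and the same math gives 64–940 for 10-bit):

```python
def full_to_video(code, bit_depth=8):
    """Scale a full-range code value into the video-range container."""
    max_full = (1 << bit_depth) - 1    # 255 for 8-bit, 1023 for 10-bit
    black = 16 << (bit_depth - 8)      # 16 for 8-bit, 64 for 10-bit
    white = 235 << (bit_depth - 8)     # 235 for 8-bit, 940 for 10-bit
    return round(black + code * (white - black) / max_full)

def video_to_full(code, bit_depth=8):
    """Scale a video-range code value back out to full range."""
    max_full = (1 << bit_depth) - 1
    black = 16 << (bit_depth - 8)
    white = 235 << (bit_depth - 8)
    return round((code - black) * max_full / (white - black))

print(full_to_video(0), full_to_video(255))    # 16 235
print(video_to_full(16), video_to_full(235))   # 0 255
```

This is exactly the conversion your software performs, in both directions, every time it interprets or renders a file.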

This graphic is misleading but we’ll get into why later in the article.

Many cameras only shoot in video range with values between 16-235. For values beyond video levels, some cameras offer options to capture extended values higher than 235 or lower than 16. These values are sometimes referred to as super brights or super blacks. In file encoding these values can also be referred to as YUV headroom or footroom. We’ll talk more about these files later in the article.

RAW video files on the other hand can be debayered into full or video ranges depending on how the files are interpreted in the software.

All digital graphics are encoded as one or the other. Generally, video files are encoded with video levels and graphics or image sequences are encoded with full levels.

The reason for this is that traditionally video files were watched on video monitors, which were designed to display video levels, and graphic files were viewed on a computer monitor connected to the output of a computer’s graphics card, which outputs full levels.

Full Levels and Video Levels Terminology

So why is this concept so confusing and convoluted?

Cameras, software, displays, codecs, scopes, etc. each have different ways of describing levels. And sometimes the terminology overlaps with color space terminology. Below are some examples of the different terminology used to describe full and video levels:

WTF!

That’s an insane amount of terminology to describe the same thing.

While the nomenclature is confusing, the basic concept itself isn’t. What is confusing is knowing what levels your files are and how your software is treating them.

Most of this terminology comes from broadcast history. Tapes and files were usually within the legal broadcast range for video, which is 16-235 in 8-bit. Any values beyond this would be clipped, or your file or tape would be flagged during QC for out of range values. This is still the case for delivering broadcast compatible files.

Today, tapes have largely been replaced with files and software.

Software interprets the level designation based on the input information of your files. If you’re working with video files, it can be really tough to understand why a file looks different in different pieces of software, or why it looks right inside your program but not when you export it, among many other situations.

Levels and Software

Color management has evolved very rapidly in the last few years. With the accessibility of DaVinci Resolve, other software like Nuke, Avid, FCPX, and Flame has also opened up options for color management. Levels are an important part of that evolution.

Each piece of software handles color management very differently. Some software works in a full range environment, other software is video range by default. More and more software packages offer options for working in a multitude of color spaces and level designations.

A few examples from post production software:

Nuke is full-range linear by default

Avid is rec709 video levels by default but can be changed

DaVinci Resolve is full-range 32-bit float internally

The most important thing to understand about working with video files within post production software is that values are scaled back and forth between full and video levels based on the software interpretation and project settings.

DaVinci Resolve, for example, which is one of the most flexible programs for color space, works in a full range, 32-bit float ecosystem internally. Any video level files that are imported into Resolve are flagged as video automatically and scaled to full range values.

While the processing is 32-bit float, Resolve’s scopes display 10-bit full range values. With 10-bit values, full range is 0-1023 and video range is 64-940.

Resolve Scopes

So Resolve is working with full range data internally. If a video card like the UltraStudio is attached and data levels aren’t selected, Resolve will scale those internal full range values to video range values before sending a video signal to a display monitor.

The software is scaling values back and forth from video to full and back to video. This is an important concept to absorb.

That’s how Resolve works. Avid, on the other hand, has been re-designed with tons of color space options as well. You can interpret your source files as video or full levels even after you’ve imported the files. And you can pick which type of space you’re working in. You can work in a video range space if that’s how your workflow is set up.

Premiere is a little more limited when it comes to color management. There aren’t a lot of options for re-interpreting your source files into a project color space. You can use some color management settings, but compared with other major NLEs, Premiere Pro and After Effects are definitely not leading the pack.

Scopes and IRE values

The next thing we need to understand when working with levels is how our scopes work and something called IRE.

Here is a definition of IRE from wikipedia:

An IRE is a unit used in the measurement of composite video signals. A value of 100 IRE is defined to be +714 mV in an analog NTSC video signal. A value of 0 IRE corresponds to the voltage value of 0 mV.

So IRE refers to actual voltage in an analog system. Actual electricity.

Why is an analog measurement like IRE important for the modern age of video?

Post production software still uses IRE values for scopes. If you see a scope with measurement values from 0-100, they are most likely IRE measurements.

How do these IRE values correspond to levels?

With 8-bit encoded files:

Video Levels

1. Black is 0 IRE and 16 in terms of video levels.
2. White is 100 IRE and 235 in terms of video levels.

Full Levels

1. Black is 0 IRE and 0 in terms of full levels.
2. White is 100 IRE and 255 in terms of full levels.

Depending on your editing software, your scopes could correspond to video levels or full levels, but IRE will remain the same.
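That mapping can be sketched in Python (a hypothetical helper, not taken from any real scope API — it just normalizes a code value against whichever black and white points apply):

```python
def code_to_ire(code, video_range=True, bit_depth=8):
    """Map a code value to IRE: black -> 0 IRE, white -> 100 IRE."""
    black = 16 << (bit_depth - 8) if video_range else 0
    white = 235 << (bit_depth - 8) if video_range else (1 << bit_depth) - 1
    return (code - black) / (white - black) * 100

print(code_to_ire(16))                       # 0.0   (8-bit video-range black)
print(code_to_ire(235))                      # 100.0 (8-bit video-range white)
print(code_to_ire(255, video_range=False))   # 100.0 (8-bit full-range white)
```

Notice that the IRE result is the same whether the underlying scale is 16-235 or 0-255; only the code values differ.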

Not confusing at all right??

This, among other reasons, is why levels are so convoluted.

If your NLE is running in a Rec709 environment, for example, the scopes could be represented as video levels, as in Avid. You’ll see 16-235 on one side of an Avid scope and 0-100 on the other. Now you know why. 😉

Avid scopes

For a rec709 environment like an Avid project, full level files will be scaled to video levels.

Scopes are relative. While IRE is a bit of an outdated concept, it’s helpful to have a scale from 0-100 to simply describe the video range. For your particular piece of software, it’s important to understand what type of levels environment you’re looking at to understand the scopes.

Encoding Files: Scaling Between Full Levels and Video Levels

When we start to talk about video files, we have to get some more terminology out of the way. This is where we start to hear terms like 4:4:4, 4:2:2, RGB, YUV, and YCbCr. ProRes444 for example or Uncompressed YUV.

In general, RGB refers to digital computer displays and YCbCr refers to digital video displays.

Historically, RGB values were converted to YCbCr values to save space, as video bandwidth used to be far more costly. We still live within this legacy to a certain extent. RGB is 4:4:4, full range values, which means there is no chroma subsampling in the encoding. All that color information is maintained. YCbCr, on the other hand, is 4:2:2 and video range.
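For reference, here is a sketch of the standard BT.709 version of that conversion, from normalized R'G'B' to 8-bit video range Y'CbCr (the luma weights and chroma scaling are from the Rec. 709 spec; the function name is mine):

```python
def rgb_to_ycbcr_709(r, g, b):
    """Convert normalized (0.0-1.0) R'G'B' to 8-bit video-range Y'CbCr (BT.709)."""
    y  = 0.2126 * r + 0.7152 * g + 0.0722 * b   # BT.709 luma weights
    cb = (b - y) / 1.8556                        # scaled B minus luma difference
    cr = (r - y) / 1.5748                        # scaled R minus luma difference
    # quantize: luma into 16-235, chroma into 16-240 centered on 128
    return round(16 + 219 * y), round(128 + 224 * cb), round(128 + 224 * cr)

print(rgb_to_ycbcr_709(1.0, 1.0, 1.0))  # white -> (235, 128, 128)
print(rgb_to_ycbcr_709(0.0, 0.0, 0.0))  # black -> (16, 128, 128)
```

Note that video-range black and white land at 16 and 235, with neutral chroma sitting at 128, which lines up with the ranges discussed earlier.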

I won’t get into every description of these technical aspects of video encoding. The thing I want to focus on is what your files are doing and how your software is interpreting them based on what the file is. If you’re interested in digging into the history of video and more technical information about the above terms, Charles Poynton is the master of video technology. Here is a link to some of his work: http://poynton.ca/Poynton-video-eng.html

There is a misconception that video files must be converted to full range to be viewed properly on a computer display.

This simply isn’t true. Video range files can be displayed correctly on a computer monitor. What is important is that the software playing back your video file knows it’s a video range file. Thankfully, most software is designed to infer the levels based on a file’s encoding.

For example, if you’ve exported a ProResHQ QuickTime with video levels from Resolve (which Resolve will render by default for ProResHQ), that file will look correct if you play it back with a program that understands it is a video level file.

For the most part, video software is good at estimating the correct level designation for your video files based on information in your file.

However.

This is where things get tricky.

The line between full range and video range files has become much more muddy with the latest digital codecs.

QuickTime and MXF codecs like DNxHR, ProRes444, and Cineform can contain YUV values or RGB values.

While these are great, high quality codecs, they can also be tricky to use in real-life workflows. If you encode a file with video levels as ProRes444, most pieces of software will interpret your file correctly. However, if you encode your file as ProRes444 with full levels or RGB values, most post production software will incorrectly assume your file is video levels and clip values.

In my own testing with rendering from Resolve, Resolve assumes that ProRes444 is video levels, and so does Premiere when you import it, even though, according to the ProRes whitepaper, ProRes444 can encode RGB 4:4:4 data or YCbCr 4:2:2 data.

https://www.apple.com/final-cut-pro/docs/Apple_ProRes_White_Paper.pdf

BUT.

Even if you did encode RGB 4:4:4 data to a ProRes444 file, your software would still need to correctly interpret that data and not clamp values. This is why these newer 4:4:4 codecs are so confusing. Premiere in that case would clamp those full range values or assume the file was video range. You still might be able to access those values, but Premiere isn’t seeing them as intended.

DNxHR 444 is another codec that can confuse software. In my own tests with Resolve and Premiere, an auto level DNxHR 444 file rendered from Resolve will be interpreted as a full level file in Premiere. But Resolve actually encodes it to video levels with Auto selected. The video level values are therefore never expanded back to full range, leading to washed out luminance values in Premiere.

Here’s an example when rendering out a RED RAW clip as DNxHR 444 12-bit from Resolve with Auto Levels in the render options:

From Resolve’s Viewer

Matching Scopes in Resolve

Rendered File in Premiere’s Viewer

Lifted Values on Premiere’s scopes

Clearly it looks washed out in Premiere Pro. The baseline at the bottom of the scope is hovering above 0 at around 16 (not a coincidence). Premiere expects full levels from the DNxHR 444 file, but on auto levels, Resolve actually renders the file out with video levels.

This is actually one great way to tell if you are having an issue with levels. If your black level is clamped on your scopes around 16 (8-bit scopes) or 64 (10-bit scopes), you probably have incorrect level interpretation going on in your software.
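That tell-tale sign can be turned into a quick programmatic check. This is a hypothetical helper of my own, assuming you already have the luma code values of a decoded frame as a list of integers:

```python
def suspect_level_mismatch(luma_values, bit_depth=8):
    """Heuristic: if the darkest pixels bottom out near 16 (8-bit) or 64
    (10-bit) instead of 0, the file was likely video range but was
    interpreted as full range by the software."""
    video_black = 16 << (bit_depth - 8)   # 16 for 8-bit, 64 for 10-bit
    floor = min(luma_values)
    # allow a little noise around the lifted black point
    return abs(floor - video_black) <= 2

# a frame whose blacks sit at 16 -> levels probably misinterpreted
print(suspect_level_mismatch([16, 17, 120, 235]))   # True
# a frame that reaches true black -> levels look fine
print(suspect_level_mismatch([0, 3, 128, 255]))     # False
```

It’s only a heuristic (a genuinely low-contrast shot could trigger it too), but it captures exactly what you’d eyeball on the scope.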

So if we try again but change the data levels setting to Full instead of Auto, let’s see what happens.

Voila! Now Premiere matches Resolve

Levels are being interpreted correctly as full

In practice, it makes sense that Premiere would assume that a 444 file would be full range. Full range files are 4:4:4. Resolve, on the other hand, should know that DNxHR 444 should be full range when it is rendering it. But it appears to assume a DNxHR QuickTime should be video range, not full.

Files have levels information embedded within the file headers. This is how software knows how to scale the levels. Sometimes this information is wrong, or is interpreted incorrectly by a particular piece of software, which makes things very confusing.

Since there are so many codecs, color spaces and different range values, it would be very time consuming to compare them all. What is most important is that you understand that 4:4:4 codecs can be tricky and it’s important to test out your workflows and file scalings before using them in production.

Especially for Windows users of Resolve and Premiere, it’s important to understand how to use DNx codecs to pass files back and forth properly since ProRes encoding isn’t possible. Check out my other article about picking your NLE here: https://www.thepostprocess.com/2019/02/04/how-to-choose-your-video-editing-software/

Testing Levels with Color Bars

Generating color bars at the beginning of a shot or program is a great way to test out any issues with codecs or improper levels scaling. Then you’ll always know if what you’re seeing is correct.

The top example below is being scaled wrong, as you can see from the parade scope. This is a video level file being interpreted as full by the software. The black and white levels land at 64 and 940, which is a tell-tale levels scaling issue. You can test this yourself by exporting a file and looking at a scope in any NLE.

Full level files being interpreted as video have the opposite issue: the values will go beyond 0 and 1023.

These color bars are correct as the video level values are scaling properly to 0 and 1023.
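Both failure modes fall out of the same arithmetic. Here’s a sketch in Python with 10-bit values (the function name is mine; it simply applies the video-to-full expansion the software performs):

```python
def expand_video_to_full_10bit(code):
    """What software does to a 10-bit value it believes is video range:
    map 64-940 onto 0-1023."""
    return round((code - 64) * 1023 / 876)

# Correct case: a video-range file's bars (64/940) expand to 0/1023
print(expand_video_to_full_10bit(64), expand_video_to_full_10bit(940))

# Mistake case: a FULL-range file run through the same expansion
# overshoots the container in both directions
print(expand_video_to_full_10bit(0), expand_video_to_full_10bit(1023))
```

In the mistake case, black comes out below 0 and white comes out above 1023, which is exactly why a full level file interpreted as video clips at both ends.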

Working with Hardware and Levels

Understanding levels is key to setting up a proper viewing environment for your content. Even if you’re only making web videos on your computer without a dedicated video card, it’s important to understand the choice that you’re making.

Things are changing quickly with display technology. What is true about signal paths today might change tomorrow. So I’ll talk about the options for signal paths and how levels fit into that.

There are two major schools of thought when it comes to displaying and monitoring your video output:

1. Buy a dedicated video card with a dedicated video monitor for any video work (video range signal chain)

2. Use the output of a computer’s graphics card to drive a calibrated computer monitor or television (full range signal chain)

For most people doing video work today, it’s important to have a video monitor. Why is this important even if you’re making web videos? A few reasons:

You need to be able to calibrate your display to match a standard color space

Computer displays can’t be calibrated to consistently match standard color spaces for video work

Operating systems and internal graphics cards make it difficult to match video standards as closely as an external display

Graphics cards don’t offer the same software integration with timeline resolution and frame rates that a dedicated video card does

Newer display technology and updated graphics cards may change the reliance on dedicated video gear for monitoring

Old broadcast video standards of the past could go away with the rise of fully computer based workflows and signal chains

So a video monitor and a dedicated video card is still important. Today at least.

Does this mean we should only build video level signal chains?

Many video monitors today can display full range signals. Video cards can kick out full range signals.

So why should we stick with video range if we can do full?

For a few reasons. Most video codecs and video software still use color spaces based in the video world like rec709. Files like ProResHQ and DNxHD are video codecs with video levels at their core. Many post facilities are based around ProRes or DNx of some kind. Introducing an RGB signal chain into the mix is exciting in theory, but perfecting that setup would require a lot more effort for a small return.

That being said, RGB based workflows are becoming more popular. They might be the standard soon with all computer based workflows.

Usually in higher end workflows, QuickTimes aren’t used. The files are most likely full range 10-bit DPX files or 16-bit float OpenEXR files, which are much bigger containers than any signal chain or display technology currently available.

For film scanning and projection, full range signal chains are standard. In scenarios like this, it makes sense to build an RGB pipeline to maintain an unconverted, unscaled signal the whole way through.

For CG heavy facilities or workflows, full range has its benefits.

For most editorial workflows, sticking with video range systems is the most painless approach for now. As codecs continue to increase in quality and drive speeds go up at cheaper prices, RGB full range files and hardware might begin to replace more traditional video based signal chains.

Best Practices for Working with Full and Video Levels

Now the big question. How do we use levels on a day to day practical basis?

Here are some practical rules of thumb for working with levels in your post production workflow:

If you’re using 444 codecs, understand that software might interpret or render them with the wrong scaling. This can lead to your files being clipped or washed out. Test out your workflow with 444 codecs.

Test out any renders or file exports with color bars to make sure the levels are being interpreted or scaled correctly

Most cameras shoot video level signals not full. Some of these cameras allow for YUV headroom. Check your settings for your camera to understand how your files are created so that you interpret them correctly in your software and use those out of range values.

Make sure that your signal path for monitoring is consistent and matches across software and hardware outputs whether it’s video or data levels.

Files exported for broadcast should be video levels. Most of the time broadcasters require Rec709 ProResHQ 4:2:2 which is video levels.

Files exported for the internet should be video levels. Most codecs that are used for file delivery are video levels not full. Encoders expect video level files for most delivery formats.

Exporting files using video levels WILL NOT lead to your files looking washed out. Exporting files using full levels WILL NOT make your files look better or more accurate. Even though your computer display is RGB, video level files will look correct on your screen because players will scale the values correctly.

Conclusion

I hope this article has helped to demystify the concepts of levels in video post production work. There are many misconceptions about levels on the internet. My hope is that this information will help to dispel some of the confusion with levels.

If you know how your software is interpreting your files when it comes to levels, you’ll be able to handle any scaling issues or signal chain mismatches very easily.

Please leave a comment below with any issues or questions regarding my conclusions above. Have a great day.

Other Links for Further Reading on Levels

https://bobpariseau.com/blog/2018/5/2/digital-video-or-lost-in-color-space

https://stackoverflow.com/questions/25145772/converting-rgb-values-in-0-1-range-to-high-dynamic-range-exr-format

http://www.anyhere.com/gward/hdrenc/