The State of Video Codecs 2015

Video compression is the critical enabler of all video streaming, and today we’re at a codec crossroads unlike any that we’ve experienced. Though H.264 remains firmly entrenched as the go-to codec for most digital video transmission and delivery, particularly for streaming, we’ve seen the first deployments of HEVC and VP9, and heard aggressive claims (“technical performance superior to H.265”) from Xiph and Mozilla about the open source ultra HD (UHD) codec Daala. Out of nowhere, RealNetworks demonstrated its RMHD codec, with “HEVC-like image quality” at CES 2015. At the same time, we’re hearing claims of bitrate optimizers that can deliver HEVC-like bandwidth savings to H.264.

So it’s appropriate to start asking questions—for instance, how much more efficiency can you tweak from H.264? How soon will H.265/HEVC become truly essential? Where does VP9 fit into the codec marketplace? Will Daala or RMHD ever become meaningful alternatives to HEVC or VP9? But let’s start with a look backward.

Perspective

The last great seismic codec shift occurred on Aug. 20, 2007, when Adobe incorporated H.264 playback into Flash, making H.264 playback free on all Flash-enabled computers and notebooks. In those pre-mobile, pre-OTT days, this meant that at least 97 percent of the target consumers for virtually all streaming video could use the new codec as soon as they upgraded. This day marked the clear transition from VP6 to H.264.

If only the world were still so simple. Today, all forward-looking codec strategies must consider the installed base of non-upgradeable mobile and OTT devices, plus the impending demise of Flash in favor of HTML5’s Media Source Extensions (MSE). So unless you’re planning a totally greenfield video deployment solely to Ultra High Definition (UHD)-capable devices, there won’t be a clean break from H.264 to a UHD codec, at least not in 2015.

While the shift from VP6 to H.264 was relatively simple, the transition from H.264 to any UHD codec will be anything but. If you could have checked the 2014 holiday wish list of any streaming producer, it likely would have included the ability to achieve the bitrate savings promised by UHD codecs while continuing to actually use H.264. Interestingly, this is precisely the benefit promised by a relatively new group of vendors touting technologies called bitrate optimizers.

How Low Can H.264 Go?

In 2014, some companies began offering technologies that claim to reduce the delivery data rate of H.264 and other codecs by as much as 50 percent with minimal impact on visual quality. These technologies don’t replace the codec; they make it work more efficiently. One statement on the Faroudja Enterprises website says it best: “The Company does not perform compression coding or decoding. The company designs intellectual property to be used in conjunction with existing compression formats.”

In this regard, all technologies offer their own compression-enhancing special sauce, though they operate at different points in the encoding workflow. Faroudja’s technology is applied pre- and post-compression and utilizes a number of “advanced video processing techniques” that “improve perceived video image quality,” while delivering between a 35–50 percent bandwidth reduction “without any image quality degradation.” This technology appears to be targeted toward broadcast and similar markets, and it probably won’t impact the streaming marketplace in the short term.

Another technology, EuclidIQ’s EuclidVision, integrates with a standard encoder to “enhance the prediction layer within the encoder,” “improve temporal prediction,” and improve compression efficiency. EuclidIQ doesn’t target streaming publishers directly; instead, it seeks to convince encoding vendors to integrate its technology directly into their encoding tools. While EuclidIQ’s H.264 technology does not appear to be shipping, the company’s MPEG-2 technology reportedly delivers “video compression gains of 10–30 percent over standard MPEG-2 encoding.”

InterDigital takes some interesting approaches. The first, called Perceptual Pre-Processing, “filters out parts of visual content that the human eye cannot see under certain viewing conditions.” This reportedly “delivers equivalent perceived quality while reducing bitrates” by as much as 45–60 percent for 4K video and 25 percent for 1080i and 720p video.

The other technique, User-Adaptive Video Streaming, encodes and changes the playback stream to account for user behavior, such as the distance between the viewer and the screen, ambient light levels, and display pixel intensity. This way, the stream might change when a viewer places a device on a table (as measured by the accelerometer), or when the viewing environment becomes significantly lighter or darker (as measured by the camera). These techniques supposedly optimize the viewing experience and deliver the most efficient stream possible to all viewers, reducing bandwidth costs. This function includes encoding and decoding components that must be integrated into the streaming player via an SDK.
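To illustrate the concept, a player-side heuristic might cap the rendition ladder based on viewing distance and ambient light. This is a minimal sketch of the general idea, not InterDigital’s actual logic; the thresholds and bitrate ladder below are invented for illustration:

```python
def pick_rendition(renditions_kbps, distance_m, screen_diag_m, ambient_lux):
    """Pick a bitrate-ladder rung from viewing conditions (hypothetical heuristic).

    The farther the viewer sits relative to screen size, the less fine detail
    the eye can resolve, so a lower rung suffices; very bright rooms similarly
    mask subtle detail, so the player steps down one rung.
    """
    ladder = sorted(renditions_kbps)
    cap = ladder[-1]
    # Distance heuristic: beyond ~3 screen diagonals, full detail is wasted.
    if distance_m > 3 * screen_diag_m:
        cap = ladder[len(ladder) // 2]
    # Bright ambient light washes out shadow detail; step down one rung.
    if ambient_lux > 10000 and cap != ladder[0]:
        cap = ladder[ladder.index(cap) - 1]
    return cap

# A viewer 4 meters from a 1-meter screen in a dim room gets a mid-ladder rung;
# a close viewer in the same room gets the top rung.
print(pick_rendition([600, 1200, 2500, 4500], 4.0, 1.0, 200))   # 2500
print(pick_rendition([600, 1200, 2500, 4500], 1.5, 1.0, 200))   # 4500
```

In a real deployment, the accelerometer and camera readings the article describes would feed inputs like these, and the cap would steer the adaptive-streaming logic rather than replace it.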

Can’t You Just Make My MP4 File Smaller?

Several companies offer technologies that reduce the bandwidth of existing MP4 and other files. For example, Beamr Video is a Linux-based optimizer that “imitates the perceptual qualities of the human visual system ... to ... remov[e] redundancies, without creating any visual artifacts in the process.” According to the company’s website, this technology delivers bitrate reductions of 50–75 percent for Blu-ray Discs, 40–50 percent for download services (iTunes and Amazon), and 20–40 percent for streaming services (Hulu, YouTube, Netflix).

Similarly, Cinova Crunch is a technology that “eliminates visually-redundant information” in the video stream. More specifically, Crunch uses a proprietary “visual sensitivity index” to analyze each macroblock to determine how much information it can discard without degrading quality (Figure 1).

The overarching problem both technologies address is that every video is different and compresses more or less efficiently. If you’re working with disparate source clips and apply a single set of presets, you may encode some at too high a rate, which is wasteful, and others at too low a rate, which degrades quality. Ideally, you would experiment with each clip individually to find its optimal bitrate, but that would be exceptionally time-consuming. To automate this process, both technologies use proprietary algorithms that identify the optimal quality/data rate point and adjust the data rate appropriately.
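The per-clip search described above can be sketched in a few lines. This is a generic illustration of the idea, not any vendor’s actual algorithm: binary-search for the lowest bitrate whose measured quality still meets a target. The quality functions here are hypothetical stand-ins for a real encode-and-measure step (a test encode scored with a metric such as PSNR, SSIM, or VMAF):

```python
def optimal_bitrate(predict_quality, target_quality,
                    lo_kbps=300, hi_kbps=8000, tol_kbps=50):
    """Binary-search the lowest bitrate whose quality meets the target.

    predict_quality: callable(bitrate_kbps) -> quality score, assumed
    monotonically non-decreasing in bitrate.
    """
    if predict_quality(hi_kbps) < target_quality:
        return hi_kbps  # even the ceiling can't hit the target
    while hi_kbps - lo_kbps > tol_kbps:
        mid = (lo_kbps + hi_kbps) // 2
        if predict_quality(mid) >= target_quality:
            hi_kbps = mid   # target met; try a lower rate
        else:
            lo_kbps = mid   # too low; raise the floor
    return hi_kbps

# Hypothetical quality models: an easy clip (talking head) reaches the
# target at a far lower rate than a hard clip (high-motion sports).
easy = lambda kbps: 80 + 15 * (kbps / 8000)
hard = lambda kbps: 60 + 35 * (kbps / 8000)
print(optimal_bitrate(easy, 85))   # easy clip: lower bitrate suffices
print(optimal_bitrate(hard, 85))   # hard clip: needs a higher bitrate
```

The commercial optimizers presumably do something far more sophisticated per macroblock or per scene, but the economic logic is the same: stop spending bits once the perceptual target is met.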

How well do any of these technologies work? There have been no comparative third-party reviews, so it’s difficult to tell. All are nascent, so there are few case studies, though Beamr did host a panel called “Using Media Optimization to Improve Streaming Performance” at Streaming Media West that included executives from Warner Brothers, M-Go, Sony Pictures, and Yahoo! Flickr.

All of these technologies seem promising, but each presents its own integration and utilization challenges. Questions you should ask before even testing the technology include how it integrates into your existing encoding workflow and playback environments, how long the process takes, what types of files it produces, whether it’s available for both live and VOD, and, of course, what it costs.

Whether any of these technologies can deliver UHD-like bitrate savings while retaining full H.264 quality and compatibility is unclear. What is clear, however, is that all roads to UHD are expensive and potentially disruptive. Any company that yearns for the bandwidth savings and other benefits promised by UHD codecs should consider these optimization technologies as part of their exploration into transitioning to HEVC or VP9.

Beyond H.264 to UHD

The two primary contestants in the UHD space are HEVC and VP9. Considering quality, encoding time, and CPU playback requirements on Windows and Mac computers, the two are closer than fraternal twins.

One big differentiator is that VP9 is royalty-free and open source, while HEVC is encumbered with a royalty of $0.20 per unit (after the first 100,000 units each year), capped at $25 million per year. On the flip side, HEVC is the collaborative effort of multiple companies and a true standard, while VP9 is essentially a proprietary technology. Sure, it’s open source, but the only company that knows where it’s going is Google.
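The royalty arithmetic is worth making concrete. Assuming the MPEG LA terms announced in 2014 ($0.20 per unit after the first 100,000 units each year, capped at $25 million annually), the annual exposure for a device or software maker works out as follows:

```python
def hevc_royalty_usd(units_per_year):
    """Annual HEVC royalty under the 2014 MPEG LA terms cited in the text:
    $0.20 per unit, first 100,000 units per year royalty-free,
    capped at $25 million per enterprise per year."""
    FREE_UNITS = 100_000
    RATE_USD = 0.20
    CAP_USD = 25_000_000
    return min(max(units_per_year - FREE_UNITS, 0) * RATE_USD, CAP_USD)

print(hevc_royalty_usd(50_000))       # under the free tier -> 0.0
print(hevc_royalty_usd(1_000_000))    # (1,000,000 - 100,000) * $0.20 = 180000.0
print(hevc_royalty_usd(200_000_000))  # hits the $25 million annual cap
```

For a small publisher the fee is negligible; for a company shipping hundreds of millions of units, the cap is what makes the number predictable, and it is precisely that per-unit exposure that VP9’s royalty-free terms eliminate.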

These and other factors will determine how each codec succeeds in the three primary streaming markets: mobile, smart TV/OTT, and computers and notebooks. So let’s take a look, starting with mobile.

Mobile

There are obviously two major mobile platforms: iOS and Android. Both have already announced HEVC support to some degree. Specifically, Apple is using HEVC for FaceTime on the iPhone 6, but for no other video functions, though general-purpose HEVC recording and playback could presumably be enabled at any time. With Android 5.0 (Lollipop), Google added a software HEVC player licensed from Ittiam Systems that can play both streamed and downloaded content, and opened up hooks in the operating system to support the HEVC hardware acceleration that’s coming on most mobile chipsets.

Regarding VP9, it’s hell-freezes-over unlikely that Apple will ever support Google’s codec; it never supported VP8 and is famously in the MPEG-LA HEVC patent group. Strangely, while Android has supported VP9 playback since version 4.4, the format isn’t listed in Android’s Supported Media Formats. Similarly, where the Lollipop specs mention “State of the art video technology with support for HEVC,” VP9 is also nowhere to be found. The bottom line is that Android currently supports VP9, but doesn’t seem to think it warrants mention.

Of course, this describes the status quo for native, browser-based playback on both mobile platforms. If you’re creating an app for either mobile OS, you can use either codec. Certain developers offer SDKs for both codecs for both OSs, though Android’s ability to access HEVC hardware acceleration is a very significant advantage.
