Emerging technology: Cobalt Digital extols the virtues of affordable HDR conversion with SL-HDR1 metadata

By Ciro A. Noronha, EVP of engineering, Cobalt Digital

High dynamic range (HDR) is an emerging technology that delivers noticeably better image quality than standard dynamic range (SDR) content with essentially the same bandwidth.

However, while HDR continues to gain popularity, SDR needs to be preserved, at least for the near future, because a large number of legacy consumer televisions do not support HDR. Since there is a mix of HDR and SDR devices in the marketplace, there are several ways to provide cost-effective, simultaneous service to both. The SL-HDR1 conversion option is an ideal choice for broadcasters.

HDR basics

Dynamic range is the ratio between the lowest and highest values of the luminance, or intensity of light, emitted by a display. Essentially, it is the ratio between the “whitest white” and the “blackest black.” The human eye can perceive more dynamic range than is offered by 10-bit SDR material. So, how do you pack an HDR signal into a 10-bit representation?
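
As a rough numerical illustration (the figures below are hypothetical, not measurements of any particular display), the ratio is simply peak white divided by black level:

```python
# Hypothetical peak-white and black-level figures, in nits (cd/m^2), for illustration only.
sdr_peak, sdr_black = 100.0, 0.1      # a modest SDR panel
hdr_peak, hdr_black = 1000.0, 0.005   # a brighter HDR panel with deeper blacks

print(f"SDR contrast ratio: {sdr_peak / sdr_black:,.0f}:1")   # 1,000:1
print(f"HDR contrast ratio: {hdr_peak / hdr_black:,.0f}:1")   # 200,000:1
```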

Thankfully, the human eye is nonlinear in its response and perceives more detail at lower luminosities. HDR exploits this by assigning bits to light intensity in a nonlinear manner: more code values go to the lower intensities, where the eye is most sensitive, and fewer to the higher intensities, which allows the overall range to extend far beyond SDR. The practical result is that HDR can show detail in bright areas that SDR would simply clip.

The luminance encoded in SDR signals is relative; at 100%, it basically tells the display to show its whitest white. In contrast, HDR encodes the absolute value of the luminance, using a non-linear transfer function based on what the eye can perceive: the SMPTE 2084 Perceptual Quantiser (PQ) transfer function. The maximum absolute luminance that can be encoded in a PQ signal is 10,000 nits, which most monitors cannot display.
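
As a sketch of how PQ packs absolute luminance into a normalised signal, the ST 2084 inverse EOTF can be written as below. The constants are the ones published in the standard; the function and variable names are illustrative.

```python
# Sketch of the SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance in nits -> signal value in 0..1.
# Constants as published in ST 2084; function and variable names here are illustrative.

M1 = 2610 / 16384           # 0.1593017578125
M2 = 2523 / 4096 * 128      # 78.84375
C1 = 3424 / 4096            # 0.8359375
C2 = 2413 / 4096 * 32       # 18.8515625
C3 = 2392 / 4096 * 32       # 18.6875

def pq_encode(luminance_nits: float) -> float:
    """Map absolute luminance (0..10,000 nits) to a normalised PQ signal value."""
    y = max(0.0, min(luminance_nits, 10000.0)) / 10000.0
    y_m1 = y ** M1
    return ((C1 + C2 * y_m1) / (1.0 + C3 * y_m1)) ** M2

print(round(pq_encode(100.0), 3))     # ~0.508 -- roughly SDR reference white
print(round(pq_encode(10000.0), 3))   # 1.0 -- the PQ maximum
```

Note that roughly half of the PQ code range sits below the 100-nit level, which is the “more bits at lower intensities” behaviour described above.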

What happens when a display is fed an HDR signal it cannot display because the luminance and/or color are out of range? The monitor must create an image as close as possible to the original source material – and to help the monitor do this job, metadata may be included in the stream. Metadata helps the monitor adapt the absolute luminance of the HDR signal to its capabilities.
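
As a toy illustration of why that metadata matters (this is not the tone mapping defined by any of the standards discussed below), a display can use a “content peak luminance” value carried in metadata to roll off highlights rather than hard-clip them:

```python
# Toy highlight roll-off, for illustration only: this is NOT the tone mapping defined
# by any HDR standard. "content_peak_nits" stands in for a value a real system would
# read from metadata (e.g. the mastering display's peak luminance).

def tone_map_nits(pixel_nits: float, content_peak_nits: float, display_peak_nits: float) -> float:
    """Compress luminance above the display's capability instead of hard-clipping it."""
    if content_peak_nits <= display_peak_nits:
        return pixel_nits                          # everything fits: pass through unchanged
    knee = 0.8 * display_peak_nits                 # start rolling off at 80% of display peak (arbitrary)
    if pixel_nits <= knee:
        return pixel_nits
    # Map [knee .. content_peak] smoothly onto [knee .. display_peak].
    t = (pixel_nits - knee) / (content_peak_nits - knee)
    return knee + (display_peak_nits - knee) * t * (2.0 - t)

# On a 1,000-nit display showing 4,000-nit content, a 2,000-nit highlight keeps
# headroom below a 4,000-nit one instead of both clipping to 1,000 nits:
print(round(tone_map_nits(2000.0, 4000.0, 1000.0)))   # ~922
print(round(tone_map_nits(4000.0, 4000.0, 1000.0)))   # 1000
```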

HDR standards

There are many competing HDR standards available today, and most of them provide some means of supporting SDR and HDR simultaneously. From a high-level point of view, they can be classified as static (using static metadata or no metadata) and dynamic (using dynamic metadata).

One widely used static HDR standard is hybrid log-gamma (HLG), which is not based on SMPTE 2084 PQ. It is an attempt to use a backward-compatible transfer curve that will “work” with both SDR and HDR monitors without any metadata.

At low luminance levels, it matches SDR, so an HLG signal applied to an SDR monitor will “look OK”, while an HDR monitor will show the improved range at the higher luminance levels. HLG trades quality (it is not as good as the dynamic metadata options) for simplicity (the same signal everywhere, with no metadata processing).
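
For reference, the HLG transfer characteristic standardised in ITU-R BT.2100 really is gamma-like at low levels and logarithmic above them, which is what makes the SDR-compatible behaviour possible. A sketch of the OETF with the published constants (the function name is illustrative):

```python
import math

# Sketch of the HLG OETF from ITU-R BT.2100: scene-linear light (0..1) -> non-linear signal (0..1).
A = 0.17883277
B = 1.0 - 4.0 * A                 # 0.28466892
C = 0.5 - A * math.log(4.0 * A)   # ~0.55991073

def hlg_oetf(e: float) -> float:
    """Square-root (gamma-like) below 1/12, logarithmic above it."""
    e = max(0.0, min(e, 1.0))
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)    # the region that tracks an SDR-style curve
    return A * math.log(12.0 * e - B) + C

print(round(hlg_oetf(1.0 / 12.0), 3))   # 0.5 -- the crossover point
print(round(hlg_oetf(1.0), 3))          # 1.0 -- full-range input maps to full-range output
```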

The dynamic HDR standards all start from a PQ base layer. The most relevant ones are SMPTE 2094-10 (Dolby Vision), SMPTE 2094-40 (HDR10+), and SL-HDR1.

The basic operation of various dynamic standards can be understood as transmitting “an image plus instructions” that can be processed by a monitor. Dolby Vision and HDR10+ start with an HDR image, and the “instructions” allow mapping to any monitor, all the way down to SDR. While it is possible for an end device to generate SDR from this HDR signal, that end device needs to understand and process the metadata to do so.

Support legacy SDR 

With SL-HDR1, the opposite happens. A standard SDR signal is transmitted, and the metadata allows a compatible device to reconstruct the original HDR signal (or any suitable intermediate level). This is the ideal way to support legacy SDR devices: they simply ignore the metadata they do not support and display the SDR image.
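
A minimal sketch of that decode-side decision, using a hypothetical `present_frame` helper; the actual HDR reconstruction is the metadata-driven luminance-mapping and colour-correction process defined by the SL-HDR1 specification and is only stubbed out here:

```python
from typing import Optional

def present_frame(sdr_frame: bytes,
                  sl_hdr1_metadata: Optional[bytes],
                  device_supports_sl_hdr1: bool) -> bytes:
    """Legacy SDR devices take the first branch: the metadata is never consulted."""
    if sl_hdr1_metadata is None or not device_supports_sl_hdr1:
        return sdr_frame                                   # backwards-compatible path: show SDR as-is
    return reconstruct_hdr(sdr_frame, sl_hdr1_metadata)    # SL-HDR1-aware path

def reconstruct_hdr(sdr_frame: bytes, metadata: bytes) -> bytes:
    # Placeholder only: the real reconstruction (metadata-driven luminance mapping
    # and colour correction) is defined by the SL-HDR1 specification.
    raise NotImplementedError
```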

To support a mix of SDR and HDR devices, broadcasters have several options. The first is simulcasting two or more versions of the same signal, in HDR and SDR. Simulcast is well suited for over-the-top (OTT) distribution, where multiple versions of the same content are required. It is also the best solution for supporting legacy 8-bit devices, such as AVC 4:2:0 decoders.

Some broadcasters, however, are experimenting with adding SL‑HDR1 metadata to 8-bit signals, with reasonable results.

HLG is another option, as the same signal is compatible with both HDR and SDR. However, this can negatively impact the quality of the HDR signal unless the distributor controls the production. The other dynamic modes transmit HDR with metadata that can be used to derive SDR, but this requires a receiving device that understands both HDR and the metadata.

A better choice is SL-HDR1, which transmits a very good quality SDR signal with metadata to reconstruct the HDR image, and automatically delivers a full HDR experience to compatible monitors and/or set-top boxes. SL-HDR1 is defined in ETSI TS 103 433-1 and was approved in early 2018 as an amendment to ATSC A/341. The metadata is carried inside the video elementary stream as SEI messages in compressed streams (H.264/H.265), or in the VANC using SMPTE 2108 for SDI.
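
As an illustration of the compressed-stream carriage, the sketch below locates SEI NAL units in an HEVC (H.265) Annex-B elementary stream; identifying and parsing the specific SEI payload that holds the SL-HDR1 data is defined by the ETSI/ATSC documents and is not shown.

```python
# Sketch: locate SEI NAL units in an HEVC (H.265) Annex-B elementary stream.
# Which SEI payload inside them carries the SL-HDR1 metadata is specified by the
# ETSI/ATSC documents; parsing that payload is not shown here.

PREFIX_SEI_NUT = 39   # HEVC nal_unit_type for a prefix SEI NAL unit
SUFFIX_SEI_NUT = 40   # HEVC nal_unit_type for a suffix SEI NAL unit

def iter_nal_units(stream: bytes):
    """Split an Annex-B byte stream on 00 00 01 start codes (00 00 00 01 also matches)."""
    starts = []
    i = stream.find(b"\x00\x00\x01")
    while i != -1:
        starts.append(i + 3)                      # first byte after the start code
        i = stream.find(b"\x00\x00\x01", i + 3)
    for begin, nxt in zip(starts, starts[1:] + [len(stream) + 3]):
        yield stream[begin:nxt - 3].rstrip(b"\x00")

def sei_nal_units(stream: bytes):
    """Yield only the SEI NAL units; SL-HDR1 metadata, when present, travels inside these."""
    for nal in iter_nal_units(stream):
        if len(nal) >= 2 and (nal[0] >> 1) & 0x3F in (PREFIX_SEI_NUT, SUFFIX_SEI_NUT):
            yield nal
```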

For scalable HEVC (SHVC), the A/341 standard calls for two spatial layers, base and enhancement, and the SL-HDR1 metadata may be included in either layer. The spatial resolution of the enhancement layer can be up to three times that of the base layer. If the SL-HDR1 metadata is in the base layer, it applies to both layers; if it is present in the enhancement layer, it applies only to that layer.

This gives the broadcaster flexibility to have different signal levels, and monetise them differently – possibly a “free” SDR level at lower resolution, and a “premium” HDR level at higher resolution.
