---
title: What is HDR, really?
date: 2025-07-18
tags: [color]
description: "HDR is about having more details in shadows and highlights. A higher dynamic range is one piece of the puzzle, but not all of it."
---

Technically, dynamic range refers to the ratio between the brightest and
darkest absolute luminance that can be displayed. SDR (standard dynamic range)
consumer equipment can produce something like 0.32 cd/m² to 320 cd/m². HDR
(high dynamic range) consumer equipment can produce roughly 0.064 cd/m² to
1,000 cd/m² (see [Poynton
(2022)](https://library.imaging.org/admin/apis/public/api/ist/website/downloadArticle/cic/30/1/6)).

I was a bit confused when people talked about HDR, until I realized that they
were referring to the wider goal of having more details in shadows and
highlights. A higher dynamic range is one piece of the puzzle, but not all of
it.

In this post I will explore some of the other changes that contribute to that
goal.

## More Bits

A simple way to increase detail is just to increase the number of bits we use
for each channel. In sRGB (the most common SDR color space for digital media)
we typically use 8 bits per channel, while HDR often uses 10 bits, giving us
four times as many code values per channel.
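A quick back-of-the-envelope check of what those two extra bits buy us:

```python
def code_values(bits: int) -> int:
    """Distinct values per channel at a given bit depth."""
    return 2 ** bits

def total_colors(bits: int) -> int:
    """Distinct RGB triplets when all three channels use that depth."""
    return code_values(bits) ** 3

print(code_values(8), code_values(10))    # 256 vs 1024 values per channel
print(total_colors(8), total_colors(10))  # ~16.8 million vs ~1.07 billion triplets
```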

## Beyond White and Black

In SDR, black is 0% and white is 100%. HDR, on the other hand, makes a
distinction between surfaces and highlights. Surface white might for example be
defined at 90%, so that there is some headroom for highlights that are even
brighter. In fact, surface white on an HDR screen will in practice not be much
brighter than on an SDR screen.
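To get a feel for the numbers: assuming the commonly cited HDR reference white of 203 cd/m² (my assumption here, taken from operational practice documents, not a value defined in this post), the highlight headroom on a 1,000 cd/m² screen works out to a bit over two photographic stops:

```python
import math

reference_white = 203.0  # cd/m², assumed HDR reference ("surface") white
peak_luminance = 1000.0  # cd/m², a typical consumer HDR peak

# Headroom above surface white, expressed in photographic stops
# (each stop is a doubling of luminance).
headroom_stops = math.log2(peak_luminance / reference_white)
print(round(headroom_stops, 2))  # roughly 2.3 stops for highlights
```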

## Wider Gamut

While not technically related to high dynamic range, HDR standards such as
[HDR10](https://hdr10plus.org/wp-content/uploads/2024/01/HDR10_Ecosystem_Whitepaper.pdf)
also define the exact colors of the red, green, and blue lights (called
*primaries*) that make up each pixel. Most commonly these are the
ones defined in [ITU-R BT.2020](https://www.itu.int/rec/r-rec-bt.2020/), which
can produce a much wider range of colors than sRGB, P3, or even Adobe RGB.
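One way to make "wider" concrete is to compare the areas of the primary triangles on the CIE xy chromaticity diagram. This is only a rough sketch (xy area is an imperfect proxy for perceived gamut size), using the standard chromaticity coordinates of the sRGB and BT.2020 primaries:

```python
# Chromaticity (x, y) of the red, green, and blue primaries.
SRGB = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
BT2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

def triangle_area(primaries):
    """Area of the triangle spanned by three (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

print(triangle_area(BT2020) / triangle_area(SRGB))  # ~1.9x the xy area
```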

## Curves

Some color spaces have the property of being *linear*, by which we mean that
adding the values of two colors in such a color space has the same effect as
mixing two corresponding physical light sources.

However, using such a linear color space would result in inefficient encoding,
because human perception is not linear in that sense. A lot of bits would be
used to encode differences that we cannot even perceive, while only a few
bits would be left for areas that make a big difference to us. So while a lot
of processing happens in linear color spaces, storage and transmission often
uses color spaces that employ a non-linear *transfer function* before encoding
the values as integers.

Transfer functions actually take up most of the space in the relevant standards
(e.g. [SMPTE ST 2084](https://pub.smpte.org/latest/st2084/st2084-2014.pdf) or
[ITU-R BT.2100](https://www.itu.int/rec/R-REC-BT.2100)). The most common
one for HDR is called *perceptual quantizer (PQ)*.
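As a sketch of what such a transfer function looks like, here is the PQ curve with the constants from ST 2084 (treat this as an illustration, not a reference implementation):

```python
# PQ (SMPTE ST 2084) constants.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(luminance: float) -> float:
    """Absolute luminance in cd/m² -> non-linear signal in [0, 1]."""
    y = (luminance / 10000.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

def pq_decode(signal: float) -> float:
    """Non-linear signal in [0, 1] -> absolute luminance in cd/m²."""
    v = signal ** (1.0 / M2)
    y = max(v - C1, 0.0) / (C2 - C3 * v)
    return 10000.0 * y ** (1.0 / M1)
```

Encoding and decoding round-trip cleanly, which is the point I make below: the transfer function changes how values are stored, not which colors come out. Note how perceptual the allocation is: an SDR-ish 100 cd/m² already lands at a signal value of roughly 0.51, so half the code values go to the bottom 1% of the 10,000 cd/m² range.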

I honestly don't care all that much about transfer functions though. They are
applied in encoding and reverted in decoding, so they don't actually change
anything about the colors. The worst-case scenario is that encoding is not as
efficient as it could be. With the move from 8 bits to 10 bits that shouldn't
be a major problem.

## Tone Mapping

When you want to display HDR content on an SDR screen, you have to do a lossy
conversion that is often called *tone mapping*. [ITU-R
BT.2408](https://www.itu.int/pub/R-REP-BT.2408) has some examples of how this
can be done.

This is actually not a new phenomenon though. There are very similar issues
e.g. in printing (because ink on paper has a very different gamut from
light-emitting screens) or in games (because the lighting systems in modern
game engines produce an extremely high dynamic range).

One option is to clip everything that cannot be displayed to the target space.
Since clipping individual channels might change the hue, sometimes people opt
to reduce the saturation instead. Another option is to shrink down the entire
color space until it fits.
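A toy sketch of the first and last of those options (the Reinhard curve below is a classic global operator from the graphics literature, not something the cited standards mandate):

```python
def clip(x: float) -> float:
    """Hard-clip scene values to the displayable [0, 1] range.
    In-range values are untouched, but all highlight detail is lost."""
    return min(max(x, 0.0), 1.0)

def reinhard(x: float) -> float:
    """Compress the whole range smoothly; never quite reaches 1.
    Highlight detail survives, but everything gets darker."""
    return x / (1.0 + x)

for x in (0.25, 1.0, 4.0):
    print(x, clip(x), round(reinhard(x), 3))
```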

Which option you choose depends on your specific use case. [ICC
profiles](https://www.color.org/icc_specs2.xalter) (a file format for
conversion between color spaces) can therefore contain multiple mappings with
different *rendering intents*.

A special kind of tone mapping is *local tone mapping*, where the mapping is
different depending on context. For example, a dark pixel in a bright area of
the image might be mapped to black, but a pixel of the same color in a dark
area of the image might be mapped to a lighter grey to maintain local contrast.
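A minimal 1-D sketch of that idea (the box blur for the local average and the Reinhard curve for compression are arbitrary choices on my part):

```python
def box_blur(xs, radius=1):
    """Local average with clamped borders."""
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - radius), min(len(xs), i + radius + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

def local_tone_map(xs, radius=1):
    """Compress the local average, carry the local detail over unchanged."""
    base = box_blur(xs, radius)
    return [b / (1.0 + b) * (x / b) if b > 0 else 0.0
            for x, b in zip(xs, base)]

dark_area = [0.1, 0.1, 0.1]    # a 0.1 pixel among dark neighbours
bright_area = [4.0, 0.1, 4.0]  # the same 0.1 pixel among bright neighbours
print(local_tone_map(dark_area)[1], local_tone_map(bright_area)[1])
```

The same input value 0.1 comes out noticeably darker when its neighbours are bright, which is exactly the context-dependence described above.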

## Personalization

Games commonly have a slider for gamma, which allows players to adjust the tone
mapping to their specific viewing conditions and personal preferences. Another
common case for color customization is a night mode, where screens are darker
and less blue at night.
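Such a gamma slider can be as simple as one extra power curve applied on top of the normal pipeline (a sketch of the general idea, not how any particular game implements it):

```python
def apply_user_gamma(value: float, gamma: float) -> float:
    """value in [0, 1]; gamma > 1 lifts the midtones, gamma < 1 darkens them.
    Black (0) and white (1) stay fixed, so only the curve in between moves."""
    return value ** (1.0 / gamma)

print(apply_user_gamma(0.5, 1.0))  # 0.5, the neutral slider position
print(apply_user_gamma(0.5, 1.3))  # ~0.59, brighter midtones
```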

I have found [one article](https://lightillusion.com/what_is_hdr.html) that
interprets PQ as prohibiting these kinds of user customization. In this
interpretation, every color value maps to an absolute luminance. This would not
only take power away from users, it would also disregard the huge influence
that viewing conditions have on human sight. If anything, we need more options
for personalization, not less.

I am not entirely sure if this is really what the authors of that specification
had in mind though.

## Scene-to-Scene differences

[ITU-R BT.2390](https://www.itu.int/pub/R-REP-BT.2390/) mentions that a higher
dynamic range could be used not just to increase the range within a scene, but
also to increase scene-to-scene differences. In my opinion, that would be the worst
possible outcome of all of this. I fear that we will end up with a situation
like with sound, where you have to increase the brightness of your screen all
the way up just to see anything during a dark scene, only to get your eyes burned when
the next bright scene arrives.

## Is any of this relevant?

A lot of work is currently being put into supporting HDR. On
the hardware side we need cameras that can capture more details and screens
that can display them. On the software side we need support for different color
spaces across the stack. Many image and video formats have already been
adapted, [CSS color Level 4](https://www.w3.org/TR/css-color-4/) brings more
color spaces to the web, and Wayland recently gained a [color management
protocol](https://wayland.app/protocols/color-management-v1).

While I think that it is nice to have the option of a higher dynamic range and
a wider gamut (especially during production), sRGB is fine for most
everyday activity. Adding color management everywhere adds a whole lot of
complexity.

What I would like to see is better images and more personalization. What I fear
we will get is less personalization and more scene-to-scene differences.

## Further Reading

-   The Wayland ecosystem collected some information around HDR, among other things a [list of all relevant specifications](https://gitlab.freedesktop.org/pq/color-and-hdr/-/blob/main/doc/specs.md).
-   [SMPTE RP 177](https://pub.smpte.org/pub/rp177/rp0177-1993_stable2010.pdf) explains how to calculate a conversion matrix from primaries and white point.
-   The kernel documentation has a surprisingly good [explanation of color spaces](https://www.kernel.org/doc/html/v4.9/media/uapi/v4l/colorspaces.html).
-   Matt Taylor has a good [article on tone mapping in games](https://64.github.io/tonemapping/) with lots of screenshots.
