Colour Science & Management
Modern colour grading has shifted from LUT-based workflows to colour-managed pipelines built on precise mathematical transforms. Understanding colour spaces, working spaces, and colour space transforms (CSTs) is now as fundamental as understanding the camera formats they connect — from sensor to screen.
01 — How colour is captured
Film and digital cameras both record light — but the way they do it is fundamentally different, and that difference is the reason the entire colour science pipeline exists.
Film: logarithmic by nature
Film captures light through a photochemical process. Silver halide crystals in the emulsion respond to photons: the more light, the more crystals darken. But that response is not linear — it follows a characteristic S-curve (the Hurter–Driffield curve) that compresses bright highlights and opens up shadow detail in a way that closely mirrors human vision. Film’s latitude — its ability to hold detail across a wide exposure range — was a built-in property of the medium. The toe of the curve gently lifts shadow detail; the shoulder rolls off highlights rather than clipping them.
Digital sensors: linear capture
A digital camera sensor works differently. Each photosite contains a photodiode that counts photons electrically. Double the light, double the electrical signal — the response is perfectly linear. This creates two practical problems.
First, the human visual system is not linear — we are far more sensitive to changes in dark areas than bright ones. A linear encoding wastes most of its code values on highlights the eye can barely distinguish, while crushing subtle shadow gradations into a handful of values. Second, a sensor’s full dynamic range cannot fit into a standard broadcast signal. A modern cinema camera captures 14–17 stops; Rec.709 video can represent around six to eight stops before clipping. Encoding directly to Rec.709 throws away most of what the sensor captured.
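The imbalance is simple arithmetic: in linear light each stop down halves the signal, so each stop down also halves the code values available to it. An illustrative calculation (no specific camera assumed):

```python
# How many code values a linear encoding assigns to each stop.
# In linear light, every stop down halves the signal, so the top stop
# alone occupies half of all available code values.

def codes_per_stop(bit_depth: int, stops: int) -> list[int]:
    """Code values falling inside each stop, brightest first."""
    total = 2 ** bit_depth
    counts = []
    upper = float(total)
    for _ in range(stops):
        lower = upper / 2
        counts.append(round(upper - lower))
        upper = lower
    return counts

print(codes_per_stop(bit_depth=12, stops=8))
# [2048, 1024, 512, 256, 128, 64, 32, 16]
```

Eight stops down, a 12-bit linear signal has only 16 code values left for an entire stop of shadow detail; this is the imbalance log encoding exists to correct.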
Log encoding: the solution
Log encoding applies a mathematical compression to the sensor’s linear signal before recording. The transformation redistributes code values to weight shadow and midtone detail more heavily — replicating what film’s chemistry did naturally — and squeezes the full dynamic range of the sensor into the available recording space. The result looks flat and desaturated, but all the captured information is preserved for grading.
Each camera manufacturer uses its own log encoding: ARRI's Log C, Sony's S-Log3, Blackmagic's Blackmagic Film. All solve the same problem; they differ in their mathematical curve and the colour gamut they record into.
Why gamut matters
Sensors also capture a wider range of colours than Rec.709 can represent — particularly in greens and cyans. So log footage is recorded into a wide-gamut colour space — S-Gamut3, ARRI Wide Gamut, Blackmagic Wide Gamut — that can hold the full captured colour volume without clipping. This is why colour management exists: to track each input colour space, maintain a wide working space during grading, and transform precisely to whatever each deliverable requires.
02 — Colour spaces and gamuts
A colour space is defined by three parameters: its colour primaries (which specific red, green, and blue define the gamut boundaries), its white point (what neutral looks like — D65 for most standards), and its transfer function (how values are encoded — linear, gamma, or log). Two images can have identical RGB numbers and look completely different if they were encoded in different colour spaces.
The key colour spaces
P3: the bridge between broadcast and Rec.2020
DCI-P3 was developed for digital cinema projection and is the required delivery format for all DCPs (Digital Cinema Packages). It uses the P3 primaries with a slightly warm DCI white point (approximately 6300 K) and a simple 2.6 gamma transfer function. Every theatrical release is mastered to DCI-P3.
Display P3 uses the same colour primaries as DCI-P3 but with the standard D65 white point. This is the colour space used on all modern Apple devices (iPhone, iPad, MacBook Pro) and as the reference space for streaming platforms including Netflix, Apple TV+, and Disney+. When Netflix specifies a P3-D65 deliverable, they mean Display P3.
Although Rec.2020 is the defined standard for HDR delivery, no display currently achieves full Rec.2020 reproduction. In practice, HDR content is mastered to P3-D65 — it fits within Rec.2020, gives meaningful wide-gamut headroom over Rec.709, and can be accurately monitored with current display technology. Full Rec.2020 mastering is effectively theoretical at present.
P3 covers approximately 26% more colour volume than Rec.709, with the expansion mostly in greens and reds. Rec.2020 covers approximately 67% more than Rec.709. On a P3-capable display, the difference from Rec.709 is clearly visible — particularly in saturated foliage, skies, and skin tones.
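The coverage figures can be estimated from the chromaticity coordinates alone, by comparing gamut triangle areas in the CIE 1976 u′v′ diagram (a simplified but common measure of relative gamut size):

```python
# Approximate relative gamut size as triangle area in CIE 1976 u'v',
# which is more perceptually even than the 1931 xy diagram.

def uv(x, y):
    d = -2 * x + 12 * y + 3
    return 4 * x / d, 9 * y / d

def gamut_area(primaries):
    (u1, v1), (u2, v2), (u3, v3) = [uv(x, y) for x, y in primaries]
    return abs(u1 * (v2 - v3) + u2 * (v3 - v1) + u3 * (v1 - v2)) / 2

REC709 = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
P3     = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]

print(f"P3 / Rec.709 area: {gamut_area(P3) / gamut_area(REC709):.2f}")  # ~1.26
```

The ratio lands at roughly 1.26, matching the quoted figure; area ratios in other diagrams (or full 3D volume ratios) give somewhat different numbers, which is why published coverage percentages vary.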
03 — Colour management
In an unmanaged workflow, the grading application treats every piece of footage as if it is already in the working colour space. The colourist manually applies LUTs at the right points in the pipeline — and if anything is applied out of order, the colour is wrong. In a colour-managed workflow, the application knows the colour space of every input and every output, and handles the transforms automatically using mathematically precise Colour Space Transforms.
CSTs vs LUTs
A Colour Space Transform (CST) is a mathematical formula that converts every input colour to its correct output colour using the defined equations of both colour spaces. A LUT approximates the same transform by pre-computing results on a grid of sample values and interpolating between them. The distinction matters: a CST is exact for every input, while a LUT's accuracy is limited by its grid size and interpolation.
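The two mechanisms can be sketched side by side, assuming a toy transform (a 2.4 gamma decode followed by a mild desaturation, not any real colour space pair): the CST path evaluates the formula exactly for every input, while the LUT path samples it on a 17-point grid and interpolates.

```python
# Exact transform (the "CST") vs. a 17-point 3D LUT of the same transform.
# Toy transform for illustration: decode a 2.4 gamma, then desaturate 20%.

N = 17  # grid points per axis

def cst(r, g, b):
    r, g, b = r ** 2.4, g ** 2.4, b ** 2.4    # decode to linear
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec.709 luma weights
    mix = lambda c: y + 0.8 * (c - y)         # 20% desaturation
    return mix(r), mix(g), mix(b)

# Pre-compute the LUT grid by sampling the exact transform
grid = [[[cst(i / (N - 1), j / (N - 1), k / (N - 1)) for k in range(N)]
         for j in range(N)] for i in range(N)]

def lut(r, g, b):
    """Trilinear interpolation on the pre-computed grid."""
    def split(c):
        p = min(c, 1.0) * (N - 1)
        i = min(int(p), N - 2)
        return i, p - i
    (i, fr), (j, fg), (k, fb) = split(r), split(g), split(b)
    out = [0.0, 0.0, 0.0]
    for di, wi in ((0, 1 - fr), (1, fr)):
        for dj, wj in ((0, 1 - fg), (1, fg)):
            for dk, wk in ((0, 1 - fb), (1, fb)):
                w = wi * wj * wk
                for c in range(3):
                    out[c] += w * grid[i + di][j + dj][k + dk][c]
    return tuple(out)

# Compare the two paths across a sample of inputs
points = [(x / 20, y / 20, z / 20)
          for x in range(21) for y in range(21) for z in range(21)]
worst = max(abs(a - b) for p in points for a, b in zip(cst(*p), lut(*p)))
print(f"max LUT error vs exact CST: {worst:.4f}")
```

The LUT agrees with the CST exactly at its grid points and only approximately in between; that residual error is why technical transforms use the formula and LUTs are reserved for looks.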
In modern grading, CSTs have largely replaced LUTs for all technical transforms — camera input, working space conversions, and display output. LUTs remain the right tool for creative looks, show LUTs, and on-set monitoring (see Section 04).
Three approaches to colour management
Resolve Color Management (RCM) and DaVinci Wide Gamut
Blackmagic Design’s own colour management system — built into DaVinci Resolve since version 17 — uses two components: DaVinci Wide Gamut (DWG) as the working colour gamut, and DaVinci Intermediate (DI) as the log encoding applied to it. Together they form Resolve Color Management.
DaVinci Wide Gamut is a scene-referred gamut wider than Rec.2020 and comparable in scope to ACES AP1. DaVinci Intermediate is a log curve designed to behave predictably under colour correction, with a toe and shoulder similar to ACEScct. When RCM is enabled in Resolve, the application automatically applies the correct input transform for each camera format on ingest and the selected output CST for the viewer and every deliverable — no OCIO configuration required.
RCM has become the dominant choice for productions graded entirely in Resolve. It is faster to configure than ACES, ships with manufacturer-certified input transforms for all supported cameras, and handles mixed-camera productions (ARRI + Sony + Blackmagic on the same timeline) automatically.
The CST node in Resolve
Even in unmanaged Resolve timelines, colourists can insert explicit CST nodes anywhere in the node graph to perform a precise transform between two colour spaces. This is how a colourist working without full RCM can still use mathematically correct transforms rather than LUTs — for example, converting S-Log3/S-Gamut3 footage to DaVinci Wide Gamut at the clip level using the camera manufacturer's certified colour science, then converting out at the end of the node chain to the deliverable colour space.
Choose one colour management approach — ACES, RCM, or direct LUT — and apply it consistently across the entire project. Mixing approaches within a timeline produces incorrect results and is one of the most common sources of colour pipeline errors in post-production.
DaVinci Wide Gamut is not the same as Blackmagic Wide Gamut. Blackmagic Wide Gamut is the native colour gamut of Blackmagic camera sensors — a camera-native gamut like ARRI Wide Gamut or Sony S-Gamut3. DaVinci Wide Gamut is a grading working space, designed to contain all camera gamuts without clipping. Both appear in a Resolve RCM workflow at different stages of the pipeline.
04 — LUTs: still essential
LUTs have not disappeared from modern post-production — they have found their natural role. As technical transforms have moved to mathematically precise CSTs, LUTs remain the right tool for the jobs that require a baked, portable, real-time colour mapping: on-set monitoring, creative looks, show LUTs for dailies, and output where a CST is not available.
1D vs 3D LUTs
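The structural difference: a 1D LUT stores one curve per channel, so each output channel can depend only on its own input channel; a 3D LUT stores a full RGB output per grid point, so it can represent interactions between channels. A minimal illustration (toy desaturation using Rec.709 luma weights):

```python
# Saturation changes output red based on input green and blue -
# channel crosstalk that no per-channel 1D LUT can represent,
# but that a 3D LUT captures trivially.

def desaturate(r, g, b, amount=0.5):
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec.709 luma weights
    return tuple(y + amount * (c - y) for c in (r, g, b))

# Same input red, different input green -> different output red:
r1, _, _ = desaturate(0.5, 0.0, 0.0)
r2, _, _ = desaturate(0.5, 1.0, 0.0)
print(r1 != r2)  # True: output red depends on green
```

This is why 1D LUTs are limited to transfer-function work (gamma, contrast, per-channel balance) while anything involving saturation or hue requires a 3D LUT.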
Grid size and precision
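Between grid points a LUT interpolates, so precision depends on the grid size and on where the curve is steep. A sketch measuring worst-case 1D interpolation error for a display-gamma encode at common grid sizes (an illustrative setup with a linear-light input, where the shadow end of the curve is steepest):

```python
# 1D LUT interpolation error vs. grid size for f(x) = x^(1/2.4)
# (linear -> display-gamma encode). The curve is steepest near black,
# so that is where a linearly spaced LUT loses the most precision.

def lut_error(f, size, samples=10_000):
    table = [f(i / (size - 1)) for i in range(size)]
    worst = 0.0
    for s in range(samples + 1):
        x = s / samples
        p = x * (size - 1)
        i = min(int(p), size - 2)
        approx = table[i] + (p - i) * (table[i + 1] - table[i])
        worst = max(worst, abs(approx - f(x)))
    return worst

f = lambda x: x ** (1 / 2.4)
for size in (17, 33, 65):
    print(f"{size:3d} points: max error {lut_error(f, size):.4f}")
```

The error shrinks as the grid grows but stays concentrated in the shadows; this steep-shadow behaviour is one reason production LUTs are usually sampled against a log or shaper-encoded input rather than linear light.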
Where LUTs still belong
On-set monitoring. A camera assistant’s monitor or a director’s reference display cannot run a full colour-managed pipeline in real time. A show LUT — baked to a 17- or 33-point cube for the specific camera format and display — is applied to give on-set viewers a consistent representation of the intended look.
Dailies and editorial. The show LUT distributed to editorial ensures that editors and producers see something close to the intended grade on offline monitors, without needing a colour-managed system.
Creative and show LUTs. The visual identity of a production — a drama’s warmth, a thriller’s cool desaturation, a period film’s emulsion-like contrast — can be encoded as a show LUT. Colourists increasingly build these looks as CST-based grades and then export the result as a LUT for distribution to set and editorial, giving the portability of a LUT file while retaining the precision of a managed grade in the suite.
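That export step can be sketched as a small `.cube` writer (the Adobe/Iridas format read by Resolve and most on-set tools; in `.cube` files the red index varies fastest; the look function below is a hypothetical stand-in for a real grade):

```python
# Minimal .cube writer: bake an RGB -> RGB look function into a 3D LUT file.
# In the .cube format the red index varies fastest, then green, then blue.

def write_cube(path, look, size=33, title="Show LUT"):
    with open(path, "w") as f:
        f.write(f'TITLE "{title}"\n')
        f.write(f"LUT_3D_SIZE {size}\n")
        for b in range(size):
            for g in range(size):
                for r in range(size):
                    out = look(r / (size - 1), g / (size - 1), b / (size - 1))
                    f.write("{:.6f} {:.6f} {:.6f}\n".format(*out))

# Hypothetical placeholder look: a mild contrast S-curve per channel
def show_look(r, g, b):
    s = lambda c: c * c * (3 - 2 * c)  # smoothstep as a toy contrast curve
    return s(r), s(g), s(b)

write_cube("show_look.cube", show_look, size=33)
```

In practice the `look` function would be the managed grade sampled through the pipeline, so the file distributed to set and editorial matches what the suite displays.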
Film emulation. LUTs for film stock emulation (Kodak 2383 print stock, Fuji 3513, etc.) remain common as creative starting points. These cannot be replicated accurately by a CST — they encode specific aesthetic choices about grain response, colour cross-over, and saturation behaviour that are creative rather than technical.
In a colour-managed pipeline, LUTs are applied within the working colour space — after the input CST and before the output CST. A creative LUT designed for Rec.709 must not be applied to log footage or inside a wide-gamut working space. Applying LUTs at the wrong point in the pipeline is the most common source of unexpected colour casts and contrast shifts.
05 — Log formats
Every major camera manufacturer ships its own log encoding and associated wide-gamut colour space. In a colour-managed pipeline, the colourist sets the input colour space per clip (or globally per camera) and the system handles the transform automatically. Understanding log formats is still necessary — you need to know which format a clip was recorded in to declare it correctly. A wrong input declaration produces subtly or obviously wrong colour throughout the grade.
Log footage is not broken footage. It is footage encoded to preserve maximum tonal information for grading. Viewing log material without a display transform on a standard monitor gives you no useful information about how it will look in the finished grade.
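As a concrete illustration, Sony publishes the S-Log3 curve; linearising a code value follows its two segments, with 18% grey sitting at code 420 on the 10-bit scale:

```python
# Sony S-Log3 -> linear (scene reflectance), from Sony's published formula.
# Input is the normalised signal (10-bit code value / 1023).

def slog3_to_linear(x: float) -> float:
    if x >= 171.2102946929 / 1023.0:
        # Log segment: 18% grey at code 420, one stop per 261.5/... codes
        return (10.0 ** ((x * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
    # Linear segment near black
    return (x * 1023.0 - 95.0) * 0.01125000 / (171.2102946929 - 95.0)

print(round(slog3_to_linear(420.0 / 1023.0), 4))  # 0.18 - 18% grey
print(round(slog3_to_linear(95.0 / 1023.0), 4))   # 0.0  - black at code 95
```

Midtones sit around 40% signal and black well above zero, which is exactly why ungraded log looks flat and lifted on a standard display.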
Always confirm the camera’s log format and firmware version with the DIT or camera department before starting a grade. Log C3 and Log C4 are not interchangeable — applying a Log C3 input CST to ALEXA 35 footage (Log C4) produces incorrect colour and clipped highlights.
06 — ACES
ACES — Academy Color Encoding System — is an open colour management framework developed by the Academy of Motion Picture Arts and Sciences. It defines a complete pipeline: a scene-linear reference space large enough to contain all real colours, standardised camera input transforms, and output transforms for every delivery standard. ACES and Resolve RCM are different implementations of the same principle (see Section 03); ACES’s key advantage is that it is facility-agnostic and supported across all major grading applications.
The ACES colour spaces
Input Transforms (IDTs) and Output Transforms (ODTs)
Getting footage into ACES requires an Input Device Transform (IDT) — a camera-specific conversion from the camera’s log colour space into the ACES reference space. IDTs are provided by camera manufacturers and maintained by the Academy. An ACES IDT is designed to produce scene-linear, perceptually accurate results in the reference space, not merely to look correct on a Rec.709 display.
Getting the grade out of ACES requires an Output Device Transform (ODT) — a conversion from the working colour space to a specific display standard. ACES provides ODTs for Rec.709, DCI-P3, Rec.2020 PQ (HDR10), Display P3, and Dolby Vision. The same grade produces correct results for any display by swapping the ODT — the core value of ACES for productions with multiple delivery formats.
When to use ACES
For simpler productions — a single camera, a straightforward Rec.709 delivery, all work in Resolve — Resolve RCM is usually faster and equally accurate. ACES adds pipeline complexity. That complexity is worth it when ACES's benefits apply (facility-agnostic interchange across applications, multiple delivery formats); it is unnecessary overhead when they do not.
07 — HDR and P3 delivery
High Dynamic Range video extends both the brightness range and colour volume of an image beyond what a standard Rec.709 signal can carry. A standard SDR display targets around 100 nits peak and Rec.709 colour. Modern HDR displays reach 1000–2000 nits peak and reproduce a significantly wider gamut — typically targeting P3-D65, with Rec.2020 as the container. Three HDR formats are relevant to delivery, alongside the P3 colour space requirements for cinema and streaming.
HLG — Hybrid Log-Gamma
HLG was developed jointly by the BBC and NHK for broadcast. Its key property is backwards compatibility: an HLG signal displayed on a standard SDR television without any conversion looks like a normal SDR broadcast picture. This makes it practical for live transmission, where a single signal must serve both HDR and SDR receivers simultaneously. HLG carries no metadata — the signal itself contains everything needed for display, which simplifies broadcast chains.
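The curve itself is public: ITU-R BT.2100 defines the HLG OETF as a square root below one-twelfth of peak (the conventional-gamma-like region that gives HLG its SDR compatibility) and a log curve above it:

```python
import math

# HLG OETF from ITU-R BT.2100: scene linear light E (0-1) -> signal (0-1).
A = 0.17883277
B = 0.28466892  # = 1 - 4*A
C = 0.55991073  # = 0.5 - A * ln(4*A)

def hlg_oetf(e: float) -> float:
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)   # square-root (gamma-like) segment
    return A * math.log(12.0 * e - B) + C  # logarithmic segment

print(round(hlg_oetf(1.0 / 12.0), 4))  # 0.5 - the crossover point
print(round(hlg_oetf(1.0), 4))         # ~1.0 - peak maps to full signal
```

The lower half of the signal range behaves like a conventional camera gamma, which is why an SDR set renders an HLG feed acceptably without conversion.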
HDR10
HDR10 uses the PQ (Perceptual Quantizer) transfer function, encoding absolute luminance values — a code value always represents the same number of nits regardless of the display. HDR10 carries static metadata (MaxCLL and MaxFALL — maximum content light level and maximum frame-average light level) which the display uses for tone mapping. HDR10 is an open standard and is the baseline HDR format required by every streaming platform. Content is delivered in a Rec.2020 container, typically mastered at P3-D65.
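The PQ curve is defined in SMPTE ST 2084 with fixed constants, so a given signal value always corresponds to the same absolute luminance. A sketch of the encode direction (nits to signal):

```python
# PQ (SMPTE ST 2084) inverse EOTF: absolute luminance in nits -> signal (0-1).
M1 = 2610 / 16384       # 0.1593017578125
M2 = 2523 / 4096 * 128  # 78.84375
C1 = 3424 / 4096        # 0.8359375
C2 = 2413 / 4096 * 32   # 18.8515625
C3 = 2392 / 4096 * 32   # 18.6875

def pq_encode(nits: float) -> float:
    y = max(nits, 0.0) / 10000.0  # PQ is defined up to 10,000 nits
    yp = y ** M1
    return ((C1 + C2 * yp) / (1.0 + C3 * yp)) ** M2

print(round(pq_encode(100.0), 3))    # ~0.508 - SDR reference white
print(round(pq_encode(10000.0), 3))  # 1.0   - top of the PQ range
```

SDR reference white (100 nits) lands at roughly half signal: more than half of the PQ range is reserved for highlights above SDR white, which is the headroom HDR grading works in.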
Dolby Vision
Dolby Vision is built on the same PQ transfer function as HDR10 but adds dynamic metadata — brightness and colour information that changes scene by scene. Where HDR10 gives the display a single set of numbers for the entire programme, Dolby Vision gives it per-scene instruction data, allowing more precise tone mapping for each scene. On a Dolby Vision-capable display this typically produces more accurate shadow detail and highlight rendering than HDR10 with static metadata.
Dolby Vision requires a licensed Dolby-certified facility for the trim pass — a secondary adjustment that creates the SDR version from the Dolby Vision master. The trim pass metadata travels in the file and allows a single deliverable to serve Dolby Vision, HDR10, and SDR playback.
P3 in cinema and streaming delivery
Cinema DCP requires DCI-P3 — the full P3 gamut with the DCI white point (approximately 6300 K, slightly warm relative to D65) and a 2.6 gamma transfer function. Cinema projectors are calibrated to this standard. A DCP graded on a D65-calibrated monitor must account for the white point difference to avoid a warm colour cast in the cinema.
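The white point difference can be checked numerically. McCamy's approximation estimates correlated colour temperature from CIE 1931 chromaticity; applied to the published coordinates (DCI calibration white x=0.314, y=0.351 from SMPTE RP 431-2, D65 at x=0.3127, y=0.3290) it confirms the roughly 200 K gap. The formula is an approximation, good to a few kelvin in this range:

```python
# McCamy's approximation: chromaticity (x, y) -> correlated colour temperature.
def mccamy_cct(x: float, y: float) -> float:
    n = (x - 0.3320) / (y - 0.1858)
    return -449 * n**3 + 3525 * n**2 - 6823.3 * n + 5520.33

print(round(mccamy_cct(0.3127, 0.3290)), "K  (D65, nominally 6504 K)")
print(round(mccamy_cct(0.3140, 0.3510)), "K  (DCI white, ~6300 K)")
```

Both results land where the standards say they should: near 6500 K for D65 and near 6300 K for DCI white, the warm shift a D65-calibrated grade must account for.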
Streaming platforms use Display P3 (P3 primaries, D65 white point) as their wide-gamut reference. Netflix requires P3-D65 as the SDR reference display for HDR mastering. Apple TV+ delivers content in Display P3 on P3-capable Apple devices. Disney+ uses Display P3 for its HDR reference. In practice, a properly mastered HDR10 deliverable (PQ, Rec.2020 container, P3-D65 mastering) satisfies the requirements of all major streaming platforms.
Simultaneous SDR and HDR delivery
Most streaming platforms require both an HDR master and an SDR version. The two are not brightness-adjusted copies of each other — HDR and SDR represent different creative decisions about how the image should look on their respective displays. Highlights that are rendered with full detail in HDR may need to be compressed differently for SDR; colours that exist in P3 need to be mapped back within Rec.709.
The industry-standard approach is to grade the HDR version first in a colour-managed pipeline, then create an SDR trim pass. In a Dolby Vision workflow the trim pass is embedded in the Dolby Vision metadata; in a non-Dolby workflow it is a separate SDR grade created by the colourist from the HDR reference. Either way, the SDR and HDR grades share the same creative intent — they are not independent grades.
Plan HDR delivery into the post schedule from the start. An HDR grade in a colour-managed pipeline takes more time than a standard SDR grade, and the trim pass requires a separate session. Treating HDR as an afterthought — requesting it after an SDR grade is locked — often means rebuilding the grade from scratch.