Xenon Post — Tools & Reference

Colour Science & Management

Modern colour grading has shifted from LUT-based workflows to colour-managed pipelines built on precise mathematical transforms. Understanding colour spaces, working spaces, and colour space transforms (CSTs) is now as fundamental as understanding the camera formats they connect — from sensor to screen.

01 — How colour is captured

Film and digital cameras both record light — but the way they do it is fundamentally different, and that difference is the reason the entire colour science pipeline exists.

Film: logarithmic by nature

Film captures light through a photochemical process. Silver halide crystals in the emulsion respond to photons: the more light, the more crystals darken. But that response is not linear — it follows a characteristic S-curve (the Hurter–Driffield curve) that compresses bright highlights and preserves shadow detail in a way that closely mirrors human vision. Film's latitude — its ability to hold detail across a wide exposure range — was a built-in property of the medium. The toe of the curve eases gradually into the deepest shadows rather than cutting them off; the shoulder rolls off highlights rather than clipping them.

Digital sensors: linear capture

A digital camera sensor works differently. Each photosite contains a photodiode that counts photons electrically. Double the light, double the electrical signal — the response is perfectly linear. This creates two practical problems.

First, the human visual system is not linear — we are far more sensitive to changes in dark areas than bright ones. A linear encoding wastes most of its code values on highlights the eye can barely distinguish, while crushing subtle shadow gradations into a handful of values. Second, a sensor’s full dynamic range cannot fit into a standard broadcast signal. A modern cinema camera captures 14–17 stops; Rec.709 video can represent around six to eight stops before clipping. Encoding directly to Rec.709 throws away most of what the sensor captured.
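The waste is easy to quantify. A minimal sketch, not tied to any particular camera: in a 10-bit linear encoding, each stop down from peak white occupies half the remaining code values, so the brightest stop consumes half of the entire code range while deep shadows share a handful of values.

```python
# How a 10-bit linear encoding distributes its 1024 code values across
# stops of exposure. Each stop spans [peak/2^(n+1), peak/2^n), so every
# stop down from the top halves the number of codes available.
PEAK = 1024  # code values in a 10-bit signal

def codes_per_stop(stops_below_peak: int) -> int:
    """Code values available to the stop n stops below peak white."""
    upper = PEAK // (2 ** stops_below_peak)
    lower = PEAK // (2 ** (stops_below_peak + 1))
    return upper - lower

for n in range(9):
    print(f"stop {n} below peak: {codes_per_stop(n)} code values")
# The top stop gets 512 codes; eight stops down, only 2 remain,
# which is why shadows posterise in linear encodings.
```

Log encoding exists precisely to redistribute this allocation toward the shadows and midtones.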

Fig. 1 — Tone response curves. Film's natural S-curve preserves shadow and highlight detail. Log encoding solves the same problem mathematically — it compresses highlights to fit the full sensor range into a recordable signal. A linear sensor clips highlights entirely.

Log encoding: the solution

Log encoding applies a mathematical compression to the sensor’s linear signal before recording. The transformation redistributes code values to weight shadow and midtone detail more heavily — replicating what film’s chemistry did naturally — and squeezes the full dynamic range of the sensor into the available recording space. The result looks flat and desaturated, but all the captured information is preserved for grading.

Each camera manufacturer uses its own log encoding: ARRI's Log C, Sony's S-Log3, Blackmagic's Blackmagic Film. All solve the same problem; they differ in their mathematical curve and the colour gamut they record into.
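As a concrete example, Sony's S-Log3 curve maps scene-linear reflectance to a recorded signal using a pure log segment above a linear toe. The constants below are from Sony's published S-Log3 white paper; verify them against Sony's current documentation before relying on them.

```python
import math

def slog3_encode(reflectance: float) -> float:
    """Sony S-Log3 OETF: scene-linear reflectance (0.18 = mid grey)
    to a normalised code value, per Sony's published formula."""
    if reflectance >= 0.01125000:
        # Log segment: mid grey (0.18) lands at 420/1023, about 41% of the range
        return (420.0 + math.log10((reflectance + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
    # Linear toe below the cut point
    return (reflectance * (171.2102946929 - 95.0) / 0.01125000 + 95.0) / 1023.0

print(f"mid grey  -> {slog3_encode(0.18):.4f}")   # ≈ 0.4106
print(f"90% white -> {slog3_encode(0.90):.4f}")   # ≈ 0.5845
```

Note how little of the signal range mid grey and white occupy: the headroom above 0.58 is reserved for the many stops of highlight information the sensor captured.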

Why gamut matters

Sensors also capture a wider range of colours than Rec.709 can represent — particularly in greens and cyans. So log footage is recorded into a wide-gamut colour space — S-Gamut3, ARRI Wide Gamut, Blackmagic Wide Gamut — that can hold the full captured colour volume without clipping. This is why colour management exists: to track each input colour space, maintain a wide working space during grading, and transform precisely to whatever each deliverable requires.

02 — Colour spaces and gamuts

A colour space is defined by three parameters: its colour primaries (which specific red, green, and blue define the gamut boundaries), its white point (what neutral looks like — D65 for most standards), and its transfer function (how values are encoded — linear, gamma, or log). Two images can have identical RGB numbers and look completely different if they were encoded in different colour spaces.
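Those three parameters are enough to derive a colour space's conversion matrix to CIE XYZ. A sketch using the standard construction (chromaticities for Rec.709 and D65 are taken from the published standards; the method normalises the primaries so that RGB white maps exactly to the white point):

```python
import numpy as np

def rgb_to_xyz_matrix(r_xy, g_xy, b_xy, white_xy):
    """Build the RGB->XYZ matrix from primary and white chromaticities.

    Each primary's xy is lifted to XYZ with Y=1, then the three columns
    are scaled so that RGB=(1,1,1) maps exactly to the white point.
    """
    def xy_to_xyz(x, y):
        return np.array([x / y, 1.0, (1 - x - y) / y])

    primaries = np.column_stack([xy_to_xyz(*p) for p in (r_xy, g_xy, b_xy)])
    white = xy_to_xyz(*white_xy)
    scale = np.linalg.solve(primaries, white)
    return primaries * scale  # scale each column

# Rec.709 primaries with D65 white
M = rgb_to_xyz_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06), (0.3127, 0.3290))
print(np.round(M[1], 4))  # Y row = luminance weights ≈ [0.2126 0.7152 0.0722]
```

The middle row of the result is where the familiar Rec.709 luma coefficients come from: they are not arbitrary, but a direct consequence of the chosen primaries and white point.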

Fig. 2 — Gamut comparison based on CIE primaries. Rec.709 is the broadcast standard. P3 extends significantly further in green and red — used for cinema DCP and streaming wide-gamut. Rec.2020 is the HDR reference standard; most content is mastered to P3, not full Rec.2020.

The key colour spaces

| Colour space | Gamut | White point | Transfer fn | Primary use |
|---|---|---|---|---|
| Rec.709 | Rec.709 | D65 | BT.1886 (≈2.4γ) | Broadcast HD, SDR streaming |
| sRGB | ≈Rec.709 | D65 | sRGB (≈2.2γ) | Web, computer monitors |
| DCI-P3 | P3 | DCI (6300 K) | 2.6γ | Cinema DCP delivery |
| Display P3 | P3 | D65 | sRGB / 2.2γ | Apple devices, streaming reference |
| Rec.2020 | Rec.2020 | D65 | PQ or HLG | HDR delivery standard |
| ACES AP0 | ACES AP0 | ≈D60 | Linear | Reference / archival |
| ACEScct (AP1) | ACES AP1 | ≈D60 | Log (ACEScct) | ACES grading working space |
| DaVinci Wide Gamut | DWG | D65 | DaVinci Intermediate | Resolve grading working space |

P3: the bridge between broadcast and Rec.2020

DCI-P3 was developed for digital cinema projection and is the required delivery format for all DCPs (Digital Cinema Packages). It uses the P3 primaries with a slightly warm DCI white point (approximately 6300 K) and a simple 2.6 gamma transfer function. Every theatrical release is mastered to DCI-P3.

Display P3 uses the same colour primaries as DCI-P3 but with the standard D65 white point. This is the colour space used on all modern Apple devices (iPhone, iPad, MacBook Pro) and as the reference space for streaming platforms including Netflix, Apple TV+, and Disney+. When Netflix specifies a P3-D65 deliverable, they mean Display P3.

Although Rec.2020 is the defined standard for HDR delivery, no display currently achieves full Rec.2020 reproduction. In practice, HDR content is mastered to P3-D65 — it fits within Rec.2020, gives meaningful wide-gamut headroom over Rec.709, and can be accurately monitored with current display technology. Full Rec.2020 mastering is effectively theoretical at present.

P3 covers approximately 26% more colour volume than Rec.709, with the expansion mostly in greens and reds. Rec.2020 covers approximately 67% more than Rec.709. On a P3-capable display, the difference from Rec.709 is clearly visible — particularly in saturated foliage, skies, and skin tones.

03 — Colour management

In an unmanaged workflow, the grading application treats every piece of footage as if it is already in the working colour space. The colourist manually applies LUTs at the right points in the pipeline — and if anything is applied out of order, the colour is wrong. In a colour-managed workflow, the application knows the colour space of every input and every output, and handles the transforms automatically using mathematically precise Colour Space Transforms.

CSTs vs LUTs

A Colour Space Transform (CST) is a mathematical formula that converts every input colour to its correct output colour using the defined equations of both colour spaces. A LUT approximates the same transform by pre-computing results on a grid of sample values and interpolating between them. The distinction matters:

CST
Mathematically exact
Computes the correct result for every input value. Can be inverted, chained, and updated when manufacturers release improved colour science. No interpolation error. The gold standard for technical transforms.
LUT
Approximation
Pre-computed for a grid of values; intermediate colours are interpolated. Accurate within its intended input range but degrades outside it. Cannot be inverted cleanly. Best suited for creative looks and on-set monitoring.
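The interpolation error is easy to demonstrate. A sketch comparing 1D LUT approximations of a 2.2-gamma display transform at two grid sizes (the transform and grid sizes are illustrative, not taken from any shipping LUT):

```python
# Approximate f(x) = x^(1/2.2) with 1D LUTs of different grid sizes
# and measure the worst-case interpolation error against the exact maths.
def exact(x: float) -> float:
    return x ** (1 / 2.2)

def make_lut(size: int) -> list[float]:
    return [exact(i / (size - 1)) for i in range(size)]

def apply_lut(lut: list[float], x: float) -> float:
    """Linear interpolation between the two nearest LUT entries."""
    pos = x * (len(lut) - 1)
    i = min(int(pos), len(lut) - 2)
    frac = pos - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac

samples = [i / 10000 for i in range(10001)]
for size in (17, 65):
    err = max(abs(apply_lut(make_lut(size), x) - exact(x)) for x in samples)
    print(f"{size:>3}-point LUT: max error {err:.4f}")
# Error is largest near black, where the curve is steepest. A finer
# grid shrinks it but never removes it; the exact transform has none.
```

This is also why LUTs degrade badly when fed values outside their intended input range: beyond the last grid node there is nothing left to interpolate.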

In modern grading, CSTs have largely replaced LUTs for all technical transforms — camera input, working space conversions, and display output. LUTs remain the right tool for creative looks, show LUTs, and on-set monitoring (see Section 04).

Three approaches to colour management

| Approach | Core mechanism | Working space | Best for |
|---|---|---|---|
| Direct LUT | Manually applied LUT files | Rec.709 or any | Simple single-camera / single-delivery projects |
| ACES | OCIO pipeline with IDTs and ODTs | ACEScct (AP1 log) | Multi-facility, multi-vendor, VFX-heavy productions |
| Resolve RCM | Built-in mathematical CSTs (no OCIO needed) | DaVinci Wide Gamut + DI | Resolve-only pipelines, especially mixed-camera drama |
Fig. 3 — The colour-managed pipeline. Input CSTs convert each camera format to the shared working space; the colourist grades there; output CSTs produce every deliverable from the same grade. Swapping the output CST produces a different deliverable without touching the grade.

Resolve Color Management (RCM) and DaVinci Wide Gamut

Blackmagic Design’s own colour management system — built into DaVinci Resolve since version 17 — uses two components: DaVinci Wide Gamut (DWG) as the working colour gamut, and DaVinci Intermediate (DI) as the log encoding applied to it. Together they form Resolve Color Management.

DaVinci Wide Gamut is a scene-referred gamut wider than Rec.2020 and comparable in scope to ACES AP1. DaVinci Intermediate is a log curve designed to behave predictably under colour correction, with a toe and shoulder similar to ACEScct. When RCM is enabled in Resolve, the application automatically applies the correct input transform for each camera format on ingest and the selected output CST for the viewer and every deliverable — no OCIO configuration required.
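The DaVinci Intermediate curve itself is published by Blackmagic. A sketch of the encode direction, with constants taken from Blackmagic's DaVinci Wide Gamut white paper (verify against the current document before production use):

```python
import math

# DaVinci Intermediate OETF: scene linear to log, per Blackmagic's
# published constants (linear segment below the cut, log above it).
DI_A, DI_B, DI_C = 0.0075, 7.0, 0.07329248
DI_M, DI_LIN_CUT = 10.44426855, 0.00262409

def davinci_intermediate_encode(x: float) -> float:
    if x <= DI_LIN_CUT:
        return x * DI_M                          # linear toe near black
    return (math.log2(x + DI_A) + DI_B) * DI_C   # log segment

print(f"mid grey (0.18) -> {davinci_intermediate_encode(0.18):.4f}")  # ≈ 0.336
```

The toe-plus-log shape is what makes DI behave like camera log under grading tools, as the text above notes.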

RCM has become the dominant choice for productions graded entirely in Resolve. It is faster to configure than ACES, ships with manufacturer-certified input transforms for all supported cameras, and handles mixed-camera productions (ARRI + Sony + Blackmagic on the same timeline) automatically.

ACES
Open standard
Supported across DaVinci Resolve, Baselight, Nucoda, and any OCIO-compatible application. Required when collaborating across facilities or delivering scene-linear material to VFX vendors on other software.
Resolve RCM
Resolve-native
Faster to configure, built-in transforms for every supported camera. Preferred for Resolve-only pipelines. Cannot be used for cross-facility interchange with non-Resolve systems without conversion.

The CST node in Resolve

Even in unmanaged Resolve timelines, colourists can insert explicit CST nodes anywhere in the node graph to perform a precise transform between two colour spaces. This is how a colourist working without full RCM can still use mathematically correct transforms rather than LUTs — for example, converting S-Log3/S-Gamut3 footage to DaVinci Wide Gamut at the clip level using the camera manufacturer's certified colour science, then applying a second CST at the end of the node graph to reach the deliverable colour space.

Choose one colour management approach — ACES, RCM, or direct LUT — and apply it consistently across the entire project. Mixing approaches within a timeline produces incorrect results and is one of the most common sources of colour pipeline errors in post-production.

DaVinci Wide Gamut is not the same as Blackmagic Wide Gamut. Blackmagic Wide Gamut is the native colour gamut of Blackmagic camera sensors — a camera-native gamut like ARRI Wide Gamut or Sony S-Gamut3. DaVinci Wide Gamut is a grading working space, designed to contain all camera gamuts without clipping. Both appear in a Resolve RCM workflow at different stages of the pipeline.

04 — LUTs: still essential

LUTs have not disappeared from modern post-production — they have found their natural role. As technical transforms have moved to mathematically precise CSTs, LUTs remain the right tool for the jobs that require a baked, portable, real-time colour mapping: on-set monitoring, creative looks, show LUTs for dailies, and output where a CST is not available.

1D vs 3D LUTs

1D LUT
Per-channel curves
Each colour channel (R, G, B) is mapped independently. A 1D LUT can adjust brightness, contrast, gamma, and per-channel white balance, but it cannot shift hue or mix information between channels. One number in, one number out.
3D LUT
Full colour volume
Input is a combination of R, G, and B together. The output can be any RGB combination — the LUT can shift hue, change saturation, and perform any cross-channel transformation. Any colour operation that mixes channels, creative or technical, requires a 3D LUT.
Fig. 4 — 1D LUT (left): each channel processed independently by a separate curve — cannot shift hue or mix channels. 3D LUT (right): input colours sit at grid nodes in a cube; each node maps to any output colour.

Grid size and precision

| Grid size | Total nodes | Typical use |
|---|---|---|
| 17³ | 4,913 | On-set and preview LUTs — fast to compute, acceptable for monitoring |
| 33³ | 35,937 | Standard show and creative LUTs; good balance of accuracy and file size |
| 65³ | 274,625 | High-precision output LUTs; ACES ODTs and display calibration |
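The node counts come straight from the cube structure. A minimal sketch of how a 3D LUT is sampled: an identity cube plus trilinear interpolation (illustrative code, not a parser for any particular LUT file format such as .cube):

```python
# A 3D LUT is a cube of N³ nodes; colours between nodes are found by
# trilinear interpolation across the 8 surrounding nodes.
N = 17  # 17³ = 4,913 nodes, a typical on-set preview size

# Identity LUT: every node maps to its own position in the cube.
lut = [[[(r / (N - 1), g / (N - 1), b / (N - 1))
         for b in range(N)] for g in range(N)] for r in range(N)]

def sample(lut, r, g, b):
    """Trilinear lookup: blend the 8 nodes surrounding (r, g, b)."""
    n = len(lut) - 1
    idx = [min(int(c * n), n - 1) for c in (r, g, b)]
    frac = [c * n - i for c, i in zip((r, g, b), idx)]
    out = [0.0, 0.0, 0.0]
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((frac[0] if dr else 1 - frac[0])
                     * (frac[1] if dg else 1 - frac[1])
                     * (frac[2] if db else 1 - frac[2]))
                node = lut[idx[0] + dr][idx[1] + dg][idx[2] + db]
                out = [o + w * c for o, c in zip(out, node)]
    return tuple(out)

print(sample(lut, 0.5, 0.25, 0.8))  # identity LUT returns the input
```

A real creative LUT simply stores different output colours at the nodes; the interpolation machinery is identical, which is why grid density directly controls precision.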

Where LUTs still belong

On-set monitoring. A camera assistant’s monitor or a director’s reference display cannot run a full colour-managed pipeline in real time. A show LUT — baked to a 17- or 33-point cube for the specific camera format and display — is applied to give on-set viewers a consistent representation of the intended look.

Dailies and editorial. The show LUT distributed to editorial ensures that editors and producers see something close to the intended grade on offline monitors, without needing a colour-managed system.

Creative and show LUTs. The visual identity of a production — a drama’s warmth, a thriller’s cool desaturation, a period film’s emulsion-like contrast — can be encoded as a show LUT. Colourists increasingly build these looks as CST-based grades and then export the result as a LUT for distribution to set and editorial, giving the portability of a LUT file while retaining the precision of a managed grade in the suite.

Film emulation. LUTs for film stock emulation (Kodak 2383 print stock, Fuji 3513, etc.) remain common as creative starting points. These cannot be replicated accurately by a CST — they encode specific aesthetic choices about grain response, colour cross-over, and saturation behaviour that are creative rather than technical.

In a colour-managed pipeline, LUTs are applied within the working colour space — after the input CST and before the output CST. A creative LUT designed for Rec.709 must not be applied to log footage or inside a wide-gamut working space. Applying LUTs at the wrong point in the pipeline is the most common source of unexpected colour casts and contrast shifts.

05 — Log formats

Every major camera manufacturer ships its own log encoding and associated wide-gamut colour space. In a colour-managed pipeline, the colourist sets the input colour space per clip (or globally per camera) and the system handles the transform automatically. Understanding log formats is still necessary — you need to know which format a clip was recorded in to declare it correctly. A wrong input declaration produces subtly or obviously wrong colour throughout the grade.

Log footage is not broken footage. It is footage encoded to preserve maximum tonal information for grading. Viewing log material without a display transform on a standard monitor gives you no useful information about how it will look in the finished grade.

| Camera | Log format | Colour gamut | Notes |
|---|---|---|---|
| ARRI | Log C3 | AWG3 | ALEXA Classic, Mini, LF, Mini LF — the most widely graded log format in high-end drama |
| ARRI | Log C4 | AWG4 | ALEXA 35 — higher dynamic range than Log C3; new IDTs required — not interchangeable with Log C3 |
| Sony | S-Log3 | S-Gamut3 / S-Gamut3.Cine | Venice 2, FX9, FX6, FX3, A7S III — S-Gamut3.Cine is the narrower gamut recommended for most workflows |
| Blackmagic | Blackmagic Film / BRAW | Blackmagic Wide Gamut | URSA Mini Pro, Pocket 6K — BRAW encodes log data and metadata into a raw container |
| RED | Log3G10 | REDWideGamutRGB | KOMODO, V-RAPTOR, MONSTRO — applied to REDCODE RAW on export |
| Canon | Canon Log 3 | Cinema Gamut | C300 Mark III, C70, R5C — Canon Log 3 is preferred over Canon Log 2 for most grading workflows |
| Panasonic | V-Log | V-Gamut | EVA1, LUMIX S5 II / S5 IIX — same V-Log used in VariCam; available in smaller bodies |
| DJI | D-Log M | DJI D-Gamut | Mavic 3 Pro, Inspire 3 — D-Log M (Modified) is DJI's current preferred log for grading |
| Fujifilm | F-Log2 | F-Gamut | GFX100 II, X-H2S — F-Log2 offers greater dynamic range than original F-Log |

Always confirm the camera’s log format and firmware version with the DIT or camera department before starting a grade. Log C3 and Log C4 are not interchangeable — applying a Log C3 input CST to ALEXA 35 footage (Log C4) produces incorrect colour and clipped highlights.

06 — ACES

ACES — Academy Color Encoding System — is an open colour management framework developed by the Academy of Motion Picture Arts and Sciences. It defines a complete pipeline: a scene-linear reference space large enough to contain all real colours, standardised camera input transforms, and output transforms for every delivery standard. ACES and Resolve RCM are different implementations of the same principle (see Section 03); ACES’s key advantage is that it is facility-agnostic and supported across all major grading applications.

The ACES colour spaces

ACES2065-1 (AP0)
The reference space
Scene-linear, extremely wide gamut encompassing all visible colours. Used for interchange and archiving. Not used for grading — the values are not perceptually uniform and the gamut is larger than any display.
ACEScct
The grading space
A log-like encoding of the AP1 primaries (slightly narrower than AP0, still wider than P3 or Rec.2020). Behaves similarly to camera log in a grading suite. The recommended working space for most ACES projects.
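The ACEScct curve is defined in the Academy's S-2016-001 specification. A sketch of the linear-to-ACEScct encode (constants from the published spec; mid grey lands near 0.4136):

```python
import math

# ACEScct encode: a pure log2 curve with a linear toe spliced in near
# black, which is what makes it behave like camera log under grading.
ACESCCT_CUT = 0.0078125  # below this linear value, the toe applies

def lin_to_acescct(x: float) -> float:
    if x <= ACESCCT_CUT:
        return 10.5402377416545 * x + 0.0729055341958355
    return (math.log2(x) + 9.72) / 17.52

print(f"mid grey (0.18) -> {lin_to_acescct(0.18):.4f}")  # ≈ 0.4136
```

The toe is the only difference from the older ACEScc encoding; it keeps grading controls from behaving erratically in the deepest shadows.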

Input Transforms (IDTs) and Output Transforms (ODTs)

Getting footage into ACES requires an Input Device Transform (IDT) — a camera-specific conversion from the camera’s log colour space into the ACES reference space. IDTs are provided by camera manufacturers and maintained by the Academy. An ACES IDT is designed to produce scene-linear, perceptually accurate results in the reference space, not merely to look correct on a Rec.709 display.

Getting the grade out of ACES requires an Output Device Transform (ODT) — a conversion from the working colour space to a specific display standard. ACES provides ODTs for Rec.709, DCI P3, Rec.2020 PQ (HDR10), Display P3, and Dolby Vision. The same grade produces correct results for any display by swapping the ODT — the core value of ACES for productions with multiple delivery formats.

When to use ACES

| Scenario | Why ACES helps |
|---|---|
| Multiple camera formats | All cameras normalised to one reference space — the colourist sees consistent starting material regardless of camera |
| Multi-facility collaboration | ACES and OCIO are understood by every major grading application — material can be exchanged without colour interpretation issues |
| Significant VFX work | VFX vendors receive scene-linear ACES2065-1 material — consistent results when compositing camera and CG elements |
| Simultaneous SDR and HDR delivery | Swap the ODT to produce correct Rec.709 SDR, HDR10, and Dolby Vision outputs from the same grade |
| Long-form archiving | ACES2065-1 is the Academy's recommended archival format — future-proof against display technologies that don't yet exist |

For simpler productions — a single camera, a straightforward Rec.709 delivery, all work in Resolve — Resolve RCM is usually faster and equally accurate. ACES adds pipeline complexity. That complexity is worth it when the benefits above apply; it is unnecessary overhead when they do not.

07 — HDR and P3 delivery

High Dynamic Range video extends both the brightness range and colour volume of an image beyond what a standard Rec.709 signal can carry. A standard SDR display targets around 100 nits peak and Rec.709 colour. Modern HDR displays reach 1,000–2,000 nits peak and reproduce a significantly wider gamut — typically targeting P3-D65, with Rec.2020 as the container. Three HDR formats are relevant to delivery, alongside the P3 colour space requirements for cinema and streaming.

HLG — Hybrid Log-Gamma

HLG was developed jointly by the BBC and NHK for broadcast. Its key property is backwards compatibility: an HLG signal displayed on a standard SDR television without any conversion looks like a normal SDR broadcast picture. This makes it practical for live transmission, where a single signal must serve both HDR and SDR receivers simultaneously. HLG carries no metadata — the signal contains all the information needed for display, which keeps broadcast chains simple.
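The curve shows where the name comes from: a square-root (gamma-like) segment for the SDR-compatible lower range and a log segment above it. A sketch of the OETF, with constants from ITU-R BT.2100 (verify against the current Recommendation):

```python
import math

# HLG OETF (BT.2100): scene linear E in [0, 1] to signal E' in [0, 1].
# The lower range is a simple square root, which is why an HLG signal
# looks acceptable on an SDR display with no conversion at all.
A, B, C = 0.17883277, 0.28466892, 0.55991073

def hlg_oetf(e: float) -> float:
    if e <= 1 / 12:
        return math.sqrt(3 * e)          # gamma-like segment (SDR compatible)
    return A * math.log(12 * e - B) + C  # log segment for highlights

print(f"knee (E = 1/12) -> {hlg_oetf(1/12):.3f}")  # 0.500
print(f"peak (E = 1)    -> {hlg_oetf(1.0):.3f}")   # 1.000
```

Everything below the knee behaves like a conventional gamma signal; only the top portion carries the extra highlight range.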

HDR10

HDR10 uses the PQ (Perceptual Quantizer) transfer function, encoding absolute luminance values — a code value always represents the same number of nits regardless of the display. HDR10 carries static metadata (MaxCLL and MaxFALL — maximum content light level and maximum frame-average light level) which the display uses for tone mapping. HDR10 is an open standard and is the baseline HDR format required by every streaming platform. Content is delivered in a Rec.2020 container, typically mastered at P3-D65.
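PQ's absolute encoding can be sketched directly from the SMPTE ST 2084 formula (constants from the published standard; the input is absolute luminance in nits, normalised to 10,000):

```python
# PQ inverse EOTF (SMPTE ST 2084): absolute luminance in nits to a
# normalised code value. Unlike gamma or HLG, a PQ code value always
# means the same number of nits on any compliant display.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    y = (nits / 10000.0) ** M1
    return ((C1 + C2 * y) / (1 + C3 * y)) ** M2

for nits in (100, 1000, 10000):
    print(f"{nits:>5} nits -> PQ {pq_encode(nits):.3f}")
# 100 nits (SDR peak) lands at ≈ 0.51: roughly half the signal range
# is reserved for highlights brighter than SDR can show at all.
```

This is also why MaxCLL metadata matters: the display needs to know how much of that absolute range the content actually uses before it can tone-map sensibly.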

Dolby Vision

Dolby Vision is built on the same PQ transfer function as HDR10 but adds dynamic metadata — brightness and colour information that changes scene by scene. Where HDR10 gives the display a single set of numbers for the entire programme, Dolby Vision gives it per-scene instruction data, allowing more precise tone mapping for each scene. On a Dolby Vision-capable display this typically produces more accurate shadow detail and highlight rendering than HDR10 with static metadata.

Dolby Vision requires a licensed Dolby-certified facility for the trim pass — a secondary adjustment that creates the SDR version from the Dolby Vision master. The trim pass metadata travels in the file and allows a single deliverable to serve Dolby Vision, HDR10, and SDR playback.

P3 in cinema and streaming delivery

Cinema DCP requires DCI-P3 — the full P3 gamut with the DCI white point (approximately 6300 K, slightly warm relative to D65) and a 2.6 gamma transfer function. Cinema projectors are calibrated to this standard. A DCP graded on a D65-calibrated monitor must account for the white point difference to avoid a warm colour cast in the cinema.

Streaming platforms use Display P3 (P3 primaries, D65 white point) as their wide-gamut reference. Netflix requires P3-D65 as the SDR reference display for HDR mastering. Apple TV+ delivers content in Display P3 on P3-capable Apple devices. Disney+ uses Display P3 for its HDR reference. In practice, a properly mastered HDR10 deliverable (PQ, Rec.2020 container, P3-D65 mastering) satisfies the requirements of all major streaming platforms.

| Format | Transfer | Colour gamut | Metadata | SDR compat. |
|---|---|---|---|---|
| HLG | Hybrid Log-Gamma | Rec.2020 | None | Yes — native |
| HDR10 | PQ | Rec.2020 (P3-D65 mastering) | Static (MaxCLL / MaxFALL) | No — separate SDR |
| Dolby Vision | PQ | Rec.2020 (P3 display) | Dynamic (per-scene) | Via trim pass in same file |
| DCI-P3 (DCP) | 2.6 gamma | P3 (DCI white point) | N/A | No — separate KDM |

Simultaneous SDR and HDR delivery

Most streaming platforms require both an HDR master and an SDR version. The two are not brightness-adjusted copies of each other — HDR and SDR represent different creative decisions about how the image should look on their respective displays. Highlights that are rendered with full detail in HDR may need to be compressed differently for SDR; colours that exist in P3 need to be mapped back within Rec.709.

The industry-standard approach is to grade the HDR version first in a colour-managed pipeline, then create an SDR trim pass. In a Dolby Vision workflow the trim pass is embedded in the Dolby Vision metadata; in a non-Dolby workflow it is a separate SDR grade created by the colourist from the HDR reference. Either way, the SDR and HDR grades share the same creative intent — they are not independent grades.

Plan HDR delivery into the post schedule from the start. An HDR grade in a colour-managed pipeline takes more time than a standard SDR grade, and the trim pass requires a separate session. Treating HDR as an afterthought — requesting it after an SDR grade is locked — often means rebuilding the grade from scratch.

Related: AU Broadcaster Delivery Specs · Aspect Ratios & Delivery · Codecs & Containers