How deep is your colour?

Previously we've discussed encoding colour information in pixels with numbers from a zero-to-one range, where 0 stands for black, 1 for white and numbers in between represent corresponding shades of grey.

In a similar way, the RGB colour model uses three numbers to store the brightness of the red, green and blue components, and represents a wide range of colours by mixing them. Here we'll address the precision of such a representation, which is defined by the number of bits a particular file format dedicates to describing that 0-to-1 range: the bit depth of a raster image.

Bits are the most basic units of storing information. Each can take only two values, which can be thought of as 0 or 1, off or on, the absence or presence of a signal, or black or white in our case. Therefore using 1 bit per pixel (a 1-bit image) would give us a picture consisting only of black-and-white elements with no shades of grey.

However, the great thing about bits is that grouping them together produces exponential growth: each new bit doesn't add two more values to the group, but instead doubles the number of available unique combinations. This means that if we use 3 bits to describe each pixel value, we get not 6 (2×3) but 8 (2³) possible combinations. 5 bits can produce 32, and 8 bits grouped together give 256 different numbers.
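To make that concrete, here is a minimal Python sketch (nothing image-specific, just the arithmetic) printing how many unique values each bit depth can hold:

    # Each extra bit doubles the number of unique values a group of bits can represent
    for bits in (1, 3, 5, 8, 16):
        print(bits, "bits ->", 2 ** bits, "values")  # 1 -> 2, 3 -> 8, 5 -> 32, 8 -> 256, 16 -> 65536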

A group of 8 bits is typically called a byte, which is another standard unit computers use to store data. This makes it convenient (although not necessary) to assign a whole byte to describe the colour of a pixel, and one byte per channel is indeed the most common choice.

This is true for the majority of digital images in existence today, giving us a precision of 256 possible gradations from black to white – in either a monochrome picture or each of the Red, Green and Blue channels of an RGB one – and is what computer graphics calls an 8-bit image, where bit depth is traditionally measured per colour component.

In consumer electronics the same 8-bit RGB image would be called 24-bit (True Colour), simply because the bits of all three channels are counted together. An 8-bit RGB image can reproduce 16,777,216 (256³) different colours, a colour fidelity normally sufficient to avoid any visible artifacts.
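The per-channel versus per-pixel counting is just the same numbers multiplied out; a quick check in Python:

    # 8 bits per channel, three channels -> 24 bits per pixel
    per_channel = 2 ** 8        # 256 gradations for each of R, G and B
    print(per_channel ** 3)     # 16,777,216 possible colours
    print(2 ** 24)              # the same number, counted as 24 bits per pixel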

Moreover, regular consumer monitors are physically not designed to display more gradations (in fact they may be limited to even fewer, such as 6 bits per channel), so why would anyone bother wasting disk space or memory on higher bit depths?

Special cases

The most basic example of 256 gradations of an 8-bit image not being enough is heavy colour correction, which can quickly produce artifacts known as banding. Rendering to a higher bit depth solves this issue, and 16-bit formats, with their 65,536 gradations of grey, are normally used for the purpose.

But even 10 bits, as in the Cineon/DPX format, gives four times the precision of the standard 8 (1,024 gradations against 256). Going above 2 bytes per channel, on the other hand, becomes impractical, as file size increases proportionally with bit depth.
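The banding effect is easy to reproduce numerically. The sketch below (using numpy; the dark ramp and the gain of 4 are arbitrary illustration values) stores the same smooth gradient at 8 and 16 bits, applies the same heavy correction to both, and counts how many distinct steps survive:

    import numpy as np

    ramp = np.linspace(0.0, 0.25, 1920)         # a dark, smooth gradient in the 0-1 range

    stored_8 = np.round(ramp * 255) / 255       # simulate saving to an 8-bit integer file
    stored_16 = np.round(ramp * 65535) / 65535  # simulate saving to a 16-bit integer file

    graded_8 = np.clip(stored_8 * 4, 0, 1)      # heavy colour correction applied afterwards
    graded_16 = np.clip(stored_16 * 4, 0, 1)

    print(len(np.unique(graded_8)))             # ~65 distinct steps -> visible banding
    print(len(np.unique(graded_16)))            # ~1,920 (one per sample) -> smooth gradient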

But regardless of the number of gradations (2, 4, 256, 65,536 and so on), as long as we are using an integer file format, these numbers all describe values within the range from 0 to 1. For instance, the middle grey value in the sRGB colour space (the colour space of a regular computer monitor – not to be confused with the RGB colour model) is around 0.5, not 128, and white is 1, not 255.
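In other words, an integer code value only makes sense relative to the maximum its bit depth allows. A minimal sketch of that normalisation:

    # Convert a stored integer code value to the normalised 0-1 range
    def normalise(code_value, bit_depth):
        return code_value / (2 ** bit_depth - 1)

    print(normalise(255, 8))      # 1.0 -> white
    print(normalise(128, 8))      # ~0.502 -> roughly middle grey
    print(normalise(65535, 16))   # 1.0 -> white again, whatever the bit depth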

The same source image rendered to 8-bit integer, 16-bit integer and 16-bit float, with two different colour corrections applied. Notice the colour banding in the 8-bit version and the clipped highlights in the integer versions

It is only because the 8-bit representation is so popular that many programs measure colour in it by default. But this is not how the underlying maths works, and it can cause problems when trying to make sense of that maths. For example, take the Multiply blending mode: it's easy to learn empirically that it preserves the colour of the underlying layer in the white areas of the overlay and darkens the picture under the dark areas – but why is it called Multiply? What exactly is happening?

With black it makes sense – you multiply the underlying colour by 0 and get 0, which is black – but why would it preserve white areas if white is 255? Multiplying something by 255 should make it much brighter, right? Well, it's because white is 1, not 255 (nor is it 4, or 16, or 65,536...), and so it goes with the rest of the CG maths: white means 1.
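Seen in normalised values, the Multiply blend really is just a per-channel multiplication. A small numpy sketch (the sample colours are arbitrary):

    import numpy as np

    def multiply_blend(base, overlay):
        # Both inputs are normalised 0-1 channel values
        return base * overlay

    base = np.array([0.25, 0.5, 0.8])
    print(multiply_blend(base, np.array([1.0, 1.0, 1.0])))  # white overlay -> base unchanged
    print(multiply_blend(base, np.array([0.0, 0.0, 0.0])))  # black overlay -> black
    print(multiply_blend(base, np.array([0.5, 0.5, 0.5])))  # mid grey -> base darkened by half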

The floating point

So far we've talked about how bit depth works in integer formats, where it defines only the number of possible variations between 0 and 1. Floating point formats work in a different way. Bit depth does pretty much the same thing here – it defines the colour precision – but the numbers stored can be anything and may well lie outside the 0-to-1 range: brighter than white (above 1) or darker than black (negative numbers).

Internally this works by distributing the values on what is effectively a logarithmic scale, and it requires higher bit depths to achieve the same fidelity in the usually most important [0, 1] range. Normally at least 16 or even 32 bits per channel are used to represent floating point data with enough precision. At the cost of memory usage, this allows High Dynamic Range (HDR) imagery to be represented, gives additional freedom in compositing, and makes it possible to store arbitrary numerical data in image files, such as a World Position pass.
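Numpy's float16 type is a convenient stand-in for a 16-bit ("half") float channel and illustrates both points: it happily stores out-of-range values, while its precision is densest around the 0-1 range and thins out as the numbers grow:

    import numpy as np

    print(np.float16(2.5), np.float16(-0.3))  # brighter than white, darker than black: both storable

    # The distance to the next representable value grows with the magnitude
    print(np.spacing(np.float16(1.0)))        # ~0.001 near the 0-1 range
    print(np.spacing(np.float16(1000.0)))     # ~0.5 for large values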

It is natural for a 3D renderer to work in floating point internally, so most often the risk of clipping arises when choosing a file format to save the final image. Having said that, even when dealing with files you've been given at a low bit depth, or with already clipped integer files, there are certain benefits to increasing their colour precision inside the compositing software. (Nuke, for example, automatically converts any imported source into a 32-bit floating point representation internally.)

The statement that the 256 gradations of 8 bits are normally enough is also illustrated here, through comparison with 16 bits

Such a conversion won't add any extra detail or quality to the existing data, but the results of your further manipulations will suffer less from quantisation errors (and gain a wider luminance range if you also convert from integer to float). Moreover, you can quickly fake HDR data by converting an integer image to float and gaining up the highlights (bright areas) of the picture.

This won't give you a real replacement for properly acquired HDR, but it should suffice for many purposes, such as diffuse IBL (Image Based Lighting). In other words, regardless of the output requirements, if you do your compositing in at least 16 bits, the final downsampling and clipping for delivery won't be an issue.
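A minimal numpy sketch of that fake-HDR trick (the 0.8 threshold and 4x gain are arbitrary illustration values, and ldr_u8 stands in for any 8-bit RGB image loaded as an array):

    import numpy as np

    ldr_u8 = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)  # placeholder 8-bit image

    img = ldr_u8.astype(np.float32) / 255.0           # normalise into the 0-1 float range

    highlights = np.clip(img - 0.8, 0.0, None) / 0.2  # mask of the brightest areas
    fake_hdr = img + highlights * 4.0                 # push them well above 1.0

    print(fake_hdr.max())   # can now exceed 1.0, which an integer format could not hold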

It's important to have a clear understanding of bit depth and the integer/float distinction in order to deliver renders of adequate quality and not get caught out during the post-processing stage later. Read up on the file formats and options available in your software. For instance, 16 bits can refer to both integer and floating point formats, which may be distinguished as Short (integer) and Half (float) in Maya.

As a general rule of thumb, use 16 bits if you plan to do extensive colour grading/compositing, and make sure you render to a floating point format to avoid clipping if any out-of-range values need to be preserved (such as detail in the highlights, negative values in a Z-depth pass, or simply because you use a linear workflow). 16-bit OpenEXR files can be considered a good colour precision/file size compromise suitable for most cases.
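The difference this rule of thumb guards against is easy to see in a few lines of numpy: out-of-range values survive a half-float representation but are clipped away once quantised to an integer format:

    import numpy as np

    render = np.array([-0.2, 0.5, 1.0, 3.7], dtype=np.float32)  # e.g. negative Z-depth, bright highlight

    as_half = render.astype(np.float16)                          # keeps -0.2 and 3.7
    as_uint16 = np.round(np.clip(render, 0.0, 1.0) * 65535).astype(np.uint16)

    print(as_half)    # [-0.2  0.5  1.   3.7]
    print(as_uint16)  # [    0 32768 65535 65535] -> clipped at black and white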

Words: Denis Kozlov

Denis Kozlov is a CG generalist with 15 years' experience in the film, TV, games, advertising and education industries. He is currently working in Prague as a VFX supervisor. This article originally appeared in 3D World issue 182.
