Codec is short for the compressor-decompressor (or coder-decoder) of digital video. If only it were that simple. As with any technology that evolves quickly, there is more to the subject.

This article deals with the codecs used for recording video with digital single-lens reflex (DSLR) or mirrorless still cameras that have video capabilities. It does not cover delivery codecs for sending videos to YouTube and other forms of playback.

Capture codecs

These are the codecs a camera uses to compress the digital image while shooting video. Two characteristics define a capture codec: bit depth and chroma subsampling.

Bit depth

Each frame of video has information about the three colors that make it up, red, green and blue. The more information per pixel the higher the bit depth. Higher bit depths are larger files and give editors more flexibility in the color and exposure changes they can make in post-production.

Video 101: Codecs
Data size comparison of 8, 10 and 12-bit depths.
  • An 8-bit video frame has color information that spans 256 steps, or tones, for each color: red, green and blue.
  • A 10-bit video frame has 1,024 tones per color, four times as many as 8-bit, and needs 25 percent more data per pixel.
  • A 12-bit video frame has 4,096 tones per color, sixteen times as many as 8-bit, and needs 50 percent more data per pixel.
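The tone counts above follow directly from the bit depth: each added bit doubles the number of steps per channel. A quick Python sketch:

```python
# Number of distinct tones per color channel for a given bit depth.
def tones_per_channel(bit_depth):
    return 2 ** bit_depth

for bits in (8, 10, 12):
    print(f"{bits}-bit: {tones_per_channel(bits)} tones per channel")
# 8-bit: 256, 10-bit: 1024, 12-bit: 4096
```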

The higher the bit depth, the smoother the image looks on screen. Lower bit depths can show banding in smooth gradients, for instance.

Chroma subsampling

Chroma subsampling reduces the color information without changing the luminance data. This type of compression lowers the bandwidth and the file size without compromising image quality too much. How does chroma subsampling work?

The human eye is more sensitive to brightness values than it is to color. When color television was developed, one of the hurdles to overcome was the bandwidth needed for the broadcast signal to carry both the brightness data of the image and its color. These two parts of the image are called luma and chroma.

In the illustrations below, luma and chroma are each represented by two 4-pixel blocks. Because human vision is more sensitive to luma (brightness) than to chroma (color), capture codecs can discard some chroma samples with little visible loss.
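As an illustration of the luma/chroma split, here is a minimal Python sketch that converts an RGB pixel to Y (luma) and Cb/Cr (chroma) using the ITU-R BT.601 weights; HD video typically uses the slightly different BT.709 weights, but the principle is the same:

```python
# Split an RGB pixel into luma (Y) and chroma (Cb, Cr) components.
# The weights are the ITU-R BT.601 coefficients, used here only to
# illustrate the separation of brightness from color.
def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b  # brightness (luma)
    cb = (b - y) * 0.564                   # blue-difference chroma
    cr = (r - y) * 0.713                   # red-difference chroma
    return y, cb, cr

# A pure gray pixel carries brightness but no color:
# its luma is 128 and both chroma values are zero.
print(rgb_to_ycbcr(128, 128, 128))
```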

Two 4-pixel luma (brightness) blocks
Two 4-pixel chroma (color) blocks.


Since brightness is most important, every pixel in the two 4-pixel luma blocks is sampled: four samples per block. Sampling every pixel in the two 4-pixel chroma blocks as well results in the 4:4:4 scheme. This is the best-quality option; it holds the most data and produces the largest files, and because no chroma is discarded it is effectively uncompressed. It is the best choice for green screen work and high-quality editing.

Video shot with 4:4:4 is not compressed. It is the highest data rate and file size.


4:2:2 compresses data by sampling only half of the chroma pixels. Every pixel still keeps its own luma sample; each sample in the two chroma blocks represents two pixels.

Video shot in 4:2:2 has full luminance information and half the color information of 4:4:4.


4:2:0 has the highest compression, the smallest file size and the least color information of the three.

4:2:0 still has all of the luminance data but one-quarter of the chroma.
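The three schemes can be compared numerically. In J:a:b notation applied to a 4-wide, 2-tall block of pixels, J is the block width, a is the number of chroma samples in the top row, and b is the number of additional chroma samples in the bottom row. A small Python sketch of the resulting data sizes:

```python
# Sample counts for a 4x2 pixel block under each subsampling scheme.
# a = chroma samples in the top row, b = chroma samples in the bottom row.
def samples_per_block(a, b):
    luma = 8                 # every one of the 8 pixels keeps its luma sample
    chroma = 2 * (a + b)     # Cb and Cr are each sampled (a + b) times
    return luma + chroma

full = samples_per_block(4, 4)  # 4:4:4 baseline
for name, a, b in [("4:4:4", 4, 4), ("4:2:2", 2, 2), ("4:2:0", 2, 0)]:
    total = samples_per_block(a, b)
    print(f"{name}: {total} samples, {total / full:.0%} of 4:4:4")
```

This matches the captions above: 4:2:2 carries two-thirds of the data of 4:4:4 (half the chroma), and 4:2:0 carries half (one-quarter of the chroma).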

Uses of chroma subsampling

4:4:4 is the best version for editing, although it is reserved for very high-end work. Most video is shot in 4:2:2 and edited with that compression. Why? Because most of the video people see, whether streamed, played from Blu-ray Disc or even digitally projected in a theater, is delivered in 4:2:0.

The other primary use of 4:4:4 is in computer displays and in gaming applications and consoles.

The compressed codecs lose very little perceived quality to the human eye. Artifacts from chroma subsampling are most noticeable when text is layered over a flat color.

Compression: Intraframe and interframe

These two primary types of compression either compress the whole scene in every frame of video, or compress only the parts of the frame that change, using periodic keyframes as references.


Intraframe compresses the entire scene in each frame of video.

Intraframe stores the whole scene in every frame. It samples areas of the frame, then compresses them together. This makes bigger files, but they are much easier to edit because the computer does not have to refer to keyframes to build each frame.


Interframe video compression throws away parts of a scene that do not move. It relies on keyframes and a fast computer to reassemble the individual frames for editing.

Interframe stores only the parts of the picture that change from frame to frame. It relies on periodic keyframes, full pictures stored every few frames, for reference. Interframe files are smaller but require a lot of computing power: to rebuild each frame, the computer has to find the keyframe and assemble the changed parts with it. This can be very processor intensive. When editing on a laptop or older computer, the savings in file size might not offset the extra time this takes.
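The keyframe-plus-differences idea can be sketched in a few lines of Python: store the first frame in full, then only the pixels that changed. Real interframe codecs use motion estimation and block transforms, so this shows the concept only, not an actual codec.

```python
# Toy interframe compression: a full keyframe plus, for each later
# frame, only the (position, new_value) pairs of pixels that differ.
def encode(frames):
    keyframe = frames[0]
    deltas = []
    for frame in frames[1:]:
        changed = [(i, new) for i, (old, new) in
                   enumerate(zip(keyframe, frame)) if old != new]
        deltas.append(changed)
    return keyframe, deltas

# Decoding rebuilds each frame by applying its delta to the keyframe,
# which is the extra work the editing computer has to do.
def decode(keyframe, deltas):
    frames = [keyframe]
    for delta in deltas:
        frame = list(keyframe)
        for i, value in delta:
            frame[i] = value
        frames.append(frame)
    return frames

frames = [[1, 1, 1, 1], [1, 2, 1, 1], [1, 1, 3, 1]]
keyframe, deltas = encode(frames)
print(deltas)                                # only the changed pixels are stored
print(decode(keyframe, deltas) == frames)    # reconstruction is lossless
```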


Containers

Containers, also called wrappers, are identified by the extension at the end of a video file; .mov, .avi, .mp4 and .wmv are common extensions. Manufacturers decide which codec to put in which wrapper. Codecs are not necessarily compatible with all cameras and editing suites. Still cameras, both digital single-lens reflex and mirrorless, with a few exceptions use common codecs that work on both Mac and Windows computers. In general, .mov files carry more data than .mp4.

The format of a video capture is the combination of its codec and its container.

Video 101

Still photographers are using the video features in their cameras more and more. Like me, they probably wonder what all of the video jargon in their camera's menus and manuals means. Video 101 is written to help answer those questions.