… for Codecs & Media

Tip #350: Isaac Newton’s Color Wheel

The color wheel is more than 350 years old!

A modern color wheel, modeled after Sir Isaac Newton’s initial work.

Topic $TipTopic

I was reading Blain Brown's excellent book, Digital Imaging, earlier this week and discovered that the color wheel we use virtually every day was invented by Isaac Newton in 1666.

It started with Newton passing light through a prism to reveal the spectrum of light. While the spectrum of light is linear, Newton's insight was to connect the two ends to form a circle. This made it much easier to see the relationships between the primary colors (red, green, and blue) and the secondary colors (yellow, cyan, and magenta).

His experiments led to the theory that red, yellow, and blue were the primary colors from which all other colors are derived. While that's not entirely true, it still influenced the color wheels developed in the early 1800s, as well as the color wheel used today. Add to his initial work the secondary colors of violet, orange, and green (those that result from mixing the primary colors) and the color wheel begins to take shape.


Two of the secondary colors, yellow and cyan, exist in the color spectrum and are formed by combining two primary colors. Magenta is also formed by combining two primaries, red and blue, but those sit at opposite ends of the color spectrum, which means that magenta, while certainly a color, does not appear in the spectrum at all!
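Newton's trick of joining the two ends of the spectrum can be sketched as simple hue arithmetic. This is a toy model, assuming the standard additive layout, where primaries sit 120 degrees apart and each secondary sits directly opposite a primary:

```python
# The RGB color wheel as hue angles in degrees, assuming the standard
# additive layout: primaries 120 degrees apart, secondaries opposite them.
PRIMARIES = {"red": 0, "green": 120, "blue": 240}
SECONDARIES = {"yellow": 60, "cyan": 180, "magenta": 300}

def complement(hue_degrees):
    """Rotate halfway around the wheel to find the complementary hue."""
    return (hue_degrees + 180) % 360

# Each primary's complement is a secondary: red <-> cyan, green <-> magenta,
# blue <-> yellow. These opposite-side relationships only exist because the
# two ends of the linear spectrum were joined into a circle.
for name, hue in PRIMARIES.items():
    print(name, "->", complement(hue))
```

Note that magenta, at 300 degrees, only exists because of the wrap-around: it bridges the two ends of the spectrum that the circle joins together.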


… for Codecs & Media

Tip #374: Constant Bitrate vs. Constant Quality

Two new encoding options for Blackmagic RAW media.

Topic $TipTopic

This article, written by Lewis McGregor, first appeared in PremiumBeat. Let's take a quick look at the two new encoding options in Blackmagic RAW.

  • Constant Bitrate. This keeps your file sizes predictable and manageable, because your media will never exceed the selected data rate. While Constant Bitrate is a surefire way to make sure file sizes stay as advertised, it can cause problems when the footage being captured needs more data than the cap allows, for example keeping all the details of a busy scene clear.
  • Constant Quality. This uses a variable bitrate with no upper data limit. This means that if you're filming a wedding and the guests start throwing confetti and rice, and more objects come into focus, the bitrate will adjust to account for the increase in complex frame information, maintaining the overall quality of the entire image. Of course, this comes with larger file sizes that you can't predict.
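The trade-off between the two modes can be illustrated with a toy model (this is not Blackmagic's actual algorithm, just a sketch, where "complexity" stands in for how many bits a frame needs):

```python
# Toy rate-control model: a hypothetical sketch, not Blackmagic's algorithm.
# Each number is how many bits a frame would need for perfect quality.

def constant_bitrate(complexities, cap):
    """Every frame gets at most `cap` bits: predictable size, variable quality."""
    bits = [min(c, cap) for c in complexities]
    quality = [b / c for b, c in zip(bits, complexities)]  # 1.0 = no loss
    return sum(bits), quality

def constant_quality(complexities):
    """Every frame gets what it needs: constant quality, unpredictable size."""
    bits = list(complexities)  # no upper limit on the data rate
    return sum(bits), [1.0] * len(bits)

# A quiet scene followed by a busy confetti shot that needs far more bits.
frames = [100, 120, 110, 500]

cbr_size, cbr_quality = constant_bitrate(frames, cap=150)
cq_size, cq_quality = constant_quality(frames)

print(cbr_size, cbr_quality)  # 480 bits total; the busy frame drops to 0.3 quality
print(cq_size, cq_quality)    # 830 bits total; every frame stays at 1.0
```

The cap makes the Constant Bitrate file size predictable, but the confetti frame loses most of its detail; Constant Quality keeps every frame intact at the cost of an unpredictable total.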

… for Codecs & Media

Tip #347: Codecs – Explained (Part 1)

Always something new to learn about codecs.

Topic $TipTopic

I’ve used the term “codec” for years. Still, there’s always something new to learn. For example, according to Wikipedia, “A codec is a device or computer program which encodes or decodes a digital data stream or signal. Codec is a portmanteau of coder-decoder.”

NOTE: A “portmanteau” is a linguistic blend of words, in which parts of multiple words or their phonemes (sounds) are combined into a new word. (Right, I didn’t know that either.)

“In the mid-20th century,” Wikipedia continues, “a codec was a device that coded analog signals into digital form using pulse-code modulation (PCM). Later, the name was also applied to software for converting between digital signal formats, including compander functions.

“In addition to encoding a signal, a codec may also compress the data to reduce transmission bandwidth or storage space. Compression codecs are classified primarily into lossy codecs and lossless codecs.

NOTE: See Tip #348 for a description of lossy vs. lossless.

“Two principal techniques are used in codecs, pulse-code modulation and delta modulation. Codecs are often designed to emphasize certain aspects of the media to be encoded. For example, a digital video (using a DV codec) of a sports event needs to encode motion well but not necessarily exact colors, while a video of an art exhibit needs to encode color and surface texture well. Audio codecs for cell phones need to have very low latency between source encoding and playback. In contrast, audio codecs for recording or broadcast can use high-latency audio compression techniques to achieve higher fidelity at a lower bit-rate.”

Many multimedia data streams contain both audio and video, and often some metadata that permits synchronization of audio and video. Each of these three streams may be handled by different programs, processes, or hardware, but for the multimedia data streams to be useful in stored or transmitted form, they must be encapsulated together in a container format, such as MXF or QuickTime.

Here’s the original Wikipedia article.
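The coder-decoder pairing the definition describes can be sketched in a few lines. This is a deliberately tiny lossless scheme (run-length encoding), a toy stand-in for real codecs like ProRes or H.264, not how they actually work:

```python
# A minimal "codec" in the Wikipedia sense: one function encodes a data
# stream, another decodes it back. Run-length encoding is a simple lossless
# scheme that collapses runs of repeated values.

def encode(data):
    """Collapse runs of repeated values into [value, count] pairs."""
    runs = []
    for value in data:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return runs

def decode(runs):
    """Expand [value, count] pairs back into the original stream."""
    return [value for value, count in runs for _ in range(count)]

stream = ["A", "A", "A", "B", "B", "A"]
packed = encode(stream)          # [['A', 3], ['B', 2], ['A', 1]]
assert decode(packed) == stream  # lossless: the round-trip is exact
```

Six items shrink to three pairs here, and decoding reproduces the input exactly, which is the defining property of a lossless codec.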

… for Codecs & Media

Tip #348: Codecs – Explained (Part 2)

Lossy is smaller, Lossless is better

Topic $TipTopic

As we learned in Tip #347, there are two types of codecs: lossless and lossy. In this tip, I want to explain the difference. For this, we’ll turn to a Wikipedia article.


Lossless codecs are often used for archiving data in a compressed form while retaining all information present in the original stream. If preserving the original quality of the stream is more important than eliminating the correspondingly larger data sizes, lossless codecs are preferred. This is especially true if the data is to undergo further processing (for example editing) in which case the repeated application of processing (encoding and decoding) on lossy codecs will degrade the quality of the resulting data such that it is no longer identifiable (visually, audibly or both). Using more than one codec or encoding scheme successively can also degrade quality significantly. The decreasing cost of storage capacity and network bandwidth has a tendency to reduce the need for lossy codecs for some media.


Many popular codecs are lossy. They reduce quality in order to maximize compression. Often, this type of compression is virtually indistinguishable from the original uncompressed sound or images, depending on the codec and the settings used. The most widely used lossy data compression technique in digital media is based on the discrete cosine transform (DCT), used in compression standards such as JPEG images, H.26x and MPEG video, and MP3 and AAC audio. Smaller data sets ease the strain on relatively expensive storage sub-systems such as non-volatile memory and hard disk, as well as write-once-read-many formats such as CD-ROM, DVD and Blu-ray Disc. Lower data rates also reduce cost and improve performance when the data is transmitted.
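Where DCT-based codecs actually discard information is the quantization stage. Here is a toy illustration of that single step (real codecs quantize transform coefficients, not raw samples, so this is only a sketch of the principle):

```python
# A toy lossy step: quantization, the stage where DCT-based codecs such as
# JPEG and H.26x discard information. Coarser steps mean smaller data but
# larger error, and the original values cannot be recovered exactly.

def quantize(samples, step):
    """Snap each sample to the nearest multiple of `step` (lossy)."""
    return [round(s / step) * step for s in samples]

samples = [12, 57, 133, 201]

coarse = quantize(samples, step=16)
print(coarse)  # [16, 64, 128, 208] - the fine detail is gone for good

# Decoding cannot restore the input: the information was discarded, which
# is why repeated lossy processing in editing keeps degrading quality.
assert coarse != samples
```

Only 16 distinct values per 256-level range survive a step of 16, which is exactly the size-versus-fidelity trade the article describes.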

… for Codecs & Media

Tip #321: Blend Modes in Brief

Blend modes create textures.

Blend mode options from Apple Motion.
Blend modes combine textures between clips. They are found in all modern NLEs.

Topic $TipTopic

Iain Anderson, at MacProVideo, wrote this up in more detail. But I liked his summary of blend modes, which I have modified from his article.

Blend modes allow us to combine textures, and sometimes colors, between clips or elements that are stacked vertically on top of each other.

Whether you are in Photoshop or Premiere, Final Cut or Motion, blend modes work the same way. These are arithmetical expressions, with nothing to adjust. You either like the effect or you don’t.

NOTE: If you don’t like the effect, tweak either the gray-scale or color value of the top clip and the results will change.

All these settings should be applied to the top clip. It will be the only clip that changes. Here’s what the settings mean.

  • Normal. This leaves the top clip’s image unaltered.
  • Subtract, Darken, Multiply, Color Burn, and Linear Burn. These combine clips based upon darker grayscale values. For example, the top clip will darken clips below it. Multiply usually works best for adding darker areas.

NOTE: If nothing changes when you apply this setting, your top clip is too light. Darken it.

  • Add, Lighten, Screen, Color Dodge, and Linear Dodge. These combine textures between clips based upon lighter grayscale values. Screen usually works best for adding bright elements like sparks and flame.

IMPORTANT: Avoid using Add. It creates highlights that exceed legal white values. Screen does not.

  • Overlay, Soft Light, Hard Light, Vivid Light, Linear Light, Pin Light, and Hard Mix. These combine textures based on mid-tone grayscale values, often in a way that increases contrast. Overlay usually works best, though more often these days, I find myself using Soft Light.

NOTE: For better results, reduce opacity and play with the grayscale settings.

  • Difference and Exclusion. These mess with color values to create very hallucinogenic effects. What’s happening is that color values in the top clip are mathematically removed from the clips below in slightly different ways. Also useful for spotting the difference between two clips.
  • Stencil Alpha and Stencil Luma. These insert the background image into the foreground image. Use Stencil Alpha, provided the foreground has an alpha channel. If it doesn’t, use Stencil Luma, but the results may not be as good.
  • Silhouette Alpha and Silhouette Luma. These cut a hole into the background image based upon the foreground image shape. Again, use Silhouette Alpha if the foreground image has an alpha channel.
  • Behind. This displays the clips below the current effect. It is used when you are also using Stencil Alpha to insert one image into another.

The bottom choices will vary by application, and are covered in the Help files.
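Since blend modes are fixed arithmetic with nothing to adjust, a few of them can be written out directly. These are the standard formulas for separable blend modes, applied per channel on 0.0-1.0 values:

```python
# Standard per-channel blend mode formulas, on 0.0-1.0 channel values
# (top = the upper clip, bottom = the clip below it).

def multiply(top, bottom):    # darkens: never brighter than either input
    return top * bottom

def screen(top, bottom):      # lightens, but can never exceed 1.0
    return 1 - (1 - top) * (1 - bottom)

def add(top, bottom):         # lightens, and CAN exceed legal white (1.0)
    return top + bottom

def difference(top, bottom):  # identical pixels go to black
    return abs(top - bottom)

top, bottom = 0.75, 0.5
print(multiply(top, bottom))    # 0.375 - darker than both inputs
print(screen(top, bottom))      # 0.875 - bright, still legal
print(add(top, bottom))         # 1.25  - past legal white, hence the warning
print(difference(top, top))     # 0.0   - matching clips cancel to black
```

This also shows why the tip warns against Add: two legal values can sum past 1.0, while Screen's formula can never exceed it.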