… for Codecs & Media

Tip #590: What is a Proxy File?

 Larry Jordan – LarryJordan.com

Proxy files allow editing on slower systems, using less storage.

Proxy files are designed to be smaller, easy-to-edit media files compared to camera-native footage. But what do “easy-to-edit” and “smaller” actually mean?


There are two ways to compress video:

  • GOP (also called “inter-frame compression”) compresses images in groups. This creates very small file sizes, but requires more CPU performance to edit because the computer needs to deconstruct each group before it can edit a specific frame.
  • I-frame (also called “intra-frame compression”) compresses each frame individually. This creates larger files, but even slower systems can edit them smoothly because each frame is immediately available for editing.

All proxy files use I-frame compression, which decreases the load on the CPU and speeds editing.


Proxy files contain 1/4 the pixels of the camera-native media, which decreases the load on storage capacity and bandwidth. For example, a proxy file for a 1920 x 1080 frame is only 960 x 540 pixels: half the width and half the height. This reduces storage bandwidth by 75%.
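The arithmetic is easy to verify in a few lines of Python (a minimal sketch; the function name is illustrative, not from any editing application):

```python
def proxy_size(width, height):
    """Quarter-resolution proxy: half the width and half the height."""
    return width // 2, height // 2

native_w, native_h = 1920, 1080
proxy_w, proxy_h = proxy_size(native_w, native_h)   # 960, 540

# 1/4 the pixels means 75% less data to store and move per frame.
savings = 1 - (proxy_w * proxy_h) / (native_w * native_h)
print(f"{proxy_w} x {proxy_h}, {savings:.0%} fewer pixels")  # 960 x 540, 75% fewer pixels
```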

However, this reduced resolution also means that proxy files should not be used for final output because they don’t have the full resolution of the master file.


Both Premiere and Final Cut support seamless proxy editing, which is recommended for larger frame sizes and multicam editing. Then, for final output, switch from the proxy to high-quality files to get the best images from your project.

… for Random Weirdness

Tip #548: What Is ISO?

Larry Jordan – LarryJordan.com

ISO affects gain after the image is captured.

This tip, presented by Chris Lee, first appeared on PetaPixel.com. This is a summary.

ISO is probably THE most misunderstood term in digital photography. The confusion stems partly from people equating ISO directly with film speed, and partly from some useful-but-misleading simplifications that get shared quite frequently. As a result, people often repeat two bits of misinformation:

  • ISO is one way to increase your exposure without changing shutter speed or aperture.
  • ISO “increases your sensor’s sensitivity to light.”

As Lee explains in the video above, neither of these things is technically true, though both ARE useful ways to think about ISO when you’re out shooting.

ISO is a gain knob: electrical amplification applied after your camera is done gathering light. It has no impact on how much light your camera sensor’s photosites can gather during a given exposure, and therefore has no direct connection to exposure itself, despite being part of “the exposure triangle.”

At the most basic level—and Lee plans to do a follow-up explaining more in-depth concepts like ISO invariance and how different cameras handle this setting—ISO is the level of electrical amplification done to the analog “signal” collected by your image sensor before it’s sent to the analog to digital converters (ADCs), eventually producing an image.
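That gain-knob idea can be sketched in a few lines (a hypothetical illustration; the function name and numbers are not from any camera's firmware):

```python
def apply_iso_gain(photosite_signal, iso, base_iso=100):
    """Scale the analog signal read off the sensor by an ISO-dependent
    gain. Doubling ISO doubles the gain (one stop brighter), but the
    light actually gathered (photosite_signal) never changes."""
    gain = iso / base_iso
    return [sample * gain for sample in photosite_signal]

captured = [0.1, 0.25, 0.4]                  # light gathered during the exposure
base = apply_iso_gain(captured, iso=100)     # unity gain: image unchanged
boosted = apply_iso_gain(captured, iso=400)  # 4x gain: brighter (and noisier)
assert boosted[1] == 1.0                     # 0.25 amplified 4x
```

Note that `captured` is identical in both calls: raising ISO brightened the result without gathering any more light.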


Visit the link above and watch Chris Lee’s video. In 12 minutes you’ll understand what ISO is and how it affects exposure.

… for Codecs & Media

Tip #565: Frame Rate Does Not Create Motion Blur

Motion blur is based on shutter speed, not frame rate.

A frequent email from filmmakers asks how to change their project’s frame rate to make it more “cinematic.” Specifically, they are looking to convert to 24 fps. The problem is that changing the frame rate after the fact will only make a video look worse.

Motion blur, which is a slight blurring of the edges of a moving object, is caused by something moving while the shutter is open. If the shutter speed is slow, meaning the shutter is open for a longer time, the motion blur is exaggerated. If the shutter speed is fast, the motion blur is minimized.
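The relationship is simple enough to sketch: blur length is the distance an object travels while the shutter is open (the speed and shutter values below are hypothetical examples):

```python
def blur_length_px(speed_px_per_sec, shutter_speed_denom):
    """Blur streak length = distance traveled during the exposure.
    Exposure time is 1/shutter_speed_denom seconds."""
    return speed_px_per_sec / shutter_speed_denom

speed = 960  # hypothetical object crossing half a 1920-px frame per second
print(blur_length_px(speed, 48))    # 20.0 px of blur: slow shutter, soft edges
print(blur_length_px(speed, 500))   # 1.92 px: fast shutter, nearly frozen
```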

Changing the frame rate after an image is recorded won’t affect motion blur. Motion blur is determined at the moment the original image is recorded.

Changing the frame rate after a clip is recorded can only be done by removing or adding frames. For example, changing the frame rate from 30 fps to 24 fps means that every fifth frame of the original media will be removed. There’s no other way to do this: you can’t “reallocate” frames to match a different frame rate; you can only drop or add them.
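The drop pattern can be sketched in a few lines (a hypothetical illustration, not how any particular NLE implements the conversion):

```python
def retime_by_dropping(frames, src_fps=30, dst_fps=24):
    """Convert 30 fps to 24 fps by discarding every fifth frame.
    Nothing is 'reallocated' -- frames are simply thrown away."""
    assert (src_fps, dst_fps) == (30, 24), "sketch only handles 30 -> 24"
    return [f for i, f in enumerate(frames) if (i + 1) % 5 != 0]

one_second = list(range(30))             # frame numbers 0..29
converted = retime_by_dropping(one_second)
print(len(converted))                    # 24 -- frames 4, 9, 14, ... are gone
```

The regular gap every five frames is exactly the stutter described below.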

In the case of dropping frames, this means that the video will have a slight “stutter” every five frames, which will mess with any kind of smooth camera move.

The moral of this story is: shoot the frame rate you need to deliver and don’t change frame rates after the fact.


The web supports any frame rate you can upload, unlike broadcast or cable. There’s no benefit to converting frame rates.

… for Codecs & Media

Tip #577: VoIP Audio is Not High-Quality

Larry Jordan – LarryJordan.com

Codecs are also why our phones work.

We’ve all heard of codecs. These convert audio or video from analog into digital signals and back.

Just as codecs are the heart of digital visual media, they are also at the heart of VoIP, which stands for Voice over IP. This technology is what allows you to connect a telephone to the Internet and have it actually work.

An audio codec works its magic by sampling the audio signal many thousands of times per second. For instance, CD-quality audio is sampled 44,100 times a second. The codec converts each tiny sample into digitized data and compresses it for transmission. When the samples are reassembled, the pieces of audio missing between each sample are so small that, to the human ear, it sounds like one continuous audio signal.

What I learned recently is that the codecs used for VoIP don’t sample at anything like those rates. Rather, they sample at 8,000 samples per second. According to the Nyquist theorem, dividing the sample rate by 2 yields the maximum frequency that sample rate can carry. This means that the maximum high frequency carried by most VoIP systems is 4,000 Hz. This is well below the frequency range of many consonants, such as “S” and “T.”
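The Nyquist arithmetic is simple enough to sketch (the sample rates below are standard figures, not tied to any particular codec):

```python
def nyquist_limit(sample_rate_hz):
    """Per the Nyquist theorem, the highest frequency a sample rate
    can faithfully represent is half that sample rate."""
    return sample_rate_hz / 2

print(nyquist_limit(48_000))  # 24000.0 Hz: professional video audio
print(nyquist_limit(8_000))   # 4000.0 Hz: typical VoIP, below many consonants
```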


In case you were wondering, codecs use advanced algorithms to sample, sort, compress and packetize audio data. CS-ACELP (conjugate-structure algebraic-code-excited linear prediction) is one of the most prevalent algorithms in VoIP. CS-ACELP organizes and streamlines the available bandwidth. Annex B is an aspect of CS-ACELP that creates a transmission rule which basically states: “if no one is talking, don’t send any data.” The efficiency created by this rule is one of the greatest ways in which packet switching is superior to circuit switching.

And, no, that won’t be on the quiz.

… for Visual Effects

Tip #556: Blend Modes in Brief

Larry Jordan – LarryJordan.com

Blend modes create textures.

Blend mode options in Photoshop.
Blend modes combine textures between clips. They are found in all modern NLEs; this list is from Photoshop.

Iain Anderson, at MacProVideo, wrote this up in more detail. But I liked his summary of blend modes, which I have modified from his article.

Blend modes allow us to combine textures, and sometimes colors, between clips or elements that are stacked vertically on top of each other.

Whether you are in Photoshop or Premiere, Final Cut or Motion, blend modes work the same way. These are arithmetical expressions, with nothing to adjust. You either like the effect or you don’t.

NOTE: If you don’t like the effect, tweak either the gray-scale or color value of the top clip and the results will change.

All these settings should be applied to the top clip. It will be the only clip that changes. Here’s what the settings mean.

  • Normal. This leaves the top clip’s image unaltered.
  • Subtract, Darken, Multiply, Color Burn, and Linear Burn. These combine clips based upon darker grayscale values. For example, the top clip will darken clips below it. Multiply usually works best for adding darker areas.

NOTE: If nothing changes when you apply this setting, your top clip is too light. Darken it.

  • Add, Lighten, Screen, Color Dodge, and Linear Dodge. These combine textures between clips based upon lighter grayscale values. Screen usually works best for adding bright elements like sparks and flame.

IMPORTANT: Avoid using Add. It creates highlights that exceed legal white values. Screen does not.

  • Overlay, Soft Light, Hard Light, Vivid Light, Linear Light, Pin Light, and Hard Mix. These combine textures based on mid-tone grayscale values, often in a way that increases contrast. Overlay usually works best, though more often these days, I find myself using Soft Light.

NOTE: For better results, reduce opacity and play with the grayscale settings.

  • Difference and Exclusion. These mess with color values to create very hallucinogenic effects. What’s happening is that color values in the top clip are mathematically removed from the clips below in slightly different ways. Also useful for spotting the difference between two clips.
  • Stencil Alpha and Stencil Luma. These insert the background image into the foreground image. Use Stencil Alpha, provided the foreground has an alpha channel. If it doesn’t, use Stencil Luma, but the results may not be as good.
  • Silhouette Alpha and Silhouette Luma. These cut a hole into the background image based upon the foreground image shape. Again, use Silhouette Alpha if the foreground image has an alpha channel.
  • Behind. This displays the clips below the current effect. It is used when you are also using Stencil Alpha to insert one image into another.
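The arithmetical expressions behind a few of these modes can be sketched on normalized 0–1 pixel values (a simplified illustration; real applications work per color channel and handle clamping and color spaces in their own ways):

```python
def multiply(top, bottom):
    """Darkens: the result is never brighter than either input."""
    return top * bottom

def screen(top, bottom):
    """Lightens, but can never exceed 1.0 (legal white)."""
    return 1 - (1 - top) * (1 - bottom)

def add(top, bottom):
    """Lightens, and CAN exceed 1.0 -- the illegal whites the note above warns about."""
    return top + bottom

t, b = 0.6, 0.7
print(multiply(t, b))  # about 0.42 -- darker than both inputs
print(screen(t, b))    # about 0.88 -- lighter than both inputs
print(add(t, b))       # about 1.3  -- brighter than legal white
```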

The bottom choices will vary by application, and are covered in the Help files.

… for Codecs & Media

Tip #559: What is “Frame Reordering” in Apple Compressor?

Larry Jordan – LarryJordan.com

This defaults to on. Leave it that way.

The Frame Reordering option in Apple Compressor.

Ever wonder what Frame Reordering does in Apple Compressor? Me, too. So, I did some research. Here’s what I learned.

Frame reordering allows frames to be stored and decoded in a different order than their display order, which lets more advanced codecs represent movie data more efficiently. For almost all cases, leave this box checked for H.264 encoding.
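A hypothetical GOP makes the idea concrete: the frames themselves don't change, only the sequence the decoder receives them in (the frame labels below are illustrative):

```python
# Display order of a short hypothetical GOP: B-frames sit between
# the I- and P-frames they borrow picture data from.
display_order = ["I0", "B1", "B2", "P3", "B4", "B5", "P6"]

# Decode order: a B-frame references the frame that FOLLOWS it on
# screen, so that frame must arrive and be decoded first.
decode_order = ["I0", "P3", "B1", "B2", "P6", "B4", "B5"]

# Same frames, different sequence -- that is all "frame reordering" means.
assert sorted(display_order) == sorted(decode_order)
assert display_order != decode_order
```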

Important: If you select “Allow frame reordering,” your output file may be more efficiently compressed but may not be compatible with decoders on older hardware. This is the setting to uncheck if someone asks you to deliver your content with “B-frames turned off.”

YouTube’s latest upload specs make no mention of this setting. My suggestion is to leave it on unless you are specifically required to turn it off.

… for Codecs & Media

Tip #560: What is “Clean Aperture” in Apple Compressor?

Larry Jordan – LarryJordan.com

This option should be off for most media today.

Uncheck “Add clean aperture information” when working with digital media.

One of the checkboxes in Apple Compressor is “Add clean aperture information.” What is this and should it be checked or unchecked?

Apple’s Help Files state: Select this checkbox to define clean picture edges in the output file. This property adds information to the output file to define how many pixels to hide, ensuring that no artifacts appear along the edges. When you play the output file in QuickTime Player, the pixel aspect ratio will be slightly altered. This process doesn’t affect the actual number of pixels in the output file—it only controls whether information is added to the file that a player can use to hide the edges of the picture.

For example, this setting would clean up the wavering edges of a VHS tape transfer.

YouTube, on the other hand, prefers this option be unchecked to prevent the video from being cropped during playback.

My suggestion: If you are dealing with digital media with clean edges, uncheck this before starting compression.

… for Apple Motion

Tip #538: What Does “Four Corner” Do?

The “Four Corner” setting determines image distortion.

The Four Corner settings (top) determine image distortion (bottom).

When you select an object in Motion, one of the adjustments you can make is Four Corner. Inspector > Properties > Four Corner allows you to distort whatever you have selected. Here’s how it works.

When you adjust Inspector > Properties > Position, you can modify the position of the frame containing whatever you have selected.

However, when you adjust Inspector > Properties > Four Corner, you can distort the object itself, as illustrated in this screen shot.

Four Corner also provides separate control over the horizontal and vertical position of each corner.


Keep in mind that all these distortion settings can be keyframed to animate a shape over time.

… for Visual Effects

Tip #542: What is Rotoscoping?

Larry Jordan – LarryJordan.com

Rotoscoping allows us to transfer an object onto a different background.

Image in the public domain.
Max Fleischer’s original rotoscope (1915).

Rotoscoping is an animation technique that animators use to trace over motion picture footage, frame by frame, to produce realistic action. Originally, animators projected photographed live-action movie images onto a glass panel and traced over the image. This projection equipment is referred to as a rotoscope, developed by Polish-American animator Max Fleischer. This device was eventually replaced by computers, but the process is still called rotoscoping.

In the visual effects industry, rotoscoping is the technique of manually creating a matte for an element on a live-action plate so it may be composited over another background.

Rotoscoping has often been used as a tool for visual effects in live-action movies. By tracing an object, the moviemaker creates a silhouette (called a matte) that can be used to extract that object from a scene for use on a different background. While blue- and green-screen techniques have made the process of layering subjects in scenes easier, rotoscoping still plays a large role in the production of visual effects imagery. Rotoscoping in the digital domain is often aided by motion-tracking and onion-skinning software. Rotoscoping is often used in the preparation of garbage mattes for other matte-pulling processes.

Rotoscoping has also been used to create a special visual effect (such as a glow, for example) that is guided by the matte or rotoscoped line. A classic use of traditional rotoscoping was in the original three Star Wars movies, where the production used it to create the glowing lightsaber effect with a matte based on sticks held by the actors. To achieve this, effects technicians traced a line over each frame with the prop, then enlarged each line and added the glow.

Learn more at Wikipedia.

… for Visual Effects

Tip #543: What is Planar Tracking?

Larry Jordan – LarryJordan.com

Planar tracking solves problems with lost tracking points.

A planar tracker uses planes and textures to track as opposed to points or groups of pixels. This allows the tracker to stay on track even if your shot contains motion blur or a very shallow depth of field. Here’s a quick overview.

Planar tracking was developed by Allan Jaenicke and Philip McLauchlan at the University of Surrey. They founded Imagineer Systems in 2000 to provide commercial applications for this technology.

“Planar Tracking” gains its name from how the system analyzes the source video. It seeks out different ‘planes’, isolating surfaces that can be followed through a shot. The user can define a plane for the computer to follow, and if tracked successfully, the movement of the ‘tracked’ object can be used to drive the motion of newly composited elements, or inversely to stabilize footage within a frame.

Mocha, by Imagineer Systems, is an example of this technology. Once tracking information is derived from a video clip within Mocha, it can be used in After Effects to animate the motion of any composited layer. Virtual elements can use this tracking information to drive what is essentially a camera move that mimics the original shot, so that the virtual and live-action elements appear to have been shot by the same camera.


While Mocha was the first planar tracker, similar technology can be found in:

  • Nuke, The Foundry
  • SynthEyes, Andersson Technologies
  • Flame, Autodesk
  • fayIN, fyateq

Learn more from BorisFX, who acquired Imagineer Systems, here.