
… for Codecs & Media

Tip #1130: Not All Proxy Files are the Same

Larry Jordan –

Proxies are smaller files than camera masters, but not all proxies are equal.


While working on my webinar this week on Multicam Editing in Adobe Premiere Pro, I started looking into proxy files. By definition, a proxy file is a smaller file than the camera-native file it is derived from. But not all proxy files are created equal.

  • Final Cut Pro X, for example, defaults to ProRes Proxy for all proxy files.
  • Premiere Pro provides the choice of H.264, ProRes or Cineform. (DNx is reserved for 360° VR video.)

H.264 provides the smallest files. Based on my tests, H.264 files are about 1/10 the size of ProRes Proxy, while ProRes Proxy is about 1/10 the size of ProRes 422. But, due to the GOP compression that H.264 uses, these files are less efficient to edit, especially on slower systems.

  • If you are looking for smooth playback and faster rendering, ProRes Proxy is a better choice.
  • If you are looking for the smallest files, for example, to transfer over the web for another editor to work on, H.264 is a better choice. Just remember that H.264 will require a newer computer with a fast CPU to edit effectively.
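The size ratios above are easy to sanity-check with a little arithmetic. This sketch simply applies the rough 1/10 ratios from this tip; the 50 GB master size is a made-up example, not a measured figure.

```python
# Rough proxy-size arithmetic based on the ~1/10 ratios quoted above.
# The starting camera-master size is a hypothetical example.

def estimated_sizes(prores_422_gb: float) -> dict:
    """Estimate proxy sizes from a ProRes 422 master size (in GB)."""
    prores_proxy = prores_422_gb / 10   # ProRes Proxy ≈ 1/10 of ProRes 422
    h264 = prores_proxy / 10            # H.264 ≈ 1/10 of ProRes Proxy
    return {"ProRes 422": prores_422_gb,
            "ProRes Proxy": round(prores_proxy, 2),
            "H.264": round(h264, 2)}

print(estimated_sizes(50.0))  # a hypothetical 50 GB ProRes 422 master
```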

The webinar has more details on all of these.



Tip #1102: What is a Render File?

Larry Jordan –

A render file is a media file calculated based on the effects applied to a source clip.



Since all NLEs are non-destructive editors (leaving original media intact), render files are created when you alter an original clip, create a clip via a generator or use a still image in the Timeline. Those alterations need to be turned into media. That process is called “rendering.”

Essentially, render files are new media files, matching your project/sequence settings, that your NLE uses in place of the original clips. Once rendered, they effectively ‘replace’ the original media: the NLE refers to them instead.

If you make additional changes to a rendered clip, the NLE deletes the old render file and replaces it with a new one.

Generally, render files are retained by the NLE if you delete a clip from the timeline, which saves time in case you put it back in.

To free up storage space, you can delete render files. If a file is needed again, the NLE will re-render it.

Once a project is complete, render files can be deleted. If they are needed in the future, the NLE will re-render them, as well.
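The lifecycle described above amounts to a cache keyed on a clip plus its effects. Here is a minimal sketch of that bookkeeping; all class and method names are invented for illustration, and real NLEs are far more sophisticated.

```python
# Minimal sketch of the render-file lifecycle described above.
# All names here are invented for illustration.

import hashlib

class RenderCache:
    def __init__(self):
        self._files = {}  # render key -> rendered media (stand-in: a string)

    def _key(self, clip: str, effects: tuple) -> str:
        # A render file is identified by the source clip plus its effects.
        return hashlib.sha256(repr((clip, effects)).encode()).hexdigest()

    def render(self, clip: str, effects: tuple) -> str:
        key = self._key(clip, effects)
        if key not in self._files:          # re-render only when needed
            self._files[key] = f"rendered({clip}+{'+'.join(effects)})"
        return self._files[key]

    def change_effects(self, clip: str, old: tuple, new: tuple) -> str:
        # Changing a rendered clip deletes the old render file...
        self._files.pop(self._key(clip, old), None)
        # ...and a new one is created in its place.
        return self.render(clip, new)

cache = RenderCache()
cache.render("beach.mov", ("color_correct",))
cache.change_effects("beach.mov", ("color_correct",), ("color_correct", "blur"))
```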


Tip #1104: Generate More Viewers on YouTube

Larry Jordan –

Simply uploading a video isn’t enough.

The Details section for each video in YouTube Studio is where you enter metadata.


I was reading a blog recently by Richard Tiland about posting videos to YouTube. In it, he wrote: “Uploading your video to YouTube isn’t enough. You need to include metadata so that the site understands what your video is all about.”

His points included:

  • Optimize the title with keywords. Keep it short, but searchable.
  • Add a detailed and accurate description. Length is less important here.
  • Include a transcript to help viewers take in your content without turning up the sound.
  • Organize content using playlists. This helps both viewers and YouTube’s search algorithms.
  • Create a cohesive look to improve branding. Make your videos look like they are coming from the same creative source.
  • Finally, don’t forget the Call to Action. This is the explicit behavior you want the audience to take after watching your video.

Metadata always seems intimidating somehow. But, really, all we are doing is enabling viewers to find our media faster and easier. And that is always a good thing.
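For those who upload programmatically, the checklist above maps onto metadata fields. This sketch builds the request body used by the YouTube Data API v3 `videos.insert` call; the values are placeholders, and the actual upload (authentication, media file) is omitted.

```python
# The metadata checklist above, expressed as the request body for the
# YouTube Data API v3 `videos.insert` call. Values are placeholders only.

video_metadata = {
    "snippet": {
        "title": "Short, keyword-rich title",        # optimized, searchable
        "description": "A detailed, accurate description of the video.",
        "tags": ["editing", "tutorial"],             # additional keywords
        "categoryId": "27",                          # "Education" in the API's category list
    },
    "status": {"privacyStatus": "public"},
}

print(sorted(video_metadata["snippet"]))
```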


Tip #1105: What is Sharpening?

Larry Jordan –

Sharpening means enhancing the contrast of edges in an image.

Original file (left), blurred image, the subtracted image, the final result (right).


This article was written by Henry Guiness; this is a summary.

We have all heard the term “sharpening.” But what is it and how do we apply it? There are three main reasons to sharpen your image: to overcome blurring introduced by camera equipment, to draw attention to certain areas and to increase legibility.

Before I explain how to sharpen, we need to understand that sharpness is subjective.

Sharpness is a combination of two factors: resolution and acutance. Resolution is straightforward and not subjective. It’s just the size, in pixels, of the image file. All other factors equal, the higher the resolution of the image—the more pixels it has—the sharper it can be. Acutance is a little more complicated. It’s a subjective measure of the contrast at an edge. There’s no unit for acutance—you either think an edge has contrast or think it doesn’t. Edges that have more contrast appear to have a more defined edge to the human visual system.

Sharpening, then, is a technique for increasing the apparent sharpness of an image. Once an image is captured, Photoshop can’t magically add more detail: the actual resolution remains fixed. Yes, you can increase the file’s size, but the algorithms any image editor uses to do so will decrease the sharpness of the details.

In the image in the screen shot, the author mimicked the Unsharp Mask effect in Photoshop. The first image is the original file, the second is a blurred copy, the third is the blurred copy subtracted from the original to detect the edges, and the fourth is the original image sharpened using the edge layer. Unsharp masking is the oldest sharpening technique. It subtracts a blurred (unsharp) copy from the original image to detect any edges. A mask is made from this edge detail. Contrast is then increased at the edges and the effect is applied to the original image. While unsharp masking was originally a film technique, it’s now the basis of digital sharpening.
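The blur–subtract–add recipe described above can be sketched in a few lines. To keep it short, this version operates on a single row of grayscale pixel values rather than a full image, using a simple box blur as the “unsharp” copy.

```python
# Unsharp masking on a 1-D row of grayscale pixels: blur, subtract the blur
# to isolate edges, then add the scaled edge detail back to the original.

def box_blur(row, radius=1):
    """Simple box blur: average each pixel with its neighbors."""
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(row, amount=1.0, radius=1):
    blurred = box_blur(row, radius)
    edges = [p - b for p, b in zip(row, blurred)]  # the "unsharp" difference
    # Add scaled edge detail back, clamping to legal 8-bit values.
    return [max(0, min(255, p + amount * e)) for p, e in zip(row, edges)]

# A soft edge: pixel values step from dark to bright.
row = [50, 50, 50, 150, 150, 150]
sharpened = unsharp_mask(row)
```

After sharpening, the dark side of the edge gets darker and the bright side gets brighter, increasing the contrast the eye reads as "sharpness".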

The article continues with sharpening settings and illustrations of both good and bad sharpening, along with links to learn more.


Tip #1083: Pick the Right Version of ProRes

Larry Jordan –

ProRes provides lots of choices – all supporting 10-bit, high-quality images.

The Apple ProRes logo.


Apple ProRes comes in a variety of formats. Which one should you pick for your projects? Here’s some advice.

NOTE: Much of this tip is taken from the Apple ProRes White Paper (Jan. 2020). (Link)

Apple ProRes is one of the most popular codecs in professional post-production and, as of Oct. 2020, it also won an Engineering Emmy Award for its quality, licensing and innovation. The ProRes family of video codecs has made it both possible and affordable to edit full-frame, 10-bit, 4:2:2 and 4:4:4:4 high-definition (HD), 2K, 4K, 5K, and larger video sources.

ProRes codecs take full advantage of multicore processing and feature fast, reduced-resolution decoding modes. All ProRes codecs support any frame size (including SD, HD, 2K, 4K, 5K, and larger) at full resolution. The data rates vary based on codec type, image content, frame size, and frame rate.

Here are the current formats:

  • Apple ProRes RAW. Captures data from the camera sensor. Existing media can’t be converted into ProRes RAW.
  • Apple ProRes 4444 XQ. The highest-quality version of ProRes for 4:4:4:4 image sources (including alpha channels), with a very high data rate to preserve the detail in high-dynamic-range imagery generated by today’s highest-quality digital image sensors.
  • Apple ProRes 4444. An extremely high-quality version of ProRes for 4:4:4:4 image sources (including alpha channels). This codec features full-resolution, mastering-quality 4:4:4:4 RGBA color and visual fidelity that is perceptually indistinguishable from the original material.

NOTE: Apple ProRes 4444 XQ and Apple ProRes 4444 are ideal for the exchange of motion graphics media because they are virtually lossless, and are the only ProRes codecs that support alpha channels.

  • Apple ProRes 422 HQ. A higher-data-rate version of Apple ProRes 422 that preserves visual quality at the same high level as Apple ProRes 4444, but for 4:2:2 image sources.
  • Apple ProRes 422. A high-quality compressed codec offering nearly all the benefits of Apple ProRes 422 HQ, but at 66 percent of the data rate for even better multistream, real-time editing performance.
  • Apple ProRes 422 LT. A more highly compressed codec than Apple ProRes 422, with roughly 70 percent of the data rate and 30 percent smaller file sizes. This codec is perfect for environments where storage capacity and data rate are at a premium.
  • Apple ProRes 422 Proxy. An even more highly compressed codec than Apple ProRes 422 LT, intended for use in offline workflows that require low data rates but full-resolution video.
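The relative data rates quoted above can be anchored to a concrete number. The 220 Mbps figure for ProRes 422 HQ at 1920×1080, 29.97 fps is my assumption based on the White Paper’s target-rate tables; the ratios come straight from the list above.

```python
# The relative data rates quoted above, anchored to an assumed 220 Mbps
# target for ProRes 422 HQ at 1920x1080, 29.97 fps.

HQ_MBPS = 220  # ProRes 422 HQ at 1080p29.97 (assumed anchor value)
rates = {
    "ProRes 422 HQ": HQ_MBPS,
    "ProRes 422":    HQ_MBPS * 0.66,         # ~66% of HQ (per the list above)
    "ProRes 422 LT": HQ_MBPS * 0.66 * 0.70,  # ~70% of ProRes 422
}
for name, mbps in rates.items():
    print(f"{name}: ~{mbps:.0f} Mbps")
```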


Which version should you use?

  • Use ProRes RAW if your camera, or an external recorder, can record it from the sensor.
  • Use ProRes 4444 for images recorded using HDR values, or for media created on the computer. (For example using Motion or After Effects.)
  • Use ProRes 422 for images recorded using a camera for Rec. 709 (HD) media.
  • Use ProRes Proxy where image quality is less important than small file size; for example in screening and rough cuts.


Tip #1068: When to Choose JPEG, PNG or TIFF

Larry Jordan –

Pick the codec that works best for your project.

Export codec list from Adobe Photoshop.


A codec is the mathematical formula that determines how to convert light, or sound, into binary digits for the computer to store or display. While there are a LOT of still image codecs, there are only four that you’ll need to choose between for most of your video projects:

  • PSD
  • JPG
  • PNG
  • TIFF

So, which should you choose? Here are some tips.


PSD

This is the native Photoshop format.

Use this when you need to retain the ability to edit the elements of an image or when you want to enable, or disable, specific layers within the image.

NOTE: For best results, always embed media into the Photoshop file.


JPEG

This is a highly compressed file best used for final distribution. Good image quality in a very small file size.

Part of compressing a JPEG file involves throwing away color data and reducing some of the image quality. While this is almost always OK for images destined for the web, it is not a good idea for any image that you want to edit.

NOTE: Compressing an already compressed file will materially damage quality.


PNG

This is a modestly compressed image format. Excellent image quality with a large file size.

This is a more modern format than TIFF and is the best choice for outputting finished images at high quality. While you can’t re-edit a PNG image the way you can a PSD, it provides excellent image quality. PNG, unlike JPEG, supports an alpha channel for transparent image elements.

The only limitation of PNG is that it supports only 8-bit color.


TIFF

This is a lightly compressed image format, providing excellent image and color quality with a large file size.

TIFF is my go-to still image format. Supporting up to 10-bit color, alpha channels and essentially lossless images, it has been around for a long, long time.

The only limitation of TIFF is that, unlike PSD, you can’t edit elements within the image.


Tip #1069: Create a Custom Poster Frame

Larry Jordan –

Poster frames represent your media in the Finder.

Copy a new poster frame image into the image well in the Finder > Get Info window.


Ian Brown suggested this tip.

There’s a very fast way to create a poster frame for a QuickTime movie. (Poster frames appear in the Finder, and other locations, to illustrate the contents of a clip.)

  • Open the video in QuickTime Player
  • Move the playhead to the frame you want to use as a poster frame
  • Choose Edit > Copy (shortcut: Cmd + C)
  • Close the video
  • Select the file icon in the Finder
  • Choose File > Get Info (shortcut: Cmd + I)
  • Select the small icon in the top left corner
  • Choose Edit > Paste (shortcut: Cmd + V)



Actually, anything you paste into that top-left box becomes the poster frame. It doesn’t need to be a frame from the video; it can be any image or graphic.


Tip #1070: What Determines Storage Performance?

Larry Jordan –

Areal density, RPM and whether an SSD is in the mix all affect storage performance.

The Seagate logo.


Seagate published an interesting article titled Choosing High Performance Storage Isn’t Just About RPM. This is a summary.

The performance of a hard drive is most effectively measured by how fast data can be transferred from the spinning media (platters) through the read/write head and passed to a host computer. This is commonly referred to as data throughput and typically measured in gigabytes (or gigabits) per second. In either case, data throughput is directly related to how densely data is packed on the hard drive platters and how fast these platters spin.

Higher revolutions per minute represent a faster hard drive, but the rate of media transfer is just as important for data storage solutions.

For the areal density specification, we can measure data density on a hard drive in two ways: bits per inch (BPI) and tracks per inch (TPI). As tracks are placed closer together, TPI increases. Similarly, as data bits are placed closer and closer to each other along a track, BPI increases. Together, these represent areal density.

As a rule, when areal density increases on a hard drive, so does data throughput performance. This is because the data bits pass by the read/write head of the hard drive faster, which leads to faster data rates.

For the RPM specification, platters need to spin faster to increase performance in a hard drive. This results in moving the data bits past the read/write head faster, which results in higher data rates. Hard drives have been engineered with spin rates as low as 1200 RPM and as high as 15K RPM. But today’s most common RPM rates, in both laptop and desktop PCs, are between 5400 and 7200 RPM.

Given two identically designed hard drives with the same areal densities, a 7200 RPM drive will deliver data about 33% faster than the 5400 RPM drive. Consequently, this specification is important when evaluating the expected performance of a hard drive or when comparing different HDD models.
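The 33% figure above is just the ratio of spindle speeds: with identical areal density, data throughput scales roughly linearly with RPM.

```python
# With identical areal density, throughput scales linearly with spindle
# speed, so the speedup is simply the ratio of the two RPM values.

def relative_throughput(rpm_a: int, rpm_b: int) -> float:
    """How much faster drive A delivers data than drive B (same areal density)."""
    return rpm_a / rpm_b

speedup = relative_throughput(7200, 5400)
print(f"{(speedup - 1) * 100:.0f}% faster")  # 7200/5400 = 1.333... -> ~33%
```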

However, when moving to a solid state hybrid drive (SSHD), RPM is largely irrelevant. Why?

SSHD design is based on identifying frequently used data and placing it in the solid state drive (SSD) or NAND flash portion of the drive. NAND flash media is very fast, partly because there are no moving parts—since it’s made of solid state circuitry. Therefore, when data is requested by host computers there is typically not a dependence on pulling this data directly from the spinning media in the hard drive portion.

Sometimes, however, data will be requested that is not in the NAND flash, and only during these instances does the hard drive portion of the device become a bottleneck. Since the technology is so effective at identifying and storing frequently used data in the NAND area, SSHD technology is much more efficient in delivering data to a host computer quickly.

In tests conducted by Seagate to illustrate this article, the fastest performance for an SSHD drive came from one where the platters only spun at 5400 RPM.

Here’s a link to the full article.


Tip #1046: For HDR, Shadows are More Important

Larry Jordan –

Shadow detail is more important to perception than highlights – as both SDR and HDR reflect.

Image courtesy of Venera Technologies.
In both SDR and HDR, 50% of all grayscale values are devoted to the shadows.


In earlier tips (#1043 and #1049) we compared differences in grayscale values between SDR and HDR. What I discovered during this research is how important shadow detail is for both SDR and HDR.

NOTE: The screen shot and the information in this article are taken from a Venera Technologies article.

Human beings are more sensitive to changes in darker regions than to changes in brighter regions. HDR systems exploit this property by providing more granularity (detail) in darker regions than in brighter regions. The screenshot shows that the light-level range in darker regions is represented by a larger signal-value range than in brighter regions – meaning more detail in the shadows.

While grayscale values are more evenly distributed for Rec. 709-based displays, they become less granular for HDR displays in the brighter regions. In the case of HLG, more than half of signal values are represented for light levels between 0-60 Nits while the remaining signal values span 60-1000 Nits. Similarly, in the case of PQ-based displays, approximately half of the signal values are represented for light levels between 0-40 Nits while the remaining half of the signal values are represented in a range of 40-1000 Nits.

In other words, for both HDR and SDR, half the total signal range is reserved for shadow values below 50 IRE, while, for HDR, the remaining half must span highlight values reaching up to 10,000 nits!


Tip #1049: HDR HLG vs PQ on SDR Monitors

Larry Jordan –

HLG looks better on SDR than PQ. But PQ looks better on HDR monitors.

Image courtesy of Venera Technologies.
HLG looks better on SDR monitors, but PQ has more detail in the highlights.


Tip #1043 compared the grayscale differences between HDR HLG and SDR. This tip illustrates the differences between watching HLG and PQ on an SDR monitor.

NOTE: The screen shot and the information in this article are taken from a Venera Technologies article.

To display digital images on screen, display devices need to convert pixel values to corresponding light values. This process is usually non-linear and is called the EOTF (Electro-Optical Transfer Function).

While SDR uses Rec. 709, HDR defines two additional transfer functions to handle this conversion – Perceptual Quantizer (PQ) and Hybrid Log-Gamma (HLG). HDR PQ is an absolute, display-referred signal, while HDR HLG is a relative, scene-referred signal. This means that HLG-enabled display devices automatically adapt light levels based on the content and their own display capabilities, while PQ-enabled display devices need to implement tone mapping to adapt the light levels.
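The PQ transfer function mentioned above is fully specified in SMPTE ST 2084. A direct transcription of its EOTF shows how a signal value in [0, 1] maps to an absolute light level in nits:

```python
# The PQ (SMPTE ST 2084) EOTF: maps a non-linear signal value in [0, 1]
# to an absolute display light level in nits (cd/m^2), up to 10,000 nits.

M1 = 2610 / 16384          # constants from SMPTE ST 2084
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(signal: float) -> float:
    """Convert a PQ-encoded signal value (0..1) to light output in nits."""
    e = signal ** (1 / M2)
    y = max(e - C1, 0.0) / (C2 - C3 * e)
    return 10000.0 * y ** (1 / M1)

# Half the signal range covers only the darkest slice of the 0-10,000 nit
# span, which is exactly the shadow-weighted behavior described above.
print(f"{pq_eotf(0.5):.0f} nits")
```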

Under ideal conditions, PQ-based transformation will achieve the best-quality results, at the cost of compatibility with existing display systems.

As you can see from the screen shot, HLG images look better on SDR monitors than PQ images do. However, while PQ-based transforms promise the best quality on HDR-enabled monitors, PQ requires proper tone mapping by the display device.


As you may be able to see in the screenshot, PQ offers more detail in the highlights than HLG.