
… for Random Weirdness

Tip #477: How to Test the Lenses You Buy

Larry Jordan –

It is better to test your lens than find a problem during a shoot.

The team at PetaPixel and Michael the Maven have an interesting article and YouTube video on the importance of testing your lenses. Here’s the link. This is an excerpt.

You may not be aware that no two lenses are exactly the same. Why? Sample variation. Performance can vary widely from edge to edge or from wide to tight.

Here’s a quick way to test your lenses: Set your camera on a tripod in front of a flat, textured surface like a brick wall and snap photos at various apertures: wide open, f/2.8, f/4 and f/8. Feel free to add in f/5.6 if you’re feeling comprehensive. If you’re testing a zoom lens, we recommend repeating this process at various focal lengths as well.

Try to get the sensor as parallel to the wall as possible, and inspect each photo from the center out to the edges. It should be immediately obvious if you have a really bad lens at any particular focal length.

Then, as a bonus test, shoot some power lines against a blue sky and see if the lens is producing any dramatic chromatic aberration, which will show up as color fringing at the high-contrast edges between the black wires and the blue sky.

… for Codecs & Media

Tip #483: Adobe Supports ProRes on Mac and Windows

Larry Jordan –

Adobe announced full support for ProRes on Windows.

At the start of 2019, Adobe announced expanded support for ProRes on both its Mac and Windows software. Here’s the link. ProRes has long been popular on Mac-based editing systems, including those from Adobe. But its support on Windows has been much weaker. That changed with this announcement from Adobe.

Apple ProRes is a codec technology developed by Apple for high-quality, high-performance editing. It is one of the most popular codecs in professional post-production and is widely used for acquisition, production, delivery, and archive. Adobe has worked with Apple to provide ProRes export to post-production professionals using Premiere Pro and After Effects. Support for ProRes on macOS and Windows helps streamline video production and simplifies final output, including server-based remote rendering with Adobe Media Encoder.

With the latest Adobe updates, ProRes 4444 and ProRes 422 export is available within Premiere Pro, After Effects, and Media Encoder on macOS and Windows 10.

… for Codecs & Media

Tip #474: DNxHR vs. ProRes

Larry Jordan –

These two codecs are directly comparable, but not the same.

LowePost summarized the differences between Avid’s DNx and Apple’s ProRes codecs. Here’s the link. This is an excerpt.

The Avid DNxHR and Apple ProRes codec families are designed to meet the needs of modern, streamlined post-production workflows.

Both the DNxHR and ProRes families offer a variety of codecs with different compression levels, data rates and file sizes: some carry just enough image information for editing, others support high-quality color grading and finishing, and the highest-quality versions are suited to mastering and archiving.

Codec facts

  • DNxHR 444, ProRes 4444 and ProRes 4444 XQ are the only codecs with embedded alpha channels.
  • DNxHR 444 and ProRes 4444 XQ are the only codecs that fully preserve the detail needed in HDR (high dynamic range) imagery.
  • Both codec families are resolution independent, but bit rate will vary depending on whether you output a proxy file or a higher-resolution file.
  • Both codec families can be wrapped inside MXF or MOV containers.

An important difference, however, is that some of the major editing and finishing systems lack support for ProRes encoding on Windows. This means Windows users can read a ProRes-encoded file but, in some cases, cannot export one. For this reason, many post-production facilities have abandoned ProRes and implemented a full DNxHR workflow.
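
For Windows shops that standardize on DNxHR, FFmpeg offers one route to encoding it. Below is a minimal sketch of building such a command; the file names are hypothetical placeholders, and DNxHR HQ is just one of several profiles FFmpeg’s `dnxhd` encoder supports.

```python
# Sketch: an FFmpeg command line for encoding DNxHR HQ in an MXF wrapper.
# "source.mov" and "output.mxf" are placeholder names, not from the tip.
dnxhr_cmd = [
    "ffmpeg",
    "-i", "source.mov",        # hypothetical source file
    "-c:v", "dnxhd",           # FFmpeg's DNxHD/DNxHR encoder
    "-profile:v", "dnxhr_hq",  # DNxHR HQ: high-quality 8-bit 4:2:2
    "-c:a", "pcm_s16le",       # uncompressed 16-bit audio
    "output.mxf",              # wrap in an MXF container
]
print(" ".join(dnxhr_cmd))
```

Running the printed command requires ffmpeg on your PATH; the sketch only assembles it.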

… for Apple Motion

Tip #467: Render Settings Improve CPU Performance

Larry Jordan –

These render options allow us to avoid overloading the CPU.

Render options in the Render menu of the Canvas.

(This is an excerpt from the Motion Help files.) Choose the render quality and resolution of the canvas display, and enable or disable features that can impact playback performance. When an option is active, a checkmark appears beside the menu item. If a complex project is causing your computer to play at a very low frame rate, you can make changes in this menu to reduce the strain on the processor.

The Render pop-up menu displays the following items:

  • Dynamic: Reduces the quality of the image displayed in the canvas during playback or scrubbing in the Timeline or mini-Timeline, allowing for faster feedback. Also reduces the quality of an image as it is modified in the canvas. When playback or scrubbing is stopped, or the modification is completed in the canvas, the image quality is restored (based on the Quality and Resolution settings for the project).
  • Full: Displays the canvas at full resolution (Shift-Q).
  • Half: Displays the canvas at half resolution.
  • Quarter: Displays the canvas at one-quarter resolution.
  • Draft: Renders objects in the canvas at a lower quality to allow optimal project interactivity. There’s no anti-aliasing.
  • Normal: Renders objects in the canvas at a medium quality. Shapes are anti-aliased, but 3D intersections are not. This is the default setting.
  • Best: Renders objects in the canvas at best quality, which includes higher-quality image resampling, anti-aliased intersections, anti-aliased particle edges, and sharper text.
  • Custom: Allows you to set additional controls to customize rendering quality. Choosing Custom opens the Advanced Quality Options dialog. For more information, see Advanced Quality settings.
  • Lighting: Turns the effect of lights in a project on or off. This setting does not turn off lights in the Layers list (or light scene icons), but it disables light shading effects in the canvas.
  • Shadows: Turns the effect of shadows in a project on or off.
  • Reflections: Turns the effect of reflections in a project on or off.
  • Depth of Field: Turns the effect of depth of field in a project on or off.
  • Motion Blur: Enables/disables the preview of motion blur in the canvas. Disabling motion blur may improve performance.

Note: When creating an effect, title, transition, or generator template for use in Final Cut Pro X, the Motion Blur item in the View pop-up menu controls whether motion blur is turned on when the project is applied in Final Cut Pro.

… for Codecs & Media

Tip #451: Audio Compression for Podcasts

Larry Jordan –

You can compress audio a lot, without damaging quality.

If you are compressing audio for podcasts, where it’s just a few people talking, you can make this a very small file by taking advantage of some key audio characteristics.

To set a baseline, an hour of 16-bit, 48 kHz uncompressed stereo audio (WAV or AIF) is about 660 MB. (1 minute of stereo = 11 MB; 1 minute of mono = 5.5 MB.)

If we are posting this to our own website, streaming it live where bandwidth requirements make a difference, or posting it to a service that charges for storage, we want to make our file as small as possible without damaging quality. Here’s what you need to know.

Since people only have one mouth, if all they are doing is talking, not singing with a band, you don’t need stereo. Mono is fine.

This reduces file size by 50%.

NOTE: Mono sound plays evenly from both left and right speakers, placing the audio in the middle between them.

According to the Nyquist Theorem, the maximum frequency a digital file can reproduce is half its sample rate. Human speech maxes out below 10,000 Hz. This means that compressing at a 32 kHz sample rate retains all the frequency characteristics of the human voice. (32 kHz ÷ 2 = 16 kHz, well above the frequencies used in human speech.)

This reduces file size by another 33%.

Without doing any compression, our 660 MB one hour audio file is reduced to about 220 MB.

Finally, using your preferred compression software, set the compression data rate to 56 kbps. This creates about a 25 MB file for a one-hour show. (About 95% file size reduction from the original file.)
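
The whole chain of savings can be checked with a few lines of arithmetic. This sketch uses binary megabytes (1 MB = 1,048,576 bytes), which is how the tip’s round numbers work out:

```python
# A quick check of the tip's arithmetic for a one-hour podcast.
SECONDS = 3600
MB = 1_048_576  # binary megabyte

# Baseline: 2 bytes/sample (16-bit) x 48,000 samples/sec x 2 channels
stereo_48k = 2 * 48_000 * 2 * SECONDS / MB   # ~659 MB ("about 660 MB")

# Step 1: mono halves the file
mono_48k = stereo_48k / 2                    # ~330 MB

# Step 2: a 32 kHz sample rate cuts another third
mono_32k = mono_48k * 32_000 / 48_000        # ~220 MB

# Step 3: compress at 56 kbps (kilobits -> bytes is divide by 8)
compressed = 56_000 / 8 * SECONDS / MB       # ~24-25 MB

print(round(stereo_48k), round(mono_32k), round(compressed))
```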

And for podcasts featuring all-talk, it will sound great.

… for Codecs & Media

Tip #282: When to Use HEVC vs. H.264

Larry Jordan –

Which to choose and why?

As media creators, there’s a lot of confusion over whether we should use H.264 or HEVC to compress our files for distribution on the web. Here’s my current thinking.

The big benefit to HEVC is that it achieves the same image quality with a 30-40% savings in file size.

The big disadvantage is that HEVC takes a lot longer to compress and not all systems – especially older systems – can play it.

If you are sending files to broadcast, cable or digital cinema, they will want something much less compressed than either of these formats. So, for those outlets, this is not a relevant question.

For me, the overriding reason to use H.264 instead of HEVC is that YouTube, Facebook, Twitter and most other social media sites recompress your video in order to create all the different formats they need to re-distribute it. (I read somewhere a while ago that YouTube creates 20 different versions of a single video.)

For this reason, spending extra time creating a high-quality HEVC file, when it will only get re-compressed, does not make any sense to me. Instead, create a high-bit-rate H.264 version so that when the file is recompressed, it won’t lose any image quality.
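
To put the 30-40% savings in concrete terms, here is a rough size comparison. The 20 Mb/s H.264 delivery bitrate is a hypothetical value of mine, not one from the tip:

```python
# Illustrating the tradeoff: HEVC reaches the same quality at roughly
# 30-40% smaller files. Bitrates here are illustrative assumptions.
h264_mbps = 20.0
hevc_mbps = h264_mbps * (1 - 0.35)   # midpoint of the 30-40% savings

def mb_for(mbps, minutes):
    return mbps * 60 * minutes / 8   # megabits/sec -> total MB

print(mb_for(h264_mbps, 10))         # 1500 MB for a 10-minute video
print(mb_for(hevc_mbps, 10))         # 975 MB for the same video in HEVC
```

The size gap is real, but as the tip argues, it only matters when your file is the one actually delivered to viewers.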

Where HEVC makes sense is when you are serving files directly to consumers via streaming on your website. And, even in those cases, HTTP Live Streaming will be a better option to support mobile devices.

HEVC is mostly a benefit to service providers and social media firms.

… for Codecs & Media

Tip #284: What is a Proxy File?

Larry Jordan –

Proxies save time, money and storage space.

A Proxy file, regardless of the codec that created it, is designed to meet three key objectives: save time, save money and use less expensive gear. Proxies meet these objectives because they:

  • Reduce required storage capacity
  • Reduce required storage bandwidth
  • Reduce the CPU load to process the file

Proxies accomplish these goals in two significant ways:

  • They convert all media into a very efficient intermediate codec that is easy to edit; for example, ProRes 422 or DNx.
  • They cut each frame dimension by 50%. So, a UHD file, with a source frame size of 3840 x 2160, has a proxy size of 1920 x 1080. A 6K frame becomes 3K.
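
The frame-size math above is worth spelling out, because halving each dimension leaves only a quarter of the pixels, which is where most of the storage and bandwidth savings come from:

```python
# Proxy math: halving width and height quarters the pixel count.
def proxy_dimensions(width, height, scale=0.5):
    return int(width * scale), int(height * scale)

uhd = (3840, 2160)
print(proxy_dimensions(*uhd))            # (1920, 1080)

# Pixel count drops to 25% of the source frame
pixel_ratio = (1920 * 1080) / (3840 * 2160)
print(pixel_ratio)                       # 0.25
```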

Proxies are best used for the initial editorial where you are reviewing footage, creating selects, building a rough cut and polishing the story. For most of us, that’s 80% of the time we spend editing any project. Proxy files can also be used for most client review exports, because they render and export faster and, at the early stage, clients aren’t looking for the final look.

Using proxies means we can use less powerful and much less expensive computers and storage for the vast majority of time spent on a project. Proxy files also allow us to get out of the edit suite and edit on more portable gear.

Switching out of proxy mode is necessary for polishing effects, color grading, final render and master export.

Many editors feel that it is a sign of weakness to edit proxies. This is nonsense. Back when we edited film, we used workprints, the film version of a proxy file, for everything. Somehow, great work still got made.

Avid, Adobe and Apple all support proxy workflows. Proxies are worth adding to your workflow.

… for Codecs & Media

Tip #304: What is FFmpeg?

Larry Jordan –

An open source project supporting hundreds of media formats.

The FFmpeg logo, reflecting how many media files are compressed.

FFmpeg is a free and open-source project consisting of a vast software suite of libraries and programs for handling video, audio, and other multimedia files and streams. At its core is the FFmpeg program itself, designed for command-line-based processing of video and audio files, and widely used for format transcoding, basic editing (trimming and concatenation), video scaling, video post-production effects, and standards compliance.

FFmpeg is part of the workflow of hundreds of other software projects. Its libraries are a core part of media players such as VLC, and have been included in core processing for YouTube and the iTunes inventory of files. Codecs for encoding and/or decoding most known audio and video file formats are included, making it highly useful for transcoding common and uncommon media files into a single common format.

The name of the project is inspired by the MPEG video standards group, together with “FF” for “fast forward”. The logo uses a zigzag pattern that shows how MPEG video codecs handle entropy encoding.

The FFmpeg project was started by Fabrice Bellard in 2000. Most non-programmers access the FFmpeg suite of programs using a “front-end.” This is software that puts a user interface on the FFmpeg engine. Examples include: Handbrake, ffWorks, MPEG Streamclip, and QWinFF.
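
A minimal sketch of what those front-ends do under the hood: build an ffmpeg command line and hand it to the binary. The file names below are placeholders, and actually running the command requires ffmpeg on your PATH.

```python
# Sketch: a basic FFmpeg transcode, the kind of call a front-end wraps
# in a GUI. "input.avi" and "output.mp4" are hypothetical file names.
import subprocess

cmd = [
    "ffmpeg",
    "-i", "input.avi",    # hypothetical source in an older format
    "-c:v", "libx264",    # transcode video to H.264
    "-crf", "20",         # constant-quality rate control
    "-c:a", "aac",        # transcode audio to AAC
    "output.mp4",
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)   # uncomment to actually transcode
```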

… for Codecs & Media

Tip #303: What is MXF OP1a?

Larry Jordan –

MXF is an industry-workhorse because it is so flexible.

MXF (Material Exchange Format) was standardized by SMPTE in 2004. MXF is a container that holds digital video and audio media. OP1a (Operational Pattern 1a) defines how the media inside it is stored.

MXF has full timecode and metadata support, and is intended as a platform-agnostic stable standard for professional video and audio applications.

MXF had a checkered beginning. In 2005, there were interoperability problems between Sony and Panasonic cameras. Both recorded “MXF” – but the two formats were incompatible. Other incompatibilities, such as randomly generating the links that connect files, were resolved in a 2009 redefinition of the spec.

MXF generally stores media in separate files. For example: video, audio, timecode and metadata are all separate. This means that a single MXF container actually supports a variety of different media codecs inside it.

Another benefit to MXF OP1a is that it supports “growing files.” These are files that can be edited while they are still being recorded. (Think sports highlights.)

… for Codecs & Media

Tip #228: How Much RAM Do You Need For Editing?

Larry Jordan –

More RAM helps – to a point.

This chart illustrates how RAM needs increase as frame sizes increase.
RAM requirements for 30-fps, 8-bit video at different frame sizes (MB/second).

The way most NLEs work is that, during an edit, the software will load (“buffer”) a portion of a clip into RAM. This allows for smoother playback and skimming, as you drag your playhead across the timeline.

When a clip is loaded into RAM, it is uncompressed, allowing each pixel to be processed individually. This means that the amount of RAM used for buffering depends upon several factors:

  • How much RAM you have
  • The frame size of the source video clip
  • The frame rate of the source video clip
  • The bit-depth of the source video clip

This graph illustrates this. It displays the MB required per second to cache 8-bit video into RAM. As you can see, RAM requirements skyrocket with frame size. These numbers increase when you have multiple clips playing at the same time.

NOTE: These numbers also increase as bit-depth increases; however, the proportions remain the same.
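
The chart’s kind of arithmetic is easy to reproduce. This sketch assumes 3 bytes per pixel (an RGB/4:4:4 assumption of mine; the tip does not state the chroma sampling behind its chart):

```python
# Uncompressed 8-bit video: width x height x 3 bytes/pixel x frames/sec.
def mb_per_second(width, height, fps=30, bytes_per_pixel=3):
    return width * height * bytes_per_pixel * fps / 1_000_000

for name, (w, h) in {
    "720p HD": (1280, 720),    # ~83 MB/sec
    "1080p HD": (1920, 1080),  # ~187 MB/sec
    "UHD 4K": (3840, 2160),    # ~746 MB/sec
}.items():
    print(f"{name}: {mb_per_second(w, h):.0f} MB/sec")
```

Whatever the exact assumptions, the quadratic growth with frame size is the point: each doubling of both dimensions quadruples the RAM needed per second of buffered video.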

The amount of RAM you need varies, depending upon the type of editing you are doing.

  • 8 GB RAM. You can edit with this amount of RAM, but editing performance may suffer for anything larger than 720p HD.
  • 16 GB RAM. Good for most editing.
  • 32 GB RAM. My recommendation for editing 4K, 6K, multicam and HDR.
  • 64 GB RAM. Potentially good for massive frame sizes, but not required.

Anything more than 64 GB of RAM won’t hurt, but you won’t see any significant improvement in performance, especially considering the cost of more RAM.