… for Apple Final Cut Pro X

Tip #696: What Does the Alpha Channel Show?

Larry Jordan – LarryJordan.com

Alpha channels define the amount of translucency for each pixel.

When viewing alpha channels, black is transparent, gray is translucent and white is opaque.

Just as the red, green and blue channels define the amount of each color a pixel contains, the alpha channel defines the amount of transparency each pixel contains.

A pixel can be fully transparent, fully opaque or somewhere in between. By default, every video pixel is fully opaque.
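
To make that concrete, here is a minimal sketch in Python (purely illustrative – Final Cut does this math for you) of how an alpha value blends a foreground pixel over a background pixel:

  # Straight (non-premultiplied) alpha compositing of a single RGB pixel.
  # alpha = 1.0 is fully opaque, 0.0 is fully transparent.
  def composite(fg, bg, alpha):
      return tuple(f * alpha + b * (1.0 - alpha) for f, b in zip(fg, bg))

  # A 50% transparent red pixel over a white background blends to pink.
  print(composite((1.0, 0.0, 0.0), (1.0, 1.0, 1.0), 0.5))  # (1.0, 0.5, 0.5)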

NOTE: The reason we are able to key titles over backgrounds is that titles contain a built-in alpha channel that defines each character as opaque and the rest of the frame as transparent.

Use either the View menu in the top right corner of the Viewer, or choose View > Show in Viewer > Color Channels > Alpha, to display the alpha channel for whichever clip contains the playhead (or skimmer).

While we can easily work with alpha channels inside Final Cut, in order to export video that retains transparency information, we need to use the ProRes 4444 or Animation codecs. No other ProRes, HEVC or H.264 codec supports alpha channels.

EXTRA CREDIT

The Event Viewer also supports displaying alpha channels.



… for Codecs & Media

Tip #689: What Does Video Bit-Depth Determine?

Larry Jordan – LarryJordan.com

Bit-depth determines the maximum number of colors in a video frame.

Image courtesy of VideoMaker.com

So, what is bit depth? Essentially, it determines the number of possible colors your camera is able to capture. The higher the bit depth, the more colors your camera can record, which means smoother gradations and less (or no) color banding. However, the higher the bit depth, the larger the files, which means more storage space and possibly a more powerful computer to handle all of the data.
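
For a quick sense of the numbers – this is standard arithmetic, not tied to any particular camera – each extra bit doubles the number of levels per color channel:

  # Levels per channel and total RGB colors for common video bit depths.
  for bits in (8, 10, 12):
      levels = 2 ** bits      # tonal steps per channel
      colors = levels ** 3    # combinations across R, G and B
      print(f"{bits}-bit: {levels:,} levels per channel, {colors:,} colors")

  # 8-bit:  256 levels per channel,   16,777,216 colors
  # 10-bit: 1,024 levels per channel, 1,073,741,824 colors
  # 12-bit: 4,096 levels per channel, 68,719,476,736 colors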

Keep in mind, though, that even if you go with a camera whose file formats support higher bit depths, that doesn’t necessarily automatically translate to amazing image quality. There are many other factors that play a role in both gamut and color depth, including color sampling and data rate.

If you’re still confused about whether or not you need a camera that offers high bit depth, keep these things in mind.

  • Color banding is ugly.
  • Can you handle all that extra data?
  • Higher bit depth affords you more latitude during color grading.

EXTRA CREDIT

Here’s a link to a VideoMaker presentation, on NoFilmSchool.com, that explains bit depth in three minutes.


… for Codecs & Media

Tip #675: Which Codecs Support Alpha Channels

Larry Jordan – LarryJordan.com

Not all codecs support transparency. When you need it, use one of these.

To include transparency in video, you need to create it in software which supports alpha (transparency) channels. These include Final Cut, Motion, Premiere, After Effects, Avid and many other professional editing packages.

Then, you need to choose a codec which also supports alpha channels. Not all of them do.

Rocketstock has compiled a list, though not all of these are video codecs:

  • Apple Animation
  • Apple ProRes 4444
  • Avid DNxHD
  • Avid DNxHR
  • Avid Meridien
  • Cineon
  • DPX
  • GoPro Cineform
  • Maya IFF
  • OpenEXR Sequence With Alpha
  • PNG Sequence With Alpha
  • Targa
  • TIFF
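
If you want to see per-pixel transparency in one of these still-image formats for yourself, here is a minimal sketch using the Python Pillow library (my choice for illustration; it is not part of any of the editing packages above):

  from PIL import Image

  # Build one 100 x 100 RGBA frame: solid red at 50% opacity (alpha = 128 of 255).
  frame = Image.new("RGBA", (100, 100), (255, 0, 0, 128))
  frame.save("frame_0001.png")  # one frame of a PNG sequence with alpha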

… for Adobe Premiere Pro CC

Tip #624: Not All Captions Look Alike

 

Captions are designed for simplicity, not fancy formatting.

SRT caption formatting controls in Adobe Premiere Pro.

When you import SRT files and XML files that have open caption data in them, Premiere Pro automatically converts these files to CEA-708 CC1 closed caption files. You can then edit these files and burn in the captions as subtitles while exporting using Premiere Pro or Adobe Media Encoder.

However, SRT closed captions are designed for readability and flexibility, not formatting. The Federal Communications Commission’s rules about closed captioning include details about caption accuracy, placement, and synchronicity. They don’t say anything about formatting. Avoid problems – read this.

Captions are designed for readability and flexibility – you can turn them on or off, or choose between languages. Captions are not designed to be styled. All captions, except SCC, are designed to be stored in sidecar files. These are separate files from the media, but linked to it.

SCC captions, which can be embedded in the video itself (well, one language at least), are limited to two lines per screen, each with only 37 characters per line. They also require a frame rate of 29.97 fps (either drop or non-drop frame). Yup, limited.

SRT captions are more flexible. They are known for simplicity and ease of use, especially compared to other formats, many of which use XML-based code. YouTube adopted SRT as a caption format in 2008.

SRT captions support only basic formatting changes, including font, color, placement and text formatting. HOWEVER, there is no clear standard for these style changes. Even if you apply them to your captions, there is no guarantee that the software playing your movie will know how to interpret them.

For this reason, when exporting SRT files using File > Export > Media (screen shot), turn off Include SRT Styling for best playback results on other systems.
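
For reference, here is what a minimal SRT cue looks like, first plain and then with the kind of informal styling tags that some players honor and others ignore (the text and timings are made up):

  1
  00:00:01,000 --> 00:00:04,000
  Captions are designed for readability.

  2
  00:00:04,500 --> 00:00:07,000
  <i>Styling tags like these</i> may simply be ignored by the player.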


… for Apple Final Cut Pro X

Tip #623: Not All Captions Look Alike

Larry Jordan – LarryJordan.com

Captions are designed for simplicity, not fancy formatting.

SRT caption formatting controls in Apple Final Cut Pro X.

SCC and SRT closed captions are designed for readability and flexibility, not formatting. The Federal Communications Commission’s rules about closed captioning include details about caption accuracy, placement, and synchronicity. They don’t say anything about formatting. Avoid problems – read this.

Captions are designed for readability and flexibility – you can turn them on or off, or choose between languages. Captions are not designed to be styled. All captions, except SCC, are designed to be stored in sidecar files. These are separate files from the media, but linked to it.

SCC captions, which can be embedded in the video itself (well, one language at least), are limited to two lines per screen, each with only 37 characters per line. They also require a frame rate of 29.97 fps (either drop or non-drop frame).

Yup, limited.

SRT captions are more flexible. They are known for simplicity and ease of use, especially compared to other formats, many of which use XML-based code. YouTube adopted SRT as a caption format in 2008.

SRT supports only basic formatting changes, including font, color, placement and text formatting. HOWEVER, there is no clear standard for these style changes. Even if you apply them to your captions, there is no guarantee that the software playing your movie will know how to interpret them.

The basic rule is: If you need text with style, use titles. If you need to enable or disable text on screen, use captions – but don’t expect much style control.


… for Codecs & Media

Tip #612: The Background of Blu-ray Disc

Larry Jordan – LarryJordan.com

A quick look at the history of Blu-ray Disc.

The Blu-ray Disc logo.

I’ve gotten a fair amount of email recently asking about Blu-ray Discs.

The specs for Blu-ray Disc were developed by Sony and unveiled in October 2000, specifically for HD media. The first Blu-ray prototype was released in April 2003. The format is now controlled by the Blu-ray Disc Association.

Blu-ray Disc was named for the blue laser it uses to read and write media. Blue lasers support higher density storage than the red lasers used by DVDs.

A single-layer Blu-ray Disc holds 25 GB; a dual-layer disc holds 50 GB. Those capacities seemed vast at the time of release, but today they mean we need to use significant media compression to get our files to fit. Currently, Blu-ray Discs support HD, HDR and 3D media formats, all within the same storage capacity.

NOTES

  • The original DVD was designed for SD media and holds about 4.7 GB single layer or 8.5 GB dual-layer.
  • CD-ROMs hold between 650 and 700 MB.

EXTRA CREDIT

Tip #613 has a list of all supported Blu-ray Disc distribution formats.


… for Codecs & Media

Tip #514: The Brave New World of 8K Media

Larry Jordan – LarryJordan.com

8K files require vast storage with super-fast bandwidth.

File storage requirements as frame size increases for ProRes 422 and 4444.

Technology continues its relentless advance and we are hearing the drumbeats for 8K media. Editing 4K takes a lot of computer horsepower. Editing 8K requires 4 TIMES more than 4K! That's why Apple is promoting the new Mac Pro for use with 8K workflows.

I don’t minimize the need for a powerful CPU or the potential of the new Mac Pro when editing frame sizes this huge. However, important as the computer is in editing media, the speed and size of your storage are even MORE critical.

Let’s start by looking at storage requirements for different frame sizes of media.

NOTE: For this example, I’m using ProRes 422 and 4444 because Apple has done a great job documenting the technical requirements of these codecs. Other codecs will have different numbers, but the size and bandwidth relationships will be similar.

More specifically, the three frame sizes in my chart are:

  • 1080/30 HD. 30 fps, 1920 x 1080 pixels
  • UHD/30. 30 fps, 3840 x 2160 pixels
  • 8K/30. 30 fps, 8192 x 4320 pixels

As the screen shot illustrates, an hour of 8K media takes 1.2 TB for ProRes 422 and 2.5 TB for ProRes 4444! These amounts require totally rethinking the capacity of our storage – and remember, this does not include typical work or cache files, many of which will also be 8K.
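
To translate those capacities into bandwidth, divide the per-hour figure by 3,600 seconds. Here's a quick sketch in Python using the numbers above (decimal terabytes assumed):

  # Convert per-hour storage into the sustained data rate your storage must deliver.
  def megabytes_per_second(tb_per_hour):
      return tb_per_hour * 1e12 / 3600 / 1e6  # decimal TB -> MB/s

  print(f"8K/30 ProRes 422:  {megabytes_per_second(1.2):.0f} MB/s")  # ~333 MB/s
  print(f"8K/30 ProRes 4444: {megabytes_per_second(2.5):.0f} MB/s")  # ~694 MB/s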

EXTRA CREDIT

Here’s a link to my website to learn more, including the bandwidth needs of these super-huge frame sizes.


… for Codecs & Media

Tip #513: How Changing Frame Rate Affects File Size

Larry Jordan – LarryJordan.com

Faster frame rates more than double file size.

As frame rates increase, file storage needs also increase – dramatically.

I want to look at the effect increasing video frame rates has on storage capacity and bandwidth.

NOTE: In this example, I’m using Apple ProRes as a measurement codec. Other codecs will generate different numbers, but the overall results are the same. Here’s a white paper from Apple with all the source numbers.

Regardless of frame size, as frame rates increase, storage needs and bandwidth also increase. If we set the storage needs of 24 fps video (regardless of frame size) to 100%, then:

  • 25 fps video = 104% of the 24 fps capacity and bandwidth
  • 30 fps video = 125% of the 24 fps capacity and bandwidth
  • 50 fps video = 208% of the 24 fps capacity and bandwidth
  • 60 fps video = 250% of the 24 fps capacity and bandwidth

Just as capacity increases by these amounts, so, also, does bandwidth. Higher frame rates require bigger and faster storage.
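
The percentages are simply each frame rate divided by the 24 fps baseline, as a quick check in Python confirms:

  # Storage and bandwidth relative to a 24 fps baseline (same frame size, same codec).
  baseline = 24
  for fps in (25, 30, 50, 60):
      print(f"{fps} fps: {fps / baseline:.0%} of the 24 fps storage and bandwidth")
  # 25 fps: 104%, 30 fps: 125%, 50 fps: 208%, 60 fps: 250%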

EXTRA CREDIT

Here’s a link to my website to learn more.


… for Apple Final Cut Pro X

Tip #518: Super-Secret, Super-Fast Export Trick

Larry Jordan – LarryJordan.com

The key to speed is to use the Browser.

Image courtesy of StandardFilms.com.
Set an In and Out first, then Command-drag to define multiple segments.

Imagine you need to get multiple highlights of breaking news/sports/weather/life up to the web like, ah, yesterday. Final Cut has a very fast way to make that happen. Watch…!

In order for us to export a segment from the timeline, we need to use the Range tool (or keyboard shortcuts) to set an In and Out. No problem – except that we can only have one In and one Out in the timeline at any time.

This doesn’t help us when we need to export a bunch of highlights as fast as possible.

But… there's a hidden trick in FCP X that makes exporting segments even faster. Remember that I wrote, “You can only have one In and one Out in the timeline”? That's true for the timeline, but NOT true for the Browser.

Clips in the Browser support as many segments as you want. For example, in this screen shot, I have three separate areas in the same clip selected – all at the same time!

NOTE: This multiple selection technique applies to clips in the Browser, but not Projects.

To select more than one section in a clip, drag to set the In and Out for the first section, then press the Command key and drag to set as many additional sections as you want!

With the areas you want to export selected, choose File > Share and note that this menu now shows the number of clips you’ll export.

Exporting from FCP X has always been fast. But, when you need to break a movie into sections, it will be even faster – and at the highest possible quality – to export directly from the Browser.


… for Codecs & Media

Tip #503: Why Timecode Starts at 01:00:00:00

Larry Jordan – LarryJordan.com

It all comes down to finding what you seek.

A sample timecode setting displayed as: Hours, Minutes, Seconds and Frames.

Back in the old days of videotape, all programs originating in North America (and, perhaps, elsewhere) started at timecode hour 01 – a tradition that often continues today for broadcast, mostly out of habit. Why?

NOTE: Programs originating in Europe, I discovered many years ago, tended to start at hour 10. This made it easy to quickly see which part of the world a program originated from.

Back in the days of large quad videotape machines, each of which could easily cost a quarter-of-a-million dollars, the tape reels were 15 inches in diameter and weighed up to 30 pounds. The tape flew through the system at 15 inches per second – all to create a standard-definition image!

Setting up a quad tape system for playback required tweaking each of the four playback heads on the machine and adjusting them for alignment, color phase, saturation and brightness. (It was these machines that first taught me how to read video scopes.)

The problem was that getting this much iron moving fast enough to reliably play a picture took time. Eight seconds of time.

So, the standard setup for each tape required recording:

  • 60 seconds of bars and tone (to set video and audio levels)
  • 10 seconds of black
  • 10 seconds of slate
  • 10 seconds of countdown

If timecode started at 0:00:00:00 for the program, the setup material would start at 23:58:30:00. Since 23 hours comes after 0 hours, telling the tape machine to seek the starting timecode – an automated feature used all the time in the high-speed, high-pressure turnaround of live news – meant the deck would scan forward to the end of the tape.

To prevent this, all programs started at 1 hour (or 10 hours) with setup starting at 00:58:30:00.
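
If you want to check the arithmetic: the setup material totals 90 seconds (60 + 10 + 10 + 10), so subtract that from the program start to find where setup begins. A quick sketch in Python, ignoring drop-frame details:

  # Where does the 90-second setup block start, given the program start time?
  def setup_start(program_start_seconds, setup_seconds=90):
      h, rem = divmod(program_start_seconds - setup_seconds, 3600)
      m, s = divmod(rem, 60)
      return f"{h:02d}:{m:02d}:{s:02d}:00"

  print(setup_start(1 * 3600))   # 00:58:30:00 (program starts at hour 1)
  print(setup_start(10 * 3600))  # 09:58:30:00 (program starts at hour 10)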

And now you know.