
… for Adobe Premiere Pro CC

Tip #624: Not All Captions Look Alike

Larry Jordan – LarryJordan.com

Captions are designed for simplicity, not fancy formatting.

SRT caption formatting controls in Adobe Premiere Pro.


When you import SRT files or XML files that contain open caption data, Premiere Pro automatically converts them to CEA-708 CC1 closed caption files. You can then edit the captions and burn them in as subtitles on export from Premiere Pro or Adobe Media Encoder.

However, SRT closed captions are designed for readability and flexibility, not formatting. The Federal Communications Commission’s rules about closed captioning include details about caption accuracy, placement, and synchronicity. They don’t say anything about formatting. Avoid problems – read this.

Captions are designed for readability and flexibility – you can turn them on or off, or choose between languages. Captions are not designed to be styled. All captions, except SCC, are designed to be stored in sidecar files. These are separate files from the media, but linked to it.

SCC captions, which can be embedded in the video itself (well, one language at least), are limited to two lines per screen, each with only 37 characters per line. They also require a frame rate of 29.97 fps (either drop or non-drop frame). Yup, limited.

SRT captions are more flexible. They are known for simplicity and ease of use, especially compared to other formats, many of which use XML-based code. YouTube adopted SRT as a caption format in 2008.

SRT captions support only basic formatting changes: font, color, placement and text styling. HOWEVER, there is no clear standard for these style changes. Even if you apply them to your captions, there is no guarantee that the software playing your movie will know how to interpret them.
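
For reference, here is what a basic SRT cue looks like: a cue number, a time range, then the text. The markup in the second cue is a sample of the common, but unstandardized, styling tags; some players honor them, others ignore them:

  1
  00:00:01,000 --> 00:00:04,000
  Welcome back to our broadcast.

  2
  00:00:04,500 --> 00:00:07,000
  <i><font color="#ffff00">Styled text – player support varies.</font></i>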

For this reason, when exporting SRT files using File > Export > Media (see screen shot), turn off Include SRT Styling for the best playback results on other systems.



… for Apple Final Cut Pro X

Tip #623: Not All Captions Look Alike

Larry Jordan – LarryJordan.com

Captions are designed for simplicity, not fancy formatting.

SRT caption formatting controls in Apple Final Cut Pro X.


SCC and SRT closed captions are designed for readability and flexibility, not formatting. The Federal Communications Commission’s rules about closed captioning include details about caption accuracy, placement, and synchronicity. They don’t say anything about formatting. Avoid problems – read this.

Captions are designed for readability and flexibility – you can turn them on or off, or choose between languages. Captions are not designed to be styled. All captions, except SCC, are designed to be stored in sidecar files. These are separate files from the media, but linked to it.

SCC captions, which can be embedded in the video itself (well, one language at least), are limited to two lines per screen, each with only 37 characters per line. They also require a frame rate of 29.97 fps (either drop or non-drop frame).

Yup, limited.

SRT captions are more flexible. They are known for simplicity and ease of use, especially compared to other formats, many of which use XML-based code. YouTube adopted SRT as a caption format in 2008.

SRT supports only basic formatting changes: font, color, placement and text styling. HOWEVER, there is no clear standard for these style changes. Even if you apply them to your captions, there is no guarantee that the software playing your movie will know how to interpret them.

The basic rule is: If you need text with style, use titles. If you need to enable or disable text on screen, use captions – but don’t expect much style control.


… for Codecs & Media

Tip #612: The Background of Blu-ray Disc

Larry Jordan – LarryJordan.com

A quick look at the history of Blu-ray Disc.

The Blu-ray Disc logo.


I’ve gotten a fair amount of email recently asking about Blu-ray Discs.

The specs for Blu-ray Disc were developed by Sony and unveiled in October 2000, specifically for HD media. The first Blu-ray prototype was released in April 2003. The format is now controlled by the Blu-ray Disc Association.

Blu-ray Disc was named for the blue laser it uses to read and write media. Blue lasers support higher density storage than the red lasers used by DVDs.

A single-layer Blu-ray Disc holds 25 GB; a dual-layer disc holds 50 GB. Vast at the time of the format’s release, these capacities seem small today, which means we need significant media compression to get our files to fit. Currently, Blu-ray Discs support HD, HDR and 3D media formats, all within the same storage capacity.
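
To put those capacities in perspective, here is a rough, back-of-the-envelope calculation. It assumes video encoded at 40 Mbps, the maximum video bitrate in the Blu-ray spec; real-world encodes usually run lower, which buys more minutes:

  25 GB × 8 = 200,000 megabits
  200,000 Mb ÷ 40 Mbps = 5,000 seconds ≈ 83 minutes of video per layer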

NOTES

  • The original DVD was designed for SD media and holds about 4.7 GB single layer or 8.5 GB dual-layer.
  • CD-ROMs hold between 650 and 700 MB.

EXTRA CREDIT

Tip #613 has a list of all supported Blu-ray Disc distribution formats.


… for Codecs & Media

Tip #514: The Brave New World of 8K Media

Larry Jordan – LarryJordan.com

8K files require vast storage with super-fast bandwidth.

File storage requirements as frame size increases for ProRes 422 and 4444.


Technology continues its relentless advance, and we are hearing the drumbeats for 8K media. Editing 4K takes a lot of computer horsepower. Editing 8K requires 4 TIMES more than 4K – which is why Apple is promoting the new Mac Pro for use with 8K workflows.

I don’t minimize the need for a powerful CPU or the potential of the new Mac Pro when editing frame sizes this huge. However, important as the computer is in editing media, the speed and size of your storage are even MORE critical.

Let’s start by looking at storage requirements for different frame sizes of media.

NOTE: For this example, I’m using ProRes 422 and 4444 because Apple has done a great job documenting the technical requirements of these codecs. Other codecs will have different numbers, but the size and bandwidth relationships will be similar.

More specifically, the three frame sizes in my chart are:

  • 1080/30 HD. 30 fps, 1920 x 1080 pixels
  • UHD/30. 30 fps, 3840 x 2160 pixels
  • 8K/30. 30 fps, 8192 x 4320 pixels

As the screen shot illustrates, an hour of 8K media takes 1.2 TB for ProRes 422 and 2.5 TB for ProRes 4444! These amounts require totally rethinking the capacity of our storage – and remember, this does not include typical work or cache files, many of which will also be 8K.
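
If you want to sanity-check these numbers, or estimate a different codec, the arithmetic is straightforward. Here is a minimal Python sketch; the data rates are approximate 30 fps figures drawn from Apple’s ProRes documentation:

  # Estimate storage per hour of footage from a codec's data rate.
  def tb_per_hour(megabits_per_second):
      megabits = megabits_per_second * 3600  # one hour of footage
      return megabits / 8 / 1_000_000        # Mb -> MB -> TB (decimal)

  # Approximate ProRes 422 data rates at 30 fps, in Mbps:
  rates = {
      "1080 ProRes 422": 147,
      "UHD ProRes 422": 589,
      "8K ProRes 422": 2650,
  }

  for name, mbps in rates.items():
      print(f"{name}: {tb_per_hour(mbps):.2f} TB/hour")

Running this prints roughly 0.07, 0.27 and 1.19 TB/hour – matching the chart’s 1.2 TB figure for an hour of 8K ProRes 422.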

EXTRA CREDIT

Here’s a link to my website to learn more, including the bandwidth needs of these super-huge frame sizes.


… for Codecs & Media

Tip #513: How Changing Frame Rate Affects File Size

Larry Jordan – LarryJordan.com

Faster frame rates more than double file size.

As frame rates increase, file storage needs also increase – dramatically.


I want to look at the effect increasing video frame rates has on storage capacity and bandwidth.

NOTE: In this example, I’m using Apple ProRes as a measurement codec. Other codecs will generate different numbers, but the overall results are the same. Here’s a white paper from Apple with all the source numbers.

As frame rates increase, storage needs and bandwidth also increase, regardless of frame size. If we set the storage needs of 24 fps video to 100%, then:

  • 25 fps video = 104% capacity and bandwidth
  • 30 fps video = 125% capacity and bandwidth
  • 50 fps video = 208% capacity and bandwidth
  • 60 fps video = 250% capacity and bandwidth

Just as capacity increases by these amounts, so does bandwidth. Higher frame rates require bigger and faster storage.
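
These percentages are simply the ratio of the frame rates, since every frame stores the same amount of data. For example:

  50 fps ÷ 24 fps ≈ 2.08, or 208%
  60 fps ÷ 24 fps = 2.50, or 250%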

EXTRA CREDIT

Here’s a link to my website to learn more.


… for Apple Final Cut Pro X

Tip #518: Super-Secret, Super-Fast Export Trick

Larry Jordan – LarryJordan.com

The key to speed is to use the Browser.

Image courtesy of StandardFilms.com.
Set an In and Out first, then Command-drag to define multiple segments.


Imagine you need to get multiple highlights of breaking news/sports/weather/life up to the web like, ah, yesterday. Final Cut has a very fast way to make that happen. Watch…!

To export a segment from the timeline, we need to use the Range tool (or keyboard shortcuts) to set an In and an Out. No problem – except that we can only have one In and one Out in the timeline at any time.

This doesn’t help us when we need to export a bunch of highlights as fast as possible.

But… there’s a hidden trick in FCP X that makes exporting segments even faster. Remember that I wrote: “You can only have one In and Out in the timeline”? That’s true for the timeline, but NOT true for the Browser.

Clips in the Browser support as many segments as you want. For example, in this screen shot, I have three separate areas in the same clip selected – all at the same time!

NOTE: This multiple selection technique applies to clips in the Browser, but not Projects.

To select more than one section in a clip, drag to set the In and Out for the first section, then press the Command key and drag to set as many additional sections as you want!

With the areas you want to export selected, choose File > Share and note that this menu now shows the number of clips you’ll export.

Exporting from FCP X has always been fast. But, when you need to break a movie into sections, it will be even faster – and at the highest possible quality – to export directly from the Browser.


… for Codecs & Media

Tip #503: Why Timecode Starts at 01:00:00:00

Larry Jordan – LarryJordan.com

It all comes down to finding what you seek.

A sample timecode setting displayed as: Hours, Minutes, Seconds and Frames.


Back in the old days of video tape, all programs originating in North America (and, perhaps, elsewhere) started at timecode hour 01 – a tradition that often continues today for broadcast, mostly out of habit. Why?

NOTE: Programs originating in Europe, I discovered many years ago, tended to start at hour 10. This made it easy to quickly see which part of the world a program originated from.

Back in the days of large quad videotape machines, each of which could easily cost a quarter-of-a-million dollars, the tape reels were 15 inches in diameter and weighed up to 30 pounds. The tape flew through the system at 15 inches per second – all to create a standard-definition image!

Setting up a quad tape system for playback required tweaking each of the four playback heads on the machine and adjusting them for alignment, color phase, saturation and brightness. (It was these machines that first taught me how to read video scopes.)

The problem was that getting this much iron moving fast enough to reliably play a picture took time. Eight seconds of time.

So, the standard setup for each tape required recording:

  • 60 seconds of bars and tone (to set video and audio levels)
  • 10 seconds of black
  • 10 seconds of slate
  • 10 seconds of countdown

That setup material totals 90 seconds. If program timecode started at 0:00:00:00, the setup would need to start at 23:58:30:00 – wrapping backward past midnight. Since 23 hours reads as later than 0 hours, sending the tape machine to seek the starting timecode – an automated feature that was used all the time in the high-speed, high-pressure turnaround of live news – meant the tape deck would scan forward to the end of the tape.

To prevent this, all programs started at hour 1 (or hour 10), with setup starting at 00:58:30:00 (or 09:58:30:00).
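
For the curious, here is a minimal Python sketch that verifies the arithmetic (it assumes 30 fps non-drop timecode to keep the math simple; the function names are mine):

  # Convert HH:MM:SS:FF timecode to frames and back (30 fps, non-drop).
  FPS = 30
  DAY = 24 * 60 * 60 * FPS  # frames in a 24-hour timecode day

  def tc_to_frames(tc):
      h, m, s, f = (int(x) for x in tc.split(":"))
      return ((h * 60 + m) * 60 + s) * FPS + f

  def frames_to_tc(frames):
      frames %= DAY  # wrap around the 24-hour clock
      s, f = divmod(frames, FPS)
      m, s = divmod(s, 60)
      h, m = divmod(m, 60)
      return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

  setup = 90 * FPS  # 60s bars/tone + 10s black + 10s slate + 10s countdown

  print(frames_to_tc(tc_to_frames("01:00:00:00") - setup))  # 00:58:30:00
  print(frames_to_tc(tc_to_frames("00:00:00:00") - setup))  # 23:58:30:00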

And now you know.


… for Codecs & Media

Tip #505: Why HDV Media is a Pain in the Neck

Larry Jordan – LarryJordan.com

Interlacing, non-square pixels, and deep compression make this a challenging media format.


HDV (short for high-definition DV) media was a highly popular, but deeply flawed, video format around the turn of the century.

DV (Digital Video) ushered in the wide acceptance of portable video cameras (though still at standard-definition image sizes) and drove the adoption of computer-based video editing.

NOTE: While EMC and Avid led the way in computerized media editing, it was Apple Final Cut Pro’s release in 1999 that converted a technology into a massive consumer force.

HDV was originally developed by JVC and supported by Sony, Canon and Sharp. First released in 2003, it was designed as an affordable recording format for high-definition video.

There were, however, three big problems with the format:

  • It was interlaced
  • It used non-square pixels
  • It was highly compressed

If the HDV media was headed to broadcast or viewing on a TV set, interlacing was no problem. Both distribution technologies fully supported interlacing.

But, if the video was posted to the web, ugly horizontal black lines radiated out from all moving objects. The only way to get rid of them was to deinterlace the media, which, in most cases, resulted in cutting the vertical resolution in half.

In the late 2000s, Sony and others released progressive HDV recording, but the damage to users’ perception of the format was done.

NOTE: 1080i HDV contained 3 times more pixels per field than SD, yet was compressed at the same data rate. (In interlaced media, two fields make a frame.)

The non-square pixels meant that 1080 images were recorded at 1440 pixels horizontally, with the wider pixels stretching to fill a full 1920-pixel line (a pixel aspect ratio of 1920 ÷ 1440 = 1.33). In other words, HDV pixels were short and fat, not square.

As fully progressive cameras became popular – especially DSLR cameras, with their higher-quality images – HDV gradually faded. But, even today, we are dealing with legacy HDV media and the image challenges it presents.


… for Random Weirdness

Tip #477: How to Test the Lenses You Buy

Larry Jordan – LarryJordan.com

It is better to test your lens than find a problem during a shoot.

(Image courtesy of pexels.com)


The team at PetaPixel and Michael the Maven have an interesting article and YouTube video on the importance of testing your lenses. Here’s the link. This is an excerpt.

You may not be aware that no two lenses are exactly the same. Why? Sample variation. Performance can vary widely from edge to edge or from wide to tight.

Here’s a quick way to test your lenses: Set your camera on a tripod in front of a flat, textured surface like a brick wall and snap photos at various apertures: wide open, f/2.8, f/4 and f/8. Feel free to add in f/5.6 if you’re feeling comprehensive. If you’re testing a zoom lens, we recommend repeating this process at various focal lengths as well.

Try to get the sensor as parallel to the wall as possible, and inspect each photo from the center out to the edges. It should be immediately obvious if you have a really bad lens at any particular focal length.

Then, as a bonus test, shoot some power lines against a blue sky and see if the lens is producing any dramatic chromatic aberration, which will show up as color fringing at the high-contrast edges between the black wires and the blue sky.


… for Codecs & Media

Tip #474: DNxHR vs. ProRes

Larry Jordan – LarryJordan.com

These two codecs are directly comparable, but not the same.

LowePost summarized the differences between Avid’s DNx and Apple’s ProRes codecs. Here’s the link. This is an excerpt.

The Avid DNxHR and Apple ProRes codec families are designed to meet the needs of modern, streamlined post-production workflows.

Both the DNxHR and ProRes families offer a variety of codecs for different compressions, data rates and file sizes: some with just enough image information for editing, others for high-quality color grading and finishing, and lossless ones for mastering and archiving.

Codec facts

  • DNxHR 444, ProRes 4444 and ProRes 4444 XQ are the only codecs with embedded alpha channels.
  • DNxHR 444 and ProRes 4444 XQ are the only codecs that fully preserve the details needed in HDR (high-dynamic-range) imagery.
  • Both codec families are resolution independent, but bitrate will vary depending on whether you output a proxy file or a higher-resolution file.
  • Both codec families can be wrapped inside MXF or MOV containers.

An important difference, however, is that some major editing and finishing systems lack support for ProRes encoding on Windows. This means Windows users can read a ProRes-encoded file but, in some cases, cannot export one. For this reason, many post-production facilities have abandoned ProRes and implemented a full DNxHR workflow.
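
To illustrate how interchangeable the two families can be outside an NLE, here are sample encode commands for the free, cross-platform ffmpeg tool (the file names are placeholders; pick the profile your workflow needs):

  ffmpeg -i input.mov -c:v dnxhd -profile:v dnxhr_hq -c:a pcm_s16le output.mxf

encodes DNxHR HQ wrapped in MXF, while

  ffmpeg -i input.mov -c:v prores_ks -profile:v 3 -c:a pcm_s16le output.mov

encodes ProRes 422 HQ (profile 3) wrapped in a QuickTime container.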