
… for Codecs & Media

Tip #903: A Caution About Frame Rate Conversions

Larry Jordan – LarryJordan.com

The highest image quality occurs when media is played at its source frame rate.


During this last week, I've gotten more emails than usual about frame rate conversions. Some of the concerns are software-related; for example, there may be a problem with Compressor converting 24 fps material to 25 fps.

However, the bulk of my email centered on jitter caused by the conversion.

It is important to stress two key points:

  1. The web does not care about the frame rate of your media. The web plays anything. There’s no benefit to changing rates.
  2. The best image quality ALWAYS occurs when media is played back at the frame rate at which it was shot. As much as possible, shoot the frame rate you need to deliver.

Converting from 24 to 48, 59.94 to 29.97, or 60 to 30 is smooth and easy: every other frame is dropped, or each frame is doubled. But converting 24 to 25, 30 to 25, or 29.97 to 24 is a recipe for stutter and jitter.
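To see why integer-ratio conversions stay smooth while non-integer ones stutter, here's a minimal Python sketch of a nearest-frame rate conversion. This is a simplified model for illustration only, not how Compressor actually works:

```python
def convert_by_frame_mapping(num_src_frames, src_fps, dst_fps):
    """Map each output frame to the nearest source frame in time.
    Integer ratios (60 -> 30) drop or repeat frames at perfectly even
    intervals; ratios like 24 -> 25 repeat a frame at irregular
    intervals, which the eye sees as stutter."""
    duration = num_src_frames / src_fps
    num_dst_frames = round(duration * dst_fps)
    return [min(num_src_frames - 1, round(i * src_fps / dst_fps))
            for i in range(num_dst_frames)]

# 60 -> 30: every other frame is dropped -- perfectly regular.
print(convert_by_frame_mapping(12, 60, 30))   # [0, 2, 4, 6, 8, 10]

# 24 -> 25: over one second, one source frame gets shown twice.
print(convert_by_frame_mapping(24, 24, 25))
```

Run the second call and you'll see a duplicated frame in the middle of an otherwise one-to-one mapping; at 24 to 25 fps, that hiccup repeats about once a second.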

To decrease your stress, before you start shooting, carefully think about the frame rate you need to deliver – or potentially might need to deliver – then shoot that rate. (And, remember, again, that the web doesn’t care about frame rates. So don’t convert media for the web.)



… for Codecs & Media

Tip #904: Why You Should Avoid HDV

Larry Jordan – LarryJordan.com

Where possible, avoid using HDV to maximize image quality.


HDV was one of the first, if not THE first, consumer-grade HD video formats. As such, it was an eye-opening opportunity to discover the benefits of HD.

However, compared to media formats today, HDV has three serious limitations:

  • First, most HDV material is interlaced. While this plays acceptably on TV monitors, it looks awful on the web. This is because TVs are designed to display interlaced media, while the web is designed for progressive.
  • Second, unlike virtually all digital media today, HDV uses rectangular pixels that are stretched to fill the frame, rather than square pixels. This means that an HDV image won't look as sharp as today's digital images.
  • Third, HDV records half the color information compared to most modern cameras. (And one-quarter the color of high-end cameras.)

NOTE: The only way to get rid of interlacing is to remove every other line of video, which cuts vertical image resolution in half. The missing lines are then rebuilt, either by duplicating the remaining lines or by estimating them using various methods of image interpolation.
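Here's a minimal sketch of those two approaches in Python, using NumPy and treating a frame as an array of lines. Real deinterlacers use far more sophisticated, motion-aware interpolation:

```python
import numpy as np

def deinterlace_line_double(frame):
    """Keep one field (the even-numbered lines), then duplicate each
    remaining line to restore full height. Vertical detail is halved."""
    field = frame[0::2]                 # discard every other line
    return np.repeat(field, 2, axis=0)

def deinterlace_interpolate(frame):
    """Keep the even-numbered lines and rebuild the odd-numbered lines
    by averaging the lines above and below (linear interpolation).
    The final line is left as-is for simplicity."""
    out = frame.astype(np.float32)
    out[1:-1:2] = (out[0:-2:2] + out[2::2]) / 2
    return out.astype(frame.dtype)
```

Either way, half the original vertical information is gone; interpolation just hides the loss more gracefully than line doubling.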

So, if you are given the option to shoot or convert media into HDV, be very cautious before you agree. There are very few situations today where this makes sense.

EXTRA CREDIT

If you have existing HDV material, consider getting it transcoded to ProRes 422. While not required, you do need to start thinking about how to preserve and convert your older media assets, especially if you plan to edit them in the future.


… for Codecs & Media

Tip #905: What’s a Media “Container?”

Larry Jordan – LarryJordan.com

Containers simplify storing different media types in one place.


QuickTime, MXF, WAV and MPEG-4 are all media containers. But, what’s a container and why is it used?

Wikipedia: “A container format (informally, sometimes called a wrapper) belongs to a class of computer files that exist to allow multiple data streams to be embedded into a single file, usually along with metadata for identifying and further detailing those streams.”

Because a media file can have different attributes depending on whether it holds audio, video, timecode, captions, or other media information, it is easier to store each of these elements as its own stream, then wrap all of these different components in a single container.

An analogy is a file folder holding different sheets of paper. Each paper could be written in a different language, but unified by being contained in that single folder.

Wikipedia: “Container format parts have various names: “chunks” as in RIFF and PNG, “atoms” in QuickTime/MP4, “packets” in MPEG-TS (from the communications term), and “segments” in JPEG. The main content of a chunk is called the “data” or “payload”. Most container formats have chunks in sequence, each with a header, while TIFF instead stores offsets. Modular chunks make it easy to recover other chunks in case of file corruption or dropped frames or bit slip, while offsets result in framing errors in cases of bit slip.”
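To make "chunks in sequence, each with a header" concrete, here's a minimal Python sketch that walks the top-level chunks of a RIFF container, such as a WAV file. The file name is just an example:

```python
import struct

def walk_riff_chunks(path):
    """Print the four-character ID and payload size of each top-level
    chunk in a RIFF container (WAV, AVI, etc.)."""
    with open(path, "rb") as f:
        riff, size, form = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF":
            raise ValueError("not a RIFF container")
        print("container form:", form.decode())   # e.g. 'WAVE'
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            print(chunk_id.decode(errors="replace"), chunk_size, "bytes")
            # Skip the payload, plus a pad byte if the size is odd.
            f.seek(chunk_size + (chunk_size & 1), 1)

walk_riff_chunks("example.wav")  # hypothetical file
```

For a WAV file, this typically prints an 'fmt ' chunk (the audio attributes) followed by a 'data' chunk (the payload), which is exactly the container idea: separate streams and metadata, one file.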

Here are two links to learn more: Wikipedia and Mozilla


… for Codecs & Media

Tip #882: What is Resolution?

Larry Jordan – LarryJordan.com

DPI is irrelevant for digital media. The key setting is total pixels across & down.

The New Image menu in Photoshop.


When you create a new image in Photoshop, one of the parameters you need to set is Resolution. But, is resolution even relevant for digital media?

The short answer is: No.

Resolution is a print term that defines – for a fixed size image – how many pixels fit into a given space.

Digital media is the opposite. The number of pixels is fixed, but the size of the shape – the monitor – varies widely.

When creating images for the web, we have standardized on a resolution setting of 72. NOT because this is an accurate setting (it isn't), but to remind us to look only at total pixels across by total pixels down.

These are the pixels that will be stretched to fit whatever size monitor or frame they are displayed in.
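If you want to prove this to yourself, save the same pixels with two different resolution settings and compare. A minimal sketch using the Pillow library (the file names are just examples):

```python
from PIL import Image

# Identical pixel data, different DPI metadata.
img = Image.new("RGB", (1920, 1080), "gray")
img.save("web_72.png", dpi=(72, 72))
img.save("print_300.png", dpi=(300, 300))

for name in ("web_72.png", "print_300.png"):
    with Image.open(name) as im:
        # The pixel dimensions -- the only numbers a monitor uses --
        # are identical; only the print-oriented metadata differs.
        print(name, im.size, im.info.get("dpi"))
```

Both files display identically on screen; the resolution setting only matters when something is printed at a fixed physical size.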

EXTRA CREDIT

When creating images for the web or video, RGB 8-bit is the best and most compatible choice.


… for Codecs & Media

Tip #849: 8 Reasons Why You Should Shoot Raw

Larry Jordan – LarryJordan.com

RAW files are bigger and require processing, but the advantages are worth it.

A simulated RAW (left) and corrected image. (Courtesy of Pexels.com)


This article, written by Rob Lim, first appeared in PhotographyConcentrate.com. This is an excerpt.

NOTE: This article was originally written about shooting still images, where the alternative to raw is JPEG. However, these comments also apply to shooting video using AVCHD or H.264 codecs.

Raw is a file format that captures all the image data recorded by the sensor when you take a photo. When shooting in a format like JPEG, image information is compressed and some of it is lost. Because no information is discarded with raw, you're able to produce higher-quality images, as well as correct problem images that would be unrecoverable if shot in the JPEG format.

NOTE: Raw is not an acronym. So, unless you are discussing ProRes RAW, it’s spelled lower case.

Here’s a list of the key benefits to shooting raw:

  1. Get the Highest Level of Quality
  2. Record Greater Levels of Brightness
  3. Easily Correct Dramatically Over/Under Exposed Images
  4. Easily Adjust White Balance
  5. Get Better Detail
  6. Enjoy Non-Destructive Editing
  7. Have an Efficient Workflow
  8. It’s the Pro Option

EXTRA CREDIT

The article linked at the top has more details on each of these points.


… for Codecs & Media

Tip #851: A Comparison: Frame Size vs. File Size

Larry Jordan – LarryJordan.com

This chart, measured in GB/hour, illustrates how file size expands with frame size.


As frame sizes continue expanding to rival a living room wall, the accompanying file sizes explode as well.

The chart in this screen shot illustrates how quickly file sizes increase with frame size.

NOTE: This table is based on ProRes 422, at two frame rates: 24 fps and 60 fps. Shooting raw or log files would increase these file sizes about 2X.

Here are the source numbers for this chart.

Gigabytes Needed to Store 1 Hour of ProRes 422 Media

Frame Size   24 fps   60 fps
720p HD          26       66
1080p HD         53      132
UHD             212      530
6K              509    1,273
8K              905    2,263

(File sizes published by Apple in their ProRes White Paper.)
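A rough sanity check on these numbers: ProRes 422 file size scales almost linearly with pixels per frame and frame rate. Here's a sketch using 1080p at 24 fps (53 GB/hour) as the baseline; the 6K and 8K frame dimensions are assumptions, since cameras vary:

```python
BASE_PIXELS = 1920 * 1080
BASE_GB_PER_HOUR = 53            # 1080p HD at 24 fps, from the table

frame_sizes = {
    "720p HD":  1280 * 720,
    "1080p HD": 1920 * 1080,
    "UHD":      3840 * 2160,
    "6K":       6144 * 3240,     # assumed frame dimensions
    "8K":       8192 * 4320,     # assumed frame dimensions
}

for name, pixels in frame_sizes.items():
    for fps in (24, 60):
        gb = BASE_GB_PER_HOUR * (pixels / BASE_PIXELS) * (fps / 24)
        print(f"{name:9s} {fps:2d} fps: ~{gb:,.0f} GB/hour")
```

The estimates land close to Apple's published numbers, which is why doubling frame width and height quadruples storage.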


… for Codecs & Media

Tip #852: What is ProRes RAW?

Larry Jordan – LarryJordan.com

ProRes RAW is a codec optimized for speed and quality.

Processing flowchart for ProRes RAW. Note that image processing is done in the application, not camera.


Apple ProRes RAW is based on the same principles and underlying technology as existing ProRes codecs, but is applied to a camera sensor’s pristine raw image data rather than conventional image pixels.

ProRes RAW is available at two compression levels: Apple ProRes RAW and Apple ProRes RAW HQ. Both achieve excellent preservation of raw video content, with additional quality available at the higher data rate of Apple ProRes RAW HQ. Compression-related visible artifacts are very unlikely with Apple ProRes RAW, and extremely unlikely with Apple ProRes RAW HQ.

ProRes RAW is designed to maintain constant quality and pristine image fidelity for all frames. As a result, images with greater detail or sensor noise are encoded at higher data rates and produce larger file sizes.

ProRes RAW data rates benefit from encoding Bayer pattern images that consist of only one sample value per photosite. Apple ProRes RAW data rates generally fall between those of Apple ProRes 422 and Apple ProRes 422 HQ, and Apple ProRes RAW HQ data rates generally fall between those of Apple ProRes 422 HQ and Apple ProRes 4444.

NOTE: What this means is that, rather than creating RGB images in camera, which triples the amount of image data, the raw image is processed later, in the application. This still provides the highest image quality, but decreases the size of the native raw files.
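A back-of-the-envelope look at the difference, using a UHD frame as an example:

```python
width, height = 3840, 2160           # UHD frame

bayer_samples = width * height       # raw: 1 sample per photosite
rgb_samples   = width * height * 3   # debayered: R, G and B per pixel

print(f"Bayer samples: {bayer_samples:,}")    # 8,294,400
print(f"RGB samples:   {rgb_samples:,}")      # 24,883,200
print(f"RGB carries {rgb_samples // bayer_samples}x the samples")
```

Encoding one sample per photosite instead of three per pixel is why ProRes RAW data rates can sit below ProRes 422 HQ while preserving everything the sensor captured.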

Like the existing ProRes codec family, ProRes RAW is designed for speed. Raw video playback requires not only decoding the video bitstream but also demosaicing the decoded raw image. Compared to other raw video formats supported by Final Cut Pro, ProRes RAW offers superior performance in both playback and rendering.

EXTRA CREDIT

Here’s the link to Apple’s ProRes RAW white paper, which contains much more information on this format.


… for Codecs & Media

Tip #782: Compare Proxy Files to Source Media

Larry Jordan – LarryJordan.com

Proxy files are optimized for editing and small file size.


Here’s a table that compares proxy file storage and bandwidth requirements to source media.

Keep in mind that unlike H.264, proxy files are optimized for editing. H.264 is often difficult to edit on older or slower systems.

Data Rates and Storage Needs for UHD Media

4K Media             Frame Rate   Bandwidth      Store 1 Hour
H.264                30 fps       18.75 MB/sec   67.5 GB
ProRes Proxy         30 fps       22.75 MB/sec   82 GB
ProRes 422           30 fps       73.6 MB/sec    265 GB
BMD RAW 3:1          30 fps       ~175 MB/sec    630 GB
R3D Redcode 4K 4:1   30 fps       215 MB/sec     774 GB

Notes:

  • H.264 specs based on JVC specs
  • ProRes specs from Apple ProRes White Paper
  • Blackmagic specs interpolated from Blackmagic Design website
  • Red redcode specs from Red website
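If you want to extend this table to other codecs, the "Store 1 Hour" column follows directly from bandwidth: MB/sec × 3,600 seconds, then ÷ 1,000 to convert megabytes to gigabytes. A quick sketch of the arithmetic:

```python
rates_mb_per_sec = {
    "H.264":              18.75,
    "ProRes Proxy":       22.75,
    "ProRes 422":         73.6,
    "BMD RAW 3:1":        175.0,   # approximate
    "R3D Redcode 4K 4:1": 215.0,
}

for codec, mb_sec in rates_mb_per_sec.items():
    gb_per_hour = mb_sec * 3600 / 1000   # seconds per hour, MB -> GB
    print(f"{codec:20s} {gb_per_hour:6.1f} GB/hour")
```

Running this reproduces the storage column above: 18.75 MB/sec works out to 67.5 GB per hour, and 215 MB/sec to 774 GB.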

… for Adobe Premiere Pro CC

Tip #739: Premiere: No Support for FireWire DV Capture

Larry Jordan – LarryJordan.com

FireWire capture of DV media is no longer supported on Macs.


This tip first appeared on Adobe’s support page. While this won’t affect a lot of folks, it is still worth knowing.

Starting with macOS 10.15 Catalina, Premiere Pro, Audition, and Adobe Media Encoder no longer support the capture of DV and HDV over FireWire.

  • This change does not impact other forms of tape capture.
  • You can still edit DV/HDV files that have previously been captured.
  • DV/HDV capture is still available with Premiere Pro on Windows.

WORKAROUND

If you need access to DV/HDV ingest you can:

  • On macOS: Use Premiere Pro 12.x and 13.x on macOS 10.13.6 (High Sierra) or 10.14 (Mojave)
  • On Windows: Continue to use the latest versions of Premiere Pro with no impact.

… for Codecs & Media

Tip #744: What is Interlacing?

Larry Jordan – LarryJordan.com

Interlacing was needed due to limited bandwidth.

Interlace artifact – thin, dark, horizontal lines radiating off moving objects.


Even in today’s world of 4K and HDR, many HD productions still need to distribute interlaced footage. So, what is interlacing?

Interlacing is the process of time-shifting every other line of video so that the total bandwidth requirements for a video stream are, effectively, cut in half.

For example, in HD, all the even-numbered lines are displayed first; then, half a frame later, all the odd-numbered lines are displayed. Each of these sets of lines is called a "field." The field rate is double the frame rate.

NOTE: HD is upper field first, DV (PAL or NTSC) is lower field first.
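Here's a minimal NumPy sketch of how a frame divides into fields; it simply separates the even- and odd-numbered lines:

```python
import numpy as np

def split_fields(frame, upper_field_first=True):
    """Split a frame into its two fields. HD is upper field first;
    DV (PAL or NTSC) is lower field first."""
    upper = frame[0::2]   # even-numbered lines
    lower = frame[1::2]   # odd-numbered lines
    return (upper, lower) if upper_field_first else (lower, upper)

# A 1080-line frame yields two 540-line fields, displayed half a
# frame apart -- so the field rate is double the frame rate.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
first, second = split_fields(frame)
print(first.shape, second.shape)   # (540, 1920, 3) (540, 1920, 3)
```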

In the old days of NTSC and PAL this was done because the broadcast infrastructure couldn’t handle complete frames.

As broadcasters converted to HD at the end of the last century, they needed to make a choice, again due to limited bandwidth: they could broadcast either a single 720 progressive frame or an interlaced 1080 frame.

Some networks chose 720p because they were heavily into sports, which looks best in a progressive frame. Others chose 1080i because their shows principally originated on film, which minimizes the interlacing artifacts illustrated in the screen shot.

As we move past HD into 4K, the bandwidth limitations fade away, which means that all frames are progressive.

EXTRA CREDIT

It is easy to shoot progressive and convert it to interlaced, with no significant loss in image quality. It is far harder to convert interlaced footage to progressive, and quality always suffers. Also, the web requires progressive media, because interlacing looks terrible there.

For this reason, it is best to shoot progressive, then convert to interlaced as needed for distribution.