… for Codecs & Media

Tip #535: How to Convert 32-bit Media

Media based on 32-bit codecs needs to be converted before it can be played on macOS Catalina or later.

Kyno allows conversion of older media even when running on Catalina.

Since the release of macOS Catalina (10.15), older media based on 32-bit codecs no longer plays. If you were able to convert all your media before updating, great. If not, read this.

There’s nothing you can do in Catalina that will allow you to play older media based on 32-bit codecs. Catalina doesn’t support 32-bit anything. However, you are not totally out of luck.

If you have older media, you have two options:

  1. Transfer the media to an older system – or borrow or rent one – and convert it there.
  2. Use a 3rd-party utility – Kyno – which can find and convert older media, even while running on a Catalina system.

Link to Kyno: Kyno.software.




Tip #539: What is a Sidecar File?

Larry Jordan – LarryJordan.com

Sidecar files track data that the main image file can’t.

Image courtesy of Pexels.com.


Sidecar files are separate computer files – often, though not always, XML – that store data (frequently metadata) which is not supported by the format of a source file. There may be one or more sidecar files for each source file.

In most cases the relationship between the source file and the sidecar file is based on the file name; sidecar files have the same base name as the source file, but with a different extension. The problem with this system is that most operating systems and file managers have no knowledge of these relationships, and might allow the user to rename or move one of the files, thereby breaking the relationship.

Examples include:

  • XMP. Stores image metadata.
  • THM. Stores digital camera thumbnails.
  • EXIF. Stores camera data to keep it from becoming lost when editing JPG images.
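
To make the file-name relationship concrete, here's a minimal Python sketch that looks for sidecars next to a source file. The extension list and file names are illustrative assumptions, not a standard:

from pathlib import Path

# Illustrative sidecar extensions; real workflows may use others.
SIDECAR_EXTENSIONS = (".xmp", ".thm")

def find_sidecars(source: Path) -> list[Path]:
    # A sidecar shares the source file's folder and base name,
    # but carries a different extension.
    return [
        source.with_suffix(ext)
        for ext in SIDECAR_EXTENSIONS
        if source.with_suffix(ext).exists()
    ]

# Example: a hypothetical photo with an XMP metadata sidecar alongside it.
print(find_sidecars(Path("vacation_001.jpg")))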

EXTRA CREDIT

Rather than storing data in separate sidecar files, it can be stored as part of the main file. This is especially true for container files, which are designed to hold several types of data. Alternatively, multiple files can be combined into an archive file, which keeps them together but requires software to process the archive rather than the individual files. This is a generic solution, since archive files can contain arbitrary files from the file system.

Container formats include QuickTime, MXF and IFF.




Tip #541: What is Bit Depth?

Larry Jordan – LarryJordan.com

The number of steps a bit depth provides is always a power of 2.

An illustration of 8-bit vs. 10-bit depth. (8-bit is on top).


Bit depth determines the number of steps between the minimum and maximum of a value. The bit depth number (8, 16, 24) is actually an exponent: the number of steps is 2 raised to that power.

  • A bit depth of 4 = 2^4 = 16 steps
  • A bit depth of 8 = 2^8 = 256 steps
  • A bit depth of 10 = 2^10 = 1,024 steps
  • A bit depth of 16 = 2^16 = 65,536 steps
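
The step counts are just powers of two; a quick Python sketch reproduces the list above:

# Steps between minimum and maximum = 2 ** bit_depth.
for bits in (4, 8, 10, 16):
    print(f"{bits}-bit = 2^{bits} = {2 ** bits:,} steps")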

In the screen shot, the top row represents an image with a bit depth of 8; the lower row represents an image with a bit depth of 10.

NOTE: These are illustrations; actual bit depth variations don’t look quite this bad.

Higher bit depths improve image quality in color grading, gradients and anywhere smooth shading from one value to another is important.

EXTRA CREDIT

In audio, bit depth determines the dynamic range: the amount of variation in audio levels between soft and loud. Bit depth is only meaningful in reference to a PCM digital signal (e.g. WAV or AIFF). Non-PCM formats, such as lossy compression formats (e.g. MP3), do not have associated bit depths.
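
As a rough illustration, PCM dynamic range follows the standard 20·log10(2^bits) rule – about 6 dB per bit:

import math

# Dynamic range of PCM audio ≈ 20 * log10(2 ** bits), i.e. ~6.02 dB per bit.
for bits in (16, 24):
    print(f"{bits}-bit PCM ≈ {20 * math.log10(2 ** bits):.0f} dB dynamic range")

This prints roughly 96 dB for 16-bit and 144 dB for 24-bit audio.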




Tip #508: Pick the Best Audio Format for Editing

Larry Jordan – LarryJordan.com

Choose AIF or WAV audio files. File sizes are larger, but the quality is worth it.

A typical monaural audio waveform of human speech.


This article, written by Charles Yeager, first appeared in PremiumBeat.com. This is a summary.

When using various audio files in your video edits, such as music tracks and sound effects, does the audio file type really make a difference? (Spoiler: yes, it does.) But the real question is: why are there so many different audio file formats, and what is the purpose of each one? Let’s break that down and, in so doing, determine the best audio file formats to use when editing video.

There are three principal audio groups:

  • Uncompressed file formats: .WAV, .AIFF
  • Compressed Lossless file formats: .FLAC, .ALAC (Apple Lossless)
  • Compressed Lossy file formats: .MP3, .AAC, .WMA, .OGG

UNCOMPRESSED

Uncompressed audio formats are the equivalent of RAW video formats. They allow for a wide range of audio bit depths and sample rates, which results in better audio quality and covers the full frequency range the human ear can hear.

Uncompressed audio files are typically easier to work with in audio and video editors because they require less processing to play back. And since uncompressed files contain more data, you’ll get better results when you’re manipulating the audio in post with various effects.

COMPRESSED LOSSLESS

The name “compressed lossless” may sound like a contradiction. However, the compression isn’t occurring in a way that degrades the audio itself. Think of it almost like ZIP-compressing a music file, then unzipping it during playback.

Compressed lossless audio files can be anywhere from 1/2 to 1/3 the size of uncompressed audio files – or even smaller – while the audio quality remains lossless, enabling full-frequency playback.

The drawbacks of compressed lossless files are that they are the least supported (compared to uncompressed and compressed lossy). They also require a little more computing power to play back, because they need decoding.

COMPRESSED LOSSY

Compressed lossy audio formats are likely the most common audio files you use when listening to music. This is because compressed lossy audio files have the most support among portable devices, and they have the smallest file sizes – as little as 1/10 the size of a WAV or AIF file.

Compressed lossy audio files are ideal for streaming online.

However, all that compression comes at a cost. The drawback is that the audio has a limited frequency range and noticeable audio artifacts when compared to a lossless format. Another drawback is that you have less range in post when it comes to editing and audio manipulation.
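
To put the three groups side by side, here's a small Python sketch using the rough ratios quoted in this tip – assumptions for illustration, not measurements:

# Rough size comparison using the ratios above: lossless ≈ 1/2 to 1/3,
# lossy ≈ 1/10 the size of uncompressed audio.
uncompressed_mb = 10.0  # e.g., about 1 minute of 16-bit/44.1 kHz stereo WAV

print(f"Uncompressed (WAV/AIFF): {uncompressed_mb:.0f} MB")
print(f"Lossless (FLAC/ALAC):    {uncompressed_mb / 3:.1f}-{uncompressed_mb / 2:.1f} MB")
print(f"Lossy (MP3/AAC):         {uncompressed_mb / 10:.0f} MB")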

WHICH TO USE FOR AUDIO EDITING?

WAV or AIF.




Tip #513: How Changing Frame Rate Affects File Size

Larry Jordan – LarryJordan.com

Faster frame rates more than double file size.

As frame rates increase, file storage needs also increase – dramatically.


I want to look at the effect increasing video frame rates has on storage capacity and bandwidth.

NOTE: In this example, I’m using Apple ProRes as a measurement codec. Other codecs will generate different numbers, but the overall results are the same. Here’s a white paper from Apple with all the source numbers.

Regardless of frame size, as frame rates increase, storage needs and bandwidth also increase. If we set the storage needs of 24 fps video (regardless of frame size) to 100%, then:

  • 25 fps video = 104% of 24 fps capacity and bandwidth
  • 30 fps video = 125% of 24 fps capacity and bandwidth
  • 50 fps video = 208% of 24 fps capacity and bandwidth
  • 60 fps video = 250% of 24 fps capacity and bandwidth
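
The percentages are simply each frame rate divided by 24; a one-line Python check confirms the list above:

# Storage and bandwidth scale linearly with frame rate (24 fps = 100%).
for fps in (25, 30, 50, 60):
    print(f"{fps} fps = {fps / 24:.0%} of 24 fps storage and bandwidth")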

Just as capacity increases by these amounts, so does bandwidth. Higher frame rates require bigger and faster storage.

EXTRA CREDIT

Here’s a link to my website to learn more.




Tip #514: The Brave New World of 8K Media

Larry Jordan – LarryJordan.com

8K files require vast storage with super-fast bandwidth.

File storage requirements as frame size increases for ProRes 422 and 4444.


Technology continues its relentless advance and we are hearing the drumbeats for 8K media. Editing 4K takes a lot of computer horsepower. Editing 8K requires 4 TIMES more than 4K! Which is why Apple is promoting the new Mac Pro for use with 8K workflows.

I don’t minimize the need for a powerful CPU or the potential of the new Mac Pro when editing frame sizes this huge. However, important as the computer is in editing media, the speed and size of your storage are even MORE critical.

Let’s start by looking at storage requirements for different frame sizes of media.

NOTE: For this example, I’m using ProRes 422 and 4444 because Apple has done a great job documenting the technical requirements of these codecs. Other codecs will have different numbers, but the size and bandwidth relationships will be similar.

More specifically, the three frame sizes in my chart are:

  • 1080/30 HD. 30 fps, 1920 x 1080 pixels
  • UHD/30. 30 fps, 3840 x 2160 pixels
  • 8K/30. 30 fps, 8192 x 4320 pixels

As the screen shot illustrates, an hour of 8K media takes 1.2 TB for ProRes 422 and 2.5 TB for ProRes 4444! These amounts require totally rethinking the capacity of our storage – and remember, this does not include typical work or cache files, many of which will also be 8K.
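
Those per-hour capacities translate directly into sustained bandwidth. Here's a quick sketch using the 1.2 TB and 2.5 TB figures above (assuming decimal units, 1 TB = 10^12 bytes):

# 1 TB/hour = 1e12 bytes / 3600 seconds ≈ 278 MB/s sustained.
for codec, tb_per_hour in (("ProRes 422", 1.2), ("ProRes 4444", 2.5)):
    mb_per_sec = tb_per_hour * 1e12 / 3600 / 1e6
    print(f"8K/30 {codec}: {tb_per_hour} TB/hour ≈ {mb_per_sec:.0f} MB/s")

That works out to roughly 333 MB/s for ProRes 422 and 694 MB/s for ProRes 4444 – sustained, for a single stream.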

EXTRA CREDIT

Here’s a link to my website to learn more, including the bandwidth needs of these super-huge frame sizes.




Tip #499: What is Pixel Aspect Ratio?

Larry Jordan – LarryJordan.com

Pixel aspect ratios were used in the past to compensate for limited bandwidth.

An exaggerated example of non-square pixels used in a variety of SD video.


Pixel aspect ratios determine the rectangular shape of a video pixel. In the early days of digital video, bandwidth, storage and resolution were all very limited. Also, in those days, almost all digital video was displayed on a 4:3 aspect ratio screen.

This meant that the image was 4 units wide by 3 units high. (The reason I use the word “units” is that then, like now, monitors came in different sizes, but all had the same resolution regardless of size.)

However, standard definition video, though displayed as a 4×3 image, was composed of 720 pixels horizontally by 480 pixels vertically – which is not a 4×3 ratio. To get everything to work out properly, each pixel, instead of being square, was tall and thin: 0.9 units wide by 1.0 unit tall. (The screen shot shows an exaggerated example of this difference in width.)

As digital video started to encompass wide screen, rather than add more pixels – which was technically challenging – engineers changed the shape of the pixel to be fat (a pixel aspect ratio of 1.2, i.e. each pixel 1.2 units wide by 1.0 unit tall). This provided wide screen support (16×9 aspect ratio images) without increasing pixel resolution or, more importantly, file size and bandwidth requirements.
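
Here's the arithmetic as a short Python sketch, using this tip's rounded pixel aspect ratios (broadcast SD actually used 0.9091 and 1.2121, so the results are approximate):

# Displayed width = stored width x pixel aspect ratio.
stored_w, stored_h = 720, 480
for label, par in (("4:3 SD", 0.9), ("16:9 SD", 1.2)):
    displayed_w = stored_w * par
    print(f"{label}: {stored_w}x{stored_h} stored -> "
          f"{displayed_w:.0f}x{stored_h} displayed "
          f"(aspect ratio {displayed_w / stored_h:.2f})")

The displayed aspect ratios come out near 1.33 (4:3) and 1.78 (16:9), even though the stored pixel grid never changes.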

These non-square pixels continued for a while into HD video, with both HDV and some formats of P2 using non-square pixels.

However, as storage capacity and bandwidth caught up with the need for more pixels in larger frame sizes, pixels evolved into the square pixels virtually every digital format uses today. This greatly simplified all manner of pixel manipulation.

However, most compression software has settings that allow it to work with legacy formats from the days when pixels weren’t square.




Tip #503: Why Timecode Starts at 01:00:00:00

Larry Jordan – LarryJordan.com

It all comes down to finding what you seek.

A sample timecode setting displayed as: Hours, Minutes, Seconds and Frames.


Back in the old days of video tape, all programs originating in North America (and, perhaps, elsewhere) started at timecode hour 01 – a tradition that often continues in broadcast today, mostly out of habit. Why?

NOTE: Programs originating in Europe, I discovered many years ago, tended to start at hour 10. This made it easy to quickly see which part of the world a program originated from.

Back in the days of large quad videotape machines, each of which could easily cost a quarter-of-a-million dollars, the tape reels were 15-inches in diameter and weighed up to 30 pounds. The tape flew through the system at 15 inches per second – all to create a standard-definition image!

Setting up a quad tape system for playback required tweaking each of the four playback heads on the machine and adjusting them for alignment, color phase, saturation and brightness. (It was these machines that first taught me how to read video scopes.)

The problem was that getting this much iron moving fast enough to reliably play a picture took time. Eight seconds of time.

So, the standard setup for each tape required recording:

  • 60 seconds of bars and tone (to set video and audio levels)
  • 10 seconds of black
  • 10 seconds of slate
  • 10 seconds of countdown

If program timecode started at 0:00:00:00, the setup material would have to start at 23:58:30:00. Since 23 hours comes after 0 hours, sending the tape machine to seek the program's starting timecode – an automated feature used all the time in the high-speed, high-pressure turnaround of live news – would make the deck scan forward toward the end of the tape.

To prevent this, all programs started at 1 hour (or 10 hours) with setup starting at 00:58:30:00.
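
The arithmetic is easy to verify; here's a minimal Python sketch (whole seconds only, ignoring drop-frame):

def tc_minus_seconds(hh, mm, ss, seconds):
    # Subtract seconds from a timecode, wrapping at 24 hours.
    total = (hh * 3600 + mm * 60 + ss - seconds) % (24 * 3600)
    return f"{total // 3600:02d}:{total % 3600 // 60:02d}:{total % 60:02d}:00"

setup = 60 + 10 + 10 + 10  # bars/tone + black + slate + countdown = 90 seconds

print(tc_minus_seconds(0, 0, 0, setup))  # 23:58:30:00 - wraps before hour 0
print(tc_minus_seconds(1, 0, 0, setup))  # 00:58:30:00 - safely after hour 0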

And now you know.




Tip #505: Why HDV Media is a Pain in the Neck

Larry Jordan – LarryJordan.com

Interlacing, non-square pixels, and deep compression make this a challenging media format.


HDV (short for high-definition DV) media was a highly popular, but deeply flawed, video format in the early 2000s.

DV (Digital Video) ushered in the wide acceptance of portable video cameras (though still standard definition image sizes) and drove the adoption of computer-based video editing.

NOTE: While EMC and Avid led the way in computerized media editing, it was Apple Final Cut Pro’s release in 1999 that converted a technology into a massive consumer force.

HDV was originally developed by JVC and supported by Sony, Canon and Sharp. First released in 2003, it was designed as an affordable recording format for high-definition video.

There were, however, three big problems with the format:

  • It was interlaced
  • It used non-square pixels
  • It was highly compressed

If the HDV media was headed to broadcast or for viewing on a TV set, interlacing was no problem. Both distribution technologies fully supported interlacing.

But, if the video was posted to the web, ugly horizontal black lines radiated out from all moving objects. The only way to get rid of them was to deinterlace the media, which, in most cases, resulted in cutting the vertical resolution in half.

In the late 2000s, Sony and others released progressive HDV recording, but the damage to users’ perception of the format was done.

NOTE: 1080i HDV contained 3 times more pixels per field than SD, yet was compressed at the same data rate. (In interlaced media, two fields make a frame.)

The non-square pixels meant that 1080-line images were recorded 1440 pixels wide, with the fatter pixels stretching to fill a full 1920-pixel line. In other words, HDV pixels were short and fat, not square.
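
The stretch is exactly 4:3 horizontally; a tiny sketch of the math:

# HDV stores 1440 pixels per line, displayed across 1920 - a 4:3 stretch.
stored, displayed = 1440, 1920
par = displayed / stored
print(f"HDV pixel aspect ratio: {par:.4f}")            # 1.3333
print(f"{stored} stored pixels x {par:.4f} = {stored * par:.0f} displayed")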

As fully progressive cameras became popular – especially DSLRs, with their higher-quality images – HDV gradually faded in popularity. But, even today, we are dealing with legacy HDV media and the image challenges it presents.




Tip #456: Uncompressed Audio File Sizes

Larry Jordan – LarryJordan.com

Audio file sizes increase with bit-depth and sample rate.


This article first appeared in Sweetwater.com. This is a summary.

Here’s a guide to how much disk space uncompressed audio recording requires at various resolutions (sizes are rounded):

Bit Depth / Sample Rate   1 Minute of Mono   1 Minute of Stereo
16-bit / 44.1 kHz         5 MB               10 MB
16-bit / 48 kHz           5.5 MB             11 MB
24-bit / 44.1 kHz         7.5 MB             15 MB
24-bit / 48 kHz           8.2 MB             16.4 MB
16-bit / 88.2 kHz         10 MB              20 MB
16-bit / 96 kHz           11 MB              22 MB
24-bit / 88.2 kHz         15 MB              30 MB
24-bit / 96 kHz           16.4 MB            32.8 MB
16-bit / 176.4 kHz        20 MB              40 MB
16-bit / 192 kHz          22 MB              44 MB
24-bit / 176.4 kHz        30 MB              60 MB
24-bit / 192 kHz          32.8 MB            65.6 MB
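
These figures follow directly from the PCM math. A minimal Python sketch to reproduce them (the table appears to use binary megabytes, 1 MB = 1,048,576 bytes, with loose rounding, so computed values differ slightly):

# Uncompressed PCM size = sample rate x (bit depth / 8) x channels x seconds.
def pcm_mb(bits, sample_rate, channels, seconds=60):
    return bits / 8 * sample_rate * channels * seconds / 2**20

print(f"16-bit / 44.1 kHz mono:  {pcm_mb(16, 44_100, 1):.1f} MB")  # ~5 MB
print(f"24-bit / 96 kHz stereo:  {pcm_mb(24, 96_000, 2):.1f} MB")  # ~33 MB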
