… for Codecs & Media

Tip #1355: Create a Poster Frame for iPhone Video

Larry Jordan – LarryJordan.com

The iPhone displays the first frame of your video as the poster frame.


This tip was suggested by Darcy Peters, who discovered a very cool way to add poster frames for iPhone movies.

Larry, you’ve explained how to add poster frames to video (link). However, this doesn’t translate to storing said videos on your iPhone, for example. Those files arbitrarily show the first frame of the video.

However, I discovered a very cool workaround:

Export a single frame of the video that I want to appear as the image icon on my iPhone. Then insert that frame as the first frame (single frame) of the video. Because it’s a single frame at the start of the video, the viewer will never notice it, but it works!



… for Codecs & Media

Tip #1356: Fixing Unix Executable Files

Larry Jordan – LarryJordan.com

Most of the time, simply adding the correct file extension fixes this problem.

An example of files on a server missing file extensions.


Most of the time, your images, media and documents open when you need them on a Mac. However, if you discover your files are suddenly unopenable “Unix executables,” here’s how to fix it.

In most cases, a Unix executable is a file stored on a server without a file extension. Most of the time, simply adding the correct file extension fixes this problem.

NOTE: Files stored on a Mac use other indicators to track which application created the file. However, file servers don’t use the Mac operating system and require file extensions to properly store and access files.

The trick is figuring out the right extension. Generally, I try to find a similar file created around the same time. Select a file that works and press Cmd+I. In the Get Info window that appears, look in the Name & Extension field to determine the correct extension to use.

For example, most of my early (1995 – 2010) digital videos were saved as QuickTime movies. Simply adding .MOV as a file extension solved the problem.

EXTRA CREDIT

I’ve also run into problems with older image files and word processing documents. Adding file extensions fixed the problem with these, too.
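If you have a lot of extension-less files, you can often identify them programmatically from their first few bytes ("magic numbers"). This is a minimal Python sketch, not a complete type detector; the signature list covers only the formats mentioned in this tip, and the QuickTime check assumes the file begins with a standard atom header (type stored at bytes 4-8):

```python
# A few common file signatures. Each entry is (magic bytes, byte offset,
# likely extension). QuickTime .mov files store their first atom type
# (ftyp, moov, mdat) at bytes 4-8 rather than at the start of the file.
SIGNATURES = [
    (b"\x89PNG\r\n\x1a\n", 0, ".png"),
    (b"\xff\xd8\xff",      0, ".jpg"),
    (b"II*\x00",           0, ".tif"),  # little-endian TIFF
    (b"MM\x00*",           0, ".tif"),  # big-endian TIFF
    (b"ftyp",              4, ".mov"),  # also matches MP4-family files
    (b"moov",              4, ".mov"),
    (b"mdat",              4, ".mov"),
]

def guess_extension(head: bytes):
    """Guess an extension from the first ~16 bytes of a file, or None."""
    for magic, offset, ext in SIGNATURES:
        if head[offset:offset + len(magic)] == magic:
            return ext
    return None
```

To use it, read the first 16 bytes of the mystery file (`open(path, "rb").read(16)`) and pass them in; the macOS `file` command does essentially the same check with a much larger signature database.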



… for Codecs & Media

Tip #1332: Spectra: High-Quality, Cloud Video

Larry Jordan – LarryJordan.com

Spectra: High-performance, streaming virtual media encoder.

The Spectra logo.


Last week, Streambox introduced Spectra, a high-performance streaming virtual media encoder.

Spectra can simultaneously deliver high-quality, low-latency streams to multiple remote collaborators anywhere in the world. This means that editors accessing media in the Cloud are no longer limited to low-res proxy images.

For example, the editor or colorist can now view a live, color-accurate video/audio stream in the edit suite (or even at home), which provides the same level of confidence they enjoy when working on traditional, in-house systems.

Spectra also works as a plug-in for Avid Media Composer systems. A free trial is available.

Here’s the link for more information.



… for Codecs & Media

Tip #1336: Export Stills – Which Codec?

Larry Jordan – LarryJordan.com

Not all still formats are the same. When in doubt, use PNG or TIFF.

Still image export choices in Adobe Premiere.


All NLEs support exporting a still frame from a project. But, given all the choices, which format should you choose?

Premiere provides six options:

  • DPX
  • JPEG
  • OpenEXR
  • PNG
  • Targa
  • TIFF

Final Cut offers:

  • DPX
  • JPEG
  • OpenEXR
  • Photoshop file
  • PNG
  • TIFF

DPX, OpenEXR and Targa files are specialized image formats that most applications can’t open. Only use these if you know that the app you are moving the exported still into supports them.

Photoshop, PNG and TIFF are all lossless formats; any compression they apply discards no image data. These provide the highest-quality export and are best used when moving stills from one high-quality application to another. All three formats support images with alpha channels, though TIFF or Photoshop would be preferred because not all apps support alpha channels in PNG files.

JPEG is a compressed format, best used when sending images to the web.

EXTRA CREDIT

Personally, I export PNGs as most of my stills require extra editing in Photoshop before the final compression into JPEG to post to the web.



… for Codecs & Media

Tip #1337: What Does “Low Resolution Proxy” Mean?

Larry Jordan – LarryJordan.com

The small file size of proxy files is due to deeper compression and reduced frame size.

An example of four different proxy frame sizes: Full, 1/2, 1/4 & 1/8.


We often talk about proxy files being “lower resolution.” But what does that actually mean?

Proxy files are designed to provide reasonable images for editing, while taking less space to store and fewer computing resources to display. This is accomplished using deeper compression settings, changing video codecs (for example, using H.264), and reducing image resolution.

NOTE: Audio is always stored at the highest quality, even in a proxy file.

For a long time, I would say the words “lower resolution,” but not understand what they meant. It wasn’t till I created a graphic for one of my webinars that I understood what was going on.

A “lower resolution” proxy file is a file created using a smaller frame size than the original image. For example, using a 1920 x 1080 pixel frame size for the source video:

  • 1/2 resolution = a frame size of 960 x 540 pixels
  • 1/4 resolution = a frame size of 480 x 270 pixels
  • 1/8 resolution = a frame size of 240 x 135 pixels

Obviously, the smaller the frame size, the smaller the proxy file, but the less image detail is displayed.
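The arithmetic behind that list is simple enough to put in a one-line helper. Note that the fraction applies to each linear dimension, so a "1/2 resolution" proxy actually contains only 1/4 as many pixels as the original frame:

```python
def proxy_size(width, height, fraction):
    """Frame size of a proxy at the given linear fraction (1/2, 1/4, 1/8).

    The fraction scales each dimension, so pixel count shrinks by
    fraction squared: a 1/2-resolution proxy has 1/4 the pixels.
    """
    return (round(width * fraction), round(height * fraction))
```

For a 1920 x 1080 source, `proxy_size(1920, 1080, 1/2)` returns (960, 540), matching the table above.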

Most of the time, I use 1/2 frame size for my proxy files. However, if I’m doing multicam work, where the on-screen images are small to begin with, I’ll use 1/4 frame size. This allows me to play more cameras at the same time without dropping frames.



… for Codecs & Media

Tip #1322: What is Hardware-Accelerated Encoding?

Larry Jordan – LarryJordan.com

Hardware compression sets the standard for both speed and quality.

The Effects panel in After Effects, showing which effects are accelerated.


In the past, when we needed to render or export a file, the software in the NLE did all the work. This worked fine, but took a long time. As video production became more of a mass market, there was incentive for hardware developers to incorporate video compression circuits onto the CPU.

This was MUCH faster than software compression, but, for video pros, still not fast enough.

Now, video compression is moving from the CPU to the GPU. For example, according to Adobe, Adobe Premiere Pro and Adobe Media Encoder can take advantage of available GPUs on your system to distribute the processing load between the CPU and the GPU to get better performance. Currently, most of the processing is done by CPU and GPU assists in processing certain tasks and features.

The Mercury Playback Engine (GPU Accelerated) renderer is used to render GPU accelerated effects and features.
Here is the list of GPU accelerated effects in Adobe Premiere Pro. To identify the GPU accelerated effects, navigate to the Effects panel and look for the Accelerated Effects icon.

Apart from processing these effects, the Mercury Playback Engine (GPU Accelerated) is used for image processing, resizes, color space conversions, recoloring and more. It is also used for timeline playback/scrubbing and full-screen playback using Mercury Transmit.

EXTRA CREDIT

A wild card in hardware acceleration is the new Apple silicon chips. In the past, hardware acceleration for both H.264 and HEVC was handled by the T2 chip. With the move to Apple silicon, all compression is now done using the M1 chip.

From a quality point of view, my studies show that for most compression, hardware acceleration looks the same as software compression, yet processes files much more quickly.



… for Codecs & Media

Tip #1324: Test Compressed Image Quality – FAST!

Larry Jordan – LarryJordan.com

The Difference blend mode is a fast way to see how much your compression settings are damaging your images.

Two frames – one compressed and the source – compared using the Difference blend mode.


The most important concept you need to understand about video compression is that compressing a file ALWAYS removes data. Always. This means that the more you compress a file to reduce its file size, the more data is removed.

Once removed, you can’t put this data back. This is the reason you don’t want to re-compress an already compressed file. Another important note is that different movies, codecs and bit rates yield different results.

There’s a very fast way to compare the quality of a source file with the compressed image.

The process is simple: Using Photoshop, Final Cut Pro, or Premiere Pro, compare a frame from the source file with a frame from the compressed file using the Difference blend mode.

Perfectly matched frames are solid black. Frames with lots of differences – such as the screen shot – show lots of ghosting, especially around edges. This technique is a good way to test different compression technology and see which one works the best for your projects.
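The math behind the Difference blend mode is just a per-pixel absolute difference. This is a small NumPy sketch of that math, not the implementation any of these apps actually use; it assumes both frames are 8-bit arrays of identical dimensions:

```python
import numpy as np

def difference_blend(frame_a, frame_b):
    """Per-pixel absolute difference of two 8-bit frames.

    Widen to int16 first so the subtraction can't wrap around
    (uint8 arithmetic would turn 10 - 20 into 246).
    """
    a = frame_a.astype(np.int16)
    b = frame_b.astype(np.int16)
    return np.abs(a - b).astype(np.uint8)
```

Identical frames produce an all-zero (solid black) result; compression damage shows up as non-zero pixels, brightest where the compressed frame strayed furthest from the source.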

Here’s an article that explains this technique in detail and provides illustrations of the results from a variety of compression settings and software.



… for Codecs & Media

Tip #1325: Why Is an Audio Fade Called +3 dB?

Larry Jordan – LarryJordan.com

Audio is a Strange Beast

Four different fade shapes available in Apple Final Cut Pro.


Unlike video, audio levels are logarithmic. For example, whenever the audio level increases (or decreases) by about 10 dB, the perceived volume doubles (or is cut in half). These log values also have an impact on cross-fades between clips.

A +3 dB transition adds a 3 dB boost to both clips at the middle of a cross-fade. If the software did not add this “bump,” the cross-fade would sound fainter in the middle of the transition than at either end.

When fading to or from black, a straight-line (linear) transition is best. When cross-fading between two clips, both of which have continuous audio, a +3 dB transition will sound better.
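A common way apps implement that "+3 dB bump" is an equal-power fade curve. The exact curve shape varies by application; this Python sketch uses the standard sine/cosine equal-power pair as an illustration, not any specific NLE's implementation:

```python
import math

def crossfade_gains(t):
    """Equal-power ("+3 dB") fade gains at position t in [0, 1].

    Returns (outgoing_gain, incoming_gain). At any t, the two gains
    satisfy out**2 + in**2 == 1, so total acoustic POWER stays constant
    through the cross-fade.
    """
    fade_out = math.cos(t * math.pi / 2)
    fade_in = math.sin(t * math.pi / 2)
    return fade_out, fade_in

def gain_to_db(gain):
    """Convert a linear amplitude gain to decibels."""
    return 20 * math.log10(gain)
```

At the midpoint (t = 0.5), each clip sits at a gain of about 0.707, which is -3 dB. A straight linear fade would put each clip at 0.5 (-6 dB) there, which is why the equal-power curve is described as adding a +3 dB bump in the middle.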

EXTRA CREDIT

Some software allows you to change the shape of the curve manually. These rules still apply, but manual adjustments allow much greater control over how the transition sounds.

Still, the general rule of audio is: Whatever sounds the best to you IS most likely the best.



… for Codecs & Media

Tip #1300: A Hidden SSD Speed Boost

Larry Jordan – LarryJordan.com

SSDs don’t have seek times or latency. This means MUCH faster storage speeds for multiple apps accessing storage at once.

Samsung T5 SSD speed in isolation (top) and with BMD and AJA both running (bottom).


OK, I admit, I was playing. But I discovered something very intriguing about SSDs. Watch.

As we’ve learned over the last several tips, the speed of a spinning hard drive is limited by seek times and latency (Tip #1287).

The speed of a network is limited by how the devices are connected to it (i.e. 100 Mb vs. 1 Gb vs. 10 Gb Ethernet), the number of users and the connected speed of the server.

But, direct-connected SSDs don’t have these limitations. Instead, speeds are determined by the internal construction of the SSD and its connection protocol (SATA vs. NVMe – and – USB vs. Thunderbolt).

I plan to do this test in more detail in a few weeks, when I get a chance to play with a brand-new, high-performance NVMe SSD.

But, for this quick check, I connected a Samsung T5 SSD to a 2019 Mac mini using Thunderbolt 3. While the Thunderbolt 3 protocol maxes out around 2500 MB/second, the T5 measured 479 MB/s write and 526 MB/s read (see the top values in the screen shot).

HOWEVER, when I ran BOTH AJA System Test and Blackmagic Disk Speed Test at the same time, while the speed for each application dropped, the aggregate speed was FASTER than the speed for the isolated test.

NOTE: In my example, the single app read speed was about 525 MB/s. When both apps were running, the aggregate speed was about 660 MB/s!
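It's worth putting a number on that aggregate gain. A quick helper using the figures from the note above (both are approximate readings, so treat the result as a ballpark):

```python
def aggregate_gain_percent(single_app_mb_s, combined_mb_s):
    """Percent speed-up of combined multi-app throughput over one app.

    Example inputs are the approximate readings from this tip:
    ~525 MB/s with one benchmark running, ~660 MB/s aggregate with two.
    """
    return (combined_mb_s / single_app_mb_s - 1) * 100
```

With the readings above, `aggregate_gain_percent(525, 660)` comes out to roughly a 26% gain just from running two readers at once.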

What this means is that if you have multiple applications reading or writing to SSD storage at the same time – which is typical for many media apps – SSDs provide far less of a slow-down than spinning media because we can access all that storage directly, without waiting for platters to spin and heads to jump into place.

These tests are just preliminary – I’ll have more on this in a few weeks. But I think this is very, VERY interesting!



… for Codecs & Media

Tip #1281: Larger Frame Sizes Protect Projects

Larry Jordan – LarryJordan.com

Frame sizes will continue to increase, here’s how they benefit current projects.


We are in the middle of determining the “optimum” frame size for video projects as frame sizes continue to scale up. New projects are consistently shooting in 4K frame sizes, with cameras pushing up to 8K and beyond.

First, while it could be argued that we can’t actually SEE 4K in most situations, that hasn’t stopped distributors from requesting it. However, even if we are creating HD projects, there is a value in shooting larger frame sizes. Recently, Jason Boone wrote a blog about the benefits of scaling larger frame sizes to fit smaller projects.

  • Reframe a shot. 4K provides so many extra pixels to choose from, you can convert a wide shot into a close-up. However, cutting into the frame won’t change depth of field, so the image won’t look the same as if you had zoomed in.
  • Use the same take multiple times. Using the same take for both wide shots and close-ups makes it seem as though you had two cameras. The drawback is that where the talent is looking won’t change, and neither will the background or depth of field.
  • Create camera moves. Using keyframes you can create movement where there was none in the original shot. However, like moves on a still, elements won’t change position as they would if you used a dolly on set.
  • Stabilize your footage. This is powerful. Stabilization always zooms into a shot. With lots of extra pixels to work with, the image won’t lose detail or sharpness.
  • Adjust the image for graphics. There’s nothing worse than graphics you can’t read. 4K gives us extra pixels for scaling and repositioning.

4K may not be visible to the eye, but it can be a BIG benefit in post. And the same holds true for larger frame sizes yet – provided your storage is fast and large enough to hold it!
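A quick way to think about the reframing headroom: the zoom factor you can apply before the crop starts upscaling past 1:1 source pixels is just the ratio of source size to timeline size. A small sketch (the frame sizes in the test are standard UHD values, not from this tip):

```python
def max_punch_in(src_w, src_h, timeline_w, timeline_h):
    """Largest zoom factor before a reframed crop upscales past 1:1 pixels.

    Taking the min of the two ratios handles sources whose aspect
    ratio differs from the timeline's.
    """
    return min(src_w / timeline_w, src_h / timeline_h)
```

So 4K UHD (3840 x 2160) in an HD timeline gives you a 2x punch-in with no quality loss, and 8K UHD gives you 4x, which is why a "wide shot" can double as a close-up.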

