… for Visual Effects

Tip #450: What Does Sharpening Do?

Larry Jordan – LarryJordan.com

Sharpening adjusts the apparent focus of a clip.

The top is unsharpened, the bottom is significantly sharpened.


Sharpening adjusts the apparent focus of a clip, without actually changing its focus.

Sharpening adjusts the contrast at the edges of objects in an image to improve their apparent focus. What our eye sees as “focus” is actually the sharpness of the edges between a foreground object and the background. If the edges are sharp, our eye considers the image in focus. If not, we consider the image – or that part of the image at least – blurry.

Unsharp Masking (which is the preferred method of sharpening) enhances the contrast between two adjacent edges. Our eye perceives that improved contrast as improved focus, though nothing about the focus of an image has changed.

When using Unsharp Mask, a little goes a long way. A Radius setting between 1.5 and 4 will yield perceptible results without making the image look like bad VHS tape.
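To make the edge-contrast idea concrete, here is a minimal sketch of unsharp masking on a one-dimensional "edge," using NumPy. Real tools blur with a Gaussian whose width corresponds to the Radius setting; a box blur is used here purely for simplicity.

```python
import numpy as np

def unsharp_mask(signal, radius=2, amount=1.0):
    """Sharpen by adding back the difference between the signal and a blurred copy."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)   # box blur, for simplicity
    blurred = np.convolve(signal, kernel, mode="same")
    return signal + amount * (signal - blurred)

# A soft edge: values ramping from dark (0) to bright (1).
edge = np.array([0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1], dtype=float)
sharpened = unsharp_mask(edge, radius=1, amount=1.0)
# The dark side dips below 0 and the bright side overshoots 1 --
# that extra contrast at the edge is what our eye reads as "sharper".
```

Push the `amount` too high and those overshoots become visible halos, which is why a little goes a long way.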



… for Codecs & Media

Tip #453: What is WebM?

WebM is supported by Mozilla Firefox, Opera and Google Chrome.


Developed and owned by Google, WebM is an audiovisual media file format. It is primarily intended as a royalty-free alternative for use in the HTML5 video and audio elements. It has a sister project, WebP, for images. The development of the format is sponsored by Google, and the corresponding software is distributed under a BSD license. There is some dispute, however, over whether WebM is truly royalty-free.

According to Wikipedia, native WebM support by Mozilla Firefox, Opera, and Google Chrome was announced at the 2010 Google I/O conference. Internet Explorer 9 requires third-party WebM software. Safari for macOS, which relied on QuickTime to play web media until Safari 12, still does not have native support for WebM.

VLC media player, MPlayer, K-Multimedia Player and JRiver Media Center have native support for playing WebM files. Android also supports WebM.
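If you are curious what a WebM file looks like on disk: it is a Matroska-based container that opens with the EBML magic number. Here is a sketch of a simple signature check; a thorough check would also parse the DocType element to distinguish WebM from generic Matroska.

```python
WEBM_EBML_MAGIC = b"\x1a\x45\xdf\xa3"   # EBML header that opens Matroska/WebM files

def looks_like_webm(first_bytes: bytes) -> bool:
    """Cheap signature sniff on the first bytes of a file."""
    return first_bytes.startswith(WEBM_EBML_MAGIC)
```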

Here’s a link to learn more.


… for Codecs & Media

Tip #454: More Than You Need to Know – About Codecs

Larry Jordan – LarryJordan.com

20 different codecs – all easy to compare.


I was wandering around Wikipedia and discovered this comparison table of twenty popular media “containers,” their features and related codecs. This is fascinating to explore, simply due to the diversity.

Even if you don’t understand all of this – and I don’t – it is still fun to look at. Why? Because this puts key features of popular codecs all in one place, making them easy to review and compare.

Here’s a link to learn more.


… for Visual Effects

Tip #436: What is a B-spline Curve?

Larry Jordan – LarryJordan.com

B-splines are used to create shapes with no sharp corners.

An example of an open-ended B-spline curve.


B-spline curves (short for Basis spline) are frequently used to create shapes because, unlike Bézier curves, B-splines have no corners.

A B-spline is a combination of flexible bands shaped by a set of points (called control points) to create smooth curves. These functions enable the creation and management of complex shapes and surfaces using only a handful of points. B-spline and Bézier functions are applied extensively in shape optimization.

B-splines can be open (where the ends are not connected, as in this screen shot), or closed.

The shape of the B-spline is determined by moving the nodes, the red dots in this illustration. These act as magnets, attracting the shape of the curve as the nodes move.
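To see why a B-spline cannot form a corner, here is a sketch in Python of one segment of a uniform cubic B-spline. The four basis weights always sum to 1, so the curve is a smooth blend of the nearby control points rather than a path through them. The control points below are invented for illustration.

```python
import numpy as np

def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one segment of a uniform cubic B-spline at t in [0, 1]."""
    b0 = (1 - t) ** 3 / 6
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    b3 = t**3 / 6
    # The weights b0..b3 always sum to 1: the curve is attracted to the
    # control points but, in general, never passes through them.
    return (b0 * np.asarray(p0) + b1 * np.asarray(p1)
            + b2 * np.asarray(p2) + b3 * np.asarray(p3))

# Four control points ("nodes"); moving any one gently reshapes the curve.
pts = [(0, 0), (1, 2), (3, 2), (4, 0)]
mid = cubic_bspline_point(*pts, t=0.5)
```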

Neither Premiere nor Final Cut supports B-splines, but After Effects and Motion do.

EXTRA CREDIT

NURBS (short for “non-uniform rational B-splines”) are an extension of B-splines. The big benefit to NURBS is that they can exist in three dimensions. I’d, ah, show you the equations for these, but they make my brain hurt.


… for Codecs & Media

Tip #414: What is a Container Format?

Larry Jordan – LarryJordan.com

Containers hold stuff – like media.


QuickTime and MXF are often described as media “containers.” But, what is a container?

A “container,” also called a “wrapper,” is a metafile (analogous to a folder) whose specification describes how the different elements inside it are stored. Similar to a Keynote file or a Library in Final Cut Pro X, a container is a file that holds other files, yet still acts like a single file. Unlike a folder, when you double-click a container, it opens the files inside it.

By definition, a container could contain anything, but, generally, they focus on a specific type of data – most often involving media. Containers can hold video, audio, timecode, captions, and metadata that describes the contents of the container.

Popular containers include:

  • Both AIFF and WAV are containers, but only hold audio.
  • TIFF is a container for still images.
  • QuickTime, MXF and MPEG-2 Transport stream are containers for audio, video and related files.

The big benefit to containers is that they are not tied to a single codec; instead, a single container can hold multiple codecs, hiding the underlying technology inside a familiar format.
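A WAV file is a handy, scriptable example of a container: a RIFF wrapper whose header describes the PCM audio chunk stored inside it. This sketch uses Python's standard wave module to build one in memory, then peeks at the container's signature.

```python
import io
import math
import struct
import wave

# A WAV file is a RIFF container: its header describes how the PCM audio
# inside it is stored (channels, sample rate, bit depth).
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 16-bit samples
    w.setframerate(48000)
    frames = b"".join(
        struct.pack("<h", int(32767 * math.sin(2 * math.pi * 440 * i / 48000)))
        for i in range(4800)   # 0.1 second of a 440 Hz tone
    )
    w.writeframes(frames)

data = buf.getvalue()
# The container announces itself before any audio appears:
print(data[:4], data[8:12])   # b'RIFF' b'WAVE'
```

Swap the PCM data for compressed audio and the wrapper stays the same, which is exactly the point of a container.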


… for Codecs & Media

Tip #415: Everything Starts With an IFF

Larry Jordan – LarryJordan.com

All our media starts as a “chunk.”


Back in WWII, an “IFF” was a radar signal used for “identification friend or foe.” But, in the media world, IFF has an entirely different meaning – one that we use every day.

The Interchange File Format (IFF) is a generic container file format, invented in 1985 by Jerry Morrison at Electronic Arts, along with engineers at Commodore, to simplify transferring data between computers.

Common IFF formats include:

  • AIFF (Audio IFF file)
  • TIFF (Tagged IFF file)
  • PNG (a modified form of IFF)
  • FourCC (the four-character chunk codes used by Windows RIFF media files)
  • QuickTime also has IFF elements as part of its structure

An IFF file is built up from chunks, small pieces of data containing media and information about that media, similar to an Ethernet packet.

Each type of chunk typically has a different internal structure, which could be numerical, text or raw (unstructured) data.
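The chunk structure is simple enough to sketch in a few lines of Python: each chunk is a 4-byte ID, a 4-byte big-endian length, the data, and a pad byte when the data has an odd length (per the IFF spec). The chunk IDs below are made up for illustration.

```python
import struct

def make_chunk(chunk_id: bytes, payload: bytes) -> bytes:
    """Pack one IFF-style chunk: 4-byte ID, 4-byte big-endian length, then data.
    Odd-length payloads get a pad byte so the next chunk starts on an even boundary."""
    assert len(chunk_id) == 4
    pad = b"\x00" if len(payload) % 2 else b""
    return chunk_id + struct.pack(">I", len(payload)) + payload + pad

def read_chunks(data: bytes):
    """Walk a byte stream and yield (id, payload) for each chunk."""
    pos = 0
    while pos < len(data):
        chunk_id = data[pos:pos + 4]
        size = struct.unpack(">I", data[pos + 4:pos + 8])[0]
        yield chunk_id, data[pos + 8:pos + 8 + size]
        pos += 8 + size + (size % 2)   # skip the pad byte after odd-sized chunks

stream = make_chunk(b"NAME", b"clip one") + make_chunk(b"ANNO", b"demo!")
chunks = list(read_chunks(stream))
```

A reader that meets an unfamiliar chunk ID can simply skip ahead by the stated length, which is what makes the format so easy to extend.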

The benefit to using IFF files is that it becomes easy to move files from one program or computer to another. An even better benefit is that IFF, like Ethernet, does not require us to understand how it works in order to use it.


… for Codecs & Media

Tip #350: Isaac Newton’s Color Wheel

Larry Jordan – LarryJordan.com

The Color Wheel is more than 350 years old!

A modern color wheel, modeled after Sir Isaac Newton’s initial work.


I was reading Blain Brown’s excellent book, Digital Imaging, earlier this week and discovered that the color wheel that we use virtually every day was invented by Isaac Newton in 1666.

It started with Newton passing light through a prism to reveal the spectrum of light. While the spectrum of light is linear, Newton’s insight was to connect the two ends to form a circle. This made it much easier to see the relationships between the primary colors (red, green and blue) and the secondary colors (yellow, cyan, and magenta).

His experiments led to the theory that red, yellow and blue were the primary colors from which all other colors are derived. While that’s not entirely true, it’s still influential in the color wheels developed in the early 1800s as well as the color wheel currently used today. Add to his initial work the secondary colors of violet, orange and green—those which result from mixing the primary colors—and the color wheel begins to take shape.

EXTRA CREDIT

Two of the secondary colors – yellow and cyan – exist in the color spectrum and are formed by combining two adjacent primary colors. Magenta, however, is formed by combining red and blue, which sit at opposite ends of the color spectrum – which means that magenta, while a color, is not in the color spectrum!
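The additive version of this is easy to verify in code: mixing two primaries of light, channel by channel, yields the secondaries – including magenta, which no single wavelength can produce.

```python
# Additive primaries of light, as RGB triples.
RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def mix(a, b):
    """Additive mix of two light colors: channel-wise sum, clipped to 255."""
    return tuple(min(x + y, 255) for x, y in zip(a, b))

yellow  = mix(RED, GREEN)
cyan    = mix(GREEN, BLUE)
magenta = mix(RED, BLUE)   # the "color" with no single wavelength of its own
```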


… for Codecs & Media

Tip #374: Constant Bitrate vs. Constant Quality

Larry Jordan – LarryJordan.com

Two new encoding options for Blackmagic RAW media.


This article, written by Lewis McGregor, first appeared in PremiumBeat. Let’s take a quick look at the two new encoding options in Blackmagic RAW.

  • Constant Bitrate. This keeps your file sizes predictable and manageable, because your media will never surpass the selected data rate. While Constant Bitrate is a surefire way to make sure file sizes and quality remain as advertised, it can cause issues when the footage being captured could do without the extra compression – for example, when you need every detail of a busy scene to stay clear.
  • Constant Quality. This uses a variable bitrate with no upper data limit. This means if you’re filming a wedding and the guests start throwing confetti and rice, and more objects come into focus, the bitrate will adjust to account for the increase in complex frame information, maintaining the overall quality of the entire image. Of course, this comes with larger file sizes that you can’t predict.
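The trade-off is easy to see with toy numbers (all invented for illustration). Each frame "needs" some number of bits: constant bitrate caps every frame, so busy frames lose detail, while constant quality spends whatever each frame needs, so the total size is unpredictable.

```python
# Per-frame bits "needed": a quiet scene, then confetti, then quiet again.
needed = [200, 210, 205, 900, 950, 880, 215]
CAP = 400   # constant-bitrate ceiling per frame

cbr_spent = [min(n, CAP) for n in needed]   # predictable size; busy frames get starved
cq_spent  = needed[:]                       # quality holds; total size floats

cbr_total = sum(cbr_spent)
cq_total = sum(cq_spent)
dropped = sum(n - s for n, s in zip(needed, cbr_spent))   # detail CBR throws away
```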

… for Codecs & Media

Tip #347: Codecs – Explained (Part 1)

Larry Jordan – LarryJordan.com

Always something new to learn about codecs.


I’ve used the term “codec” for years. Still, there’s always something new to learn. For example, according to Wikipedia, “A codec is a device or computer program which encodes or decodes a digital data stream or signal. Codec is a portmanteau of coder-decoder.”

NOTE: A “portmanteau” is a linguistic blend of words, in which parts of multiple words or their phonemes (sounds) are combined into a new word. (Right, I didn’t know that either.)

“In the mid-20th century,” Wikipedia continues, “a codec was a device that coded analog signals into digital form using pulse-code modulation (PCM). Later, the name was also applied to software for converting between digital signal formats, including compander functions.

“In addition to encoding a signal, a codec may also compress the data to reduce transmission bandwidth or storage space. Compression codecs are classified primarily into lossy codecs and lossless codecs.

NOTE: See Tip #348 for a description of lossy vs. lossless.

“Two principal techniques are used in codecs, pulse-code modulation and delta modulation. Codecs are often designed to emphasize certain aspects of the media to be encoded. For example, a digital video (using a DV codec) of a sports event needs to encode motion well but not necessarily exact colors, while a video of an art exhibit needs to encode color and surface texture well. Audio codecs for cell phones need to have very low latency between source encoding and playback. In contrast, audio codecs for recording or broadcast can use high-latency audio compression techniques to achieve higher fidelity at a lower bit-rate.”
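Pulse-code modulation itself is simple enough to sketch: sample a waveform, then quantize each sample to the nearest integer level. The rounding step is where a little fidelity is lost – fewer bits mean coarser levels and more loss.

```python
import math

def pcm_encode(samples, bits=8):
    """Quantize samples in [-1.0, 1.0] to signed integer codes -- the heart of PCM."""
    levels = 2 ** (bits - 1) - 1   # 127 for 8-bit audio
    return [round(s * levels) for s in samples]

def pcm_decode(codes, bits=8):
    levels = 2 ** (bits - 1) - 1
    return [c / levels for c in codes]

# One cycle of a sine wave, sampled 16 times.
analog = [math.sin(2 * math.pi * n / 16) for n in range(16)]
digital = pcm_encode(analog)     # the integers that get stored or transmitted
restored = pcm_decode(digital)   # close to, but not exactly, the original
```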

Many multimedia data streams contain both audio and video, and often some metadata that permits synchronization of audio and video. Each of these three streams may be handled by different programs, processes, or hardware; but for the multimedia data streams to be useful in stored or transmitted form, they must be encapsulated together in a container format, such as MXF or QuickTime.

Here’s the original Wikipedia article.


… for Codecs & Media

Tip #348: Codecs – Explained (Part 2)

Larry Jordan – LarryJordan.com

Lossy is smaller, Lossless is better


As we learned in Tip #347, there are two types of codecs: lossless and lossy. In this tip, I want to explain the difference. For this, we’ll turn to a Wikipedia article.

LOSSLESS

Lossless codecs are often used for archiving data in a compressed form while retaining all information present in the original stream. If preserving the original quality of the stream is more important than eliminating the correspondingly larger data sizes, lossless codecs are preferred. This is especially true if the data is to undergo further processing (for example editing) in which case the repeated application of processing (encoding and decoding) on lossy codecs will degrade the quality of the resulting data such that it is no longer identifiable (visually, audibly or both). Using more than one codec or encoding scheme successively can also degrade quality significantly. The decreasing cost of storage capacity and network bandwidth has a tendency to reduce the need for lossy codecs for some media.

LOSSY

Many popular codecs are lossy. They reduce quality in order to maximize compression. Often, this type of compression is virtually indistinguishable from the original uncompressed sound or images, depending on the codec and the settings used. The most widely used lossy data compression technique in digital media is based on the discrete cosine transform (DCT), used in compression standards such as JPEG images, H.26x and MPEG video, and MP3 and AAC audio. Smaller data sets ease the strain on relatively expensive storage sub-systems such as non-volatile memory and hard disk, as well as write-once-read-many formats such as CD-ROM, DVD and Blu-ray Disc. Lower data rates also reduce cost and improve performance when the data is transmitted.
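The distinction is easy to demonstrate in Python. Lossless compression (zlib here) round-trips the data exactly; the "lossy" step below is only a crude stand-in for real quantization – it just zeroes the low bits – but it shows the key point: whatever is thrown away cannot be recovered.

```python
import zlib

data = bytes(range(256)) * 4   # 1 KB of sample "media"

# Lossless: compression shrinks the data, and decompression restores it exactly.
packed = zlib.compress(data)
restored = zlib.decompress(packed)

# Lossy (a crude sketch): zero the low bits of every byte, standing in for the
# quantization that real DCT-based codecs perform. The coarser data compresses
# better, but the discarded bits are gone for good.
lossy = bytes(b & 0b11110000 for b in data)
```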