… for Apple Motion

Tip #538: What Does “Four Corner” Do?

The “Four Corner” setting determines image distortion.

The Four Corner settings (top) determine image distortion (bottom).

When you select an object in Motion, one of the adjustments you can make is Four Corner. Inspector > Properties > Four Corner allows you to distort whatever you have selected. Here’s how it works.

When you adjust Inspector > Properties > Position, you can modify the position of the frame containing whatever you have selected.

However, when you adjust Inspector > Properties > Four Corner, you can distort the object itself, as illustrated in this screen shot.

Four Corner also provides separate control over the horizontal and vertical position of each corner.
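
Under the hood, a four-corner distortion is a corner pin: each corner of the source rectangle is mapped to a new position, and the pixels in between follow. Here is a minimal sketch of the general technique (not Motion's own code), assuming OpenCV; the file name and corner coordinates are hypothetical:

    import cv2
    import numpy as np

    img = cv2.imread("frame.png")
    h, w = img.shape[:2]

    # Original corners: top-left, top-right, bottom-right, bottom-left
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Where each corner should land after the distortion
    dst = np.float32([[40, 25], [w - 10, 60], [w - 30, h - 20], [15, h - 45]])

    M = cv2.getPerspectiveTransform(src, dst)      # 3x3 perspective matrix
    warped = cv2.warpPerspective(img, M, (w, h))   # distort the image
    cv2.imwrite("four_corner.png", warped)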

EXTRA CREDIT

Keep in mind that all these distortion settings can be keyframed to animate a shape over time.



… for Visual Effects

Tip #543: What is Planar Tracking?

Larry Jordan – LarryJordan.com

Planar tracking solves problems with lost tracking points.

Topic $TipTopic

A planar tracker uses planes and textures to track as opposed to points or groups of pixels. This allows the tracker to stay on track even if your shot contains motion blur or a very shallow depth of field. Here’s a quick overview.

Planar tracking was developed by Allan Jaenicke and Philip McLauchlan at the University of Surrey. They founded Imagineer Systems in 2000 to provide commercial applications for this technology.

“Planar Tracking” gains its name from how the system analyzes the source video. It seeks out distinct “planes,” isolating surfaces that can be followed through a shot. The user defines a plane for the computer to follow; if it is tracked successfully, the movement of the tracked surface can be used to drive the motion of newly composited elements or, inversely, to stabilize footage within the frame.
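
Conceptually, a planar tracker estimates a homography: the 3x3 transform that maps a flat surface's position in one frame to its position in the next. Here is a minimal sketch of that idea, assuming OpenCV and a feature-rich planar region (the file name is hypothetical, and real trackers like Mocha are far more robust):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("shot.mov")
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    # Features to follow; in practice you would mask these to the
    # user-defined planar region.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good_old = pts[status.flatten() == 1]
        good_new = nxt[status.flatten() == 1]
        # H maps the plane from the previous frame to this one; RANSAC
        # rejects points that do not move with the plane.
        H, _ = cv2.findHomography(good_old, good_new, cv2.RANSAC, 3.0)
        # Apply H to position a composited element (cv2.warpPerspective),
        # or invert it to stabilize the plane within the frame.
        prev_gray, pts = gray, good_new.reshape(-1, 1, 2)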

Mocha, by Imagineer Systems, is an example of this technology. Once tracking information is derived from a video clip within Mocha, it can be used in After Effects to animate the motion of any composited layer. Virtual elements can use this tracking information to control what is essentially a camera move that mimics that of the original shot, so that the virtual and live-action elements appear to have been shot by the same camera.

EXTRA CREDIT

While Mocha was the first planar tracker, similar technology can be found in:

  • Nuke, The Foundry
  • Syntheyes, Andersson Technologies
  • Flame, Autodesk
  • fayIN, fyateq

Learn more from BorisFX, which acquired Imagineer Systems.



… for Visual Effects

Tip #542: What is Rotoscoping?

Larry Jordan – LarryJordan.com

Rotoscoping allows us to transfer an object onto a different background.

Image in the public domain.
Max Fleischer’s original rotoscope (1915).

Rotoscoping is an animation technique that animators use to trace over motion picture footage, frame by frame, to produce realistic action. Originally, animators projected photographed live-action movie images onto a glass panel and traced over the image. This projection equipment is referred to as a rotoscope, developed by Polish-American animator Max Fleischer. This device was eventually replaced by computers, but the process is still called rotoscoping.

In the visual effects industry, rotoscoping is the technique of manually creating a matte for an element on a live-action plate so it may be composited over another background.

Rotoscoping has often been used as a tool for visual effects in live-action movies. By tracing an object, the moviemaker creates a silhouette (called a matte) that can be used to extract that object from a scene for use on a different background. While blue- and green-screen techniques have made the process of layering subjects in scenes easier, rotoscoping still plays a large role in the production of visual effects imagery. Rotoscoping in the digital domain is often aided by motion-tracking and onion-skinning software. Rotoscoping is often used in the preparation of garbage mattes for other matte-pulling processes.
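
The payoff of a matte is the composite. Here is a minimal sketch of that step, assuming OpenCV, 8-bit images of identical size, and a grayscale matte (white = subject, black = background); all file names are hypothetical:

    import cv2
    import numpy as np

    fg = cv2.imread("plate.png").astype(np.float32)       # live-action plate
    bg = cv2.imread("background.png").astype(np.float32)  # new background
    matte = cv2.imread("matte.png", cv2.IMREAD_GRAYSCALE)
    alpha = cv2.merge([matte, matte, matte]).astype(np.float32) / 255.0

    # The standard "over" operation: foreground where the matte is white,
    # background where it is black, a blend along soft edges.
    comp = fg * alpha + bg * (1.0 - alpha)
    cv2.imwrite("composite.png", comp.astype(np.uint8))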

Rotoscoping has also been used to create a special visual effect (such as a glow, for example) that is guided by the matte or rotoscoped line. A classic use of traditional rotoscoping was in the original three Star Wars movies, where the production used it to create the glowing lightsaber effect with a matte based on sticks held by the actors. To achieve this, effects technicians traced a line over the prop in each frame, then enlarged each line and added the glow.

Learn more at Wikipedia.



… for Codecs & Media

Tip #541: What is Bit Depth?

Larry Jordan – LarryJordan.com

Bit depth is always expressed as a power of 2.

An illustration of 8-bit vs. 10-bit depth. (8-bit is on top).

Bit depth determines the number of steps between the minimum and maximum of a value. The bit depth number (8, 10, 16) is an exponent: the number of steps is 2 raised to that power.

  • A bit depth of 4 = 2^4 = 16 steps
  • A bit depth of 8 = 2^8 = 256 steps
  • A bit depth of 10 = 2^10 = 1,024 steps
  • A bit depth of 16 = 2^16 = 65,536 steps
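
As a quick check, the same arithmetic in Python:

    # Number of discrete levels for a given bit depth: 2 ** bits
    for bits in (4, 8, 10, 16):
        print(f"{bits}-bit: {2 ** bits:,} steps")
    # 4-bit: 16 steps, 8-bit: 256 steps,
    # 10-bit: 1,024 steps, 16-bit: 65,536 steps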

In the screen shot, the top row represents an image with a bit depth of 8; the bottom row, an image with a bit depth of 10.

NOTE: These are illustrations; actual bit depth variations don’t look quite this bad.

Higher bit depths help image quality in color grading, gradients, and anywhere smooth shading from one value to another is important.

EXTRA CREDIT

In audio, bit depth determines the dynamic range: the amount of variation in audio levels between soft and loud. Bit depth is only meaningful in reference to a PCM digital signal (e.g., WAV or AIF). Non-PCM formats, such as lossy compression formats (e.g., MP3), do not have associated bit depths.
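
A useful rule of thumb: each bit of linear PCM depth adds roughly 6.02 dB of dynamic range.

    # Approximate dynamic range of linear PCM audio per bit depth
    for bits in (16, 24):
        print(f"{bits}-bit PCM: about {6.02 * bits:.0f} dB of dynamic range")
    # 16-bit: about 96 dB; 24-bit: about 144 dB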



… for Codecs & Media

Tip #539: What is a Sidecar File?

Larry Jordan – LarryJordan.com

Sidecar files track data that the main image file can’t.

Image courtesy of Pexels.com.
Sidecars hold stuff the main file can’t.

Sidecar files are computer files that store data (often metadata) that is not supported by the format of a source file. There may be one or more sidecar files for each source file.

In most cases the relationship between the source file and the sidecar file is based on the file name; sidecar files have the same base name as the source file, but with a different extension. The problem with this system is that most operating systems and file managers have no knowledge of these relationships, and might allow the user to rename or move one of the files thereby breaking the relationship.
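
Because the pairing is purely name-based, finding a file’s sidecars is a matter of matching the base name. A minimal sketch in Python (the path is hypothetical):

    from pathlib import Path

    source = Path("/media/card01/clip_0042.jpg")
    # Sidecars share the base name but carry a different extension
    sidecars = [p for p in source.parent.glob(source.stem + ".*")
                if p != source]
    print(sidecars)  # e.g., clip_0042.xmp, clip_0042.thm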

Examples include:

  • XMP. Stores image metadata.
  • THM. Stores digital camera thumbnails.
  • EXIF. Stores camera data to keep it from becoming lost when editing JPG images.

EXTRA CREDIT

Rather than being stored separately, data can also be stored as part of the main file. This is particularly common with container files, which are designed to hold certain types of data within them. Alternatively, multiple files can be combined into an archive file, which keeps them together but requires that software process the archive rather than the individual files. This is a generic solution, as archive files can contain arbitrary files from the file system.

Container formats include QuickTime, MXF and IFF.



… for Random Weirdness

Tip #521: What is Color Temperature?

From warm to cool, color temperature tells us where white light falls.

Image courtesy of Bhutajata - CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=44144928
Color temperature in degrees Kelvin, from 1,000° K to 12,000° K.

Color temperature is the measure of the perceived color of white light on a scale from warm (gold) to cool (bluish). These lighting facts might interest you, ’cause I found them interesting.

  • What we would consider “white” light is around 6500° K. (“K” stands for “Kelvin,” a measure of absolute temperature indicating how much you would need to heat a “black body” to get it to glow at this color.)
  • The effective color temperature of the sun is about 5780° K.
  • The changing color of the sun over the course of the day is mainly a result of the scattering of sunlight and is not due to changes in the sun itself.
  • The Earth’s atmosphere scatters blue color frequencies more than warmer colors, which is why the sky is blue. (It’s called Rayleigh scattering, named after the 19th-century British physicist Lord Rayleigh.)
  • Color temperature is meaningful only for light sources that generate light in a range going from red to orange to yellow to white to bluish white. It does not make sense to speak of the color temperature of a green or purple light.
  • Color temperatures over 5000 K are called “cool colors” (bluish), while lower color temperatures (2700–3000 K) are called “warm colors” (yellowish).
  • Bizarre fact: The temperature of a “warm” light is cooler than the temperature of a “cool” light.
  • Most natural warm-colored light sources emit significant infrared radiation.
  • A warmer (i.e., a lower color temperature) light is often used in public areas to promote relaxation, while a cooler (higher color temperature) light is used to enhance concentration, for example in schools and offices.
  • Most digital cameras today have an automatic white balance function that attempts to determine the color of the light and correct accordingly. While these settings were once unreliable, they are much improved in today’s digital cameras and produce an accurate white balance in a wide variety of lighting situations.


… for Visual Effects

Tip #504: Comparing a Framing vs. Tripod Camera

Larry Jordan – LarryJordan.com

The difference is in what rotates – the camera or the subject.

(Image created in Apple Motion.)
Panning with a framing camera; the subject is the white line.

There are two ways to pivot a camera: around the tripod or around the subject. Here’s a quick tip to explain the difference.

We are all familiar with pivoting a camera on a tripod. The camera stays in the same place, while the field of view shifts. This is ideal for subjects who are moving from one place to another.

In other words, the camera position holds still while the subject moves.

But what if you are shooting an object on a table? If you pivot the camera on a tripod, you lose the view of the table and need to reposition the camera or the table.

A “framing camera” fixes this problem. First invented for shooting animation stills, a framing camera pivots the camera around the subject.

In other words, the subject position holds still while the camera moves around it.
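
A toy 2D illustration of the difference, assuming the subject sits at the origin: a tripod pan rotates the camera’s view direction in place, while a framing camera orbits the camera’s position around the subject.

    import numpy as np

    def rotate(v, degrees):
        t = np.radians(degrees)
        R = np.array([[np.cos(t), -np.sin(t)],
                      [np.sin(t),  np.cos(t)]])
        return R @ v

    cam_pos = np.array([0.0, -5.0])  # camera 5 units from the subject
    view = np.array([0.0, 1.0])      # aimed at the subject at the origin

    # Tripod pan: position is fixed, the view direction rotates,
    # and the subject drifts out of frame.
    print("pan:  ", cam_pos, rotate(view, 30).round(2))

    # Framing camera: the position orbits the subject, and the view
    # is re-aimed so the subject stays centered.
    new_pos = rotate(cam_pos, 30)
    print("orbit:", new_pos.round(2),
          (-new_pos / np.linalg.norm(new_pos)).round(2))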



… for Codecs & Media

Tip #508: Pick the Best Audio Format for Editing

Larry Jordan – LarryJordan.com

Choose AIF or WAV audio files. File sizes are larger, but the quality is worth it.

A typical monaural audio waveform of human speech.

This article, written by Charles Yeager, first appeared on PremiumBeat.com. This is a summary.

When using various audio files in your video edits, such as music tracks and sound effects, does the audio file type really make a difference? (Spoiler: yes, it does.) But the real question is why are there so many different audio file formats? And what is the purpose for each one? So let’s break that down, and in so doing, determine the best audio file formats to use when editing videos.

There are three principal audio groups:

  • Uncompressed file formats: .WAV, .AIFF
  • Compressed Lossless file formats: .FLAC, .ALAC (Apple Lossless)
  • Compressed Lossy file formats: .MP3, .AAC, .WMA, .OGG

UNCOMPRESSED

Uncompressed audio formats are the equivalent of RAW video formats. They allow for a wide range of audio bit depths and sample rates, which results in better audio quality and covers the full frequency range the human ear can hear.

Uncompressed audio files are typically easier to work with in audio and video editors because they require less processing to play back. And since uncompressed files contain more data, you’ll get better results when you’re manipulating the audio in post with various effects.
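
For example, you can inspect an uncompressed WAV file’s properties with Python’s standard library (the file name is hypothetical):

    import wave

    with wave.open("narration.wav", "rb") as w:
        print("Channels:   ", w.getnchannels())
        print("Sample rate:", w.getframerate(), "Hz")
        print("Bit depth:  ", w.getsampwidth() * 8, "bits")
        print("Duration:   ", round(w.getnframes() / w.getframerate(), 2), "s")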

COMPRESSED LOSSLESS

The name “compressed lossless” may sound like a contradiction. However, the compression isn’t occurring in a way that degrades the audio itself. Think of it almost like ZIP-compressing a music file, then unzipping it during playback.

Compressed lossless audio files can be anywhere from 1/2 to 1/3 the size of uncompressed audio files — or even smaller, while the audio quality is still lossless, enabling full frequency playback.

The drawbacks of compressed lossless files are that they are the least supported (compared to uncompressed and compressed lossy formats). They also require a little more computing power to play back, because they need decoding.

COMPRESSED LOSSY

Compressed lossy audio formats are likely the most common audio files you use when listening to music. This is because compressed lossy audio files have the most support among portable devices, and they have the smallest file sizes: as little as 1/10 the size of a WAV or AIF file.

Compressed lossy audio files are ideal for streaming online.

However, all that compression comes at a cost. The drawback is that the audio has a limited frequency range and noticeable audio artifacts when compared to a lossless format. Another drawback is that you have less range in post when it comes to editing and audio manipulation.

WHICH TO USE FOR AUDIO EDITING?

WAV or AIF.



… for Codecs & Media

Tip #499: What is Pixel Aspect Ratio?

Larry Jordan – LarryJordan.com

Pixel aspect ratios were used in the past to compensate for limited bandwidth.

An exaggerated example of non-square pixels used in a variety of SD video.

Pixel aspect ratios determine the rectangular shape of a video pixel. In the early days of digital video, bandwidth, storage and resolution were all very limited. Also, in those days, almost all digital video was displayed on a 4:3 aspect ratio screen.

This meant that the image was 4 units wide by 3 units high. (The reason I use the word “units” is that then, like now, monitors came in different sizes, but all displayed the same resolution regardless of size.)

However, standard definition video, though displayed as a 4×3 image, was composed of 720 pixels horizontally by 480 pixels vertically. This is not 4×3. To get everything to work out properly, each pixel, instead of being square, was tall and thin: 0.9 units wide by 1.0 unit tall. (The screen shot shows an exaggerated example of this difference in width.)

As digital video started to encompass wide screen images, rather than add more pixels, which was technically challenging, engineers changed the shape of the pixel to be fat (a pixel aspect ratio of 1.2 wide by 1.0 tall). This provided wide screen support (16×9 aspect ratio images) without increasing pixel resolution or, more importantly, file size and bandwidth requirements.
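
The arithmetic is simple: display width = stored width × pixel aspect ratio. A quick check using the common NTSC DV values (0.9091 for 4:3, 1.2121 for 16:9):

    storage_width, storage_height = 720, 480

    for label, par in (("4:3 SD", 0.9091), ("16:9 SD", 1.2121)):
        display_width = storage_width * par
        ratio = display_width / storage_height
        print(f"{label}: {display_width:.0f} px wide -> {ratio:.2f}:1 display")
    # 4:3 SD: 655 px wide -> 1.36:1 (roughly 4:3)
    # 16:9 SD: 873 px wide -> 1.82:1 (roughly 16:9)

(The results overshoot 4:3 and 16:9 slightly because the full 720-pixel line includes a little blanking padding around the visible picture.)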

These non-square pixels continued for a while into HD video, with both HDV and some formats of P2 using non-square pixels.

However, as storage capacity and bandwidth caught up with the need for more pixels in larger frame sizes, pixels evolved into the square pixels virtually every digital format uses today. This greatly simplified all manner of pixel manipulation.

However, most compression software has settings that allow it to work with legacy formats from the days when pixels weren’t square.



… for Visual Effects

Tip #487: What’s a Tilt/Shift Blur?

Larry Jordan – LarryJordan.com

A Tilt-Shift Blur blurs an image in stages, simulating depth of field.

Gaussian blur on the left, Tilt-Shift blur on the right. The difference is at the bottom.

A tilt-shift blur simulates depth of field or the softening of edges with distance from a light source. Here’s what it looks like.

In this screen shot, the left side is a normal Gaussian blur; on the right is a tilt-shift blur.

Notice that in the image on the left, the entire image is blurred by the same amount, while in the image on the right, the foreground is in focus, the mid-ground is softly out of focus and the background is deeply out of focus.

This effect more accurately simulates how a camera lens might interpret an image.
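
One common way to build this effect is to blend a sharp image with a blurred copy using a gradient mask, so the amount of blur varies across the frame. A minimal sketch, assuming OpenCV (file names are hypothetical):

    import cv2
    import numpy as np

    img = cv2.imread("photo.png").astype(np.float32)
    blurred = cv2.GaussianBlur(img, (31, 31), 0)

    h, w = img.shape[:2]
    # Blend weight: 1.0 (fully blurred) at the top row, falling to
    # 0.0 (fully sharp) at the bottom row, where the foreground sits.
    ramp = np.linspace(1.0, 0.0, h, dtype=np.float32).reshape(h, 1, 1)

    out = img * (1.0 - ramp) + blurred * ramp
    cv2.imwrite("tilt_shift.png", out.astype(np.uint8))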

NOTE: Final Cut supports this effect using Blur > Focus. Premiere does not currently support this effect. The screen shot was created in Photoshop.

