… for Codecs & Media

Tip #744: What is Interlacing?

Larry Jordan – LarryJordan.com

Interlacing was needed due to limited bandwidth.

Interlace artifact – thin, dark, horizontal lines radiating off moving objects.


Even in today’s world of 4K and HDR, many HD productions still need to distribute interlaced footage. So, what is interlacing?

Interlacing is the process of time-shifting every other line of video so that the total bandwidth requirements for a video stream are, effectively, cut in half.

For example, in HD, first all the even-numbered lines are displayed; then, half a frame's duration later, all the odd-numbered lines are displayed. Each of these sets of lines is called a “field.” The field rate is double the frame rate.
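
To make the line-splitting concrete, here is a minimal sketch in Python/NumPy. The 25 fps frame rate and the blank frame are placeholder assumptions for illustration, not values from the tip:

    import numpy as np

    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # one progressive HD frame (placeholder pixels)

    frame_rate = 25                  # assumed example frame rate
    field_rate = frame_rate * 2      # interlacing doubles the temporal rate: 50 fields/s

    upper_field = frame[0::2]        # lines 0, 2, 4, ... (HD is upper field first)
    lower_field = frame[1::2]        # lines 1, 3, 5, ... shown half a frame later

    # Each field is only 540 lines tall, so it carries half the data of a full frame.
    print(upper_field.shape, lower_field.shape)   # (540, 1920, 3) (540, 1920, 3)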

NOTE: HD is upper field first, DV (PAL or NTSC) is lower field first.

In the old days of NTSC and PAL, this was done because the broadcast infrastructure couldn’t handle complete frames.

As broadcasters converted to HD at the end of the last century, they needed to make a choice, again due to limited bandwidth: they could broadcast either a 720-line progressive frame or a 1080-line interlaced frame.

Some networks chose 720p because they were heavily into sports, which looks best in a progressive frame. Others chose 1080i because their shows principally originated on film, which minimizes the interlacing artifacts illustrated in the screen shot.

As we move past HD into 4K, the bandwidth limitations fade away, which means that all 4K frames are progressive.

EXTRA CREDIT

It is easy to shoot progressive and convert it to interlaced with no significant loss in image quality. It is far harder to convert interlaced footage to progressive, and quality always suffers. Also, the web requires progressive media because interlacing looks terrible.

For this reason, it is best to shoot progressive, then convert to interlaced as needed for distribution.



… for Codecs & Media

Tip #745: What is HDR Rec. 2020 HLG?

Larry Jordan – LarryJordan.com

HLG is compatible with both HDR and SDR broadcast and television sets.

Chart showing a conventional SDR gamma curve and Hybrid Log-Gamma (HLG). HLG uses a logarithmic curve for the upper half of the signal values which allows for a larger dynamic range.


High-dynamic-range video (HDR video) describes video having a dynamic range greater than that of standard-dynamic-range video (SDR video). HDR capture devices and displays are capable of brighter whites and deeper blacks. To accommodate this, HDR encoding standards allow for a higher maximum luminance and use at least 10 bits per channel in order to maintain precision across this extended range.

While technically “HDR” refers strictly to the ratio between the maximum and minimum luminance, the term “HDR video” is commonly understood to imply wide color gamut as well.

There are two ways we can display HDR material: HLG and PQ. (Tip #746 discusses PQ).

HLG (Hybrid Log-Gamma) is a royalty-free HDR standard jointly developed by the BBC and NHK. HLG is designed to be better suited to television broadcasting, where the metadata required by other HDR formats is not backward compatible with non-HDR displays, consumes additional bandwidth, and may become out of sync or damaged in transmission.

HLG defines a non-linear opto-electronic transfer function (OETF), in which the lower half of the signal values use a gamma curve and the upper half use a logarithmic curve. In practice, the signal is interpreted as normal by standard-dynamic-range displays (albeit capable of displaying more detail in highlights), while HLG-compatible displays correctly interpret the logarithmic portion of the curve to provide a wider dynamic range.
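
The hybrid curve itself is compact enough to sketch. The following uses the OETF constants published in ARIB STD-B67 / ITU-R BT.2100; treat it as an illustration rather than a production implementation:

    import numpy as np

    # HLG OETF constants (ARIB STD-B67 / ITU-R BT.2100)
    a = 0.17883277
    b = 1 - 4 * a                   # 0.28466892
    c = 0.5 - a * np.log(4 * a)     # 0.55991073

    def hlg_oetf(E):
        """Map normalized linear scene light E (0..1) to an HLG signal value (0..1)."""
        E = np.asarray(E, dtype=float)
        log_part = a * np.log(np.maximum(12 * E - b, 1e-12)) + c
        return np.where(E <= 1 / 12, np.sqrt(3 * E), log_part)

    # The square-root segment covers the lower half of the signal range (0..0.5),
    # the logarithmic segment covers the upper half (0.5..1.0).
    print(hlg_oetf([0.0, 1 / 12, 1.0]))   # -> approximately [0.  0.5  1.]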

HLG is defined in ATSC 3.0, among others, and is supported by video services such as the BBC iPlayer, DirecTV, Freeview Play, and YouTube. HLG is supported by HDMI 2.0b, HEVC, VP9, and H.264/MPEG-4 AVC.


… for Random Weirdness

Tip #716: 3-2-1 Backup Rule

Larry Jordan – LarryJordan.com

3 copies – 2 different media – 1 different location.


This article, written by Trevor Sherwin, first appeared in PetaPixel.com. This is an excerpt.

Whether you take your photos professionally or for fun, how many of you out there can truly say you’re happy with your photo backup strategy? If a drive were to fail, would you lose any photos? If you had a house fire or were burgled, do you have a copy elsewhere?

Getting your backup processes in place is a bit boring and not very creative, but the more seriously you take your photography, the more you need a robust workflow in place.

Put simply, the 3-2-1 backup strategy provides an easy-to-remember approach to how many copies of your data you should have and where those copies should be stored in order to protect against the most likely threats to your photos.

  • 3 (copies of your data)
  • 2 (different media or hard drives)
  • 1 (copy of your photos in another location)

The article, linked above, has more details, including a sample workflow on how to safely and efficiently back up your data.
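
For anyone who scripts their backups, a minimal sketch of the rule might look like the following. All paths are hypothetical, and a real workflow would add verification and an off-site sync:

    import shutil
    from pathlib import Path

    # Copy 1: the working files on your main drive.
    ORIGINAL = Path("/Volumes/WorkDrive/Photos/2020")
    # Copy 2: a second, physically different drive.
    LOCAL_BACKUP = Path("/Volumes/BackupDrive/Photos/2020")
    # Copy 3: a folder that syncs to another location (e.g. cloud storage).
    OFFSITE_BACKUP = Path("/Volumes/CloudSync/Photos/2020")

    def backup(source: Path, destinations: list[Path]) -> None:
        for dest in destinations:
            # dirs_exist_ok lets repeated runs refresh an existing backup.
            shutil.copytree(source, dest, dirs_exist_ok=True)

    backup(ORIGINAL, [LOCAL_BACKUP, OFFSITE_BACKUP])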


… for Codecs & Media

Tip #731: What is a Watermark?

Larry Jordan – LarryJordan.com

Watermarks are used to deter theft and trace stolen images.


Video watermarks are used for branding, for identification, and to deter theft. Most of us are familiar with the watermarks that are burned into the lower right corner of a video. However, there are actually two types of watermarks:

  • A still or moving image burned into your image
  • A digital code embedded into the media file itself

The first option is easy, but does nothing to prevent piracy. The second is much harder and, while it can’t prevent theft, it can help determine where in the distribution pipeline the theft occurred.

All NLEs and most video compression software allow burning watermarks into video during compression.
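
Outside an NLE, burning in a visible watermark takes only a few lines of image code. Here is a minimal sketch using Pillow; the file names, corner offset and 40% opacity are assumptions for illustration:

    from PIL import Image

    frame = Image.open("frame.png").convert("RGBA")
    logo = Image.open("logo.png").convert("RGBA")

    # Fade the logo to roughly 40% opacity so it deters reuse without hiding the picture.
    faded_alpha = logo.getchannel("A").point(lambda a: int(a * 0.4))
    logo.putalpha(faded_alpha)

    # Paste into the lower-right corner, using the logo's own alpha as the mask.
    x = frame.width - logo.width - 20
    y = frame.height - logo.height - 20
    frame.paste(logo, (x, y), logo)

    frame.convert("RGB").save("frame_watermarked.png")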

A digital watermark is a kind of marker covertly embedded in a noise-tolerant signal such as audio, video or image data. It is typically used to identify ownership of the copyright of such a signal. Digital watermarks may be used to verify the authenticity or integrity of the carrier signal or to show the identity of its owners. They are prominently used for tracing copyright infringements and for banknote authentication.

Since a digital copy of data is the same as the original, digital watermarking is a passive protection tool. It just marks data, but does not degrade it or control access to the data.

One application of digital watermarking is source tracking. A watermark is embedded into a digital signal at each point of distribution. If a copy of the work is found later, then the watermark may be retrieved from the copy and the source of the distribution is known. This technique reportedly has been used to detect the source of illegally copied movies.
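
As a toy illustration of that idea (not the technique any studio actually uses, and everything here is an assumption for demonstration), a per-distributor ID can be hidden in the least-significant bits of a frame and read back later:

    import numpy as np

    def embed_id(frame: np.ndarray, distributor_id: int, bits: int = 32) -> np.ndarray:
        """Hide `bits` bits of an ID in the red-channel LSBs of an H x W x 3 uint8 frame."""
        marked = frame.copy()
        width = marked.shape[1]
        for i in range(bits):
            row, col = divmod(i, width)
            bit = (distributor_id >> i) & 1
            marked[row, col, 0] = (marked[row, col, 0] & 0xFE) | bit
        return marked

    def extract_id(frame: np.ndarray, bits: int = 32) -> int:
        """Recover the embedded ID from a marked frame."""
        width = frame.shape[1]
        value = 0
        for i in range(bits):
            row, col = divmod(i, width)
            value |= int(frame[row, col, 0] & 1) << i
        return value

    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
    assert extract_id(embed_id(frame, distributor_id=1042)) == 1042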

EXTRA CREDIT

In case you were wondering, Section 1202 of the U.S. Copyright Act makes it illegal for someone to remove the watermark from your photo in order to disguise infringement. The fines start at $2,500 and go up to $25,000, in addition to attorneys’ fees and any damages for the infringement.

Here’s a Wikipedia article to learn more about digital watermarking.


… for Codecs & Media

Tip #732: How Many Megapixels is the Eye?

Larry Jordan – LarryJordan.com

The eye is 576 megapixels – except, ah, it really isn’t.

The eye is more like a movable sensor than a camera.


This article first appeared in Discovery.com. This is an excerpt.

According to scientist and photographer Dr. Roger Clark, the resolution of the human eye is 576 megapixels. That’s huge when you compare it to the 12 megapixels of an iPhone 7’s camera. But what does this mean, really? Is the human eye really analogous to a camera?

A 576-megapixel resolution means that in order to create a screen with a picture so sharp and clear that you can’t distinguish the individual pixels, you would have to pack 576 million pixels into an area the size of your field of view. To get to his number, Dr. Clark assumed optimal visual acuity across the field of view; that is, it assumes that your eyes are moving around the scene before you. But in a single snapshot-length glance, the resolution drops to a fraction of that: around 5–15 megapixels.
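
One common reconstruction of the arithmetic behind that figure, using assumed values that are not spelled out in this excerpt (roughly a 120° by 120° field of view, sampled at about 0.3 arcminutes per “pixel”):

    field_of_view_deg = 120      # assumed usable field of view, per side
    arcmin_per_sample = 0.3      # assumed acuity while the eye scans the scene

    samples_per_side = field_of_view_deg * 60 / arcmin_per_sample   # 24,000
    total_samples = samples_per_side ** 2                           # 576,000,000
    print(f"{total_samples / 1e6:.0f} megapixels")                  # -> 576 megapixels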

Really, though, the megapixel resolution of your eyes is the wrong question. The eye isn’t a camera lens, taking snapshots to save in your memory bank. It’s more like a detective, collecting clues from your surrounding environment, then taking them back to the brain to put the pieces together and form a complete picture. There’s certainly a screen resolution at which our eyes can no longer distinguish pixels — and according to some, it already exists — but when it comes to our daily visual experience, talking in megapixels is way too simple.


… for Codecs & Media

Tip #733: How Much Resolution is Too Much?

Larry Jordan – LarryJordan.com

The eye sees angles, not pixels.

At a normal viewing distance for a well-exposed and focused image HD, UHD and 8K look the same.


This article, written by Phil Plait in 2010, discusses how the human eye perceives image resolution; it first appeared in Discovery.com. The entire article is worth reading. Here are the highlights.

As it happens, I know a thing or two about resolution, having spent a few years calibrating a camera on board Hubble, the space telescope.

The ability to see two sources very close together is called resolution. It’s measured as an angle, like in degrees. For example, the Hubble Space Telescope has a resolution of about 0.00003 degrees. That’s a tiny angle!

Since we measure resolution as an angle, we can translate that into a separation in, say, inches at a certain distance. A 1-foot ruler at a distance of about 57 feet (19 yards) would appear to be 1 degree across (about twice the size of the full Moon). If your eyes had a resolution of 1 degree, then the ruler would just appear to you as a dot.

What is the resolution of a human eye, then? Well, it varies from person to person, of course. If you had perfect vision, your resolution would be about 0.6 arcminutes, where there are 60 arcmin to a degree (for comparison, the full Moon on the sky is about 1/2 a degree or 30 arcmin across).

To reuse the ruler example above, and using 0.6 arcmin for the eye’s resolution, the 1-foot ruler would have to be 5730 feet (1.1 miles) away to appear as a dot to your eye. Anything closer and you’d see it as elongated (what astronomers call “an extended object”), and farther away it’s a dot. In other words, more than that distance and it’s unresolved, closer than that and it’s resolved.

This is true for any object: if it’s more than 5730 times its own length away from you, it’s a dot. A quarter is about an inch across. If it were more than 5730 inches away, it would look like a dot to your eye.
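
Both distances quoted above fall out of the same small-angle relationship, distance = size ÷ tan(angle). A quick check in Python:

    import math

    # A 1-foot ruler spans 1 degree at about 57 feet...
    print(1 / math.tan(math.radians(1)))            # ~57.3

    # ...and shrinks to an unresolved "dot" (0.6 arcmin) at about 5,730 feet.
    print(1 / math.tan(math.radians(0.6 / 60)))     # ~5,730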

But most of us don’t have perfect eyesight. A better number for a typical person is more like 1 arcmin of resolution, not 0.6. In fact, Wikipedia lists 20/20 vision as being 1 arcmin, so there you go.

[Phil then summarizes:] The iPhone 4 has a resolution of 326 ppi (pixels per inch). …The density of pixels in the iPhone 4 [when viewed at a distance of 12 inches] is safely higher than can be resolved by the normal eye, but lower than what can be resolved by someone with perfect vision.

LARRY’S EDITORIAL COMMENT

There’s a lot of discussion today about the value of 8K images. Current research shows that we need to sit within 7 feet (220 cm) of a 55″ HD image to see individual pixels. That converts to 1.8 feet to see the individual pixels in a UHD image, and 5 inches to see individual pixels in an 8K image on a 55″ monitor.

Any distance farther and individual pixels can’t be distinguished.


… for Codecs & Media

Tip #701: How to Export an Alpha Channel

Larry Jordan – LarryJordan.com

Alpha channels are not supported in H.264 or HEVC media.


The alpha channel determines transparency in a clip. However, highly compressed delivery codecs such as H.264 and HEVC do not support alpha channels. Why? Because including the alpha channel makes a file much bigger.

Here, courtesy of RocketStock.com, is a list of video codecs and image formats that support alpha channels.

Video Codecs and Image Formats with Alpha Channels

  • Apple Animation
  • Apple ProRes 4444
  • Avid DNxHD
  • Avid DNxHR
  • Avid Meridien
  • Cineon
  • DPX
  • GoPro Cineform
  • Maya IFF
  • OpenEXR Sequence With Alpha
  • PNG Sequence With Alpha
  • Targa
  • TIFF

Be sure to test your codec before committing to a project. Not all versions of DNx or GoPro Cineform support alpha channels.
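
As one example of exporting with alpha outside an NLE, here is a minimal sketch that hands the job to ffmpeg’s ProRes encoder. It assumes ffmpeg is installed, the source clip actually contains an alpha channel, and the file names are hypothetical:

    import subprocess

    subprocess.run([
        "ffmpeg",
        "-i", "title_with_alpha.mov",
        "-c:v", "prores_ks",           # ffmpeg's ProRes encoder
        "-profile:v", "4444",          # the 4444 profile preserves alpha
        "-pix_fmt", "yuva444p10le",    # the 'a' in the pixel format is the alpha plane
        "title_prores4444.mov",
    ], check=True)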


… for Codecs & Media

Tip #702: Is GoPro Cineform Still Useful?

Larry Jordan – LarryJordan.com

GoPro Cineform is available for free for both Mac and Windows.


When GoPro canceled GoPro Studio a while back, it became more difficult to convert GoPro footage into a format that can be easily edited.

This article, from David Coleman Photography, describes how to convert and play GoPro footage.

While GoPro Studio is no more, you can download the codecs themselves from the GoPro-Cineform decoder page. There you’ll find versions for Mac and Windows. In the case of the Mac version, it’s still called NeoPlayer, which is its old name.


… for Codecs & Media

Tip #703: What is GoPro Cineform?

Larry Jordan – LarryJordan.com

This 12-bit, full-frame video codec is optimized for speed and image quality.


GoPro CineForm is a 12-bit, full-frame wavelet compression video codec. It is designed for speed and quality at the expense of larger file sizes (a lower compression ratio). Image compression is a balance of size, speed and quality, and you can only choose two. CineForm was the first codec of its type to focus on speed while supporting higher bit depths for image quality. More recent examples are Avid DNxHD and Apple ProRes, although both divide the image into blocks using DCT.

The full-frame wavelet has a subjective quality advantage over DCT-based codecs, so you can compress more without the classic ringing or block artifacts. Here are the pixel formats supported:

  • 8/10/16-bit YUV 4:2:2 compressed as 10-bit, progressive or interlaced
  • 8/10/16-bit RGB 4:4:4 compressed at 12-bit progressive
  • 8/16-bit RGBA 4:4:4:4 compressed at 12-bit progressive
  • 12/16-bit CFA Bayer RAW, log encoded and compressed at 12-bit progressive
  • Dual channel stereoscopic/3D in any of the above

Compression ratios between 10:1 and 4:1 are typical; greater ranges are possible. CineForm is a constant-quality design, so bit rates vary as needed for the scene, whereas most other intermediate video codecs are constant-bit-rate designs, in which quality varies depending on the scene.

EXTRA CREDIT

Here’s a link to learn more.


… for Visual Effects

Tip #694: What is Parallax?

Larry Jordan – LarryJordan.com

Reducing parallax is important in panoramic stills, VFX and Stereo 3D video.


Parallax is the difference in the apparent position of an object viewed along two different lines of sight; say, from the left eye to the right eye, or from each lens of a stereo 3D video camera.

As the eyes of humans and other animals are in different positions on the head, they present different views simultaneously. This is the basis of stereopsis, the process by which the brain exploits the parallax between the views from each eye to gain depth perception and estimate distances to objects.
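
A small sketch of the geometry involved, assuming a typical 6.5 cm eye separation (a value not given in the tip): nearby objects produce a much larger angular shift between the two views than distant ones, which is exactly the cue the brain uses.

    import math

    def parallax_degrees(distance_m: float, baseline_m: float = 0.065) -> float:
        """Angular difference between two lines of sight separated by `baseline_m`."""
        return math.degrees(2 * math.atan((baseline_m / 2) / distance_m))

    for d in (0.5, 2.0, 10.0, 100.0):
        print(f"{d:6.1f} m -> {parallax_degrees(d):6.3f} degrees of parallax")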

In addition to its use in making stereo 3D believable, parallax is also used in panoramic images, visual effects and web design.

EXTRA CREDIT

Even if your camera setup is perfectly level, you won’t be happy with the results for panoramic images until you eliminate image parallax. Image parallax occurs when near and far objects don’t align in overlapping images. For example, if you’re shooting a scene that contains a fence line, each fencepost in Image 1 should line up with its twin in Image 2. You can eliminate the effects of parallax by placing the optical center of the lens (not the camera) directly over the point of rotation.

Learn more here