… for Codecs & Media

Tip #731: What is a Watermark?

Larry Jordan – LarryJordan.com

Watermarks are used to deter theft and trace stolen images.

Video watermarks are used for branding, identification, and theft deterrence. Most of us are familiar with the watermarks burned into the lower-right corner of a video. However, there are actually two types of watermarks:

  • A still or moving image burned into your image
  • A digital code embedded into the media file itself

The first option is easy, but does nothing to prevent piracy. The second is much harder and, while it can’t prevent theft, it can help determine where in the distribution pipeline the theft occurred.

All NLEs and most video compression software allow burning watermarks into video during compression.

A digital watermark is a kind of marker covertly embedded in a noise-tolerant signal such as audio, video or image data. It is typically used to identify copyright ownership of that signal. Digital watermarks may also be used to verify the authenticity or integrity of the carrier signal, or to show the identity of its owners. Digital watermarking is prominently used for tracing copyright infringement and for banknote authentication.

Since a digital copy of data is the same as the original, digital watermarking is a passive protection tool. It just marks data, but does not degrade it or control access to the data.

One application of digital watermarking is source tracking. A watermark is embedded into a digital signal at each point of distribution. If a copy of the work is found later, then the watermark may be retrieved from the copy and the source of the distribution is known. This technique reportedly has been used to detect the source of illegally copied movies.
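
To make the idea concrete, here is a toy sketch in Python (using NumPy; the function names and the 16-bit distributor ID are hypothetical) that hides an identifier in the least significant bits of a frame’s blue channel and reads it back. Real forensic watermarks use far more robust, noise-tolerant schemes, but the embed-and-extract round trip is the same idea.

```python
import numpy as np

# Toy illustration only: real forensic watermarks use much more robust,
# noise-tolerant embedding. Hide a numeric distributor ID in the least
# significant bit of a frame's blue channel, then read it back.
def embed_id(frame: np.ndarray, distributor_id: int, id_bits: int = 16) -> np.ndarray:
    marked = frame.copy()
    blue = marked[..., 2].ravel()                 # copy of the blue channel
    bits = np.array([(distributor_id >> i) & 1 for i in range(id_bits)],
                    dtype=np.uint8)
    blue[:id_bits] = (blue[:id_bits] & 0xFE) | bits   # overwrite the LSBs
    marked[..., 2] = blue.reshape(marked.shape[:2])   # write the channel back
    return marked

def extract_id(frame: np.ndarray, id_bits: int = 16) -> int:
    blue = frame[..., 2].ravel()
    return sum(int(blue[i] & 1) << i for i in range(id_bits))

frame = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)  # stand-in frame
marked = embed_id(frame, distributor_id=42)
assert extract_id(marked) == 42
```

Note that an LSB mark this simple would not survive re-compression, which is exactly why commercial watermarking systems embed their codes far more redundantly.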

EXTRA CREDIT

In case you were wondering, Section 1202 of the U.S. Copyright Act makes it illegal to remove the watermark from your photo in order to disguise infringement. Fines start at $2,500 and go to $25,000, in addition to attorneys’ fees and any damages for the infringement.

Here’s a Wikipedia article to learn more about digital watermarking.



… for Codecs & Media

Tip #732: How Many Megapixels is the Eye?

Larry Jordan – LarryJordan.com

The eye is 576 megapixels – except, ah, it really isn’t.

The eye is more like a movable sensor than a camera.

This article first appeared on Discovery.com. This is an excerpt.

According to scientist and photographer Dr. Roger Clark, the resolution of the human eye is 576 megapixels. That’s huge when you compare it to the 12 megapixels of an iPhone 7’s camera. But what does this mean, really? Is the human eye really analogous to a camera?

A 576-megapixel resolution means that in order to create a screen with a picture so sharp and clear that you can’t distinguish the individual pixels, you would have to pack 576 million pixels into an area the size of your field of view. To get to his number, Dr. Clark assumed optimal visual acuity across the entire field of view; that is, he assumed that your eyes are moving around the scene before you. But in a single snapshot-length glance, the resolution drops to a fraction of that: around 5–15 megapixels.
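
For the curious, here is the back-of-envelope arithmetic. The specific figures below (a 120-degree square field of view and 0.3 arcminutes per pixel) are assumptions drawn from Dr. Clark’s published method, not stated in this excerpt:

```python
# Back-of-envelope reconstruction of the 576-megapixel figure.
# ASSUMPTIONS (not stated in the excerpt): a 120 x 120 degree field of
# view and 0.3 arcminute per pixel.
fov_deg = 120                                   # assumed field of view per side
pixel_arcmin = 0.3                              # assumed angular width of one pixel
pixels_per_side = fov_deg * 60 / pixel_arcmin   # 24,000 pixels per side
total_pixels = pixels_per_side ** 2             # 576,000,000
print(f"{total_pixels / 1e6:.0f} megapixels")   # -> 576
```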

Really, though, the megapixel resolution of your eyes is the wrong question. The eye isn’t a camera lens, taking snapshots to save in your memory bank. It’s more like a detective, collecting clues from your surrounding environment, then taking them back to the brain to put the pieces together and form a complete picture. There’s certainly a screen resolution at which our eyes can no longer distinguish pixels — and according to some, it already exists — but when it comes to our daily visual experience, talking in megapixels is way too simple.



… for Codecs & Media

Tip #733: How Much Resolution is Too Much?

Larry Jordan – LarryJordan.com

The eye sees angles, not pixels.

At a normal viewing distance for a well-exposed and focused image, HD, UHD and 8K look the same.

This article, written by Phil Plait in 2010 to discuss how the human eye perceives image resolution, first appeared on Discovery.com. The entire article is worth reading. Here are the highlights.

As it happens, I know a thing or two about resolution, having spent a few years calibrating a camera on board Hubble, the space telescope.

The ability to see two sources very close together is called resolution. It’s measured as an angle, like in degrees. For example, the Hubble Space Telescope has a resolution of about 0.00003 degrees. That’s a tiny angle!

Since we measure resolution as an angle, we can translate that into a separation in, say, inches at a certain distance. A 1-foot ruler at a distance of about 57 feet (19 yards) would appear to be 1 degree across (about twice the size of the full Moon). If your eyes had a resolution of 1 degree, then the ruler would just appear to you as a dot.

What is the resolution of a human eye, then? Well, it varies from person to person, of course. If you had perfect vision, your resolution would be about 0.6 arcminutes, where there are 60 arcmin to a degree (for comparison, the full Moon on the sky is about 1/2 a degree or 30 arcmin across).

To reuse the ruler example above, and using 0.6 arcmin for the eye’s resolution, the 1-foot ruler would have to be 5730 feet (1.1 miles) away to appear as a dot to your eye. Anything closer and you’d see it as elongated (what astronomers call “an extended object”), and farther away it’s a dot. In other words, more than that distance and it’s unresolved, closer than that and it’s resolved.

This is true for any object: if it’s more than 5730 times its own length away from you, it’s a dot. A quarter is about an inch across. If it were more than 5730 inches away, it would look like a dot to your eye.
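
You can verify the 5730 figure with the small-angle approximation; here is a quick sketch in Python:

```python
import math

# At what multiple of its own length does an object shrink to a single
# 0.6-arcminute resolution element (perfect vision)?
resolution_arcmin = 0.6
resolution_rad = math.radians(resolution_arcmin / 60)   # ~0.000175 rad

lengths = 1 / resolution_rad                  # small-angle approximation
print(f"{lengths:.0f}x its own length")       # -> 5730

# So a 1-foot ruler becomes a dot beyond roughly:
print(f"{lengths:.0f} feet = {lengths / 5280:.1f} miles")   # -> 1.1 miles
```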

But most of us don’t have perfect vision. A better number for a typical person is more like 1 arcmin resolution, not 0.6. In fact, Wikipedia lists 20/20 vision as being 1 arcmin, so there you go.

[Phil then summarizes:] The iPhone 4 has a resolution of 326 ppi (pixels per inch). …The density of pixels in the iPhone 4 [when viewed at a distance of 12 inches] is safely higher than can be resolved by the normal eye, but lower than what can be resolved by someone with perfect vision.

LARRY’S EDITORIAL COMMENT

There’s a lot of discussion today about the value of 8K images. Current research shows that we need to sit within 7 feet (220 cm) of a 55″ HD image to see individual pixels. That converts to 1.8 feet to see the individual pixels in a UHD image, and 5 inches to see individual pixels in an 8K image on a 55″ monitor.

At any greater distance, individual pixels can’t be distinguished.



… for Codecs & Media

Tip #701: How to Export an Alpha Channel

Larry Jordan – LarryJordan.com

Alpha channels are not supported in H.264 or HEVC media.

The alpha channel determines transparency in a clip. However, highly compressed delivery codecs such as H.264 and HEVC don’t support alpha channels. Why? Because including an alpha channel makes a file much bigger!

Here, courtesy of RocketStock.com, is a list of video codecs and image formats that support alpha channels.

Video Codecs and Image Formats with Alpha Channels

  • Apple Animation
  • Apple ProRes 4444
  • Avid DNxHD
  • Avid DNxHR
  • Avid Meridien
  • Cineon
  • DPX
  • GoPro Cineform
  • Maya IFF
  • OpenEXR Sequence With Alpha
  • PNG Sequence With Alpha
  • Targa
  • TIFF

Be sure to test your codec before committing to a project. Not all versions of DNx or GoPro Cineform support alpha channels.
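
As a concrete example of exporting with alpha, here is a minimal Python sketch that shells out to the ffmpeg command-line tool (assuming it is installed) to re-encode a clip as ProRes 4444, one of the codecs listed above. The file names are placeholders:

```python
import subprocess

# Minimal sketch, assuming the ffmpeg CLI is installed. Re-encode a clip
# to ProRes 4444 with a pixel format that carries an alpha channel.
subprocess.run([
    "ffmpeg",
    "-i", "input.mov",           # placeholder source with transparency
    "-c:v", "prores_ks",         # ffmpeg's ProRes encoder
    "-profile:v", "4444",        # the alpha-capable ProRes profile
    "-pix_fmt", "yuva444p10le",  # 10-bit 4:4:4 pixel format with alpha
    "output.mov",
], check=True)
```

After exporting, drop the result over another clip in your NLE to confirm the transparency actually survived.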



… for Codecs & Media

Tip #702: Is GoPro Cineform Still Useful?

Larry Jordan – LarryJordan.com

GoPro Cineform is available for free for both Mac and Windows.

When GoPro canceled GoPro Studio a while back, it became more difficult to convert GoPro footage into a format that can be easily edited.

This article, from David Coleman Photography, describes how to convert and play GoPro footage.

While GoPro Studio is no more, you can download the codecs themselves from the GoPro-Cineform decoder page. There you’ll find versions for Mac and Windows. In the case of the Mac version, it’s still called NeoPlayer, which is its old name.



… for Codecs & Media

Tip #703: What is GoPro Cineform?

Larry Jordan – LarryJordan.com

This 12-bit, full-frame video codec is optimized for speed and image quality.

GoPro CineForm is a 12-bit, full-frame wavelet compression video codec. It is designed for speed and quality at the expense of compression efficiency, which means larger files. Image compression is a balance of size, speed and quality, and you can only choose two. CineForm was the first of its type to focus on speed, while supporting higher bit depths for image quality. More recent examples would be Avid DNxHD and Apple ProRes, although both divide the image into blocks using DCT.

The full-frame wavelet has a subjective quality advantage over DCTs, so you can compress more without the classic ringing or block-artifact issues. Here are the pixel formats supported:

  • 8/10/16-bit YUV 4:2:2 compressed as 10-bit, progressive or interlaced
  • 8/10/16-bit RGB 4:4:4 compressed at 12-bit progressive
  • 8/16-bit RGBA 4:4:4:4 compressed at 12-bit progressive
  • 12/16-bit CFA Bayer RAW, log encoded and compressed at 12-bit progressive
  • Dual channel stereoscopic/3D in any of the above

Compression ratios between 10:1 and 4:1 are typical, though greater ranges are possible. CineForm is a constant-quality design, so bit rates vary as needed for the scene. Most other intermediate video codecs, by contrast, use a constant bit-rate design, in which quality varies depending on the scene.
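
Here is a conceptual sketch, with entirely made-up numbers, of the difference between the two designs: a constant-quality encoder lets the bits spent track scene complexity, while a constant-bit-rate encoder holds the spend fixed and lets quality drift:

```python
# Conceptual sketch only; this is not CineForm's actual rate control,
# and every number below is invented for illustration.
frame_complexity = [1.0, 1.1, 3.5, 3.2, 0.8]   # hypothetical per-frame "busyness"

BITS_PER_COMPLEXITY = 100_000    # hypothetical constant-quality cost factor
CBR_FRAME_BUDGET = 150_000       # hypothetical fixed per-frame bit budget

for i, c in enumerate(frame_complexity):
    cq_bits = c * BITS_PER_COMPLEXITY         # rate varies, quality stays constant
    cbr_quality = CBR_FRAME_BUDGET / cq_bits  # rate stays constant, quality varies
    print(f"frame {i}: constant-quality spends {cq_bits:9,.0f} bits | "
          f"constant-bit-rate relative quality {cbr_quality:.2f}")
```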

EXTRA CREDIT

Here’s a link to learn more.



… for Codecs & Media

Tip #689: What Does Video Bit-Depth Determine?

Larry Jordan – LarryJordan.com

Bit-depth determines the maximum number of colors in a video frame.

Image courtesy of VideoMaker.com

So, what is bit depth? Well, essentially it determines the range of possible colors your camera is capable of capturing. The higher the bit depth, the higher the number of possible colors your camera is able to capture, which means smoother gradations and less (or no) color banding. However, the higher the bit depth, the larger the files, which means a higher need for storage space and possibly a more powerful computer to handle all of the data.
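
The arithmetic behind “more possible colors” is simple: each added bit doubles the number of tonal steps per channel, and the total color count is the product across the three channels. A quick sketch:

```python
# Levels per channel and total RGB colors at common video bit depths
# (per-channel values; chroma subsampling is ignored here).
for bits in (8, 10, 12):
    levels = 2 ** bits           # tonal steps per color channel
    colors = levels ** 3         # every R x G x B combination
    print(f"{bits}-bit: {levels:,} levels/channel -> {colors:,} colors")
# 8-bit:    256 levels/channel ->     16,777,216 colors
# 10-bit: 1,024 levels/channel ->  1,073,741,824 colors
# 12-bit: 4,096 levels/channel -> 68,719,476,736 colors
```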

Keep in mind, though, that even if you go with a camera whose file formats support higher bit depths, that doesn’t necessarily automatically translate to amazing image quality. There are many other factors that play a role in both gamut and color depth, including color sampling and data rate.

If you’re still confused about whether or not you need a camera that offers high bit depth, keep these things in mind.

  • Color banding is ugly.
  • Can you handle all that extra data?
  • Higher bit depth affords you more latitude during color grading.

EXTRA CREDIT

Here’s a link to a VideoMaker presentation, on NoFilmSchool.com, that explains bit depth in three minutes.



… for Codecs & Media

Tip #690: H.264 vs. HEVC – What’s the Difference?

Larry Jordan – LarryJordan.com

Smaller file sizes and greater image quality, but requiring more CPU power to encode or decode.

This tip, written by Ana Rodrigues, first appeared on Medium.com. This is a summary.

Conceived to boost video streaming, High Efficiency Video Coding (HEVC), or H.265, is a video compression standard designed to substantially improve coding efficiency when compared to Advanced Video Coding (AVC), or H.264.

With this new format, image resolutions around 8192×4320 become possible to display and stream. HEVC reduces file sizes by 40-60%, depending upon frame size. As well, when compressed to the same file size or bitrate as H.264, HEVC/H.265 delivers significantly better visual quality.

However, apart from the fact that the codec is patented by various parties and is associated with high licensing fees, HEVC/H.265 comes with a trade-off: it requires almost 10x more computing power.

Both codecs work by comparing different parts of a video frame to find areas that are redundant within subsequent frames. These areas are replaced with short descriptions of the original pixels. What distinguishes HEVC/H.265 from H.264 is the ability to expand or shrink these areas into bigger or smaller blocks, called coding tree units (CTUs) in HEVC/H.265. CTU sizes can range from 4×4 to 64×64, whereas H.264 only allows a maximum block size of 16×16 (the CTU is a feature particular to HEVC). Improved CTU segmentation, along with better motion compensation and spatial prediction, requires much more signal-processing capability for compression, but it has significantly less impact on the amount of computation needed for decompression. Motion-compensated prediction, another major advance in HEVC/H.265, references blocks of pixels in another area of the same frame (intra prediction) or in another frame (inter prediction).
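
To visualize the CTU idea, here is a toy quadtree split in Python. This is a sketch of the concept only, not HEVC’s real partitioning rules: a 64×64 block keeps splitting into four sub-blocks wherever the content is busy (high variance), down to a 4×4 minimum:

```python
import numpy as np

# Toy quadtree split in the spirit of HEVC's coding tree units.
# NOT the real HEVC partitioning logic; the variance threshold is invented.
def split(block: np.ndarray, x=0, y=0, min_size=4, threshold=200.0):
    size = block.shape[0]
    if size > min_size and block.var() > threshold:     # busy block: subdivide
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += split(block[dy:dy + half, dx:dx + half],
                                x + dx, y + dy, min_size, threshold)
        return leaves
    return [(x, y, size)]                               # flat block: one unit

rng = np.random.default_rng(1)
ctu = rng.integers(0, 256, (64, 64)).astype(float)   # stand-in 64x64 luma CTU
print(len(split(ctu)), "leaf blocks")                # noisy input -> many 4x4 leaves
```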



… for Codecs & Media

Tip #691: Compare Post-Production Codecs

Larry Jordan – LarryJordan.com

Compare Cineform, DNx, ProRes, DPX and Uncompressed; all in one table.

This tip, written by David Kong, first appeared in Frame.io Insider. This is a summary.

The team at Frame.io pulled together a list of more than 50 of the most common intermediate codecs used in video post-production, so that you can compare codecs against each other.

This covers intermediate codecs, not camera codecs. Each company publishes its own specifications in different formats, but the Frame.io team scoured the Internet and brought them all together on a single page. If you want to compare ProRes vs DNxHD, ProRes vs Cineform, DNxHD vs. DPX, or any other combination, this table can help you choose the right codec for your next project.

Click the link above to view the comparison table.



… for Codecs & Media

Tip #675: Which Codecs Support Alpha Channels

Larry Jordan – LarryJordan.com

Not all codecs support transparency. When you need it, use one of these.

To include transparency in video, you need to create it in software which supports alpha (transparency) channels. These include Final Cut, Motion, Premiere, After Effects, Avid and many other professional editing packages.

Then, you need to choose a codec which also supports alpha channels. Not all of them do.

Rocketstock has compiled a list, though not all of these are video codecs:

  • Apple Animation
  • Apple ProRes 4444
  • Avid DNxHD
  • Avid DNxHR
  • Avid Meridien
  • Cineon
  • DPX
  • GoPro Cineform
  • Maya IFF
  • OpenEXR Sequence With Alpha
  • PNG Sequence With Alpha
  • Targa
  • TIFF
