… for Codecs & Media

Tip #1433: Uncompressed vs. Raw vs. Log Video

Larry Jordan – LarryJordan.com

Not all cameras shoot all formats. Here’s how to choose.

Image courtesy of Bruno Massao from Pexels.com


This article, written by Andy Shipsides, first appeared in HDVideoPro.com. This is a summary.

NOTE: Andy is the Chief Technology Officer of the camera rental and production house AbelCine.

With so many cameras these days offering different recording options, combined with the popularity of external recorders, it’s no wonder there are a lot of questions about this topic.

To really answer the question, and to understand the difference between all of these formats, we need a little background. ARRI’s ALEXA camera is unique in that it can output raw, output uncompressed video, and record in a Log format, so I’ll use that camera as the example throughout this discussion. Let’s start with raw, which comes first for many reasons.

So what is raw anyway? Simply put, it’s just sensor data before any image processing. In a single-sensor camera, like the ALEXA, color is produced by filtering each photosite (or pixel) to produce either red, green or blue values. The color pattern of the photosites most often used is the Bayer pattern, invented by Dr. Bryce E. Bayer at Kodak. The raw data in a camera like this represents the value of each photosite. Because each pixel contains only one color value, raw isn’t viewable on a monitor in any discernible way. In a video signal that we can see on a monitor, each pixel contains full color and brightness information; video can tell each pixel on a monitor how bright to be and what color. This means that raw isn’t video. Raw has to be converted to video for viewing and use.

NOTE: The “debayering” (demosaicing) process converts raw sensor data into video for viewing.
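
To make the idea concrete, here’s a deliberately simplified sketch in Python (using NumPy). It collapses each 2×2 RGGB block into one full-color pixel at half resolution; real debayering interpolates at full resolution with far more sophisticated filtering, so treat this as an illustration only:

  import numpy as np

  def debayer_half(raw):
      # Toy demosaic: collapse each 2x2 RGGB block into one RGB pixel,
      # producing a half-resolution image. Assumes an RGGB Bayer layout.
      r  = raw[0::2, 0::2]               # red photosites
      g1 = raw[0::2, 1::2]               # green photosites, even rows
      g2 = raw[1::2, 0::2]               # green photosites, odd rows
      b  = raw[1::2, 1::2]               # blue photosites
      g  = (g1 + g2) / 2.0               # average the two green samples
      return np.dstack([r, g, b])        # every output pixel now has R, G and B

  # A fake 4x4 "sensor" of 12-bit values, just to show the shapes involved
  sensor = np.random.randint(0, 4096, (4, 4)).astype(np.float32)
  print(debayer_half(sensor).shape)      # (2, 2, 3)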

Raw data isn’t necessarily uncompressed. In fact, it’s usually compressed. The RED cameras shoot in REDCODE, which has compression options from 3:1 to 18:1. Likewise, Sony’s F65 has 3:1 and 6:1 compression options in F65RAW mode. The raw data is compressed in much the same ways that traditional video is compressed, and the process does have some effect on image quality.

Raw data usually has a high bit depth, between 12- and 16-bit, while video is usually 8- or 10-bit. In RGB (4:4:4) video, each pixel contains full color and brightness information, which would make files rather large at 16-bit depth. So, video is generally reduced in bit depth. Additionally, color information is generally reduced as well, from 4:4:4 to 4:2:2. Both of these reductions are forms of compression that happen in the camera, even before recording. The standard HD-SDI output on a professional camera is considered uncompressed; however, the specification for a single HD-SDI link at 1920×1080 is 10-bit 4:2:2.
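
A little arithmetic shows why these reductions matter. Here’s a minimal sketch; the 25 fps frame rate is just an assumption for the example:

  def data_rate_mbps(width, height, fps, bit_depth, samples_per_pixel):
      # Uncompressed video payload in megabits per second.
      # 4:4:4 carries 3 samples per pixel; 4:2:2 averages 2 (full-res luma,
      # half-res chroma). Blanking and transport overhead are ignored.
      return width * height * samples_per_pixel * bit_depth * fps / 1e6

  print(data_rate_mbps(1920, 1080, 25, 10, 2))   # 10-bit 4:2:2 -> ~1037 Mbps
  print(data_rate_mbps(1920, 1080, 25, 16, 3))   # 16-bit 4:4:4 -> ~2488 Mbps

At roughly 2.4 times the data, it’s easy to see why bit depth and color are reduced before output, even on links we call “uncompressed.”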

Many cameras, including those from Sony, Canon, RED and ARRI, have a Log recording mode. When Log mode is activated, the image becomes flat and desaturated, but you can still see it on a monitor. This should clue you in that Log recording is standard video recording, in the sense that every pixel displays color and brightness information. Log isn’t raw; it’s video. However, it’s a special way of capturing that maximizes the tonal range of the sensor.

Raw is not Log because Log is in a video format, and raw is not video. Raw data has no video processing baked in and has to be converted into video for viewing. Log is video and has things like white balance baked into it. They’re very much not the same; however, they’re both designed to get the most information out of the sensor. Raw is getting everything the sensor has to offer; likewise, Log curves are designed to get the most tonal range out of the sensor. While they’re very different formats, they have the same general application. Both raw and Log can be uncompressed, but that depends on the recording device.
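
To see what a Log curve does, here’s a generic log-style encode in Python. This is not any manufacturer’s actual formula (ARRI, Sony and Canon each publish their own); it just shows the common idea of lifting shadows and compressing highlights so more of the sensor’s tonal range survives a limited bit depth:

  import numpy as np

  def log_encode(x, a=200.0):
      # Generic, illustrative log curve. x is linear scene data normalized
      # to 0..1; 'a' controls how strongly shadows are expanded.
      # Maps 0 -> 0 and 1 -> 1, with mid-tones pushed well above linear.
      return np.log1p(a * x) / np.log1p(a)

  linear = np.array([0.0, 0.05, 0.18, 0.5, 1.0])
  print(np.round(log_encode(linear), 3))   # e.g. 18% gray lands near 0.68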



… for Random Weirdness

Tip #1361: The Vocabulary of the Gimbal

Larry Jordan – LarryJordan.com

Knowing how to explain what you are doing improves the quality of your images.

Image courtesy of Learn Online Video.


This article, written by Oakley Anderson-Moore, first appeared in NoFilmSchool.com. This is a summary.

Not so long ago, epic cinematic shots were mainstays of jibs, cranes, and dollies. Now, a lone filmmaker with a few hundred bucks can pull off Hollywood-caliber movement—with one hand and a gimbal.

However, just buying a gimbal doesn’t equal good cinematography. You have to know how to use it, and how to communicate to your cast and crew what you are planning.

Steve Wright at Learn Online Video provides 10 moves – and names for them – to help us communicate better.

  1. The Follow
  2. The Reverse Follow
  3. Step In Reveal
  4. Mini Jib Reveal
  5. Side Track
  6. Chest Transition
  7. Soft Focus Reveal
  8. Wipe Transition
  9. The Orbit
  10. The Fake Drone

EXTRA CREDIT

The article includes details on each shot, plus a video that illustrates them in use.



… for Codecs & Media

Tip #1337: What Does “Low Resolution Proxy” Mean?

Larry Jordan – LarryJordan.com

The small file size of proxy files is due to deeper compression and reduced frame size.

An example of four different proxy frame sizes: Full, 1/2, 1/4 & 1/8.


We often talk about proxy files being “lower resolution.” But what does that actually mean?

Proxy files are designed to provide reasonable images for editing, while taking less space to store and fewer computing resources to display. This is accomplished using deeper compression settings, changing video codecs (for example, using H.264), and reducing image resolution.

NOTE: Audio is always stored at the highest quality, even in a proxy file.

For a long time, I would say the words “lower resolution” without really understanding what they meant. It wasn’t until I created a graphic for one of my webinars that I understood what was going on.

A “lower resolution” proxy file is a file created using a smaller frame size than the original image. For example, using a 1920 x 1080 pixel frame size for the source video:

  • 1/2 resolution = a frame size of 960 x 540 pixels
  • 1/4 resolution = a frame size of 480 x 270 pixels
  • 1/8 resolution = a frame size of 240 x 135 pixels

Obviously, the smaller the frame size, the smaller the proxy file, but the less image detail is displayed.
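
The fractions above are simple to compute, and they shrink faster than you might expect: pixel count falls with the square of the scale factor. A quick sketch:

  def proxy_size(width, height, fraction):
      # Frame size of a proxy at a given fraction of the original.
      return int(width * fraction), int(height * fraction)

  for f in (1/2, 1/4, 1/8):
      w, h = proxy_size(1920, 1080, f)
      print(f"1/{round(1/f)} resolution: {w} x {h} ({f*f:.1%} of the pixels)")

So a 1/2-resolution proxy carries only 25% of the original pixels, and a 1/8-resolution proxy under 2%.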

Most of the time, I use 1/2 frame size for my proxy files. However, if I’m doing multicam work, where the on-screen images are small to begin with, I’ll use 1/4 frame size. This allows me to play more cameras at the same time without dropping frames.



… for Adobe Premiere Pro CC

Tip #1265: Dublin Core Metadata

Larry Jordan – LarryJordan.com

Dublin Core provides a standardized way to identify and find resources, including media.

The DCMI logo – note the use of 15 dots.


I was wondering what Dublin Core metadata actually is. So, I looked it up.

The original Dublin Core of thirteen (later fifteen) elements was designed to standardize key labels about resources. It was first published in a report from a workshop in 1995. It was formalized into ISO, ANSI/NISO and IETF standards a few years later.

NOTE: “Dublin” refers to Dublin, Ohio, USA where the schema originated during the 1995 invitational OCLC/NCSA Metadata Workshop. “Core” refers to the metadata terms as “broad and generic being usable for describing a wide range of resources”.

The resources described using the Dublin Core may be digital resources (video, images, web pages, etc.), as well as physical resources such as books or CDs, and objects like artworks.

From this initial paper, the Dublin Core Metadata Initiative (DCMI) evolved into the role of “de facto” standards agency by maintaining its own, updated documentation for DCMI Metadata Terms. The DCMI Usage Board currently serves as the maintenance agency for the ISO spec.

For more than twenty years, the DCMI community has developed and curated Dublin Core Specifications. More recently, DCMI has become recognised as a trusted steward of metadata vocabularies, concept schemes and other metadata artefacts, and has taken responsibility for other community-created specifications. DCMI remains committed to this important work, and is actively developing more efficient and sustainable approaches to the stewardship of these standards.
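
To make this concrete, here’s what a Dublin Core record for a video clip might look like, sketched as a Python dictionary. The fifteen keys are the real element names; every value is invented for the example:

  # A hypothetical record using the fifteen Dublin Core element names
  # to describe a video clip. All values below are made up.
  clip_metadata = {
      "title":       "Harbor Sunrise B-Roll",
      "creator":     "Jane Smith",
      "subject":     "harbor; sunrise; b-roll",
      "description": "Static wide shot of the harbor at dawn.",
      "publisher":   "Example Productions",
      "contributor": "John Doe (location sound)",
      "date":        "2020-06-01",
      "type":        "MovingImage",        # from the DCMI Type Vocabulary
      "format":      "video/mp4",
      "identifier":  "clip-0042",
      "source":      "camera card A003",
      "language":    "en",
      "relation":    "project-harbor",
      "coverage":    "Dublin, Ohio, USA",
      "rights":      "(c) 2020 Example Productions",
  }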

EXTRA CREDIT

Here’s a link to the DCMI website.



… for Codecs & Media

Tip #865: What is HDMI?

Larry Jordan – LarryJordan.com

HDMI is an uncompressed audio and video standard for connecting devices.

Three types of HDMI connectors: Type D (Micro), Type C (Mini) and Type A (from left to right).


We’ve used it for years, but what, exactly, is HDMI? At its simplest, HDMI is a standard used to connect high-definition video devices.

More specifically, HDMI (High-Definition Multimedia Interface) is a proprietary audio/video interface for transmitting uncompressed video data and compressed or uncompressed digital audio data from an HDMI-compliant source device, such as a display controller, to a compatible computer monitor, video projector, digital television, or digital audio device. HDMI is a digital replacement for analog video standards.

NOTE: See the use of “uncompressed” in the preceding paragraph. HDMI may be easy to use, but because the video it carries is uncompressed, it also delivers the highest possible quality.

Several versions of HDMI have been developed and deployed since the initial release of the technology in 2003, but all use the same cable and connector. In addition to improved audio and video capacity, performance, resolution and color spaces, newer versions have optional advanced features such as 3D, Ethernet data connection, and CEC extensions.

The challenge remains for HDMI to keep up with the constant growth in media technology, specifically larger frame sizes and faster frame rates.
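
Some quick arithmetic shows the scale of that challenge. Here’s a minimal sketch; it ignores blanking intervals and link-encoding overhead, so real links need headroom beyond these numbers:

  def video_gbps(width, height, fps, bits_per_pixel):
      # Raw, uncompressed video payload in gigabits per second.
      return width * height * fps * bits_per_pixel / 1e9

  # 4K at 60 fps with 10-bit RGB (30 bits per pixel):
  print(video_gbps(3840, 2160, 60, 30))   # ~14.9 Gbps

That is already close to the 18 Gbps maximum of HDMI 2.0, which is why newer versions keep raising the link rate.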

As you’ll read in Tip #863, the latest version of HDMI seeks to take our computers and TVs “to infinity … and beyond.”

Here’s a Wikipedia article to learn more.



… for Codecs & Media

Tip #830: Count the Timecode Formats

Larry Jordan – LarryJordan.com

Timecode takes many forms, all with the goal of clearly labeling every frame of video.

The timecode display in Apple Final Cut Pro X.


Most of us are familiar with timecode: A unique label for each frame of video in a clip, expressed as four pairs of numbers: Hours:Minutes:Seconds:Frames (or milliseconds, depending upon format).

While timecode expresses these locations as time values, there is no necessary relationship between timecode and the time of day the image was recorded. Sometimes there is, but it isn’t required.
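
Since timecode is just a frame count dressed up as a clock, converting between the two is simple arithmetic. Here’s a minimal sketch for non-drop-frame timecode (drop-frame, used with 29.97 fps NTSC video, needs extra compensation and is deliberately left out):

  def frames_to_timecode(frame, fps=25):
      # Convert a frame count to HH:MM:SS:FF, non-drop-frame only.
      ff = frame % fps
      total_seconds = frame // fps
      ss = total_seconds % 60
      mm = (total_seconds // 60) % 60
      hh = total_seconds // 3600
      return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

  print(frames_to_timecode(90_000))   # 01:00:00:00 at 25 fps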

Thinking about timecode got me wondering about how many different timecode formats there are. And that took me to Wikipedia.

In video production and filmmaking, Wikipedia writes, SMPTE timecode is used extensively for synchronization, and for logging and identifying material in recorded media. During a film or video production shoot, the camera assistant will typically log the start and end timecodes of shots, and the data generated will be sent on to the editorial department for use in referencing those shots. This shot-logging process was traditionally done by hand using pen and paper, but is now typically done using shot-logging software running on a laptop computer connected to the timecode generator or the camera itself.

The SMPTE family of timecodes is almost universally used in film, video and audio production, and can be encoded in many different formats, including:

  • Linear timecode (LTC), in a separate audio track
  • Vertical interval timecode (VITC), in the vertical blanking interval of a video track
  • AES-EBU embedded timecode used with digital audio
  • Burnt-in timecode, in human-readable form in the video itself
  • CTL timecode (control track)
  • MIDI timecode

Keycode, while not a timecode, is used to identify specific film frames in film post-production that uses physical film stock. Keycode data is normally used in conjunction with SMPTE time code.

NOTE: Rewritable consumer timecode is a proprietary consumer video timecode system that is not frame-accurate, and is therefore not used in professional post-production.

EXTRA CREDIT

All these different timecode formats provide one key reason why we need to copy all files from a camera card to our hard disk, not just the media: many times, timecode is not embedded in the video file itself, but stored in separate metadata files on the card.

Also, aside from BWAV (Broadcast WAV) files, most audio formats do not support timecode.

Here’s a Wikipedia article to learn more.



… for Random Weirdness

Tip #811: Talent & Location Releases – What’s Needed?

Larry Jordan – LarryJordan.com

Nothing ruins a great production like a lack of releases.


This article first appeared in MotionArray.com. This is an excerpt.

Picture this: you just had the best video shoot ever. You hired the perfect actress to play a part in your commercial. She was a natural in every way. The setting was outdoors on a perfect day, and you got exactly what you needed. You have two days to edit everything together and deliver the spot to the client.

The day before it’s due, you get a call from the talent who says they no longer want to be a part of the commercial. And you never got them to sign the proper release. Guess what? You are in big trouble.

It may seem that when someone says they’re in, everything will be fine. But without the proper legal documents, you have no power when it comes to release and usage. And you won’t realize what a headache this can be until it happens to you.

Getting proper talent and location releases signed by the right people is one of the most important and often overlooked aspects of any video or film shoot. So, don’t let it slip through the cracks while you are busy working on shot composition or directing talent.

This article explains what talent and location releases are and when you need them, and it links to forms you can use on your next project.



… for Random Weirdness

Tip #809: A Beginner’s Guide to Frame Rates

Larry Jordan – LarryJordan.com

Frame rates are fundamental to video – and difficult to change.


I’ve written a lot about frame rates, with my key point being that changing frame rates in post is, almost always, difficult and unsatisfactory. In this PremiumBeat.com article, written by Lewis McGregor, you’ll discover the basics of frame rates. Along the way, Lewis illustrates where they came from and how to decide which frame rate to use for your next project.

As Lewis writes: “Different mediums and different regions all demand different frame rates for various reasons. But, the number of frames per second you decide to give to your shot can also drastically change how your project looks and what you can do with the footage.”

Click the link above to read more.



… for Codecs & Media

Tip #814: What is the VP9 codec?

Larry Jordan – LarryJordan.com

VP9 is an open, royalty-free competitor to HEVC.


One of the complaints heard after WWDC was that Apple made no mention of VP9 during the two keynotes. Still, this got me wondering: what is VP9?

According to Wikipedia:

VP9 is an open and royalty-free video coding format developed by Google. It is supported in Windows, Android and Linux, but not Mac or iOS.

VP9 is the successor to VP8 and competes mainly with MPEG’s High Efficiency Video Coding (HEVC/H.265).

In contrast to HEVC, VP9 support is common among modern web browsers with the exception of Apple’s Safari (both desktop and mobile versions). Android has supported VP9 since version 4.4 KitKat.

In May 2017, Jan Ozer of Streaming Media Magazine ran an offline encoder comparison between libvpx, two HEVC encoders and x264, with encoding parameters supplied or reviewed by each encoder vendor (Google, MulticoreWare and MainConcept, respectively), using Netflix’s VMAF objective metric. He concluded that “VP9 and both HEVC codecs produce very similar performance” and that, “particularly at lower bitrates, both HEVC codecs and VP9 deliver substantially better performance than H.264.”
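
If you want to experiment with VP9 yourself, the usual open-source route is ffmpeg’s libvpx-vp9 encoder. Here’s a minimal sketch in Python; it assumes ffmpeg is installed with libvpx support, and the CRF value of 30 is just a starting point, not a recommendation:

  import subprocess

  def encode_vp9(src, dst, crf=30):
      # Constant-quality VP9 encode: "-b:v 0" combined with "-crf" is
      # ffmpeg's documented constant-quality mode for libvpx-vp9.
      subprocess.run(
          ["ffmpeg", "-i", src, "-c:v", "libvpx-vp9",
           "-crf", str(crf), "-b:v", "0", dst],
          check=True,   # raise if ffmpeg reports a failure
      )

  encode_vp9("master.mov", "delivery.webm")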

Here’s a link for more information.



… for Codecs & Media

Tip #813: What is Handbrake?

Larry Jordan – LarryJordan.com

Handbrake is a free, general-purpose media compression program.


HandBrake is an open-source video transcoder available for Linux, Mac, and Windows. Everyone can use HandBrake to make videos for free.

HandBrake takes videos you already have and makes new ones that work on your mobile phone, tablet, TV media player, game console, computer, or web browser—nearly anything that supports modern video formats.

HandBrake does:

  • Convert nearly any video to MP4 or MKV
  • Crop and resize video
  • Restore old and low-quality video
  • Remove combing artifacts caused by interlacing and telecine
  • Pass-through audio without conversion for certain audio types
  • Downmix discrete surround sound to matrixed surround or stereo
  • Adjust audio volume levels and dynamic range for certain audio types
  • Preserve existing subtitles, and add or remove soft subtitles (subtitles stored as text)

HandBrake does not:

  • Combine multiple video clips into one
  • Pass-through video without conversion (video is always converted)
  • Create Blu-ray, AVCHD, or DVD discs

HandBrake also does not defeat or circumvent copy protection of any kind. It does not work with video files employing Digital Rights Management (DRM). This includes, but is not limited to, copy-protected content from iTunes, Amazon Video, Netflix, or other online providers, and many commercial DVD and Blu-ray discs.
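
HandBrake also ships a command-line version, HandBrakeCLI, which is handy for batch work. Here’s a minimal sketch in Python; the preset name assumes a recent HandBrake release, so run HandBrakeCLI --preset-list to see what your copy offers:

  import subprocess

  def transcode(src, dst, preset="Fast 1080p30"):
      # Hand the conversion to HandBrakeCLI; check=True raises on failure.
      subprocess.run(
          ["HandBrakeCLI", "-i", src, "-o", dst, "--preset", preset],
          check=True,
      )

  transcode("interview_raw.mov", "interview_web.mp4")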

Here’s the link to learn more.

