One of the things I freely admit is that I am afraid of/confused by the math used in digital video. As my career has steered more toward motion over the years, I’ve begrudgingly tried to understand the many properties of video technology – which involves a bit of math.
The world of video tech has a bunch of important numerical properties that can be overwhelming. For example, video is recorded at many different frame rates, including 23.976, 29.97, and 60 frames per second (FPS). Video frames can be “P” (progressive) or “I” (interlaced). Size also matters, as video can be shot in a variety of resolutions: HD (720), Full HD (1080), and up into 4K, 8K and beyond.
What is an artist to do? There are numbers and abbreviations everywhere! Fear not – we will tackle these issues one at a time and remove the fear. Let’s start with the fundamentals: what size frame video are you shooting and at what frame rate?
Video Frame Sizes, Resolutions, and Frame Rates for Beginners
Modern digital camera sensors can produce video in a range of frame sizes, allowing the artist to choose either full frame (in the still photography sense), Super 35 (which is considered “full frame” in the motion picture world but “cropped” in the photography world), or some other cropped output. Each camera records information across either all or part of its sensor, the entirety of which corresponds to the maximum number of pixels from which the camera can record information. For example, the Canon 5D Mark IV has a sensor that is 36mm x 24mm and can produce RAW still images that are 6720 x 4480 pixels, or ~30 megapixels. But what about video?
Interlaced vs Progressive
The “P” versus “I” designation after the frame size value (e.g. “720p / 1080i”) refers to whether the frames are interlaced or progressive.
With interlacing, each scan captures only half the picture, called a “field” rather than a frame: the sensor scans the top line, skips a line, scans the next, and so on down the field. One field carries the odd-numbered lines and the next carries the even-numbered lines. Your brain fills in the blanks, but you’re only getting, essentially, half the image at any moment. This is a relic of the cathode-ray tube TV era and can give less pleasing results than a progressive scan (though with less data throughput, which is its benefit). When you see 60i, that means 60 fields per second are recorded, but that equates to only 30 frames per second because it takes two fields to make a frame.
With progressive scanning, the frame is drawn in order from the top row to the bottom; every pixel is refreshed in sequence and one full image is displayed after another in rapid succession. This is much more like traditional film, where full frames fly by and the faster they move, the smoother the motion looks – like a flip book, which we’ll talk about again soon. Unlike 60i, 60p delivers a true 60 full frames per second.
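The field-versus-frame arithmetic above can be sketched in a few lines of Python (a minimal illustration; the function name is my own):

```python
# In interlaced video, two fields (odd lines + even lines) combine into
# one full frame, so 60i yields only 30 complete frames per second.
# In progressive video, every scan is already a full frame.

def effective_frames_per_second(rate: float, interlaced: bool) -> float:
    """Return full frames per second for a given scan mode."""
    return rate / 2 if interlaced else rate

print(effective_frames_per_second(60, interlaced=True))   # 60i -> 30.0 frames/s
print(effective_frames_per_second(60, interlaced=False))  # 60p -> 60 frames/s
```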
Resolution and Quality
1080p and 1080i share the same frame size of 1920 × 1080 pixels, but they will not produce the same quality results. I versus P matters, as does frame rate. The specs of the 5D Mark IV above show different frame rate and frame size options; note the main frame size options of 4K, FHD, and HD. Standard frame sizes change over time, and there are always outliers (I’m looking at you, IMAX). The main choices are SD (480), HD (720), Full HD (1080), 4K Ultra HD, 6K, and 8K, with some additional outliers in between and beyond. But these are the accepted “standards.”
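For reference, here are the common 16:9 pixel dimensions behind those frame size names, and how quickly the pixel counts grow (a small sketch; the dictionary is mine, the dimensions are the widely used consumer ones):

```python
# Common 16:9 pixel dimensions for the frame-size names above.
resolutions = {
    "HD (720p)":       (1280, 720),
    "Full HD (1080p)": (1920, 1080),
    "UHD 4K":          (3840, 2160),
    "UHD 8K":          (7680, 4320),
}

for name, (w, h) in resolutions.items():
    print(f"{name}: {w}x{h} = {w * h / 1e6:.1f} MP")

# Each step up roughly quadruples the pixel count:
# UHD 4K has 4x the pixels of Full HD, and 8K has 4x the pixels of 4K.
```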
You may notice in the chart above the absence of a video output frame size that matches the camera’s full-frame (6720 x 4480) potential – why is this? The sensor’s output is being downsampled (or cropped) to a smaller recording size because the camera can’t handle the enormous computational task of processing and recording 24-30 full-sensor frames of information per second. Higher-end video-specific models, such as the Canon C300 and the RED Scarlet, have more sophisticated electronics and processing and can record massive video files at many frame rates and sizes. In short, the 5D Mark IV can only handle so much when recording video.
Understanding Frame Rates
Now, looking at the frame rate section of the same chart above: how many frames will be recorded and later displayed to your viewers each second? What options do you have, based on your choice of frame size? In other words, how does your camera’s processing circuitry handle the vast amounts of information and distill it down into video files at the frame sizes it can record?
Before we get into the math, let’s rise above the technical for a moment and consider the aesthetic. Imagine a flip book. Yes, a paper book (remember those?) with drawings on each page of a person walking. If you wanted the walking person to look smooth to your viewer, how many pages would you have to flip through each second?
Aside from fans of stop-motion animation, most viewers will not describe an animation as “pleasing” until around 15 frames (or pages, in this example) per second. The majority of the population seems to like a sweet spot of 24-30 frames per second. How was this chosen as a standard and what do you gain or lose from moving beyond it?
Frame Rates at the Dawn of Cinema
To fully understand frame rate, let’s go back in time to the dawn of the film era: the 1920s. My great, great grand-uncle Thomas Bell Meighan was a star in the silent film age (See Male and Female with co-star Gloria Swanson). His films were likely shot and shown at a frame rate of between 16 and 24 frames per second.
Back in Meighan’s day, cinematographers hand-cranked celluloid film through cameras recording at around 60 feet per minute (16 FPS), and projectionists hand-cranked the films through projectors in theaters at this same rate, or at an even higher one. That’s right: projectionists could vary the frame rate to match the action taking place in the film as it was projected, and they could also run the films faster to fit more showings into each day.
Sound’s Effect on Frame Rates
As sound began to be incorporated into films, varying the projection rate was no longer possible – people do not comfortably tolerate sound recordings that vary in speed – so a standard was needed. 16 FPS was initially chosen in 1917 by the Society of Motion Picture Engineers (SMPE, today SMPTE). Projectionists typically kept cranking the films faster in theaters, however, so the society later set a standard of ~22 frames per second (80 feet per minute) for projection after consulting Warner Brothers’ projectionists about the typical projection rates in use at the time.
Frame Rates and Resolution Limitations
Referring back to the chart of specs for the 5D Mark IV, your choices for frame rates are 23.98 (i.e., 23.976), 24, 29.97, 59.94, and 119.9 FPS. Each is only available at certain resolutions. Notice that if you want to shoot at an ultra-quick 119.9 FPS, you’re limited to HD (720) on the 5D Mark IV – a much lower resolution than the camera’s potential maximum. You can’t record all of these frame rates at whatever frame size you want! You are limited to the frame rate/size combinations given in the specs of your particular camera.
So why such specific frame rate values? The complete answer is far too complicated for this post. In short, the different precise rates have to do with sound/audio syncing, television broadcast standards for sound and color, and conversion between actual film and over-the-air TV radio-wave broadcast frequencies. Today, the TV broadcast standard value is 29.97 FPS. Film is 24 FPS/23.976. Which one to choose depends on your desired result. Shooting for a client? Always be sure to ask your editor and producers before you start rolling!
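Those “odd” decimal rates aren’t arbitrary – they are exact rational numbers, the nominal rate slowed by a factor of 1000/1001 for NTSC color broadcast compatibility. A quick sketch:

```python
from fractions import Fraction

# The NTSC color standard slowed nominal rates by a factor of 1000/1001,
# which is where 23.976, 29.97, and 59.94 come from.
ntsc = Fraction(1000, 1001)

print(round(float(24 * ntsc), 3))  # 23.976 (the "23.98" in camera menus)
print(round(float(30 * ntsc), 3))  # 29.97  (the TV broadcast standard)
print(round(float(60 * ntsc), 3))  # 59.94
```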
In art, though, rules are meant to be bent if not broken. The different available frame rates can be manipulated or transcoded to the artistic choice of the filmmaker. You can film a skier at 119.9 FPS and play it back in a 24 FPS timeline for slow motion. Filmmakers used to have to use super high-end cameras to get these frame rates. Interested in trying this out? Check out BL’s collection of cameras that can shoot at frame rates of 100 or above in Full HD (1920 x 1080) resolution or above.
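The slow-motion math from the skier example is simple division (a minimal sketch; the function name is mine):

```python
# Shooting at a high capture rate and playing back on a slower timeline
# stretches each second of action by capture_fps / timeline_fps.

def slowdown(capture_fps: float, timeline_fps: float) -> float:
    """How many times slower the action plays back."""
    return capture_fps / timeline_fps

# 119.9 FPS footage in a 24 FPS timeline: ~5x slow motion, so one second
# of a skier's jump stretches to about five seconds on screen.
print(round(slowdown(119.9, 24), 2))  # 5.0
```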
Many of us have a capable camera in our pocket all day long. The Apple iPhone 7 can shoot a whopping 240 FPS at 720p and 120 FPS at Full HD 1080p. With iOS 10 and an iPhone 6 or newer, you can change a setting (Settings > Photos & Camera) to shoot at the higher 240 FPS rate. Frame rates are also manipulated for artistic purposes, taking advantage of how the audience “feels” watching footage projected at different frame rates. The human eye/brain begins to discern smooth motion at around 10-12 FPS and above. 24 FPS is easily perceived as motion, and the motion gets even smoother at 50+ FPS.
Choosing a Shutter Speed and the 180 Degree Shutter Rule
A certain amount of motion blur is required to fool the brain into thinking that what it’s looking at is really a moving image. But just how much motion is “real” and how can the filmmaker change settings to achieve different creative results?
Consider the “180º Shutter Rule,” which hearkens back to the days of the mechanical shutter and physical film. When shooting film at 24 FPS, the shutter in most cameras was a spinning disc with a half-circle (180º) opening, with the film advancing behind it at 24 frames per second. Each frame was exposed only while the open half of the disc passed in front of it – half of each frame’s 1/24th-of-a-second interval – which works out to a shutter speed of 1/48th of a second (~1/50th). This gave a pleasing amount of motion blur to the footage when played back at 24 FPS and didn’t sacrifice exposure levels.
If you are shooting on a 24 FPS timeline frame rate today, you typically don’t want to shoot the individual frames with your shutter open any faster than 1/50th of a second. For 29.97 (call it 30…) you’d want to stick to 1/60th of a second or slower. This adheres to the 180º Shutter Rule. It provides a pleasing picture with plenty of motion blur. It is a great starting point for just about any beginner video project.
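The rule generalizes to any shutter angle: exposure time per frame is the open fraction of the shutter circle divided by the frame rate. A small sketch (the function name is mine):

```python
# 180-degree shutter rule as a formula:
#   shutter_speed = (angle / 360) / fps
# At 180 degrees the shutter is open for half of each frame interval.

def shutter_speed(fps: float, angle: float = 180.0) -> float:
    """Exposure time in seconds for a given frame rate and shutter angle."""
    return (angle / 360.0) / fps

print(round(1 / shutter_speed(24)))      # 48 -> 1/48 s (use ~1/50 in practice)
print(round(1 / shutter_speed(30)))      # 60 -> 1/60 s
print(round(1 / shutter_speed(24, 45)))  # 192 -> 1/192 s (a narrow 45-degree angle)
```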
Again, rules are meant to be broken. If you enjoyed the war scenes in Saving Private Ryan for their realism and grit, you might be pleased to know that they were purposefully shot by Steven Spielberg with a smaller shutter angle of 45-90º for some scenes. This meant using faster-than-normal shutter speeds with less motion blur to increase the staccato nature of the footage.
The Rolling Shutter Effect and How Shutter Speed Affects It
One of the plagues of the CMOS sensor video world is rolling shutter. Most video cameras on the market today use CMOS sensors. These do not record visual information across the entire sensor at once, as CCD sensors do. Instead, they record from the top down across the light-sensitive rows of sensor pixels (sometimes called “sensels”). A group of lines on the sensor is read while other lines are still being exposed. This speeds up processing: it clears the upper rows of the sensor to begin the next frame while the lower ones are still being recorded. How quickly these lines can be read depends on the frame rate and your sensor’s overall architecture.
Below is a great 1 minute video from DPReview’s YouTube channel demonstrating the rolling shutter effect compared between the 5D Mark IV and the 1D X Mark II. It also quickly compares the 5D Mark IV to the Sony a6300 and shows how each camera handles “Jello” when the frame rate is changed.
Rolling shutter is a major drawback of CMOS sensors, but what is it? Because of the “top-down” way the sensor records, a fast-moving object filmed from a static position will have shifted laterally across the sensor between the moment the top rows are read and the moment the bottom rows are. This creates an unpleasant “jitter” of the object across the video screen, or distorts its shape markedly.
Rolling shutter artifacts also appear when the camera itself is moving quickly, especially relative to close objects. If the camera is moving, a subtle sideways lean of objects can be seen – buildings shot from a speeding train, for example. Other examples include a camera mounted on a race car or a motorcycle; scenery whipping by close to the speeding vehicle will exacerbate the rolling shutter effect.
Avoiding Rolling Shutter Effects
There are several ways to avoid this unpleasant result in your footage. First, employ some sort of stabilization rig to eliminate camera shake. Examples at the high end of the scale are the MōVI and the Steadicam, while cheaper alternatives popular in the action camera and drone market are the many DIY “wire rope” stabilizer solutions. Either option will help eliminate the “micro jitters” you get from hand-held footage. Changing the camera angle relative to the moving object can also minimize rolling shutter: position yourself so the speeding object moves laterally across your position (and your camera’s sensor) as little as possible. Also, take care not to make excessively fast pans across a scene. Lastly, drag your shutter as much as you can and stay close to the 180º shutter rule; fast-moving objects will retain pleasing motion blur and the effects of rolling shutter will be minimized.
Some filmmakers choose to attack rolling shutter with post-production techniques, such as Warp Stabilizer, the Rolling Shutter Repair tool in Adobe After Effects, or one of the many available plug-ins that tackle this specific issue, like the Turbo Video Stabilizer.
Don’t Rely on Post Production
“Fixing it in post” is never to be relied upon. The best way to avoid rolling shutter in your footage is to keep your camera as stable as possible. Pay attention to what’s moving in your scene. Bump up your frame rate as much as you can, but keep your shutter angle in mind and you won’t notice the rolls as much. If you need to film something like an airplane flying straight towards you, then move away from a rolling-shutter CMOS camera to a global-shutter model such as the Canon EOS C700 GS PL.
In the video below, I briefly show the differences in video resolutions. This is followed by examples of frame rates that produce slow motion. It ends with a visual demonstration of rolling shutter/Jello effect and how it is affected by shutter speed.
To quote early-2000s Saturday Night Live: math will always be a part of the Axis of Evil. But it needn’t be something artists fear. Armed with historical background on basic concepts, filmmakers can make informed choices about frame rates.

Tags: DCI 4K, UHD 4K, videography
Last modified: June 3, 2020