An Introduction to Frame Rates, Video Resolutions, and the Rolling Shutter Effect

One of the things I freely admit is that I am afraid of/confused by the math used in digital video. As my career has steered more toward motion over the years, I’ve begrudgingly tried to understand the many properties of video technology – which involves a bit of math. The world of video tech has a bunch of important numerical properties that can be overwhelming. For example, video is recorded at many different frame rates, including 23.976, 29.97, and 60 frames per second (FPS). Video frames can be “P” (progressive) or “I” (interlaced). Size also matters, as video can be shot in a variety of resolutions: HD (720), Full HD (1080), and up into 4K, 8K, and beyond.

What is an artist to do? There are numbers and abbreviations everywhere! Fear not – we will tackle these issues one at a time and remove the fear. Let’s start with the fundamentals: what size frame video are you shooting and at what frame rate?

Video Frame Sizes, Resolutions, and Frame Rates for Beginners

Modern digital camera sensors can produce video in a bunch of frame sizes, allowing the artist to choose either full frame (in the still photography sense), Super 35 (which is considered “full frame” in the motion picture world but “cropped” in the photography world), or some other cropped output. Each camera records information across either all or part of its sensor, the entirety of which corresponds to the maximum number of pixels from which the camera can record information. For example, the Canon 5D Mark IV has a sensor that is 36mm x 24mm and can produce RAW still images that are 6720 x 4480 pixels, or ~30 megapixels. But what about for video?

Introduction to Full Frame vs Crop Frame Sensors Plus Great Sensor Comparison Resources

The “P” versus “I” designation after the frame size value (e.g. “720p / 1080i”) refers to whether the frames are progressive or interlaced. In interlaced video, each scan (commonly called a “field” rather than a frame) covers only every other line: one field carries imagery on the odd-numbered lines and the next field carries imagery on the even-numbered lines. Your brain fills in the blanks, but you’re only getting, essentially, half the image at any time. This is a relic of the cathode-ray tube TV era and can give less pleasing results than a progressive scan (though with less data throughput, which is its benefit). When you see 60i, that means 60 fields per second are recorded, but that equates to only 30 frames per second because it takes 2 fields to make a frame.
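To make the fields-to-frames math concrete, here is a toy Python sketch (the field contents and names are purely illustrative) of how two interlaced fields weave back into one full frame:

```python
# Toy illustration: weaving two interlaced fields back into one frame.
# Each "line" is just a string here; real fields are rows of pixels.

odd_field = ["scan line 1", "scan line 3"]   # field A: the odd-numbered lines
even_field = ["scan line 2", "scan line 4"]  # field B: the even-numbered lines

# Interleave the fields: odd line, even line, odd line, even line...
full_frame = [line for pair in zip(odd_field, even_field) for line in pair]
print(full_frame)  # four complete lines -> one full frame from two fields

# The frame-rate consequence of needing two fields per frame:
print(60 / 2, "full frames per second from 60i")  # 30.0
```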

With progressive, the frame is scanned row by row from top to bottom; every pixel is refreshed in order and one full image is displayed after another in rapid succession. This is much more like traditional film, where full frames fly by and the faster they move, the smoother the motion looks – like a flip book, which we’ll talk about again soon. Unlike 60i, 60p delivers a true 60 full frames per second.

1080p and 1080i share the same frame size of 1920 × 1080 pixels, but they do not produce the same quality results – I vs P matters, as does frame rate. The specs of the 5D Mark IV above show the different frame rates and frame size options for this popular camera. Note the main frame size options of 4K, FHD, and HD. Standard frame sizes change over time and there are always outliers (I’m looking at you, IMAX), but the main choices are SD (480), HD (720), Full HD (1080), 4K Ultra HD, 6K, and 8K, with some additional outliers in-between and beyond – but these are the accepted “standards.”
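For quick reference, here is a small Python snippet tabulating the pixel dimensions behind those names (these are the common consumer/broadcast standards; DCI cinema variants, like 4096 × 2160 for 4K, differ slightly):

```python
# Common video frame sizes (consumer/broadcast standards).
resolutions = {
    "SD (480)":       (640, 480),    # 4:3 aspect; NTSC DV uses 720 x 480
    "HD (720)":       (1280, 720),
    "Full HD (1080)": (1920, 1080),
    "4K Ultra HD":    (3840, 2160),  # DCI cinema 4K is 4096 x 2160
    "8K Ultra HD":    (7680, 4320),
}

for name, (w, h) in resolutions.items():
    print(f"{name:>15}: {w} x {h} = {w * h / 1e6:.1f} megapixels per frame")
```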

You may notice in the chart above the absence of a video output frame size that matches the camera’s full frame (6720 x 4480) sensor potential – why is this? Some interpolation is taking place (the full sensor readout is scaled down to the smaller video frame size), and the reason is the camera’s limited capacity to handle the enormous computational task of recording 24-30 frames of information per second at the 4K size. Higher-end video-specific models, such as the C300 and the Red Scarlet 8K, have more sophisticated electronics and processing and can record massive video files at many frame rates and sizes. In short, the 5D Mark IV can only handle so much when recording video.
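A back-of-the-envelope calculation shows just how much the camera would have to chew through to record video from the full sensor. The bit depth below is an assumption (typical RAW files run around 14 bits per photosite) and compression is ignored, so treat this as an order-of-magnitude sketch:

```python
# Rough, uncompressed data-rate estimate for full-sensor video
# on a 5D Mark IV (illustrative assumptions, not Canon's figures).

width, height = 6720, 4480   # the full still-image resolution
bits_per_site = 14           # assumed RAW bit depth per photosite
fps = 30

bits_per_second = width * height * bits_per_site * fps
gigabytes_per_second = bits_per_second / 8 / 1e9
print(f"~{gigabytes_per_second:.1f} GB of raw sensor data per second")
# Roughly 1.6 GB/s -- far beyond what the camera's processor and
# card interface can sustain, hence the smaller video frame sizes.
```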

Now, looking at the frame rate section from the same chart above: how many frames will be recorded and later displayed to your viewers each second? What options do you have, based on your choice of frame size? In other words, how does your camera’s processing circuitry handle the vast amounts of information and distill it down into video files at the frame sizes it can record?

Before we get into the math, let’s rise above the technical for a moment and consider the aesthetic. Imagine a flip book. Yes, a paper book (remember those?) with drawings on each page of a person walking. If you wanted the walking person to look smooth to your viewer, how many pages would you have to flip through each second?

Aside from fans of stop-motion animation, most viewers will not describe an animation as “pleasing” until around 15 frames (or pages, in this example) per second. The majority of the population seems to like a sweet spot of 24-30 frames per second. How was this chosen as a standard and what do you gain or lose from moving beyond it?

To fully understand frame rate, let’s go back in time to the dawn of the film era: the 1920s. My great, great grand-uncle Thomas Bell Meighan was a star in the silent film age (See Male and Female with co-star Gloria Swanson). His films were likely shot and shown at a frame rate of between 16 and 24 frames per second.

Back in Meighan’s day, cinematographers hand-cranked celluloid film through cameras recording at around 60 feet per minute (16 FPS) and projectionists also hand-cranked the films in projectors in theaters at this same rate, or at an even higher rate. That’s correct, projectionists could vary the frame rate to match the action taking place in the film as it was projected and they could also run the films faster to fit more showings in each day.

My great, great grand-uncle Thomas Meighan coming off a trolley on set in 1921. Cameramen of this era hand-cranked celluloid film through cameras recording at around 60 feet per minute (16 FPS). The film is exposed to light, frame-by-frame, and then stored in a can. Today we have CMOS sensors reading each pixel site from the top to the bottom of the frame in sequence (if shooting in progressive), then starting over at the top for the next frame. The sensor is exposed to light and the information is stored on a card. Same concept – different materials! – Image courtesy of the North Carolina Collection at Pack Library.

As sound began to be incorporated into films, varying the projection rate was no longer possible because people do not comfortably tolerate sound recordings that vary in speed. A standard was needed, and 16 FPS was initially chosen in 1917 by the Society of Motion Picture Engineers (SMPE, the forerunner of today’s SMPTE). Projectionists typically kept cranking the films in theaters faster, however, so the Society set a projection standard of ~21 frames per second (80 feet per minute) after consulting Warner Brothers’ projectionists about what typical projection rates were in use at the time.
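Those feet-per-minute figures convert to frame rates with a single fact: standard 4-perf 35mm film carries 16 frames per foot. A quick check in Python:

```python
# Converting 35mm film transport speed to frames per second.
FRAMES_PER_FOOT = 16  # standard 4-perf 35mm film

def feet_per_minute_to_fps(feet_per_minute: float) -> float:
    return feet_per_minute * FRAMES_PER_FOOT / 60  # feet/min -> frames/sec

print(feet_per_minute_to_fps(60))  # 16.0  -- the hand-cranked camera rate
print(feet_per_minute_to_fps(80))  # ~21.3 -- the faster projection standard
print(feet_per_minute_to_fps(90))  # 24.0  -- the eventual sound-era standard
```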

Referring back to the chart of specs for the 5D Mark IV, your choices for frame rates are 23.98 FPS (or 23.976), 24, 29.97, 59.94, and 119.9 with each only available at certain resolutions. Notice that if you want to shoot at an ultra-quick 119.9 FPS, you’re limited to HD (720) on the 5D Mark IV – a much lower resolution than the camera’s potential maximum. You can’t record all of these frame rates at whatever frame size you want! You are limited to the combination frame rate/size options given in the specs of your particular camera.

Intro to Video Frame Rates and Frames Per Second Shooting Speeds

So why such specific frame rate values? The complete answer is far too complicated for this post but, in short, the different precise rates have to do with sound/audio syncing, television broadcast standards for sound and color, and conversion between actual film and over-the-air TV radio-wave broadcast frequencies. Today, the North American (NTSC) TV broadcast standard is 29.97 FPS, whereas film is 24 FPS (or its TV-friendly variant, 23.976). Which one to choose depends on your desired result. Shooting for a client? Always be sure to ask your editor and producers before you start rolling!
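One piece of that story is simple enough to show here: when color was added to the NTSC broadcast standard, the nominal frame rates were slowed by a factor of 1000/1001 to keep the color signal from interfering with the audio carrier. That single factor produces every “odd” rate in the 5D Mark IV’s spec sheet:

```python
# The NTSC 1000/1001 factor behind the oddly precise frame rates.
NTSC_FACTOR = 1000 / 1001

for nominal in (24, 30, 60, 120):
    print(f"{nominal} FPS x 1000/1001 = {nominal * NTSC_FACTOR:.3f} FPS")

# 24  -> 23.976 (listed as 23.98)
# 30  -> 29.970
# 60  -> 59.940
# 120 -> 119.880 (listed as 119.9)
```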

In art, though, rules are meant to be bent if not broken. The different available frame rates can be manipulated or transcoded to the artistic choice of the filmmaker. For example, you can film a skier at 119.9 FPS (120 FPS) and play it back in a 24 FPS timeline in your editor to achieve buttery-smooth slow motion. It used to be that filmmakers would need to turn to super high-end cameras to have these frame rates available to them but not anymore. Interested in trying this out? Check out BL’s collection of cameras that can shoot at frame rates of 100 or above in Full HD (1920 x 1080) resolution or above.
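The slow-motion math is simply the ratio of capture rate to playback rate, as this quick sketch shows:

```python
# How much slow motion do you get from overcranking?
def slowdown_factor(capture_fps: float, playback_fps: float) -> float:
    """Each second of recorded action stretches by this factor on playback."""
    return capture_fps / playback_fps

print(slowdown_factor(120, 24))  # 5.0 -> one second of skiing plays over 5 s
print(slowdown_factor(60, 24))   # 2.5
print(slowdown_factor(240, 30))  # 8.0 -> the iPhone's 240 FPS in a 30 FPS timeline
```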

Many of us have a capable camera in our pocket all day long – the Apple iPhone 7 can shoot a whopping 240 FPS at the 720p size and 120 FPS at Full HD 1080p. With iOS 10 and an iPhone 6 or newer, you can change a setting (Settings > Photos & Camera) to shoot at the higher 240 FPS rate. Frame rates can also be manipulated for artistic purposes to take advantage of how the audience “feels” watching footage projected at different frame rates. The human eye/brain begins to discern smooth motion at around 10-12 FPS and above. 24 FPS is easily perceived as motion, and the motion gets even smoother at 50+ FPS.

Choosing a Shutter Speed and the 180 Degree Shutter Rule

A certain amount of motion blur is required to fool the brain into thinking that what it’s looking at is really a moving image. But just how much blur reads as “real,” and how can the filmmaker change settings to achieve different creative results?

Consider the “180º Shutter Rule,” which hearkens back to the days of the mechanical shutter and physical film. When shooting film at 24 FPS, the shutter in most cameras was a spinning disc with half of it cut away – a 180º opening. The disc rotated once per frame: while the open half passed in front of the gate, the frame was exposed; while the solid half covered the gate, the film advanced to the next frame. Each frame was therefore exposed for half of its 1/24th-of-a-second interval, giving a shutter speed of 1/48th of a second (~1/50th). This produced a pleasing amount of motion blur when the footage was played back at 24 FPS and didn’t sacrifice exposure levels.

“When the shutter is open, the film is exposed. When it closes, the next frame of film is brought into position by the claw.” – Courtesy of Wikipedia and Joram van Hartingsveldt, CC BY-SA 3.0. The half-circular disc rotates once per frame, exposing each frame for half of its interval – at the traditional 24 FPS, that works out to a ~1/50th (1/48th) shutter speed.

If you are shooting on a 24 FPS timeline frame rate today, you typically don’t want to shoot the individual frames with your shutter open any faster than 1/50th of a second. For 29.97 (call it 30…) you’d want to stick to 1/60th of a second or slower. This adheres to the 180º Shutter Rule and provides a pleasing picture with plenty of motion blur and is a great starting point for just about any beginner video project.
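Put as a formula, exposure time = (shutter angle ÷ 360º) × (1 ÷ frame rate). A minimal Python helper makes the numbers above (and the narrower angles discussed next) easy to check:

```python
# The 180-degree shutter rule, generalized to any shutter angle.
def shutter_speed(frame_rate: float, shutter_angle: float = 180.0) -> float:
    """Exposure time in seconds for a given frame rate and shutter angle."""
    return (shutter_angle / 360.0) / frame_rate

print(f"1/{1 / shutter_speed(24):.0f} s")      # 180 deg at 24 FPS -> 1/48 s
print(f"1/{1 / shutter_speed(30):.0f} s")      # 180 deg at 30 FPS -> 1/60 s
print(f"1/{1 / shutter_speed(24, 45):.0f} s")  # 45 deg at 24 FPS  -> 1/192 s
```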

Again, rules are meant to be broken. If you enjoyed the war scenes in Saving Private Ryan for their realism and grit, you might be pleased to know that they were purposefully shot by Steven Spielberg with a smaller shutter angle of 45-90º for some scenes; this meant using faster-than-normal shutter speeds with less motion blur to increase the staccato nature of the footage.

The Rolling Shutter Effect and How Shutter Speed Affects It

One of the plagues of the CMOS sensor video world is rolling shutter. Most video cameras on the market today use CMOS sensors, which do not instantly record visual information across the entire sensor at once the way CCD sensors do. Instead, they record from the top down across the light-sensitive rows of sensor pixels (sometimes called “sensels”). In other words, one group of lines on the sensor is being read while other lines are still being exposed. This speeds up processing, as it clears the upper rows of the sensor to begin the next frame while it is still recording the lower ones. How quickly these “lines” or “rows” can be read depends on the frame rate and your sensor’s overall architecture.

Below is a great one-minute video from DPReview’s YouTube channel comparing the rolling shutter effect between the 5D Mark IV and the 1D X Mark II. It also quickly compares the 5D Mark IV to the Sony a6300 and shows how each camera handles “Jello” when the frame rate is changed.

Rolling shutter is a major drawback of CMOS sensors, but what is it? In technical terms: because of the top-down way the sensor records, a very fast-moving subject filmed from a static camera position will have shifted laterally between the moment the top rows are read and the moment the bottom rows are read. This creates an unpleasing “jitter” of the object across the video screen, or distorts its shape markedly.
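A crude way to put a number on that distortion is to multiply how long the sensor takes to scan from its top row to its bottom row by how fast the subject is moving across the frame. The readout time and subject speed below are made-up, purely illustrative values:

```python
# Rough model of rolling-shutter skew (illustrative numbers only,
# not measured from any real camera).

readout_time = 1 / 30    # seconds to scan from top row to bottom (assumed)
rows = 1080              # number of rows in the frame
subject_speed = 600      # subject's sideways speed in pixels per second

# By the time the last row is read, the subject has shifted sideways:
total_skew = subject_speed * readout_time
print(f"total horizontal skew: {total_skew:.1f} px")         # 20.0 px lean
print(f"skew per scanned row:  {total_skew / rows:.3f} px")  # the 'Jello' gradient
```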

Rolling shutter also results when the camera itself is moving quickly, especially relative to close objects. If the camera is moving, a subtle (or not so subtle) sideways lean of objects can be seen – for example, buildings shot from a speeding train. Other examples might include a camera mounted on a race car, or a motorcycle; scenery whipping by and close to the speeding vehicle will exacerbate the rolling shutter effect.

There are several ways to avoid this unpleasant result in your footage. First, employ some sort of stabilization rig to eliminate camera shake. High-end examples include the MōVI or Steadicam, while cheaper alternatives popular in the action camera and drone market are the many DIY “wire rope” stabilizer solutions. Either option will help eliminate the “micro jitters” you get from hand-held footage. Changing the camera angle relative to the moving object can also minimize rolling shutter: try to position yourself so the speeding object moves laterally across your position (and your camera’s sensor) as little as possible. Also, take care not to make excessively fast pans across a scene. Lastly, drag your shutter as much as you can; stay close to the 180º shutter rule and fast-moving objects will retain pleasing motion blur while the effects of rolling shutter are minimized.

Some filmmakers choose to attack rolling shutter with post-production techniques, such as Warp Stabilizer, the Rolling Shutter Repair tool in Adobe After Effects, or one of the many available plug-ins that tackle this specific issue, like the Turbo Video Stabilizer.

“Fixing it in post” is never something to rely on, however. The best way to avoid rolling shutter in your footage is to keep your camera as stable as possible and pay attention to what’s moving in your scene. Bump up your frame rate as much as you can while keeping your shutter angle in mind and you won’t notice the rolls as much. If you absolutely need to film something like an airplane flying straight towards you, then move away from a conventional rolling-shutter camera to something with a global shutter, such as the global shutter version of the Canon EOS C700.

To recap everything we just learned: in the video below, I briefly show the differences in video resolutions, followed by examples of frame rates that produce slow motion, and end with a visual demonstration of the rolling shutter/Jello effect and how it is affected by shutter speed.

For those of you who are also fans of early-2000s Saturday Night Live, math will always be a part of the Axis of Evil – but it needn’t be something artists fear. Armed with a little historical background on basic concepts, filmmakers can make informed choices about their production’s resolution and frame rates. They can thoroughly understand the capabilities of their gear and consciously select video settings based on how they want their footage to be received when it is viewed.

Born in Hawaii, educated in New Zealand, and now living in Lake Tahoe, Grant Kaye specializes in landscape, night-sky photography, motion-controlled time-lapse, and creative filmmaking. His clients have included Red Bull, MSNBC, Yahoo, and many others. See more of his work on his website or join him for a workshop.

Leave a comment, a question, or show us your work!