Have you ever noticed when photographing a sunrise or a sunset that your DSLR camera can’t capture the brightest and darkest part of the scene at the same time in one exposure? That you’re forced to choose a single exposure that’s correct for one but not the other?
This limitation stems from an imaging device’s “dynamic range”, which refers to the gap in luminosity between the lightest and darkest parts of an image that the device can record. The limited dynamic range of cameras compared to our own vision can be extremely frustrating to landscape photographers, many of whom (myself included) strive to capture a natural scene with as much fidelity to reality as possible – we want to create photographs that match what we see in our mind’s eye.
Dynamic Range and the Human Eye
When facing a vivid, colorful sunset, humans can easily see both the pretty colors in the clouds and the details in the backlit, shadowy foreground – a very wide dynamic range. In fact, the human eye can perceive roughly 20 stops (EV) of dark/light difference, whereas the best modern digital SLR CMOS sensors currently on the market capture around 14–16 stops (such as the Nikon D810 at its base ISO of 64, or the Sony a7SII/a7RII). Some of the newer cinema cameras, like the RED Weapon, are beginning to push above 16 stops and closer to the 18-stop range.
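Since each stop is a doubling of light, those numbers compound quickly. A quick back-of-the-envelope sketch in Python (using the stop counts mentioned above) shows just how large the gap between eye and sensor really is:

```python
# Each stop (EV) represents a doubling of luminance, so a device that
# resolves N stops can distinguish a contrast ratio of 2**N : 1.
def contrast_ratio(stops):
    return 2 ** stops

eye = contrast_ratio(20)     # human eye: ~1,048,576:1
sensor = contrast_ratio(14)  # a 14-stop sensor: ~16,384:1
print(f"Eye: {eye}:1, sensor: {sensor}:1, gap: {eye // sensor}x")
```

A 6-stop shortfall means the eye is resolving a contrast range 64 times wider than the sensor.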
Given that limitation, what can we do with today’s modern cameras to go about increasing dynamic range in our photographs and achieve a High Dynamic Range, or HDR, image?
There are three ways to overcome the camera’s limited dynamic range:
Graduated Neutral Density (GND) Filters
Some photographers use plastic or glass GND filters to expose for the shadows of an image that also has bright areas, placing the filter in front of their camera’s lens to diminish the portion of the image with bright highlights – typically the upper half.
The use of these filters in landscape photography was pioneered in the 1970s by the late master Galen Rowell. But the technique is dated and of limited use, and it is particularly challenging in scenes with non-linear horizons (mountains) or where vertical objects such as trees or buildings get in the way.
HDR Software
There is dedicated stand-alone software, as well as software plug-ins, that can create high dynamic range images from a carefully constructed set of “bracketed” photographs. Bracketing refers to taking a set of photos while changing only one setting in the Exposure Triangle (typically the shutter speed) to cover a range of 1, 3, 5, or even 7 EV, so that you have some underexposed images, some overexposed images, and some in the middle. These images are then fed into HDR software, which applies algorithms to automatically blend portions of the images together through tone mapping or exposure fusion; there is also some degree of user control over the blend. Prominent examples of HDR software are Photomatix, Nik HDR Efex, and the native merge tools in Photoshop and Lightroom.
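Because one EV step is a doubling (or halving) of exposure time, the shutter speeds for a bracket are easy to compute. Here is a small sketch in Python (the function name and parameters are my own illustration, not any camera's API):

```python
# Sketch: compute the shutter speeds for an exposure bracket.
# Each EV step doubles (or halves) the exposure time; every other
# setting in the Exposure Triangle stays fixed.
def bracket(base_seconds, span_ev, step_ev=1):
    # span_ev=2 with step_ev=1 gives offsets -2..+2 -> a 5-frame bracket
    offsets = range(-span_ev, span_ev + 1, step_ev)
    return [base_seconds * 2 ** ev for ev in offsets]

# A 5-frame, 1-EV-step bracket around a 1/60 s metered exposure
# yields 1/240, 1/120, 1/60, 1/30, and 1/15 of a second:
speeds = bracket(1 / 60, 2)
```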
Multiple Exposure Blending (MEB) or “Blending”
The third means of achieving a wide dynamic range image is by taking two or more images of differing exposure, typically by varying shutter speed or ISO, and then hand-blending these together in Photoshop as layers using layer masks. This method works well for scenes where GND filters fail, such as mountain landscapes or cityscapes with portions of sky showing.
If you’re not going to adopt a graduated neutral density workflow because you don’t want to bother with fragile, expensive, and sometimes limiting filters, then you have to choose between HDR and MEB and plan both your shooting and your post-processing accordingly. Each method has its own pluses and minuses, and there are different strategies for applying each technique that will yield better results in some situations than in others.
And before the purists out there grab their pitchforks and torches, it pays to remember that high dynamic range imaging is not new and is certainly not exclusive to digital photography. Long before photographers sat at a computer to work on images, they were blending different pieces of differing exposures together in a darkroom.
In the mid-1800s, pioneering darkroom wizards such as Gustave Le Gray combined separate negatives of differing exposure to balance out tonality in the dark versus light areas of photographs. The widely recognized master of landscape photography Ansel Adams developed a high-level skill in dodging and burning, which is essentially a form of manual tone mapping – done automatically today by HDR software.
HDR vs MEB
Let’s go over the main differences between software-based HDR and MEB and run the same images through each workflow to shine some light on the pros and cons of each technique. After looking at the results, we will cover when you might want to apply one versus the other. We’ll do this using a set of bracketed landscape images I made of the November 2016 “supermoon” rising over Lake Tahoe.
Here are the three images:
HDR software is great at quickly taking bracketed image stacks and pushing out a finished wide dynamic range image. Adobe offers this functionality in two different workflows: a simple interface via “Photo Merge → Merge to HDR” in Lightroom (which has a limited set of controls), and “Edit In → Merge to HDR Pro” in Photoshop, which allows more fine-grained control.
This is the result of using the “Merge to HDR” control panel in Lightroom. It’s simple and straightforward, with little control over any aspect of how the blend is done.
This is the “Merge to HDR Pro” control panel in Photoshop, which offers a much greater amount of control. I used the 16-bit “Photorealism Low Contrast” default preset here with no changes made to any settings.
Photomatix offers more versatility and user control in creating HDR images than Lightroom or Photoshop. Even the default 32-bit blend looks much better than the results of Lightroom or Photoshop’s HDR workflows.
When you delve in a little deeper, Photomatix also offers Tone Mapping and Exposure Fusion as separate workflows, each with their own suite of fine-grained controls over white/black points, colors, and many other options.
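Under the hood, exposure fusion is conceptually simple: each bracketed frame gets a per-pixel weight favoring well-exposed pixels, and the frames are averaged with those weights. Below is a heavily simplified, single-scale sketch in Python/NumPy; real implementations (such as the Mertens et al. algorithm that inspired these tools) blend across a multi-resolution pyramid and use several quality measures, and the grayscale frames and `sigma` value here are purely illustrative:

```python
import numpy as np

# "Well-exposedness" weight: pixels near mid-grey (0.5) score highest,
# blown highlights and blocked shadows score near zero.
def well_exposedness(img, sigma=0.2):
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

# Single-scale exposure fusion: a per-pixel weighted average of the
# bracketed frames, with weights normalized to sum to 1 at each pixel.
def fuse(frames):
    weights = np.stack([well_exposedness(f) for f in frames])
    weights /= weights.sum(axis=0) + 1e-12  # avoid division by zero
    return (weights * np.stack(frames)).sum(axis=0)

# Toy bracket: an under-, mid-, and over-exposed grayscale frame.
frames = [np.full((2, 2), 0.1), np.full((2, 2), 0.5), np.full((2, 2), 0.9)]
fused = fuse(frames)
```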
Looking at the three images side-by-side in Lightroom’s Library module (green = Photoshop Merge to HDR Pro, blue = Lightroom Photo Merge HDR, red = Photomatix), one can easily discern the differences in tonality and dynamic range. Photomatix appears to be the clear winner in terms of bringing out the widest dynamic range with the most realistic result.
However, if you look carefully at the moon and the clouds (since both were moving), there are subtle areas of the HDR-software generated images where the motion appears ghostly – especially the moon as it moves quickly relative to the horizon immediately after rising. Note the oblong shape in this 100% crop.
This is an undesirable result. It is one of the areas where software-based HDR suffers in comparison to MEB. Moving objects or portions of a scene that change over long exposures present difficulty for HDR software.
Multiple Exposure Blending Steps
Multiple Exposure Blending is the other option for creating high dynamic range images. It is done by hand-blending two or more images as layers in Photoshop, brushing on layer masks to show or hide certain parts of each. I process almost all of my landscape photographs with some combination of GND filters and MEB techniques. This method lets photographers employ the full functionality of Photoshop’s myriad ways of processing and color-correcting images, and skilled users can deftly blend some or all of the different areas of the different images together in ways that are nearly impossible to detect. In our Tahoe supermoon example, the best-exposed full supermoon can be retained and added to the final image, regardless of which of the three exposures it came from.
After processing each individual image a bit more carefully, below is the final result when manually performing Multiple Exposure Blending on the same three images that were run through the HDR software.
Below is a screenshot of the blend as done in Photoshop, showing the many different layers and masks that were employed to blend the three images together. Red shows the active mask on a Curves adjustment layer, used to increase contrast in the water and sky but not the dock. Careful blending was achieved through meticulous application of brushed masks of differing opacity.
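Conceptually, what a layer mask does can be written in a few lines: where the mask is white the top layer shows, where it is black the layer beneath shows through, and grey values blend linearly. A minimal sketch in Python/NumPy (the array shapes and values are illustrative; Photoshop’s actual compositing engine is far more involved):

```python
import numpy as np

# Per-pixel linear blend controlled by a mask in [0, 1]:
# 1.0 reveals the top layer, 0.0 reveals the layer underneath.
def mask_blend(bottom, top, mask):
    if mask.ndim == bottom.ndim - 1:   # broadcast a 2D mask over RGB
        mask = mask[..., None]
    return mask * top + (1.0 - mask) * bottom

# Toy example: a "dark" and a "bright" exposure; the mask reveals the
# bright frame only in the top half of the frame (e.g. the sky).
dark = np.full((4, 4, 3), 0.2)
bright = np.full((4, 4, 3), 0.8)
mask = np.zeros((4, 4))
mask[:2, :] = 1.0
blended = mask_blend(dark, bright, mask)
```

Brushing a mask at partial opacity in Photoshop simply produces intermediate mask values, giving the smooth transitions described above.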
Advanced users should look into Tony Kuyper’s luminosity masking actions for Adobe Photoshop, which analyze a photograph’s luminosity via its RGB channels and automatically generate masks of differing opacity, making blending for MEB a breeze.
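The idea behind luminosity masks is that a mask’s opacity at each pixel is derived from that pixel’s brightness, so bright areas are selected strongly and shadows hardly at all. Here is a sketch of that concept in Python/NumPy – this mirrors the principle, not the exact implementation of Kuyper’s actions:

```python
import numpy as np

# Luminance from RGB using Rec. 709 luma weights.
def luminance(rgb):
    return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

# A "lights" mask is just the luminance itself; raising it to higher
# powers restricts the selection to ever-brighter pixels (akin to the
# progressively narrower "Lights 1, 2, 3..." selections).
def brights_mask(rgb, level=1):
    return luminance(rgb) ** level

# A "darks" mask is the inverse: shadows get high opacity.
def darks_mask(rgb, level=1):
    return (1.0 - luminance(rgb)) ** level
```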
In the case of the supermoon set of bracketed images, only the shutter speed was varied; all other settings were left the same. With the MEB workflow (as opposed to the HDR software workflow), however, I could have varied more than shutter speed – for example, shooting at a wide aperture while changing focus, then blending the frames into a final image with deep depth of field. That would have let me shorten my exposure time, which in turn would have helped keep the moon crisp and sharp.
In the end, the differences between the MEB final image and the HDR images are subtle. But the blended composite more closely matches my vision for the scene.
HDR vs Multiple Exposure Blending (The Good/Bad, Pros/Cons, and How-to/When to Use)
Overcoming the dynamic range limitations of DSLR cameras is an important step for many landscape photographers. We strive to create realistic images matching the dynamic range of what our eyes see.
Here’s a recap of the pros/cons for HDR versus MEB, with sample situations when one is preferred over the other:
HDR Software
Pros:
● Fast and “easy”.
● Powerful software that integrates easily into Lightroom workflows.
● Relatively easy to get a decent result.
Good for:
● Budget real estate photography.
● Landscapes with trees or mountain ranges.
Cons:
● Easy to go “over the top” and create a garish look that many find “ugly”.
● Can create halos around breaks in contrast (trees).
● Need to shoot in RAW.
Difficult situations:
● Fast-moving clouds, water, or people.
● Windy scenes with grass or water.
● Capturing the moon if included in the composition.
● Areas of grey or white (snow).
Multiple Exposure Blending
Pros:
● With time invested and skill, the best way to achieve the most realistic result.
● Fine-grained control over blending portions of multiple images.
Good for:
● Star trail long exposures blended with a short, light-painted foreground exposure.
● Combining images shot with completely different settings.
● Achieving deep DOF via focus blending when the small aperture needed for DOF would force an undesirably long exposure.
● Picking a “plate” image for the foreground, making many different exposures of a changing or moving background, then choosing which ones to bring together.
Cons:
● Need to learn layer mask-blending techniques in Photoshop.
● Requires a greater investment of time than blending images with HDR software.
● Intricate blends can be very difficult.
● Blending more than two images can result in very large files.
Difficult situations:
● Forested shots with trees sticking up above the horizon (hard to mask for).
Whichever type of high dynamic range imaging you pursue, it’s good to carefully shoot a wide variety of bracketed images while in the field; doing so provides a solid foundation for either workflow. Having multiple exposures of the same scene is also great “imagery insurance” when culling favorites later.
One Final Thought
I tell all of the students in my photography classes: we are making art, and in art there truly is no wrong answer – only your answer.
Grant Kaye