An HDR Comparison

Hi Folks:

When you mention the term HDR, many people’s thoughts automatically jump to tonemapping and the results it can produce. That’s not what this post is about. If you don’t understand what HDR is all about or why you might want to use it in your photography, I suggest starting here: Why Use HDR? I’ll wait…

Okay, welcome back. I recently acquired a Sony A7R III, and one of the features of this camera is its very wide dynamic range – 12 to 14 stops are claimed. As a result, HDR capture with this camera often isn’t necessary. However, a friend of mine and I were out at Victoria’s famous Butchart Gardens last weekend and I wanted to try bracketing a few exposures just to see. Now, when it comes to the question of how many exposures to make and at what EV levels, there’s really only one answer: it depends. It depends on the scene, and it also depends on the camera you’re using and what capabilities it has. For my experiment I decided to shoot 5 bracketed exposures at -4/-2/0/+2/+4 EV. Here’s an example of one of those combined images after having been pushed around a bit in Lr.

Water Dragon
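
As an aside, each EV step doubles or halves the exposure, so a ±4 EV bracket covers a 16× swing in shutter speed around the base exposure. Here’s a tiny Python sketch of that arithmetic – the 1/125 s base shutter speed is just a made-up example, not what I actually used:

    # Each EV step doubles or halves the exposure time.
    # The 1/125 s base shutter speed is just an example, not my actual setting.
    base_shutter = 1 / 125

    for ev in (-4, -2, 0, +2, +4):
        shutter = base_shutter * (2 ** ev)
        print(f"{ev:+d} EV -> {shutter:.5f} s (roughly 1/{round(1 / shutter)} s)")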

I wasn’t planning to make bracketed exposures that day and so didn’t have a tripod with me. Fortunately most HDR software has at least some auto-alignment feature. Since the camera is still new to me, however, I wanted to do a proper test. I dug out my old Cullman tripod and set it up. I chose this section of one of our bookcases for a couple of reasons. The first was that the titles (all by author Deborah Harkness) offer a broad range of colours and graphics along with white text. The second was that there’s a strong light coming from the right side, which consequently added a deep shadow. Starting from the top left, the five images were made at -4/-2/0/+2/+4 EV.

There are a number of different software packages that allow you to do HDR image stacking. I have five of them, four of which are connected to Lightroom. They are:

  • Adobe Lightroom 6.14
  • Affinity Photo
  • Nik/Google/DxO HDR Efex Pro
  • Photomatix Merge to 32-bit HDR plugin
  • Timothy Armes’ Lr/Enfuse plugin

Affinity Photo is the one standalone program. I used the same five .arw files for each stack and did as much as I could to equalize the output – .tif, 16- or 32-bit, Adobe RGB, no noise reduction or tonemapping, etc. – but the software packages themselves have variances. The Photomatix plugin, for example, creates a 32-bit floating-point file, whereas the internal Lr HDR merge creates a 16-bit (half-float) floating-point .dng file. To some extent these factors are irrelevant; the bottom line is the results themselves.
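
If you’re curious what an exposure blend looks like outside of these packages, here’s a minimal Python sketch using OpenCV’s Mertens exposure fusion. It assumes the five brackets have already been exported as aligned TIFFs, and the file names are placeholders, not my actual files:

    # Minimal exposure-fusion sketch using OpenCV (pip install opencv-python).
    # Assumes the five brackets were exported as aligned TIFFs; the file names
    # below are placeholders, not my actual files.
    import cv2

    files = ["bracket_-4ev.tif", "bracket_-2ev.tif", "bracket_0ev.tif",
             "bracket_+2ev.tif", "bracket_+4ev.tif"]
    images = [cv2.imread(f) for f in files]

    # MergeMertens blends the exposures directly, with no tonemapping step.
    fused = cv2.createMergeMertens().process(images)

    # The result is floating point in roughly the 0..1 range; scale to 8-bit to save.
    cv2.imwrite("fused.tif", (fused * 255).clip(0, 255).astype("uint8"))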

In the image below are six files. Starting from the top left is the single-image 0EV exposure, output as a .tif file then re-imported into Lightroom. Next to that are the Affinity Photo image and the HDR Efex Pro image. On the bottom row are the Lr/Enfuse, Photomatix and internal Lr HDR merge files. All six images are as they were after exposure blending. As you can see, there was quite a difference in output.

Each image would need to be processed differently, but to get them all to the same starting point I selected them, made the 0 EV exposure the most-selected image, and used Auto-Sync in the Lr Develop module to apply an auto White Balance and an auto Tone. Here are the results of that:

There are some contrast and lighting differences but any of them could produce a usable result. It’s also worth noting the image in the upper left – the single-image, 0EV exposure. The Sony A7R III is largely ISO-less, and so exposure bracketing with this camera has somewhat less use than it would on other cameras. Someday I’ll do a comparison post on making HDR and panoramic images using my cell phone.

Okay, that’s it for now. Go out and make some photographs!

Hugs,
M&M

P.S. In addition to Lightroom 6.14 I’ve also been playing a bit with Capture One 12 from Phase One. The HDR merge files generated by the Photomatix Lr plugin are unusable in Capture One. I don’t know why, but they look like the image on the left. In comparison, I took the Lr HDR Merge-created .dng file, exported that into Capture One and created the image on the right. As mentioned, I don’t yet know Capture One very well or I’m sure I could have done a better job of it.

P.S. II, the sequel: Making these blends involved working with five 85 MB images, so I haven’t made the raw files available. If you really want them to try out on your own computer, let me know either by leaving a comment here or by filling in our Comment form, and I’ll let you know where you can find them.

P.S. III! There are some 85 posts on our blog now on digital photography and Lightroom. You can find them all here.

7 Replies to “An HDR Comparison”

  1. Duncan Gibson

    I use a Nikon, and when I bracket I use -1, -0.7, -0.3, 0, +0.3, +0.7, +1 EV. I notice your numbers are -4 through +4 on the EV scale. Is your camera really working 4 full stops under (that’s quite dark) and 4 full stops over (which is very bright)? I don’t know anything about Sony cameras. Oh, one other thing: my computer and Photoshop CC 2018 only do 16-bit. Have I read your numbers wrong?

    1. wolfnowl (post author)

      Hi Duncan: I don’t want to pretend to understand this stuff more than I do, but I’ll take a stab at what you’ve asked and if I get it wrong hopefully someone else will correct us.

      First, and I believe you know this already, but what we think of as a digital camera is really a computer with a lens stuck on the front of it. As such, while a film camera makes an image by allowing light onto a light-sensitive plastic (or other things like paper, glass or metal), a digital camera doesn’t make images. A digital camera captures light as information. Since a digital camera is really a computer, we can take that information and use it to make something that looks like an image.

      Now, dynamic range refers to the range of tones in an image. A black cat in a coal mine and a snowshoe hare in winter both have limited dynamic range. On the other hand, an image with the sun streaming in from one side and causing deep shadows is an example of an image with high dynamic range. The human eye has tremendous ability to look at the scene and very quickly adapt to seeing detail in both the brightest and the darkest parts of the scene… but our brain is a much better computer than a digital camera.

      So we have two factors here. One is the range of tones, from light to dark, in the scene. The other is the ability of our camera to capture that range of tones. If we had a scene with, say, 12 stops of dynamic range and a camera capable of capturing, say, 5 stops, then with one exposure we could capture either the darkest parts of the scene (everything bright would be washed out), the brightest parts of the scene (everything dark would be black), or somewhere in the middle – clipping both the brightest and the darkest parts of the scene. Or, we could take several images at different exposures and use software to look at each pixel of each image and choose the one with the best information. It then builds a mosaic out of the information from those individual exposures to create one image with an expanded dynamic range.

      The trick to doing this well is to have enough overlap in exposure between the images so there aren’t gaps in the data. What I mean is, let’s say you have a camera with 5 stops of dynamic range and a scene with 12 stops. If you set your camera so that one exposure captures light levels 1-5, the second captures 6-10 and the third captures 11-12 (and beyond), you’ll have the range covered, but there’s no overlap between the images for the software to compare. If you set up your exposures to catch light levels 1-5, 4-8 and 8-12, the software will have enough overlapping information to build one extended range out of the data. With five exposures instead of three you could create even more overlap.
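
      To put some rough numbers on that, here’s a little Python sketch of the same idea. The 5-stop camera, the 12-stop scene and the bracket spacings are the same made-up figures as above, not anything specific to a real camera:

      # Toy model: a camera that captures 5 stops per frame, aimed at a 12-stop scene.
      camera_range = 5
      scene_range = 12

      def coverage(ev_offsets):
          """Which whole stops of the scene does each bracketed exposure cover?"""
          covered = []
          for ev in ev_offsets:
              low = 1 + ev                  # shifting +1 EV moves the window up one stop
              high = low + camera_range - 1
              covered.append(set(range(max(1, low), min(scene_range, high) + 1)))
          return covered

      gapped = coverage([0, 5, 10])     # windows 1-5, 6-10, 11-12: no shared stops
      overlapped = coverage([0, 3, 7])  # windows 1-5, 4-8, 8-12: neighbours share stops

      for name, windows in (("gapped", gapped), ("overlapped", overlapped)):
          shared = [sorted(a & b) for a, b in zip(windows, windows[1:])]
          print(name, "-> stops shared by neighbouring exposures:", shared)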

      You’re using a Nikon D2x, and it has about 8 stops of dynamic range per exposure. This varies between camera models, so people really need to look up what their own camera can capture. The way you have your bracketing set up, you’re using the minimum shift in exposure (0.3 stops) to make your brackets: -1, -0.7, -0.3, 0, +0.3, +0.7, +1 EV. Set up that way you’ve got a lot of overlap between exposures, but you’re not capturing anything that’s more than 1 stop over- or underexposed. With 8 stops of dynamic range you could easily set your brackets one stop apart (-2, -1, 0, +1, +2) or even two stops apart. That would still give you good overlap but capture more of the scene. For my cell phone I shoot brackets one stop apart, because anything more than that leaves gaps. Again, it depends on what you’re shooting with and what you’re looking at in terms of light levels in the scene.

      The second thing is that raw files get written out as 16-bit; JPEG files are 8-bit. Cameras don’t really capture 16 bits of information (except maybe the really high-end ones); most capture 12-bit or 14-bit, but that information gets stretched out over 16 bits. Anyway, remember that a digital camera is capturing light as information. If you’ve got 3 or 5 or more 16-bit image files of the same scene, you’re gathering together a lot more information about that scene. That’s why some HDR conversion software programs create 32-bit output for their HDR files. Some don’t, and it has to do with the way they assemble that information. I can’t explain it better than that, because here we’re getting to the limits of my understanding.
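
      As a rough worked example of that “stretching”: a 14-bit raw value tops out at 16,383, and fitting it into a 16-bit container basically means multiplying by 4. Something like this, very much simplified – real raw converters do a lot more:

      # Simplified illustration of packing a 14-bit sensor value into a 16-bit file.
      example_value = 9000                      # an arbitrary 14-bit sensor reading (max 16383)
      value_16bit = example_value << 2          # multiply by 4 to span the 16-bit range
      print(value_16bit, "out of", 2 ** 16 - 1) # 36000 out of 65535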

      Mike.

      Reply
  2. Amaka

    I just started photography with my iPhone XS Max, and I do appreciate your write-up. It’s going to help me in the long run.

  3. Shane Haumpton

    This article is very nice and informative. As a beginner photographer, I think it will help me take better photos using HDR mode.
