Why Use HDR?

Hi Folks:

I was at an informal gathering of photographers recently where we were sharing and discussing our work.  I displayed a sunset image that I had made (this one)

Seeing the Light

and explained that it’s an HDR panorama made from five sets of three bracketed exposures. One of those present asked, “How is this HDR? Show me where the HDR comes into play.” It isn’t obvious from looking at this picture that it’s an HDR image (precisely my point), so why bother with the extra time and effort?

From what I’ve seen in my wandering around the ‘net and in talking with other photographers, there seems to be a lot of misunderstanding surrounding what is essentially a simple idea. I’ve heard people talking about an ‘HDR-look’, about ‘pseudo-HDR’ and about making an HDR image from one raw file. None of these is really HDR in the strict sense, although it is true that many HDR images have been tonemapped to give them a certain gritty, industrial ‘look’. That can work for some images, but it can certainly be overdone, and often is (in my opinion). Photographer Vincent Versace has taken to calling his work ‘XDR’ to differentiate his more natural results from the ‘HDR-look’.

I tend to target these photography posts more toward beginning photographers, so let’s start at the beginning.  In order to understand HDR (High Dynamic Range) images, one needs to understand what dynamic range is, and in order to understand dynamic range we need to begin with light.

Every lens allows light to pass through it to reach the film or sensor within the camera, and every lens has a maximum amount of light it can allow through, based on the diameter of the lens, the length of the tube, etc. Most lenses also have a built-in diaphragm that can restrict the amount of light passing through. When this diaphragm is at its widest opening, the lens is said to be at maximum aperture. Now (ignoring for a moment the handful of f/0.95 lenses in the world), let’s assume that the maximum amount of light that can pass through a hypothetical lens has a value of ‘1’. In the days of mechanical aperture controls the diaphragm was adjusted from one click-stop position to the next, and each ‘stop’ let in twice as much or half as much light as the adjacent stops. So, with a hypothetical maximum value of 1, the next stops would let in 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, 1/128, 1/256, 1/512, etc. of the light. Rather than labelling the aperture ring with those fractions, lenses use f-numbers, which increase by a factor of the square root of 2 (about 1.4) for each stop. That’s because the light admitted depends on the area of the opening, which scales with the square of its diameter, so halving the light means multiplying the f-number by 1.4. Therefore we have stop values of 1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, etc. The larger the number, the smaller the diaphragm opening. Because the f-number is the ratio of the focal length to the diameter of the opening, a given lens might have a maximum aperture of f/1.7 or f/3.2 or some other value that doesn’t fall exactly on a full stop.
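If you like seeing the arithmetic spelled out, here’s a small Python sketch of that relationship (the loop, the formatting and the rounding note are mine, purely for illustration):

```python
import math
from fractions import Fraction

# Each full stop halves the light reaching the sensor.  The f-number grows by a
# factor of sqrt(2) per stop because the light admitted is proportional to the
# area of the opening, which scales with the square of its diameter.
for stop in range(10):
    light = Fraction(1, 2 ** stop)      # 1, 1/2, 1/4, 1/8, ...
    f_number = math.sqrt(2) ** stop     # 1, 1.41, 2, 2.83, 4, 5.66, ...
    # Marked f-numbers are conventionally rounded: 2.8, 5.6, 11, 22, etc.
    print(f"f/{f_number:.1f}: {light} of the maximum light")
```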

Okay, on to dynamic range.  With regard to light, dynamic range is the difference between the brightest and darkest areas of a scene, expressed as a ratio.  The human eye is capable of differentiating a tremendous range of light values.  From Wikipedia: “A human can see objects in starlight (although colour differentiation is reduced at low light levels) or in bright sunlight, even though on a moonless night objects receive 1/1,000,000,000 of the illumination they would on a bright sunny day…”  Obviously the eye cannot see in both starlight and sunlight at the same time, and it does take time for the eye to adjust from one lighting condition to another, but still, the capacity of the eye to render a scene and for the brain to interpret what is being received is far beyond the ability of any film or digital sensor.

There’s quite a lot of discussion over the dynamic range of .JPG vs. RAW files (for more information on .JPG vs. RAW you could start here: Photography and Colour Management), and there’s also a huge difference between what can be seen on a monitor and what can be seen in a print.  Some monitors advertise 300:1 or 500:1 contrast ratios; in a print there are variables such as paper, number of inks, etc, but let’s set 5-8 stops as the approximate dynamic range for a print.
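Those ratios convert directly to stops: each stop is a doubling, so the number of stops is simply the base-2 logarithm of the contrast ratio. A quick sketch, using the figures mentioned above:

```python
import math

def stops(contrast_ratio):
    # Each stop doubles the light, so stops = log2(brightest / darkest).
    return math.log2(contrast_ratio)

print(stops(300))            # ~8.2 stops: a '300:1' monitor
print(stops(500))            # ~9.0 stops: a '500:1' monitor
print(stops(1_000_000_000))  # ~29.9 stops: the sunlight-to-starlight figure above
```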

Here’s a quick graph that illustrates the differences in dynamic range, from essentially none to 32 stops:

Gradient

Incidentally, you should be able to see all 33 bars in the bottom row on your monitor. If you can’t, you may need to adjust your monitor’s brightness level.

There’s another factor that comes into play as well, and that has to do with the way the camera sensor records the amount of light reaching it.  However, rather than getting into that here, I’m going to refer those who are interested to these two sites:

Tonal quality and dynamic range in digital cameras
Digital Dog: Tips and Articles

On to the real world. Why do HDR photography? The essential reason is that some scenes contain more dynamic range than can be captured in one image. This isn’t true of every scene; the white hare in the snowstorm and the black cat at night obviously represent scenes with limited dynamic range. However, in a scene with tones ranging from bright sunlight to deep shadows, one must decide where to cut corners, so to speak, as the camera can’t record the range of tones necessary to capture all of the information across the entire scene. Yes, shooting RAW gives you more latitude because RAW files contain more information, but there are limits even to that.

To make an HDR image, one makes several images of the same scene at different exposures and then combines them into one image using special software. Doing so creates a 32-bit image that can then be tonemapped to maximize the tonal range throughout the image. One can think of tonemapping as ‘kneading’ the extra information within the combined image to yield the desired effect, and it’s this process that produces the often ‘overcooked’ HDR look with which people are familiar. Can you tell I’m not really a fan? At the same time, the tonemapping need not be so extreme, and one can use the same software to achieve a result that looks natural but has a smoother blend of tones than can be achieved using one image.

There are a number of HDR programs and plugins available: Photoshop CS5 has a built-in HDR capability, and some of the most popular packages are Photomatix and Nik HDR Efex Pro. Both PTGui and Autopano Pro are panorama software packages that also allow you to combine images into HDR results. Also, for those using Lightroom, there’s a plugin from Timothy Armes called LR/Enfuse. While not a true HDR conversion (no 32-bit file), it will combine images made at different exposures into one file and gives very good results. For more information, try our ‘Using the LR/Enfuse plugin for Lightroom’ post. It’s the LR/Enfuse plugin that I used for the results below. As to how many exposures to make and what exposure differences to use, that depends on the scene, your camera and whether you’re shooting RAW or .JPG. Since each image must have exactly the same composition, a tripod is required for this work.
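For the curious, here’s a rough Python sketch of what the merge step looks like under the hood, using OpenCV’s HDR module rather than any of the products named above; the file names and exposure times are made up for illustration. The Debevec merge plus tonemap mirrors the ‘true HDR’ path, while the Mertens merge is an exposure-fusion approach similar in spirit to what Enfuse does:

```python
import cv2
import numpy as np

# Hypothetical bracketed frames, darkest to brightest, with their exposure times.
filenames = ["bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg"]
times = np.array([1 / 500, 1 / 125, 1 / 30], dtype=np.float32)  # seconds

images = [cv2.imread(f) for f in filenames]

# 'True' HDR: merge the frames into a 32-bit radiance map, then tonemap it
# back down to something a monitor or print can actually display.
hdr = cv2.createMergeDebevec().process(images, times)
ldr = cv2.createTonemap(2.2).process(hdr)                 # roughly 0..1 floats
cv2.imwrite("hdr_tonemapped.jpg", np.clip(ldr * 255, 0, 255).astype("uint8"))

# Exposure fusion (Mertens): blends the best-exposed parts of each frame
# directly, with no intermediate 32-bit file (the same idea as Enfuse).
fusion = cv2.createMergeMertens().process(images)
cv2.imwrite("fusion.jpg", np.clip(fusion * 255, 0, 255).astype("uint8"))
```

The library doesn’t matter much; the point is that the merged result holds far more tonal information than any single frame, and the tonemapping or fusion step decides how that information gets squeezed back into a displayable image.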

Almost exactly a year ago I did a ‘Photo of the Month‘ post on HDR imaging, so for this article I went back to the same location and shot essentially the same subject.  I used my little Fuji walkaround camera, and to really illustrate the effect, made 7 exposures at +3/+2/+1/0/-1/-2/-3 EV.  Normally I make three exposures at +1/0/-1 EV.  Here are the seven images I made:

HDR Gradient
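As an aside, here’s what a seven-frame bracket like that works out to in shutter speeds if, purely hypothetically, the base exposure is 1/125 s and only the shutter speed changes between frames (aperture and ISO held constant):

```python
base_shutter = 1 / 125   # hypothetical 0 EV exposure, in seconds

for ev in (+3, +2, +1, 0, -1, -2, -3):
    shutter = base_shutter * (2 ** ev)   # each +1 EV doubles the exposure time
    print(f"{ev:+d} EV -> {shutter:.5f} s (about 1/{round(1 / shutter)} s)")
```

With that hypothetical base, the frames would run from about 1/16 s down to 1/1000 s.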

Looking at the first image, one can see that most of it is completely blown out, but there is some detail in the deepest shadow areas. In contrast (pun intended), the last image is completely black in the shadows but preserves detail in the brightest areas of the scene.

So, let’s begin with the fourth image, the one shot without exposure compensation.  Processed in Lightroom, the final image looks like this:

Stump 0EV

The exposure’s not bad overall, but it’s clear that there is detail lost in the shadows and there are blown-out areas in the highlights.

Combining the seven images into one yields this result:

Stump HDR 1

I know, you’re thinking it looks terrible, and you’re right.  But before we go on, take a second to look at the extra information available in both the shadows and the highlight areas.  By pushing this around in Lightroom, we get the final result:

HDR Stump

Compare this to the single image above, and there is no comparison.  The HDR image has much more detail, and yet it still looks natural and balanced.

Stump 0EV

HDR Stump

These images were made from .JPG files, but even more tonal control can be achieved with RAW files.

Finally, since HDR is all about tones or luminance values, it can be used equally effectively with both colour and B&W images, as can be seen with these lichens:

Lichens Colour

Lichens Black and White

Now go out and make some photographs!

Mike.

Update: December 19, 2013. HDRSoft, the makers of the Photomatix HDR software, have come out with a plugin for Lightroom 4.1 or later that will take your original bracketed files and create a 32-bit floating-point .tif file. LR 4 and 5 can work directly with 32-bit images in the Develop module, so you can do your tonemapping from within LR itself rather than having to use an external editor. The plugin costs $29 USD and is available here.

P.S. Here are some other very good articles on HDR:

Also, Laura Shoe has an excellent article on bit depth and what it means.  A .pdf version of her article may be read or downloaded here: 8 Bit, 16 Bit, 32 Bit: What Does This Mean for Digital Photographers?

P.S. II, the Sequel: You can find more of our posts on photography and Lightroom tutorials here, and you can find links to over 200 other sites that have Lightroom tips, tutorials and videos here.

10 Replies to “Why Use HDR?”

  1. Ken Hurst

    Hi Mike – Good article! I agree totally about how this HDR look has some people (who seem to be otherwise talented and gifted photographers) producing really garish-looking stuff. I suppose that's OK if they're just trying to reach a market of print buyers who seem to like that sort of thing. If this over-the-top HDR look is a matter of overuse of the controls in PhotoMatix, do you think those same photographers would use the maximum (or minimum) settings on all the sliders/controls in Lightroom or Photoshop on their other photos? I guess I'm a fan of high dynamic range or even just wide dynamic range but I'm not too wild about the "HDR look" usually.

    Ken

    1. wolfnowl Post author

      Hi Ken, and thanks for dropping by! Like anything else, it's a tool that can be used or abused… some people like the 'haloed' look, and that's fine for them. Different strokes for different folks!

      Take care,
      Mike.

  2. here

    You are so right! Every picture can become a masterpiece after proper usage of hdr tools. But of course, each photographer should learn how to use it.

    1. wolfnowl Post author

      Thanks! Different people have different ideas over what constitutes a good photograph – fortunately there’s room enough for all of us!

      And thanks for dropping by our little corner of the ‘net!

