Making Panoramas in the Rainforest (part one)

Hi Folks:

I’ve been making digital panoramas for a lot of years now, and I’ve written at least a half-dozen posts on different aspects of them. This one is about a specific scenario rather than a general post on panoramas, but before we get too far, we need to cover a few basics. If you want to skip the basics and go straight to the rainforest part, click here.

First, understand that digital cameras don’t capture images. Digital cameras read light and render it as information; that information can be displayed in a way that makes it look like a photograph. All digital cameras capture raw files; not all digital cameras give you access to them. Now, raw files require digital post-processing on a computer (as compared to .jpg files, which are post-processed using an algorithm provided by the camera manufacturer and the processing power of the camera). The other side of that comparison is that raw files provide much more information to play with than .jpg files. This is especially important when shooting in the rainforest, as we’ll get to below.

Second, to do this with any degree of efficiency it’s important to understand at least the basics of colour management as it relates to cameras and computers. Remember: it’s less about accurate colour and more about consistent colour between devices.

So that we’re all on the same page, it’s important to understand the difference between a panoramic image and a digital panorama. Compare these two images:
(click on any image to see it larger)

a 1x4 aspect ratio image of the shoreline near Dallas Road in Victoria, BC. This is a grayscale image, with a winter storm bringing in waves from the left of the frame, and colliding with the rocks, driftwood and beach on the right.

Dallas Road shoreline, Victoria, BC

a digital panorama of sixty images, showing the north cliff face of Third Beach, near Tofino, BC. The ocean is on the left, and there's a small beach and some rocks in the foreground

Third Beach, Tofino, BC

The image at the top is panoramic in look, but it’s a 1 x 4 aspect ratio crop from a single image frame. The second image is a digital panorama made from 60 base images. Making a digital panorama, then, is a two-step process (it’s actually multi-step, but we’ll separate the steps into two groups). The first step is to make a collection of images from the same vantage point; the second step is to use software to combine those individual frames into one image.

We also need to consider that when most people think of panoramas, they think of something like this:

A 25-image panorama of the waves at Chesterman Beach, Tofino, BC

Chesterman Beach, Tofino, BC – 25 images

but a panorama could just as easily look like this:

A panorama of a piece of driftwood on the south shore of Third Beach, Tofino, BC. Someone has made small stacks of stones, creating miniature castles on the log. This image is made from six base images

Castles – Third Beach, Tofino, BC – 6 images

or this:

a vertical panorama of a section of shoreline on Gabriola Island, BC. The shore is sandstone, which has been eroded by wind and waves to create a complex tapestry of pockmarks

When Water Speaks with Stone – Descanso Bay Regional Park, Gabriola Island, BC – 3 images

There are two more questions (for now). The first is, “Why make panoramas?” In broad terms, I have two answers for that. One is to make an image that includes more of the scene than can be captured in one frame, given your camera/lens combination. Obviously, a wide-angle lens has a greater field of view than a telephoto lens, but that can come at the cost of corner vignetting and lens distortion. There are ways to deal with both of those in post-processing, but the point remains. The second reason is to increase image resolution. If one compares an image made with, say, a cell phone camera (any cell phone camera) with, for example, a Phase One IQ4 150MP medium format camera back, there is absolutely no comparison in terms of detail, colour depth, etc. How visible these differences are will depend on several factors (including image size, viewing distance, zoom level…) but they will always be there. My Sony A7RIII has a 42MP full-frame sensor, but if I create a 10-image panorama, for example, I can create an image with a level of detail impossible to capture in a single frame with the same equipment.
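
To put some rough numbers on that second reason, here’s a quick back-of-the-envelope sketch in Python. The 7952 x 5304 pixel dimensions are those of the A7RIII sensor; the 2-row by 5-column layout and the 30% overlap between frames are assumptions I’ve made purely for illustration, not anything prescribed:

frame_w, frame_h = 7952, 5304   # pixel dimensions of a 42MP A7RIII frame
cols, rows = 5, 2               # assumed grid layout for a 10-image panorama
overlap = 0.30                  # assumed fractional overlap between adjacent frames

# Each frame beyond the first adds only its non-overlapping portion.
pano_w = frame_w + (cols - 1) * frame_w * (1 - overlap)
pano_h = frame_h + (rows - 1) * frame_h * (1 - overlap)

print(f"~{pano_w:.0f} x {pano_h:.0f} px, roughly {pano_w * pano_h / 1e6:.0f} MP before cropping")

Even after the stitcher trims the ragged edges, that works out to several times the detail of a single 42MP frame.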

The second question is, “How many images make a panorama?” In my world, you need at least two images to stitch together to build a panorama. As to the maximum, this is entirely dependent on the resources of your computer. The most (so far) I’ve used for one stitched frame is 118 images. Sometimes I try to see if I can make smoke come out of my poor little computer. However, if you do a search for gigapixel images, you’ll find some that are truly incredible in terms of the size/detail they offer. NB: The other side of this is file size. A few of my panoramas run to several gigabytes, and that’s a lot for any software package to handle. Capture One Pro, for example, limits file creation to 750MB per panorama.

Before we continue, I’ll mention that if you have a smartphone, it’s most likely that the camera has a Panorama feature. There is also software that allows one to create a panorama from a video clip. Those have their place, but I’ll assume that those who use them aren’t interested in fine art photography. And on we go!

_____

Now, when it comes to the first step (image capture) there are several factors to consider. When working in the rainforest, some of these are even more pressing. The most important consideration for every panorama is parallax error. This is more relevant for lenses with a wider field of view, but the laws of physics dictate that parallax is less of an issue with increased subject distance. If we look at an image like this:

A panorama of Cox Bay beach at low tide at sunset. In the middle ground there is a family of three and their dog silhouetted against the waves behind them. The sunset colours are reflected in the tidal pools in the foreground.

Cox Bay Sunset – Tofino, BC – 12 images

there isn’t much in the foreground to cause parallax issues. With an image like this, however, one must be more careful regarding stitching errors:

Twisted - a 1x4 panorama of bull kelp on a rocky shore, rolled back and forth by the waves into an elongated, twisted braid.

Twisted – China Beach, Juan de Fuca Provincial Park, Vancouver Island, BC – 9 images

If one has a technical camera with a movable back and a lens with a sufficiently large image circle, it’s possible to make panorama captures without moving the bulk of the camera. Basically, the lens can project more of the scene than the sensor can capture, so one shifts the camera back around within the image circle of the lens. For most of us, however, the camera must be rotated in order to capture the images in the series. To avoid parallax errors, it’s vital to rotate the camera around the entrance pupil of the lens. (NB: every lens has both an entrance pupil and a nodal point; zoom lenses have more than one, depending on the focal length setting. The point of rotation is often called the nodal point or the no-parallax point. Technically it’s the entrance pupil of the lens around which one must rotate the camera, but feel free to call it whatever you want. The concept is what’s most important.) Depending on the camera-to-subject distance and the number of images in the series, it is possible to make panoramas handheld as long as one is conscious of the point of rotation. For serious work, most photographers use what’s called a panoramic head or nodal head mounted on a tripod.

To that end, a little over a year ago I purchased a used, motorized panorama head. It’s not really designed for my Sony A7RIII, but when it was built, mirrorless cameras didn’t yet exist. Once the head has been calibrated to fit the camera/lens combination (I use a Sony 55mm lens with this head), it basically does the rest of the work. One sets the top-left corner of the area to cover, then the bottom-right corner, and based on the field of view programmed into the head, it determines the number of rows and columns of images and moves itself from frame to frame automatically. With the head calibrated to the entrance pupil of the lens, parallax is pretty much eliminated. That covers the first criterion. There are other considerations, however.
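
For the curious, the row/column arithmetic the head performs is straightforward. Here’s a minimal Python sketch of it, assuming a full-frame sensor (36 x 24 mm) with the 55mm lens in landscape orientation; the 30% overlap and the 120-degree by 60-degree coverage are example values of my own, not settings read from the head:

import math

def field_of_view_deg(sensor_mm, focal_mm):
    # angle of view for one sensor dimension, in degrees
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def frames_needed(cover_deg, fov_deg, overlap=0.30):
    # frames required to cover cover_deg, each overlapping its neighbour by `overlap`
    step = fov_deg * (1 - overlap)            # fresh angle added by each new frame
    return max(1, math.ceil((cover_deg - fov_deg) / step) + 1)

h_fov = field_of_view_deg(36, 55)             # ~36 degrees across the long side
v_fov = field_of_view_deg(24, 55)             # ~25 degrees across the short side

cols = frames_needed(120, h_fov)              # example: area 120 degrees wide
rows = frames_needed(60, v_fov)               # example: area 60 degrees tall
print(f"{rows} rows x {cols} columns = {rows * cols} frames")

With those example numbers it works out to a 4 x 5 grid of 20 frames; the real head does the equivalent calculation from the two corners you give it.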

Exposure is the next consideration. I should mention there are a couple of general rules for making base images for panoramas. One is to never use polarizing filters, as the filter’s effect varies with the angle to the sun and the frames won’t match. Another is to set the camera to manual exposure/manual ISO to ensure consistency between frames. This is fine for something like the beach images above, but in the forest, things are different. Rainforests tend to be fairly dark because of the overhanging canopy, but here’s the rub. If one is shooting a vertical panorama like this one:

A long vertical panorama in grayscale of two red cedar trees. The one on the left is standing dead, home to a variety of mosses, lichens, fungi and more. The one on the right is still alive, and while covered in mosses as well, it may well stand for several more centuries

Ancient Cedars – Pacific Rim National Park Reserve, Tofino, BC – 4 images

the range of exposures from the bottom of the frame to the top of the frame will be significant. If using a motorized head, one can program the time delay between exposures, but this delay is fixed, not variable. Therefore, one must set the time spacing to slightly longer than the longest exposure in the series.

There are two other (related) issues to consider here: dynamic range and ISO. In brief, dynamic range is not the range from the lightest to the darkest parts of your image; dynamic range refers to your camera’s ability to separate the range of tones from the lightest to the darkest parts of the image. This is specific to the camera you’re using. I’ve used this image before in previous blog posts, but it gives a quick visual guide to dynamic range:

a graphic image showing increasingly distinct separations between black and white. At the top there is only black and white. At the bottom there are 32 levels of gray in between

Dynamic Range Gradient

Incidentally, if your monitor is calibrated properly, you should be able to see all 32 levels of gray in the bottom row. Most monitors shipped from the factory are much too bright and have too much contrast.

My Sony A7RIII for example has 12 (11.57) stops of dynamic range … But. You knew there was going to be a but there. When it comes to exposure, there are two and only two factors that come into play: f/stop and shutter speed. For a given amount of light, you control exposure by setting the size of the lens opening at the diaphragm (f/stop), and the amount of time the sensor (or film, or plate) is exposed to light.
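
If you want to see how those two controls trade off against each other, the standard exposure value formula, EV = log2(N²/t), where N is the f-number and t the shutter time in seconds, puts a number on it. A quick Python sketch, with aperture/shutter combinations I picked arbitrarily:

import math

def exposure_value(f_number, shutter_s):
    # EV = log2(N^2 / t); equal EV means equal exposure for the same amount of light
    return math.log2(f_number ** 2 / shutter_s)

for f_number, shutter in [(8, 1/125), (11, 1/60), (5.6, 1/250)]:
    print(f"f/{f_number} at {shutter:.4f}s -> EV {exposure_value(f_number, shutter):.1f}")

All three combinations land within a fraction of a stop of one another (the tiny differences come from nominal f-numbers being rounded): open the aperture one stop and you can halve the shutter time for the same exposure.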

What about ISO, you ask? Remember that digital cameras don’t capture images. Digital cameras read light reaching the sensor and convert that light into electrical signals. Those signals are subsequently converted into binary numbers by an analogue-to-digital converter (ADC). ISO fits into that process. ISO can’t create more or less light, and it can’t change the amount of light reaching the sensor the way f/stop and shutter speed can. Changing the ISO setting on your camera changes the way those light readings are amplified. Many modern mirrorless cameras are considered ISO-less (to a point). One can underexpose by ___ stops during the moment of capture, increase Exposure by the same number of stops in post-processing, and not see a notable difference. For more on this, I’d recommend perusing The Last Word by Jim Kasson.
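
To make that last point a little more concrete, here’s a toy numpy sketch of the ISO-less idea. It compares “amplify before the ADC” (raising the ISO in camera) with “digitize at base ISO and push three stops in post.” Every number in it is invented for illustration, and it ignores highlight clipping and the finer behaviour of real read noise, so treat it as a sketch of the concept rather than a model of any particular sensor:

import numpy as np

rng = np.random.default_rng(1)
photons = rng.poisson(lam=50, size=100_000).astype(float)   # dim scene; shot noise included
read_noise = rng.normal(0.0, 2.0, size=photons.size)        # input-referred read noise (electrons)
signal = photons + read_noise

stops = 3
gain = 2 ** stops
high_iso = np.round(signal * gain)    # amplify, then quantize: "raise the ISO in camera"
pushed = np.round(signal) * gain      # quantize at base ISO, then add 3 stops of Exposure in post

print(f"noise, high ISO:         {np.std(high_iso):.1f}")
print(f"noise, pushed in post:   {np.std(pushed):.1f}")
print(f"difference between them: {np.std(high_iso - pushed):.1f}")

The two renderings differ only at the quantization level, which is tiny compared with the noise already in the signal — that’s what people mean when they call a sensor ISO-invariant.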

Increasing ISO has two drawbacks. The first is noise. We’re not going to go into luminance noise vs. colour noise and things like that; suffice to say that luminance noise is a graininess that masks the sharpness in an image. Now at this point you may be thinking, “No problem! I’ll just crank my ISO up to 12800 and use my (AI-driven) noise-reduction software to make everything perfect.” Not so fast. Excellent noise-reduction software does exist, and while it can use pattern matching to remove noise and try to recreate detail, it can’t give you back what wasn’t captured in the first place. More on that here for those who are interested: Low Light, High Noise and ISO Invariance.

The second issue with using high ISO (that few people know about) is dynamic range. As I mentioned above, my Sony A7RIII offers 12 stops of dynamic range … at base ISO (100). If I increase the ISO to 12800, the dynamic range drops to 5.4 stops … basically half (with huge thanks to photonstophotos.net). Go back to the gradient image above and compare two adjoining rows and you’ll get the idea. Remember, dynamic range refers to the camera sensor’s ability to separate tones into different lightness levels. One can go much further down the rabbit hole with this. I’d recommend Noise, Dynamic Range, and Bit Depth as a good place to start.

The next issue to cover is focus/Depth of Field (DoF). We’re not going to cover Circles of Confusion (as that tends to confuse people); suffice to say that DoF refers to the range of subject distances from the camera that appear to be in focus. Again, there are several factors to this. DoF is dependent on sensor size, focal length of the lens, lens aperture and subject distance from the camera. Lens aperture also affects Exposure (above) and, depending on the lens, can involve issues of vignetting (darkening in the corners of the frame at wider lens openings) and diffraction (an overall softening of the image at smaller lens openings). This is highly lens-dependent.

In broad terms, digital cameras today have both autofocus and manual focus. Both of these have their pros and cons, and within each there are variables. For single images where there’s a clear subject, autofocus is generally faster and more accurate than manual focus. The Sony A7RV, for example, not only has the option to focus on the eye of the subject, but one can also specify human vs. (other) mammal vs. bird vs… This is great for bird photography. For panoramas, however, things aren’t so simple, for a couple of reasons. If one is making a panorama such as the sunset image above, then the camera-to-subject distance is such that one can use autofocus or even hyperfocal distance and all is good. In the rainforest, however, things are different. Remember that with my motorized panorama head (for example) I can set the corners of the area I want to capture and the head will happily do the rest. Let’s say I set the camera to autofocus, use the centre of each frame as the focal point, and let it make 4, 6, 20 images or whatever. When I get back to my computer I see that for most of the images in the sequence, the trunk of the tree I was photographing became the focal point. However, in one image there was a salal bush or a tree branch sticking out, say, 2m in front of my main subject, and the camera focused on that instead. So much for autofocus. We haven’t (yet?) reached the point where we can simply say, “See that log? Focus each shot on that.”

Here’s an example of this:

A landscape panorama showing a bigleaf maple with wide-swept branches standing at the top of a ravine with mixed forest behind it.

Bigleaf Maple – Durrance Lake, Highlands, BC – 15 images

If you were a camera sensor, where would you focus for each of the 15 base images? In this case, I had sufficient space behind me to be able to set the lens at f/11 for adequate DoF and use the hyperfocal distance for my lens (a Sony 24-105mm, set at 50mm) to capture this scene. There are DoF tables available online, and there are any number of camera calculator apps available for both Android and iOS for use in the field. Just remember that everything is connected. For a given sensor, DoF is affected by lens aperture and subject distance, but lens aperture also affects shutter speed and may bring issues with vignetting or diffraction. Choices, choices …

In the previous paragraph I mentioned in this case I had sufficient room behind me to set up my tripod in a convenient location. Depending on where you find yourself, you may not have much choice in the matter. Rainforests are – by definition – wet much of the year and tend to be highly productive in terms of plant growth. Unless one is making one’s own way through the forest – not often an easy task (I’m thinking devil’s club, fallen logs, salal and much more) – one will be following a trail or (in many cases) a boardwalk through the forest. Please, please, please do not go off-trail to make your images. Many of these habitats are highly sensitive and someone stomping around unknowingly can cause long-term damage that may not be obvious.

However, since you will be (for the most part) restricted to a specific path, the camera to subject distance may not be your choice. Hyperfocal distance can be a useful tool, but not in every case. For example, with my Sony camera with the 55mm lens at f/11 (calculated CoC of 0.0287mm), the hyperfocal distance of 9.64m means that objects from 4.82m to infinity will be deemed to be in focus. That’s great, but if my subject is 3m in front of me, it’s not going to work.
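
Those figures come straight out of the standard hyperfocal formula, H = f²/(N·c) + f, which is essentially what the calculator apps compute for you. A quick Python sketch using the values just quoted:

def hyperfocal_m(focal_mm, f_number, coc_mm):
    # H = f^2 / (N * c) + f, converted from millimetres to metres
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000.0

H = hyperfocal_m(55, 11, 0.0287)
print(f"hyperfocal distance: {H:.2f} m")    # ~9.64 m
print(f"near limit (H/2):    {H / 2:.2f} m")  # ~4.82 m; from here to infinity appears sharp

Focus at 9.64m and everything from roughly 4.82m outward looks acceptably sharp, but as noted, that’s no help when the subject is only 3m away.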

NB: Boardwalks in the forest are often suspended quite a bit above the ground. Be safe out there, but also be aware that since boardwalks are typically made from wood, the footsteps of any other traffic (hikers, joggers …) on the walkway will cause it to bounce. This is not helpful when making a long exposure, but all one can do is work around it as best one can.

I’ve now made a few thousand images with my motorized panorama head, and when I’m working in close quarters (like the forest) I find it best to use the head in semi-automatic mode. What I mean by that is that I use the head to set the framing for each image and then pause. While it’s holding position, I make decisions about focal point and exposure for that image. I use the camera’s self-timer to make the exposure, then direct the head to the next frame, and so on. This gives me several advantages. For one, I get to choose the focal point for each image. For another, while I leave the f/stop and ISO constant (usually ISO 100), I can modify the shutter speed to some degree for each image. Since some exposures will be longer than others, I don’t have to worry about the motorized head moving to the next frame before I’m ready. And if someone comes thundering by in the middle of an exposure, I can simply delete the frame and make another one. As should be obvious, this is not a quick process. Marcia often accompanies me on our forest hikes (she’s got her own photographer’s eye), but when I bring the panorama head I go alone.

That covers some of the parameters for making panoramas in the rainforest, and we managed (just barely) not to break out into volumes. Part two will cover dealing with the images in post-processing, but we’ll leave that for another post.

That’s it! Now go out and make some photographs!

Hugs,
M&M

P.S. One of the best resources I’ve found for panoramic photography in general is Panoramic photography Guide by Arnaud Frich. He goes into great detail so you may need to set aside blocks of time to cover it all.
