Sony a7RIII, Pixel Shift and Focus Stacking

Hi Folks:

This post covers a few different topics; I’m not going to go into too much depth on any of them (there are a lot of resources available on the ‘net) but I will provide an overview of my experiments, specifically as related to my Sony a7RIII camera.

First (and arguably most important) is the subject of our experiment. A little over a year ago we picked up an expired (finished blooming) Dendrobium orchid from a local flower shop for $5. It was not in good shape when we brought it home: it had been grossly overwatered and half of its roots were rotten. We cleaned it up, put it into a new terracotta pot with some bark chips, and Marcia began administering her own special brand of magic. The orchid responded as might be expected, putting out new roots and new leaves. This spring a new root formed that looked different from the others; as it turned out, it wasn’t a root at all. By the beginning of September we were here:

This image shows Marcia's orchid with the first of six blossoms open.

Marcia’s Orchid

It took a while, but over the next month all six blossoms opened, and that brings us to the reason for this post. I wanted to capture the beauty of this flower.

Pixel Shift

Okay, to explain pixel shift we’re going to have to get a little bit technical. In very basic terms, a digital camera’s sensor is a collection of very tiny solar panels arranged in a grid. The Sony a7RIII has over 42 million of these little solar cells, called pixels. With the shutter open, light coming through the lens reaches these cells, and each one generates an electrical charge based on the amount of light reaching it. This charge is then converted into a digital value. In essence, all digital sensors are grayscale: they don’t measure colours, they only measure light. To determine the amount of light of each colour (RGB, or Red/Green/Blue) reaching the sensor, most cameras have a grid of colour filters overlaid on the sensor in a pattern called a Bayer matrix.

NB: Fuji cameras use a slightly different design known as an X-Trans sensor, and Sigma cameras work entirely differently.

An image showing a Bayer filter on a camera sensor

By en:User:Cburnett – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1496858

More information on Bayer and Fuji filters can be found here.

Two things to note here: One, sensor pixels don’t store colours; they store digital information about the light that passes through the colour filter above them. Two, each pixel only receives information about one colour. To produce full colour information for every pixel, the missing colour values are interpolated from the surrounding cells, a process known as demosaicing.
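To make the idea concrete, here’s a toy sketch in Python of an RGGB Bayer mosaic and a deliberately crude demosaic. It’s purely illustrative (real cameras and raw converters use far more sophisticated interpolation), and all of the function names are my own:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a float RGB image (values 0-1, even dimensions) through
    an RGGB Bayer pattern: each photosite keeps only the one channel
    its colour filter passes."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red photosites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green photosites
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green photosites
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue photosites
    return raw

def demosaic_crude(raw):
    """Rebuild RGB by giving every pixel in a 2x2 cell that cell's red
    sample, the average of its two green samples, and its blue sample.
    Real demosaicing interpolates far more cleverly than this."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = raw[y, x]
            g = (raw[y, x + 1] + raw[y + 1, x]) / 2
            b = raw[y + 1, x + 1]
            rgb[y:y + 2, x:x + 2] = (r, g, b)
    return rgb
```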

What if it were possible to get colour information for all of the colour channels at each pixel? In essence, that’s what pixel shift is all about. The camera makes four images instead of one, moving the sensor slightly between exposures so that each pixel position captures colour information from all of the channels instead of just one. Now, in order for this to work, a few conditions must be met. One, both the subject and the camera must be completely still. Two, the light falling on the subject needs to be consistent.

NB: I believe Sony was among the first companies to offer pixel shift in its cameras. Other companies have since joined in, and with some cameras pixel shift involves more than four captures, but all of this is camera dependent.
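As a back-of-the-napkin illustration of the principle (this is emphatically not Sony’s actual pipeline), here’s what the combining step buys you: after the four shifted captures, every pixel position has been measured through a red, two green and a blue filter, so the full-colour image can be assembled without any interpolation:

```python
import numpy as np

def combine_pixel_shift(frame_r, frame_g1, frame_g2, frame_b):
    """Combine four pixel-shift captures into one RGB image.

    Assumes the frames have already been aligned so that, at every
    pixel position, frame_r was measured through a red filter,
    frame_g1/frame_g2 through green filters and frame_b through a
    blue filter. No demosaicing (interpolation) is needed."""
    return np.stack([
        frame_r,                      # a real red measurement per pixel
        (frame_g1 + frame_g2) / 2.0,  # two real green measurements, averaged
        frame_b,                      # a real blue measurement per pixel
    ], axis=-1)
```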

Once the four images are captured, one needs to combine them into one image. The only way to do that (to my knowledge) with Sony cameras is to use Sony’s Imaging Edge software. It’s a relatively painless process, but… you knew there was going to be a “but” there. RAW files generated by Sony cameras are in the .arw format, which is pretty universally recognized by raw conversion software. When one combines the four captures into one pixel shift image, however, the result is an .arq file. With Sony’s software there’s no way to save it as anything other than an .arq file, and the problem with that is that little other software can read it. Capture One does not, and neither does Affinity Photo; those are my two image processing programs of choice. Sony Imaging Edge does provide some basic image editing, but it’s nowhere close to what the others can provide. Personally, I’d prefer Sony stick to making camera equipment.

I remember reading that later versions of Lightroom CC are able to read .arq files, and while Adobe and I parted ways years ago, there’s one piece of (free) Adobe software I still keep: Adobe DNG Converter. From the earliest days of digital cameras, each company seemed content to create its own proprietary raw file format, usually changing it from one camera model to the next. Their reasons for this remain their own. Early on, the folks at Adobe decided to create a free, openly documented raw file format: .dng. A couple of companies have adopted .dng as their default raw file format, but the beauty of the DNG Converter is that it can convert just about any raw file into a .dng file. Does it work with .arq files? It does.
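For anyone who wants to script the conversion step, Adobe DNG Converter can also be driven from the command line. Here’s a rough sketch of batch-converting a folder of .arq files from Python; the install path below is (to my understanding) the Windows default and may differ on your machine, and the -c (compressed) and -d (output directory) switches are from Adobe’s command-line documentation:

```python
import subprocess
from pathlib import Path

# Default install location on Windows -- adjust for your machine.
CONVERTER = r"C:\Program Files\Adobe\Adobe DNG Converter\Adobe DNG Converter.exe"

def convert_to_dng(source_dir, output_dir):
    """Run Adobe DNG Converter on every .arq file in source_dir,
    writing compressed .dng files (-c) into output_dir (-d)."""
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    for arq in sorted(Path(source_dir).glob("*.arq")):
        subprocess.run([CONVERTER, "-c", "-d", str(out), str(arq)], check=True)

convert_to_dng("pixel_shift", "pixel_shift_dng")
```

So, I took the pixel shift image I made with Sony’s Imaging Edge software, converted it to .dng, imported it into Capture One and processed it. I also took one of the four original images and applied identical processing to it. Here are the results: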

This image shows two versions of the same image. One is a single raw file. The other is a four-image pixel shift composite.

One of these images is the four-image pixel shift composite. The other is one of the original raw files. I realize what you’re seeing is a low-resolution screen capture, and there are subtle differences between the two images. Which is which? I’m not going to tell you. To me, the end result doesn’t justify the effort. With a different camera system the differences may be more pronounced, but for my camera I don’t see a use for pixel shift. Yes, I’m aware that the flower sits mostly in the red and blue channels and the background is mostly green, so that may have had some impact.

Focus Stacking

Okay, to explain focus stacking we need to explore depth of field (DoF). I’m not going to go into a long explanation of what DoF is, because when I start explaining circles of confusion and things like that, people tend to look at me funny. In short, if you focus your camera at a given distance, only objects at exactly that distance from the camera are truly in focus. However, objects both closer to and farther from the camera will also appear to be in focus. How much is governed by a number of factors, including lens focal length, sensor size, viewing distance, image magnification, f/stop, subject distance and more. In round terms, small sensors (like the one in your cell phone) have greater DoF than something like a full-frame digital camera, even with equivalent focal-length lenses; subjects at a distance have a greater DoF than subjects that are close up, and so on. Focus stacking is the process of making a series of exposures at different focus points, then using software to stack the images together to artificially increase the DoF. Again, the numbers depend on several factors. For example, with my Sony a7RIII, a 55mm lens set to f/8 and a subject distance of 1 foot, the DoF is about 0.3 inches. That’s not a lot.
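For the curious, here’s the arithmetic behind that 0.3-inch figure, using the standard hyperfocal-distance formulas. The circle-of-confusion value is my assumption (0.02 mm, a common choice for high-resolution full-frame sensors); calculators that use 0.03 mm will report a somewhat larger DoF:

```python
def depth_of_field_mm(focal_mm, f_stop, subject_mm, coc_mm=0.02):
    """Near and far limits of acceptable focus (in mm), via the
    standard hyperfocal-distance formulas."""
    hyperfocal = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# 55mm lens, f/8, subject at 1 foot (304.8 mm):
near, far = depth_of_field_mm(focal_mm=55, f_stop=8, subject_mm=304.8)
print(f"DoF: {(far - near) / 25.4:.2f} in")  # ~0.3 in
```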

Focus stacking is most commonly used in macro photography but isn’t limited to that; my last post on the subject was on Focus Stacking for Landscape Photography. There are two components to focus stacking: making the images and then joining them together. Needless to say, it’s important that both the camera and the subject be completely still for this. With many newer cameras the first part can be done somewhat automatically: one gives the camera the closest and farthest focusing points and the number of images, and the camera makes an exposure, shifts focus, makes another exposure, and so on. Every newer camera Sony makes, from the a7IV series through to today’s a1 II, offers this feature, but Sony never brought it to the a7III series in a firmware update.
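As a rough sketch of what these cameras (and, as we’ll see below, Helicon Remote’s Auto mode) are computing: divide the focus range you want sharp by the per-frame DoF, stepping a little less than one full DoF each time so adjacent frames overlap. The 30% overlap here is my own illustrative assumption, not any manufacturer’s figure:

```python
def shots_needed(focus_range_mm, dof_per_shot_mm, overlap=0.3):
    """Estimate the number of frames for a focus-bracketing run.

    Each step advances focus by less than one frame's DoF so that
    adjacent frames share sharp detail for the stacking software to
    match. The 30% overlap is an illustrative assumption."""
    step = dof_per_shot_mm * (1 - overlap)
    return int(focus_range_mm / step) + 1

# e.g. a subject ~50 mm deep at ~8 mm of DoF per frame:
print(shots_needed(50, 8))  # 9 frames
```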

My first attempt at focus stacking images of our orchid involved setting my camera on the tripod and (using a remote shutter release) making a series of exposures by manually shifting the focus point each time. Unfortunately, as careful as I was, there were subtle camera shifts between exposures and the final result was unusable. Helicon Focus does have good retouching tools to mask out small movements (like an insect’s antenna for example) but this was a bridge too far.

Enter Helicon Remote.

Helicon Remote is a separate software package from Helicon Focus, one that does what those newer cameras do automatically. As I understand it there’s a mobile app version of the software, but it’s only for Canon and Nikon, so I downloaded and ran the trial of the Windows version. Basically, one connects the camera to the computer via a USB-C cable (NB: the Sony a7RIII has both a micro-USB and a USB-C connection, but I’ve never used the former) and the software takes over the role of shifting focus and making exposures. With the a7RIII, one must first set the camera up to allow Helicon Remote to take remote control. To do this, go to the Menu, then Setup, tab 4, and scroll down to USB Connection. Change this to PC Remote, exit the menu, then restart the camera. Before starting I suggest switching to the camera’s electronic shutter to eliminate movement from shutter shock. Connect the USB-C cable to the camera and the PC and start Helicon Remote.

This image shows the appropriate menu screen with the USB Connection set to PC Remote.

This isn’t intended to be an extensive tutorial on Helicon Remote, so I’ll just provide a brief overview.

This image shows the main screen of Helicon Remote, including the major controls.

The important controls are highlighted in red. In the upper left is the Fast preview button. Pressing this will update the live preview of the image. Once the preview has been generated, one can double-click on the image to zoom in to 100% and use the scroll wheel to zoom in or out from there. Double-clicking again will return you to full screen. If you want to zoom in again, you must first generate another preview. On the right side, A and B denote the closest and farthest focusing points for the subject. Above them, the left and right arrows are used to shift the focus closer to or farther from the camera in fine, medium or gross increments.

This is an image of the Helicon Remote screen with the subject zoomed in to choose the closest focus point.

The first step, then, is to mark the closest focusing point for the subject. One can use either Helicon Remote or the camera’s focus controls to set this. NB: do not touch the camera after this until the process is done. If one checks the Focused box, Helicon Remote will provide a blue focus-peaking overlay, but to me it just got in the way. Once the closest focusing point has been set, click the A button to lock it.

This is the Helicon Remote screen showing the farthest focus point of the subject.

With the closest focusing point set, use the arrows to set the farthest focusing point for this image and lock it by clicking on the B. One may need to generate more previews and zoom in/out to achieve this.

The next step is to choose the number of exposures to make. With the Auto button enabled the software will calculate the number of exposures required based on the camera, lens, f/stop and subject distance. To make more exposures with more overlap, uncheck this box and change the number of Shots. The Interval will update automatically. When ready, click the Start Shooting button and Helicon Remote will make the requested exposures. When it’s finished, click the Helicon Focus button and it will start that software and stack the images. NB: both software programs are sold separately and while linked, are independent of each other.
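For the curious, here’s a toy sketch of what stacking software does conceptually: score each frame’s local sharpness and, for every pixel, keep the value from the sharpest frame. Real stackers like Helicon Focus also align the frames, compensate for focus breathing and blend seams far more carefully than this; the file names are placeholders, and the sketch uses the OpenCV and NumPy libraries:

```python
import cv2
import numpy as np

def focus_stack(paths):
    """Naive focus stack: for each pixel, take the value from the
    frame whose (smoothed) Laplacian -- a simple sharpness measure --
    is highest at that pixel. Assumes the frames are already aligned."""
    frames = [cv2.imread(p) for p in paths]
    sharpness = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))         # edge response
        sharpness.append(cv2.GaussianBlur(lap, (21, 21), 0))  # smooth the map
    best = np.argmax(np.stack(sharpness), axis=0)  # sharpest frame per pixel
    stacked = np.zeros_like(frames[0])
    for i, frame in enumerate(frames):
        stacked[best == i] = frame[best == i]
    return stacked

cv2.imwrite("stacked.png", focus_stack(["shot_01.jpg", "shot_02.jpg", "shot_03.jpg"]))
```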

This shows the Helicon Remote screen outlining some of the advanced features.

NB: Helicon Remote does have advanced features for exposure bracketing, flash compensation, burst shooting, etc., but I prefer to do those in camera.

So, once Helicon Focus was finished I saved the finished stack as a .dng file and imported it into Capture One for processing. Here’s the finished result:

This image shows the processed focus-stacked image. It looks good to me.

I think it came out well. If you scroll up to the Pixel Shift images for comparison you can see the increase in the DoF with the focus-stacked image.

So. Would I buy the Helicon software? As for Helicon Remote, no. It’s a little clunky to use, but that’s not the reason: most of my photography is landscapes, and carrying my laptop and a USB-C cable out into the field just isn’t practical. As for Helicon Focus, if/when I upgrade my camera to a newer model I may revisit it.

There’s one last thing to mention, and that’s focus breathing. Focus breathing is a change in a lens’s angle of view caused by shifting its focus point. It affects pretty much every lens to some degree; whether or not it affects your lens badly enough to make it unusable is something you need to discover for yourself. If you’re serious about focus stacking, particularly for macro work, and want to circumvent focus breathing, the only option is to use either a manual or an automatic focusing rail. This equipment moves the entire camera rather than changing the focus point of the lens.

Still here? Congratulations! Now go out and make some photographs.

Hugs,
M&M