Sony a7RIII, Pixel Shift and Focus Stacking

Hi Folks:

This post covers a few different topics; I’m not going to go into too much depth on any of them (there are a lot of resources available on the ‘net) but I will provide an overview of my experiments, specifically as related to my Sony a7RIII camera.

First (and arguably most important) is the subject of our experiment. A little over a year ago we picked up an expired (finished blooming) Dendrobium orchid from a local flower shop for $5. This one was not in good shape when we brought it home. It had been grossly overwatered and half of its roots were rotten, but we cleaned it up, put it into a new terracotta pot with some bark chips, and Marcia began administering her own special brand of magic. The orchid responded as might be expected – putting out new roots and new leaves. This spring we had a new root form that looked different to the others; as it turned out, it wasn’t a new root at all. By the beginning of September we were here:

This image shows Marcia's orchid with the first of six blossoms open.

Marcia’s Orchid

It took a while, but over the next month all six blossoms opened, and that brings us to the reason for this post. I wanted to capture the beauty of this flower.

Pixel Shift

Okay, to explain pixel shift we’re going to have to get a little bit technical. In very basic terms, a digital camera’s sensor is a collection of very tiny solar panels, arranged in a grid. The Sony a7RIII has over 42 million of these little solar cells, called pixels. With the shutter open, light coming through the lens reaches these little cells, and each one generates an electrical signal based on the amount of light reaching it. This electrical charge is then converted into a digital value. In essence, all digital sensors are grayscale – they don’t measure colours, they only measure light. To determine the amount of light of each colour (RGB, or Red/Green/Blue) reaching the sensor, most cameras have a grid of coloured filters overlaid on the sensor in a pattern called a Bayer matrix.

NB: Fuji cameras use a slightly different design known as an X-Trans sensor, and Sigma cameras work entirely differently.

An image showing a Bayer filter on a camera sensor

By en:User:Cburnett – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1496858

More information on Bayer and Fuji filters can be found here.

Two things to note here: One, sensor pixels don’t store colours; they store digital information about the light as filtered through the colour above them. Two, each pixel only receives information about one colour. To get full colour information for all of the pixels, the raw conversion software interpolates the missing colours at each pixel from the cells surrounding it.
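To make that interpolation concrete, here’s a minimal sketch of the idea: a simple average of the nearest same-coloured neighbours, assuming an RGGB Bayer layout. (Real demosaicing algorithms are considerably smarter than this – this is just an illustration of the principle.)

```python
# Toy bilinear demosaic: estimate a missing colour channel at one pixel
# by averaging the neighbouring sensor sites that carry that colour.

def bayer_colour(y, x):
    """RGGB pattern: the colour of the filter over pixel (y, x)."""
    return {(0, 0): "R", (0, 1): "G", (1, 0): "G", (1, 1): "B"}[(y % 2, x % 2)]

def interpolate(raw, y, x, channel):
    """Estimate `channel` at (y, x) from adjacent same-colour sites."""
    if bayer_colour(y, x) == channel:
        return raw[y][x]  # this pixel measured the channel directly
    h, w = len(raw), len(raw[0])
    samples = [raw[ny][nx]
               for ny in range(max(0, y - 1), min(h, y + 2))
               for nx in range(max(0, x - 1), min(w, x + 2))
               if (ny, nx) != (y, x) and bayer_colour(ny, nx) == channel]
    return sum(samples) / len(samples)
```

So two of the three colour values at every pixel of a normal raw file are educated guesses, which is exactly the gap pixel shift tries to close.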

What if it were possible to get colour information for all of the colour channels at each pixel? In essence, that’s what pixel shift is all about. The camera makes four images instead of one, moving the sensor between exposures so that each pixel can capture colour information from all of the channels instead of just one. For this to work, a few conditions must be met. One, both the subject and the camera must be completely still. Two, the light falling on the subject needs to be consistent.
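Here’s a toy model of the idea (my illustration, not Sony’s actual processing): an RGGB pattern is sampled at four one-pixel shifts, and after combining, every pixel has directly measured R, G and B values with no interpolation required.

```python
# Toy model of four-shot pixel shift with an RGGB Bayer pattern.
# In a real camera the sensor physically moves one pixel between
# captures; modelling that as the filter pattern shifting relative
# to the scene has the same effect.

BAYER = {(0, 0): "R", (0, 1): "G", (1, 0): "G", (1, 1): "B"}

def capture(scene, dy, dx):
    """One exposure: each pixel records a single colour channel."""
    h, w = len(scene), len(scene[0])
    return [[(BAYER[((y + dy) % 2, (x + dx) % 2)],
              scene[y][x][BAYER[((y + dy) % 2, (x + dx) % 2)]])
             for x in range(w)] for y in range(h)]

def combine(frames):
    """Merge the four shifted captures: full RGB at every pixel."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[{} for _ in range(w)] for _ in range(h)]
    for frame in frames:
        for y in range(h):
            for x in range(w):
                channel, value = frame[y][x]
                out[y][x][channel] = value  # greens arrive twice; same value
    return out
```

This also makes the stillness requirement obvious: if the scene changes between the four captures, the channels being merged at a pixel no longer describe the same point.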

NB: I believe Sony was among the first companies to offer pixel shift with its cameras. Other companies have also joined in, and with some cameras pixel shift involves more than four captures, but all of this is camera dependent.

Once the four images are captured, one needs to combine them into one image. The only way to do that (to my knowledge) with Sony cameras is to use Sony’s Imaging Edge software. It’s a relatively painless process, but… you knew there was going to be a ‘but’ there. RAW files generated by Sony cameras are in the .arw format. They’re pretty universally recognized among raw conversion software. When one combines the four captures into one pixel shift image, however, the result is an .arq file. With Sony’s software there’s no way to make it anything other than an .arq file, and the problem with that is that few other software companies will read it. Capture One does not, and neither does Affinity Photo. Those are my two image processing programs of choice. Sony Imaging Edge does provide some basic image editing, but it’s not anywhere close to what others can provide. Personally I’d prefer Sony stick to making camera equipment.

I remember reading that later versions of Lightroom CC are able to read .arq files, and while Adobe and I parted ways years ago there’s one piece of (free) Adobe software I still keep: Adobe DNG Converter. From the earliest days of digital cameras, each company seemed content to create its own (proprietary) raw file format – usually changing it from one camera model to the next. Their reasons for it remain their own. Early on, the folks at Adobe decided to create a free, openly documented raw file format: .dng. A couple of companies have adopted .dng as their default raw file format, but the beauty of the DNG Converter is that it can convert just about any raw file into a .dng file. Does it work with .arq files? It does. So, I took the pixel shift image I made with Sony’s Imaging Edge software, converted it to .dng, imported it into Capture One and processed it. I also took one of the four original images and applied identical processing to it. Here are the results:

This image shows two versions of the same image. One is a single raw file. The other is a four-image pixel shift composite.

One of these images is the four-image pixel shift composite. The other is one of the original raw files. I realize what you’re seeing is a low-resolution screen capture, and there are subtle differences between the two images. Which is which? I’m not going to tell you. To me, the end result doesn’t justify the effort. With a different camera system the differences may be more pronounced, but for my camera I don’t see a use for pixel shift. Yes, I’m aware that the flower is mostly in the red and blue channels and the background is mostly green, so that may have had some impact.

Focus Stacking

Okay, to explain focus stacking we need to explore depth of field (DoF). I’m not going to go into a long explanation of what DoF is, because when I start explaining circles of confusion and things like that people tend to look at me funny. In short, if you focus your camera at a distance of __ft (m), only objects at exactly that distance are truly in focus. However, objects somewhat closer to and farther from the camera will still appear acceptably sharp. How much closer and farther is governed by a number of factors, including lens focal length, sensor size, viewing distance, image magnification, f/stop, subject distance and more. In round terms, small sensors (like your cell phone’s) have greater DoF than something like a full-frame digital camera, even with equivalent focal-length lenses; subjects at a distance have greater DoF than subjects that are close up, and so on. Focus stacking is the process of making a series of exposures at different focus points, then using software to combine the images to artificially increase the DoF. Again, the numbers depend on several factors. For example, with my Sony a7RIII, a 55mm lens set to f/8 and a subject distance of 1 foot, the DoF is about 0.3 inches. That’s not a lot.
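For anyone who wants to check the arithmetic, here’s a small sketch using the standard thin-lens DoF formulas. The circle-of-confusion value is an assumption – different DoF calculators use different ones, which is why published figures vary.

```python
# Standard thin-lens depth-of-field formulas, for illustration.
# coc_mm is the "circle of confusion"; 0.030 mm is a common
# full-frame assumption, and stricter values shrink the DoF figure.
# (The far-limit formula is valid while the subject is closer than
# the hyperfocal distance, which is certainly true at 1 foot.)

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.030):
    """Return (near, far, total) acceptable-focus distances in mm."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far, far - near

# 55mm lens, f/8, subject at 1 foot (304.8 mm):
near, far, total = depth_of_field(55, 8, 12 * 25.4)
```

With the 0.030 mm assumption this works out to roughly half an inch of total DoF; with a stricter circle of confusion of about 0.019 mm it drops to roughly the 0.3 inches quoted above. Either way: not a lot.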

Focus stacking is most commonly used with macro photography but isn’t limited to that. My last post on the subject was on Focus Stacking for Landscape Photography. There are two components to focus stacking: making the images and then joining them together. Needless to say, it’s important that both the camera and the subject be completely still for this. With many newer cameras the first part can be done somewhat automatically. One provides the camera with the closest and most distant focusing points and the number of images, and the camera makes an exposure, shifts focus, makes another exposure, and so on. Sony has included this feature in every newer body from the a7IV onward (up to today’s a1 II), but they never brought it to the a7III series via a firmware update.
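The ‘joining’ step can be sketched as per-pixel sharpness selection. This toy version is my illustration of the principle, not Helicon’s actual algorithm: at each pixel, keep the value from whichever frame has the strongest local contrast there.

```python
# Toy focus-stack merge on grayscale frames: for each pixel, keep the
# value from whichever frame is locally sharpest.

def laplacian(img, y, x):
    """Simple 4-neighbour Laplacian as a local sharpness measure."""
    h, w = len(img), len(img[0])
    nb = [img[ny][nx] for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
          if 0 <= ny < h and 0 <= nx < w]
    return abs(len(nb) * img[y][x] - sum(nb))

def stack(frames):
    """Merge frames by per-pixel sharpness selection."""
    h, w = len(frames[0]), len(frames[0][0])
    return [[max(frames, key=lambda f: laplacian(f, y, x))[y][x]
             for x in range(w)] for y in range(h)]
```

Real stacking software adds frame alignment, smooth blending between source frames and retouching tools on top of this basic idea, which is why small camera shifts between exposures can still sink the result.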

My first attempt at focus stacking images of our orchid involved setting my camera on the tripod and (using a remote shutter release) making a series of exposures by manually shifting the focus point each time. Unfortunately, as careful as I was, there were subtle camera shifts between exposures and the final result was unusable. Helicon Focus does have good retouching tools to mask out small movements (like an insect’s antenna for example) but this was a bridge too far.

Enter Helicon Remote.

Helicon Remote is a separate software package from Helicon Focus, one that does what newer cameras do automatically. As I understand it there’s an app version of the software but it’s only for Canon and Nikon, so I downloaded and ran the trial of the Windows version. Basically one connects the camera to the computer via a USB-C cable (NB: the Sony a7RIII has both a micro-USB and a USB-C connection, but I’ve never used the former) and the software takes over the role of shifting focus and making exposures. With the a7RIII, in order to facilitate this one must first set the camera up to allow Helicon Remote to take remote control. To do this, go to the Menu settings, Setup, tab 4 and down to USB Connection. Change this to PC Remote and exit the menu, then restart the camera. Before starting I suggest shifting to the camera’s electronic shutter to stop movement from shutter slap. Connect the USB-C cable to the camera and the PC and start Helicon Remote.

This image shows the appropriate menu screen with the USB Connection set to PC Remote.

This isn’t intended to be an extensive tutorial of Helicon Remote so I’ll just provide a brief overview.

This image shows the main screen of Helicon Remote, including the major controls.

This shows the main screen of Helicon Remote. The important controls are highlighted in red. In the upper left is the Fast preview button. Pressing this will update the live preview of the image. Once this has been generated, one can double-click on the image to zoom in to 100% and use the scroll wheel to zoom in or out from there. Double-clicking again will return you to full screen. If you want to zoom in again, you must first generate another preview. On the right side, A and B denote the closest and farthest focusing points for the subject. Above them, the left and right arrows are used to shift the focus closer or farther from the camera in fine, medium or gross increments.

This is an image of the Helicon Remote screen with the subject zoomed in to choose the closest focus point.

The first step then is to mark the closest focusing point for the subject. One can use either Helicon Focus or the camera’s focus controls to mark this. NB: do not touch the camera after this until the process is done. If one checks the Focused box, Helicon Remote will provide a blue focus peaking overlay but to me it just got in the way. Once the closest focusing point has been set, click the A button to lock it.

This is the Helicon Remote screen showing the farthest focus point of the subject.

With the closest focusing point set, use the arrows to set the farthest focusing point for this image and lock it by clicking on the B. One may need to generate more previews and zoom in/out to achieve this.

The next step is to choose the number of exposures to make. With the Auto option enabled, the software will calculate the number of exposures required based on the camera, lens, f/stop and subject distance. To make more exposures with more overlap, disable Auto and change the number of Shots; the Interval will update automatically. When ready, click the Start Shooting button and Helicon Remote will make the requested exposures. When it’s finished, click the Helicon Focus button and it will start that software and stack the images. NB: both software programs are sold separately and, while linked, are independent of each other.
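I don’t know Helicon’s exact formula, but the Auto calculation presumably amounts to something like this back-of-the-envelope sketch: each frame contributes a slice of sharp depth, neighbouring slices need to overlap, and the shot count follows from the total focus range. The 25% overlap figure here is my assumption.

```python
import math

# Rough sketch of focus-stack shot planning: how many frames cover a
# focus range if each frame contributes `dof_per_frame_mm` of sharp
# depth, with a fractional overlap between neighbouring frames.

def shots_needed(focus_range_mm, dof_per_frame_mm, overlap=0.25):
    """Number of exposures to cover the range with the given overlap."""
    step = dof_per_frame_mm * (1 - overlap)  # fresh depth gained per frame
    return max(1, math.ceil(focus_range_mm / step))

# e.g. a subject ~40 mm deep with ~8 mm of DoF per frame:
shots = shots_needed(40, 8)
```

Unchecking Auto and raising the Shots count simply shrinks the focus step, i.e. increases the overlap between slices.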

This shows the Helicon Remote screen outlining some of the advanced features.

NB: Helicon Remote does have advanced features for exposure bracketing, flash compensation, burst shooting, etc. but I prefer to do those in camera.

So, once Helicon Focus was finished I saved the finished stack as a .dng file and imported it into Capture One for processing. Here’s the finished result:

This image shows the processed focus-stacked image. It looks good to me.

I think it came out well. If you scroll up to the Pixel Shift images for comparison you can see the increase in the DoF with the focus-stacked image.

So. Would I buy the Helicon software? As for Helicon Remote, no. It’s a little clunky to use, but that’s not the reason. Most of my photography is landscapes, and carrying my laptop and a USB-C cable out into the field just isn’t practical. As for Helicon Focus, if/when I upgrade my camera to a newer model I may revisit it then.

There’s one last thing to mention, and that’s focus breathing. Focus breathing is a change in a lens’s angle of view (and thus image magnification) caused by shifting its focus point. It affects pretty much every lens to some degree; whether it affects your lens badly enough to make it unusable is something you need to discover for yourself. If you’re serious about focus stacking, particularly for macro work, and want to circumvent focus breathing, the only option is to use either a manual or an automatic focusing rail. This equipment moves the entire camera rather than changing the focus point of the lens.

Still here? Congratulations! Now go out and make some photographs.

Hugs,
M&M

Happy Father’s Day!!

Hi Folks:

All of Marcia and Mike’s parents are gone now, but we have two sons and a son-in-law and five beautiful grandchildren. We are Grandy and Grandalf! 🧙‍♀️&🧙‍♂️

As with those who are moms, Happy Father’s Day today to all of the strong, loving men who are fathers, to those who are chosen dads, surrogate dads, step-dads, adoptive dads, and to the women who are also dads.

Special thanks to all those who love and support them.

Hugs,
M&M

This is a photograph made at the Victoria Butterfly Gardens. The edges of the frame are surrounded by plants, and there's a rocky stream running vertically down the center. At the top there's a small statue of a stone Buddha sitting in quiet contemplation.

My Favourite Image of the Year

Hi Folks:

Almost every year since 2010 we’ve taken the opportunity on New Year’s Day to make some images of Marcia, and without hesitation the result is my favourite image of the year. We started out doing this at Government House, but for the past several years we’ve headed to Beacon Hill Park instead. This is the image for 2025:

This is a portrait orientation image of Marcia - sporting a bright red hat, a multi-coloured scarf, black raincoat and black pants, leaning against one of the giant sequoia trees in Victoria's Beacon Hill Park.

Marcia, New Year’s Day 2025

I know she looks beautiful here, but the image doesn’t really do her justice. You’ll just have to take my word for it.

From Marcia and me, we wish you a new year filled with as much happiness, health, prosperity, excitement, love, peace and adventure as you can handle!

Hugs,
M&M

2025 Photo Calendars (part two)

Hi Folks:

This is just a quick update to our previous calendar post as we’ve gone through the thousands of images we’ve made this year and picked out 12 for our calendar. As usual, some of the images were made with our cell phones and some were made with the Sony a7RIII camera. Some of the images were made by Marcia and some by Mike, but none of that really matters. All of this year’s images were made in and around Victoria, BC except for October’s, which was made on Mayne Island, BC.

Combined, our calendar looks like the image below. If you’d like to download a copy for yourself, click the image to link to a .pdf version.

This image shows all 12 of our calendar pages, each with an image at the top and the monthly calendar at the bottom. They're aligned in two rows of six months each.

2025 Photo Calendars (part one)

Hi Folks:

As we’ve passed mid-November, we’re slowly closing out 2024. That means it’s time to make our photo calendar templates available, both for MS Word users (for those who don’t use graphics programs) and as .png files for those who do. As before we will be making our own calendar available in .pdf format for those who are interested, but (as we did last year) we’re doing the post in two parts. For our calendar we use images made in that month (i.e. the image for May 2025 was made in May 2024). Since we haven’t yet gotten to December our calendar isn’t yet complete, but we wanted to make the templates available so others can work on their own calendars.

Spooktacular Hugs!!

This image shows a chalk drawing on the sidewalk in front of our house. It consists of a (not very scary) ghost and text that reads, "Share Spooktacular Hugs Here"

Hi Folks:

We haven’t done much chalk art recently because of the rain (no complaints – it refills the aquifer and we’ll appreciate it next summer). Unfortunately rain isn’t very kind to chalk art! We’re expecting another major storm this weekend so we’ll have to put out something else for Hallowe’en, but in the meantime, remember to hug someone you love today. Or a stranger. Or, preferably, both!!

Hugs,
M&M

Making Panoramas in the Rainforest (part two)

Hi Folks:

Making digital panoramas is essentially a two-part process. In part one of this post I covered a bit about digital panoramas in general and some considerations that become important when collecting the images to be used for the panorama. Part two is focused (pun intended) more toward what to do with the images once you have them on your computer.

NB: If you have a smart phone you can use the panorama mode on your phone to make a simple panorama. Some even allow you to create a panoramic image from a video. Depending on your phone and your expectations, that may be sufficient for your needs. For me, it’s mostly not, because one of the benefits I find in making a digital panorama is the increase in resolution I obtain from joining together several images into one. The downside to that is that file sizes can get quite large, so when rendering the final image file it’s best to balance what you want against the capabilities of your computer. Sometimes I try to make smoke come out of mine… 🙂
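As a rough illustration of why the files get large, here’s a back-of-the-envelope estimate of stitched resolution and in-memory size. The 30% overlap and 16-bit RGB figures are my assumptions, and the frame dimensions in the example are simply the a7RIII’s.

```python
# Rough estimate of a single-row panorama's stitched resolution and
# in-memory size, from frame count, frame dimensions and overlap.

def pano_size(frames, frame_w, frame_h, overlap=0.30, bytes_per_px=6):
    """Return (width_px, megapixels, memory_mb).

    bytes_per_px=6 assumes 16-bit RGB in memory; file sizes on disk
    depend on the format and compression.
    """
    width = frame_w + (frames - 1) * int(frame_w * (1 - overlap))
    megapixels = width * frame_h / 1e6
    memory_mb = width * frame_h * bytes_per_px / 2 ** 20
    return width, megapixels, memory_mb

# Five 42 MP frames (7952 x 5304) with ~30% overlap:
width, megapixels, memory_mb = pano_size(5, 7952, 5304)
```

Five frames from a 42-megapixel camera land in the neighbourhood of 160 megapixels and the better part of a gigabyte in memory before any editing layers are added, which is where the smoke comes in.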

This post is (typically) very long, and so we’ve broken it up into segments for you. Clicking on the subtitles will bring you to the relevant section:

How Panorama Software Works
Projections
Panorama Software Options
Making Panoramas
Stitching Errors
Exposure and Image Noise
Parallax
White Balance
Chromatic Aberration, Fringing and Colour Artifacts
Image Cropping
Keystoning
Final Thoughts


Put On a Happy Face 🙂

Hi Folks:

This is a chalk art drawing in front of our house. The main drawing is a large (4' diameter) circle, filled in to make a happy face. At the top it reads, "You Are Beautiful." At the bottom it reads, "Share Hugs Here."

As Labour Day weekend is upon us we’re starting to wind down summer once again. Vacations are (for the most part) coming to an end, children and adults are going back to school… Still, we wanted to offer a reminder that even as seasons change, some fundamental things don’t. Self-value is inherent, and not linked to what we do. And kindness is always the right response.

To that end, we wanted to offer you a reminder to put on a happy face and engage your world with love. 💗

Hugs,
M&M

P.S. This drawing took two whole sticks of yellow! But it was worth it… 🙂
P.S. II, the sequel: This is our ninth Hugs chalk art pattern for 2024. If you’d like to see all of them, click here: 2024 Hug Zones.


Using Capture One Pro in Black and White

Hi Folks:

Yes, I’m aware I haven’t yet posted part two of my ‘Making Panoramas in the Rainforest‘ post. It’s coming. Truly!

Okay, the idea for this post came from a couple of sources, but most notably from an image I made recently with my phone camera. I have a Galaxy S21 phone, and in pro mode it allows me to shoot in raw/dng format. I can open those images in Capture One as raw files the way I would any other.

Before we continue I want to reiterate a couple of things. Those who have read our previous posts will be familiar with them. The first is a reminder that digital cameras don’t capture images. Digital cameras capture light as information, and we can take that information and arrange it in such a way that it looks like an image – either on screen or in a print. This happens because we arrange that information into a grid of little coloured dots (on paper) or little boxes (pixels) on the screen.