On all the smartphones I get my hands on these days there’s an HDR function. Quite often I find that the default is ‘Auto’, but what difference does it really make?
What Is HDR?
High Dynamic Range, or HDR, is a technique that combines three or more differently exposed images into one evenly exposed image: typically one exposed for the shadows, one for the mid-tones and one for the highlights. Photographers have been creating HDR images for some time using software from Photomatix, Adobe and other third-party providers.
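As a rough illustration of the idea, here’s a minimal, hypothetical sketch in Python with NumPy: each pixel is weighted by how close it is to mid-grey, so the best-exposed source contributes most. Real HDR merging and exposure-fusion tools are far more sophisticated than this.

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Blend a bracketed stack by weighting each pixel by how
    well-exposed it is (closest to mid-grey) -- a simplified,
    illustrative take on exposure fusion."""
    stack = np.stack([img.astype(np.float64) for img in images])
    # Weight: Gaussian bump centred on mid-grey (0.5 on a 0..1 scale)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0)           # normalise per pixel
    return (weights * stack).sum(axis=0)

# Three fake 'exposures' of the same 2x2 scene (0 = black, 1 = white)
under = np.array([[0.05, 0.10], [0.20, 0.30]])
mid   = np.array([[0.30, 0.50], [0.70, 0.90]])
over  = np.array([[0.60, 0.90], [0.98, 1.00]])

fused = fuse_exposures([under, mid, over])
print(fused)  # each pixel pulled toward its best-exposed source
```

Because each output pixel is a weighted average of the inputs, the fused value always lies between the darkest and brightest source values for that pixel.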
The Smartphone HDR
What smartphones do, however, is take the images and combine them on the fly, meaning only the final combined image is saved to memory. Let’s examine the difference between an HDR and a non-HDR image taken on the iPhone 6:
The first difference we see is the size of the images:
We can see here that the HDR image is 0.18MB larger. Not a great difference in this case, but that 0.18MB is 0.18MB more information, and that’s key.
Where’s The Data?
(Prior to the following steps, and to avoid pulling out my OCD hair, I ran the Upright command in ACR.)
To examine where the extra data might be, I’ll load both images into Adobe Camera Raw. Here I can target the various luminance values of the image, and I can also compare their histograms:
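To make the histogram comparison concrete, here’s a small sketch of my own (not ACR’s actual method) that builds a luminance histogram with NumPy, using two synthetic greyscale ‘photos’: one with its tones bunched up at the bright end, one with a full tonal spread.

```python
import numpy as np

def luminance_histogram(rgb, bins=256):
    """Collapse an RGB array (0..255) to luminance and histogram it,
    loosely mimicking the luminance histogram ACR displays."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 weights
    hist, _ = np.histogram(luma, bins=bins, range=(0, 255))
    return hist

rng = np.random.default_rng(42)
# Synthetic image with tones bunched toward the highlights...
bunched = np.repeat(rng.integers(180, 256, size=(64, 64, 1)), 3, axis=2)
# ...and one spread across the whole tonal range
spread = np.repeat(rng.integers(0, 256, size=(64, 64, 1)), 3, axis=2)

h_bunched = luminance_histogram(bunched)
h_spread = luminance_histogram(spread)
# Share of pixels in the top (highlight) quarter of the scale
print(h_bunched[192:].sum() / h_bunched.sum())  # large share: bunched right
print(h_spread[192:].sum() / h_spread.sum())    # modest share: well spread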
For this image there are three areas that catch my eye as being different:
1: The Shadows are a little over to the right. It’s difficult to spot in a static image, but flip-flopping between the two I can see that the non-HDR shadows peak sits more toward the Midtones.
2: The Midtones peak in the non-HDR image is not so high; the image is darker here.
3: The most significant difference is here in the Highlights. There are far more luminance values here in the HDR image. In the non-HDR image the highlights are bunched up to the right, suggesting that they are blown out and lack detail.
Right off the bat I can see the difference in the highlights: in the HDR image I can see more detail, while the clouds in the non-HDR image are blown out.
Recovering the Highlights
Reducing the luminosity of the Highlights and Whites on the non-HDR image recovers little to none of the blown highlights.
The HDR image recovers the highlights better thanks to the extra data stored there. For the finished image, however, I might choose to balance this a little more evenly. By recovering the highlights I’ve shifted a lot of the luminance values down and I’ve actually ‘lost’ some data:
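That loss is easy to demonstrate with a toy example: pulling distinct 8-bit highlight tones down can merge them into the same value, so the detail they encoded is gone. This is a simplified illustration, not what ACR actually does internally.

```python
import numpy as np

# Six distinct 8-bit highlight tones
highlights = np.array([250, 251, 252, 253, 254, 255], dtype=np.uint8)

# Compress the top of the range by 50% (a crude 'highlights' slider)
recovered = (200 + (highlights.astype(np.float64) - 200) * 0.5).astype(np.uint8)

print(np.unique(highlights).size)  # -> 6 distinct tones before
print(np.unique(recovered).size)   # -> 3 distinct tones after
```

Half-steps like 225.5 truncate to the same integer as 225, so neighbouring tones collapse together: the histogram shifts left, but some separation between values is gone for good.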
I’ll do the same experiment, this time with the shadows, to see what I can recover:
Not a great deal of difference here. When taking the image I focused on the dark area, so the iPhone adjusted the exposure for that part of the image too.
Because I can, I’ll add in some Clarity before I adjust the image further, just to see what happens:
This makes some significant changes. Because Clarity works by adding contrast to edges, the HDR image copes better with the transition between light and dark.
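As a rough sketch of the idea, a Clarity-style adjustment boosts local contrast by pushing each pixel away from a blurred copy of the image. The box blur below is my own simplification; Adobe’s actual Clarity uses more sophisticated edge-aware filtering.

```python
import numpy as np

def clarity(luma, amount=0.5, radius=2):
    """Illustrative 'Clarity'-style boost: push each pixel away
    from its local average (a box-blurred copy), then clamp."""
    padded = np.pad(luma, radius, mode="edge")
    kernel = 2 * radius + 1
    blurred = np.zeros_like(luma, dtype=np.float64)
    for dy in range(kernel):                 # naive box blur
        for dx in range(kernel):
            blurred += padded[dy:dy + luma.shape[0], dx:dx + luma.shape[1]]
    blurred /= kernel ** 2
    return np.clip(luma + amount * (luma - blurred), 0.0, 1.0)

# A soft light/dark boundary: left half 0.3, right half 0.7
step = np.full((8, 8), 0.3)
step[:, 4:] = 0.7
out = clarity(step)
print(out[0, 3], out[0, 4])  # dark side darker, bright side brighter
```

Away from the edge the local average equals the pixel value, so nothing changes; right at the transition the dark side is pushed down and the bright side up, which is exactly the edge contrast Clarity adds.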
Tidy Up and Finishing Off
I’ve adjusted the sliders here with an eye on the histogram. What I’m aiming for is for the graph to stretch from one end of the scale to the other.
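Numerically, that aim amounts to a simple ‘levels’ stretch: map the darkest tones to black and the lightest to white so the histogram spans the full scale. A hypothetical sketch:

```python
import numpy as np

def stretch_levels(luma, low_pct=1, high_pct=99):
    """Stretch tones so the histogram runs end to end: map the 1st
    and 99th percentiles to pure black and pure white, clipping the
    extremes (a bare-bones 'levels' adjustment)."""
    lo, hi = np.percentile(luma, [low_pct, high_pct])
    return np.clip((luma - lo) / (hi - lo), 0.0, 1.0)

# A flat, low-contrast image occupying only the middle of the range
flat = np.linspace(0.35, 0.65, 100).reshape(10, 10)
stretched = stretch_levels(flat)
print(stretched.min(), stretched.max())  # -> 0.0 1.0
```

The tonal ordering is preserved; only the spacing changes, so the same image now uses the whole range from black to white.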
I’ve used Adobe Camera Raw here for experimentation, but I’d normally edit on my phone or iPad. Here’s an edit done in Snapseed:
I’ve taken better, and this is a reminder to give images some wiggle room. I lost the top of the cathedral when I straightened and cropped. Guess I’ll have to take another wander down to Peterborough Cathedral and take some more…?