Extending dynamic range

In imaging, dynamic range means the imaging hardware's ability to distinguish differences in luminosity. Due to the non-linear nature of the human photoreception mechanism, the density range of human perception is very large, and it is difficult to capture scenes that contain a wide range of luminosities. A typical example is a landscape against a background of bright clouds. If you set the camera exposure to suit the brightness of the foreground, the luminosity of the sky falls outside the density range of the camera and most of the sky pixels are clipped to white, resulting in an ugly, flat, "burnt through" white sky. On the other hand, if you set the exposure low enough to bring out the details of the clouds, the foreground luminosity drops below the lower end of the density range, most of the pixels get clipped to black, and you are left with nothing but a dark silhouette.
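
To see what clipping does to the pixel data, here is a minimal numpy sketch; the scene values and the 0..1 sensor range are made up purely for illustration:

 import numpy as np
 
 # Invented luminosities for one scanline, from deep shadow to bright sky.
 scene = np.array([0.02, 0.1, 0.5, 2.0, 8.0])
 
 def capture(scene, exposure):
     """Simulate a sensor: scale by exposure, clip to the 0..1 density
     range, and quantise to 8 bits."""
     return (np.clip(scene * exposure, 0.0, 1.0) * 255).astype(np.uint8)
 
 print(capture(scene, 2.0))    # [ 10  51 255 255 255] -- the sky clips to white
 print(capture(scene, 0.12))   # [  0   3  15  61 244] -- the shadows crush to black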

There is a trick to extend the density range of a camera: by taking shots of the same scene at different exposures, we can capture the details of both the highlight and shadow parts, and then combine them digitally into one image. The picture below is an example of this technique. The leftmost image was taken with an exposure suitable for the foreground, but the clouds were blended into a white haze. The middle image was taken with an exposure low enough to make the cloud structure visible, but now the foreground is too dark. The rightmost image is a combination of the other two. The foreground pixels come mostly from the high-exposure image, while the sky is a blend of the two. The result presents details across various luminosities much better than either of the other images and is closer to what the human eye would see. The good thing about this technique is that no extra equipment or fancy software is needed: the pictures were taken with a regular cheap digital camera (a film camera would also do, as long as you can adjust the exposure) and the postprocessing was done with GIMP, using a simple plugin downloaded from the plugin registry.

[Image: dynrange.jpg]
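
If you prefer a scripted route instead of GIMP, OpenCV's photo module includes an exposure-fusion implementation (after Mertens et al.) that does the combining without any manual masking. A minimal sketch, assuming the exposures are already aligned and saved under these placeholder filenames:

 import cv2
 
 # Placeholder filenames for the aligned exposures.
 exposures = [cv2.imread(p) for p in ("foreground.jpg", "clouds.jpg")]
 
 # Mertens exposure fusion needs no exposure times and no manual mask.
 fused = cv2.createMergeMertens().process(exposures)   # float32, roughly 0..1
 cv2.imwrite("combined.jpg", (fused * 255).clip(0, 255).astype("uint8"))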

Scanners also have a limited density range, and similar problems can occur when scanning images whose detail depends on very small differences in luminosity. Pencil drawings typically fall into this category. To read the detail in the darkest parts, plenty of light is needed to penetrate the pigment and reflect back from the paper, but that much light washes away the faintest pencilwork. The pictures below, scanned at different exposures, demonstrate the problem. The darker one has all the lighter shades and linework present, but its darkest parts lack contrast and detail, and the whole picture is generally too dark. The lighter one brings out the detail of the darkest parts better, but the rest of the picture is totally washed out.

[Image: stoat1.jpg]

[Image: stoat2.jpg]

The solution is exactly the same as with the photographs: combine the different exposures into one image.

Some notes about scanning the different versions: most scanner software offers "image enhancement" options that change the gamma, brightness and contrast of the image. These, however, usually only manipulate the data after it has been scanned, and don't add the new detail that we need. What you must change is the actual brightness of the scanning head's light source (or the sensitivity of the CCD, whichever option is available for your scanner). In Xsane, these options are found in the "Standard options" window. Also note that when you change these settings in Xsane, the preview window is not updated until you press the "Acquire preview" button. You must scan exactly the same area for each exposure, so to make sure it doesn't change between previews, uncheck the "Preselect scanarea" option in "Preferences -> Xsane setup -> Enhancement".
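
If your scanner's backend is supported by SANE, the two passes can also be scripted. A sketch using the python-sane bindings; note that the option names ('gray', 'brightness') are backend-specific assumptions, so list what your scanner really offers with dev.get_options() first:

 import sane
 
 sane.init()
 dev = sane.open(sane.get_devices()[0][0])   # first scanner found
 dev.mode = 'gray'
 dev.resolution = 300
 
 # Two passes over exactly the same area, varying only the lamp/CCD level.
 for filename, level in (("scan_dark.png", -30), ("scan_light.png", 30)):
     dev.brightness = level      # assumption: the backend exposes this option
     dev.scan().save(filename)   # scan() returns a PIL image
 
 sane.exit()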

Open GIMP or some other image manipulation program and put the exposures into the same image as separate layers. The simplest way to combine the images is to change the opacity of the upper layer to get a uniform blend. Most of the time, however, you will want to mix the images selectively, making some parts closer to the low-exposure image and other parts closer to the lighter one. This can be achieved by adding a mask to the upper layer. Paint the mask with 50% grey and you have an equal mix of the two layers. Then paint the mask black where you want the result to be closer to the lower layer, and white where you want it to be closer to the upper one. This method is laborious, but it gives you complete control over the result.
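
In numpy terms, the layer mask is just a per-pixel blending weight. A minimal sketch of the same arithmetic, with placeholder filenames for the two scans and the darker one treated as the upper layer:

 import numpy as np
 from PIL import Image
 
 light = np.asarray(Image.open("scan_light.png").convert("L"), dtype=np.float32)
 dark  = np.asarray(Image.open("scan_dark.png").convert("L"), dtype=np.float32)
 
 # The mask plays the role of the GIMP layer mask on the upper (dark) layer:
 # 1.0 shows the dark scan, 0.0 shows the light scan, 0.5 mixes them equally.
 mask = np.full(dark.shape, 0.5, dtype=np.float32)
 mask[200:400, 100:300] = 1.0        # "paint" a region to favour the dark scan
 
 out = mask * dark + (1.0 - mask) * light
 Image.fromarray(out.clip(0, 255).astype(np.uint8)).save("combined.png")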

The mask can also be produced automatically. In GIMP, there are two plugins for this. They don't ship with the standard distribution, but you can download them from the GIMP plugin registry. The first one is called Dynamic range extender, and it expects two exposures, with the darker one on top. Uncheck "merge layers", because you may want to try different opacities for the upper layer afterwards. The plugin creates a mask for the darker layer, which makes the lower layer selectively visible.
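
How the plugin derives its mask isn't documented here, but a plausible reconstruction is to take the smoothed brightness of the scene as the mask, so that bright areas show the dark exposure while shadows fall through to the lighter layer underneath. This is purely an assumption about its internals, reusing the light/dark arrays from the sketch above:

 import numpy as np
 from scipy.ndimage import gaussian_filter
 
 def auto_mask(light, sigma=10.0):
     """Guess at a plugin-style mask: scene brightness, smoothed so the
     transition between the two exposures has no hard seams."""
     return gaussian_filter(light / 255.0, sigma)
 
 mask = auto_mask(light)
 out = mask * dark + (1.0 - mask) * light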

The second one is called HDR Tone Mapping, and it expects three different exposures. The order presumably doesn't matter, and it produces a mask for the two upper layers (though in all my trials the mask of the topmost layer was almost totally black, thus not contributing significantly to the combined image). NOTE: this plugin requires an RGB image, so if you scanned your images in greyscale, you must convert the image to RGB before using the plugin (you can change it back to greyscale afterwards). If you try to use it on a greyscale image, the plugin just hangs indefinitely.
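
Whether this plugin works the same way is an assumption, but a common weighting scheme for fusing several exposures is "well-exposedness": each pixel is weighted by how close it is to mid-grey in each exposure. A sketch that works on any number of exposures, greyscale or RGB:

 import numpy as np
 
 def fuse(exposures, sigma=0.2):
     """Weight each exposure per pixel by closeness to mid-grey
     (a Gaussian around 0.5), normalise, and sum."""
     stack = np.stack([e.astype(np.float32) / 255.0 for e in exposures])
     weights = np.exp(-((stack - 0.5) ** 2) / (2.0 * sigma ** 2))
     weights /= weights.sum(axis=0, keepdims=True)
     return ((weights * stack).sum(axis=0) * 255).astype(np.uint8)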

With both plugins, it's worth trying different opacities for the masked layer after you have run the plugin; I have found that this often results in a better image.

Here's the result of the first plugin applied to the exposures above, with the masked layer's opacity set to 70%:

[Image: stoat3.jpg]