A look at high dynamic range photography

Apple’s latest revision to its iPhone software, 4.1, adds a feature called high dynamic range (HDR) imaging to its built-in camera.

By PHIL BAKER
September 13, 2010 23:09

Coincidentally, I have been working on a column on this subject for the past few weeks, making HDR images with my Pentax K-7 digital SLR and then processing them with Photomatix software as the final step.

One of the limitations of both conventional film and digital photography is the inability to capture the wide range of brightnesses in a single image. This has been a problem since the beginning of photography, and one I grappled with in designing cameras at Polaroid decades ago.

The camera designer’s best solution has been to provide several ways of determining the best exposure for a particular scene, such as a center-weighted or spot meter reading. But scenes often contain areas that are very bright alongside others that are dark. An example would be an outdoor scene with a very bright sky and deep shadows, where detail in both areas is lost.

Newer cameras have intelligent capabilities to figure out the kind of scene and adjust to get the best possible exposure. For example, a camera might detect the scene to be a portrait and expose for the face while ignoring the background.

But that’s still a compromise, just a single exposure, because the film or digital sensor lacks the ability to capture the wide dynamic range that our eyes are able to adjust to and accommodate.

With the advent of digital photography and new software tools that have become available, there’s an entirely new way to address these limitations.



How does it work? The trick is to take several exposures, usually three, of a given scene and combine them digitally, taking the best parts of each exposure so that almost everything in the composite image is perfectly exposed.
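The merging idea can be illustrated with a toy sketch in Python (my own simplification for illustration, not how Photomatix actually works): weight each pixel of each exposure by how well exposed it is, then take a weighted average, so well-exposed pixels dominate the composite.

```python
import numpy as np

def fuse_exposures(images):
    """Toy exposure fusion: weight each pixel by its distance from
    mid-gray (0.5), so well-exposed pixels dominate the average.
    `images` is a list of same-shaped arrays with values in [0, 1]."""
    stack = np.stack(images).astype(float)
    # Pixels near 0.5 are "well exposed"; near 0 or 1 they are clipped.
    weights = 1.0 - np.abs(stack - 0.5) * 2.0
    weights += 1e-6  # avoid dividing by zero where every exposure clips
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)

# Three fake "exposures" of the same two-pixel scene:
dark   = np.array([0.05, 0.40])  # shadows crushed
normal = np.array([0.30, 0.90])
bright = np.array([0.55, 1.00])  # highlights blown
fused = fuse_exposures([dark, normal, bright])
```

Real tools use far more sophisticated weighting and alignment, but the principle is the same: each exposure contributes most where it is best exposed.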

Typically, the first image is made at the normal exposure, a second image two stops brighter (four times the exposure time) and a third two stops darker (one-quarter the exposure time). In the first exposure, the average portions will be well exposed; in the second, the dark shadows; and in the third, the bright sky.
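The stop arithmetic above can be sketched in a couple of lines of Python (a hypothetical helper, not part of any camera's firmware): each stop of compensation doubles or halves the exposure time, so an n-stop offset scales it by 2 to the power n.

```python
# Each stop doubles (+1) or halves (-1) the exposure time,
# so an n-stop offset scales the base exposure by 2**n.
def bracketed_exposure(base_seconds, stops):
    """Return the exposure time for a given stop offset."""
    return base_seconds * (2 ** stops)

# A 0 / +2 / -2 bracket around a base exposure of 1/125 s:
base = 1 / 125
bracket = [bracketed_exposure(base, s) for s in (0, +2, -2)]
# +2 stops gives 4x the time (1/31 s); -2 stops gives 1/4 (1/500 s).
```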

Of course, for this to work well, the camera needs to be held steady on a tripod, and it may not work for images with fast moving elements.

I set my Pentax to aperture priority mode and used its bracketing setting to take three consecutive images at 0, +2, and -2.

(Aperture priority means the aperture stays fixed while the shutter speed varies for each of the three images; that keeps the depth of field consistent across all three.)

Once the exposures are made, you load them into the computer.

The images are then processed into a single HDR image using Photomatix Pro software ($99 from hdrsoft.com) by dragging the three images into the program’s window. With a few clicks the program merges the images, even adjusting for misalignment due to camera or tripod movement between exposures.

Photomatix, available for both Mac OS X and Windows, includes a final step called tone mapping to improve the HDR image it generates. It’s akin to image tweaking found in many photo programs.

The resulting images looked like none I’d ever taken before. In outdoor shots, foliage in the foreground and to the sides were bright shades of green, while the sky in the upper half was dark blue with puffy white clouds.

Images that contained both the interior of a room and a bright scene through the window showed both areas perfectly exposed.

To see for yourself, check out the huge collection of HDR images at stuckincustoms.com, a popular travel photography blog written by Trey Ratcliff. Ratcliff is an award-winning photographer who created the first HDR image to be exhibited in the Smithsonian Museum. His images are stunning, some verging on the surrealistic. For example, there’s a nighttime image of Hong Kong from The Peak that captures the illuminated buildings, the ship traffic in the harbor and the clouds and mountains in the background, all perfectly exposed. His site also provides a tutorial on HDR.

In addition to the iPhone, I expect the feature to be incorporated into other automatic point and shoot cameras and smartphones.

While not up to the level of what you can do with a DSLR and Photomatix, HDR is about to move from the serious photographer to the casual shooter, driven, surprisingly, by a computer company, rather than a camera company.
