Sooner or later, probably every amateur photographer comes across the HDR phenomenon, i.e. (simplifying somewhat) a technique that extends the dynamic range of an image in software beyond what a standard light-sensitive sensor can capture. Photos taken this way retain rich detail in both the very dark and the very bright parts of the frame. With HDR we can, for example, avoid a burnt-out sky or the featureless faces of people photographed in full sun. HDR seems to aspire to be the panacea of the mobile and compact photography world.
In most cases, the wide tonal range of the scene is captured by taking several frames with different exposure times, which are then combined into a single image by balancing the brightness of its individual parts (so-called tone mapping). Unfortunately, with this method, merging several frames with different exposure values, taken at relatively long intervals, can generate so-called artifacts, i.e. errors in the resulting image. Just as stitching several images into a panorama can produce frame-alignment errors, HDR produces artifacts of its own – ghosts: people, leaves, clouds, vehicles or other objects that were in motion while the sequence of photos was being taken.
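The merging step described above can be sketched in a few lines of numpy. This is only an illustration of the idea, not any camera's actual pipeline: three synthetic frames with different exposures are blended with a weight that favours mid-tones, so blown highlights and crushed shadows contribute little to the result. The function name and the Gaussian weighting are my own illustrative choices.

```python
import numpy as np

def fuse_exposures(frames):
    """Merge bracketed frames: weight each pixel by how close its value
    is to mid-grey, so clipped highlights and crushed shadows are
    largely ignored in the blend."""
    frames = np.stack([f.astype(np.float64) for f in frames])  # (n, h, w)
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * 0.2 ** 2))  # favour mid-tones
    weights /= weights.sum(axis=0, keepdims=True)              # normalise per pixel
    return (weights * frames).sum(axis=0)

# Synthetic scene: left half in deep shadow, right half very bright.
scene = np.concatenate([np.full((4, 4), 0.1), np.full((4, 4), 0.9)], axis=1)
under = np.clip(scene * 0.5, 0, 1)   # short exposure keeps the highlights
over  = np.clip(scene * 2.0, 0, 1)   # long exposure opens up the shadows
fused = fuse_exposures([under, over, scene])
```

Because each frame is taken at a different moment, anything that moves between the short and the long exposure ends up blended from mismatched pixels – which is exactly where the ghosts come from.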
Like many other photographers, I fell into the hole called HDR. Attracted by the splendour of supernatural colours and radioactive grass, I lived in the belief that it was the best thing in the world. Of course, I was wrong, but one photograph remains from that period on which the artifacts caused by moving objects are clearly visible (the centre of the frame, the people climbing the stairs).
It is therefore not a flawless technique, especially since artifacts are only one of several problems associated with this method of imaging (others include glowing edges, an excess of detail or, paradoxically, a lack of contrast). There are, of course, sophisticated algorithms that are quite good at detecting and masking the shifts that appear when the resulting file is generated, but these are complex solutions, often requiring considerable computing power, which for now rules out their use in phones and compact cameras.
A contrasty scene that would challenge even some SLR cameras poses no major problem for the HDR+ algorithms. What is more, the scene looks natural: with HDR+ you will search in vain for the over-sharpened detail or radioactive grass typical of tone mapping.
Google’s engineering team, however, decided to approach the problem in a slightly different way. Instead of taking a series of photographs with different exposure times, they proposed a solution based on identical exposures, chosen so as to preserve detail in the bright parts of the photographed scene. As in the classic approach, several (or even a dozen or so) frames are taken, but their exposure does not differ. This minimises the spacing between individual photos and allows the entire sequence to take, say, a third of a second instead of a whole second, reducing the number of potential artifacts.
So far, however, the whole thing seems devoid of elementary sense. After all, HDR is a technique for increasing the perceived dynamic range of an image, so how can that be achieved by merging several almost identical frames of limited dynamics? The target effect is achieved by aggressively brightening the shadows, after first eliminating digital noise by averaging the signal from several very similar frames. This technique has long been known to astrophotographers, who get rid of the noise of very long, multi-hour exposures by averaging the signal from many similar pictures. If you bear in mind that digital noise is random while the elements of the scene (objects, people) stay constant, then averaging each pixel yields an image free of noise but rich in detail. This allows the dark areas of the photograph to be brightened drastically without sacrificing quality.
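The averaging argument is easy to verify numerically. The sketch below (a toy model, not Google's implementation) observes a constant dark scene through random sensor noise, stacks eight frames, and shows that the per-pixel mean is far closer to the true signal than any single frame – which is what makes the subsequent aggressive shadow lift safe. Averaging N frames reduces the noise standard deviation roughly by a factor of √N.

```python
import numpy as np

rng = np.random.default_rng(0)

# A constant dark scene (value 0.1) observed through random sensor noise.
scene = np.full((16, 16), 0.1)
frames = [np.clip(scene + rng.normal(0, 0.05, scene.shape), 0, 1)
          for _ in range(8)]

# The scene is identical in every frame while the noise is random,
# so the per-pixel mean converges to the true signal.
stacked = np.mean(frames, axis=0)

# Error (std dev of the deviation from the true scene) shrinks
# roughly by a factor of sqrt(8) ~ 2.8.
single_err = np.std(frames[0] - scene)
stacked_err = np.std(stacked - scene)

# Now the shadows can be brightened aggressively without amplifying
# noise the way a single frame would.
brightened = np.clip(stacked * 4, 0, 1)
```

This is the same trick astrophotographers use when stacking sub-exposures, only compressed into a fraction of a second.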
A very good (for the physical size of the sensor) amount of detail across the whole frame, smooth tonal transitions and no noise in either the brightness (luminance) or the colour (chrominance) channel.
But that’s not all Google has to offer. HDR+ takes several photos and then selects the sharpest ones as the basis for further calculations. It then performs an edge analysis to align all the frames (so that the final image is sharp, with nothing displaced). Finally, it averages the value of each pixel and brightens the shadows. The result is a natural image, similar to what our eyes see.
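The steps above can be strung together into a rough sketch. This is a deliberate simplification under stated assumptions: sharpness is scored with a simple Laplacian-variance measure (blurry frames score low), the alignment step is skipped by assuming a steady sequence, and all names are illustrative – Google's real pipeline is far more sophisticated.

```python
import numpy as np

def sharpness(img):
    """Variance of a discrete Laplacian: featureless/blurry frames score low."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def hdr_plus_sketch(frames, keep=3, gain=2.0):
    # 1. Keep only the sharpest frames as the basis for further work.
    frames = sorted(frames, key=sharpness, reverse=True)[:keep]
    # 2. (The real pipeline aligns frames via edge analysis; skipped here
    #    under the assumption of a steady, already-registered sequence.)
    # 3. Average per pixel to suppress noise, then lift the shadows.
    mean = np.mean(frames, axis=0)
    return np.clip(mean * gain, 0.0, 1.0)

rng = np.random.default_rng(1)
sharp = rng.random((8, 8))               # a frame full of detail
blurry = np.full((8, 8), sharp.mean())   # a featureless (blurred) frame
result = hdr_plus_sketch([sharp, sharp, sharp, blurry], keep=3)
```

The frame-selection step matters: a single motion-blurred frame in the stack would smear detail across the whole average, so it is cheaper to drop it than to correct it.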
Due to the specifics of the algorithm, HDR+ is also useful in scenes with relatively low contrast (e.g. indoors or late in the evening). Instead of using the built-in flash with its very low power, it is much better to take a series of pictures in the ambient light and average the noise away. The final effect will be much closer to what we are used to seeing in the analogue world.
In this example it is not difficult to see the quality degradation typical of very small sensors. It is still impressive, however, when one considers that the scene is lit mainly by moonlight.
As of today, Google offers HDR+ only to owners of Nexus 5 and Nexus 6 series devices. This decision is probably dictated by the presence of optical stabilisation in the lenses of these phones, which is of great importance when aligning the individual frames of a sequence (and let us not forget that we are talking about handheld photos). Undoubtedly, marketing considerations also play a significant role in this limitation. I will not hide, however, that I would be happy to see a similar solution to the problem of contrasty scenes in other phones and compact cameras.
The proliferation of this and similar techniques further closes the gap between compact cameras and full-size SLRs. The ability to take thirty-megapixel photos with a rectilinear perspective using a multifunctional device in your pocket is something that could only be imagined a few years ago. Meanwhile, generating a depth map and simulating shallow depth of field afterwards calls into question the need to carry large, heavy lenses in travel photography, and averaging pixel values across hundreds of pictures yields the silky-water effect typical of long-exposure photos.
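That last trick – simulating a long exposure by averaging – is the same per-pixel mean used for noise reduction, just pointed at a moving subject. In the toy model below (illustrative only), each frame is a static rock pattern plus random ripples; averaging hundreds of frames keeps the rocks sharp while the moving water smears toward its mean brightness.

```python
import numpy as np

rng = np.random.default_rng(2)

# Static rocks (a fixed pattern) plus moving water (random ripples per frame).
rocks = rng.random((8, 8))
frames = [0.5 * rocks + 0.5 * rng.random((8, 8)) for _ in range(200)]

# Averaging hundreds of short exposures approximates one long exposure:
# the static component survives intact, the moving component blurs out.
long_exposure = np.mean(frames, axis=0)
```

No neutral-density filter, no tripod-mounted thirty-second exposure – just arithmetic over a burst.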
And what is Canon doing in the meantime? It is boasting about a new APS-C camera, which this time has a built-in GPS module. Well, well…