Google’s Pixel 3 camera software, the key to the best photo smartphones

Google has done it again. Its Pixel 2 and Pixel 2 XL set the bar very high thanks to their great photographic performance, and the new Pixel 3 and Pixel 3 XL have gone a little further still. Google’s approach to photography is interesting because it does not follow in the wake of most of its competitors, who bet on two, three and even four rear cameras.

Like their predecessors, the new Pixel 3 and Pixel 3 XL incorporate a single main camera, which has proven to perform really well in many of the test scenarios to which we subjected it during our in-depth review. In fact, these two smartphones, which have exactly the same photographic features, are two clear candidates for best camera phone of the current generation.

These are the three “ingredients” that make a good camera possible

The elements we need to take care of to build a quality camera are exactly the same whether we are talking about dedicated cameras or the ones integrated into mobile phones. All manufacturers know them and tend to look after them in their higher-end products, but most do not pay the same attention to each of these parameters.

The three basic “ingredients” that define the overall performance of any camera, whether dedicated or integrated into a smartphone, are the optics, the sensor and the post-processing. Some brands devote many resources to optics and turn to companies specialized in lens manufacturing to help them fine-tune the optics of their smartphones. This strategy is used, for example, by Huawei, which has developed, together with Leica, the lenses of some of its phones, such as the P20 Pro and the Mate 20 Pro. Quality optics must make the light converge on the surface of the sensor with precision while introducing as little distortion and chromatic aberration as possible.

Some smartphone manufacturers choose to design the interior of their devices so that they can include sensors with considerable physical dimensions. Almost all brands fit sensors of respectable size into their premium phones, but one of the most ambitious in this area is, again, Huawei, which has introduced a 1/1.7-inch sensor in both its P20 Pro and its Mate 20 Pro (the 1/1.5-inch sensor that Nokia and Microsoft fitted in the now veteran Lumia 1020 still impresses).

A step behind in this area are the other manufacturers, such as Sony, with the 1/2.3-inch sensor of its Xperia XZ3, followed closely by Samsung, Apple and Google, some of the brands that have chosen to include a 1/2.55-inch sensor in one of the cameras of their latest phones. However, it is important to bear in mind that the size of the photodiodes or photoreceptors, the tiny cells of the sensor that are responsible for collecting light, does not depend solely on the dimensions of the sensor; the resolution also matters, as is logical.

The quality of the optics, the size of the sensor and the sophistication of the processing have a direct impact on the finish of the photos we take with our phones

If we compare two sensors with the same size but different resolution, we can be sure that the sensor with fewer megapixels, and therefore fewer photodiodes, will have the larger photodiodes. This characteristic usually, though not always, allows the sensor with the larger photoreceptors to capture more light and, therefore, produce a lower noise level in shots taken with little ambient light.
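
To make the trade-off concrete, here is a minimal Python sketch that estimates the pixel pitch for a few hypothetical resolutions on a sensor of fixed size. The sensor dimensions are an assumption, roughly those of a 1/2.55-inch format, and are only there to illustrate the relationship described above.

```python
# Minimal sketch: how resolution affects photodiode (pixel) size on a sensor
# of fixed area. The dimensions below are an assumption, roughly a 1/2.55"
# format, and are used only to illustrate the trade-off described above.
SENSOR_W_MM, SENSOR_H_MM = 5.6, 4.2

def pixel_pitch_um(width_mm: float, height_mm: float, megapixels: float) -> float:
    """Approximate pixel pitch in micrometres for a given resolution."""
    pixels = megapixels * 1_000_000
    area_um2 = (width_mm * 1000) * (height_mm * 1000)
    return (area_um2 / pixels) ** 0.5

for mp in (12, 20, 40):
    print(f"{mp} MP -> ~{pixel_pitch_um(SENSOR_W_MM, SENSOR_H_MM, mp):.2f} um pitch")
```

With these assumed dimensions, a 12-megapixel sensor works out to a pitch of roughly 1.4 micrometres, while squeezing 40 megapixels into the same area shrinks each photodiode to well under a micrometre.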

If we stick to the two parameters we have just seen, the optics and the sensor, we could conclude that the smartphone with the best-quality lens and the biggest sensor is the one that will let us take the best photos. But this is not always the case. These two “ingredients” are very important, of course, but we cannot leave out of the equation a third parameter that is also crucial: the post-processing carried out by the smartphone based on the information the sensor has collected.

Precisely, Google’s Pixels shine in the photographic field because they have sophisticated processing that has a direct impact on the final finish of the photos. They may not have the best optics, and they certainly do not have the largest sensor, but their post-processing algorithms more than compensate in most usage scenarios. The conclusion we can draw from everything we have reviewed so far is that the quality of the photographs we take with a smartphone is largely determined by the three parameters we have seen.

There are other features and technologies that also have a significant impact on the quality of our snapshots and on the experience offered by these photography-minded smartphones, such as the presence or absence of optical stabilization or the focusing system. Even so, the optics, the sensor and the processing have the broadest reach, because these three elements define the quality and finish of each and every snapshot we take with our phone. Interestingly, on the Pixel 3 post-processing matters a great deal even when we work with RAW files, as we will see below.

This is how the processing that keeps the Pixel 3 at the forefront works

The real protagonist of this article, as its title reflects, is the post-processing that Google has implemented in its new Pixel 3 and Pixel 3 XL, which includes very important innovations that are not present in the Pixel 2. Google is a software company, not a hardware one, and it shows: this processing is largely responsible for the great performance of these smartphones in the field of photography.

For this reason, I propose that we dive into it, although before doing so I would like to point something out: the photographs that illustrate each processing algorithm were taken by my colleague Amparo Babiloni with a Pixel 3 XL. In her review of this phone you will find many more snapshots (more than 100) that can help you judge for yourselves, with precision and objectivity, what place this phone deserves to occupy compared to the other high-end, photography-minded phones on the market today.

Better portraits through machine learning

Machine learning is a discipline of artificial intelligence (AI) that consists of designing methods that allow computers to develop a behavior based on the analysis of input data. Put in a slightly less formal and simpler way, this branch of AI aims to find a way for computers to learn. What does this have to do with the portraits we can take with a Pixel 3?

Simply put, machine learning matters here because it is one of the tools this smartphone uses to let us obtain higher-quality background blur (bokeh), an improvement that has a very noticeable impact on smartphones. The Pixel 2 and Pixel 3, as well as the XL versions of both, have in common the use of Dual Pixel technology. This innovation is not exclusive to Google; Canon also implements it in some of its cameras and Samsung in its high-end smartphones, among other brands that bet on this technique.

What is really interesting is that it requires two photodiodes to be integrated into each sensor cell instead of just one. This strategy has a beneficial impact on focusing and, in addition, allows the sensor to capture two images each time we press the shutter without having to resort to a second camera. These two images are useful when we want to blur the background and keep only the subject in the foreground in focus, because the software can analyze them to identify the differences between them, however subtle they may be, in order to obtain depth information and use it to generate the appropriate blur mask. It is a process similar to the one our brain carries out based on the information collected by our two eyes.
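
As a rough illustration of the idea, the following Python sketch estimates a coarse disparity map between the two sub-images a Dual Pixel sensor captures, using simple block matching. It is only a toy under the assumption that the two sub-images are already available as grayscale arrays; Google’s real pipeline is far more refined and, in the Pixel 3, replaces this kind of classic analysis with a learned model.

```python
import numpy as np

def coarse_disparity(left, right, block=16, max_shift=4):
    """Toy estimate of per-block horizontal disparity between the two
    sub-images of a dual-pixel sensor (sum-of-absolute-differences matching).
    Regions whose disparity differs from the in-focus subject would then
    receive the synthetic background blur."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block))
    for r in range(h // block):
        for c in range(w // block):
            patch = left[r*block:(r+1)*block, c*block:(c+1)*block]
            best, best_err = 0, np.inf
            for s in range(-max_shift, max_shift + 1):
                cand = np.roll(right, s, axis=1)[r*block:(r+1)*block, c*block:(c+1)*block]
                err = np.abs(patch - cand).sum()
                if err < best_err:
                    best, best_err = s, err
            disp[r, c] = best
    return disp

# Toy usage: the "right" sub-image is the "left" one shifted by 2 pixels, so
# every block should report a shift of magnitude 2 (sign depends on convention).
rng = np.random.default_rng(0)
left = rng.random((64, 64))
right = np.roll(left, 2, axis=1)
print(coarse_disparity(left, right))
```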

It is important to bear in mind that both the Pixel 2 and the Pixel 3 carry out the blurring in software. The main difference between these two phones is that the Pixel 2 uses a traditional image analysis algorithm, while the Pixel 3 uses, as I anticipated at the beginning of this section, machine learning. For this reason, on paper it should be able to generate the background blur mask more precisely regardless of the complexity of the subject in the foreground.

The improvements we should in theory be able to perceive thanks to this technology are a more homogeneous blurring of the background and a more precise discrimination of the contour of the foreground subject, even when it includes gaps that let us see the background and that, therefore, must also be out of focus. According to our tests, the combination of Dual Pixel technology with machine learning works well. The background blur offered by the new Pixel 3 has been satisfactory in most of the scenarios in which we have tried it, but, as you can see in the photograph above these lines, it is not infallible.

HDR+ and high-resolution zoom

The HDR+ technology that we can find in the Pixel 2 is also present in the new Pixel 3. This technique consists of shooting, instead of a single photograph, a burst of slightly underexposed images, and therefore images with a certain lack of light. An algorithm analyzes each of these snapshots to identify the regions of each photograph that contain more information and less noise, with one objective: combining them to recreate a single photograph that brings together the best of all those snapshots and gives us the feeling of having been correctly exposed.
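
The sketch below captures the core of that idea in a few lines of Python: averaging an aligned burst of underexposed frames reduces random noise, after which a digital gain lifts the exposure. Real HDR+ performs tile-based alignment and much more careful per-tile weighting; this is only a hedged approximation that assumes the frames are already aligned.

```python
import numpy as np

def merge_burst(frames, gain=2.0):
    """Toy HDR+-style merge: average an already-aligned burst of slightly
    underexposed frames (averaging N frames cuts random noise by ~sqrt(N)),
    then apply a digital gain so the result looks correctly exposed."""
    merged = np.stack(frames).astype(np.float64).mean(axis=0)
    return np.clip(merged * gain, 0.0, 1.0)

# Toy usage: an underexposed gradient scene plus per-frame sensor noise.
rng = np.random.default_rng(1)
scene = np.tile(np.linspace(0.05, 0.45, 256), (256, 1))
burst = [scene + rng.normal(0.0, 0.02, scene.shape) for _ in range(8)]
print(f"noise in a single frame:   {np.std(burst[0] - scene):.4f}")
print(f"noise after merging eight: {np.std(merge_burst(burst, gain=1.0) - scene):.4f}")
```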

In a way, HDR+ technology tries to overcome the restrictions imposed by the optics and sensor of the smartphone, giving us to a certain extent the feeling that the photograph has been taken with higher-quality optics and a larger sensor. The interesting thing is that Google has taken this idea even further in its Pixel 3. By combining HDR+ technology with super-resolution techniques and optical stabilization, it manages to increase the level of detail of zoomed photographs in a remarkable way.

Super-resolution algorithms are not used only in photography; they are also used in medicine (magnetic resonance imaging, tomography, etc.), microscopy and sonar, among other applications. Their objective is to analyze a set of images with the same resolution, comparing the pixels of each of the regions that make them up, in order to reconstruct a new image with a greater level of detail and spatial resolution. The recovery of this additional information is possible, without going into overly complex details, because the same photodiode of the sensor can capture slightly different information in the different shots of a single burst.

These small variations in the information collected by each photodiode in successive shots are beneficial, as we have just seen. This has led Google’s engineers to take advantage of the combination of optical stabilization and the small vibrations caused by our hands to ensure that each snapshot in the burst collects different information at the pixel level. These data are then processed by the super-resolution algorithm, which, if everything goes well, will recreate a new pixel pattern equivalent to shooting a photograph with more resolution than the photos of the initial burst actually had.
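
A minimal "shift-and-add" sketch in Python illustrates the principle: if we know (or can estimate) the sub-pixel offset of each frame in the burst, placing its samples onto a finer grid recovers detail that no single frame contains. The offsets here are assumed to be known exactly, which is of course the hard part the phone solves using hand shake and OIS data.

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Toy multi-frame super-resolution: drop each low-res frame onto a grid
    `scale` times finer at its (dy, dx) sub-pixel offset and average the
    samples. Offsets are assumed known; estimating them is the hard part."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = (np.arange(h) * scale + int(round(dy * scale))) % (h * scale)
        xs = (np.arange(w) * scale + int(round(dx * scale))) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)      # unobserved grid cells simply stay at 0

# Toy usage: four frames sampled at half-pixel offsets recover the fine grid.
rng = np.random.default_rng(2)
fine = rng.random((8, 8))                                   # "true" detailed scene
offsets = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
lows = [fine[int(dy * 2)::2, int(dx * 2)::2] for dy, dx in offsets]
print(np.allclose(shift_and_add(lows, offsets), fine))      # True
```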

An interesting detail is that Google uses the optical stabilization of the Pixel 3’s main camera to accurately control the relative displacement of the sensor’s photodiodes with respect to the image we are capturing (we must not forget that optical stabilization actually acts on the lens). But this is not all. Another advantage of this strategy is that it makes it possible to dispense with the chromatic interpolation algorithm normally needed to reconstruct the original color of the image from the RGB filters placed over each of the sensor’s photoreceptors.

It is not necessary to dig into the procedure used to recover the color information, but it is worth knowing that the absence of a chromatic interpolation algorithm also contributes, on the one hand, to the increase in resolution and, on the other, to reducing noise, because this interpolation procedure is itself a source of noise. Interestingly, as we have seen, our shaky hands are beneficial in this context.

Of course, in order to use the super-resolution of the Pixel 3 we need to use at least 1.2x zoom. However, as far as possible, we should not go overboard, because we will obtain the highest level of detail with a zoom value as close as possible to 1.2x.

Processing of low-light shots

The photographs we need to take in spaces with very little ambient light usually force us to resort to long exposure times (usually several seconds) to ensure that the sensor is able to capture enough light. However, this strategy has two important problems. The first is that if we are photographing a subject that does not remain still, it may move during the exposure and appear blurred in our photograph.

The second problem is caused by the shake our hands introduce in long-exposure photographs, which can be especially pronounced when we use a smartphone because, as we all know, it is a very light device and we do not usually mount it on a tripod. To solve this shooting scenario in the Pixel 3, Google’s engineers have chosen to use the same burst shooting used by HDR+ technology.

When we enable the night shooting mode, the smartphone captures a burst of up to 15 snapshots with a maximum exposure time of 1/3 of a second each. Once these images have been obtained, Google’s processing algorithm comes into action; it analyzes and aligns all the photographs to generate a single snapshot that, in theory, collects the same amount of light we would have obtained with an exposure of up to 5 seconds.
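
The arithmetic behind those figures is simple, and a couple of lines of Python make the trade-off explicit; this is just the quoted numbers worked through, not Google’s actual implementation.

```python
# The night-mode numbers quoted above, worked through: 15 frames of up to
# 1/3 s each gather roughly as much light as a single 5 s exposure, while
# each individual frame stays short enough to limit blur from motion.
frames = 15
per_frame_s = 1 / 3
print(f"equivalent exposure: {frames * per_frame_s:.1f} s")        # 5.0 s
# Averaging the burst also cuts random sensor noise by roughly sqrt(N).
print(f"approximate noise reduction: {frames ** 0.5:.1f}x")
```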

The most obvious advantage of this strategy is that the probability of blurring caused by the vibration introduced by our hands or by the movement of the subject we are photographing is higher when we take a photograph with an exposure time of 5 seconds than when we shoot 15 snapshots of 1/3 of a second each. Even so, this problem can also occur in the latter scenario, which is why the precision with which the processing algorithm merges the photographs is so important.

This technology, like the previous ones, is not infallible, but, as you can see in the samples that illustrate this section, it usually manages to resolve night photographs while recovering a lot of detail and keeping noise under control. In addition, it almost always manages to adjust the white balance in a convincing way thanks, again, to machine learning, which in this field places the Pixel 3 a step ahead of its predecessor.

Generation of RAW files through computation

The strategy Google uses to generate RAW (DNG) files with the highest possible quality relies, once again, on obtaining the resulting file from a burst of up to 15 shots. Once these snapshots are available, the image analysis and merging algorithm we have discussed in the previous techniques takes control to generate a single RAW file, correcting the blur introduced by moving objects and, when it occurs, also by our hands.

Generating the RAW file from a collection of several images, rather than taking a single snapshot as a reference, has two important advantages: it manages to collect more light and it reduces the noise level of the resulting file. In this way, in theory the RAW files of the Pixel 3 can offer us a quality similar to what we can obtain with a camera with an APS-C sensor, despite the fact that the Google phone’s sensor is noticeably smaller.

The responsibility falls, again, on the algorithm Google uses to process and merge the images of the burst, as well as on its ability to correct the mismatch that occurs between snapshots when any of the objects captured in the image is moving. We can easily intuit that the number of calculations the smartphone’s processor must carry out to complete this procedure successfully is very high, so it is clear that the processing technology Google introduces in its phones is largely possible thanks to the progress that SoCs have made.

A very interesting feature of the RAW files the Pixel 3 generates through this procedure is that the red, green and blue (RGB) channels of the snapshots from which the DNG file is obtained are combined by the algorithm independently, so it is not necessary to use chromatic interpolation techniques. In this way, the noise level is reduced and the effective resolution of the resulting image is increased.
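
The following Python sketch is a toy illustration of why interpolation can become unnecessary: if the burst’s offsets cover the 2x2 period of the Bayer mosaic, every pixel position ends up being measured through red, green and blue filters at some point, so full color can be rebuilt by averaging real samples. The whole-pixel offsets are an assumption made for clarity; the real pipeline works with the arbitrary shifts produced by hand shake and OIS.

```python
import numpy as np

# Bayer pattern written as channel indices: 0 = R, 1 = G, 2 = B.
BAYER = np.array([[0, 1],
                  [1, 2]])

def rgb_without_demosaic(frames, shifts):
    """Toy reconstruction of full colour from Bayer-mosaiced frames whose
    mosaic is offset by whole pixels (dy, dx) between shots. If the offsets
    cover the 2x2 Bayer period, every pixel gets real R, G and B samples,
    so no chromatic interpolation is needed. Offsets are assumed known."""
    h, w = frames[0].shape
    yy, xx = np.mgrid[0:h, 0:w]
    acc = np.zeros((h, w, 3))
    cnt = np.zeros((h, w, 3))
    for frame, (dy, dx) in zip(frames, shifts):
        ch = BAYER[(yy + dy) % 2, (xx + dx) % 2]     # filter colour at each pixel
        np.add.at(acc, (yy, xx, ch), frame)
        np.add.at(cnt, (yy, xx, ch), 1)
    return acc / np.maximum(cnt, 1)

# Toy usage: mosaic a known RGB scene with offsets covering the Bayer period,
# then recover it exactly from the four mosaiced frames.
rng = np.random.default_rng(3)
scene = rng.random((6, 6, 3))
yy, xx = np.mgrid[0:6, 0:6]
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
mosaiced = [scene[yy, xx, BAYER[(yy + dy) % 2, (xx + dx) % 2]] for dy, dx in offsets]
print(np.allclose(rgb_without_demosaic(mosaiced, offsets), scene))   # True
```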

Computational fill flash

One last capability of the processing technology Google has implemented in its Pixel 3 that is worth knowing about is the synthetic flash. It is not a real flash, but a lighting procedure carried out through computation that serves to better illuminate the faces of the people photographed in those shooting scenarios in which the original result is not good enough, for example in backlit shots and scenes with little ambient light.

This time the work falls to the same machine learning algorithm we talked about in the section dedicated to portraits. In this context, it must be able, first, to identify precisely the faces of the people who appear in the photograph and, afterwards, to apply a homogeneous illumination that recovers as much detail as possible without introducing an artificial finish. The effect Google is looking for is the same one professional photographers achieve using reflectors, and, as you can see in the photograph below these lines, the result offered by the Pixel 3 is quite convincing.
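
As a very rough sketch of the idea (not Google’s method, which relies on a learned model), the Python below brightens the area inside a face bounding box with a soft falloff. The bounding box is assumed to come from some face detector; everything about it here is hypothetical and only meant to show the "detect, then relight locally" structure.

```python
import numpy as np

def synthetic_fill_flash(img, face_box, boost=1.6, feather=25):
    """Toy 'fill flash': brighten the region inside a detected face box with a
    soft-edged mask. `face_box` = (top, left, bottom, right) is assumed to come
    from a face detector; the real pipeline relights far more subtly."""
    top, left, bottom, right = face_box
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    # Distance in pixels from each pixel to the box (zero inside the box).
    dy = np.maximum(np.maximum(top - yy, yy - bottom), 0)
    dx = np.maximum(np.maximum(left - xx, xx - right), 0)
    mask = np.clip(1.0 - np.hypot(dy, dx) / feather, 0.0, 1.0)
    gain = 1.0 + (boost - 1.0) * mask
    if img.ndim == 3:                      # broadcast the gain over RGB channels
        gain = gain[..., None]
    return np.clip(img * gain, 0.0, 1.0)

# Toy usage: a dim grayscale "photo" with an assumed face bounding box.
dim = np.full((120, 160), 0.2)
lit = synthetic_fill_flash(dim, face_box=(30, 50, 80, 110))
print(dim[55, 80], lit[55, 80])            # the face region comes out brighter
```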

Kim Hostler

I studied Communication Sciences because as a child I always wanted to be an announcer and draw for advertising campaigns. Life took me down another path, and now I am a communicator who has worked for Nokia and Motorola, where instead of drawing I take pictures, and instead of talking, I write about my passion.