For some time now, smartphone manufacturers have been multiplying technological innovations in photography. SuperSpectrum, HDR Plus, Dual Aperture, Dual Pixel: what lies behind these marketing terms, and how are manufacturers getting ever closer to the quality of true cameras? We explain everything in our guide to smartphone photography.
From left to right: Galaxy S10, Pixel 3 and Huawei P30 Pro
While the most powerful chips now equip some mid-range smartphones, the longest battery life is sometimes found on entry-level devices, and design innovations also appear on 500-euro smartphones, photography has become the differentiating factor of the highest-end models.
To achieve this, smartphone manufacturers have pulled out all the stops. Not only have they adopted the traditional digital photography techniques found at Canon, Nikon or Panasonic, they have gone even further. Because of the small size of smartphone photo sensors, digital photography has been enriched with numerous technological innovations to compensate for this defect inherent to smartphones. We detail all of this in our guide.
Digital photography, how does it work?
Digital photography is largely based on the same principles as film photography, at least when it comes to capturing light (and therefore information) and adjusting exposure.
The more light a digital camera records, the brighter the photo. Conversely, the less light it captures, the darker the shot. So far, perfectly logical.
To manage light as well as possible, digital cameras rely on four parameters: sensor size, aperture, shutter speed and ISO sensitivity. Depending on the type of shot, one parameter or another is adjusted to adapt to the ambient light, but also to the action in the frame and the kind of photo being taken (portrait, landscape, macro, etc.).
The size of the sensor
The sensor of a camera is not measured in megapixels, far from it. One of its most important characteristics is actually its physical size: the larger the sensor, the larger the area that can capture light. In practice, the cameras with the largest sensors, known as full frame, are often the most expensive.
The Sony Alpha 7 III is equipped with a 24 × 36 mm "full frame" sensor
The aperture
This is the opening of the diaphragm, the membrane that closes or opens inside the lens to let in light. Aperture is written f/x, where f is the focal length and x reflects the diameter of the pupil, that is, the space left by the diaphragm to let light through. Be careful when reading a spec sheet: the larger the aperture, the smaller the number after the f. Thus, f/16 is a very small aperture, while f/1.4 is a large one.
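As a hedged illustration of the f-number relation (the f-number is the focal length divided by the pupil diameter, so a smaller number means a bigger opening; the 50 mm lens below is just an example value):

```python
def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """Return the entrance-pupil diameter implied by an f-number."""
    return focal_length_mm / f_number

# A hypothetical 50 mm lens: f/1.4 is wide open, f/16 is stopped down
wide_open = aperture_diameter_mm(50, 1.4)    # about 35.7 mm
stopped_down = aperture_diameter_mm(50, 16)  # about 3.1 mm

# Light gathered scales with pupil *area*, i.e. with (1/N) squared:
area_ratio = (16 / 1.4) ** 2
print(f"f/1.4 lets in about {area_ratio:.0f}x more light than f/16")
```

This is why the jump from f/16 to f/1.4 is so dramatic: the difference is squared when converted into light-gathering area.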
The aperture used here, f / 16, is a very small aperture, allowing little light to enter the sensor
The aperture, in conjunction with the size of the sensor, is particularly useful for playing with background blur. The larger the aperture, and therefore the more light recorded, the blurrier the background behind the photographed subject. It is a parameter worth using for portraits, for example, as can be seen in the photos below, taken with a Panasonic Lumix GX7. The first uses an aperture of f/16 and the second an aperture of f/1.7:
Shutter speed
As its name suggests, the shutter speed determines how long the sensor records the image being captured. The faster the speed, the less light the sensor can record, and therefore the darker the picture. Conversely, a slow speed can expose the image for several seconds and capture much more light.
Shutter speed also plays an important role in blur. If you hold your camera by hand and want to capture moving subjects, a fast speed is preferable, otherwise the subject may come out blurred, having moved during capture. On the other hand, if you want to capture headlight streaks from a bridge over a highway in the middle of the night, it is better to put your camera on a tripod and use a slow speed.
ISO sensitivity
Sensitivity is inherited from the film rolls used before digital photography. Roughly speaking, it is an artificial amplification of brightness, allowing your camera to brighten the image without actually capturing more light. It is measured in ISO, with the lowest sensitivities often at ISO 100 and the highest above ISO 100,000.
This is often the last parameter to change when shooting, if the others are not enough. By increasing the sensitivity, you risk recording digital noise: colored artifacts such as blue, green or red dots in a sky that is supposed to be black.
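Tying the four parameters together, the standard exposure-value formula shows how aperture, shutter speed and ISO trade off against one another. A minimal sketch (the settings below are hypothetical):

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: float = 100) -> float:
    """Standard EV: log2(N^2 / t), offset by the ISO gain relative to ISO 100."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# Two equivalent exposures: halving the shutter time (less light captured)
# is compensated by... capturing at twice the duration and half the ISO.
ev_a = exposure_value(f_number=2.0, shutter_s=1 / 100, iso=100)
ev_b = exposure_value(f_number=2.0, shutter_s=1 / 50, iso=50)
print(ev_a, ev_b)  # identical EV: the two settings expose the photo the same
```

The second setting captures twice as much real light (longer shutter) and amplifies it half as much (lower ISO), which is why it produces the same brightness with less noise.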
Smartphones, more limited by their small sensor
Now that you have understood the principles of exposure on cameras, you can partly set them aside to move on to smartphone photography.
Indeed, while cameras offer sensors of 13.2 × 8.8 mm (1 inch), 17.3 × 13 mm (Micro Four Thirds), 23.6 × 15.8 mm (APS-C) or 36 × 24 mm (full frame), this is far from the case on smartphones. For example, the Honor View 20 incorporates only a 6.4 × 4.8 mm main sensor. And this is one of the largest camera sensors on the market, also used on the Xiaomi Mi 9.
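To put these figures in perspective, here is a quick, illustrative comparison of the light-gathering areas of the sensor sizes quoted above:

```python
# Sensor dimensions (width, height) in mm, as quoted in the text
sensors_mm = {
    "full frame": (36.0, 24.0),
    "APS-C": (23.6, 15.8),
    "micro 4/3": (17.3, 13.0),
    "1 inch": (13.2, 8.8),
    "Honor View 20": (6.4, 4.8),
}

# Area is what matters for light gathering
areas = {name: w * h for name, (w, h) in sensors_mm.items()}
ratio = areas["full frame"] / areas["Honor View 20"]
print(f"A full-frame sensor has about {ratio:.0f}x the area of the View 20's sensor")
```

Even against one of the largest smartphone sensors, a full-frame camera has roughly 28 times more surface to collect light, which is the whole problem manufacturers are working around.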
The Xiaomi Mi 9
With sensors this small, it is much harder to record as much light as a mirrorless camera, an SLR or an expert compact. While sensitivity and shutter speed remain meaningful criteria on a smartphone, and can often be adjusted in the pro photo mode, this is much less true of the aperture.
Quoting apertures without specifying the size of the photo sensor is of very little interest
As we have seen, the effect of the aperture depends largely on the size of the sensor: the larger the sensor, the more impact a change of aperture has. Quoting apertures of f/2.2, f/2.0, f/1.8 or even f/1.5, often without even specifying the sensor size, is therefore of very little interest. We saw it in our test of the Galaxy S9 last year, one of the first smartphones to offer a variable aperture: switching from f/2.4 to f/1.5 ultimately has a tiny impact on the final exposure of the shot.
All this without even mentioning that the vast majority of current smartphones (with the notable exception of Samsung's Galaxy S and Note lines since the S9) offer only a fixed aperture, since the diaphragm is fixed inside the camera module.
How manufacturers overcome these flaws
Smartphone manufacturers have had to live with this constraint of sensors smaller than those of digital cameras. It would be technically possible to fit larger sensors in a smartphone (Panasonic did it with its Lumix DMC-CM1 and its one-inch sensor), but the extra bulk is clearly felt.
The Panasonic Lumix DMC-CM1
Back in the day, HTC was one of the first manufacturers to communicate not on the number of megapixels of its sensors, but on the size of its photosites, that is, the individual cells each responsible for capturing one pixel. With the HTC One in 2013, the Taiwanese manufacturer highlighted the 2-micron size of each photosite, larger and therefore able to record more light.
Smartphone photography today involves far more technology than conventional digital photography
A trend that did not last: the Galaxy S10, for example, offers 1.4-micron photosites, while the Huawei P30 Pro makes do with 1-micron photosites.
Nevertheless, because smartphone manufacturers are limited by sensor size constraints, they are forced to be more innovative. In many ways, smartphone photography today involves far more technology than conventional digital photography.
Sensor modifications to record more light
To allow sensors of the same size to capture more light, manufacturers have added features, or tricks, to their camera sensors.
The first of these is the use of a monochrome grayscale sensor alongside the RGB (red/green/blue) color sensors. This trick has been used by Huawei on the P9, Mate 9, P10 and Mate 10 Pro, but also by Essential on the PH-1 and by HMD on the recent Nokia 9 PureView.
Three of the five Nokia 9 PureView sensors are monochrome
The advantage of these monochrome sensors is that they do not have to worry about recording colors, and can therefore focus solely on light and detail. At Huawei, the monochrome sensors had a higher definition than the color sensors, which made it possible, for example, to offer a hybrid zoom while maintaining an excellent level of detail. For its part, HMD Global explains that a monochrome sensor can record three times more light than an RGB sensor, and that the combination of three monochrome and two color sensors on the Nokia 9 PureView can capture ten times more light than a single RGB sensor.
The RYB color sensor
But monochrome sensors are not the only trick used by manufacturers. For its new P30 and P30 Pro, Huawei decided to use a new type of photo sensor, called SuperSpectrum, which records not red, green and blue light, but red, yellow and blue. We go from an RGB sensor to an RYB sensor.
According to Huawei, this sensor can record up to 40% more light than the P20 Pro's. The yellow photosites are able to record both red light and green light. To distinguish yellow light from green light, Huawei relies on artificial intelligence; we will come back to this.
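To give an intuition of the RYB idea, here is a deliberately naive sketch: if a yellow photosite responds to both red and green light, the green component can in principle be recovered by subtracting a red reading. Huawei's real pipeline is far more involved (and relies on AI, as noted above); the values below are made up:

```python
def recover_green(yellow_reading: float, red_reading: float) -> float:
    """Naive green recovery: a yellow photosite sees red + green,
    so subtracting a red reading leaves an estimate of green."""
    return yellow_reading - red_reading

# Hypothetical normalized readings from neighboring photosites
green_estimate = recover_green(yellow_reading=0.9, red_reading=0.4)
print(green_estimate)
```

The benefit is that the yellow photosite has collected the light of two color bands at once, which is where the extra sensitivity comes from; the cost is that color separation becomes a reconstruction problem.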
Pixel binning
Finally, the other technology used to improve sensor brightness is pixel binning. The idea is to offer sensors with a very large number of megapixels, such as Sony's IMX586 found on the Honor View 20 or the Xiaomi Mi 9, with a total of 48 megapixels. In default mode, however, the snapshots produced by the sensor are four times smaller: just 12 megapixels.
Sony's IMX586 sensor combines neighboring pixels to make them brighter
What happened in between? It is very simple: the sensor's photosites, each theoretically responsible for recording one pixel, are grouped in fours. By merging the four recorded pixels into one, the smartphone computes the average recorded color and erases each pixel's aberrations. A good way not only to record four times more light for the same output pixel, but also to cancel the digital noise that can result from a high ISO sensitivity.
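The 2×2 grouping described above can be sketched in a few lines. This is a toy model of the averaging step, not Sony's actual readout:

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Average each non-overlapping 2x2 block of photosites into one pixel."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Toy 4x4 "sensor" readout (16 photosites -> 4 output pixels)
raw = np.array([
    [10, 12, 20, 22],
    [14, 16, 24, 26],
    [30, 32, 40, 42],
    [34, 36, 44, 46],
], dtype=float)

binned = bin_2x2(raw)
print(binned)  # 2x2 output: [[13., 23.], [33., 43.]]
```

Averaging four readings keeps the signal while random noise partially cancels out, which is exactly the trade the 48-to-12-megapixel mode is making.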
Portrait mode and depth of field management
Because of the small size of smartphone photo sensors, and the negligible variations in aperture, depth of field is natively very deep on a smartphone. Concretely, this means that without processing it would be physically impossible to produce an attractive background blur in portrait-mode photos and make the photographed person stand out, as an SLR can.
Fortunately, such processing exists, through three approaches.
Portrait mode with two sensors
To know which subject to keep sharp and which area to blur, you first need to know the depth of the scene. The smartphone can calculate it using a second sensor: by analyzing two shots taken from two different positions on the back, it can measure the differences between the two images and thus partially reconstruct the scene in 3D. The camera can then work out which subject is sharp, the one in focus, and which areas should be blurred. This is, for example, what the second sensor on the back of the OnePlus 6T is used for.
The Google Pixel 3 and Pixel 3 XL, which use only one sensor, achieve the same result. To do so, the firm uses a technology it calls "dual pixel": put simply, each photosite of the Pixel 3 is actually split into two photosites, one on the left and one on the right. This lets the smartphone record two slightly offset images, analyze the depth of the scene through parallax, and make the subject stand out.
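Both approaches, a second camera or dual-pixel photosites, come down to triangulation: a nearby subject shifts more between the two viewpoints than the distant background. A minimal sketch with made-up numbers:

```python
def depth_from_disparity(focal_px: float, baseline_mm: float,
                         disparity_px: float) -> float:
    """Depth (mm) of a point from its disparity between two views:
    depth = focal_length * baseline / disparity."""
    return focal_px * baseline_mm / disparity_px

# Hypothetical camera: 1000 px focal length, 10 mm between the two viewpoints.
# The subject shifts by 20 px between views, the background by only 2 px.
subject_depth = depth_from_disparity(focal_px=1000, baseline_mm=10, disparity_px=20)
background_depth = depth_from_disparity(focal_px=1000, baseline_mm=10, disparity_px=2)
print(subject_depth, background_depth)  # 500.0 mm vs 5000.0 mm
```

With a per-pixel depth estimate like this, the software knows which pixels belong to the subject and which belong to the background to be blurred.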
The portrait mode of the Google Pixel 3 XL
Portrait mode with a single sensor
A slightly less reliable way of offering a portrait mode with a single sensor is to rely on software algorithms alone. This is often the case on front cameras, for selfies.
Manufacturers offering this feature work from the data of a single sensor. It is then up to the processor and the image pipeline to analyze the captured photo, isolate the subject and add blur all around it. Of course, with less information, especially about depth, the result is often less accurate, and the clipping is frequently hit-or-miss, especially around hair or glasses. Moreover, when you want to use portrait (or aperture) mode on objects, some smartphones such as the iPhone XR prevent it, since this mode only works on faces.
Portrait mode with a ToF sensor
One of the latest innovations in smartphone photography is the ToF ("time of flight") sensor. Specifically, it is a sensor that measures how long an emitted light signal takes to bounce off the scene and return to the camera. By repeating this measurement for many points, the sensor can build a 3D map of the scene and thus analyze depth precisely.
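The underlying arithmetic is simple: the light covers the round trip at speed c, so the distance is half of c times the measured time. A toy sketch:

```python
# Speed of light converted to millimetres per nanosecond (~299.79 mm/ns)
C_MM_PER_NS = 299_792_458 * 1e3 / 1e9

def tof_distance_mm(round_trip_ns: float) -> float:
    """Distance to a point from the round-trip time of a light pulse:
    the pulse travels there and back, so divide by two."""
    return C_MM_PER_NS * round_trip_ns / 2

# A pulse returning after ~6.67 ns corresponds to a subject about 1 m away
print(tof_distance_mm(6.671))  # close to 1000 mm
```

The challenge in practice is measuring such tiny time intervals per pixel, which is what dedicated ToF hardware is for.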
The portrait mode of Huawei P30 Pro
As a result, a camera equipped with an additional ToF sensor can offer a particularly accurate portrait mode, including in the most complex areas, as is the case on the Huawei P30 Pro.
Image processing and machine learning
We have touched on it already, but in recent years many smartphone manufacturers have had only one phrase on their lips when talking about photography: artificial intelligence. Honor, for example, goes so far as to print the words "AI Camera" on the back of some smartphones such as the Honor 8X.
The Honor 8X
Concretely, AI takes several forms. It can be simple scene recognition, which makes the automatic mode even more automatic by adjusting the settings depending on whether you are shooting a portrait, a plate of food or a landscape. At Huawei and Honor, the AI mode will, for example, boost the blue of the sky in a landscape shot to make it stand out more, or accentuate the green of vegetation. At Samsung, it is scene recognition, and thus the AI mode, that triggers the night mode on the Galaxy S10.
Sometimes the artificial intelligence goes further still, with machine learning. This is the case on the Google Pixel 3. Google's servers host billions of images, and the Pixel 3 can draw on them to partially reconstruct what a photo is expected to look like. The Super Res Zoom of the Pixel 3 can thus artificially increase the level of detail of a shot while keeping a natural look, since it is based on millions of images of the same type.
Several shots for better dynamic range
In photography, dynamic range is the difference between the brightest and the darkest areas of an image. On a device with a low dynamic range, it is impossible to have both a blue sky and properly lit buildings at the same time: either the sky will be all white, as if burned by the light, or the buildings will be black, drowned in shadow.
Photo in HDR mode taken with the OnePlus 6T
Fortunately, to increase this dynamic range, manufacturers have integrated HDR (high dynamic range) modes into their cameras. This technique comes straight from digital photography: take several shots at different exposures, then merge them in image-editing software so that every area of the photo ends up properly exposed. In our landscape example, the sky will be blue in the least exposed photo and the building well lit in the most exposed one. By combining the two, we get a well-balanced photo.
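The merging step can be sketched as a toy exposure fusion, weighting well-exposed (mid-grey) pixels highest so that the sky comes from the dark frame and the building from the bright one. This is an illustrative simplification, not any manufacturer's actual algorithm:

```python
import numpy as np

def fuse_exposures(frames: list) -> np.ndarray:
    """Blend aligned same-scene frames (values in 0..1),
    weighting mid-tone pixels highest and clipped pixels lowest."""
    stack = np.stack(frames)
    weights = 1.0 - np.abs(stack - 0.5) * 2   # 1 at mid-grey, 0 at pure black/white
    weights = np.clip(weights, 1e-6, None)    # avoid division by zero
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)

# Two pixels: [sky, building]. The underexposed frame keeps the sky,
# the overexposed frame keeps the building.
under = np.array([[0.45, 0.05]])  # sky well exposed, building crushed
over = np.array([[1.00, 0.55]])   # sky blown out, building well exposed
fused = fuse_exposures([under, over])
print(fused)  # sky taken mostly from 'under', building mostly from 'over'
```

The blown-out sky pixel (1.0) gets almost zero weight, so the fused sky stays close to the well-exposed 0.45 reading, while the building value lands near the bright frame's 0.55.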
On a smartphone, HDR is all the more interesting because it is built in. There is no need for photo-editing software: the rendering is computed natively by the smartphone's processor.
Generally, smartphones take several shots in a row, each with a different exposure. The problem is that the result can sometimes be blurry, as the device may move between shots, and so may the subjects. To remedy this, HMD Global integrated five cameras on its Nokia 9 PureView that all take the same picture at the same time.
Night mode, an HDR +++++++++ mode
In recent years, manufacturers have begun to integrate a night mode into their smartphones' photo application. It can be found, for example, on the Huawei Mate 20, P30 and P30 Pro, but also at Xiaomi, OnePlus and especially Google with its Pixels.
A photo of the Galaxy S9 in low light
In concrete terms, several technologies can power this night mode. At Samsung, for example, the Galaxy S9's "Super Low Light" mode simply raises the ISO sensitivity and captures 12 different images. The smartphone then compares them to identify where digital noise may appear. It removes the artifacts that appear in only one of the shots and not in the other 11, then combines everything to produce a low-light photo.
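The "keep what appears in every frame, drop what appears in only one" logic can be approximated by a per-pixel median across the burst. A toy sketch, not Samsung's actual pipeline:

```python
import numpy as np

def merge_burst(frames: list) -> np.ndarray:
    """Per-pixel median of a burst of aligned frames: a noise spike
    present in only one frame is ignored by the median."""
    return np.median(np.stack(frames), axis=0)

# Three frames of the same 1x3 scene; the second frame has a hot pixel.
burst = [
    np.array([[10.0, 20.0, 30.0]]),
    np.array([[10.0, 255.0, 30.0]]),  # noise spike in the middle pixel
    np.array([[10.0, 20.0, 30.0]]),
]
clean = merge_burst(burst)
print(clean)  # the 255 spike is rejected: [[10. 20. 30.]]
```

Because high-ISO noise is random, it rarely lands on the same pixel in every frame, so a robust statistic like the median (or an outlier-rejecting average) removes it while keeping the real scene.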
At Huawei, night mode is more like a boosted HDR mode. The smartphone opens the shutter for several seconds to capture as much light, and therefore information, as possible. Thanks to AIS software stabilization, it can then analyze the movement of your hand and cancel out the blur during image processing. There is no need to use a tripod or set the smartphone down, as is the case at OnePlus for example. The same goes for moving subjects: the smartphone manages to keep only the first still image of people.
First photo: what I see. Second photo: what the #Pixel3 sees in night mode pic.twitter.com/G2nyDmPGBa
– Ulrich Rozier 🤭 (@UlrichRozier) March 13, 2019
Finally, on Google's Pixel smartphones, the Night Sight mode is once again similar to an HDR mode. It captures multiple images at different exposures depending on the motion and brightness of the scene, then combines them not only to cancel digital noise but also to remove blur from the final shot. Lastly, using machine learning, the smartphone can set the white balance automatically by referring to images of the same type.
Zoom with fixed focus lenses
For the past two years, manufacturers have been multiplying not only the sensors on the back of smartphones, but also the lenses. While we have seen that this multiplication of cameras can be used to capture more light thanks to monochrome sensors, or to offer a portrait mode, one of the most recent uses is the integration of optical zooms.
Manufacturers can indeed advertise 2x, 3x, 5x or even 10x zooms, as on the Huawei P30 Pro. Be careful though: on its own, this figure does not mean much.
A zoom is defined by the ratio between the shortest and the longest focal length. In concrete terms, the shorter the focal length, the wider the angle of view. Conversely, a long focal length gives a narrow field of view and thus lets you see further away.
The vast majority of photos taken at the smartphone are in wide-angle
Today, the vast majority, if not all, of smartphones on the market have a main camera with a wide-angle lens, equivalent to 24 to 28 mm. It is called wide-angle because its focal length is shorter than 50 mm, the focal length defined as standard. Note that the wide-angle should not be confused with the ultra-wide angle discussed a little further down.
On top of this wide-angle lens come the lenses marketed as "2x", "3x" or "5x" zooms. This figure usually expresses the zoom relative to the focal length of the main lens. Thus, on the Samsung Galaxy S10, the 2x optical zoom is in fact a lens with an equivalent focal length of 52 mm.
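As a rough sketch, the advertised factor is just a ratio of equivalent focal lengths. The 26 mm main focal length below is an assumed round value within the 24 to 28 mm range mentioned above:

```python
def zoom_factor(tele_mm: float, main_mm: float) -> float:
    """Advertised zoom factor: telephoto focal length over main focal length
    (both in 35 mm equivalents)."""
    return tele_mm / main_mm

# Galaxy S10 example from the text: a 52 mm telephoto over an assumed 26 mm main lens
print(zoom_factor(52, 26))  # 2.0
```

This also explains why the figure "does not mean much" on its own: a "5x" zoom starting from a 16 mm ultra-wide reaches a much shorter focal length than a "5x" zoom starting from a 27 mm main lens.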
The same goes for the ultra-wide-angle lenses found on the Xiaomi Mi 9, the Huawei P30 Pro (photos above) or the Galaxy S10. As their name suggests, these lenses offer an even wider angle, and therefore a shorter focal length, than the main cameras. The Galaxy S10's ultra-wide angle goes down to a focal length equivalent to 13 mm, while that of the P30 Pro is limited to 16 mm. The first offers a 120° field of view, the second 107°.
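For intuition, the diagonal field of view can be derived from a 35 mm equivalent focal length. This is only a sketch: quoted figures, like the 120° above, may be measured slightly differently by manufacturers:

```python
import math

# Diagonal of a full-frame (36 x 24 mm) sensor, about 43.27 mm
FF_DIAGONAL_MM = math.hypot(36, 24)

def diagonal_fov_deg(equiv_focal_mm: float) -> float:
    """Diagonal field of view for a 35 mm equivalent focal length:
    fov = 2 * atan(diagonal / (2 * focal))."""
    return math.degrees(2 * math.atan(FF_DIAGONAL_MM / (2 * equiv_focal_mm)))

# The two ultra-wide lenses discussed above
print(round(diagonal_fov_deg(13)))  # about 118 degrees (13 mm equivalent)
print(round(diagonal_fov_deg(16)))  # about 107 degrees (16 mm equivalent)
```

The computed 107° for 16 mm matches the P30 Pro figure quoted above, and 13 mm lands close to the 120° quoted for the S10, confirming the shorter-focal-equals-wider-angle rule.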
Overall, with very few exceptions, such as the Samsung Galaxy S4 Zoom, there is no true continuous optical zoom on smartphones. The optical zoom level changes in steps, when switching from one lens to another. Everything in between is hybrid zoom: between the wide-angle and the telephoto lens, the images are computed by the smartphone, which reconstructs sharpness using the data from both sensors.
Read on FrAndroid: What are the best smartphones to take pictures in 2019?