HOW COMPUTATIONAL PHOTOGRAPHY IS LETTING MOBILE PHONES PERFORM LIKE ILCS

We take a look at three photo features that are turning up in today’s smartphones and the technology behind them to see how they’re producing ILC-like images.

Computational photography is the capture and processing of images using digital computation on top of the regular optical processes. Simply put, instead of just recording a scene exactly as the sensor captures it, computational photography also uses the information gathered to fill in details that would otherwise be missed. Here are three applications of this technique that you’ll see in the latest smartphones today.

Night Sight with tripod

Handheld Night Sight

1. SEEING IN THE DARK WITH GOOGLE 

The problem with trying to capture images in low light with digital sensors is that you get image noise, which shows up as artefacts and random spots of color in the image. Every camera suffers from this, because in low light the number of photons entering the lens varies greatly.
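
As a rough back-of-envelope (this isn’t from Google’s write-up): photon arrivals follow a Poisson distribution, so a pixel that expects to collect S photons sees noise of about √S, and the signal-to-noise ratio only grows with the square root of the light collected:

```latex
\mathrm{SNR} \approx \frac{S}{\sqrt{S}} = \sqrt{S}
\qquad\Longrightarrow\qquad
\mathrm{SNR}_{N\ \text{frames}} \approx \frac{NS}{\sqrt{NS}} = \sqrt{N}\cdot\sqrt{S}
```

The second expression is why capturing and merging N frames, as described next, buys roughly a √N improvement in noise.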

Traditionally, we counter this by letting more light into the sensor for each exposure. Placing the camera on a tripod works, but you’ll then need your subject to hold still for the whole capture. So, Google instead takes multiple short exposures, using the optical flow principle to measure motion in the scene and calculate the optimal time for each exposure, then aligns and merges the frames so the random noise averages out.
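
A minimal sketch of that align-and-merge idea (not Google’s actual pipeline) might look like the following, assuming same-sized grayscale frames and only a global shift between them; the hypothetical merge_burst helper aligns each frame to the first with phase correlation, then averages them so random noise cancels out:

```python
import cv2
import numpy as np

def merge_burst(frames):
    """frames: list of same-sized grayscale images; returns the merged frame."""
    ref = frames[0].astype(np.float32)
    acc = ref.copy()
    h, w = ref.shape
    for frame in frames[1:]:
        f = frame.astype(np.float32)
        # Estimate the global (dx, dy) shift of this frame relative to the reference.
        (dx, dy), _ = cv2.phaseCorrelate(ref, f)
        # Warp the frame back onto the reference grid before accumulating.
        m = np.float32([[1, 0, -dx], [0, 1, -dy]])
        acc += cv2.warpAffine(f, m, (w, h))
    # Averaging N aligned frames reduces random noise by roughly sqrt(N).
    return acc / len(frames)
```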

Besides dealing with noise, Night Sight also uses machine learning to let the camera learn how to shift the color balance for troublesome lighting. A wide variety of scenes were captured with Pixel phones, then hand-corrected for proper white balance on a color-calibrated monitor to serve as a reference dataset for the algorithm to draw from.
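
To make that concrete, here’s a toy illustration (not Google’s model, which is far more sophisticated) of learning white balance from hand-corrected examples: fit a simple linear map from per-image color statistics to the red and blue gains a human picked on the calibrated monitor, then apply it to new shots.

```python
import numpy as np

def fit_wb_model(features, gains):
    """features: (n, k) array of per-image color statistics (e.g. mean R, G, B).
    gains: (n, 2) array of hand-chosen red and blue gains (green fixed at 1.0)."""
    X = np.hstack([features, np.ones((len(features), 1))])  # add a bias column
    coeffs, *_ = np.linalg.lstsq(X, gains, rcond=None)      # least-squares fit
    return coeffs

def predict_gains(coeffs, feature_row):
    """Predict [red_gain, blue_gain] for a new image's color statistics."""
    return np.append(feature_row, 1.0) @ coeffs
```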

To prevent the camera from doing too good a job – making a night scene look like it was shot in the day for example – Google’s engineers introduced multiple S-curves for adjusting light levels over various regions of the image instead of applying a global adjustment, thus keeping the tone mapping accurate.
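
A very rough sketch of that idea (illustrative only, with arbitrary tile size and curve strength): apply a gentle S-curve per tile rather than one global curve, so shadows are lifted locally without flattening the whole scene into daylight. A real pipeline would also blend the curves across tile borders to avoid visible seams.

```python
import numpy as np

def s_curve(x, strength):
    """Blend between identity and a smoothstep curve; values assumed in [0, 1]."""
    smooth = x * x * (3.0 - 2.0 * x)
    return np.clip((1.0 - strength) * x + strength * smooth, 0.0, 1.0)

def local_tone_map(img, tiles=8, strength=0.5):
    """img: 2-D float array in [0, 1]; applies an S-curve independently per tile."""
    out = img.copy()
    h, w = img.shape
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            ys, xs = slice(i * th, (i + 1) * th), slice(j * tw, (j + 1) * tw)
            out[ys, xs] = s_curve(img[ys, xs], strength)
    return out
```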


2. GOING BEYOND OPTICAL ZOOM WITH HUAWEI 

Like every other smartphone on the market, Huawei’s latest Mate 20 Pro doesn’t come with a zoom lens. However, using what Huawei calls Hybrid Zoom technology, the Mate 20 Pro is able to produce images with greater detail and fewer artefacts than you’d get from simply cropping the original picture.

You see, Hybrid Zoom intelligently combines data from the phone’s multiple cameras to do what’s known as super resolution. This works similarly to Canon’s Dual Pixel AF, where sets of information from images with slightly different points of view are combined for a better result.

By using multiple lower-resolution samples, the camera is able to create a higher-resolution image. The Mate 20 Pro has the advantage of having multiple lenses at different focal lengths, so each one can help fill in the image information needed at the various “zoom” levels. More information of course leads to a better picture with more detail, and Super Resolution processing is applied to enlarge each side of the image by up to three times, thus improving the resolution. In addition, compression artefacts are identified and suppressed, allowing the camera to obtain clearer details and textures than with conventional digital zoom.
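
The following is a simplified “shift-and-add” super-resolution sketch, not Huawei’s implementation: several low-resolution frames with known sub-pixel offsets are upscaled onto a grid three times finer, shifted back into alignment and averaged, so detail that fell between pixels in any single frame is recovered.

```python
import cv2
import numpy as np

def shift_and_add(frames, offsets, scale=3):
    """frames: list of (h, w) float32 images; offsets: per-frame (dx, dy) sub-pixel
    shifts in low-res pixels; scale: upscaling factor for the output grid."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale), np.float32)
    for frame, (dx, dy) in zip(frames, offsets):
        # Put the frame onto the finer output grid...
        up = cv2.resize(frame, (w * scale, h * scale), interpolation=cv2.INTER_LINEAR)
        # ...then shift it back by its (scaled) offset so all frames line up.
        m = np.float32([[1, 0, -dx * scale], [0, 1, -dy * scale]])
        acc += cv2.warpAffine(up, m, (w * scale, h * scale))
    return acc / len(frames)
```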


3. TRUE DEPTH WITH APPLE’S PORTRAIT MODE 

Shallow depth of field, or the bokeh effect as photographers call it, has always been a bit of a holy grail because it’s generally more pronounced when you use lenses with wider apertures. As you can imagine, those lenses also tend to be larger and more expensive, so bokeh has never been something you’d associate with mobile phone cameras, until now.

Apple’s iPhone 7 Plus introduced a dual-camera system that leverages two sets of images of the same subject, captured from slightly different angles, to create a disparity map, giving the camera depth information for everything in the frame. With the iPhone X, Apple added the TrueDepth camera, which uses infrared light to calculate depth.
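
Here’s a bare-bones version of that idea using OpenCV’s stereo block matcher (the file names and parameters are just placeholders): each pixel’s disparity, i.e. how far it shifts between the two views, is larger for nearer objects, which is exactly the depth cue Portrait Mode needs.

```python
import cv2

# Two views of the same scene from slightly different positions (placeholder files).
left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize is the matching window size.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # larger disparity = closer to the camera
```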

The iPhone XR does depth capture with just a single camera by leveraging the Dual Pixel technique described earlier and tapping into neural net software to create a highly detailed mask around the subject. The software analyzes what part of the picture is a person and what isn’t, preserving individual hairs and eyeglasses, so when the blurring effect is applied it doesn’t affect your subject, making for a nice approximation of the bokeh effect.
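
A minimal sketch of that final compositing step (the mask itself would come from the neural network described above; here it is simply an input): blur the whole frame, then use the mask to keep the subject’s pixels sharp.

```python
import cv2
import numpy as np

def portrait_blur(img, mask, blur_ksize=31):
    """img: (h, w, 3) uint8 image; mask: (h, w) float array, 1.0 on the subject.
    blur_ksize must be odd; a larger kernel means a stronger background blur."""
    blurred = cv2.GaussianBlur(img, (blur_ksize, blur_ksize), 0)
    m = mask[..., None]  # broadcast the mask over the three color channels
    out = img.astype(np.float32) * m + blurred.astype(np.float32) * (1.0 - m)
    return out.astype(np.uint8)
```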

Apple has also improved Portrait Mode so you can “change” the aperture used, with computational photography adjusting the amount of bokeh to match what you would get if you varied the aperture setting on a physical lens.
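
One hypothetical way to expose such an adjustable aperture, building on the portrait_blur sketch above: scale the blur kernel with the simulated f-number, since a real lens’s blur circle grows roughly in proportion to the aperture diameter (i.e. inversely with the f-number).

```python
def ksize_for_f_number(f_number, max_ksize=51):
    """Map a simulated f-number to an odd Gaussian kernel size (illustrative only)."""
    k = max(int(max_ksize / f_number), 1)  # f/1.4 -> big kernel, f/16 -> small one
    return k if k % 2 == 1 else k + 1      # Gaussian kernels must be odd-sized

# With img and mask as in the portrait_blur sketch above:
# shallow = portrait_blur(img, mask, blur_ksize=ksize_for_f_number(1.4))   # strong blur
# deep    = portrait_blur(img, mask, blur_ksize=ksize_for_f_number(16.0))  # nearly sharp
```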

PICTURES 123RF, GOOGLE, HUAWEI, APPLE