This Is How Google’s Night Sight Works On Your Pixel Phone

Last month, Google rolled out Night Sight for its Pixel phones, including the Pixel 2 and 2016’s Pixel. First announced alongside the Pixel 3 phones at Google’s October hardware event in New York, Night Sight is basically a dedicated night mode that boosts image quality using Google’s computational photography smarts. 

In a nutshell, it’s night photography on steroids. Google says its goal was to improve photos taken in lighting between 3 and 0.3 lux, roughly the difference between a sidewalk lit by street lamps and a room so dark you can’t find your keys on the floor. What’s more, it does this using the Pixels’ single camera and no LED flash.

IT CAPTURES MULTIPLE FRAMES 

To start off, Night Sight uses positive-shutter-lag, or PSL, which waits until after you press the shutter button before it starts capturing images. This is in contrast with the default picture-taking mode on Pixel phones, which uses a zero-shutter-lag (ZSL) protocol and begins capturing frames once you open the camera app. PSL requires that you hold still for a short time after pressing the shutter, but it allows for longer exposures and thus improves the signal-to-noise ratio at much lower brightness levels. 
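
To make the difference concrete, here is a minimal sketch of the two capture strategies in Python. The get_frame() and shutter_pressed() callbacks are hypothetical stand-ins for a camera pipeline, not any real Android API; only the buffering logic is the point.

from collections import deque

def zsl_capture(get_frame, shutter_pressed, buffer_size=9):
    # Zero-shutter-lag: keep a rolling buffer of recent frames and use the
    # frames captured *before* the shutter press.
    ring = deque(maxlen=buffer_size)
    while not shutter_pressed():
        ring.append(get_frame())
    return list(ring)

def psl_capture(get_frame, shutter_pressed, num_frames=6):
    # Positive-shutter-lag: do nothing until the shutter is pressed, then
    # capture a burst, which allows longer per-frame exposures if the user
    # holds still.
    while not shutter_pressed():
        pass
    return [get_frame() for _ in range(num_frames)]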

IT ADJUSTS FOR SHAKY HANDS 

However, longer exposure times can lead to motion blur. Beyond the optical image stabilization on the Pixel 2 and 3, Google uses something called motion metering, which looks at the phone’s movement, the movement of objects in the scene, and the amount of light available to decide on an exposure time that minimizes motion blur. This means that if the phone is stabilized on a tripod, for example, the exposure for each frame can be increased to as much as one second. 
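
Google hasn’t published the exact weighting its motion metering uses, so the sketch below is only a heuristic illustration of the idea: measure how much the phone and the scene are moving, cap the per-frame exposure accordingly, and only stretch toward the one-second limit when the device is effectively still. The lux-based scaling is an assumption for illustration.

def choose_exposure(handshake, scene_motion, scene_lux,
                    max_handheld_s=1/15, max_tripod_s=1.0):
    # Worst-case blur source: either the phone shaking or subjects moving.
    motion = max(handshake, scene_motion)
    # Effectively motionless (e.g. on a tripod): allow up to one second.
    cap = max_tripod_s if motion < 1e-3 else max_handheld_s
    # Darker scenes push toward the cap; brighter scenes need less.
    darkness = min(1.0, 0.3 / max(scene_lux, 0.3))
    return darkness * cap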

Ultimately, the number of frames and their exposure times depend on the Pixel model you have (the first Pixel doesn’t have OIS, so exposure times are shorter), how much your hand is shaking, how much motion there is in the scene, and how bright the scene is. In practice, Night Sight captures anywhere between 15 frames of 1/15 second or less each and six frames of one second each.
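
A rough way to picture that trade-off: given the per-frame exposure chosen by motion metering, the number of frames scales so the whole capture finishes within a few seconds. The six-second budget below is an assumption for illustration; only the 15 × 1/15 s and 6 × 1 s endpoints come from the article.

def plan_burst(per_frame_exposure_s, total_budget_s=6.0,
               min_frames=6, max_frames=15):
    # More frames when exposures are short, fewer when they are long.
    frames = int(total_budget_s / per_frame_exposure_s)
    return max(min_frames, min(max_frames, frames))

plan_burst(1 / 15)  # 15 short frames, handheld
plan_burst(1.0)     # 6 one-second frames, on a tripod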

IT ALIGNS AND MERGES DIFFERENT FRAMES 

Google then aligns and merges the frames it’s captured to further reduce image noise, and all this happens within a few seconds on the phone. The idea of averaging frames to reduce image noise is an old one, and the Pixel 3 leverages a tweaked version of Google’s Super Res Zoom technology to average multiple images together. However, the Pixel and Pixel 2 use a modified version of HDR+’s merging algorithm, which is better able to detect and reject misaligned pieces of frames than the regular HDR+ algorithm. 
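
The real HDR+ and Super Res Zoom pipelines are tile-based and far more sophisticated, but the core averaging-with-rejection idea can be shown in a few lines of NumPy: average the burst, ignoring pixels that disagree too strongly with a reference frame, since those are likely misaligned or moving.

import numpy as np

def merge_burst(frames, reject_threshold=0.1):
    # Stack the burst and compare every frame against the first (reference).
    stack = np.stack([f.astype(np.float32) for f in frames])
    reference = stack[0]
    # Keep only pixels that roughly agree with the reference; larger
    # deviations are treated as misaligned or moving content and rejected.
    weights = (np.abs(stack - reference) < reject_threshold).astype(np.float32)
    weights[0] = 1.0  # always trust the reference itself
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)

Averaging N well-aligned frames cuts random noise by roughly the square root of N, which is where most of the quality gain in low light comes from.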

IT LEARNS HOW TO CORRECTLY COLOR A SCENE

Cameras are able to adjust the colors of images to compensate for the dominant color of illumination, a process known as auto white balancing (AWB). They effectively shift the colors in the image to make it seem as if a scene is lit by neutral white light. However, this process breaks down in very dim lighting, which is why Google developed a learning-based AWB algorithm to cope. 
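
For context, here is the sort of classical AWB that a learned approach improves on: a “gray world” correction that rescales each channel so the average color of the frame comes out neutral. It works when the scene averages to gray, which is exactly the assumption that falls apart under dim, strongly tinted lighting.

import numpy as np

def gray_world_awb(rgb):
    # Assume the scene averages to gray; rescale R, G, B so their means match.
    img = rgb.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0.0, 1.0)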

The algorithm is trained to discriminate between an image with good white balance and a poorly balanced one; given a poorly balanced image, it can then suggest how to shift its colors to make the illumination appear more neutral. Training it required photographing a wide range of scenes with Pixel phones, then hand-correcting the white balance of each photo on a color-calibrated monitor. 
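
Google hasn’t published the model itself, so the sketch below only illustrates the shape of the idea: a scoring function stands in for the trained network, and the camera tries candidate red/blue gains and keeps the rendition the scorer rates as most neutral. The placeholder scorer here simply penalizes a color cast; the real one is learned from the hand-corrected photos described above.

import numpy as np

def score_neutrality(img):
    # Placeholder for a trained model: higher score = less color cast.
    means = img.reshape(-1, 3).mean(axis=0)
    return -np.std(means)

def learned_awb(rgb, gain_grid=np.linspace(0.7, 1.4, 8)):
    img = rgb.astype(np.float32)
    best_score, best_img = -np.inf, img
    for r_gain in gain_grid:          # green gain fixed at 1, a common convention
        for b_gain in gain_grid:
            candidate = np.clip(img * np.array([r_gain, 1.0, b_gain]), 0.0, 1.0)
            score = score_neutrality(candidate)
            if score > best_score:
                best_score, best_img = score, candidate
    return best_img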

IT TONE MAPS SCENES THAT ARE TOO DARK TO SEE

One of the most striking things in early Night Sight reviews has been its ability to illuminate scenes that look nearly pitch black when captured in the normal camera mode. Google does this with tone mapping, which essentially maps one set of colors to another in order to brighten the shadows while preserving the image’s overall brightness. 

A high-end DSLR can make a night scene look like daytime given a long enough exposure, but that’s probably not the effect you want. To avoid this, Night Sight applies an S-curve during tone mapping, which increases contrast while still keeping the scene cloaked in shadow. You can see what’s going on in the scene, but you can still tell that the photo was taken in the dark. 
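
A tone curve with that shape is easy to sketch. The logistic curve and its parameters below are illustrative rather than Google’s actual curve: pivoting the S-curve below mid-gray lifts shadow detail into view while the steep middle adds contrast, so deep shadows stay dark and the result still reads as a night shot.

import numpy as np

def s_curve_tone_map(luminance, pivot=0.25, contrast=6.0):
    # Logistic S-curve pivoted below mid-gray: tones above the pivot get
    # pushed up (shadows become readable) and midtone contrast increases.
    x = np.clip(luminance, 0.0, 1.0)
    curved = 1.0 / (1.0 + np.exp(-contrast * (x - pivot)))
    # Rescale so pure black stays black and pure white stays white.
    low = 1.0 / (1.0 + np.exp(contrast * pivot))
    high = 1.0 / (1.0 + np.exp(-contrast * (1.0 - pivot)))
    return (curved - low) / (high - low)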

PICTURES GOOGLE