OF MATH AND PHOTOGRAPHY

Marc Levoy, Distinguished Engineer, Google.


Seems like many companies are using multiple cameras to get better results. Is that the way to go?

We’re still experimenting in that area. A telephoto lens isn’t the greatest solution because you get a very limited field of view. A monochrome camera might help, but then you have to align the images and do it robustly, and that’s hard. I think avoiding ghosts and artefacts is really hard when you have another camera in another place and you have to merge those images in. You’re also not getting the same color resolution with a monochrome camera, so the color might be blurry. Consistently getting everything right is hard.

Can you improve the quality of the image captured by a lens mathematically?

Well, there’s this thing called a “diffraction spot size” on any lens, and the way to think about that is – if you were to take a picture of a tiny point, how big of a blob would that make on a sensor? The width of that is called the diffraction spot size. If that’s big, then there’s nothing you can do computationally. If it’s small relative to the size of the pixels, then you actually get moiré patterns, but then there’s at least a chance that if you take multiple pictures that are slightly offset, you can computationally restore some of the resolution.
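As a rough illustration of that trade-off, here is a back-of-the-envelope sketch in Python. The Airy-disk diameter (2.44 × wavelength × f-number) is the usual measure of the diffraction spot size; the wavelength, f-number, and pixel pitch below are assumed typical phone-camera values, not figures from the interview.

```python
# Back-of-the-envelope check of the "diffraction spot size" idea.
# Assumed values (not from the interview): green light at 550 nm,
# an f/1.8 lens, and a 1.2 micron pixel pitch -- typical for a phone camera.

wavelength_um = 0.55    # green light, in micrometres
f_number = 1.8          # lens aperture (assumed)
pixel_pitch_um = 1.2    # sensor pixel size (assumed)

# Airy-disk diameter (out to the first dark ring), the usual measure of the
# diffraction spot size: d = 2.44 * wavelength * f-number
spot_um = 2.44 * wavelength_um * f_number

print(f"diffraction spot ~ {spot_um:.2f} um, "
      f"or about {spot_um / pixel_pitch_um:.1f} pixels across")

# If the spot is much larger than a pixel, the optics limit resolution and
# computation can't recover the lost detail; if it is comparable to or
# smaller than a pixel, the sensor under-samples (hence moire), and merging
# slightly offset burst frames can recover some of that resolution.
```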

Would it be possible to take information from a scene captured in good light and apply that information to a photo of the same scene taken in the dark?

We’ve thought about that, but it’s too easy to hallucinate a situation where something goes slightly wrong and looks monstrous. There are academic papers that talk about doing that – for example, restoring a mis-focused face using the many properly focused images of the same face in the album – but the pose might be different, the shadow across the face might be different… It’s a hard problem that’s still being looked at in the academic literature.

What else could artificial intelligence help with in photography?

We use face detection to set the exposure so the person is properly exposed, but we need to be careful not to have false positives. White balance is a perfect example of what mathematicians call an “ill-posed problem”. If I just showed you a picture from Iceland, you wouldn’t know if the snow was actually blue, or if it was white and reflecting the color of a northern blue sky. So we need to shift the image so it looks more natural to you. That’s hard, but maybe we can use AI for this, because snow is naturally not blue.
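To make the ambiguity concrete, here is a minimal sketch of the classic gray-world heuristic – one textbook way of guessing the illuminant, not Google’s actual white-balance algorithm. The toy “bluish snow” image and its values are assumptions for illustration; the point is that the heuristic removes the blue cast whether the snow was genuinely blue or merely lit by a blue sky, which is exactly what makes the problem ill-posed.

```python
import numpy as np

def gray_world_awb(rgb):
    """Classic gray-world white balance: assume the scene's average colour is
    neutral grey and scale each channel so that it is.

    rgb -- HxWx3 float array with linear values in [0, 1].
    """
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means     # per-channel gains
    return np.clip(rgb * gains, 0.0, 1.0)

# Toy "snow under a blue sky": pixels that are really white but lit by a
# bluish illuminant. Gray-world happily removes the blue cast -- but it would
# do exactly the same to snow that genuinely was blue, which is the ambiguity
# that makes the problem ill-posed.
bluish_snow = np.ones((4, 4, 3)) * np.array([0.7, 0.8, 1.0])
print(gray_world_awb(bluish_snow)[0, 0])    # -> roughly [0.83, 0.83, 0.83]
```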

How far can cell phone cameras go?

Look at the ways in which SLR cameras beat cell phones. Dynamic range – I think we’ve pretty much got that. Signal-to-noise ratio – we’re getting a lot better there. Shallow depth of field – it’s simulated, but I think it’s pretty much what most people really want. Then it’s just telephoto zoom reach, which we partly address with Super Res Zoom. But if you want high magnifications, then there’s still no replacement for optics.
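As a toy illustration of the simulated shallow depth of field, here is a minimal compositing sketch – not the actual portrait-mode pipeline, which estimates depth from dual pixels and learned segmentation. The helper name fake_bokeh, the ready-made subject mask, and the blur radius are all hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_bokeh(image, subject_mask, blur_sigma=8.0):
    """Toy simulated shallow depth of field: blur the whole frame, then
    composite the sharp subject back in through a feathered mask.

    image        -- HxWx3 float array
    subject_mask -- HxW float array, 1.0 on the subject, 0.0 on the background
    """
    blurred = np.stack(
        [gaussian_filter(image[..., c], blur_sigma) for c in range(3)], axis=-1)
    soft = gaussian_filter(subject_mask, 2.0)           # soften the mask edge
    soft = soft[..., None]                              # broadcast over RGB
    return soft * image + (1.0 - soft) * blurred

# Toy usage: a random "scene" with a square "subject" in the middle.
scene = np.random.rand(128, 128, 3)
mask = np.zeros((128, 128))
mask[40:90, 40:90] = 1.0
portrait = fake_bokeh(scene, mask)
```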

So for all practical purposes we’re there. I have to ask though, why are we only looking at computational photography now when we’ve been digital for so long?

We’ve been digital for so long, but we haven’t had this level of computation available on a camera that’s also programmable for very long. Go back in history, and fine-scale programmability of cameras by an app started with Android’s Camera 2.0 API. That came out of a paper from my team at Stanford, called the Frankencamera Project. Once that came out, we could look at taking bursts of images, but there was a limit to what we could do with the data because of the performance, and it was only recently that we could achieve the amount of computing needed for Night Sight mode.
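As a toy illustration of why bursts matter, the sketch below averages several noisy frames of a flat grey patch; the noise level and frame count are assumptions, and a real pipeline like HDR+ or Night Sight additionally has to align the frames and reject motion, which is the hard part.

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat grey patch photographed N times with additive noise (all values are
# assumptions for illustration). Averaging aligned frames cuts the noise by
# roughly sqrt(N), which is the basic reason burst pipelines work at all.
true_scene = np.full((256, 256), 0.25)
noise_sigma = 0.05
num_frames = 15

frames = true_scene + rng.normal(0.0, noise_sigma, (num_frames, 256, 256))
merged = frames.mean(axis=0)

print(f"single frame noise: {frames[0].std():.4f}")
print(f"merged frame noise: {merged.std():.4f} "
      f"(~{noise_sigma / np.sqrt(num_frames):.4f} expected)")
```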

Photos: Jesse Levinson, Hector Garcia-Molina