Dan Saunders, Director, Google Pixel Business
Now that we’re into the third Pixel phone, has Google’s definition of the Pixel changed in any way since the original’s debut?
We’ve become clearer in our thinking as we moved along, but the central thought has always been the same: a combination of hardware and software and artificial intelligence. And this applies not just for the Pixel phones, but for all our hardware products.
If you think about it, A.I. is a space that Google has been involved in for many years now, so it really goes to the heart of how Google thinks about organizing the world’s information and making it more accessible to people.
Does that mean Google sees the Pixel more as a vehicle to realize this mission of organizing the world’s information than as a way to make money through hardware sales?
The Pixel is definitely an important piece to achieve that mission, but we also want to achieve the biggest sales volume possible. Having control of the hardware, software, and A.I. in a vertically integrated way means we’re able to build the best experience that we know of and deliver it to end users. And of course, increasing hardware sales then becomes one way to get these experiences into people’s hands.
For example, the Pixel 3 is selling in four new countries this time round: Japan and Taiwan in Asia Pacific; and France and Ireland in Europe.
Let’s talk a bit about the Pixel 3. Why is the notch on the Pixel 3 XL in this shape and size?
It’s all about optimizing the layout of the other technologies within the footprint of the device and making use of all the available space. Because we can’t put the camera behind the screen yet, we need the notch to achieve the overall all-screen effect. And for the Pixel 3 XL, within the notch we’ve got not one but two cameras, including a wide-angle selfie camera. The ambient light sensor, far-field microphone, and the top front-firing speaker are also within that space.
Is Google’s approach to photography different from other phone makers? I mean, Pixel 3’s new camera features such as Top Shot and Night Sight are heavily powered by A.I. and software.
Artificial intelligence is really at the core of what Google does.
Like what we’ve done in the past to organize images and make them searchable in a way that’s useful, it’s really about visually understanding what an image is about using the smarts on the phone, be it on-device machine learning or Google Assistant. For instance, with Night Sight we’re able to look at an image and, based on the shapes that we see, make informed choices about how to create color pop to bring out the detail of a photo that’s shot in a low-light setting.
At the end of the day, anyone can put a high-quality camera module on a phone. What makes the difference is the ability to treat and leverage that module with your own software and artificial intelligence, which is what we’re doing.
So am I right to say the single rear camera is a deliberate choice because Google is confident with what it’s doing through software?
Yes. And if you’re wondering why there are then two front-facing cameras, that’s because we’ve identified a problem that requires us to make some hardware choices in order to solve it. And that problem is the limited field of view when taking selfies with a single camera. So for us, it’s really about optimizing for the problems that we see and we make hardware and/or software choices accordingly.
About that “Not Pink” Pixel 3 color: I see it as typical Google humor, which means that it is pink, no?
(Laughs) Yes, I think that’s exactly right.
Photography Charles Chua