How Google Pixel 3's Camera Works Wonders With Just One Rear Lens

It's all about the software.

When Samsung revealed the Galaxy Note 9 back in August, it showed off new AI-powered camera features, like flaw detection and a scene optimizer to tune the exposure and color of a shot before you’ve captured it. When Apple launched the iPhone XS and XS Max last month, it talked a lot about how the new phones’ AI-specific neural processor enabled better photos, especially Portrait pics.

Now, it’s Google’s turn to boast about its AI-enhanced smartphone camera—and show how its software smarts and access to vast networks of data give it a leg up on the competition.

Earlier today Google announced its new Google Pixel 3 and Pixel 3 XL smartphones. The new phones were expected (and had been leaked weeks beforehand), but since Google makes the vast majority of its revenue from digital advertising, any new hardware launch from the company piques a particular kind of interest. Google may not sell nearly as many phones as its flagship competitors do, but it knows that if it’s going to compete at all in the high-end smartphone market, it has to have a killer camera. The cameras on last year’s Pixel 2 and Pixel 2 XL were widely acknowledged to be excellent. How was Google going to make this year’s phones exceptional?

The answer, for Google, was clear: Anything you can do in AI, we can do better. The challenge was “not to launch gimmicky features, but to be very thoughtful about them, with the intent to let Google do things for you on the phone,” said Mario Queiroz, vice president of product management at Google.

At the same time, being thoughtful about using AI in photography also means being careful not to insert biases. This is something Google has had to reckon with in the past, when its image-labeling technology made a terrible mistake, underscoring the challenges of using software to categorize photos. Google doing more things for you, as Queiroz put it, means it’s making more decisions about what a “good” photo looks like.

Third Time's a Charm

The company’s work on the Pixel 3 camera started before the Pixel 2 phone even launched, according to Isaac Reynolds, a product manager on the Google Pixel camera team. “If the phone starts somewhere between 12 to 24 months in advance [of shipping], the camera starts six to eight months before that,” he says. “We’ve been thinking about the Pixel 3 camera for a long time, certainly more than a year.”

During that time period, the Pixel camera team identified several features—as many as 10, though not all would make it into the phone—that Google’s computational photography researchers were working on. “It’s not, ‘Hey let’s assign a team to this particular project.’ We have a whole team that’s already researching these things,” says Sabrina Ellis, director of product management for Pixel. “For example, low light is an entire area of research for us. And the question becomes, ‘Is this something that would be a great feature for users or not?’”

Eventually, the Pixel team narrowed the list down to the camera features that were both technically possible and actually useful. For example, new features called Top Shot, Photobooth, Super Res Zoom, and Motion Auto Focus all use artificial intelligence and machine learning to identify our best moments or to compensate for our human fallibility. (Turns out, we’re not very good at standing still while taking photos.)

To be sure, some of the improvements to the Google Pixel 3 camera come from hardware upgrades. The front-facing camera now consists of two 8-megapixel lenses, one of them wide-angle, better for group selfies. A slider tool below the viewfinder lets you adjust how wide you want the shot to go. The 12.2-megapixel rear camera has been improved, and the camera sensor is a “newer generation sensor,” though Reynolds conceded that it “has a lot of the same features.” The Pixel 3 also has a flicker sensor, which is supposed to mitigate the flicker effect you get when you’re shooting a photo or video under certain indoor lighting.

Some of the “new” features might not seem all that new, at least in the broader smartphone market. You can now adjust the depth effect on a Portrait photo after it’s been captured on the Pixel 3, something Apple and Samsung already offer on their flagship phones. A synthetic fill flash brightens selfies snapped in the dark; Apple has done this for a while too. The Pixel’s dynamic range has been improved again, but these days, HDR-done-right is a baseline feature on flagship phones—not a standout one.

There’s also the fact that the Google Pixel 3 still has a single-lens rear camera, while its high-end smartphone competitors have moved to two or even three rear lenses. Google argues it doesn’t really need another lens—“we found it was unnecessary,” Queiroz says—because of the company’s expertise in machine learning. Pixel phones already extract depth information from the camera’s dual-pixel sensor, then run machine learning algorithms, trained on over a million photos, to produce the desired depth effect.
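
Google hasn’t published the exact pipeline, but the recipe it describes, a depth map estimated from dual-pixel disparity by a learned model, then a blur that grows with distance from the focal plane, can be sketched in a few lines. Here is a minimal numpy sketch that assumes the depth map is already given, and uses a layered box blur as a crude stand-in for a real bokeh kernel:

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur, a crude stand-in for a real bokeh kernel."""
    if radius == 0:
        return img.astype(np.float64)
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    out = img.astype(np.float64)
    for axis in (0, 1):  # blur rows, then columns
        out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), axis, out)
    return out

def synthetic_bokeh(image, depth, focus_depth, max_radius=8):
    """Blur each pixel in proportion to its distance from the focal plane.

    image: HxWx3 floats in [0, 1]
    depth: HxW floats in [0, 1] (0 = near, 1 = far), assumed to come from
           dual-pixel disparity plus a learned refinement model
    """
    blur = np.abs(depth - focus_depth)                  # 0 at the focal plane
    levels = np.clip((blur * max_radius).astype(int), 0, max_radius)
    out = np.zeros_like(image, dtype=np.float64)
    for r in np.unique(levels):
        out = np.where((levels == r)[..., None], box_blur(image, int(r)), out)
    return np.clip(out, 0.0, 1.0)

# Toy usage: focus on the near (left) side of a gradient-depth scene.
rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (48, 48, 3))
depth = np.linspace(0, 1, 48)[None, :].repeat(48, axis=0)
result = synthetic_bokeh(img, depth, focus_depth=0.2)
print(result.shape)
```

A production pipeline would handle occlusion edges and use a far better blur kernel; the point is only that one lens plus a depth estimate can stand in for a second lens.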

It’s exactly the kind of answer you’d expect from a company that specializes in software. It’s also a convenient answer when camera components are some of the key parts that are driving up the cost of fancy smartphones.

All Eyes on AI

But there are some features launching with the Pixel 3 that do appear to be the clear beneficiaries of Google’s AI prowess—specifically, Google’s Visual Core, a co-processor that Google developed with Intel. It serves as a dedicated AI chip for the Pixel camera. The Visual Core was first rolled out with the Pixel 2 smartphone, a signal that Google was willing to invest in and customize its own chips to make something better than an off-the-shelf component. It’s what powers the Pixel’s commendable HDR+ mode.

This year, the Visual Core has been updated, and it has taken on more camera-related tasks. Top Shot is one of them. It captures a Motion Photo, then automatically selects the best still image from the bunch. It’s looking for open eyes and big smiles, and rejecting shots with windswept hair or faces blurred from too much movement.
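
Google hasn’t said exactly how Top Shot scores frames, but the selection step boils down to ranking a burst by signals like the ones just described. A hedged sketch, with invented detector outputs and weights:

```python
import numpy as np

# Hypothetical per-frame signals, in the shape an on-device face model
# might report them for each frame of a Motion Photo burst.
# Columns: eyes_open, smile, motion_blur (all scored 0 to 1).
frames = np.array([
    [0.95, 0.80, 0.10],   # frame 0
    [0.40, 0.90, 0.05],   # frame 1: eyes half closed
    [0.98, 0.85, 0.60],   # frame 2: motion blur
    [0.97, 0.92, 0.08],   # frame 3
])

# Weighted score: reward open eyes and smiles, penalize blur.
# The weights are illustrative, not Google's.
weights = np.array([1.0, 1.0, -1.5])
scores = frames @ weights

best = int(np.argmax(scores))
print(f"top shot: frame {best}, scores: {np.round(scores, 2)}")
```

Frame 3 wins here: its eyes-open and smile scores are high and its blur penalty is small.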

Photobooth is another one. The new feature is based on technology from the Google Clips camera, a tiny static camera that automatically captures moments throughout your day, or during an event, like a birthday party. Photobooth only takes front-facing photos, but it works a little bit like Clips: You select that mode, raise the camera, and once the camera sees your face in the frame and sees you make an expression, it starts auto-snapping a bunch of photos.
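
In control-flow terms, Photobooth is a trigger loop over the preview stream. The sketch below is a toy: the expression model and the capture step are stand-ins, and none of these names are Google’s actual API.

```python
import random

def expression_score(frame: int) -> float:
    """Pretend model: how 'expressive' is the face in this frame?
    A real implementation would run an on-device face/expression model."""
    return random.Random(frame).random()

def photobooth(preview_frames=20, threshold=0.8):
    """Watch the preview; auto-capture whenever an expression appears."""
    captured = []
    for frame in range(preview_frames):
        if expression_score(frame) >= threshold:
            captured.append(frame)   # in a real camera: take the shot
    return captured

print("auto-captured frames:", photobooth())
```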

If you’re trying to take a picture in the dark—so dark that your smartphone photos would normally look like garbage, as one Google product manager described it to me—the Pixel 3’s camera will suggest something called Night Sight. This isn’t launching with the phone, but is expected to come later this year. Night Sight requires a steady hand because it uses a longer exposure, but it fuses together a bunch of photos to create a nighttime photo that doesn’t look, well, like garbage. And it does all of this without using the phone’s flash.
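
The core idea, merging many short exposures so the noise averages out, is easy to demonstrate. A minimal sketch, assuming the frames are already aligned (the hard part a real pipeline has to solve):

```python
import numpy as np

rng = np.random.default_rng(0)

def shoot_burst(scene, n_frames, noise):
    """Simulate a burst of short, noisy exposures of a dim scene."""
    return [scene + rng.normal(0.0, noise, scene.shape) for _ in range(n_frames)]

def merge(burst):
    """Naive merge: average the (already aligned) frames. Averaging n
    frames cuts the noise by roughly sqrt(n); the real pipeline also
    aligns tiles and rejects moving subjects, which this sketch skips."""
    return np.mean(burst, axis=0)

scene = rng.uniform(0.0, 0.1, (64, 64))              # a dim scene
burst = shoot_burst(scene, n_frames=15, noise=0.05)

print(f"noise, one frame: {np.std(burst[0] - scene):.4f}")
print(f"noise, 15 merged: {np.std(merge(burst) - scene):.4f}")
```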

Super Res Zoom, another feature new to the Pixel 3, isn’t just a software tweak. It requires a lens that’s a little bit sharper than the camera’s sensor, so that resolution isn’t limited by the sensor. From there, it sharpens photos you’ve zoomed way in on by capturing a burst of frames and using the tiny movements of your hand to sample the scene at slightly offset positions, which the software then merges into one higher-resolution shot. (If you have the smartphone on a tripod or stable surface, you can actually see the frame moving slightly, as the camera mimics your hand movement.)
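
A toy model shows why those tiny shifts matter: if four low-resolution frames sample the scene at offsets of half a low-res pixel, their samples interleave to tile the full-resolution grid. The sketch below assumes the offsets are known exactly, whereas a real pipeline has to estimate them from the frames themselves:

```python
import numpy as np

rng = np.random.default_rng(1)
scale = 2                                  # resolution factor to recover
scene = rng.uniform(0, 1, (64, 64))        # the "true" fine-detail scene

def capture(dy, dx, noise=0.02):
    """One low-res frame: the sensor samples the scene on a coarse grid
    whose origin is nudged by the hand's shift (dy, dx fine pixels)."""
    lowres = scene[dy::scale, dx::scale]
    return lowres + rng.normal(0, noise, lowres.shape)

# Hand tremor gives each frame a slightly different offset; here the
# offsets are known by construction.
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = {off: capture(*off) for off in offsets}

# Merge: drop each frame's samples back onto the fine grid.
recon = np.zeros_like(scene)
for (dy, dx), f in frames.items():
    recon[dy::scale, dx::scale] = f

print(f"mean reconstruction error: {np.abs(recon - scene).mean():.4f}")
```

With the offsets recovered correctly, the merge is limited only by sensor noise, which is why Super Res Zoom also needs that lens sharper than the sensor.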

There are almost too many new camera features to take full advantage of. And without having really used the Pixel 3 yet, it’s hard to know which of them are actually useful and which are gimmicks, the very thing Queiroz said Google was trying to avoid.

Picture Perfect

This relatively new trend in computational photography, the use of AI and machine learning to compensate for a lack of hardware or for human imperfection, raises some questions about the existence of bias in the machine learning models that Google is using. Google’s photo data sets have already been shown to have bias, as have others. One thing that stood out to me as I got a sneak peek at Google’s new Pixel cameras: There were an awful lot of references to photos with smiling, happy faces.

Top Shot looks for photos that would be considered decent by any photo standards, but it also looks for that group shot where you’re all smiling. Photobooth won’t start auto-snapping photos until you’ve made some sort of expression, like a smile or a goofy face. Google uses AI to make photos look better overall, for sure—but in doing that it’s also making subtle determinations around what a good photo is.

“If AI is just being used to make photos look better, then everyone likes it,” said Venkatesh Saligrama, a professor at Boston University’s school of engineering who has researched gender biases in machine learning. “On the other hand, if it’s using information more broadly, to say this is what they like and what they don’t like and altering your photography that way, then it might not be something you want out of the system.”

“It could be applying broader cultural influences, and in some cases that may not be good,” Saligrama added.

Reynolds, the Pixel camera product manager, says his team likens some of the new features to building a “shot list” of what photos most people would want to take in a given situation—say, at a wedding. “Everyone goes into a wedding with a shot list, and when we built Top Shot, we had those sorts of lists in mind,” he said. “And somewhere on that shot list is also a very serious pose, a dramatic photo. But I think we decided to focus on that group photo where everyone is smiling at the same time.”

Google also has specific machine learning models that can detect surprise, or amusement, in certain scenarios, Reynolds said. It has annotated over 100 million faces. It knows these things.

For the most part, this technology may very well translate into wow-worthy photos on the Google Pixel 3. It may surpass the already-impressive Google Pixel 2 camera. Or it may just nudge the future of smartphone photography forward slightly, in a year when every major smartphone camera is pretty darn good. One thing’s certain: Google’s doing it the Google way.

