The Pixel 3's Camera Proves Software Is the Only Smartphone Innovation That Matters
Image: Owen Williams

Google has discovered a secret weapon: it’s building devices that improve over time, skipping the diminishing returns of yearly hardware bumps.

In New York City today, Google unveiled the next big refresh of its devices: a new convertible laptop, an all-encompassing Home Hub, and a pair of Pixel 3 smartphones. The underlying narrative: we’ve bet big on software, and it’s starting to pay dividends.

Ever since the original Pixel was unveiled, Google’s philosophy has looked different. Its devices sometimes lagged behind on pure specs, but the Pixel 2 stood out for a camera that reliably produced photos exceeding expectations. It didn’t have dual cameras like the top-end iPhone, yet its image processing software could produce photos that were as good or better.

Rather than chasing technical superiority, Google sought to build experiences that delight the person actually using the phone, by delivering the photo that looks best. It paid off: the Pixel 2 is regularly cited as having the best smartphone camera.

All of this came out of a single bet: build devices with commodity hardware available to every manufacturer, and leverage machine learning to deliver the best result. The Pixel 2 included the Pixel Visual Core, a dedicated chip that provided the raw power needed to run neural networks locally and quickly.

Image: Owen Williams

This year, the company has doubled down on that bet with the Pixel 3, which is almost identical to its predecessor on paper. Instead of flashy hardware upgrades (the Pixel 3’s camera has the same number of megapixels as the Pixel 2’s), Google built algorithms that create new types of photos.

New features across the board rely on machine learning to synthesize better photos. Super Res Zoom, for example, lets the user zoom in on a subject without compromising quality: by snapping a burst of photos, then analyzing and stitching them together, the camera produces crisp detail on close inspection that traditionally required a whole second lens, like the telephoto lens Apple ships today.
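
To make that concrete, here’s a minimal sketch of the general multi-frame idea: align a burst of slightly-shifted handheld frames and merge them, so the result holds up to cropping better than any single shot. The ECC alignment and the simple mean-merge below are illustrative assumptions, not Google’s actual pipeline.

```python
# A minimal sketch of the multi-frame idea behind features like Super Res
# Zoom: align a burst of slightly-shifted frames and merge them, so the
# result carries more detail and less noise than any single frame.
# NOT Google's pipeline; the mean-merge is an illustrative assumption.

import cv2
import numpy as np

def merge_burst(frames):
    """Align each frame to the first one, then average the stack."""
    reference = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    aligned = [frames[0].astype(np.float32)]

    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Estimate the small translation between handheld shots.
        warp = np.eye(2, 3, dtype=np.float32)
        _, warp = cv2.findTransformECC(reference, gray, warp,
                                       cv2.MOTION_TRANSLATION)
        shifted = cv2.warpAffine(frame.astype(np.float32), warp,
                                 (frame.shape[1], frame.shape[0]))
        aligned.append(shifted)

    # Averaging the aligned stack suppresses sensor noise, which is what
    # lets the merged image survive an aggressive zoom or crop.
    return np.mean(aligned, axis=0).astype(np.uint8)

# burst = [cv2.imread(f"shot_{i}.jpg") for i in range(8)]  # hypothetical files
# merged = merge_burst(burst)
```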

Top Shot, a new feature that captures a few seconds of frames before and after you take the shot, uses similar techniques. If you miss the right moment, the device understands that and surfaces a fix as if by magic: here’s the shot you wanted; we’ve got your back.
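
The buffering half of that trick is simple to sketch: keep a short rolling window of viewfinder frames, then rank them once the shutter is pressed. The scoring function below is a hypothetical stand-in, not Google’s model.

```python
# A minimal sketch of the buffering idea behind Top Shot: the camera keeps
# a rolling window of recent frames, so "the moment before you pressed the
# shutter" still exists when the software goes looking for a better shot.

from collections import deque

class RollingCapture:
    def __init__(self, window_size=30):
        # Old frames fall off the left automatically once the deque fills.
        self.frames = deque(maxlen=window_size)

    def on_new_frame(self, frame):
        self.frames.append(frame)

    def best_shot(self, score):
        # After the shutter press, rank the buffered frames and surface
        # the strongest one (e.g., eyes open, subject in focus).
        return max(self.frames, key=score)

# capture = RollingCapture()
# ... feed it viewfinder frames as they arrive ...
# winner = capture.best_shot(score=lambda f: f.sharpness)  # hypothetical metric
```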

The same machine learning approach powers other modes, including Night Sight and Photobooth, as well as other areas of the device, like the new call screening functionality. All of it runs on-device, without the cloud.

But here’s the crux: Google doesn’t make you upgrade your phone to get these features. Because they’re software-based changes that learn over time, older Pixel phones get the enhancements too.

Image: Owen Williams

This was a smart play because cameras, phones, and our devices in general have become a race to the bottom. We’re in an age of incremental improvements rather than splashy new devices, and it’s starting to show in how painful many smartphone events have become: something truly revolutionary is harder than ever to produce.

By betting that software alone can beat the diminishing yearly returns of hardware, Google has discovered a secret weapon: it’s building devices that get better as they learn, and that don’t require an upgrade to get the fancy new technology. It has, in essence, bet on its own strength in software and on the logical endpoint of that race to the bottom: the end user doesn’t care what’s under the hood, as long as it delivers results.

Photography enthusiasts are quick to pick holes in this approach, in how it’s implemented, or in how the results from a device with better specifications might look technically superior. But what really matters is how the user judges the result: which photo is more visually pleasing?

We’re entering a new phase that raises the question of what makes a photo real, let alone good, when machines can synthesize something visually pleasing out of thin air. Does it matter if the machine is smashing 100 photos together to produce the perfect result, or is that cheating? Users probably won’t care, or even know it’s happening, if it looks right.

In my experience, the Pixel 2 almost always delivered the best photos despite its underpowered hardware, and it keeps up with the next-generation iPhone a year after its release. Nobody cares about the specs.

Google’s long-term bet is that software will always produce a better result, and at today’s Pixel event it showed off the power of that approach: pushing ever more sophisticated features across its entire lineup, rather than reserving them for new devices, and improving them over time.

It’ll be impossible to judge the Pixel 3 on early reviews, because if Google continues this trend, the phone (and its camera) will keep getting better. Machine learning isn’t magic, but it’s far easier to improve over time than the piece of metal and plastic in your pocket, and it’s a play that keeps delighting users far into the future, for free. Google’s bet on software, rather than hardware, as the solution to its problems is a winner: it just took a few years to prove it.