
 

Google just gave away its competitive edge in smartphone cameras for free

Mar 16, 2018, 15:01 IST
Google recently announced that its DeepLab-v3+ algorithm will now be open-sourced, meaning anyone can use it, modify it, and even improve it. This is the algorithm that helps Google’s artificial intelligence (AI) determine what each pixel in an image is. It can assign “semantic labels” to each of these pixels, calling them road, dog, sky and more.

When you press the shutter button on your Pixel, Google takes a split second before the image actually becomes available to you. This is when the algorithm works its magic to determine what’s what.

Why is Google doing this?

Why Google is doing this is anybody’s guess, but it does take away some of the search giant’s edge. The company’s primary advantage has always been software, and by open-sourcing this, Google may be giving up some of that advantage. That said, one could also argue that the Pixel phones have never been known specifically for their bokeh images; they have good cameras overall, and that is what gives them the edge. The counter-argument is that semantic image segmentation could give other OEMs clues to how Google achieves what it does on Pixel cameras. All said and done, by open-sourcing the tech behind the Pixel’s camera, Google may be doing itself a disservice and future smartphones a huge service.

What is the tech?

Google’s algorithm was not perfect at launch, but the company has improved it since. In photography, a bokeh image is one where the subject is kept sharp while the background is out of focus. Identifying the subjects in an image helps determine what needs to be blurred for a bokeh effect, and this technique of identifying subjects is called semantic image segmentation.
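As a rough illustration of how a segmentation mask turns into a bokeh effect, here is a minimal Python sketch. This is not Google’s actual pipeline: the grayscale arrays, the naive box blur, and the `apply_bokeh` helper are all assumptions for demonstration. The idea is simply that pixels labelled “subject” stay sharp while everything else gets a blurred value.

```python
import numpy as np

def box_blur(image, radius):
    # Naive box blur: average shifted copies of the image.
    # np.roll wraps around at the edges, which is fine for a sketch.
    size = 2 * radius + 1
    out = np.zeros_like(image, dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(image, dy, axis=0), dx, axis=1)
    return out / (size * size)

def apply_bokeh(image, mask, radius=2):
    # image: (H, W) grayscale array; mask: (H, W) bool, True = subject.
    # Subject pixels keep their original value; the rest are blurred.
    return np.where(mask, image, box_blur(image, radius))
```

In a real pipeline, `mask` would come from a segmentation model such as DeepLab-v3+ labelling each pixel, rather than being hand-drawn.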


The bokeh images you see on smartphones today are all thanks to Apple. Yet while the Cupertino-based company (and those that followed it) uses two cameras to do this, Google did it with one. True to its nature, the company did it in software, AI software to be precise. The Pixel 2, widely acknowledged to have the best smartphone camera today, shoots portrait-mode photos without a second camera on its back.

What does this mean for you?

In the short term, this could give smaller smartphone makers a chance to reduce costs, because they could simply strip their phones of the second camera (usually used for bokeh shots) and instead use Google’s algorithm. But more importantly, it could mean that every smartphone in the future will have some version of this software. For instance, Samsung could choose to take DeepLab-v3+, modify it using its own existing camera software, and possibly come up with something even better than Google’s.