At its I/O 2017 conference, Google today announced Google Lens, the next generation in its image recognition capabilities, allowing AI to easily identify objects and locations in your pictures, and in turn enabling you to do some really cool things with them.
Using its Knowledge Graph, Google can identify not only the objects contained within a given picture or video but also other traits, such as a person's facial expression and their surroundings. The company even went so far as to claim that its image recognition algorithm was now more accurate and less error-prone than humans.
The same technology can then be used for a number of interesting applications. For example, a picture taken with your phone in poor lighting conditions can be analysed by Google's systems and automatically adjusted to restore much of the detail and colour that was initially missing, based on pre-existing data on how a specific object should look.
Google CEO Sundar Pichai also teased future functionality that would allow users to automatically remove unwanted objects or obstructions from their pictures, something that previously required image editing software like Photoshop and a certain level of expertise with such tools.
All of Google's effort in image recognition is set to culminate in Google Lens, a group of features to be bundled with Google's Assistant and Photos apps. Lens will allow objects to be recognised automatically simply by pointing your camera at them. It can even identify locations and organisations when pointed at the relevant building, supplementing the image with further information about the establishment, such as user reviews.
All in all, these improvements in image recognition will not only make users' lives a little more convenient but also serve as a powerful foundation for further work in machine learning, AI and search, all of which should yield improvements over time across the entirety of Google's lineup.