Google develops CSI-like technology that 'enhances' low-resolution images

Ever watch a TV show where a bunch of investigators are looking at a blurry image and then “enhance it” to crystal clear quality? Did you then think to yourself “that’s so dumb; imaging doesn’t work like that”? We know we have, but reality has a surprise for us: imaging does work like that, at least in the case of one Google experiment.

Researchers at the company have been teaching neural networks to recreate higher-resolution images from very low-res samples. Essentially, the system “enhances” and upscales the picture, in many cases making the subject much more recognizable to human eyes. In some ways, the system acts as if it recreates lost data, with Google’s scientists saying it “hallucinates” the missing detail.

The system, which un-blurred human faces in low-resolution images, actually relies on two separate neural networks. One has been taught what human faces look like and tries to fill in the gaps when it recognizes a partial face. The other works out what a downscaled version of an image should look like compared to the original, essentially continuously trying to match a low-res image to a higher-res one. As the two work together, one part tries to recreate human faces while the other tries to match the result as closely as it can to a scaled version of the same image.
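The two-network idea described above can be sketched in a few lines. This is a hedged toy illustration, not Google's actual model: the names `prior_logits` (the network that knows face structure) and `conditioning_logits` (the network that checks consistency with the low-res input) are our own stand-ins, and real systems of this kind combine the two networks' per-pixel scores before picking pixel values.

```python
import numpy as np

def combine_logits(prior_logits, conditioning_logits):
    """Sum per-pixel scores from both hypothetical networks and pick
    the most likely intensity for each pixel (a greedy-decoding sketch)."""
    total = prior_logits + conditioning_logits
    return total.argmax(axis=-1)  # one predicted intensity per pixel

# Toy example: a 2x2 image with 4 possible intensity levels per pixel.
# Random arrays stand in for real network outputs.
rng = np.random.default_rng(0)
prior = rng.normal(size=(2, 2, 4))       # "what faces look like" scores
cond = rng.normal(size=(2, 2, 4))        # "matches the low-res input" scores
pixels = combine_logits(prior, cond)
print(pixels.shape)  # (2, 2)
```

The key design point the article hints at: neither network decides alone; the face-shaped guess is only accepted to the degree it also stays consistent with the blurry original.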

[Image] Right: the original images; left: what the system had to work with; center: the system's result

But the system can do a bit more than just recreate faces. Researchers also used it to recreate photos of bedrooms from low-resolution samples, essentially showing that any type of data a neural network can be trained on might be partially reconstructed, when missing, through this type of system.

Of course, the system is far from the sci-fi-esque portrayals in shows like CSI, but Google does have an immediate use for such neural nets: using this reconstruction method, the company can move around less image data, essentially lower-resolution images, and rely on the receiver to upscale each image back to full resolution. This can have major implications for data storage, bandwidth use and other areas, and in fact similar technology is already being used in a limited fashion on Google Plus.
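The bandwidth argument above is simple arithmetic: shrinking an image by a factor of 4 per side means sending 1/16 of the pixels and reconstructing the rest on the receiver's end. A minimal sketch, with a plain nearest-neighbor repeat standing in for the neural upscaler (which is an assumption for illustration, not Google's method):

```python
import numpy as np

def downscale(img, factor):
    """Average non-overlapping factor x factor blocks (a simple box filter)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upscale_nearest(img, factor):
    """Placeholder for the neural upscaler: repeat each pixel factor x factor times."""
    return np.kron(img, np.ones((factor, factor)))

img = np.arange(64, dtype=float).reshape(8, 8)   # "full-resolution" toy image
small = downscale(img, 4)                        # 8x8 -> 2x2: 1/16 of the data to send
restored = upscale_nearest(small, 4)             # receiver reconstructs the full size
print(small.size, restored.shape)  # 4 (8, 8)
```

In the real pipeline, the receiver's upscaler would be the learned network from the experiment, which fills in plausible detail rather than just repeating pixels; the transmission saving is the same either way.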

Outside of data centers, technology such as this may soon have major implications for security, privacy and other aspects of our day to day lives.

Source: Google (arXiv) Via: The Guardian
