Your phone may be able to clean up snaps – but our AI is much better at touching up, say boffins

Grainy to clear pixels within milliseconds

Video Don’t worry if the lighting is a bit off in your photos – artificially intelligent software can fix that.

Computer scientists from Nvidia, Aalto University in Finland, and the Massachusetts Institute of Technology in the US have trained a neural network to restore images marred by flecks of noise. Computer vision algorithms are already used automatically to improve snaps taken on smartphones like the Pixel 2 or the iPhone X, but this research takes things further.

With this new technique, the training process is slightly different from how the likes of Google and Apple train their phone software to clean up pictures.

Instead of feeding neural networks pairs of images, where one is clean and the other is noisy, this latest model – nicknamed noise2noise – can learn how to clean up images without ever needing to see clean examples.

“We apply basic statistical reasoning to signal reconstruction by machine learning — learning to map corrupted observations to clean signals — with a simple and powerful conclusion: under certain common circumstances, it is possible to learn to restore signals without ever observing clean ones,” according to the paper’s abstract.

The theoretical basis for why this works is a little tricky to understand. More traditional techniques use pairs of noisy and clean images, and learn to minimize a loss function by estimating the difference in pixel values between the two pictures.

There is a wide range of values that the pixels can take to recreate a crisper image, and over time the neural network learns to average out these values. The same idea applies when training on pairs of corrupted images, provided the spread of pixel values between the two images is similar to that between a clean and a noisy image.

“This implies that we can, in principle, corrupt the training targets of a neural network with zero-mean noise without changing what the network learns,” the paper said.
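To get a feel for why zero-mean noise in the targets doesn't change the answer, here's a toy numerical sketch of our own (not the researchers' code): for a squared-error loss, the best single estimate against many noisy copies of a pixel is their average, which converges to the clean value.

import numpy as np

rng = np.random.default_rng(0)
clean_value = 0.7                                             # the "true" pixel value
noisy_targets = clean_value + rng.normal(0.0, 0.2, 100_000)   # zero-mean noise added

# For a squared-error (L2) loss, the estimate minimising the total error
# against a set of targets is simply their mean, and with zero-mean noise
# that mean converges to the clean value the network is really after.
estimate = noisy_targets.mean()
print(f"clean value: {clean_value:.3f}, estimate from noisy targets: {estimate:.3f}")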

Training with technology

The team trained their noise2noise model on 50,000 images taken from the ImageNet dataset and added randomly distributed noise to each image. The system has to estimate the magnitude of the noise in the photo and remove it.
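As a rough illustration of that setup (a sketch of ours, not Nvidia's published code, with a placeholder network, noise level, and learning rate), a training step could pair two independently corrupted copies of the same photo and score the network's output against the noisy target:

import torch
import torch.nn as nn

# A toy convolutional denoiser standing in for the team's full network;
# the architecture and hyperparameters here are illustrative guesses.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(clean_batch, sigma=0.1):
    # Two independent zero-mean noise realisations of the same images:
    # one becomes the input, the other the training target. The clean
    # pictures themselves are never shown to the network.
    noisy_input = clean_batch + sigma * torch.randn_like(clean_batch)
    noisy_target = clean_batch + sigma * torch.randn_like(clean_batch)

    optimizer.zero_grad()
    loss = loss_fn(model(noisy_input), noisy_target)
    loss.backward()
    optimizer.step()
    return loss.item()

# One step on a random batch standing in for a batch of ImageNet crops.
print(train_step(torch.rand(8, 3, 64, 64)))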

It was tested on three datasets with images of buildings, people, and magnetic resonance imaging scans. Here’s a video with some of the results.

[YouTube video]

The model won’t cure all imperfections, however. It can’t restore objects that are just out of frame or reposition the photo to get the best angle. But it’s helpful when there aren’t enough clean, high-resolution examples to train from, such as good images of galaxies or planets.

“There are several real-world situations where obtaining clean training data is difficult: low-light photography, physically-based rendering, and magnetic resonance imaging,” the team said.

“Our proof-of-concept demonstrations point the way to significant potential benefits in these applications by removing the need for potentially strenuous collection of clean data. Of course, there is no free lunch – we cannot learn to pick up features that are not there in the input data – but this applies equally to training with clean targets.”

The research is being presented at the International Conference on Machine Learning in Sweden this week. ®
