Google has developed a special AI capable of lighting your photos in a simply incredible way

Google is preparing an artificial intelligence capable of cleaning noise from images without losing detail or quality.


The relationship between cell phones and night photography is anything but pleasant. It’s true that having a camera in your pocket is a great advantage and one of the coolest things technology has given us over the past 20 years, but the truth is that shooting in low light does not always provide ideal results.

Indeed, if we take a picture on the street without natural light, the image sensor will typically generate a lot of electronic noise. There are many ways to reduce it; the most common on today's smartphones is aggressive smoothing, which gains clarity at the cost of fine detail. Google, however, is training an AI that can eliminate noise without sacrificing detail.

Detailed and perfect images in low light thanks to the Big G


Google prepares the ultimate solution to noise in low-light photos

That, on paper, is Mountain View's idea. To pursue it, they launched an open-source project known as MultiNeRF, as reported by PetaPixel. Since digital noise and its consequences remain two major problems for engineers to work on, Google's algorithms aim to solve them with a neural network, whose first (and impressive) results you can see in the following video:

The neural network in question is NeRF (Neural Radiance Fields), which was originally created to generate 3D scenes from sets of 2D images. Google chose to build on this network because, once it has generated a 3D representation, it is much easier to analyze the information contained in an image, since the model can "move" through the scene.

The MultiNeRF project documentation states its mission clearly:

We modified NeRF to train the AI directly on linear RAW images, preserving the full dynamic range of the scene. By rendering the raw output images of the resulting NeRF, we can perform novel high-dynamic-range (HDR) view synthesis tasks. In addition to changing the camera's point of view, we can manipulate focus, exposure, and tone mapping after analyzing the image.

In other words: the algorithm analyzes the raw sensor data in the RAW file and uses artificial intelligence to reconstruct what the photo would look like if there were no digital noise in the scene. The goal is to keep the maximum detail with the minimum noise.
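The core intuition can be illustrated with a toy sketch (this is not Google's code, and the numbers are made up for illustration): noise is random from shot to shot, while the underlying scene radiance is not, so a model that combines many noisy linear observations of the same scene, as a NeRF trained on a burst of RAW frames does, recovers a far cleaner signal than any single frame.

```python
# Toy illustration (not Google's implementation): combining many noisy
# linear observations of the same scene averages out random sensor noise.
import numpy as np

rng = np.random.default_rng(0)

# A dim, flat patch of linear scene radiance (hypothetical values).
clean = np.full(10_000, 0.05)

# Simulate 64 RAW captures of the same patch with Gaussian read noise.
frames = [clean + rng.normal(0.0, 0.05, clean.shape) for _ in range(64)]

single_err = np.abs(frames[0] - clean).mean()            # one noisy frame
merged_err = np.abs(np.mean(frames, axis=0) - clean).mean()  # combined

print(f"single-frame error:   {single_err:.4f}")
print(f"64-frame merge error: {merged_err:.4f}")
```

Averaging 64 frames shrinks the noise by roughly a factor of eight (the square root of the frame count), which is why a model that fuses information across an entire burst can denoise without smearing away detail.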

For now, the AI responsible for this whole process is in its early stages, though there is no doubt we would like to see it implemented in Google Pixel phones as soon as possible. It is still too early to know whether that will happen, but if it does, it would not hurt for other manufacturers to jump on the bandwagon. In the meantime, do not hesitate to take a look at the phones with the best cameras you can find on the market and the best photo-editing apps available for Android.
