A team of researchers from ECE ILLINOIS and Intel, led by graduate student Chen Chen, recently created a neural network that takes images that "appear pitch black or full of noise" and turns them into bright, clean, and colorful pictures.

To perform these enhancements, the researchers developed the See-in-the-Dark (SID) dataset, "a group of 5,094 short-exposure images in RAW format," and fed it into a deep learning system. They then trained the network to map each low-light image to a corresponding picture taken with a longer exposure. The result is essentially a pipeline for processing low-light images: the network operates directly on raw sensor data and replaces most of the traditional image processing pipeline.
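The idea of operating directly on raw sensor data can be made concrete with a small preprocessing sketch. Before the raw frame reaches the network, the sensor's Bayer mosaic is typically unpacked into separate color channels and the dark signal is amplified by the desired exposure ratio. The snippet below is a minimal illustration in NumPy, assuming an RGGB Bayer pattern and illustrative black/white levels and amplification ratio (these specific values are assumptions, not figures from the article); the trained network itself is omitted.

```python
import numpy as np

def pack_raw(bayer, black_level=512, white_level=16383, ratio=100):
    """Pack an RGGB Bayer mosaic (H x W) into four half-resolution
    color planes and amplify by an exposure ratio. The black level,
    white level, and ratio used here are illustrative assumptions."""
    # Subtract the sensor black level and normalize to [0, 1].
    img = np.maximum(bayer.astype(np.float32) - black_level, 0)
    img /= (white_level - black_level)
    # Split the 2x2 Bayer pattern into R, G, G, B planes,
    # each at half the spatial resolution of the mosaic.
    packed = np.stack([img[0::2, 0::2],   # R
                       img[0::2, 1::2],   # G
                       img[1::2, 0::2],   # G
                       img[1::2, 1::2]],  # B
                      axis=-1)
    # Brighten the short-exposure input by the exposure ratio
    # before it would be fed to the network.
    return packed * ratio

# Example: a dummy 4x4 Bayer frame becomes a 2x2x4 packed array.
raw = np.full((4, 4), 600, dtype=np.uint16)
out = pack_raw(raw)
print(out.shape)  # (2, 2, 4)
```

A fully convolutional network would then map this packed, amplified tensor directly to an RGB image, skipping conventional steps such as demosaicing and denoising.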

Current camera technology is not well suited to extreme low-light photography or night vision. Low-light photography is often impractical, requiring additional equipment and long exposure times that can lead to blurry images. With this research, however, real-time extreme low-light image processing could soon become a reality. Such technology could provide real-time image processing for a camera or optics system, enabling more advanced military and consumer camera technology, such as headsets that allow people to see perfectly in the dark.

Read more about this technology at The Next Web, and find the team's research here.