Computers have been capable of taking hand-drawn sketches and turning them into photorealistic images for some time now, but the color photographs that result haven't always been accurate. It's a task that's even harder for humans to tackle by hand, let alone take even a cursory stab at.



That's where a neural network trained by Yagmur Gucluturk, Umut Guclu and others at Radboud University in the Netherlands comes in. The process began with 200,000 images of faces pulled from the internet, which the team converted into line drawings, grayscale sketches and color sketches. Those pairs were then used to teach an 11-layer neural network to turn a sketch into a photograph of a face that could rival one actually taken by a camera.
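The core idea behind building that training data is simple: take a real photo as the target, derive a sketch from it automatically, and feed the network (sketch, photo) pairs. The paper used dedicated sketch-synthesis methods, so the sketch below is only a crude stand-in: a minimal sketch of the pairing step, assuming numpy, with grayscale via standard luminance weights and a "line drawing" faked with a gradient-magnitude edge map. All function names here are hypothetical.

```python
import numpy as np

def to_grayscale(img):
    """Luminance-weighted grayscale (ITU-R BT.601 weights)."""
    return img @ np.array([0.299, 0.587, 0.114])

def to_line_sketch(gray, threshold=0.1):
    """Crude line sketch: binarized gradient-magnitude edges.
    A stand-in for the real sketch-synthesis step, not the paper's method."""
    gy, gx = np.gradient(gray)
    return (np.hypot(gx, gy) > threshold).astype(np.float64)

def make_training_pair(color_img):
    """One (input sketch, target photo) pair for a sketch-inversion network."""
    sketch = to_line_sketch(to_grayscale(color_img))
    return sketch, color_img

# Tiny synthetic "photo": dark left half, bright right half.
photo = np.zeros((8, 8, 3))
photo[:, 4:] = 0.9

sketch, target = make_training_pair(photo)
```

Run over a large photo collection, this kind of loop yields sketch/photo pairs in bulk without any human drawing anything, which is what made a 200,000-image training set feasible.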



That was enough to train the neural network, and the team then gave it one more go using a completely different set of data, directing it to start with a sketch and create a photorealistic image. The resulting images were surprisingly accurate, with the team noting that the line sketches produced images with color even though there was no color to be found within the sketches themselves.



The neural network was tested again with an additional data set using an alternate set of sketches, and again the network produced admirable results. The experiment did turn up some anomalies, though: the network had difficulty producing realistic results from regular pencil drawings that lacked shading.



Despite its shortcomings with some particular data sets, the network was able to recreate impressive likenesses of artists like Van Gogh and Rembrandt from self-portraits sketched by the greats themselves.



These results were achieved in only a few years of work, paving the way once more for the startlingly impressive neural networks we're able to train to perform complex tasks. What's next for these machines? The sky could be the limit.