Anomaly detection is a classic problem in computer vision. It is typically addressed as a supervised learning problem, which requires large, labeled datasets. For many rare real-world events, however, anomalous samples are too scarce to be effectively modeled this way.

In a new paper, Durham University researchers introduce an anomaly detection model, GANomaly, comprising a conditional generative adversarial network that “jointly learns the generation of high-dimensional image space and the inference of latent space.” This enables the model to perform anomaly detection even in sample-poor environments.

Durham’s GANomaly model differs from previous GAN-based anomaly detection approaches such as AnoGAN and Efficient-GAN-Anomaly in that it compares images in their encoded latent space rather than in image space. The researchers used an encoder-decoder-encoder sub-network structure in the generator: the first encoder maps the input image to a lower-dimensional vector, which the decoder then uses to reconstruct the output image. An additional encoder maps the generated image back to its own latent representation. “Minimizing the distance between these images and the latent vectors during training aids in learning the data distribution for the normal samples.”
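The encoder-decoder-encoder pipeline above can be sketched in a toy NumPy example. Here the trained convolutional sub-networks are stood in for by random linear maps, and all names and dimensions are illustrative, not the paper's implementation; the point is only the shape of the test-time anomaly score, the latent-space distance between the two encodings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical, not from the paper).
IMG_DIM, LATENT_DIM = 64, 8

# Stand-ins for trained sub-networks; GANomaly uses convolutional
# encoder/decoder networks learned adversarially on normal data only.
W_enc1 = rng.normal(size=(LATENT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)    # G_E: image -> latent z
W_dec = rng.normal(size=(IMG_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)  # G_D: latent -> reconstruction
W_enc2 = rng.normal(size=(LATENT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)    # E: reconstruction -> latent z_hat

def anomaly_score(x):
    """Latent-distance score: encode the input, reconstruct it,
    re-encode the reconstruction, and measure the gap between
    the two latent vectors."""
    z = W_enc1 @ x          # first encoding of the input image
    x_hat = W_dec @ z       # decoder's reconstruction
    z_hat = W_enc2 @ x_hat  # re-encoding of the reconstruction
    return float(np.abs(z - z_hat).sum())

x = rng.normal(size=IMG_DIM)  # a flattened "image"
print(anomaly_score(x))
```

With untrained random weights the score is of course meaningless; the idea is that after training on normal samples only, the two encodings agree for normal inputs, so a large latent distance flags an anomaly.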

The researchers tested the efficacy of GANomaly on several benchmark datasets, where it outperformed previous approaches both statistically and computationally. The new Durham method has potential applications in biomedicine, fintech, video surveillance, network systems and fraud detection.

The paper GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training is on arXiv.