Our brains can accurately classify the objects we see despite huge variability in parameters such as illumination, pose, and background features. Recent advances in machine learning have produced neural networks with similar abilities. However, researchers lack a mathematical understanding of how biological and artificial systems achieve such remarkable recognition accuracy. Here, we show how statistical mechanical theory can be used to explain fundamental principles underlying the ability of a neural circuit to discriminate objects in the face of enormous physical variability.

We geometrically model the variability in the neural representation of a particular object as a manifold. The number of manifolds that can be classified at a particular stage of the network grows in proportion to the dimensionality of the neural representation, with a proportionality constant that depends on the shape of the manifolds. Our theory can be used to analyze the structure of the manifold representations as they transform and propagate through the network, ultimately leading to successful classification.
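This scaling can be illustrated numerically. The sketch below (an illustration, not the paper's analytical theory) represents each object manifold as a small point cloud around a random center, assigns random binary labels, and checks linear separability with a simple perceptron. The function name, the jitter model of variability, and all parameter values are assumptions chosen for the demonstration.

```python
import numpy as np

def separable(P, N, points_per_manifold=1, radius=0.0, epochs=1000, seed=0):
    """Check whether P randomly labeled point-cloud "manifolds" in N dimensions
    can be linearly separated (illustrative perceptron test, not the theory itself)."""
    rng = np.random.default_rng(seed)
    centers = rng.standard_normal((P, N))
    labels = rng.choice([-1.0, 1.0], size=P)
    # each manifold = its center plus small random jitter (its "variability")
    X = (centers[:, None, :]
         + radius * rng.standard_normal((P, points_per_manifold, N))).reshape(-1, N)
    y = np.repeat(labels, points_per_manifold)  # every point inherits its manifold's label
    w = np.zeros(N)
    for _ in range(epochs):
        margins = y * (X @ w)
        bad = margins <= 0          # points on the wrong side of the hyperplane
        if not bad.any():
            return True             # all manifolds correctly classified
        w += (y[bad, None] * X[bad]).sum(axis=0) / len(X)  # batch perceptron step
    return False                    # no separator found within the epoch budget
```

For point-like manifolds (`radius=0.0`), separability holds with high probability up to roughly twice the dimensionality, the classical perceptron capacity; with extended manifolds (`radius > 0`), the number that can be separated shrinks, reflecting the shape dependence described above.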

Our theory describes the shape of the neural manifolds using geometrical measures that predict when a randomly labeled set of manifolds can be separated. These measures yield quantities that characterize manifolds of arbitrary geometry and can be computed efficiently; we use them to analyze prototypical manifold models of neural responses.
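As a concrete illustration of such geometrical measures, the sketch below computes two common heuristics for a point-cloud manifold: an effective dimension given by the participation ratio of the centered covariance spectrum, and an effective radius given by the manifold's total spread relative to the distance of its center from the origin. These are standard descriptive quantities, assumed here for illustration; they are not the paper's exact definitions.

```python
import numpy as np

def manifold_geometry(points):
    """Illustrative effective radius and dimension of a point-cloud manifold
    (participation-ratio heuristics, assumed for this sketch)."""
    center = points.mean(axis=0)
    deltas = points - center
    cov = deltas.T @ deltas / len(points)       # covariance of the centered cloud
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    # participation ratio: ~D for an isotropic cloud spread over D directions
    dim = eigvals.sum() ** 2 / (eigvals ** 2).sum()
    # spread of the cloud relative to how far its center sits from the origin
    radius = np.sqrt(eigvals.sum()) / np.linalg.norm(center)
    return radius, dim
```

Intuitively, manifolds with small effective radius and low effective dimension behave almost like points and are easy to separate, while large, high-dimensional manifolds consume classification capacity quickly.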

Our work provides a new theoretical framework to understand and analyze the representations formed by hierarchical neural networks, and it may lead to novel insights into how perceptual systems efficiently code and process sensory information.