Summary: We propose a method to learn weakly symmetric deformable 3D object categories from raw single-view images, without ground-truth 3D, multiple views, 2D/3D keypoints, prior shape models or any other supervision.

This work has received the CVPR 2020 Best Paper Award.

[Paper · Project Page · Code]


Method Overview

We propose a method to learn 3D deformable object categories from raw single-view images, without any manual or external supervision. The method is based on an autoencoder that factors each input image into depth, albedo, viewpoint and illumination. In order to disentangle these components without supervision, we use the fact that many object categories have, at least in principle, a symmetric structure. We show that reasoning about illumination allows us to exploit the underlying object symmetry even if the appearance is not symmetric due to shading. Furthermore, we model objects that are probably, but not certainly, symmetric by predicting a symmetry probability map, learned end-to-end with the other components of the model.

Photo-Geometric Autoencoding

Our method is based on an autoencoder that factors each input image into depth, albedo, viewpoint and lighting. These four components are combined to reconstruct the input image. The model is trained only using a reconstruction loss, without any external supervision.
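The recombination step can be sketched as follows. This is a simplified numpy illustration, not the actual neural model: the viewpoint warp is omitted, normals are given directly, and `lambertian_shading` and `recombine` are hypothetical helper names.

```python
import numpy as np

def lambertian_shading(normals, light_dir, ambient=0.2, diffuse=0.8):
    """Per-pixel Lambertian shading: ambient + diffuse * max(0, <n, l>)."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return ambient + diffuse * np.clip(normals @ l, 0.0, None)

def recombine(albedo, normals, light_dir):
    """Recombine predicted factors into a canonical-view reconstruction.
    The full model additionally warps this image to the predicted
    viewpoint before comparing it with the input."""
    return albedo * lambertian_shading(normals, light_dir)[..., None]
```

Because the only training signal is the reconstruction loss, every factor must carry its share of the explanation: albedo the intrinsic color, shading the illumination, and depth the geometry.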

Exploiting Symmetry

In order to achieve this decomposition without supervision, we exploit the fact that many object categories have a bilateral symmetry. Assuming an object is perfectly symmetric, one can obtain a virtual second view of it by simply mirroring the image, and then perform 3D reconstruction using stereo geometry [1, 2].

Here, we would like to leverage this symmetry assumption. We force the model to predict a symmetric view of the object by injecting a flipping operation, and obtain two reconstructions of the same input view (with and without flipping) through the predicted viewpoint transformation. Minimizing the two reconstruction losses simultaneously imposes a “two-view” constraint and provides a sufficient signal for recovering accurate 3D shapes.
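A minimal sketch of this two-reconstruction loss, with the rendering and viewpoint warp abstracted away (hypothetical numpy code; `shading_flip` stands for the shading recomputed from the mirrored depth, as explained in the next section):

```python
import numpy as np

def flip(x):
    """Mirror about the vertical symmetry axis (reverse image width)."""
    return np.ascontiguousarray(x[:, ::-1])

def two_view_losses(image, albedo, shading, shading_flip):
    """Reconstruct the input twice: once from the predicted canonical
    albedo and once from its mirrored version. The shading map is NOT
    simply mirrored, since lighting may be asymmetric; shading_flip is
    recomputed from the mirrored depth and the same light direction."""
    recon = albedo * shading[..., None]
    recon_flip = flip(albedo) * shading_flip[..., None]
    return np.abs(image - recon).mean(), np.abs(image - recon_flip).mean()
```

Minimizing both losses at once is what turns the mirrored prediction into a virtual second view: the model can only satisfy both if its canonical shape and albedo really are (approximately) symmetric.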

Note that even if an object has a symmetric intrinsic texture (i.e., albedo), its appearance may still be asymmetric due to asymmetric illumination. We handle this by predicting albedo and lighting separately, enforcing symmetry only on the albedo while allowing the shading to be asymmetric. We assume a simple Lambertian illumination model, and compute a shading map from the predicted light direction and depth map.
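Computing shading from depth can be sketched as below (a simplified numpy illustration assuming an orthographic camera and unit pixel spacing; function names are hypothetical):

```python
import numpy as np

def depth_to_normals(depth):
    """Approximate unit surface normals from a depth map via finite
    differences (orthographic projection assumed for this sketch)."""
    dz_dy, dz_dx = np.gradient(depth)
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def shading_map(depth, light_dir, ambient=0.2, diffuse=0.8):
    """Lambertian shading from the predicted depth and light direction."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return ambient + diffuse * np.clip(depth_to_normals(depth) @ l, 0.0, None)
```

Because the shading map is a differentiable function of the depth map, the reconstruction loss backpropagates through it into the shape prediction.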

In fact, doing so not only allows the model to learn an accurate intrinsic image decomposition, but also strongly regularizes the shape prediction (similar to shape from shading): unnatural shapes are avoided because they produce unnatural shading and hence a higher reconstruction loss.

Probabilistic Modeling of Symmetry using Confidence Maps

Although symmetry provides a strong signal for recovering 3D shapes, specific object instances are in practice never fully symmetric. We account for potential asymmetry using uncertainty modeling [3]: the model additionally predicts a pair of per-pixel confidence maps and is trained to minimize the two confidence-adjusted reconstruction losses simultaneously, with asymmetric weights to allow for a dominant side.
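Following [3], such a confidence-adjusted loss can be sketched (up to constants) as the negative log-likelihood of a Laplacian distribution whose per-pixel scale σ is predicted by the network; the function name is hypothetical:

```python
import numpy as np

def confidence_adjusted_l1(residual, sigma):
    """Laplacian negative log-likelihood, constants dropped. A large
    sigma downweights the residual at that pixel (e.g. an asymmetric
    region such as hair falling on one side), but pays a log(sigma)
    penalty, so the network cannot inflate its uncertainty everywhere."""
    return (np.sqrt(2.0) * np.abs(residual) / sigma + np.log(sigma)).mean()
```

Applied to the flipped reconstruction, the confidence map effectively marks which pixels the symmetry assumption should, and should not, be trusted on.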

Acknowledgements

We are deeply indebted to all members of Visual Geometry Group for insightful discussions and suggestions, in particular, Sophia Koepke, Gül Varol, Erika Lu, Olivia Wiles, Iro Laina, Dan Xu, Fatma Güney, Tengda Han and Andrew Zisserman. We would also like to thank Abhishek Dutta, Ernesto Coto and João Henriques for their assistance in setting up this demo website. We are also grateful to Soumyadip Sengupta for sharing with us the code to generate synthetic face datasets, and to Mihir Sahasrabudhe for sending us the reconstruction results of Lifting AutoEncoders. This work is jointly supported by Facebook Research and ERC Horizon 2020 research and innovation programme IDIU 638009.

References



[1] Mirror Symmetry ⇒ 2-View Stereo Geometry. Alexandre R. J. François, Gérard G. Medioni, and Roman Waupotitsch. Image and Vision Computing, 2003.

[2] Detecting and Reconstructing 3D Mirror Symmetric Objects. Sudipta N. Sinha, Krishnan Ramnath, and Richard Szeliski. Proc. ECCV, 2012.

[3] What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? Alex Kendall and Yarin Gal. Proc. NeurIPS, 2017.



Author’s webpage: Shangzhe & Christian

Shangzhe Wu & Christian Rupprecht & Andrea Vedaldi, 26 February 2020