In this post, we'll look into a kind of variational autoencoder that tries to reconstruct both the input and the latent code. Along the way we'll show how to derive GAN's discriminator from such a variational loss. We'll start with the observation that VAE's sampling is asymmetric, and discuss why this asymmetry can cause problems.

VAE’s asymmetry

Notation: $p_d(x)$ is the real data distribution. $q(z|x)$ is the approximate posterior. $p(z)$ is the prior over the latent code and $p(x|z)$ is the likelihood of the generative model. Also, summations such as $\sum_{x \sim p_d(x)}$ imply Monte Carlo integration, so gradients are not taken on these terms. I've used summation instead of expectation to keep my LaTeX code readable. Also, all lower-case letters are vectors.

The objective to be minimized in VAE [9] is:

$$\mathcal{L}_{\text{wake}} = \sum_{x \sim p_d(x)} \; \sum_{z \sim q(z|x)} \Big[ -\log p(x|z) - \log p(z) + \log q(z|x) \Big]$$

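As a concrete illustration, here is a hypothetical Monte Carlo estimate of the VAE objective above for a one-dimensional linear-Gaussian toy model (all distributions and parameter values are invented for this sketch). Because the toy's exact marginal likelihood is available, we can also check that the objective upper-bounds the negative log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(x, mean, std):
    """Log density of a univariate Gaussian."""
    return -0.5 * np.log(2 * np.pi * std**2) - (x - mean)**2 / (2 * std**2)

# Toy model (all numbers invented): q(z|x) = N(a*x, 1), p(x|z) = N(b*z, 1),
# p(z) = N(0, 1). As a stand-in for p_d we use N(0, 2), so the exact model
# marginal p(x) = N(0, sqrt(b^2 + 1)) is available for checking the bound.
a, b = 0.5, 1.0
x = rng.normal(0.0, 2.0, size=10_000)   # x ~ p_d(x)
z = rng.normal(a * x, 1.0)              # z ~ q(z|x)

# Monte Carlo estimate of the wake-phase objective.
wake_loss = np.mean(-log_normal(x, b * z, 1.0)
                    - log_normal(z, 0.0, 1.0)
                    + log_normal(z, a * x, 1.0))

# The objective upper-bounds the model's negative log-likelihood.
nll = np.mean(-log_normal(x, 0.0, np.sqrt(b**2 + 1)))
print(wake_loss, nll)
```

The gap between the two printed numbers is the Monte Carlo estimate of $\mathrm{KL}(q(z|x) \,\|\, p(z|x))$, which vanishes only when the approximate posterior is exact.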
VAE is asymmetric in the sense that sampling is performed in only one direction (from $p_d(x)$ and then $q(z|x)$). In comparison, both the Boltzmann Machine [10] and the Helmholtz Machine [11] perform sampling from both $p_d(x) q(z|x)$ (real data) and $p(z) p(x|z)$ (fantasy data). The disadvantage of sampling only from $p_d(x) q(z|x)$ is that there will be regions in $x$-space whose probability under $p_d(x)$ is zero, yet whose probability under the model marginal $p(x)$ is greater than zero. This means that when we sample from $p(z) p(x|z)$, we will get samples that are impossible under $p_d(x)$. In the image domain, this means that when we sample from the model we will get images that do not look real.

The diagram above illustrates this possibility. Because training is performed by sampling exclusively from $p_d(x) q(z|x)$, the regions that are not covered by $p_d(x)$ may misbehave under the model $p(x)$.
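This failure mode is easy to reproduce in one dimension. In the made-up experiment below, the data distribution is a sharp two-mode mixture, the model marginal is a single broad Gaussian, and most model samples land where the data density is essentially zero:

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mean, std):
    return np.exp(-(x - mean)**2 / (2 * std**2)) / (std * np.sqrt(2 * np.pi))

# Hypothetical data distribution p_d: two narrow modes at -3 and +3.
def p_d(x):
    return 0.5 * normal_pdf(x, -3.0, 0.1) + 0.5 * normal_pdf(x, 3.0, 0.1)

# Hypothetical model marginal p(x): one broad Gaussian covering both modes.
model_samples = rng.normal(0.0, 2.0, size=100_000)

# Fraction of model samples that are (numerically) impossible under p_d.
impossible = np.mean(p_d(model_samples) < 1e-6)
print(f"fraction of model samples with ~zero data density: {impossible:.2f}")
```

One-directional training never evaluates the model in that "impossible" region, so nothing pushes its probability mass back toward the data.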

An intuitive interpretation: consider a teacher teaching a student to solve some problems. The teacher has limited time, so he only explains a subset of all possible problems. Let's call the problems the teacher has shown the student the "taught problems", and the rest the "untaught problems". If the student blindly accepts whatever the teacher teaches, he will not be able to handle untaught problems that differ from the taught ones. However, if the student is very curious and frequently asks questions, and the teacher in turn gives answers, then we can reasonably expect this curious student to be much better at solving the untaught problems. When we sample from $p_d(x) q(z|x)$, the teacher is teaching the "taught problems". When we sample from $p(z) p(x|z)$, the student is asking questions about "untaught problems". As we will later see, the discriminator in GANs is the device that "answers" the student's questions.

Symmetric VAE

In the Symmetric VAE, the objective has two parts. The first part is the original VAE objective, with sampling performed from $p_d(x) q(z|x)$. The second part is its symmetric counterpart, with sampling performed from $p(z) p(x|z)$. It is written below:

$$\mathcal{L}_{\text{sym}} = \sum_{x \sim p_d(x)} \; \sum_{z \sim q(z|x)} \Big[ -\log p(x|z) - \log p(z) + \log q(z|x) \Big] \;+\; \sum_{z \sim p(z)} \; \sum_{x \sim p(x|z)} \Big[ -\log q(z|x) - \log p_d(x) + \log p(x|z) \Big]$$

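A minimal numpy sketch of both halves of this objective on a linear-Gaussian toy (all distributions and numbers are invented). Note that the toy cheats by giving $p_d$ a known density; for real data $\log p_d(x)$ cannot be evaluated, which is exactly the difficulty discussed next:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(x, mean, std):
    return -0.5 * np.log(2 * np.pi * std**2) - (x - mean)**2 / (2 * std**2)

# Toy model (all numbers invented): q(z|x) = N(a*x, 1), p(x|z) = N(b*z, 1),
# p(z) = N(0, 1). We cheat and give p_d the known density N(0, 2).
a, b = 0.5, 1.0
n = 10_000

# Wake phase: x ~ p_d(x), then z ~ q(z|x).
xw = rng.normal(0.0, 2.0, size=n)
zw = rng.normal(a * xw, 1.0)
wake = np.mean(-log_normal(xw, b * zw, 1.0) - log_normal(zw, 0.0, 1.0)
               + log_normal(zw, a * xw, 1.0))

# Sleep phase: z ~ p(z), then x ~ p(x|z).
zs = rng.normal(0.0, 1.0, size=n)
xs = rng.normal(b * zs, 1.0)
sleep = np.mean(-log_normal(zs, a * xs, 1.0) - log_normal(xs, 0.0, 2.0)
                + log_normal(xs, b * zs, 1.0))

print(wake + sleep)  # Monte Carlo estimate of the symmetric objective
```

The sleep term is a mirror image of the wake term: the latent code plays the role of the data, and the generative joint plays the role of the sampling distribution.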
This objective is an upper bound on the two cross-entropies $H(p_d(x), p(x))$ and $H(p(z), q(z))$ (where $q(z)$ is the aggregated posterior), plus two KLDs, which is exactly what we want. Unfortunately, we cannot optimize the above objective directly, because it involves $\log p_d(x)$, which can be sampled from but cannot be computed (recall that $p_d(x)$ is the real data distribution).

$\log p_d(x)$ is the prior over $x$ that encourages generated samples to look "real". It is an important term that cannot be left out. To optimize the above objective involving $\log p_d(x)$, we need to replace it with an approximator. In doing this, we will obtain the discriminator of GANs [12].

Edit: I now think the Symmetric VAE objective is flawed, because in the sleep phase it tries to reconstruct every dimension of the latent code $z$. This means that it requires every latent dimension to be utilized, and does not permit unused dimensions. To solve this problem, we'll have to add variance or noise to $z$ samples in the wake phase and to $x$ samples in the sleep phase. The reconstruction loss can remain unchanged, or we might want to replace it with a closed-form KLD. While adding variance to $x$ samples seems easy, doing so with $z$ samples is not, as it seems to require an adaptive prior similar to the VampPrior [15]. We're still exploring this.

Approximating $p_d(x)$

We can train a normalized probability $D(x)$ to approximate $p_d(x)$ by minimizing the cross entropy:

$$\min_{D} \; -\sum_{x \sim p_d(x)} \log D(x)$$

When $D(x)$ and $p_d(x)$ are identical, we recover the original Symmetric VAE objective exactly.

One complication with the above approach is that training with $D(x)$, a normalized probability, is difficult. So instead we train $\tilde{D}(x)$, an unnormalized probability proportional to $D(x)$. As it turns out, in order to train $\tilde{D}$ to approximate $p_d(x)$, we need to minimize the unnormalized cross entropy from $p_d(x)$ to $\tilde{D}$, and at the same time maximize the unnormalized cross entropy from $p(z) p(x|z)$ to $\tilde{D}$, as in:

$$\min_{\tilde{D}} \; -\sum_{x \sim p_d(x)} \log \tilde{D}(x) + \sum_{z \sim p(z)} \; \sum_{x \sim p(x|z)} \log \tilde{D}(x)$$

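A minimal numpy sketch of this critic loss, with a linear critic $f(x) = wx$ standing in for $\log \tilde{D}(x)$ and WGAN-style weight clipping; the data, model, and hyperparameters are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear critic f(x) = w*x plays the role of log D~(x).
w, lr, clip = 0.0, 0.01, 0.01

for step in range(200):
    real = rng.normal(2.0, 1.0, size=256)    # x ~ p_d(x)
    fake = rng.normal(-2.0, 1.0, size=256)   # x ~ p(x|z), z ~ p(z)
    # Loss: -E_real[f(x)] + E_fake[f(x)]; its gradient w.r.t. w:
    grad_w = -np.mean(real) + np.mean(fake)
    w = np.clip(w - lr * grad_w, -clip, clip)  # WGAN weight clipping

separation = w * 2.0 - w * (-2.0)  # E_real[f] - E_fake[f] at the means
print(w, separation)
```

Minimizing the loss pushes the critic up on real samples and down on fantasy samples; the clipping is what stops $w$ from growing without bound.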
This is the discriminator loss minimized in WGAN [1], so $\log \tilde{D}$ is the discriminator/critic. Please refer to Appendix A for how this loss is derived from the perspective of approximating $p_d(x)$.

Previously we talked about a teacher answering the student's questions. $p_d(x)$, or rather its approximation $\tilde{D}(x)$, is this teacher. If we want the student to learn and improve, it is much better for the teacher to provide "useful guidance" instead of just bashing the student when he misbehaves. We will further explore this teaching interpretation later.

Extended Helmholtz Machine

After some experiments, we found it extremely difficult to optimize a Symmetric VAE, possibly because of the opponency between the wake phase and the sleep phase. Specifically, the wake-phase objective minimizes $\log q(z|x)$, yet the sleep phase maximizes it. Also, the wake-phase objective maximizes $\log p(x|z)$, while the sleep phase minimizes it.

If we remove the opponency, we get the objective of the Extended Helmholtz Machine (EHM), which minimizes two cross-entropies:

$$\mathcal{L}_{\text{EHM}} = \sum_{x \sim p_d(x)} \; \sum_{z \sim q(z|x)} \Big[ -\log p(x|z) - \log p(z) \Big] \;+\; \sum_{z \sim p(z)} \; \sum_{x \sim p(x|z)} \Big[ -\log q(z|x) - \log p_d(x) \Big]$$

This objective is equivalent to the original Symmetric VAE objective, plus the two entropy terms $H(q(z|x))$ and $H(p(x|z))$.

In VAE, we try to minimize the precision of the code by minimizing $\log q(z|x)$. In EHM, this term is removed, so the precision of the posterior is no longer regularized, and the only regularization left is the sparsity loss $-\log p(z)$. Removing the precision regularization is undesirable, and we are still exploring ways to add it back.

Results of EHM

Here we provide some very early results of training the EHM. The images below are sampled from an EHM whose encoder, decoder and discriminator have structures similar to DCGAN [8]. This model is trained on 64x64 LSUN bedroom images.

Current Problems

There are several problems we have not been able to solve with EHM yet.

Problem 1: Generated images have significant artifacts. Some of the samples contain a kind of strange artifact (last column of rows 4 and 5). This kind of artifact has been observed in other GAN methods before, and we are working on understanding it. Previously we had quite a few hypotheses about these artifacts; we have now narrowed them down to just one: large variance in the gradients from the discriminator.

Basically, at the beginning of training, the generated distribution $p(x)$ and $p_d(x)$ have no overlap in support. As a result, the discriminator is able to discriminate perfectly. It does so by blowing up the magnitude of $\log \tilde{D}(x)$ to very large values, using very large weights in the discriminator. This creates very steep gradients of the form $\nabla_x \log \tilde{D}(x)$. Such large gradient variance leads to problematic learning.

As proposed in WGAN [1], we can use weight clipping to restrict this kind of variance. Alternatively, as in the improved WGAN [6], we can regularize the gradient norm towards a target. However, because the generator receives gradients from several sources, once we weaken the gradient from the discriminator using heuristics, it may no longer be trained in the direction we want; we therefore cannot naively tune the gradients. Since we now recognize that the necessary mechanism is variance reduction, we resort to SVRG [14], but we have yet to conduct experiments with it.
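To make the gradient-norm remedy concrete, here is a toy numpy sketch of an improved-WGAN-style penalty applied to a linear critic whose slope starts exploded. For a linear critic $f(x) = wx$ the input gradient is just $w$, so the penalty $\lambda(|w| - 1)^2$ has a closed form; the critic, data, and constants are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear critic f(x) = w*x, so grad_x f(x) = w. Start with an exploded slope.
w, lr, lam = 5.0, 0.05, 10.0

for step in range(500):
    real = rng.normal(2.0, 1.0, size=256)
    fake = rng.normal(-2.0, 1.0, size=256)
    # Critic loss: -(E_real[f] - E_fake[f]) + lam * (|grad_x f| - 1)^2
    grad_w = -(np.mean(real) - np.mean(fake)) + 2 * lam * (abs(w) - 1) * np.sign(w)
    w -= lr * grad_w

print(f"critic slope after training: {w:.2f}")
```

Instead of hard-clipping the weights, the penalty pulls the critic's input-gradient norm toward 1, so the slope settles near the penalty's fixed point rather than exploding.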

Note: the discriminator, in RL's terminology, provides a reward, and the generator is trained through policy gradient. However, traditional policy gradient methods use the REINFORCE estimator, whereas in GANs the pathwise gradient is used instead. Just as in conventional policy gradient methods, variance reduction is critical. In GANs, we cannot use naive baseline methods, as they would bias the gradient. Fortunately, the baseline provided by SVRG has small bias, which is why we prefer it. Also, we've developed a variant of SVRG in this post, which we hope will be more easily adapted to GANs (no need to keep a stale copy of the network), but we have yet to run experiments with it.
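The variance gap between the two estimators is easy to see on a one-dimensional toy (the function and distribution below are made up; both estimators target the same true gradient $2\theta$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Gradient of E_{x ~ N(theta, 1)}[x^2] w.r.t. theta (true value: 2*theta).
theta, n = 1.0, 200_000
eps = rng.normal(0.0, 1.0, size=n)
x = theta + eps                    # reparameterized sample

reinforce = x**2 * (x - theta)     # f(x) * grad_theta log N(x; theta, 1)
pathwise = 2 * x                   # grad_theta f(theta + eps), chain rule

print(reinforce.mean(), reinforce.var())
print(pathwise.mean(), pathwise.var())
```

Both estimators are unbiased, but the REINFORCE estimator's variance is several times larger here, which is why the pathwise term used in GANs is generally preferred when the sampling path is differentiable.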

Problem 2: Sleep-phase reconstruction cost remains high. Update: this problem has been fixed by replacing $\log p_d(x)$ with its adversarial approximator $\log \tilde{D}(x)$.

Problem 3: Wake-phase reconstructed samples remain blurry. Another problem with the current EHM is that, when we feed the encoder a real image and ask the generator to reconstruct it, the reconstruction error is relatively low, yet the reconstruction is rather blurry. The image below demonstrates this: the first row is the input images, and the second row the corresponding reconstructions.

These reconstructed samples look nothing like samples obtained directly by sampling from $p(z)$ and mapping through the decoder. Update: we have now figured out the reason: the aggregated posterior $q(z)$ only occupies a very small subspace of $p(z)$.

Conclusion

We argued that probability models should be trained with symmetric sampling. We showed that GAN's discriminator can be derived from the sleep phase of a Symmetric VAE. We showed that the Extended Helmholtz Machine can be trained, but there are still many problems.

References

[1] Martin Arjovsky, et al., Wasserstein GAN

[2] Max Welling, et al., Bayesian Learning via Stochastic Gradient Langevin Dynamics

[3] Matthew Zeiler, et al., Adaptive Deconvolutional Networks for Mid and High Level Feature Learning

[4] Rafal Bogacz, A tutorial on the free-energy framework for modelling perception and learning

[5] Jost Springenberg, et al., Striving for Simplicity: The All Convolutional Net

[6] Ishaan Gulrajani, et al., Improved Training of Wasserstein GANs

[7] Sitao Xiang, et al., On the Effects of Batch and Weight Normalization in Generative Adversarial Networks

[8] Alec Radford, et al., Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

[9] Diederik Kingma, et al., Auto-Encoding Variational Bayes

[10] David Ackley, et al., A Learning Algorithm for Boltzmann Machines

[11] Peter Dayan, et al., The Helmholtz Machine

[12] Ian Goodfellow, et al., Generative Adversarial Networks

[13] Radford Neal, MCMC using Hamiltonian dynamics

[14] Rie Johnson, et al., Accelerating Stochastic Gradient Descent using Predictive Variance Reduction

[15] Jakub Tomczak, et al., VAE with a VampPrior

Appendix

A. Discriminator loss

Note: summation implies Monte Carlo integration, as indicated at the beginning of this article.

We train an unnormalized probability $\tilde{D}(x)$ to match $p_d(x)$. We begin by converting $\tilde{D}$ to a normalized form:

$$D(x) = \frac{\tilde{D}(x)}{Z}, \qquad Z = \int \tilde{D}(x) \, dx$$

where $Z$ is the partition function. It can be estimated using samples from $p(z) p(x|z)$:

$$Z \approx \frac{1}{N} \sum_{z \sim p(z)} \; \sum_{x \sim p(x|z)} \frac{\tilde{D}(x)}{p(x|z)}$$

This estimator of $Z$ is in fact importance sampling from $p(z) p(x|z)$, but it does not require us to evaluate the marginal $p(x)$ (as is required in the simpler importance-sampling estimator of $Z$). Also, strictly speaking, we are not required to sample from $p(z) p(x|z)$; however, to reduce variance we should sample from a distribution as similar to $D(x)$ as possible. We can't use $D(x)$ itself, so $p(z) p(x|z)$ is the best we have. Of course, if we could sample from $D(x)$ directly, that would be even better. But this usually isn't straightforward, which is why we use importance sampling in the first place. In order to sample from $D(x)$, we might have to use some variant of MCMC (Hamiltonian [13] or Langevin [2] dynamics). If we could do this, it would mean we could use the discriminator as the generator. Sounds like a pretty fun project, but we'll leave that for another post.
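A quick numpy check of this importance-sampling estimator of the partition function, using a made-up $\tilde{D}$ whose true $Z$ is known in closed form (the proposal $p(z) p(x|z)$ is also invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized probability D~(x) = exp(-x^2/2); true partition function
# Z = sqrt(2*pi). The "model" used as the proposal is z ~ N(0, 1),
# x | z ~ N(z, 2) -- all choices are made up for this sketch.
def d_tilde(x):
    return np.exp(-x**2 / 2)

def normal_pdf(x, mean, std):
    return np.exp(-(x - mean)**2 / (2 * std**2)) / (std * np.sqrt(2 * np.pi))

n = 200_000
z = rng.normal(0.0, 1.0, size=n)   # z ~ p(z)
x = rng.normal(z, 2.0)             # x ~ p(x|z)

# Importance weights D~(x) / p(x|z); their mean estimates Z. Only the
# conditional density p(x|z) is needed -- never the intractable marginal p(x).
weights = d_tilde(x) / normal_pdf(x, z, 2.0)
z_hat = weights.mean()
print(z_hat, np.sqrt(2 * np.pi))
```

Note the weights stay well-behaved only because the proposal's tails are heavier than $\tilde{D}$'s; a proposal with lighter tails would give the estimator infinite variance.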

Now, to approximate $p_d(x)$, we train $D$ to minimize the cross entropy from $p_d(x)$ to $D(x)$:

$$-\sum_{x \sim p_d(x)} \log D(x) = -\sum_{x \sim p_d(x)} \log \tilde{D}(x) + \log Z$$

Now we expand $Z$ to its estimator. Note that the $x$ in $Z$'s estimator are sampled from $p(z) p(x|z)$, so $Z$ is completely independent of the samples taken from $p_d(x)$. To reflect this independence, we take out $\log Z$ as an independent term. The above becomes:

$$-\sum_{x \sim p_d(x)} \log \tilde{D}(x) + \log \frac{1}{N} \sum_{z \sim p(z)} \; \sum_{x \sim p(x|z)} \frac{\tilde{D}(x)}{p(x|z)}$$

Because we only minimize the above w.r.t. the parameters of $\tilde{D}$, moving the logarithm inside the sum (Jensen's inequality) and dropping the $\log p(x|z)$ and constant terms, which do not depend on $\tilde{D}$, this is equivalent to minimizing:

$$-\sum_{x \sim p_d(x)} \log \tilde{D}(x) + \sum_{z \sim p(z)} \; \sum_{x \sim p(x|z)} \log \tilde{D}(x)$$

This is the discriminator loss minimized by WGAN [1].

B. Generator Gradient

For the sake of demonstrating VAE's connection with GANs, we can also obtain a gradient term similar to the generator gradient in GANs. Since GANs do not have the inference network $q(z|x)$, we remove all components involving $q$, and replace $\log p_d(x)$ with $\log \tilde{D}(x)$:

$$\mathcal{L}_g = \sum_{z \sim p(z)} \; \sum_{x \sim p(x|z)} \Big[ -\log \tilde{D}(x) + \log p(x|z) \Big]$$

We differentiate against the decoder parameters $\theta$, writing $x = g_\theta(z, \epsilon)$ for the reparameterized sample (the summations are Monte Carlo integration):

$$\nabla_\theta \mathcal{L}_g = \sum_{z \sim p(z)} \; \sum_{x \sim p(x|z)} \Big[ -\nabla_x \log \tilde{D}(x) \, \nabla_\theta x + \nabla_\theta \log p_\theta(x|z) \Big]$$

Note that in GANs the decoder is deterministic, so the $\nabla_\theta \log p_\theta(x|z)$ term vanishes, but this isn't the case for VAEs, which is why we have the second term above. The first term can be written as:

$$-\sum_{z \sim p(z)} \nabla_\theta \log \tilde{D}\big(g_\theta(z, \epsilon)\big)$$

which is the generator gradient. The above also suggests that the unnormalized probability $\tilde{D}(x)$ can be used directly to replace $p_d(x)$ in optimization.
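As a sanity check of this pathwise term, a made-up one-dimensional generator can be pushed toward a fixed discriminator's mode using only $-\nabla_x \log \tilde{D}$ and the chain rule (the generator, discriminator, and constants are all invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pathwise generator gradient: generator g(z) = theta + z, and a fixed
# "discriminator" with log D~(x) = -(x - 2)^2 / 2 (peaked at x = 2).
theta, lr, n = -3.0, 0.5, 4096

for step in range(50):
    z = rng.normal(0.0, 1.0, size=n)
    x = theta + z                       # x = g_theta(z)
    grad_x_logd = -(x - 2.0)            # grad_x log D~(x)
    grad_theta = np.mean(-grad_x_logd)  # chain rule: dx/dtheta = 1
    theta -= lr * grad_theta            # gradient descent on -log D~

print(f"theta after training: {theta:.2f}")
```

Gradient descent on $-\log \tilde{D}$ drags the generator's output distribution toward the region the discriminator scores highly, which is exactly the "teacher's guidance" interpretation from the main text.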